Issue
- How to test network bandwidth?
- How to use tools such as iperf to see maximum available networking speed?
- Need to measure a system's maximum TCP and UDP throughput. How can this be achieved?
- Where can I download the iperf utility package?
- How to test network throughput without special tools such as iperf?
- What is iperf and can I use it on a RHEL machine?
Resolution
Several solutions are available for testing network bandwidth. The examples below use iperf.

Downloading and installing iperf
Add the EPEL Repository
If using RHEL 7, this step can be skipped, as iperf is included in the supported channel. If using RHEL 6 or RHEL 5, add the EPEL repository to get a ready-made RPM package:
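For example, on RHEL 6 the epel-release package can be installed directly from the Fedora project. This is a sketch only; the exact URL is an assumption and may have changed, particularly for older, end-of-life releases:

# yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm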
Install iperf Package
# yum install iperf3
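A quick way to confirm the package installed correctly is to check its version (the exact output varies by release):

# iperf3 --version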
Bandwidth Test
iperf has the notion of a "client" and a "server" for testing network throughput between two systems. The following example sets a large send and receive buffer size to maximise throughput, and runs the test for 60 seconds, which should be long enough to fully exercise the network.
Server
On the server system, iperf is told to listen for a client connection:

server # iperf3 -i 10 -s
-i the interval to provide periodic bandwidth updates
-s listen as a server
See man iperf3 for more information on specific command-line switches.
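By default, iperf3 listens on TCP port 5201. If a firewall is active on the server, that port may need to be opened before the client can connect. The commands below are a sketch; adjust them to the local firewall configuration (firewalld on RHEL 7, iptables on RHEL 6):

server # firewall-cmd --add-port=5201/tcp
server # iptables -I INPUT -p tcp --dport 5201 -j ACCEPT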
Client
On the client system, iperf is told to connect to the listening server via hostname or IP address:

client # iperf3 -i 10 -w 1M -t 60 -c <server hostname or ip address>
-i the interval to provide periodic bandwidth updates
-w the socket buffer size (which affects the TCP Window)
the buffer size is also set on the server by this client command
-t the time to run the test in seconds
-c connect to a listening server at...
See man iperf3 for more information on specific command-line switches.
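The example above measures TCP throughput. iperf3 can also run a UDP test with the -u switch, with the target bitrate set via -b. This is a sketch assuming a 1 Gbit/s target; the server command is unchanged:

client # iperf3 -u -b 1G -i 10 -t 60 -c <server hostname or ip address>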
Test Results
Both the client and server report their results once the test is complete:

Server
server # iperf3 -i 10 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.0.0.2, port 22216
[ 5] local 10.0.0.1 port 5201 connected to 10.0.0.2 port 22218
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.00 sec 17.5 GBytes 15.0 Gbits/sec
[ 5] 10.00-20.00 sec 17.6 GBytes 15.2 Gbits/sec
[ 5] 20.00-30.00 sec 18.4 GBytes 15.8 Gbits/sec
[ 5] 30.00-40.00 sec 18.0 GBytes 15.5 Gbits/sec
[ 5] 40.00-50.00 sec 17.5 GBytes 15.1 Gbits/sec
[ 5] 50.00-60.00 sec 18.1 GBytes 15.5 Gbits/sec
[ 5] 60.00-60.04 sec 82.2 MBytes 17.3 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-60.04 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-60.04 sec 107 GBytes 15.3 Gbits/sec receiver
Client
client # iperf3 -i 10 -w 1M -t 60 -c 10.0.0.1
Connecting to host 10.0.0.1, port 5201
[ 4] local 10.0.0.2 port 22218 connected to 10.0.0.1 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-10.00 sec 17.6 GBytes 15.1 Gbits/sec 0 6.01 MBytes
[ 4] 10.00-20.00 sec 17.6 GBytes 15.1 Gbits/sec 0 6.01 MBytes
[ 4] 20.00-30.00 sec 18.4 GBytes 15.8 Gbits/sec 0 6.01 MBytes
[ 4] 30.00-40.00 sec 18.0 GBytes 15.5 Gbits/sec 0 6.01 MBytes
[ 4] 40.00-50.00 sec 17.5 GBytes 15.1 Gbits/sec 0 6.01 MBytes
[ 4] 50.00-60.00 sec 18.1 GBytes 15.5 Gbits/sec 0 6.01 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-60.00 sec 107 GBytes 15.4 Gbits/sec 0 sender
[ 4] 0.00-60.00 sec 107 GBytes 15.4 Gbits/sec receiver
Reading the Result
Between these two systems, we could achieve a bandwidth of 15.4 gigabits per second, or approximately 1835 MiB (mebibytes) per second.
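The conversion from gigabits per second to mebibytes per second works out as follows:

15.4 Gbit/s ÷ 8 bits per byte = 1.925 GB/s
1,925,000,000 bytes/s ÷ 1,048,576 bytes per MiB ≈ 1835 MiB/s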