IBM Support

How do you use iPerf?

Question & Answer


Question

How do you use iPerf?

Answer

iPerf is a simple, open-source, command-line network diagnostic tool that you can run on Linux, BSD, or Windows operating systems. You install it on two endpoints: one endpoint runs in "server" mode and listens for requests; the other runs in "client" mode and sends data. When the tool is activated, it attempts to fully utilize the link and reports the results. By default, the client sends data to the server, so the reported throughput reflects the client-to-server direction.

How do you install iPerf?

To install iPerf, complete the process that applies to your operating system (a quick verification command follows the list):

  • Windows Server
    Download the latest version of iPerf from SourceForge and install the application on your server.

  • CentOS / Red Hat Linux operating systems
    Run the following command on the command line:
    yum install iperf

  • Debian / Ubuntu Linux operating systems
    Run the following commands on the command line:
    apt-get update
    apt-get -y install iperf
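
To verify the installation, you can print the iPerf version (the exact output varies by release):
iperf -v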

How do you use iPerf?

On the server side, run the following command:
iperf -s

On the client side, run the following command:
iperf -c [server_ip]

The following example shows the command and its output on the client side:

iperf -c 10.10.10.5
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 10.0 sec 10.0 MBytes 8.39 Mbits/sec
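
If you want the listener to stay running between tests, iperf2 also supports a daemon mode (a minimal sketch, assuming an iperf2 build that provides the -D flag):
iperf -s -D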

What iPerf test results should you send to IBM Cloud Support?

Send IBM Cloud Support the results from the following commands. Copy and paste the text of your results into the ticket, and state clearly which device each set of results came from. (A flag-by-flag breakdown of these commands follows the list.)

  • Bidirectional 1 Gig TCP Traffic Flow
    Server side:
    iperf -s

    Client side:
    iperf -c x.x.x.x -i 10 -t 100 -P 10 -b 100M -d

  • Bidirectional 10 Gig TCP Traffic Flow
    Server side:
    iperf -s

    Client side:
    iperf -c x.x.x.x -i 10 -t 100 -P 10 -b 1000M -d

  • Bidirectional 1 Gig UDP Traffic Flow
    Server side:
    iperf -s -u

    Client side:
    iperf -c x.x.x.x -i 10 -t 100 -P 10 -b 100M -u

  • Bidirectional 10 Gig UDP Traffic Flow
    Server side:
    iperf -s -u

    Client side:
    iperf -c x.x.x.x -i 10 -t 100 -P 10 -b 1000M -u
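
For reference, the following sketch expands the 1 Gig TCP command, substituting the example server address 10.10.10.5 used elsewhere in this document for the x.x.x.x placeholder:

iperf -c 10.10.10.5 -i 10 -t 100 -P 10 -b 100M -d

  • -i 10 reports results every 10 seconds.
  • -t 100 runs the test for 100 seconds.
  • -P 10 opens 10 parallel streams.
  • -b 100M targets 100 Mbits/sec per stream (about 1 Gbit/sec aggregate with -P 10).
  • -d sends traffic in both directions at the same time.
  • -u, in the UDP variants, sends UDP instead of TCP traffic.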

Quick Tutorial

To test the link for 20 seconds and report results every 2 seconds, change the commands on the client side and the server side as follows (a tip for capturing these reports follows the examples):

  • Client side:

    iperf -c 10.10.10.5 -p 8000 -t 20 -i 2

    Example results:

    ------------------------------------------------------------
    Client connecting to 10.10.10.5, TCP port 8000
    TCP window size: 16.0 KByte (default)
    ------------------------------------------------------------
    [ 3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 8000
    [ ID] Interval Transfer Bandwidth
    [ 3] 0.0- 2.0 sec 6.00 MBytes 25.2 Mbits/sec
    [ 3] 2.0- 4.0 sec 7.12 MBytes 29.9 Mbits/sec
    [ 3] 4.0- 6.0 sec 7.00 MBytes 29.4 Mbits/sec
    [ 3] 6.0- 8.0 sec 7.12 MBytes 29.9 Mbits/sec
    [ 3] 8.0-10.0 sec 7.25 MBytes 30.4 Mbits/sec
    [ 3] 10.0-12.0 sec 7.00 MBytes 29.4 Mbits/sec
    [ 3] 12.0-14.0 sec 7.12 MBytes 29.9 Mbits/sec
    [ 3] 14.0-16.0 sec 7.25 MBytes 30.4 Mbits/sec
    [ 3] 16.0-18.0 sec 6.88 MBytes 28.8 Mbits/sec
    [ 3] 18.0-20.0 sec 7.25 MBytes 30.4 Mbits/sec
    [ 3] 0.0-20.0 sec 70.1 MBytes 29.4 Mbits/sec

  • Server side:

    iperf -s -p 8000 -i 2

    Example results:

    ------------------------------------------------------------
    Server listening on TCP port 8000
    TCP window size: 8.00 KByte (default)
    ------------------------------------------------------------
    [852] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 58316
    [ ID] Interval Transfer Bandwidth
    [ 4] 0.0- 2.0 sec 6.05 MBytes 25.4 Mbits/sec
    [ 4] 2.0- 4.0 sec 7.19 MBytes 30.1 Mbits/sec
    [ 4] 4.0- 6.0 sec 6.94 MBytes 29.1 Mbits/sec
    [ 4] 6.0- 8.0 sec 7.19 MBytes 30.2 Mbits/sec
    [ 4] 8.0-10.0 sec 7.19 MBytes 30.1 Mbits/sec
    [ 4] 10.0-12.0 sec 6.95 MBytes 29.1 Mbits/sec
    [ 4] 12.0-14.0 sec 7.19 MBytes 30.2 Mbits/sec
    [ 4] 14.0-16.0 sec 7.19 MBytes 30.2 Mbits/sec
    [ 4] 16.0-18.0 sec 6.95 MBytes 29.1 Mbits/sec
    [ 4] 18.0-20.0 sec 7.19 MBytes 30.1 Mbits/sec
    [ 4] 0.0-20.0 sec 70.1 MBytes 29.4 Mbits/sec
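
If you plan to paste results into a ticket or process them in a script, iperf2 can also emit CSV-formatted reports (a sketch, assuming an iperf2 build that supports the -y report style):

iperf -c 10.10.10.5 -p 8000 -t 20 -i 2 -y C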

How do you change the TCP Port (-p 8000)?

Specify the same port on both the client and the server with the -p flag. The following examples also increase the TCP window with -w 1024k:

  • Client Side:

    iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k

    Example results:
    ------------------------------------------------------------
    Client connecting to 10.10.10.5, TCP port 8000
    TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
    ------------------------------------------------------------
    [ 3] local 10.10.10.10 port 53903 connected with 10.10.10.5 port 8000
    [ ID] Interval Transfer Bandwidth
    [ 3] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
    [ 3] 2.0- 4.0 sec 28.5 MBytes 120 Mbits/sec
    [ 3] 4.0- 6.0 sec 28.4 MBytes 119 Mbits/sec
    [ 3] 6.0- 8.0 sec 28.9 MBytes 121 Mbits/sec
    [ 3] 8.0-10.0 sec 28.0 MBytes 117 Mbits/sec
    [ 3] 10.0-12.0 sec 29.0 MBytes 122 Mbits/sec
    [ 3] 12.0-14.0 sec 28.0 MBytes 117 Mbits/sec
    [ 3] 14.0-16.0 sec 29.0 MBytes 122 Mbits/sec
    [ 3] 16.0-18.0 sec 27.9 MBytes 117 Mbits/sec
    [ 3] 18.0-20.0 sec 29.0 MBytes 122 Mbits/sec
    [ 3] 0.0-20.0 sec 283 MBytes 118 Mbits/sec

  • Server Side:

    iperf -s -w 1024k -i 2 -p 8000

    Example results:
    ------------------------------------------------------------
    Server listening on TCP port 8000
    TCP window size: 1.00 MByte
    ------------------------------------------------------------
    [ 4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
    [ ID] Interval Transfer Bandwidth
    [ 4] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
    [ 4] 2.0- 4.0 sec 28.6 MBytes 120 Mbits/sec
    [ 4] 4.0- 6.0 sec 28.3 MBytes 119 Mbits/sec
    [ 4] 6.0- 8.0 sec 28.9 MBytes 121 Mbits/sec
    [ 4] 8.0-10.0 sec 28.0 MBytes 117 Mbits/sec
    [ 4] 10.0-12.0 sec 29.0 MBytes 121 Mbits/sec
    [ 4] 12.0-14.0 sec 28.0 MBytes 117 Mbits/sec
    [ 4] 14.0-16.0 sec 29.0 MBytes 122 Mbits/sec
    [ 4] 16.0-18.0 sec 28.0 MBytes 117 Mbits/sec
    [ 4] 18.0-20.0 sec 29.0 MBytes 121 Mbits/sec
    [ 4] 0.0-20.0 sec 283 MBytes 118 Mbits/sec

How do you increase the TCP window?

You can increase the TCP window (-w 1024k) by using the following commands:

  • Client side:
    iperf -i 2 -t 20 -c 10.10.10.5 -w 1024k

  • Server side:
    iperf -s -w 1024k -i 2

By increasing the TCP window from the default value to 1 MB (1024k), you might see a 400% increase in throughput over the baseline. However, the maximum window size is limited by the operating system. To drive greater throughput, consider parallel streams: with multiple simultaneous streams, you can fill the pipe close to its maximum usable capacity.
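
For example, on Linux the kernel caps the socket buffer that -w can request. The following is a minimal sketch for inspecting and, as root, raising those caps; exact defaults vary by distribution:

# Show the current maximum socket receive and send buffer sizes
sysctl net.core.rmem_max net.core.wmem_max

# Raise both caps to 1 MB so that -w 1024k can take full effect
sysctl -w net.core.rmem_max=1048576
sysctl -w net.core.wmem_max=1048576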


How do you use parallel streams (-P 7)?

You can use the -P flag to run multiple streams in parallel:
iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k -P 7

Note: Although the following examples use 7 parallel streams, 10 parallel streams are recommended for greater efficiency.

  • Client side:
    iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k -P 7

    Example results:
    ------------------------------------------------------------
    Client connecting to 10.10.10.5, TCP port 8000
    TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
    ------------------------------------------------------------
    [ ID] Interval Transfer Bandwidth
    [ 9] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
    [ 4] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
    [ 7] 0.0- 2.0 sec 25.6 MBytes 107 Mbits/sec
    [ 8] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
    [ 5] 0.0- 2.0 sec 25.8 MBytes 108 Mbits/sec
    [ 3] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
    [ 6] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
    [SUM] 0.0- 2.0 sec 178 MBytes 746 Mbits/sec

    Note: Some of the output on both the server and client sides has been omitted for brevity.

    [ 7] 18.0-20.0 sec 28.2 MBytes 118 Mbits/sec
    [ 8] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
    [ 5] 18.0-20.0 sec 28.0 MBytes 117 Mbits/sec
    [ 4] 18.0-20.0 sec 28.0 MBytes 117 Mbits/sec
    [ 3] 18.0-20.0 sec 28.9 MBytes 121 Mbits/sec
    [ 9] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
    [ 6] 18.0-20.0 sec 28.9 MBytes 121 Mbits/sec
    [SUM] 18.0-20.0 sec 200 MBytes 837 Mbits/sec
    [SUM] 0.0-20.0 sec 1.93 GBytes 826 Mbits/sec

  • Server side:
    iperf -s -w 1024k -i 2 -p 8000

    Example results:
    ------------------------------------------------------------
    Server listening on TCP port 8000
    TCP window size: 1.00 MByte
    ------------------------------------------------------------
    [ 4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
    [ ID] Interval Transfer Bandwidth
    [ 5] 0.0- 2.0 sec 25.7 MBytes 108 Mbits/sec
    [ 8] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
    [ 4] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
    [ 9] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
    [ 10] 0.0- 2.0 sec 25.9 MBytes 108 Mbits/sec
    [ 7] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
    [ 6] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
    [SUM] 0.0- 2.0 sec 178 MBytes 747 Mbits/sec

    [ 4] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
    [ 5] 18.0-20.0 sec 28.3 MBytes 119 Mbits/sec
    [ 7] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
    [ 10] 18.0-20.0 sec 28.1 MBytes 118 Mbits/sec
    [ 9] 18.0-20.0 sec 28.0 MBytes 118 Mbits/sec
    [ 8] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
    [ 6] 18.0-20.0 sec 29.0 MBytes 121 Mbits/sec
    [SUM] 18.0-20.0 sec 200 MBytes 838 Mbits/sec
    [SUM] 0.0-20.1 sec 1.93 GBytes 825 Mbits/sec

As the previous tests show, throughput increased from 29 Mbits/sec, with a single stream and the default TCP window, to roughly 825 Mbits/sec with a larger window and parallel streams. On a Gigabit link, these results are about the maximum throughput that you can achieve before saturating the link and causing packet loss. Using these processes, you can prove out the network and verify that bandwidth capacity is not an issue, and then focus on tuning TCP to get the most out of your network.

Typically, about 90% utilization is the real-world maximum that you can achieve. Above 90% utilization, you begin to saturate the link and incur packet loss.

Additional networking tips:

  • You can test with UDP if you are primarily concerned about your network's raw bandwidth. TCP requires tuning, so it is more difficult to determine whether the host is a contributing factor in a TCP stream. If packet loss is an issue, TCP greatly amplifies the symptom. The following commands measure your network's bandwidth (see the note after this list about the UDP target rate):

    • Server side:
      iperf -s -u

    • Client side:
      iperf -i 2 -t 20 -c 10.10.10.5 -u

  • The report interval length (-i) affects how noisy the report is. For smoother averages, set a longer interval.
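
Note on the UDP test above: the iperf2 UDP client defaults to a low target rate (about 1 Mbit/sec), so specify -b to actually exercise the link. A minimal sketch targeting 100 Mbits/sec:

iperf -i 2 -t 20 -c 10.10.10.5 -u -b 100M

The server-side UDP report also includes jitter and the percentage of lost datagrams.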

[{"Business Unit":{"code":"BU053","label":"Cloud & Data Platform"},"Product":{"code":"SSCL81","label":"Network Tools"},"Component":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions","Edition":"","Line of Business":{"code":"","label":""}}]

Document Information

Modified date:
01 August 2019

UID

ibm1KB0011467