Comments (14)

1 jmsearcy commented Permalink

So is there a road map for Virtual Ethernet adapters that will support higher bandwidths than 1Gb/s? Can you get more performance from link agg of multiple Virtual Ethernet devices on the client LPAR?

2 Wing commented Permalink

You beat me to it, Nigel. I've been meaning to put something together after discovering this a few months ago, when we also had 10Gb for our VIOs. From our testing, the limitation is in the network bridge where a virtual Ethernet adapter is bridged to the SEA... hence the LPARs accessing the network via the VIO SEA will NEVER get 10Gb. Virtual Ethernet 'speed' is determined by the MTU: the higher the MTU, the more efficient the transfer and thus the higher the speed. You would have thought that two LPARs on two frames connected by 10Gb SEAs could talk at 10Gb, but as you stated above, this is simply not the case.

There is no point in giving more than two (primary/backup or EtherChannel) 10Gb adapters to a SEA, as the bottleneck is in the network bridge.

3 JackDrapeau commented Permalink

As Wing said, everything is determined by the MTU: a 9k MTU will give you a virtual Ethernet speed of 4 to 5Gb/s instead of 1.5Gb/s, and this 5Gb/s is maintained even between two CECs going through two SEAs. If you want to do more, reduce the CPU cycles by making your MTU size as big as your external switch & network guys support.
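For anyone wanting to try the MTU change Jack describes, a minimal sketch on an AIX LPAR (the interface name en0 is a placeholder, and the gain only materialises if every hop end to end accepts jumbo frames):

```shell
# Raise the interface MTU to 9000 (jumbo frames); en0 is a
# placeholder - check your interface names with "lsdev -Cc if"
chdev -l en0 -a mtu=9000

# Confirm the new value took effect
lsattr -El en0 -a mtu
```

If the switches or the far-end LPAR run a smaller MTU, packets get fragmented instead and the benefit disappears.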

4 jmsearcy commented Permalink

I see now from further reading that NIB is the only option with Virtual Adapters at the client LPAR.

5 69HC_Wayne_Monfries commented Permalink

I noted recently that the default network options on a VIOS are very underweight for just about any network connection over 10Mb in bandwidth. Has anyone had a significant look at network tuning on the VIOSs and the VIOCs? Is the limited bandwidth you are seeing related to the TCP tuning rather than the implementation within the hypervisor, perhaps?

6 DanBraden commented Permalink

I'd like to add that one can test network bandwidth with ftp between two AIX (or two UNIX) LPARs:

# ftp <AIX box>
provide login credentials
ftp> put "|dd if=/dev/zero bs=1m count=100" /dev/null
200 PORT command successful.
150 Opening data connection for /dev/null.
100+0 records in.
100+0 records out.
226 Transfer complete.
104857600 bytes sent in 14.06 seconds (7285 Kbytes/s)
local: |dd if=/dev/zero bs=1m count=100 remote: /dev/null

In this case, we sent 100 MB in 14.06 seconds for a throughput of 7.11 MB/s. The advantage of this command is that it doesn't do any reads from the local disk or writes to the remote disk. One can also use a different block size or count for different sized IOs or total IOs respectively.
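For the record, the 7.11 MB/s figure follows directly from the byte count and elapsed time that ftp prints; a one-liner (plain awk arithmetic, nothing AIX-specific) reproduces it:

```shell
# 104857600 bytes sent in 14.06 seconds, converted to MB/s
awk 'BEGIN { printf "%.2f MB/s\n", 104857600 / 1024 / 1024 / 14.06 }'
# prints: 7.11 MB/s
```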

7 acfowler commented Permalink

I agree 100% that MTU size has the greatest effect on performance for Virtual Adapters. A no longer available but often cited Redbook described the following Virtual Ethernet performance:

"The VLAN adapter has a higher raw throughput at all MTU sizes. With an MTU size of 9000 bytes, the throughput difference is very large (four to five times) because the physical Ethernet adapter is running at wire speed (989 Mbit/s user payload), but the VLAN can run much faster because it is limited only by CPU and memory-to-memory transfer speeds."

At MTU sizes of 65K, the VLAN adapter supports a throughput of 10,000 Mbit/s, which is basically 10Gb/s speed.

The only other limiting factor is CPU, but with such large MTU sizes very little CPU resource is required. On POWER5, one CPU was required to achieve 10Gb/s performance between LPARs.

8 mfaisald commented Permalink

Enabling largesend also does the trick:

ifconfig en0 largesend

To verify: "ifconfig -a"

To disable: "ifconfig en0 -largesend"

This technique can be used LPAR to LPAR within the same frame, and also if packets have to leave the frame, in which case largesend needs to be enabled on the SEA bridge device as well as on the physical Ethernet adapter associated with the SEA, for which the attribute is large_send.
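Putting those pieces together, a sketch of the full chain (the device names en0, ent4 and ent0 are placeholders for the client interface, the SEA and its backing physical adapter; check yours with lsdev before copying anything):

```shell
# Client LPAR: enable largesend for the current session...
ifconfig en0 largesend
# ...or persistently via the ODM attribute (on recent AIX levels)
chdev -l en0 -a mtu_bypass=on

# VIOS (padmin shell): enable largesend on the SEA bridge device
chdev -dev ent4 -attr largesend=1

# VIOS: enable large_send on the physical adapter backing the SEA
chdev -dev ent0 -attr large_send=yes
```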

9 nagger commented Permalink

Can all those worrying about the maximum speed of the virtual Ethernet please read a follow-up AIXpert blog about getting your networks to go faster, which covers both physical 10Gb and virtual networks. The entry is called: 10Gbit Ethernet, bad assumption and Best Practice - Part 1.

If the VIOS physical network is tuned up from the defaults (which are not good for 10Gb) for higher performance, then the virtual Ethernet stands a chance, but it needs a reality check: you MUST have multiple data streams, you need POWER7 CPUs to drive this hard, and the far end must be able to cope too. A single FTP transfer in a micro-partition (a fraction of a CPU) to a weak backup server is never going to hit the theoretical line speed, regardless of using the virtual Ethernet or a physical dedicated Ethernet adapter.

10 Allan Cano commented Permalink

This information is great!

I read this as a method of creating QoS for LPARs. Would that be a fair statement?

Also, does anyone know if setting the MTU too high on a virtual adapter will cause adverse problems through the SEA due to packet fragmentation? Networking is my strong suite, but I thought having a client reassemble fragmented packets was slower than simply running the source at the same MTU as the client/switch. Combined with the concern that, while it maximizes intra-MS LPAR communication, inter-MS LPAR traffic could start taxing my VIO CPU when it's forced to fragment packets from 65535 down to 9000/1500: is it worth it to increase the LPARs' veth MTU to the max, or is it going to suck up more CPU on the VIO than it's worth?

11 Allan Cano commented Permalink

That should have read 'networking is NOT my strong suite'...

12 PatrickK commented Permalink

Has the virtual Ethernet speed limit been surpassed with the latest VIOS version? We have recently added a QLogic 10Gb card in our blade 701 server running IVM and VIOS (with emgr -l: IV37111m2a, IV38225s2a, IV39725m2a). The card is configured on the VIOS as a Shared Ethernet Adapter over an EtherChannel NIB. Our VIOC using this VEA is able to achieve over 6Gb/sec of throughput, generated/tested with iperf (largesend enabled on the SEA, mtu_bypass enabled on the VIOC), with a 256K TCP window size.

VIOC with virtual ethernet adapter (10Gb SEA)
root@tsmbkp /# iperf -s -w 256K
Server listening on TCP port 5001
TCP window size: 256 KByte
[ 4] local port 5001 connected with port 56902
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.4 sec 3.76 GBytes 3.12 Gbits/sec <=Single Thread Test
[ 4] local port 5001 connected with port 56946
[ 5] local port 5001 connected with port 56947
[ 6] local port 5001 connected with port 56948
[ 7] local port 5001 connected with port 56949
[ 4] 0.0-11.3 sec 2.02 GBytes 1.54 Gbits/sec
[ 5] 0.0-11.3 sec 2.06 GBytes 1.56 Gbits/sec
[ 6] 0.0-11.3 sec 2.06 GBytes 1.56 Gbits/sec
[ 7] 0.0-11.3 sec 2.06 GBytes 1.56 Gbits/sec
[SUM] 0.0-11.3 sec 8.18 GBytes 6.22 Gbits/sec <=Quadruple Thread Test
VMware windows machine generating traffic:
H:\iperf-2.0.5-2-win32>iperf -c tsmbkp -w256K
Client connecting to tsmbkp, TCP port 5001
TCP window size: 256 KByte
[ 3] local port 56902 connected with port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 3.76 GBytes 3.23 Gbits/sec <=Single Thread Test
H:\iperf-2.0.5-2-win32>iperf -c tsmbkp -w256K -P4
Client connecting to tsmbkp, TCP port 5001
TCP window size: 256 KByte
[ 6] local port 56949 connected with port 5001
[ 5] local port 56948 connected with port 5001
[ 4] local port 56947 connected with port 5001
[ 3] local port 56946 connected with port 5001
[ ID] Interval Transfer Bandwidth
[ 6] 0.0-10.3 sec 2.06 GBytes 1.72 Gbits/sec
[ 5] 0.0-10.2 sec 2.06 GBytes 1.74 Gbits/sec
[ 4] 0.0-10.4 sec 2.06 GBytes 1.70 Gbits/sec
[ 3] 0.0-10.0 sec 2.02 GBytes 1.73 Gbits/sec
[SUM] 0.0-10.4 sec 8.18 GBytes 6.78 Gbits/sec <=Quadruple Thread Test
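As a side note, the four-stream SUM lines above are self-consistent; adding up the rounded per-stream figures (plain awk arithmetic, values copied from the server-side output above) gives:

```shell
# Sum the four per-stream transfers and bandwidths from the
# server-side test: 2.02 + 3 x 2.06 GBytes, 1.54 + 3 x 1.56 Gbits/sec
awk 'BEGIN { printf "[SUM] %.2f GBytes %.2f Gbits/sec\n",
             2.02 + 2.06 * 3, 1.54 + 1.56 * 3 }'
# prints: [SUM] 8.20 GBytes 6.22 Gbits/sec
```

The bandwidth total matches iperf's 6.22 Gbits/sec exactly; the transfer total comes out at 8.20 rather than 8.18 GBytes only because the per-stream values are rounded to two decimals.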
Any insight would be gratefully accepted.

13 nagger commented Permalink

Well, Mr PatrickK, full marks for all your hard work and for giving us some excellent feedback. Since 2011 the POWER machines have got faster, and the Hypervisor, the tuning options and the operating systems have improved too. But I think the biggest factor is that you have tuned the network in just the right way: big packets drastically reduce POWER and adapter management overheads, and you are using many channels concurrently, getting roughly 1.5 Gbit/second on each. Like I said: full marks. There are some other tuning options covered in a later AIXpert blog, but I am not sure all of them can be enabled in your Power Blade environment. Cheers, Nigel Griffiths

14 PatrickK commented Permalink

Thanks Mr. Griffiths, your input was much appreciated.
Regards,
Patrick