Comments (14)

1 jmsearcy commented Permalink

So is there a road map for Virtual Ethernet adapters that will support higher bandwidths than 1Gb/s? Can you get more performance from link agg of multiple Virtual Ethernet devices on the client LPAR?

2 Wing commented Permalink

You beat me to it, Nigel. I've been meaning to put something together after discovering this a few months ago when we also had 10Gb for our VIOs. From our testing, the limitation is in the network bridge where a virtual ethernet adapter is bridged to the SEA... hence the LPARs accessing the network via the VIOS SEA will NEVER get 10Gb. Virtual ethernet 'speed' is determined by the MTU: the higher the MTU, the more efficient and thus the higher the speed. You would have thought that two LPARs on two frames connected by 10Gb SEAs could talk at 10Gb, but as you stated above, this is simply not the case.

There is no point in giving more than two (primary/backup or etherchannel) 10Gb adapters to a SEA as the bottleneck is in the network bridge.
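For anyone wanting to confirm how their VIOS is bridging, a quick check (run as padmin; ent4 below is a placeholder SEA name, and these are just the standard listing commands, not a tuning recipe):

$ lsmap -all -net                # shows each SEA with the virtual and physical adapters it bridges
$ lsdev -dev ent4 -attr          # lists the SEA's attributes (real_adapter, virt_adapters, largesend, thread, ...)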

3 JackDrapeau commented Permalink

As Wing said, everything is determined by the MTU: a 9k MTU will give you a virtual ethernet speed of 4 to 5Gb/s instead of 1.5Gb/s, and this 5Gb/s is maintained even between two CECs going through two SEAs. If you want to do more, reduce the CPU cycles by making your MTU size as big as your external switch & network guys support.
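As a sketch of the MTU change itself on an AIX client (en0 is a placeholder; for a physical adapter, the ent device and the external switch ports must also be set up for jumbo frames):

# chdev -l en0 -a mtu=9000       # raise the interface MTU to 9000
# lsattr -El en0 -a mtu          # confirm the new value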

4 jmsearcy commented Permalink

I see now from further reading that NIB is the only option with Virtual Adapters at the client LPAR.

5 69HC_Wayne_Monfries commented Permalink

I noted recently that the default network options on a VIOS are very underweight for just about any network connection over 10Mb in bandwidth... has anyone had a significant look at network tuning on the VIOSs and the VIOCs?
Is the limited bandwidth you are seeing related to the TCP tuning rather than the implementation within the hypervisor, perhaps?
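For what it's worth, the usual suspects are the TCP send/receive buffers and rfc1323. A hedged sketch of inspecting and raising them per interface on AIX (en0 and the 262144 values are examples only, not a recommendation):

# no -a | egrep "tcp_sendspace|tcp_recvspace|rfc1323"            # system-wide defaults
# lsattr -El en0 | egrep "tcp_sendspace|tcp_recvspace|rfc1323"   # interface-specific (ISNO) overrides
# chdev -l en0 -a tcp_sendspace=262144 -a tcp_recvspace=262144 -a rfc1323=1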

6 DanBraden commented Permalink

I'd like to add that one can test network bandwidth with ftp between two AIX (or two Unix) LPARs:

 
# ftp <AIX box>
provide login credentials
 
ftp> put "|dd if=/dev/zero bs=1m count=100" /dev/null
200 PORT command successful.
150 Opening data connection for /dev/null.
100+0 records in.
100+0 records out.
226 Transfer complete.
104857600 bytes sent in 14.06 seconds (7285 Kbytes/s)
local: |dd if=/dev/zero bs=1m count=100 remote: /dev/null
 
In this case, we transferred 100 MB in 14.06 seconds, for a throughput of 7.11 MB/s.
 
The advantage of this command is that it doesn't do any reads from local disk or writes to remote disk. One can also use a different block size or count for different sized IOs or total IOs respectively.
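For example (just varying the block size and count in Dan's command; bs and count here are arbitrary), pushing 1 GB in 8 KB records:

ftp> put "|dd if=/dev/zero bs=8k count=131072" /dev/null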

7 acfowler commented Permalink

I agree 100% that MTU size has the greatest effect on performance for Virtual Adapters. A no longer available but often cited redbook described the following Virtual Ethernet performance:

The VLAN adapter has a higher raw throughput at all MTU sizes. With an MTU size of 9000 bytes, the throughput difference is very large (four to five times) because the physical Ethernet adapter is running at wire speed (989 Mbit/s user payload), but the VLAN can run much faster because it is limited only by CPU and memory-to-memory transfer speeds.

At MTU sizes of 65K, the VLAN adapter supports a throughput of roughly 10,000 Mbit/s, which is basically 10Gb/s speed.

The only other limiting factor is CPU; however, with such large MTU sizes, very little CPU resource is required. On POWER5, 1 CPU was required to achieve 10Gb/s performance between LPARs.
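If you want to try the large MTU between LPARs yourself, a hedged sketch (en0 is a placeholder; rather than assuming a particular maximum, check the legal range first, and remember both ends must use the same MTU):

# lsattr -Rl en0 -a mtu                       # list the legal MTU values for this interface
# chdev -l en0 -a mtu=<max value from list>   # set it on both LPARs' virtual interfaces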

8 mfaisald commented Permalink

Enabling largesend also does the trick:
ifconfig en0 largesend

To verify: ifconfig -a

To disable: ifconfig en0 -largesend

This technique can be used LPAR to LPAR within the same frame and also if packets have to leave the frame, in which case largesend needs to be enabled on the SEA bridge device as well as on the physical Ethernet adapter associated with the SEA, for which the attribute is large_send.
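On the VIOS side that would look something like the following (as padmin; ent5/ent0 are placeholder device names, and attribute names and legal values vary by adapter type, so check with lsdev -dev entX -attr first):

$ chdev -dev ent5 -attr largesend=1        # ent5 = the SEA bridge device
$ chdev -dev ent0 -attr large_send=yes     # ent0 = the physical adapter backing the SEA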

9 nagger commented Permalink

Can all those worrying about the maximum speed of the virtual Ethernet please read a follow-up AIXpert Blog about getting your networks to go faster, which covers physical 10 Gb and virtual networks. The blog entry is called: 10Gbit Ethernet, bad assumption and Best Practice - Part 1.

Find it here: https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/10gbit_ethernet_bad_assumption_and_best_practice_part_137

If the VIOS physical network is tuned up from the defaults (which are not good for 10 Gb) for higher performance, then the virtual Ethernet stands a chance, but it needs a reality check: you MUST have multiple data streams, you need POWER7 CPUs to drive this hard, and the far end must be able to cope too. A single FTP transfer in a micro-partition (a fraction of a CPU) to a weak backup server is never going to hit the theoretical line speed, regardless of using the virtual Ethernet or a physical dedicated Ethernet adapter.

10 Allan Cano commented Permalink

This information is great!

I read this as a method of creating a QoS for LPARs. Would that be a fair statement?

Also, does anyone know if setting the MTU too high on a virtual adapter will cause adverse problems through the SEA due to packet fragmentation? Networking is not my strong suit, but I thought having a client reassemble fragmented packets was slower than simply running the source at the same MTU as the client/switch. Combine that with a concern that while this maximizes intra-MS LPAR communication, inter-MS LPAR traffic could start taxing my VIO CPU when it's forced to fragment packets from 65535 down to 9000/1500: is it worth it to increase the LPARs' veth MTU to the max, or is it going to suck up more CPU on the VIO than it's worth?