IBM Redbooks PowerVM workshop Madrid, Days 2-3
Daniel Martin-Corben
Continuing on from the first day of the workshop in Madrid, we followed up days 2 and 3 with further PowerVM topics. Some key points that I picked up were as follows -
A typical Hypervisor uses about 1-1.5MB of memory per virtual SCSI adapter, with a maximum queue depth of 510 queued commands per device. Compare that to virtual Fibre Channel (for NPIV), which has a Hypervisor memory footprint of about 140MB per adapter, and again to InfiniBand, which uses 1GB per adapter; this is due to the high performance considerations with these devices.
Along with the queue depth on the adapter, each disk device will have one defined for itself, on both the VIOS and the client. If you have the related disk device definitions installed on the VIOS or client, the system should set the disk queue depth to the recommended amount for that model of storage. If not, the default is only 3, which is a huge potential bottleneck on your VIOS or client.
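As an illustration (a minimal sketch; hdisk0 is a placeholder device name, and the right queue_depth value depends on your storage vendor's recommendation), you can check and raise the queue depth on an AIX client like this:

    # show the current queue depth for a disk
    lsattr -El hdisk0 -a queue_depth
    # set a vendor-recommended value (example only); -P defers the change if the disk is busy
    chdev -l hdisk0 -a queue_depth=20 -P

On the VIOS itself the padmin equivalent is chdev -dev hdisk0 -attr queue_depth=20 -perm.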
Memory considerations in the Hypervisor for multiple network switches and adapters: virtual Ethernet is the same as vSCSI at about 1-1.5MB per adapter, whereas HEA (Host Ethernet Adapter) uses around 102MB per adapter, so they are not as efficient. On top of that comes their CPU consumption from the VIOS, as they don't have a dedicated CPU. The virtual switches have a memory usage too, but it's minuscule and not something to worry about.
The Hypervisor-contained virtual network gives you an equivalent speed of about 16Gbit per second, so it's a much improved solution for network backups and file serving such as NFS. You will need to set the MTU to 65394 on the VIOS, virtual switch and clients, which means it's not available outside of the virtualised network, where the maximum is 9000. The higher MTU will decrease CPU load on your system too; this will be the case if you are doing backups and the like, but for HTTP reads and smaller transactions it will make no difference.
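For example (a sketch only; en0 is a placeholder for your own virtual Ethernet interface), the large MTU is set per interface on the VIOS and on each client:

    # check the current MTU on the virtual Ethernet interface
    lsattr -El en0 -a mtu
    # raise it to the virtual-network maximum; all partitions on the vswitch should match,
    # and changing it may briefly disrupt traffic on the interface
    chdev -l en0 -a mtu=65394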
Note that 10Gbps adapters, due to the TCP/IP load, will require 2 to 3 POWER7 cores to run at full speed; the same is true if it were 10x 1Gbps adapters (see below). Remember to set the large_send parameter, and for 10Gbps cards large_receive too, plus jumbo frames. Sometimes the VIOS is mistaken as a bottleneck for network performance when in reality it's the huge CPU demand that TCP/IP makes for the bandwidth these Gbps systems need. So if you're not driving 10Gbps cards at full speed, do you really need them?
Note: Linux on Power at present does not support the large_receive parameter, so to improve performance give those systems their own dedicated SEA.
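By way of illustration (ent4 and ent5 are placeholder device names, and attribute names can vary by adapter type and VIOS level, so check the lsattr output first), those offload settings are enabled roughly like this:

    # physical 10Gbps adapter on the VIOS: jumbo frames plus send/receive offload
    chdev -dev ent4 -attr jumbo_frames=yes large_send=yes large_receive=yes -perm
    # Shared Ethernet Adapter: pass large sends/receives through to the clients
    chdev -dev ent5 -attr largesend=1 large_receive=yes -perm
    # AIX client side: allow largesend over the virtual adapter
    chdev -l en0 -a mtu_bypass=on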
Sizing Network Recommendations -
Sum all partitions for each environment and size according to the results; these examples are over-conservative -
30 systems, 12 Prod, 18 Non-prod
Prod 12x 0.4Gbps = 4.8Gbps
Non 18x 0.2Gbps = 3.6Gbps
So this would need at most 1x 10Gbit card or 10x 1Gbit, using at least 2-3 POWER7 cores on the VIO Server in a worst case, fully loaded network. Double the core numbers in a dual VIO Server environment, and add the CPU demand on the related virtual machines and clients. So a 10Gbps card needs 2 cores on the VIOS, on the LPAR, and on the system it's being sent to, along with the network demand in relation to switch ports and uplinks. Remember, if you have a slow uplink somewhere all of this is pointless; you'll just flood that port and end up running at the slowest denominator.
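A quick back-of-the-envelope check of that sum (the 0.4 and 0.2 Gbps per-partition figures are just the conservative example numbers above):

    # 12 prod + 18 non-prod partitions at their estimated bandwidths
    echo "12*0.4 + 18*0.2" | bc    # = 8.4 Gbps total, so 1x 10Gbit (or 10x 1Gbit) covers it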
Recommended TCP/IP CPU usage -
As we all know, TCP/IP is CPU intensive (it was designed for unreliable, slow networks), and as such it is recommended that for each 1Gbit of network bandwidth you allow 1GHz of CPU. So you could potentially offload some of your CPU consumption onto your VIO Server (compared to a dedicated environment), but you will need to take this into account when building the systems. If you have a 10Gbit Ethernet card serving your clients via a Shared Ethernet Adapter, then 10GHz of CPU will need to be potentially available just for that. So for a POWER server running at 3.8GHz per core, you're going to need about 2.6 cores just to cover that possible maximum bandwidth. In reality the load will probably be lower, but you need to be sure there is enough available, both on the VIOS and the client, as it too has a TCP/IP stack. So do think about shared processor pools and uncapped weight numbers.
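As a worked example of that rule of thumb (1GHz of CPU per 1Gbit of bandwidth, with the 3.8GHz cores mentioned above):

    # 10Gbit card -> 10GHz of CPU -> cores needed at 3.8GHz per core
    echo "scale=2; 10/3.8" | bc    # = 2.63, i.e. budget roughly 2.6 cores for the worst case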
So why have the virtual network in the first place? It's all about potential usage. If you need a 10Gbit network, you can ensure that just the VIOS has that potential maximum load and the dedicated card, while splitting the rest of your bandwidth across all your clients. Sure, they can probably take more network traffic if they need it, but do they really all need a 10Gbit card each? More so, most of the traffic will probably stay within the virtual network, which is high bandwidth in itself.