Local, Near & Far Memory part 7 - VM placement also needs RAM
nagger
We all tend to concentrate on the CPU first and the memory second. The CPUs, as the "brains" of the machine, get most of the attention and contain a lot of extreme technology, but the RAM is the "guts" of the machine, "feeding" the CPU with nutrient data. OK, let us stop the analogy there :-) Along with reducing the number of CPUs via a lower Virtual Processor count, we also need the CPUs matching the memory - so AIX has a fighting chance to localise a running process to its home SRAD and thus have its data local for maximum speed.
Logical Memory Block size
At the SRAD level we deal with whole CPUs, but memory is finer grained and dealt with in MB terms. The smallest memory chunk is the Logical Memory Block, often just called the LMB. This is set to a default size, depending on the installed memory, when the machine boots for the first time. I think the intention is to end up with something like 1000 chunks of memory: enough flexibility in assigning memory to a virtual machine, sensible sizes for Dynamic memory changes, but not the waste of tracking millions of little memory chunks. On my medium and large POWER7 machines, I have seen LMB defaults from 64 MB (Power 750) to 256 MB (Power 795). After trying Live Partition Mobility, you quickly realise that the source and target machines have to have the same LMB size - so we standardised on 128 MB in our computer room.
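As a rough sketch of that "about 1000 chunks" idea, here is my guess at the sizing logic - this is NOT the actual firmware algorithm, and the 16 MB starting point and 256 MB cap are assumptions on my part:

```shell
#!/bin/sh
# Illustrative sketch only - not the real firmware logic: pick the smallest
# power-of-two LMB size (in MB) that keeps the machine at or under about
# 1024 blocks, capped at an assumed 256 MB maximum.
pick_lmb() {
  mem_mb=$1                                  # installed memory in MB
  lmb=16                                     # smallest candidate LMB in MB (assumed)
  while [ "$lmb" -lt 256 ] && [ $((mem_mb / lmb)) -gt 1024 ]; do
    lmb=$((lmb * 2))
  done
  echo "$lmb"
}

pick_lmb 65536      # 64 GB machine -> 64 (exactly 1024 blocks)
pick_lmb 1048576    # 1 TB machine  -> 256 (hits the assumed cap)
```

Note that with the 128 MB LMB we standardised on, a 64 GB machine is just 512 blocks - still plenty of granularity.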
How to set the LMB?
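From the HMC command line you can at least check the current LMB size (the mem_region_size attribute); changing it is done from the ASMI menus and needs the whole machine powered off. A sketch - the managed-system name below is made up, substitute your own:

```shell
# Check the current LMB (memory region) size, in MB, from the HMC command line.
# "Server-9117-MMB-SN12345" is a made-up managed-system name.
lshwres -r mem -m Server-9117-MMB-SN12345 --level sys -F mem_region_size

# Changing the LMB size is done in the ASMI menus (Performance Setup ->
# Logical Memory Block Size) and only takes effect after a full power off/on.
```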
Assigning memory to virtual machines and SRADs
When starting your virtual machine, the Hypervisor has to decide the optimal placement for both the CPUs and the memory. I have customers that, for similar workloads, use a fixed CPU to RAM ratio of say 1 CPU to 32 GB of memory - if the machine was configured to this ratio and the virtual machines are too, then there is a good chance there is RAM available with each processor - it depends if you feel lucky! I am thinking here of a machine that is, say, 80% allocated to running virtual machines and we are adding a further one. If instead there is no fixed ratio and each virtual machine's ratio is decided by its needs, then I think you are more likely to hit situations where there are CPU and memory "islands" - by which I mean SRADs with spare CPUs but no spare local memory, or spare memory but no spare CPUs.
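To make the "islands" idea concrete, here is a tiny sketch - the per-SRAD free-resource figures are entirely made up - that flags SRADs where the spare CPU and spare RAM no longer line up:

```shell
#!/bin/sh
# Flag CPU/memory "islands": SRADs with spare cores but no spare local RAM,
# or spare RAM but no spare local cores.
# Input lines: "SRAD free_cores free_GB" (made-up figures for illustration).
flag_islands() {
  while read srad cores gb; do
    if [ "$cores" -gt 0 ] && [ "$gb" -eq 0 ]; then
      echo "SRAD $srad: CPU island - $cores cores free but no local RAM"
    elif [ "$cores" -eq 0 ] && [ "$gb" -gt 0 ]; then
      echo "SRAD $srad: memory island - $gb GB free but no local cores"
    fi
  done
}

printf '0 0 24\n1 6 0\n2 2 16\n' | flag_islands
```

A new virtual machine placed on SRAD 1 above gets local CPUs but only Far memory, and vice versa on SRAD 0 - exactly the situation a fixed CPU:RAM ratio helps avoid.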
In diagram form:
This (above) is a worst case and what we want to avoid, if at all possible.
Virtual machine persistent placement:
A few examples of good, bad and "oh dear"
I have seen some customer virtual machines that have ended up with bad layouts in SRAD placement terms.
First, here is my 20 Virtual Processor virtual machine (remember: Entitlement is irrelevant) as it started up after a 7 hour power outage (which took down a 2 mile radius of London - not our fault!), so it was definitely a cold boot.
The machine only has 32 CPUs in the two CEC drawers of the Power 770, and the Hypervisor decided to lay the virtual machine out across all four SRADs, with a pretty even spread of CPUs and fairly even memory to match. This virtual machine was started after three Virtual I/O Servers, an NIM server and my Systems Director virtual machine. This might explain why it is not absolutely consistent, but it will be fairly typical of a largish virtual machine started when the machine was (guessing here) 25% busy already running other smallish virtual machines. I am pleased with this placement.
Update: I then tried a few experiments.
I dropped the VPs to 16, thinking it would use fewer SRADs than all four on my Power 770, and got the following:
Above, it is still using all four SRADs - but then I remembered this virtual machine has 64 GB of RAM. The machine has 128 GB in total, so that is 32 GB per POWER7 chip - and as I have other LPARs running, some memory of each SRAD was probably already in use, so the Hypervisor could not get 100% of the memory of just two SRADs and was thus spreading the VM out. I then dropped the memory from 64 GB to 32 GB with the same 16 Virtual Processors, restarted the VM and got ...
Above we now have a more compact placement.
This is not the way I would have planned it, but it is not bad.
What have we learnt?
Here is a "not too bad" example of a smaller 8 Virtual Processor virtual machine:
Above, most of the virtual machine (Logical (SMT) CPUs 0 - 27 = 28 in total = 7 physical CPUs) is in one SRAD. I can only guess that the last CPU was allocated late, forcing it on to a different SRAD. The memory is not balanced either.
Now here is a pretty bad example, from a customer Power 795 (no names, and it has subsequently been addressed):
Above we see first that the Power 795 can have four SRADs per processor book, as it contains four POWER7 chips to make up the 32 CPUs per book. There are clearly five SRADs with just one CPU-core and no memory at all.
So have a look at your virtual machines with lssrad -av and decide if you like the look of them. This may influence the VM start-up order next time you restart your machine (after you have reduced your VP count, of course). There is one way to make your placement much worse (covered in part 8) but no simple way to improve it, unless you are on old firmware and have Capacity Upgrade on Demand (covered in part 9).
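If you capture the lssrad -av output to a file, a few lines of awk will summarise it per SRAD. The sample input below is made up, but follows the general lssrad -av layout (SRAD number, MEM in MB, then logical CPU ranges):

```shell
#!/bin/sh
# Summarise captured 'lssrad -av' output: memory and logical CPU count per SRAD.
# On AIX you would pipe the real command in:  lssrad -av | summarise
summarise() {
  awk 'NF >= 3 && $1 ~ /^[0-9]+$/ {
    n = 0
    for (i = 3; i <= NF; i++) {            # CPU fields are ranges like 0-27
      split($i, r, "-")
      n += (r[2] == "" ? 1 : r[2] - r[1] + 1)
    }
    printf "SRAD %s: %s MB, %d logical CPUs\n", $1, $2, n
  }'
}

# Made-up sample in the style of lssrad -av on a two-SRAD machine:
cat <<'EOF' | summarise
REF1   SRAD        MEM      CPU
0
          0   22397.25      0-27 60-63
1
          1   29801.00      28-59
EOF
```

Run on the sample, this reports 32 logical CPUs (8 physical CPU-cores in SMT4) in each SRAD - a quick way to spot a lop-sided layout without counting CPU ranges by eye.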
Get Out of Jail Free Card:
Just to remind readers again: the POWER7 based systems have excellent memory sub-systems and excellent inter-node memory bandwidth, and the machines will still perform well with heavy use of Far memory accesses. In addition, AIX will place processes and take action to minimise Far memory access. But it is not optimal, and with planning and care we can get the performance a bit higher. In the case above, I could see AIX trying not to use the last five SRADs until absolutely necessary, but we have already seen that the first SMT thread on each physical CPU-core is used before the other threads. Unfortunately, I don't have the above layout to experiment on to find out which algorithm wins.