Local, Near & Far Memory part 8 - Dynamic LPAR changes can mess up your placement
nagger
After you have started and used your virtual machine (VM) for a while, you may decide to change its size using a Dynamic LPAR (DLPAR) change from the HMC (or SDMC or IVM, of course). This has virtual machine placement implications.
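As a hedged sketch of what such a change looks like from the HMC command line (the managed system name "p770-sys" and partition name "testvm" below are invented placeholders, and the echo keeps this a dry run - drop it to actually execute on an HMC):

```shell
# Hypothetical HMC CLI DLPAR changes -- names are placeholders, echo = dry run.
MANAGED=p770-sys
LPAR=testvm
# Add 4 GB of memory (chhwres -q takes megabytes) to the running partition:
MEM_CMD="chhwres -r mem -m $MANAGED -p $LPAR -o a -q 4096"
# Add 2 virtual processors:
PROC_CMD="chhwres -r proc -m $MANAGED -p $LPAR -o a --procs 2"
echo "$MEM_CMD"
echo "$PROC_CMD"
```

The same `chhwres` command with `-o r` removes resources, which is what triggers the placement effects discussed below.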
When playing with DLPAR on my non-production box, I frequently found myself scratching my head, wondering why it was doing certain things, and then suddenly working it out later. For example: I worked out that the second SRAD was empty (it must not have had any LPAR using its CPUs or memory) and that is why the hypervisor kept adding more CPU+RAM to it. Or I asked for 32 GB and realised it could not make a single-SRAD VM because there is a little overhead, but it could make a 31 GB one. In practice, I have found the hypervisor makes good choices, but the reasons are often a little mysterious, and not everyone can spend half a day experimenting to work out why.
In the Advanced Technical Support group, we tend to have the latest HMC, firmware (hypervisor) and operating systems installed - in fact, we often have early beta versions, as we get involved with testing, user experience feedback, improving the documentation and so on. When I was first experimenting with early POWER7 machines and looking at CPU and memory affinity, we were on an older firmware level and I found that shrinking and enlarging virtual machines led to some odd situations and increasingly asymmetric placements over time. But today, when I went to capture some examples of shrinking a 16 virtual processor + 32 GB memory virtual machine down to just 2 CPUs and 4 GB and back again, it was quite well behaved. We have, of course, updated the firmware between the tests.
It is good practice to keep the CPU to memory ratio about the same for our workloads, but this requires two Dynamic LPAR operations (we can't remove CPU and memory at the same time - if you try, you will find the first operation locks the virtual machine so that no further changes are possible until it completes). This makes harder work for the hypervisor, as it can't guess your intentions or what you may do in the near future. Those who have been around since POWER4 days will know the recommended best practice: do not make very large memory reductions in one go, but take, say, 4 GB at a time. Releasing memory from the virtual machine requires AIX to empty the memory first. If the memory is in use, that means paging the content out, which can take a considerable time. My test VM was not using the memory so it was quick, but on a busy server it can take many minutes, or an hour if you use a draconian removal size. I noticed that on large memory reductions the HMC shows Reference Code 2003; if you click on the code, it tells you a "Dynamic LPAR Memory Removal" is in progress.
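The "4 GB at a time" advice can be scripted. A minimal sketch, assuming invented placeholder names and using echo as a dry run, that removes 16 GB in four 4 GB steps rather than one draconian removal:

```shell
# Remove 16 GB in 4 GB steps instead of one big hit -- each step gives AIX a
# smaller chunk of memory to empty (and page out if in use) at a time.
# MANAGED and LPAR are invented placeholders; echo keeps this a dry run.
MANAGED=p770-sys
LPAR=testvm
CMDS=$(for step in 1 2 3 4
do
    echo "chhwres -r mem -m $MANAGED -p $LPAR -o r -q 4096"
done)
echo "$CMDS"
```

In real use you would run each `chhwres` and let it complete before issuing the next, since the partition is locked while a removal is in progress.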
Starting with Entitlement=16 (note that is not important), virtual processors=16 and desired memory=32 GB
This was taken down in various stages to E=2, VP=2 and RAM=8 GB
Then boosted up to E=24, VP=24, RAM=48 GB
Followed by going back to the start position of E=16, VP=16 and RAM=32 GB
I took the VM down from 16 CPUs and 32 GB to an extremely small one of just VP=2 and RAM=4 GB, and then started adding 1 GB at a time:
Now we have orphaned memory in SRAD 2 and rather unbalanced memory, with 5.3 GB close to one CPU and 1.2 GB for the other. Also, I think the two CPUs are in different Power 770 CEC drawers - indicated by REF0 and REF1.
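The original screen captures are not reproduced here, but an invented `lssrad -av` output in roughly this shape would show the problem: an SRAD holding memory with no CPUs attached to it. All the numbers below are made up for illustration only:

```shell
# Invented lssrad -av style output (NOT captured from a real machine) showing
# "orphaned" memory: SRAD 2 has a MEM value but nothing in the CPU column.
OUT='REF1   SRAD        MEM      CPU
0
          0    5325.00      0-3
1
          1    1200.00      4-7
          2     512.00'
echo "$OUT"
```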
Now I ask the impossible: while our experimental virtual machine is tiny, I start up a large virtual machine that takes up all the resources of the machine except just enough for our test VM to grow back to its original size. This means the new large VM has probably taken the CPUs and memory that were used by our test VM, so the test VM has to get the "left overs" - there was no way the hypervisor could know we would grow it again.
First, I add back the memory to 32 GB:
# lssrad -av
This is not a pleasant VM placement at all, but the hypervisor has no real options, as nearly all the machine's memory is now in use.
Second, I add back the CPUs to VP=16:
# lssrad -av
As expected, the memory is the same, and the CPUs, with their "sticky" nature (you tend to get the same ones every time), were added back as they were. If I had started and stopped a dozen other VMs on this VM's CPUs, I suspect that might not be the case - I have not tested that idea. The worst aspect of our latest placement is the memory in SRAD 2. Did you spot that SRAD 0 has one more CPU than SRAD 1? Also note that we have gaps in the CPU numbers and a top logical CPU number of 71, yet 16 physical CPUs with SMT=4 gives us 64 logical CPUs. Can you spot the missing ones?
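To make the arithmetic behind that question concrete (a trivial check, not AIX-specific):

```shell
# 16 physical CPUs at SMT=4 give 64 logical CPUs, which would be numbered 0-63
# if contiguous; a top logical CPU number of 71 means 72 numbered slots, so 8
# numbers in the range must be unused -- those are the gaps to spot.
PHYS=16
SMT=4
LOGICAL=$((PHYS * SMT))
TOP=71
MISSING=$((TOP + 1 - LOGICAL))
echo "logical CPUs: $LOGICAL, missing numbers: $MISSING"
```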
The case above makes the warning clear: drastic DLPAR changes, with new virtual machines started in between, can lead to sub-optimal VM placement.
Of course, smaller changes or a temporary boost will not have such a dramatic effect as this deliberately extreme test case.
Get Out of Jail Free Card