How to determine optimal memory size for a VM from nmon data?
nagger
This question was asked in the developerWorks Forum - a good question too.
Oh, if only life was that simple! A rule of thumb you could apply, perhaps automatically. But you are asking the impossible.
The first rule of paging space is: Never ever run out of paging space as "absolute mayhem is guaranteed". This is a quote from the very early UNIX manual pages.
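If you want to keep an eye on that first rule, the AIX `lsps -a` command lists each paging space with a %Used column. A minimal sketch of pulling that column out, using an invented sample of the output (your paging-space names, sizes and usage will differ):

```shell
# Hypothetical 'lsps -a' output (all values invented for the example):
sample_lsps='Page Space      Physical Volume   Volume Group    Size %Used Active  Auto  Type Chksum
hd6                   hdisk0            rootvg       8192MB    23   yes   yes    lv     0'

# Pull out the %Used column for the hd6 paging space.
pct_used=$(printf '%s\n' "$sample_lsps" | awk '$1 == "hd6" {print $5}')
echo "Paging space used: ${pct_used}%"
```

In real life you would pipe `lsps -a` straight into the awk, and alarm well before it approaches 100%.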
Let's split this into different cases:
1) There are many GB of unused RAM for a long period = clearly the free memory is not needed (unless the workload changes drastically at certain times of the year).
You can probably remove the RAM from the LPAR with Dynamic LPAR and reuse it elsewhere. Perhaps leave a little, just in case.
BUT you might find a large chunk of the used RAM is file system cache (see the numperm numbers) holding disk blocks that were read once, days or weeks ago, and no longer need to be in memory - so you could remove even more than the free memory. You could use the rmss trick.
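On AIX the `vmstat -v` command reports the numperm percentage, i.e. how much of real memory is currently file system cache. A minimal sketch of extracting it, using an invented sample of the output rather than a live system:

```shell
# Hypothetical fragment of 'vmstat -v' output on AIX (values invented):
sample_vmstat_v='
  4194304 memory pages
  1048576 free pages
     20.0 minperm percentage
     80.0 maxperm percentage
     45.3 numperm percentage
'

# Extract the numperm percentage - the share of real memory
# currently holding file system cache pages.
numperm=$(printf '%s\n' "$sample_vmstat_v" | awk '/numperm percentage/ {print $1}')
echo "File cache is ${numperm}% of RAM"
```

On a live system you would replace the sample with `vmstat -v` itself; a persistently large numperm is the hint that some of the "used" memory is really just old cached disk blocks.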
2) Memory use is nearly full, say 90%+, with no paging (or just a tiny amount).
As above, you might find a large chunk of the used RAM is file system cache (see the numperm numbers) holding disk blocks that were read once, days or weeks ago, and no longer need to be in memory, so you could remove it - perhaps 25% of the file system cache size, as a wild stab in the dark. This is very hard to determine. You could use the rmss trick.
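That "25% of the cache" wild stab can be turned into a quick back-of-the-envelope calculation. A sketch with invented numbers (a 64 GB LPAR whose numperm shows 40% file cache):

```shell
# Rough sizing sketch - all numbers invented for the example.
ram_gb=64        # total LPAR memory
numperm_pct=40   # file system cache share, from 'vmstat -v'

# Wild-stab estimate: a quarter of the file cache might be removable.
removable_gb=$(awk -v r="$ram_gb" -v n="$numperm_pct" \
    'BEGIN {printf "%.1f", r * n / 100 * 0.25}')
echo "Estimate: could try removing ${removable_gb} GB"
```

Treat the result as a starting point for a DLPAR experiment, not a promise - watch paging closely after any reduction.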
3) Memory is 100% used and a little paging *
This is the NORMAL way AIX runs (unlike other wacky UNIX systems that demand lots of unused memory as normal = a waste of money). AIX optimises the use of memory to minimise disk I/O, so it tends to soak up memory into the file system cache over time.
The little paging shows that you have about the right memory size.
You may get away with a small reduction in memory, or you may not. You could use the rmss trick.
4) Memory is 100% used and a load of paging **
This is the very hard one as you can't predict how much extra RAM you need to reduce the paging to a reasonable level.
All you can do is add 10% more RAM, watch the paging, and repeat until the paging stops.
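The "add 10% and watch" loop compounds, so it is worth seeing how quickly the sizes grow. A sketch with an invented 100 GB starting point:

```shell
# Sketch of the iterative +10% sizing (starting size invented).
ram=100   # GB
for step in 1 2 3; do
    ram=$(awk -v r="$ram" 'BEGIN {printf "%.0f", r * 1.10}')
    echo "Step ${step}: try ${ram} GB, then re-check paging in nmon"
done
```

After each step you let the workload run for a representative period and check the paging rate again before deciding on the next increase.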
5) There is free memory and it's still paging!
We see this often; it is due to the application using advanced features like memory mapped files, which page to/from the filesystem disks rather than paging space. Don't worry about it, but do tune your disks.
Now a word of caution: Some large systems (say 16 CPU and up) running complex workloads (like many applications of different types) just page all day. The programs are regularly changing their Resident Set pages as users do different things during the day. In which case, adding memory may not help at all. You just have to cope with handling a high paging I/O rate.
What is "lots of paging"? That depends :-) on the size of your LPAR, of course. A micro-LPAR (less than a whole CPU) can't handle much, but a 128 CPU LPAR can shrug off lots of I/O without breaking a sweat. As a starting point: 100 to 200 paging I/Os per second per CPU. This does assume a nice paging setup across plenty of fast disks.
* A little paging is below this rate
** A load of paging is well above this rate
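The rule of thumb above can be sketched as a quick check. The CPU count, observed paging rate and the verdict wording are all invented for the example; the thresholds are the 100 and 200 per-second-per-CPU figures from the text:

```shell
# Rule-of-thumb paging check (example inputs invented).
cpus=8
pgin_per_sec=500   # pretend this came from nmon's PAGE data

low=$((cpus * 100))    # comfortable ceiling
high=$((cpus * 200))   # upper end of the rule of thumb

if [ "$pgin_per_sec" -le "$low" ]; then
    verdict="a little paging - fine"
elif [ "$pgin_per_sec" -le "$high" ]; then
    verdict="getting busy - keep an eye on it"
else
    verdict="a load of paging - add RAM or fix the paging disks"
fi
echo "$verdict"
```

Remember this assumes the paging setup is spread across plenty of fast disks; on a single slow disk much lower rates will hurt.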
So that is five cases and no simple rule of thumb.
Why not? Well, memory is far more complicated than most people imagine, with many overlapping types and purposes. Plus, memory operations (allocating it, paging it out and back in, and releasing it) happen so fast that it's impossible to really track them - we would drown in the volume of stats it would generate.