Using the z/VM STOP and BEGIN method

This test is intended to show where the pages of the guests reside when the idling guests are paused with the z/VM® CP STOP and BEGIN commands and memory pressure is created by bringing the standby guests under load.

For more information about the guest systems and their types, see Guest usage.
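
For reference, this pausing mechanism amounts to issuing the CP STOP and BEGIN commands against the guests' virtual CPUs. A minimal sketch of how this could be scripted, assuming the vmcp tool from s390-tools on a suitably privileged guest (for example, privilege class C for the SEND command) and a hypothetical guest name LNXWAS1:

    # Pause the idling guest: STOP halts all of its virtual CPUs.
    # SEND CP places the command on the target guest's CP console.
    vmcp "SEND CP LNXWAS1 STOP"

    # ... bring the standby guests under load to create memory pressure ...

    # Resume the guest: BEGIN restarts its virtual CPUs.
    vmcp "SEND CP LNXWAS1 BEGIN"

Unlike the Linux suspend and resume mechanism, this method is transparent to the guest: no code runs inside the paused Linux system.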

Middle of warmup phase

Figure 1 shows the location of the memory pages for the various Linux® guests in the middle of the warmup phase.
Figure 1. STOP and BEGIN of idling guests, location of memory pages during the warmup phase
Test case 2: two bar graphs show the number of pages in XSTOR (left graph) and in real memory (right graph) for each of the eleven guests used in the test. The y-axis is the number of pages, ranging from 0 to 200000. The approximate values are:

Guest                                  Pages in XSTOR   Pages in real memory
(1)  System of interest, WebSphere               2000                 190000
(2)  System of interest, WebSphere               4000                 184000
(3)  System of interest, RDB                     5000                 156000
(4)  System of interest, RDB                     4000                 158000
(5)  Standby system, WebSphere                   9000                 156000
(6)  Standby system, WebSphere                   8000                 158000
(7)  Standby system, RDB                        13000                   1500
(8)  Standby system, RDB                        14000                   1000
(9)  Base load system, WebSphere                 6000                 181000
(10) Base load system, RDB                       4000                 178000
(11) Deployment manager system                   4500                 183000
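
Per-guest page counts like those plotted in the figures can be sampled with the CP INDICATE USER command, whose output includes the guest's resident-page and XSTORE-page counters. A minimal sketch, again using vmcp and hypothetical guest names (querying other users requires a privileged class):

    # Sample the page counters of every guest under test; the output
    # of INDICATE USER includes resident and XSTORE page counts.
    for guest in LNXWAS1 LNXWAS2 LNXDB1 LNXDB2 LNXDM1; do
        vmcp "INDICATE USER $guest"
    done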

Observations

The distribution of the pages is very similar to the case that used the Linux suspend and resume mechanism. The idling Standby WebSphere® guests have an unexpectedly high number of pages in real storage. However, the System of interest WebSphere guests have more pages in real storage than at the beginning of the suspend and resume test.

Conclusions

So far, this is expected to be the same starting situation as in the Linux suspend and resume case. The large number of pages in real storage for the Standby WebSphere guests is due to the active node agents. The slightly different distribution for the System of interest WebSphere guests is not really relevant, but it shows clearly that the paging behavior is not completely deterministic.

Middle of suspend phase

Figure 2 shows the location of the memory pages for the various Linux guests in the middle of the suspend phase.
Figure 2. STOP and BEGIN of idling guests, location of memory pages during the suspend phase
Two bar graphs show the number of pages in XSTOR (left graph) and in real memory (right graph) for each of the eleven guests in the middle of the suspend phase. The y-axis is the number of pages, ranging from 0 to 200000. The approximate values are:

Guest                                  Pages in XSTOR   Pages in real memory
(1)  System of interest, WebSphere               8000                 186000
(2)  System of interest, WebSphere              19000                 178000
(3)  System of interest, RDB                    98000                  68000
(4)  System of interest, RDB                   141000                  21000
(5)  Standby system, WebSphere                   2000                 193000
(6)  Standby system, WebSphere                   1000                 195000
(7)  Standby system, RDB                        28000                  77000
(8)  Standby system, RDB                        28000                  76000
(9)  Base load system, WebSphere                79000                 118000
(10) Base load system, RDB                      11000                 157000
(11) Deployment manager system                   6000                 192000

Observations

Once again, the distribution of the pages is very similar to the case that used the Linux suspend and resume mechanism, but it is not identical. This time, the majority of the pages are taken from only one of the suspended WebSphere guests and one of the RDB guests. Again, one of the base load systems has a significant number of pages in both XSTOR and real storage.

Conclusion

There is a clear preference to take pages away from the suspended guests.

After resume

Figure 3 shows the location of the memory pages for the various Linux guests in the end phase, after resume.
Figure 3. STOP and BEGIN of idling guests, location of memory pages in the end phase after resume
Two bar graphs show the number of pages in XSTOR (left graph) and in real memory (right graph) for each of the eleven guests at the end of the resume phase. The y-axis is the number of pages, ranging from 0 to 200000. The approximate values are:

Guest                                  Pages in XSTOR   Pages in real memory
(1)  System of interest, WebSphere              33000                 164000
(2)  System of interest, WebSphere              29000                 159000
(3)  System of interest, RDB                    12000                 157000
(4)  System of interest, RDB                    12000                 156000
(5)  Standby system, WebSphere                   8000                 183000
(6)  Standby system, WebSphere                   7000                 184000
(7)  Standby system, RDB                        88000                  32000
(8)  Standby system, RDB                       124000                   1000
(9)  Base load system, WebSphere                81000                 110000
(10) Base load system, RDB                      11000                 157000
(11) Deployment manager system                   6000                 190000

Observation

Pages from the now-suspended Standby guests are moved out to XSTOR, and pages from the resumed Systems of interest are moved back into real storage.

Conclusions

The process whereby the pages of the dormant guests are moved to XSTOR and the pages of the active guests are moved into real storage works in a similar manner for both pausing mechanisms. It also seems that z/VM tries to minimize the movement of pages, because in no case are all pages from one guest located in only one storage location. There is always a slightly non-deterministic element, which makes it hard to predict exactly what will happen. This is certainly caused by the complexity of the algorithms used.