Summary for the z/VM large memory tests

After performing performance tests on z/VM® Linux® guests running a database server, comparing z/VM 5.2 with z/VM 5.3, and running z/VM 5.3 with and without the VMRM-CMM and CMMA features, we compiled a summary of our results and recommendations.

An OLTP workload using asynchronous disk I/O was used to drive the guests. The number of guests was scaled to create memory overcommitment on z/VM. Memory overcommitment is expected to degrade performance, with higher overcommitment having a bigger impact.
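The overcommitment figures quoted below can be read as the percentage by which the guests' total virtual memory exceeds the real memory available to z/VM. A minimal sketch, with illustrative sizes (the report does not give the exact per-guest virtual memory or real memory configuration):

```python
def overcommitment_percent(guests, virtual_mem_per_guest_gb, real_mem_gb):
    """Percent by which total guest virtual memory exceeds real memory."""
    total_virtual = guests * virtual_mem_per_guest_gb
    return (total_virtual / real_mem_gb - 1) * 100

# Hypothetical sizes: ten guests, each defined with 2 GB of virtual
# memory, on a z/VM system with 10 GB of real memory -> total virtual
# memory is twice the real memory, i.e. 100% overcommitment.
print(overcommitment_percent(10, 2, 10))  # 100.0

# Five guests with the same sizes fit exactly into real memory:
print(overcommitment_percent(5, 2, 10))   # 0.0
```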

The following are our summary results:
  • As the guests were scaled from five to ten, z/VM 5.3 achieved much higher throughput than z/VM 5.2. At ten guests, with a planned memory overcommitment of 100%, throughput on z/VM 5.3 was about 90% higher than on z/VM 5.2.
  • VMRM-CMM and CMMA were enabled on z/VM 5.3. Note that several APAR fixes and Bugzilla fixes had to be applied to z/VM 5.3 and Linux. (See Software setup for details.) Both VMRM-CMM and CMMA improved the throughput results of the ten-guest runs with 100% memory overcommitment.
    • CMMA showed an 8% improvement in throughput over the results for ten guests on z/VM 5.3, along with a significant reduction in paging activity.
    • VMRM-CMM showed a 50% improvement in throughput over the results for ten guests on z/VM 5.3.
    • With VMRM-CMM, the throughput result for the ten-guest scenario, which uses twice as much virtual memory as is physically available, was only 13% lower than the result with five guests and no memory overcommitment. This very good result is expected to be specific to this workload and cannot be generalized without further tests with other workloads.
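The percentages above can be chained against a common baseline to see how the scenarios relate. A sketch using only the percentages quoted in this summary; the baseline value is an arbitrary unit, not a measured figure:

```python
# Hypothetical baseline: five guests, no memory overcommitment.
baseline_5_guests = 100.0

# VMRM-CMM at ten guests was 13% below the five-guest baseline.
ten_guests_vmrm = baseline_5_guests * (1 - 0.13)   # 87.0

# VMRM-CMM was 50% above the plain ten-guest z/VM 5.3 run,
# so the plain run sits at 87.0 / 1.50.
ten_guests_plain = ten_guests_vmrm / 1.50          # 58.0

# CMMA was 8% above the plain ten-guest z/VM 5.3 run.
ten_guests_cmma = ten_guests_plain * 1.08          # 62.64

print(round(ten_guests_plain, 2), round(ten_guests_cmma, 2))
```

This makes the ordering explicit: plain z/VM 5.3 at ten guests is the lowest of the three, CMMA recovers part of the gap, and VMRM-CMM comes within 13% of the non-overcommitted five-guest result.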