More details about Linux memory size and throughput
This topic describes the impact on Linux™ memory size and throughput for individual guests with different configuration files.
Impact of Linux memory size for the individual guests and three different configuration files

Observation
In addition to the WebSphere® Application Server, the database servers are now also very close to the manually sized systems. Even the web servers are only moderately oversized. The combos are further reduced in size.
Conclusion
It seems that configuration 7 is more appropriate for the Web and Application Servers, while the database server is sized better by configuration 8. Considering that the database servers are the only systems in our setup that use a significant amount of page cache for disk I/O, this confirms that treating page cache as free memory is a good approach for such servers. Remembering that configuration 4 already leads to a throughput degradation for combos, the rules evaluated here can be expected to perform even worse for the combos.
Throughput reached for the individual components and three different configuration files

Observation
The triplets achieve a higher throughput when applying these rules. The combos, however, suffer significantly.
Conclusion
It seems that the concept of using one set of rules to manage all servers is limited by the fact that throughput and size cannot be optimized at the same time. The rule sets that use direct page scans for memory plugging and those that use the sum of free memory and page cache (computed as the difference between cache and shared memory, plus free memory) both perform well; the difference between them is which values are used as limits. Smaller systems seem to end up closer to the manually sized configuration than larger systems when less memory is left free.
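The "free memory plus page cache" metric mentioned above can be derived from the counters in /proc/meminfo. A minimal sketch, assuming the reclaimable page cache is approximated as Cached minus Shmem (shared memory is reported inside Cached but cannot simply be dropped); the helper names are illustrative, not part of any CMM tool:

```python
# Sketch: computing "free memory + page cache" from /proc/meminfo counters.

def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key: value kB' lines into a dict of kB values."""
    fields = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            fields[key.strip()] = int(parts[0])
    return fields

def effective_free_kb(meminfo):
    """Free memory plus reclaimable page cache (Cached minus Shmem), in kB."""
    return meminfo["MemFree"] + meminfo["Cached"] - meminfo["Shmem"]

# Example input; on a live system this would be read from /proc/meminfo.
sample = """MemTotal:        1048576 kB
MemFree:          102400 kB
Cached:           204800 kB
Shmem:             51200 kB"""

print(effective_free_kb(parse_meminfo(sample)))  # 102400 + 204800 - 51200 = 256000
```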
- A generic approach (our suggested default) that provides a good fit for all of our tested workloads and server types. It yields a slightly lower throughput and slightly oversized systems (which leaves some room for optimization by z/VM®):
  - Plug memory when direct page scans exceed 20 pages/sec
  - CMM_DEC = total memory / 40
- A server-type-dependent approach:
Table 1. Recommended rules set depending on server type

| Server type | Recommended rules | CMM_INC | Unplug, when |
| --- | --- | --- | --- |
| 1. Web server | configuration 7 | free memory / 40 | (free mem + page cache) > 5% |
| 2. WebSphere Application Server | configuration 7 | free memory / 40 | (free mem + page cache) > 5% |
| 3. Database server | configuration 8 | (free mem + page cache) / 40 | (free mem + page cache) > 5% |
| 4. Combo | configuration 4 | free memory / 40 | (free mem + page cache) > 10% |
- The Web servers, which are a front end for the Application Servers, are very small systems. In this case it may be appropriate to use even smaller unplugging conditions, because these systems just transfer the requests to the Application Server. A stand-alone Web server that serves a large amount of data from its file system is probably better treated in the same way as a database server.
- The alternative of manual sizing needs to be done for each system individually and fits really well only for one level of workload. However, it is a valuable option when the additional gain in performance is necessary, especially for servers with very constant resource requirements.
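The server-type-dependent rules in Table 1 can be expressed as a small decision function. This is a hypothetical sketch only (the names `ServerRule` and `cmm_step` are illustrative and not part of any CMM tooling); the thresholds and CMM_INC formulas are taken from the table, and "unplug" here means returning memory to the hypervisor in steps of the CMM_INC size:

```python
# Hypothetical sketch of the server-type-dependent rules from Table 1.

from dataclasses import dataclass

@dataclass
class ServerRule:
    use_page_cache_for_inc: bool  # True: CMM_INC = (free + cache)/40, else free/40
    unplug_threshold: float       # unplug when free + cache exceeds this fraction of total

RULES = {
    "web":       ServerRule(use_page_cache_for_inc=False, unplug_threshold=0.05),
    "websphere": ServerRule(use_page_cache_for_inc=False, unplug_threshold=0.05),
    "database":  ServerRule(use_page_cache_for_inc=True,  unplug_threshold=0.05),
    "combo":     ServerRule(use_page_cache_for_inc=False, unplug_threshold=0.10),
}

def cmm_step(server_type, total_kb, free_kb, cache_kb):
    """Return ('unplug'|'hold', step size in kB) for one evaluation cycle."""
    rule = RULES[server_type]
    free_plus_cache = free_kb + cache_kb
    if free_plus_cache > rule.unplug_threshold * total_kb:
        # CMM_INC: how much memory to give back per step
        base = free_plus_cache if rule.use_page_cache_for_inc else free_kb
        return ("unplug", base // 40)
    return ("hold", 0)

print(cmm_step("database", 1048576, 40960, 204800))  # ('unplug', 6144)
```

A real daemon would re-evaluate this condition periodically and would also need the plugging side (the page-scan-based rules described above); the sketch only covers the unplug decision.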