Hardware equipment and software environment for the z/VM large memory tests
To perform our large memory tests, we created a customer-like environment. We configured the hardware and software for both the client and the server.
Host hardware
One LPAR on a 16-way System z9®, type 2094-S18, equipped with:
System/Guest | Central Storage | Expanded Storage | Dedicated CPUs |
---|---|---|---|
z/VM® LPAR | 80 GB | 4 GB | 10 |
- 1 OSA Express 2 gigabit Ethernet card (OSA code level 0805) with two ports
- Each of the five guests required 1 Ethernet card
- 8 FICON® Express cards (each card had 2 ports)
- Each storage server was connected via 8 FICON paths
Network setup
The z/VM LPAR was connected to the clients via two fiber Gigabit Ethernet interfaces.
Storage server setup
- IBM 3390 disk models 3 and 9
- Physical DDMs running at 15,000 RPM
- Linux® and database disks:
  - The Linux SLES10 GA system was installed on six 3390 mod-3s, and the Linux SLES10 SP1 system was installed on three 3390 mod-9s.
  - The database for each guest was defined on five 3390 mod-9s; each set of five database disks was defined on the same rank. For the ten guests, a total of ten ranks were used for the database disks.
- z/VM paging space:
  - The second storage server was used for the disks needed for the z/VM system and paging space.
z/VM guest setup
The z/VM recommendation for the paging setup is to use no more than 50% of the DASD paging space. Following this recommendation, the setup needed a minimum of 152 GB of DASD paging space: (160 GB of total guest memory - 84 GB of central plus expanded storage) * 2 = 152 GB.
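The sizing rule can be expressed as a small calculation. The following sketch only illustrates the arithmetic above; the variable names are ours and are not part of the test setup.

```python
# Sizing rule: define at least twice the amount of guest memory that
# cannot be backed by the LPAR's central plus expanded storage, so that
# the DASD paging space stays at most 50% utilized.

guests = 10
guest_memory_gb = 16           # memory defined for each guest
central_storage_gb = 80        # LPAR central storage
expanded_storage_gb = 4        # LPAR expanded storage

total_guest_memory_gb = guests * guest_memory_gb            # 160 GB
lpar_memory_gb = central_storage_gb + expanded_storage_gb   # 84 GB

# Guest memory that can only be backed by paging, doubled for <= 50% use
min_paging_space_gb = (total_guest_memory_gb - lpar_memory_gb) * 2

print(min_paging_space_gb)     # 152
```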
A total of 37,256,192 pages was allocated for z/VM paging space.
The paging space defined was 1.66 times the number of pages defined for central and expanded storage.
The paging space was distributed across 31 3390 mod-3s and ten 3390 mod-9s, as shown in Table 2, to provide high I/O bandwidth for the paging I/O.
Table 2. Distribution of z/VM paging space across the two Enterprise Storage Server® units

Rank (Enterprise Storage Server 1, mod 3s) | Number of Disks | Rank (Enterprise Storage Server 2, mod 9s) | Number of Disks |
---|---|---|---|
1 | 5 | 1 | 1 |
2 | 6 | 2 | 1 |
3 | 6 | 3 | 1 |
4 | 6 | 4 | 1 |
5 | 6 | 5 | 1 |
6 | 2 | 6 | 1 |
 | | 7 | 1 |
 | | 8 | 1 |
 | | 9 | 1 |
 | | 10 | 1 |
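For context, paging volumes like these are typically formatted and allocated as PAGE space (for example with the CPFMTXA utility) and then listed as CP-owned volumes in the z/VM SYSTEM CONFIG file. The fragment below is only a sketch; the slot numbers and volume labels are hypothetical and not taken from this setup.

```
/* Hypothetical CP-owned volume entries for three paging volumes */
CP_Owned  Slot  10  PAG001
CP_Owned  Slot  11  PAG002
CP_Owned  Slot  12  PAG003
```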
Table 3 shows the z/VM guest configuration we used for our tests.
System/Guest | Memory | Number of Virtual CPUs |
---|---|---|
Database server guests 1-10 | 16 GB | 3 |
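For illustration, a guest of this size could be defined with a z/VM user directory entry along the lines of the sketch below. The user ID, password placeholder, device numbers, and volume label are hypothetical examples, not the definitions used in these tests.

```
* Hypothetical directory entry for one 16 GB database server guest
* with 3 virtual CPUs
USER LNXDB01 XXXXXXXX 16G 16G G
  CPU 00 BASE
  CPU 01
  CPU 02
  MACHINE ESA
  IPL 0201
* Dedicated OSA (QDIO) device triplet for the guest's Ethernet connection
  DEDICATE 0600 0600
  DEDICATE 0601 0601
  DEDICATE 0602 0602
* Minidisk holding the Linux root file system on a 3390 mod-3 volume
  MDISK 0201 3390 0001 3338 LXROOT MR
```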
Client hardware
Between one and ten x330 PCs were used, each with two Intel 1.26 GHz processors.
The workload driver ran on each client and only one client was attached to each guest database server.
Software setup
Product | Version/Level |
---|---|
Red Hat Enterprise Linux | RHEL 4 ES (clients) |
SUSE Linux Enterprise Server | SLES10 GA (kernel level 2.6.16.21-0.8); SLES10 SP1 (kernel level 2.6.16.46-0.12) |
z/VM | z/VM Version 5 Release 2.0, service level 0501 (64-bit); z/VM Version 5 Release 3.0, service level 0701 (64-bit) |
- To enable CMMA and VMRM-CMM, the following z/VM APAR fixes are required: VM64297, VM64253, VM63968, VM64225, VM64226, and VM64228. SLES10 SP1 needs the patch for bugzilla bug 38500, which is available in kernel-default-2.6.16.53-0.16.s390x.rpm from the Novell support center.
- Use of CMMA requires SLES10 SP1 or higher.
- With SLES10 SP1, we also needed to set QDIOASSIST OFF for each guest virtual machine. When a fix is available for z/VM APAR VM64287, QDIOASSIST ON should function correctly.