Test methodology and scenarios
This section gives an overview of how the WebSphere® Application Server instances are configured to compare vertical JVM stacking with horizontal JVM stacking. The test scenarios are also described.
- The WebSphere binaries are loaded from a shared DCSS mounted with the -xip option.
- Alternatively, WebSphere binaries are loaded from a shared minidisk.
- The physical resources, such as processors, memory, and network cards, are kept constant throughout the test.
- The number of JVMs in the LPAR is kept constant throughout the test. This number represents the total workload that the customer needs to run.
- The workload configuration per JVM is kept constant, which means that the load level created by the workload driver is the same in all cases.
- These parameters are varied:
- The number of guests
- The number of virtual CPUs (VCPUs) per guest, which varies the total number of virtual CPUs in use. The number of virtual CPUs per guest follows the rule that no guest has more virtual CPUs than the z/VM® system has physical CPUs.
- The distribution of the JVMs among the guests.
- The WebSphere parameters were adjusted so that no swap space is used in any guest configuration.
- The following fields were set in all the WebSphere Application Servers:
  WebSphere configuration:
  -----------------------
  Enforce Java2 Security:     false
  Servers:                    server1
  EJB/ORB ----------------------------------------
  NoLocalCopies:              true
  Web --------------------------------------------
  Min WebContainer Pool Size: 20
  Max WebContainer Pool Size: 20
  JVM --------------------------------------------
  Min JVM Heap Size:          700
  Max JVM Heap Size:          700
  Verbose GC:                 true
  Generic JVM Arguments:
  Logging ----------------------------------------
  System Log Rollover Type:   NONE
  Trace Specification:        *=all=disabled
  Rollover Size:              100
  Max Backup Files:           10
  Misc -------------------------------------------
  Enable PMI Service:         false
  Uninstall default apps:     true
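Settings such as the pinned 700 MB heap and verbose GC can be applied through the wsadmin scripting client. The following Jython fragment is a sketch only; the server name server1 comes from the listing above, and the attribute names should be verified against the WebSphere scripting documentation for the release in use:

```
# wsadmin Jython sketch: pin the JVM heap at 700 MB and enable verbose GC.
# Run with: wsadmin.sh -lang jython -f set_jvm.py
# AdminConfig is provided by the wsadmin environment, not a standalone module.
server = AdminConfig.getid('/Server:server1/')
jvm = AdminConfig.list('JavaVirtualMachine', server)
AdminConfig.modify(jvm, [['initialHeapSize', '700'],
                         ['maximumHeapSize', '700'],
                         ['verboseModeGarbageCollection', 'true']])
AdminConfig.save()
```

Setting the minimum and maximum heap to the same value avoids heap resizing during the measurement runs.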
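The DCSS-based sharing of the WebSphere binaries mentioned in the first bullet can be set up on each Linux guest roughly as follows. This is a sketch: the DCSS name WASDCSS and the mount point are placeholders, and the file system must have been prepared in the DCSS beforehand:

```shell
# Load the DCSS block device driver (Linux on IBM Z guest)
modprobe dcssblk
# Make the shared DCSS visible as a block device; "WASDCSS" is a placeholder name
echo WASDCSS > /sys/devices/dcssblk/add
# Mount read-only with execute-in-place (-o xip) so pages are used
# directly from the shared DCSS instead of being copied per guest
mount -t ext2 -o ro,xip /dev/dcssblk0 /opt/IBM/WebSphere
```

Because the DCSS is shared read-only across guests, the WebSphere binaries occupy real memory only once regardless of the number of guests.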
Test scenarios
- Test scenario 1: Guest scaling
- Scale the number of guests: 2, 4, 10, 20, 50, 100, and 200, and distribute the JVMs as described in Table 1.
Table 1. Test scenario 1 - Guest scaling: Test case configurations

| JVMs per guest | Number of guests | Virtual CPUs per guest | Total virtual CPUs | Virtual:real CPU ratio | JVMs per virtual CPU | Guest memory (GB) | Total virtual memory (GB) | Comments |
|---|---|---|---|---|---|---|---|---|
| 200 | 1 | 24 | 24 | 1.0:1 | 8.3 | 200 | 200 | |
| 100 | 2 | 12 | 24 | 1.0:1 | 8.3 | 100 | 200 | |
| 50 | 4 | 6 | 24 | 1.0:1 | 8.3 | 50 | 200 | |
| 20 | 10 | 3 | 30 | 1.3:1 | 6.7 | 20 | 200 | CPU over commitment |
| 10 | 20 | 2 | 40 | 1.7:1 | 5.0 | 10 | 200 | |
| 4 | 50 | 1 | 50 | 2.1:1 | 4.0 | 4 | 200 | Uniprocessor and CPU over commitment |
| 2 | 100 | 1 | 100 | 4.2:1 | 2.0 | 2 | 200 | |
| 1 | 200 | 1 | 200 | 8.3:1 | 1.0 | 1 | 200 | |

The results of this test can be found in Test scenario 1: Guest scaling.
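The derived columns of Table 1 all follow from the fixed totals: 200 JVMs in the LPAR, 24 real CPUs, and 1 GB of guest memory per JVM. The following sketch recomputes them for a given guest count and virtual CPU assignment (ratios are rounded to one decimal, so an individual row may differ in the last digit from the table's rounding):

```python
# Recompute the derived columns of Table 1 from the fixed totals:
# 200 JVMs in the LPAR, 24 real CPUs, 1 GB of guest memory per JVM.
TOTAL_JVMS = 200
REAL_CPUS = 24

def derive(guests, vcpus_per_guest):
    total_vcpus = guests * vcpus_per_guest
    return {
        "jvms_per_guest": TOTAL_JVMS // guests,
        "total_vcpus": total_vcpus,
        "virtual_to_real": round(total_vcpus / REAL_CPUS, 1),
        "jvms_per_vcpu": round(TOTAL_JVMS / total_vcpus, 1),
        "guest_memory_gb": TOTAL_JVMS // guests,  # 1 GB per JVM
    }

# The 20-guest row: 10 JVMs per guest, 40 virtual CPUs, 1.7:1 over commitment
row = derive(guests=20, vcpus_per_guest=2)
```

Any configuration whose total virtual CPUs exceed 24 is CPU over committed, which is why over commitment begins with the 10-guest row.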
- Test scenario 2: Varying the number of virtual CPUs for 20 guests
- A CPU scaling was done for the 20-guest scenario with 10 JVMs per guest.
The results of this test can be found in Test scenario 2: Varying the number of virtual CPUs for 20 guests.
- Test scenario 3: Varying the number of virtual CPUs for 200 guests
- A CPU scaling was also done for the 200-guest scenario with one JVM per guest.
The results of this test can be found in Test scenario 3: Varying the number of virtual CPUs for 200 guests.
- Test scenario 4: Virtual CPU scaling and WebSphere threads
- The number of virtual CPUs was set to 1, 2, 3, and 24.
The results of this test can be found in Test scenario 4: Virtual CPU scaling and WebSphere threads.
- Setup test 1: Large guests
- Define a single guest with 200 WebSphere Application Server nodes.
The results of this test can be found in Setup test 1: Large guests.
- Setup test 2: Using a shared minidisk for WebSphere binaries with and without MDC
- Replace the DCSS containing the WAS installation tree with a shared read-only minidisk, enabled for minidisk cache (MDC).
- Repeat the test without minidisk cache.
The results of this test can be found in Setup test 2: Using a shared minidisk for WebSphere binaries with and without MDC.
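On the z/VM side, MDC can be disabled for an individual minidisk in the user directory. The fragment below is a sketch only: the device number, geometry, and volume serial are placeholders, and the exact statements should be checked against the z/VM CP Planning and Administration documentation:

```
* Shared read-only minidisk holding the WebSphere binaries (placeholder values)
MDISK 0200 3390 0001 3338 WASVOL RR
* MDC is used by default for the run with minidisk cache;
* for the repeat run without it, disable MDC for this minidisk:
MINIOPT NOMDC
```

Disabling MDC for only this minidisk leaves caching behavior for all other devices unchanged, so the two runs differ only in how the WebSphere binaries are cached.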