System setup

This section describes the steps necessary to set up the systems for the WebSphere® Application Server JVM stacking tests.

Basic setup

  • The LPAR size is the same in all test cases.
  • The number of JVMs is the same in all test cases.
  • The z/VM® and Linux® guests do not have any memory constraints. There is no memory overcommitment.
  • Focus is on transactional throughput, as reported by the workload driver.
  • The final results are expressed as throughput and total CPU utilization versus the number of JVMs per guest.

z/VM guest setup

Each z/VM Linux guest was defined on minidisks. The guest IDs used were LNX00001 through LNX00200. Table 1 shows the minidisk sizes used.
Table 1. z/VM guest minidisk sizes

Guest ID             Minidisk address  Minidisk size  Function
LNX00001 - LNX00002  100               60             /boot
                     101               1609           /usr/share
                     102               1669           /usr/local
                     103               3338           /
                     107               5008           /opt/wasprofiles
LNX00003 - LNX00004  100               60             /boot
                     101               1609           /usr/share
                     102               1669           /usr/local
                     103               3338           /
                     107               1500           /opt/wasprofiles
LNX00005 - LNX00200  100               60             /boot
                     101               1609           /usr/share
                     102               200            /usr/local
                     103               3338           /
                     107               1000           /opt/wasprofiles
Shared minidisks     104               5800           WebSphere binaries
                     105               200            WebSphere .nif
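For illustration, a user directory entry for one of these guests might look like the following sketch. The memory size, volume serials, start cylinders, and the owner of the shared minidisks (LNXSHR) are hypothetical; only the minidisk addresses and sizes are taken from Table 1, assuming the sizes are 3390 cylinders:

USER LNX00005 XXXXXXXX 1G 1G G
 LINK LNXSHR 0104 0104 RR
 LINK LNXSHR 0105 0105 RR
 MDISK 0100 3390 0001 60 LXV001 MR
 MDISK 0101 3390 0061 1609 LXV001 MR
 MDISK 0102 3390 1670 200 LXV001 MR
 MDISK 0103 3390 0001 3338 LXV002 MR
 MDISK 0107 3390 0001 1000 LXV003 MR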
The Linux guests were cloned from a master Linux image with the process outlined in the following documents:
WebSphere was installed and set up following the process outlined in:
Then the shared WebSphere installation was copied from the two shared minidisks (104 and 105) to two discontiguous saved segments (DCSSs). The DCSSs were defined as follows:
cp defseg S11WAS70 3331f00-33a1eff sr loadnshr
cp defseg S11WASNF 33a1f00-33d1eff sr loadnshr
On the Linux guests, the DCSSs are mounted read-only with the xip (execute-in-place) option. This output is from the mount command:
/dev/dcssblk0 on /opt/IBM/WebSphere type ext2 (ro,noatime,nodiratime,xip,acl,user_xattr)
/dev/dcssblk1 on /opt/.ibm/.nif type ext2 (ro,noatime,nodiratime,xip,acl,user_xattr) 
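The exact copy procedure used in the tests is not reproduced here; as a rough sketch, assuming the dcssblk driver's sysfs interface and hypothetical mount points /mnt/src (the shared minidisk contents) and /mnt/dcss, populating and mounting the first DCSS looks like this:

echo S11WAS70 > /sys/devices/dcssblk/add          # expose the DCSS as /dev/dcssblk0
echo 0 > /sys/devices/dcssblk/S11WAS70/shared     # exclusive-writable access for the copy
mkfs.ext2 -b 4096 /dev/dcssblk0                   # xip needs ext2 with 4 KiB blocks
mount /dev/dcssblk0 /mnt/dcss
cp -a /mnt/src/. /mnt/dcss/                       # copy the WebSphere installation
umount /mnt/dcss
echo 1 > /sys/devices/dcssblk/S11WAS70/save       # save the contents in the CP spool
mount -t ext2 -o ro,noatime,nodiratime,xip /dev/dcssblk0 /opt/IBM/WebSphere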

WebSphere Application Server setup

WebSphere Application Server, Network Deployment, Version 7.0 was used to create the test environment. Figure 1 illustrates the new server creation.

There is a one-to-one relationship between node and application server profiles. When a profile is federated into a deployment manager cell, that profile becomes a node in the cell; each additional profile federated into the same cell becomes another unique node. The deployment manager administrative console can then be used to create new application servers on these nodes, as Figure 1 shows; a command-line sketch follows the figure.

Figure 1. WebSphere administrative console: New server creation
Screen image of the WebSphere Application Server administrative console page for creating a server. The page shows a four-step wizard at Step 1: Select a node, where the user chooses a node from a drop-down list and types a server name.
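The same can be done from the command line. The following hedged sketch creates an application server profile and federates it into the deployment manager cell; the profile name, guest host name, deployment manager host, and the default SOAP port 8879 are assumptions. The profile is placed under /opt/wasprofiles (see Table 1) because the product directory is a read-only DCSS mount:

WAS_HOME=/opt/IBM/WebSphere               # product install root (read-only DCSS mount)
$WAS_HOME/bin/manageprofiles.sh -create \
    -profileName AppSrv001 \
    -profilePath /opt/wasprofiles/AppSrv001 \
    -templatePath $WAS_HOME/profileTemplates/default \
    -nodeName node001 -hostName lnx00001
# Federate the new profile into the deployment manager cell
/opt/wasprofiles/AppSrv001/bin/addNode.sh dmgrhost 8879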

WebSphere administration scripts were used to create the required 200 nodes within a single deployment manager cell for the single z/VM guest test. For the multiple-guest tests, the number of nodes on each guest was 100, 50, 25, 20, 10, 4, 2, or 1, depending on the number of guests, so that the total number of JVMs stayed constant. Each guest was set up as a unique deployment manager cell, and WebSphere administration scripts were used to create the appropriate number of nodes within each cell.
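Those scripts are not reproduced here, but the core idea can be sketched with wsadmin's AdminTask.createApplicationServer command, looping over the federated nodes. The deployment manager profile path, node names, and server names below are assumptions:

DMGR_BIN=/opt/wasprofiles/Dmgr001/bin   # deployment manager profile (assumed path)
for i in $(seq -w 1 200); do            # node001 ... node200
    $DMGR_BIN/wsadmin.sh -lang jython -conntype SOAP -port 8879 \
        -c "AdminTask.createApplicationServer('node$i', '[-name server$i -templateName default]')" \
        -c "AdminConfig.save()"
done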