Environment setup

This section describes how we set up the environment for the Oracle 10g R2 database on Linux® on IBM® System z®.

Storage server

We configured a DS8000® 2107-932 storage server with:
  • 16 FCP ranks and 256 GB cache
  • physical DDMs: 300 GB, 15,000 RPM
  • firmware level 63.0.106.0
  • 4 Gbit/s FCP channel connectivity (2 channels were used)
Using striped FCP disks
We created three SCSI volumes. For easy handling, we configured each of them with a size of 300 GB. In the standard setup, each of these disks would have been allocated from a single rank, which would have been a significant performance bottleneck. The DS8000 firmware level allowed us to use a new feature, the storage pool striping function. To ensure maximum I/O bandwidth, we created one extent pool over the eight ranks on each of the two internal servers of the DS8000 and striped the volumes across all ranks of this extent pool. The storage server was connected to the System z via two 4 Gbit/s FCP channels.
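As an illustration only, such volumes might be created with the DS8000 DSCLI roughly as follows; the extent pool ID and volume IDs are placeholders, and the exact options can differ by DSCLI release:

# create three 300 GB fixed-block volumes, striped extent by extent
# across all ranks of extent pool P0 ("rotateexts" is the storage
# pool striping extent allocation method)
mkfbvol -extpool P0 -cap 300 -eam rotateexts 1000-1002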

Host

For the tests we used one logical partition (LPAR) from each of these two systems:
  • IBM System z9® Enterprise Class (z9® EC), model 2094-S18 (1.7 GHz)
  • IBM System z10® Enterprise Class (z10 EC), model 2097-E26 (4.4 GHz)

On each system, the LPAR was equipped with 5 CPUs and 20 GiB of central storage (see Table 1). The remaining hardware was not used for this test. z/VM® was set up in this LPAR to provide a virtual environment (guest) in which the database server ran as a Linux guest. The Linux guest itself used only a subset of the resources available to z/VM (see Table 1) to ensure that the virtualization caused no contention.

Table 1. System z LPAR and guest definitions
System           Type            CPUs         Memory
z/VM             System z LPAR   5            20 GiB central storage + 2 GiB expanded storage
Database server  z/VM guest      2 (virtual)  16 GiB (virtual memory)
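As a quick sanity check, the resources actually visible inside the Linux guest can be compared with the definitions in Table 1, for example:

# run inside the Linux guest
grep -c '^processor' /proc/cpuinfo   # expect 2 virtual CPUs
grep MemTotal /proc/meminfo          # expect about 16 GiB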

Software

Table 2. Software levels
Product                    Version/Release
z/VM                       5.3
Red Hat Enterprise Linux   4.5
Oracle database server     10.2.0.2

z/VM

For the z/VM setup, we disabled the queue-I/O assist for the FCP adapters in the guest's user directory entry with the statement:
DEDICATE xxx yyyy NOQIOASSIST
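A minimal sketch of where this statement sits in a guest's directory entry; the user ID, password field, and the virtual and real device numbers below are illustrative placeholders:

USER LNXDB01 XXXXXXXX 16G 16G G
 CPU 00
 CPU 01
 DEDICATE 0600 B001 NOQIOASSIST
 DEDICATE 0601 B101 NOQIOASSIST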

Linux

We increased the kernel limits for semaphores, TCP/IP ports, and open file handles via /etc/sysctl.conf:

kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 1024 65000
fs.file-max = 65536

Increasing the number of TCP/IP ports and the number of file handles is a standard recommendation. The shared memory limits were set high enough to ensure that they did not restrict the size of the Oracle buffer pools.
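The new values can be activated without a reboot and verified afterwards, for example:

# reload /etc/sysctl.conf and verify one of the settings
sysctl -p
cat /proc/sys/kernel/sem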

The file systems were set up with separate disks for the database data files, the database log files, and the import file. This setup preserved the sequential character and direction of the I/O streams from the log and import disks and isolated them from the bidirectional, randomized disk I/O pattern of the disk storing the database data files (see Figure 1).
Figure 1. Disk organization of the import environment
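For illustration, the separation could look like the following /etc/fstab excerpt; the device names and mount points are invented for this example:

/dev/sdb1  /oracle/data    ext3  defaults  0 0  # database data files (random I/O)
/dev/sdc1  /oracle/logs    ext3  defaults  0 0  # database log files (sequential)
/dev/sdd1  /oracle/import  ext3  defaults  0 0  # import file (sequential read)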

Oracle

We configured the Oracle database with the following profile:
###############################################################################
# initloadtest.ora
###############################################################################

db_name = loadtest
db_files = 500
db_file_multiblock_read_count = 8   # blocks read per multiblock I/O
log_checkpoint_interval = 10000     # redo blocks between checkpoints
processes = 512
parallel_max_servers = 32
log_buffer = 32768                  # redo log buffer size in bytes
max_dump_file_size = 10240
undo_management = auto              # automatic undo management
global_names = TRUE
filesystemio_options = setall       # asynchronous and direct I/O
sga_target = 12582900012            # ~12 GiB, enables automatic SGA tuning
undo_retention = 0                  # no minimum undo retention

We used asynchronous and direct disk I/O (filesystemio_options = setall) and specified an SGA target to let the database automatically tune the buffer pool sizes.
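To see how the automatic tuning distributed the SGA, the component sizes can be queried from SQL*Plus, for example:

-- show the configured target and the sizes Oracle chose
SHOW PARAMETER sga_target
SELECT component, current_size FROM v$sga_dynamic_components;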