Before you install the IBM® Db2 pureScale environment, you must ensure that your system meets the following network, hardware, firmware, storage, and software requirements. You can use the db2prereqcheck command to check the software and firmware prerequisites of a specific Db2 version.
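As a sketch, db2prereqcheck can be run from the Db2 installation image directory; the version string below is only an example, so substitute the release you intend to install:

```shell
# Check Db2 pureScale prerequisites for a specific Db2 version
# (-p restricts the check to pureScale requirements; the version
# string 11.5.0.0 is illustrative).
./db2prereqcheck -v 11.5.0.0 -p

# Alternatively, check pureScale prerequisites for the latest Db2
# version described in the bundled prerequisites resource file.
./db2prereqcheck -i -p
```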
Supported virtual environments
You can install the Db2 pureScale environment on the following virtual machine (VM) configurations:
Table 1. Supported VM and operating systems

| Hypervisor | Architecture | Minimum guest OS (Linux®) |
| --- | --- | --- |
| VMware ESXi 5.0 or higher | x64 system that is supported by both the VM and Db2 pureScale | Any Linux distribution that is supported by both the VM and Db2 pureScale |
| VMware vSphere 6.0 | x64 system that is supported by both the VM and Db2 pureScale | Any Linux distribution that is supported by both the VM and Db2 pureScale |
| Red Hat Enterprise Linux (RHEL) 6.2 and higher KVM | x64 system that is supported by both RHEL 6.2 and Db2 pureScale | RHEL 6 and higher |
Note: If you use multiple physical servers in a VMware environment to host the VMs of a Db2 pureScale cluster, you can have only a single VM per physical server, per instance. This is because each raw device mapping (RDM) disk can be assigned to only one virtual machine per physical server. See the notes under Table 2 for details.
Supported storage configurations
When installed on a virtual machine, the storage configurations of the Db2 pureScale environment are limited by the virtual environment.
Table 2. Supported VM storage configurations

| Disk configuration | KVM hypervisor | VMware ESX/ESXi | Tiebreaker and I/O fencing |
| --- | --- | --- | --- |
| Virtual disks¹ | No² | No² | No³ |
| RDM disks in Physical Compatibility Mode⁴ | No | Yes | Yes |
| SAN disks in PCI pass-through mode⁵ | Yes | No | Yes |
Note:
1. Virtual disks do not support SCSI-3 PR commands and cannot be used as tiebreaker disks. Virtual disks can be used to contain shared data.
2. Only supported in non-production environments.
3. I/O fencing requires SCSI-3 PR commands to be enabled, which are not supported on virtual disks.
4. Raw device mapping (RDM) disks are logical unit numbers (LUNs) that can be accessed directly from the VM guest operating system without going through a virtual machine file system (VMFS). RDM disk support is not available in KVM environments. To support tiebreaker disks and SCSI-3 PR I/O fencing, each RDM disk must be assigned to only one virtual machine per physical server.
5. You can assign storage Fibre Channel (FC) adapters to the guest virtual machines by using the PCI device pass-through mode. After you assign storage adapters, you can directly access storage area network (SAN) disks from inside the guest VM. Tiebreaker disks and SCSI-3 PR I/O fencing are supported in this environment.
Network requirements
You must configure a network connection to install the Db2 pureScale environment. You must also keep the maximum transmission unit (MTU) size of the network interfaces at the default value of 1500. For more information about configuring the MTU size on Linux, see How do you change the MTU value on the Linux and Windows operating systems?
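On Linux, the MTU can be checked and reset with the iproute2 tools; the interface name eth0 below is only an example:

```shell
# Show the current MTU of a network interface (eth0 is an example name).
ip link show eth0

# Temporarily reset the MTU to the required default of 1500
# (requires root; to survive reboots, also set the MTU in your
# distribution's persistent network configuration).
ip link set dev eth0 mtu 1500
```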
Table 3. Supported network configuration

| Transport type | KVM | VMware ESX/ESXi |
| --- | --- | --- |
| TCP/IP (sockets)¹ | Yes | Yes |
| RDMA over Ethernet (RoCE) | Yes² | Yes² |
| InfiniBand (IB) | No | No |
Note:
1. If you are using a network card that is slower than 10GE, you must set the DB2_SD_ALLOW_SLOW_NETWORK registry variable to ON.
2. RDMA over Converged Ethernet (RoCE) is supported if the network adapter is assigned to the guest VM by using the PCI device pass-through mode.
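For example, if the cluster interconnect uses a 1GE adapter, the registry variable noted above can be set with the db2set command (run as the instance owner, then restart the instance):

```shell
# Allow Db2 pureScale to run over a network that is slower than 10GE.
db2set DB2_SD_ALLOW_SLOW_NETWORK=ON

# Verify the current value of the registry variable.
db2set DB2_SD_ALLOW_SLOW_NETWORK
```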
Additional installation requirements
If you are installing the Db2 pureScale environment on KVM, you must disable disk caching on virtual disks. Disk caching might cause data corruption if the same disk is used by multiple physical machines. If the disk is used by a single host, enabling disk write caching might result in missing data pages if the server is disconnected before the pages are updated on the physical disks. You can disable disk caching on KVM virtual disks by using the following command:
qemu-kvm -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio
For VMware ESX, there is no disk caching by the host.
Note: When you assign Fibre Channel adapters directly to the guest VMs by using the PCI device pass-through mode, there is no disk caching.
The Db2 pureScale environment is based on the clustering of Db2 members. Therefore, you must configure all disks that are used by a Db2 pureScale cluster (via IBM Spectrum Scale) to allow concurrent read and write disk access between all VMs in the cluster.
- For KVM virtual environments, you can enable concurrent disk access by specifying the
“shareable” option when you configure virtual disks.
- For VMware virtual environments, you can enable concurrent disk access by defining the
multi-writer flag on the virtual disks. For more information, see the VMware documentation Disabling simultaneous write protection
provided by VMFS using the multi-writer flag (http://kb.vmware.com/kb/1034165).
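For KVM guests that are managed through libvirt, both requirements above for shared disks (caching disabled, shareable access) can be sketched in the domain XML; the source device path and target name here are placeholders:

```xml
<!-- Sketch of a libvirt disk definition for a shared Db2 pureScale disk.
     /dev/mapper/shared_lun and vdb are example names; substitute your
     own device path and target. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/mapper/shared_lun'/>
  <target dev='vdb' bus='virtio'/>
  <shareable/>
</disk>
```

The cache='none' attribute corresponds to the qemu-kvm cache=none option shown earlier, and the shareable element enables the concurrent disk access that the cluster requires.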