Installation prerequisites for the Db2 pureScale environment in a virtual machine (Intel Linux)
Before you install the IBM® Db2 pureScale environment, you must ensure that your system meets the following network, hardware, firmware, storage, and software requirements. You can use the db2prereqcheck command to check the software and firmware prerequisites of a specific Db2 version.
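For example, a minimal sketch of running that check for a pureScale installation, assuming the -p (pureScale) and -v (version) options of db2prereqcheck; the version string shown is only illustrative:

    db2prereqcheck -p -v 11.5.8.0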
Supported virtual environments
| Hypervisor | Architecture | Minimum guest OS (Linux®) |
|---|---|---|
| VMware ESXi 6.0 or higher | x64 system that is supported by both the VM and Db2 pureScale | Any Linux distribution that is supported by both the VM and Db2 pureScale |
| VMware vSphere 6.0 | x64 system that is supported by both the VM and Db2 pureScale | Any Linux distribution that is supported by both the VM and Db2 pureScale |
| Red Hat Enterprise Linux (RHEL) 7.5 and higher KVM | x64 system that is supported by both RHEL 7.5 and Db2 pureScale | RHEL 7.5 and higher |
Supported storage configurations
| Disk configuration | KVM hypervisor | VMware ESX/ESXi | Tiebreaker and I/O fencing |
|---|---|---|---|
| RDM disks in Physical Compatibility Mode¹ | No | Yes | Yes |
| SAN disks in PCI pass-through mode² | Yes | No | Yes |
Note:
¹ Raw device mapping (RDM) disks are logical unit numbers (LUNs) that can be directly accessed from the VM guest operating system without going through a virtual machine file system (VMFS). RDM disk support is not available in KVM environments. To support tie-breaker disk and SCSI-3 PR I/O fencing, each RDM disk must be assigned to only one virtual machine per physical server.
² You can assign storage Fibre Channel (FC) adapters to the guest virtual machines by using the PCI device pass-through mode. After you assign storage adapters, you can directly access storage area network (SAN) disks from inside the guest VM. Tie-breaker disks and SCSI-3 PR I/O fencing are supported in this environment. Virtual disks are not supported.
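As a quick sanity check (a sketch only, not part of the documented procedure; the sg3_utils package and the device path /dev/sdb are assumptions), you can query a candidate tie-breaker disk from inside the guest VM to see whether it reports SCSI-3 persistent reservation capabilities:

    # Report the persistent reservation capabilities of the LUN (run inside the guest VM)
    sg_persist --in --report-capabilities /dev/sdb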
Network requirements
You must configure a network connection to install the Db2 pureScale environment.
You must also keep the maximum transmission unit (MTU) size of the network interfaces at the default value of 1500. For more information about configuring the MTU size on Linux, see How do you change the MTU value on the Linux and Windows operating systems?
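A minimal sketch of verifying and restoring the default MTU with the ip command (the interface name eth0 is an assumption; the change shown takes effect immediately but is not persistent across reboots):

    # Display the current MTU of the interface
    ip link show dev eth0
    # Reset the MTU to the default value of 1500
    ip link set dev eth0 mtu 1500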
| Transport type | KVM | VMware ESX/ESXi |
|---|---|---|
| TCP/IP (sockets)¹ | Yes | Yes |
| RDMA over Converged Ethernet (RoCE) | Yes² | Yes² |
| InfiniBand (IB) | No | No |
¹ If you are using a network card that is slower than 10 Gigabit Ethernet (10GE), you must set the DB2_SD_ALLOW_SLOW_NETWORK registry variable to ON (see the example after these notes).
² RDMA over Converged Ethernet (RoCE) is supported if the network adapter is assigned to the guest VM by using the PCI device pass-through mode.
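For example, a sketch of setting the registry variable with db2set as the instance owner (an instance restart is generally required for the change to take effect):

    db2set DB2_SD_ALLOW_SLOW_NETWORK=ON
    # Confirm the current value
    db2set DB2_SD_ALLOW_SLOW_NETWORK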
Additional installation requirements
- For KVM virtual environments, you can enable concurrent disk access by specifying the "shareable" option when you configure virtual disks (an illustrative libvirt sketch follows this list). For example, the following qemu-kvm invocation attaches a virtual disk with host caching disabled:
    qemu-kvm -drive file=/dev/mapper/ImagesVolumeGroup Guest1,cache=none,if=virtio
- For VMware virtual environments, you can enable concurrent disk access by defining the multi-writer flag on the virtual disks. For more information, see the VMware documentation Disabling simultaneous write protection provided by VMFS using the multi-writer flag (http://kb.vmware.com/kb/1034165).
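The following sketches show where these options typically appear; the device paths, target names, and SCSI controller/unit numbers are illustrative and are not taken from this document.

For KVM managed through libvirt, the "shareable" option corresponds to a <shareable/> element in the disk definition of the domain XML:

    <disk type='block' device='lun'>
      <!-- Multipath device passed through to the guest; the path is hypothetical -->
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/mapper/mpatha'/>
      <target dev='sdb' bus='scsi'/>
      <!-- Allow the disk to be attached to more than one guest concurrently -->
      <shareable/>
    </disk>

For VMware, the multi-writer flag described in the KB article above is added as an advanced configuration parameter in the virtual machine's .vmx file:

    scsi1:0.sharing = "multi-writer"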