Installation prerequisites for Db2 pureScale Feature (Intel Linux)

This document applies only to Linux® distributions running on Intel-based hardware. Before you install IBM® Db2 pureScale Feature, you must ensure that your system meets the installation prerequisites.

For the specific supported versions of each Linux distribution, refer to the web page listed in the reference.

Ensure that you have created your Db2 pureScale Feature installation plan. The installation plan helps ensure that your system meets the prerequisites and that you perform the pre-installation tasks. The following requirements are described in detail: software prerequisites (including the operating system, IBM Spectrum Scale, and Tivoli® SA MP), storage hardware requirements, network prerequisites, and hardware and firmware prerequisites.

Software prerequisites

In Db2 v11.1, the Db2 pureScale Feature supports Linux virtual machines.

The libraries and additional packages that are listed for each Linux distribution in the following table are required on the cluster caching facilities and members. Before you install Db2 pureScale Feature or update to the latest fix pack, you must update the hosts with the required software.
Table 1. Linux software requirements
Linux distribution | Required packages | OpenFabrics Enterprise Distribution (OFED) package

Red Hat Enterprise Linux (RHEL) 6.7

Red Hat Enterprise Linux (RHEL) 6.8⁴

Red Hat Enterprise Linux (RHEL) 6.9⁴

Red Hat Enterprise Linux (RHEL) 6.10

Red Hat Enterprise Linux (RHEL) 7.2

Red Hat Enterprise Linux (RHEL) 7.3

Red Hat Enterprise Linux (RHEL) 7.4⁴

libibcm
libibverbs
librdmacm
rdma
dapl
ibacm
ibsim (required on RHEL 6.7 only)
ibutils
libcxgb3
libibmad
libipathverbs
libmlx4
libmlx5
libmthca
libnes
rds-tools (required on RHEL 6.7 only)
libstdc++ (both x86_64 and i686)
glibc (both x86_64 and i686)
gcc-c++
gcc
kernel
kernel-devel
kernel-headers
kernel-firmware
ntp (or chrony for Db2 versions after 11.1.3.3 on RHEL 7 only)
ntpdate
sg3_utils
sg3_utils-libs
binutils
binutils-devel

m4
openssh
cpp
ksh
libgcc
file
libgomp
make
patch

perl-Sys-Syslog

For an InfiniBand or RoCE network type, run a group installation of the "InfiniBand Support" package.

Red Hat Enterprise Linux (RHEL) 7.5⁴
libibcm
libibverbs
librdmacm
rdma
dapl
ibacm
ibutils
libstdc++ (both x86_64 and i686)
glibc (both x86_64 and i686)
gcc-c++
gcc
kernel
kernel-devel
kernel-headers
linux-firmware
ntp (or chrony for Db2 versions after 11.1.3.3 on RHEL 7 only)
ntpdate
sg3_utils
sg3_utils-libs
binutils
binutils-devel
m4
openssh
cpp
ksh
libgcc
file
libgomp
make
patch
perl-Sys-Syslog

For an InfiniBand or RoCE network type, run a group installation of the "InfiniBand Support" package.

SUSE Linux Enterprise Server (SLES) 11 Service Pack (SP) 4
libstdc++ (both 32-bit and 64-bit libraries)
glibc (both 32-bit and 64-bit libraries)
cpp
gcc
gcc-c++
kernel-source
binutils

m4
OpenSSH
ntp
ksh

You must install the OFED packages from the maintenance repository, along with the additional packages that OFED depends on. For more information about installing OFED on SLES 11, see Configuring the network settings of hosts for a Db2 pureScale environment on an InfiniBand network (Linux).

SUSE Linux Enterprise Server (SLES) 12 SP1

SUSE Linux Enterprise Server (SLES) 12 SP2

SUSE Linux Enterprise Server (SLES) 12 SP3


libibcm
libibverbs
librdmacm1-1.0.18.1-1.14.x86_64
rdma
dapl
ibacm
ibsim
ibutils
libcxgb3-rdmav2-1.3.1-6.2.x86_64
libcxgb3-rdmav2-32bit-1.3.1-6.2.x86_64
libibmad
libipathverbs
libmlx4-rdmav2-32bit-1.0.5-11.2.x86_64
libmlx4-rdmav2-1.0.5-11.2.x86_64
libmlx5-rdmav2-1.0.1-7.2.x86_64
libmlx5-rdmav2-32bit-1.0.1-7.2.x86_64
libmthca-rdmav2-32bit-1.0.6-5.2.x86_64
libmthca-rdmav2-1.0.6-5.2.x86_64
libnes-rdmav2-1.1.4-5.2.x86_64
libstdc++*
glibc*
gcc-c++
gcc
kernel
kernel-devel
kernel-firmware
ntp (or chrony for Db2 versions after 11.1.3.3)
sg3_utils
binutils
OpenSSH
cpp
ksh
ksh-debugsource-93vu-12.1.x86_64
ksh-devel-93vu-12.1.x86_64
ksh-debuginfo-93vu-12.1.x86_64
libgcc
file
libgomp
make
patch
libdat2-2
dapl-utils
infiniband-diags-1.6.4-4.7.x86_64
m4
The OpenFabrics Enterprise Distribution (OFED) package is already bundled within the RDMA package in SLES 12 SP1.
  • As of Db2 Version 11.1.4.4, Red Hat Enterprise Linux (RHEL) versions 6.10 and 7.5 are supported.
  • As of Db2 Version 11.1.4.4 with APAR IT28804, SUSE Linux Enterprise Server 12 SP3 is supported.
  • In SLES 12 SP1, some ksh versions encounter unexpected behavior. Follow the instructions at the following link to install the 93vu version of ksh: https://www.suse.com/support/update/announcement/2015/suse-ru-20152360-1.html
  • On RHEL 7.3 or higher, if a ConnectX-3 card is used, the card must run firmware 2.42.5000 or higher.
  • On SLES 12 SP2 or higher, if a ConnectX-3 card is used, the card must run firmware 2.40.7004 or higher.
    Note: SUSE Linux Enterprise Server 12 SP2 is supported by Db2 V11.1 Mod Pack 4 and Fix Pack 4 (V11.1.4.4) or later.
  • Supported in Db2 Release 11.1.3.3 and later fix packs.
Note: Fibre Channel adapters and 10 GE adapters must be made available to the virtual machines through PCI passthrough. For instructions on setting up PCI passthrough of devices for guest VMs, see the Red Hat website: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/chap-Virtualization_Host_Configuration_and_Guest_Installation_Guide-PCI_Device_Config.html
Note: The required levels of IBM Tivoli System Automation for Multiplatforms (Tivoli SA MP) and IBM Spectrum Scale, along with any necessary fixes for a particular Db2 release and fix pack, are included in the Db2 images for that release and fix pack. They must be installed and updated through the standard Db2 installation and upgrade procedures. Do not download and install individual fixes manually without guidance from Db2 service.
Note: Supported in Db2 Release 11.1.3.3 and later fix packs.
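
To help verify the software prerequisites, the following minimal sketch (not an IBM-provided tool) checks whether a list of RPM packages from Table 1 is installed by querying rpm on the host. The REQUIRED list is an illustrative subset of the RHEL 7 column in Table 1; adjust it to match your distribution and Db2 level, and run the check on every member and CF host.

```python
#!/usr/bin/env python3
"""Minimal sketch: verify that RPM packages listed in Table 1 are installed."""
import subprocess

# Illustrative subset of the RHEL 7 list in Table 1; extend for your level.
REQUIRED = [
    "libibverbs", "librdmacm", "rdma", "dapl", "ibacm",
    "gcc", "gcc-c++", "kernel-devel", "kernel-headers",
    "ntp", "sg3_utils", "binutils", "m4", "openssh",
    "cpp", "ksh", "libgcc", "file", "libgomp", "make", "patch",
    "perl-Sys-Syslog",
]

def main() -> int:
    missing = []
    for pkg in REQUIRED:
        # 'rpm -q' exits with a non-zero return code when the package is absent.
        result = subprocess.run(
            ["rpm", "-q", pkg],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode != 0:
            missing.append(pkg)
    if missing:
        print("Missing packages: " + " ".join(missing))
        return 1
    print("All listed packages are installed.")
    return 0

if __name__ == "__main__":
    raise SystemExit(main())
```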

Storage hardware requirements

Db2 pureScale Feature supports all storage area network (SAN) and directly attached shared block storage; however, for more information about Db2 cluster services support, see the “Shared storage support for Db2 pureScale environments” topic. The following storage hardware requirements must be met for Db2 pureScale Feature support.
Table 2. Minimum and recommended free disk space per host
Directory or path | Recommended free disk space | Minimum required free disk space
Disk to extract installation | 3 GB | 3 GB
Installation path | 6 GB | 6 GB
/tmp directory | 5 GB | 2 GB
/var directory | 5 GB | 2 GB
/usr directory | 2 GB | 512 MB
Instance home directory | 5 GB | 1.5 GB¹
root home directory | 300 MB | 200 MB
  1. The disk space that is required for the instance home directory is calculated at run time and varies. Approximately 1 to 1.5 GB is normally required.
The following shared disk space must be free for each file system:
  • Instance shared directory: 10 GB¹
  • Data: dependent on your specific application needs
  • Logs: dependent on the expected number of transactions and the application's logging requirements
A fourth shared disk is required to be configured as the Db2 cluster services tiebreaker disk.
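
The following minimal sketch compares the free space on several local paths against the minimums in Table 2. It is illustrative only: the installation path and instance home directory shown are assumptions, so adjust the MINIMUMS mapping to your actual paths before running it on each host.

```python
#!/usr/bin/env python3
"""Minimal sketch: compare free space on key paths against the Table 2 minimums."""
import shutil

GB = 1024 ** 3

# Path to check -> minimum required free space in bytes (from Table 2).
MINIMUMS = {
    "/tmp": 2 * GB,
    "/var": 2 * GB,
    "/usr": 512 * 1024 ** 2,
    "/opt/ibm/db2": 6 * GB,           # assumed installation path
    "/home/db2sdin1": int(1.5 * GB),  # assumed instance home directory
}

for path, minimum in MINIMUMS.items():
    try:
        free = shutil.disk_usage(path).free
    except FileNotFoundError:
        print(f"{path}: path not found, skipping")
        continue
    status = "OK" if free >= minimum else "TOO LOW"
    print(f"{path}: {free / GB:.1f} GB free (minimum {minimum / GB:.1f} GB) {status}")
```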

Network prerequisites

On a TCP/IP protocol over Ethernet (TCP/IP) network, a Db2 pureScale environment requires only one high-speed network for the Db2 cluster interconnect. Running your Db2 pureScale environment on a TCP/IP network can provide a faster setup for testing the technology. However, for the most demanding write-intensive data sharing workloads, an RDMA protocol over Converged Ethernet (RoCE) network can offer better performance.

Db2 pureScale environments that use the RDMA protocol over an InfiniBand (IB) or RoCE network require two networks: one (public) Ethernet network and one (private) high-speed communication network for communication between members and CFs. The high-speed communication network must be an IB network, a RoCE network, or a TCP/IP network; a mixture of these high-speed communication network types is not supported.

Note: Although a single Ethernet adapter is required for a Db2 pureScale environment, you must set up Ethernet bonding for the network if you have two Ethernet adapters. Ethernet bonding (also known as channel bonding) is a setup in which two or more network interfaces are combined. Ethernet bonding provides redundancy and better resilience if an Ethernet network adapter fails. Refer to your Ethernet adapter documentation for instructions on configuring Ethernet bonding. Bonding the high-speed communication network is not supported.
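
If you configure Ethernet bonding, the following minimal sketch (an illustrative helper, not an IBM tool) lists any bonded interfaces and their slave interfaces by reading the Linux bonding driver's /proc/net/bonding files; it reports nothing beyond a notice if bonding is not configured.

```python
#!/usr/bin/env python3
"""Minimal sketch: report bonded Ethernet interfaces and their slave interfaces."""
import os

BONDING_DIR = "/proc/net/bonding"

if not os.path.isdir(BONDING_DIR):
    print("No bonded interfaces found (bonding driver not loaded).")
else:
    for bond in sorted(os.listdir(BONDING_DIR)):
        with open(os.path.join(BONDING_DIR, bond)) as f:
            text = f.read()
        # The bonding driver lists each member as a "Slave Interface:" line.
        slaves = [line.split(":", 1)[1].strip()
                  for line in text.splitlines()
                  if line.startswith("Slave Interface:")]
        print(f"{bond}: slave interfaces {slaves or 'none'}")
```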

You must also keep the maximum transmission unit (MTU) size of the network interfaces at the default value of 1500. For more information about configuring the MTU size on Linux, see How do you change the MTU value on the Linux and Windows operating systems?
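
To confirm that the MTU requirement is met, this minimal sketch reads each interface's MTU from sysfs and flags any value other than the default of 1500. It is illustrative only; run it on every member and CF host.

```python
#!/usr/bin/env python3
"""Minimal sketch: report interfaces whose MTU is not the default 1500."""
import os

SYS_NET = "/sys/class/net"
EXPECTED_MTU = 1500

for iface in sorted(os.listdir(SYS_NET)):
    if iface == "lo":  # the loopback MTU is not relevant here
        continue
    try:
        with open(os.path.join(SYS_NET, iface, "mtu")) as f:
            mtu = int(f.read().strip())
    except OSError:
        continue
    flag = "" if mtu == EXPECTED_MTU else "  <-- not at the default of 1500"
    print(f"{iface}: MTU {mtu}{flag}")
```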

The rest of this network prerequisites section applies to using RDMA protocol.

Table 3. High-speed communication adapter requirements for rack-mounted servers
Communication adapter type | Switch | IBM Validated Switch | Cabling
InfiniBand (IB) | QDR IB | Mellanox part number MIS5030Q-1SFC; Mellanox 6036SX (IBM part number: 0724016 or 0724022) | QSFP cables
10-Gigabit Ethernet (10GE) | 10GE | Blade Network Technologies RackSwitch G8124; Cisco Nexus 5596 Unified Ports Switch | Small Form-factor Pluggable Plus (SFP+) cables
40-Gigabit Ethernet (40GE) | 40GE | Lenovo RackSwitch NE10032 | QSFP28
Enhanced data rate (EDR) | EDR | Mellanox SB8700 | QSFP28
  • Db2 pureScale environments with Linux systems and InfiniBand communication adapters require FabricIT EFM switch-based fabric management software. For communication adapter port support on CF servers, the minimum required fabric manager software image that must be installed on the switch is image-PPC_M405EX-EFM_1.1.2500.img. The switch might not support a direct upgrade path to the minimum version, in which case multiple upgrades are required. For instructions on upgrading the fabric manager software on a specific Mellanox switch, see the Mellanox website. Enabling the subnet manager (SM) on the switch is mandatory for InfiniBand networks. To create a Db2 pureScale environment with multiple switches, you must have communication adapter ports on the CF servers and configure switch failover on the switches. To support switch failover, see the Mellanox website for instructions on setting up the subnet manager for a high availability domain.
  • Cable considerations:
    • On InfiniBand networks: QSFP 4 x 4 QDR cables are used to connect hosts to the switch and for inter-switch links (ISLs). If you are using two switches, two or more inter-switch links are required. The maximum number of inter-switch links required can be determined as half of the total number of communication adapter ports that are connected from the CFs and members to the switches. For example, in a two-switch Db2 pureScale environment where the primary and secondary CF each have four communication adapter ports and there are four members, the maximum number of inter-switch links required is 6 (6 = (2 * 4 + 4) / 2); see the sketch after this list.
    • On a RoCE network, the maximum number of ISLs can be further limited by the number of ports that are supported by the Link Aggregation Control Protocol (LACP). This setup is required for switch failover. Because this value can differ between switch vendors, refer to the switch manual for any such limitation. For example, the Blade Network Technologies G8124 24-port switch, with Blade OS 6.3.2.0, has a limitation of a maximum of eight ports in each LACP trunk between the two switches. This effectively caps the maximum number of ISLs at four (four ports on each switch).
  • In general, any 10GE switch that supports global pause flow control, as specified by IEEE 802.3x, is also supported. However, the exact setup instructions might differ from what is documented in the switch section, which is based on the IBM validated switches. Refer to the switch user manual for details.
  • For the configuration and the features that are required to be enabled and disabled for switch support on RoCE networks, see Configuring switch failover for a Db2 pureScale environment on a RoCE network (Linux). IEEE 802.3x global pause flow control is required. Any Ethernet switch that supports the listed configuration and features is supported. The exact setup instructions might differ from what is documented in the switch section, which is based on the IBM validated switches. Refer to the switch user manual for details.
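
As a worked example of the inter-switch link sizing rule in the cable considerations above, the following sketch (illustrative only; the function name is an arbitrary choice) computes the maximum number of ISLs as half of the total communication adapter ports connected from the CFs and members to the two switches.

```python
# Minimal sketch of the ISL sizing rule described in the cable considerations:
# the maximum number of ISLs is half of the total communication adapter ports
# connected from the CFs and members to the switches.

def max_isl(cf_ports_per_cf: int, num_cfs: int, member_ports: int) -> int:
    """Return the maximum number of inter-switch links for a two-switch setup."""
    total_ports = cf_ports_per_cf * num_cfs + member_ports
    return total_ports // 2

# Example from the text: primary and secondary CF with 4 ports each,
# plus 4 members with one port each -> (2 * 4 + 4) / 2 = 6 ISLs.
print(max_isl(cf_ports_per_cf=4, num_cfs=2, member_ports=4))  # prints 6
```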
Table 4. High-speed communication adapter requirements for BladeCenter HS22 servers
Communication adapter type | Switch | Cabling
InfiniBand (IB) | Voltaire 40 Gb InfiniBand Switch¹, for example part number 46M6005 | QSFP cables²
10-Gigabit Ethernet (10GE)³ | BNT Virtual Fabric 10 Gb Switch Module for IBM BladeCenter, for example part number 46C7191 |
  1. To create a Db2 pureScale environment with multiple switches, set up communication adapter ports for the CF hosts.
  2. Cable considerations:
    • On InfiniBand networks: QSFP 4 x 4 QDR cables are used to connect hosts to the switch and for inter-switch links, too. If you are using two switches, two or more inter-switch links are required. The maximum number of inter-switch links required can be determined as half of the total number of communication adapter ports that are connected from the CFs and members to the switches. For example, in a two-switch Db2 pureScale environment where the primary and secondary CF each have four communication adapter ports and there are four members, the maximum number of inter-switch links required is 6 (6 = (2 * 4 + 4) / 2). On a 10GE network, the maximum number of ISLs can be further limited by the number of ports that are supported by the Link Aggregation Control Protocol (LACP). This setup is required for switch failover. Because this value can differ between switch vendors, refer to the switch manual for any such limitation. For example, the Blade Network Technologies G8124 24-port switch, with Blade OS 6.3.2.0, has a limitation of a maximum of eight ports in each LACP trunk between the two switches. This effectively caps the maximum number of ISLs at four (four ports on each switch).
  3. For more information about using Db2 pureScale Feature with application cluster transparency in BladeCenter, see this developerWorks® article: http://www.ibm.com/developerworks/data/library/techarticle/dm-1110purescalebladecenter/.
Note: If a member exists on the same host as a cluster caching facility (CF), the cluster interconnect netname in db2nodes.cfg for the member and CF must be the same.

Hardware and firmware prerequisites

Note: Given the widely varying nature of such systems, IBM cannot practically guarantee to have tested on all possible systems or variations of systems. If problem reports exist for which IBM deems reproduction necessary, IBM reserves the right to attempt problem reproduction on a system that might not match the system on which the problem was reported.

For a TCP/IP cluster interconnect, in Db2 Version 11.1 and later fix packs, the Db2 pureScale Feature is supported on any rack-mounted server or blade server.

In Db2 Version 11.1.4.4 and later fix packs, the Db2 pureScale Feature is supported on any x64 Intel-compatible rack-mounted server that supports these 2-port Ethernet RoCE or InfiniBand adapters in a PCI slot:
  • Mellanox ConnectX-4 generation card that supports RDMA over Converged Ethernet (RoCE) or InfiniBand (IB) (RHEL only)
    Note: The ConnectX-4 generation cards require a minimum of Db2 Version 11.1.4.4 with RHEL 7.5.
IBM has validated the following adapters, which are configurable options on Lenovo x-Series servers:
  • Mellanox ConnectX-4 40GbE Adapter for Lenovo x-Series (00YK367)
  • Mellanox ConnectX-4 Infiniband EDR IB VPI Adapter for Lenovo x-Series (00MM960)
In Db2 Version 11.1 and later fix packs, the Db2 pureScale Feature is supported on any x64 Intel compatible rack-mounted server that supports these InfiniBand or Ethernet RoCE adapters:
  • Mellanox ConnectX-2 generation card that supports RDMA over converged Ethernet (RoCE) or InfiniBand
  • Mellanox ConnectX-3 generation card that supports RDMA over converged Ethernet (RoCE) or InfiniBand
In Db2 version 11.1 and later fix packs, a geographically dispersed Db2 pureScale cluster (GDPC) environment is supported on any x64 Intel compatible rack-mounted server that supports these Ethernet RoCE adapters:
  • Mellanox ConnectX-2 generation card that supports RDMA over converged Ethernet (RoCE)
  • Mellanox ConnectX-3 generation card that supports RDMA over converged Ethernet (RoCE)
IBM validated these adapters, which are configurable options on Lenovo x-Series servers:
  • Mellanox ConnectX-2 Dual Port 10 GbE Adapter for Lenovo x-Series (81Y9990)
  • Mellanox ConnectX-2 Dual-port QSFP QDR IB Adapter for Lenovo x-Series (95Y3750)
  • Mellanox ConnectX-3 FDR VPI IB/E Adapter for Lenovo x-Series (00D9550)
  • Mellanox ConnectX-3 10 GbE Adapter for Lenovo x-Series (00D9690)
Additionally, these server configurations with any of the specified network adapter types are supported:
Table 5. Additional IBM validated server configurations
Server | 10-Gigabit Ethernet (10GE) adapter | Minimum 10GE network adapter firmware version | InfiniBand (IB) Host Channel Adapter (HCA) | Minimum IB HCA firmware version
BladeCenter HS22 System x blades | Mellanox 2-port 10 Gb Ethernet Expansion Card with RoCE, for example part number 90Y3570 | 2.9.1000 | 2-port 40 Gb InfiniBand Card (CFFh), for example part number 46M6001 | 2.9.1000
BladeCenter HS23 System x blades | Mellanox 2-port 10 Gb Ethernet Expansion Card (CFFh) with RoCE, part number 90Y3570 | 2.9.1000 | 2-port 40 Gb InfiniBand Expansion Card (CFFh), part number 46M6001 | 2.9.1000
KVM Virtual Machine | Mellanox ConnectX-2 EN 10 Gb Ethernet Adapters with RoCE | 2.9.1200 | Not supported | N/A
Lenovo Flex System x240 Compute Node, Lenovo Flex System x440 Compute Node | IBM Flex System® EN4132 2-port 10 Gb RoCE Adapter | 2.10.2324 + uEFI Fix 4.0.320 | Not supported | N/A
Note:
  1. Install the latest supported firmware for your System x server from http://www.ibm.com/support/us/en/.
  2. KVM-hosted environments for a Db2 pureScale Feature are supported on rack-mounted servers only.
  3. Availability of specific hardware or firmware can vary over time and region. Check availability with your supplier.
1. For better I/O performance, create a separate IBM Spectrum Scale file system to hold your database, and specify this shared disk on the create database command.