Installation prerequisites for Db2 pureScale Feature (Power Linux)

Before you install IBM® Db2 pureScale Feature on Power®, you must ensure that your system meets the installation prerequisites.

Ensure that you have created your Db2 pureScale Feature installation plan. The installation plan helps ensure that your system meets the prerequisites and that you have performed the preinstallation tasks. This topic describes the detailed requirements: software prerequisites (including operating system, IBM Storage Scale, and Tivoli® SA MP), storage hardware requirements, network prerequisites, and hardware and firmware prerequisites.

Software prerequisites

The Db2 pureScale Feature is supported on Power with a little endian architecture.

The libraries and additional packages that are listed for each Linux® distribution in the following table are required on the cluster caching facilities and members. Before you install Db2 pureScale Feature or update to the latest fix pack, update the hosts with the required software. A minimal package-check sketch is provided after the table notes.

Attention: In Db2 11.5.8 and later, Python 3.6+ is required for pureScale configurations on all supported Linux platforms.
Table 1. Power Linux software requirements
Each entry lists the Linux distribution, the required packages, and the OpenFabrics Enterprise Distribution (OFED) package requirement.
Red Hat® Enterprise Linux (RHEL) 9.2
cpp
gcc
gcc-c++
kernel-devel
sg3_utils
ntp or chrony
patch
perl-Sys-Syslog
Python 3.6+
binutils
ksh
lsscsi
m4
openssh
libibverbs
libibverbs-utils
librdmacm
librdmacm-utils
rdma-core
ibacm
libibumad
infiniband-diags
mstflint
perftest
qperf
mksh (see note 5)
psmisc (see note 5)
For RoCE network type, run the group installation of the InfiniBand Support package.
Red Hat Enterprise Linux (RHEL) 8.8
cpp
gcc
gcc-c++
kernel-devel
sg3_utils
ntp or chrony
patch
perl-Sys-Syslog
Python 3.6+
binutils
ksh
lsscsi
m4
openssh
libibverbs
libibverbs-utils
librdmacm
librdmacm-utils
rdma-core
ibacm
libibumad
infiniband-diags
mstflint
perftest
qperf
mksh (see note 5)
psmisc (see note 5)
For RoCE network type, run the group installation of the InfiniBand Support package.

Red Hat Enterprise Linux (RHEL) 8.6 (see note 6)

cpp
gcc
gcc-c++
kernel-devel
sg3_utils
ntp or chrony
patch
perl-Sys-Syslog
Python 3.6+
binutils
ksh
lsscsi
m4
openssh
libibverbs
libibverbs-utils
librdmacm
librdmacm-utils
rdma-core
ibacm
libibumad
infiniband-diags
mstflint
perftest
qperf
mksh (see note 5)
psmisc (see note 5)
For RoCE network type, run the group installation of the InfiniBand Support package.

Red Hat Enterprise Linux (RHEL) 8.1 (see note 4)

General

cpp
gcc
gcc-c++
kernel-devel
sg3_utils
ntp or chrony
patch
perl-Sys-Syslog
Python 3.6+
binutils
ksh
lsscsi
m4
openssh

For RoCE network:

libibverbs
libibverbs-utils
librdmacm
librdmacm-utils
rdma-core
dapl
ibacm
ibutils
libibumad
infiniband-diags
mstflint
perftest
qperf
mksh (see note 5)
psmisc (see note 5)

For RoCE network type, run the group installation of the InfiniBand Support package.

Red Hat Enterprise Linux (RHEL) 7.9 (see note 3)

libibverbs
librdmacm
rdma-core
dapl
ibacm
ibutils
libstdc++ (both x86_64 and i686)
glibc (both x86_64 and i686)
gcc-c++
gcc
kernel
kernel-devel
kernel-headers
linux-firmware
ntp or chrony
ntpdate
sg3_utils
sg3_utils-libs
binutils
binutils-devel
m4
openssh
cpp
ksh
libgcc (both x86_64 and i686)
file
libgomp
make
patch
perl-Sys-Syslog
mksh (see note 5)
psmisc (see note 5)
Python 3.6+
For InfiniBand network type or RoCE network type, run the group installation of the "InfiniBand Support" package.

Red Hat Enterprise Linux (RHEL) 7.8 (see note 2)

General

cpp
gcc
gcc-c++
kernel-devel
sg3_utils
ntp or chrony
patch
perl-Sys-Syslog
Python 3.6+
binutils
ksh
lsscsi
m4
openssh

For RoCE network:

libibverbs
libibverbs-utils
librdmacm
librdmacm-utils
rdma-core
dapl
ibacm
ibutils
libibumad
infiniband-diags
mstflint
perftest
qperf
mksh (see note 5)
psmisc (see note 5)

For RoCE network type, run the group installation of the InfiniBand Support package.

Red Hat Enterprise Linux (RHEL) 7.6 (see note 1)

General

cpp
gcc
gcc-c++
kernel-devel
sg3_utils
ntp or chrony
patch
perl-Sys-Syslog
Python 3.6+
binutils
ksh
lsscsi
m4
openssh

For RoCE network:

libibverbs
libibverbs-utils
librdmacm
librdmacm-utils
rdma-core
dapl
ibacm
ibutils
libibumad
infiniband-diags
mstflint
perftest
qperf

For RoCE network type, run the group installation of the InfiniBand Support package.

Red Hat Enterprise Linux (RHEL) 7.5

General

cpp
gcc
gcc-c++
kernel-devel
sg3_utils
ntp or chrony
patch
perl-Sys-Syslog
Python 3.6+
binutils
ksh
lsscsi
m4
openssh

For RoCE network:

libibcm
libibverbs
libibverbs-utils
librdmacm
librdmacm-utils
rdma-core
dapl
ibacm
ibutils
libibumad
infiniband-diags
mstflint
perftest
qperf

For RoCE network type, run the group installation of the InfiniBand Support package.
Note:

1 Db2 APAR IT29745 is required when running RHEL 7.6 or higher.

2 When using RHEL 7.8, the Db2 version must be 11.5.5 or later.
Note: ConnectX-2 is no longer supported on RHEL 7.8.

3 When using RHEL 7.9, the Db2 version must be 11.5.6 or later.

4 When using RHEL 8.1, the Db2 version must be 11.5.5 or later. RHEL 8.1 currently supports TCP only (no RDMA), and might only be able to run on POWER8 systems (no POWER9 support).

5 This package is required for Db2 11.5.7 and later.

6 When using RHEL 8.6, the Db2 version must be 11.5.8 or later.
Note: Ensure that the kernel level in use is the kernel level that is included with the RHEL release.
Important: The required levels of IBM Tivoli System Automation for Multiplatforms (Tivoli SA MP) and IBM Storage Scale, along with any necessary fixes for a particular Db2 release and fix pack, must be obtained from the Db2 images for that release and fix pack, and installed and updated through the standard Db2 installation and upgrade procedures. Do not download and install individual fixes manually without guidance from Db2 support.
Note: For RHEL 8.8 and RHEL 9.2, the Db2 version must be 11.5.9.0 or later.
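
The exact package set depends on your RHEL level, as listed in Table 1. The following minimal Python sketch is an illustration only (it is not part of the Db2 tooling); it assumes an RPM-based host with the rpm command on the PATH, and the package list shown is a sample that you would replace with the full list for your distribution. It reports the interpreter level and any packages that are not yet installed.

    import subprocess
    import sys

    # Sample only: replace with the full package list for your RHEL level from Table 1.
    REQUIRED_PACKAGES = ["cpp", "gcc", "gcc-c++", "kernel-devel", "sg3_utils",
                         "patch", "perl-Sys-Syslog", "binutils", "ksh", "m4", "openssh"]

    if sys.version_info < (3, 6):
        print("Python 3.6 or later is required; found " + sys.version.split()[0])

    missing = []
    for pkg in REQUIRED_PACKAGES:
        # 'rpm -q <name>' exits with a non-zero return code when the package is not installed.
        rc = subprocess.run(["rpm", "-q", pkg],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL).returncode
        if rc != 0:
            missing.append(pkg)

    if missing:
        print("Missing packages: " + ", ".join(missing))
    else:
        print("All checked packages are installed.")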

Storage hardware requirements

The Db2 pureScale Feature supports all storage area network (SAN) and directly attached shared block storage. For more information about Db2 cluster services support, see the “Shared storage support for Db2 pureScale environments” topic. The following storage hardware requirements must be met for Db2 pureScale Feature support:

Table 2. Minimum and recommended free disk space per host
  • Disk to extract installation: 3 GB recommended, 3 GB minimum
  • Installation path: 6 GB recommended, 6 GB minimum
  • /tmp directory: 5 GB recommended, 2 GB minimum
  • /var directory: 5 GB recommended, 2 GB minimum
  • /usr directory: 2 GB recommended, 512 MB minimum
  • Instance home directory: 5 GB recommended, 1.5 GB minimum (see note 1)

1 The disk space that is required for the instance home directory is calculated at run time and varies. Approximately 1 GB to 1.5 GB is normally required.
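
The following minimal sketch checks free space against the minimum values in the table by using only the Python standard library. It is illustrative rather than part of any Db2 utility, and the installation path and instance home directory shown are placeholders to adjust for your environment.

    import shutil

    GB = 1024 ** 3  # a GB is treated as 1024**3 bytes for this rough check

    # Paths marked "placeholder" are illustrative; adjust them to your own layout.
    minimums = {
        "/tmp": 2 * GB,
        "/var": 2 * GB,
        "/usr": 512 * 1024 ** 2,
        "/opt/ibm/db2/V11.5": 6 * GB,     # installation path (placeholder)
        "/home/db2sdin1": int(1.5 * GB),  # instance home directory (placeholder)
    }

    for path, minimum in minimums.items():
        try:
            free = shutil.disk_usage(path).free
        except FileNotFoundError:
            print("%s does not exist yet; skipping" % path)
            continue
        status = "OK" if free >= minimum else "BELOW MINIMUM"
        print("%-20s free %6.1f GB, minimum %4.1f GB: %s"
              % (path, free / GB, minimum / GB, status))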

The following shared disk space must be free for each file system:
  • Instance shared directory: 10 GB (see note 1)
  • Data: Dependent on your specific application needs
  • Logs: Dependent on the expected number of transactions and the logging requirements of the application.

An additional shared disk is required to be configured as the Db2 cluster services tiebreaker disk.

Network prerequisites

Db2 pureScale on the Power Linux platform currently supports only the TCP/IP and RoCE network protocols.

On a TCP/IP over Ethernet (TCP/IP) protocol network, a Db2 pureScale environment is supported on any POWER8 or POWER9 rack-mounted server, and the Db2 pureScale Feature requires only one high-speed network for the Db2 cluster interconnect. Running your Db2 pureScale environment on a TCP/IP network can provide a faster setup and is suggested for testing the technology. However, for the most demanding write-intensive data sharing workloads, an RDMA over Converged Ethernet (RoCE) network can offer better performance.

RoCE networks, which use the RDMA protocol, require two networks: one (public) Ethernet network and one (private) high-speed communication network for communication between the members and the cluster caching facilities (CFs). The high-speed communication network must be either a RoCE network or a TCP/IP network.
Attention: A mixture of these high-speed communication networks is not supported.
Tip: Although a single Ethernet adapter is required for a Db2 pureScale environment, you should set up Ethernet bonding for the network if you have two Ethernet adapters. Ethernet bonding (also known as channel bonding) is a setup where two or more network interfaces are combined. Ethernet bonding provides redundancy and better resilience if an Ethernet network adapter fails. Refer to your Ethernet adapter documentation for instructions on configuring Ethernet bonding. Bonding of RoCE networks is not supported.

It is also a requirement to keep the maximum transmission unit (MTU) size of the network interfaces at the default value of 1500. For more information on configuring the MTU size on Linux, see How do you change the MTU value on the Linux and Windows operating systems?
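
For example, the following minimal sketch reads the MTU of the cluster interconnect interfaces from sysfs and flags any value other than 1500. The interface names are placeholders; list your actual interconnect interfaces.

    # Placeholder interface names; replace with your actual cluster interconnect interfaces.
    INTERFACES = ["eth0", "bond0"]

    for iface in INTERFACES:
        try:
            with open("/sys/class/net/%s/mtu" % iface) as f:
                mtu = int(f.read().strip())
        except FileNotFoundError:
            print("%s: interface not found" % iface)
            continue
        print("%s: MTU %d %s" % (iface, mtu, "(OK)" if mtu == 1500 else "(expected 1500)"))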

Hardware and firmware prerequisites

The Db2 pureScale Feature is supported on POWER9 compatible rack-mounted servers that support one of the following Ethernet RoCE adapters:
  • PCIe3 2-port 100 GbE (NIC and RoCE) QSFP28 Adapter with feature code EC3L, EC3M
  • PCIe3 2-port 10 Gb NIC & RoCE SR/Cu Adapter with feature code EC2S, EC2R
  • PCIe3 2-port 10 Gb NIC & RoCE SR/Cu adapter with feature code EC2U, EC2T
Attention: POWER9 machines can only be used with Db2 pureScale in POWER8 compatibility mode.
The Db2 pureScale Feature is supported on POWER8 compatible rack-mounted servers that support one of the following Ethernet RoCE adapters:
  • PCIe2 2-Port 10GbE RoCE SFP+/SR Adapter with feature code EC27, EC28, EC29, EC30, EL27, EL2Z
  • PCIe3 2-Port 40GbE NIC RoCE QSFP+ Adapter with feature code EC3A, EC3B
Attention: Given the widely varying nature of such systems, IBM cannot practically guarantee to have tested on all possible systems or variations of systems. In the event of problem reports for which IBM deems reproduction necessary, IBM reserves the right to attempt problem reproduction on a system that may not match the system on which the problem was reported.

Cable Information

Table 3. 40GE cable information (1, 3, 5, 10, and 30 meters)
  • 1 meter (copper): feature code EB2B
  • 3 meter (copper): feature code EB2H
  • 5 meter (copper): feature code ECBN
  • 10 meter (optical): feature code EB2J
  • 30 meter (optical): feature code EB2K
Table 4. 10GE cable information (1, 3, and 5 meters)
  • 1 meter: feature code EN01
  • 3 meter: feature code EN02
  • 5 meter: feature code EN03
Note: IBM Qualified Copper SFP+ cables or standard 10-Gb SR optical cabling (up to 300 meter cable length) can be used for connecting RoCE adapters to the 10GE switches.
For a list of 100GE cables that are compatible with your adapter, see the corresponding document in the Power documentation. For example, the cables that are available for the EC3L/EC3M adapter on POWER9® are listed under that adapter's entry.
Note: A server using a 100G adapter must use a 100G type switch and the corresponding cables.

Switches

For the configuration and the features that are required to be enabled and disabled, see: Configuring switch failover for a DB2® pureScale environment on a RoCE network (Linux)

Table 5. IBM validated 10GE switches for RDMA
IBM Validated Switch
Blade Network Technologies RackSwitch G8264
Lenovo RackSwitch G8272
Table 6. IBM validated 40GE switches for RDMA
IBM Validated Switch
Blade Network Technologies RackSwitch G8316
Table 7. IBM validated 100GE switches for RDMA
IBM Validated Switch
Cisco Nexus C9336C-FX2
Note:
  • IBM Qualified Copper SFP+ cables or standard 10-Gb SR optical cabling (up to 300 meter cable length) can be used as inter-switch links.
  • For the configuration and the features that are required to be enabled and disabled, see Configuring switch failover for a DB2 pureScale environment on a RoCE network (Linux). Any Ethernet switch that supports the listed configuration and features is supported. The exact setup instructions might differ from what is documented in the switch section, which is based on the IBM validated switches. Refer to the switch user manual for details.
  • For cable considerations on RoCE: The QSFP 4 x 4 QDR cables are used to connect hosts to the switch, and also for inter-switch links. If you use two switches, two or more inter-switch links are required. The maximum number of inter-switch links required can be determined by taking half of the total communication adapter ports connected from the cluster caching facilities (CFs) and members to the switches. For example, in a two-switch Db2 pureScale environment where the primary and secondary cluster caching facility (CF) each have four communication adapter ports, and there are four members, the maximum number of inter-switch links required is 6 (6 = (2 * 4 + 4)/2), as shown in the sketch after this list.
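
The inter-switch link rule from the last item can be written as a small helper. The following minimal sketch is illustrative only; rounding up when the total port count is odd is an assumption, not a documented rule.

    import math

    def max_inter_switch_links(cf_ports, member_ports):
        # Half of all communication adapter ports connected from the CFs and
        # members to the switches; rounded up here if the total is odd (assumption).
        return math.ceil((cf_ports + member_ports) / 2)

    # Worked example from the note: 2 CFs with 4 ports each, 4 members with 1 port each.
    print(max_inter_switch_links(cf_ports=2 * 4, member_ports=4))  # prints 6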

You must not intermix 10-Gigabit, 40-Gigabit, and 100-Gigabit Ethernet network switch types. The same type of switch, adapter, and cables must be used in a cluster. A server that uses a 10G adapter must use a 10G type switch and the corresponding cables. A server that uses a 40G adapter must use a 40G type switch and the corresponding cables. A server that uses a 100G adapter must use a 100G type switch and the corresponding cables.

Remember: If a member exists on the same host as a cluster caching facility (CF), the cluster interconnect netname in db2nodes.cfg for the member and CF must be the same.
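
A minimal sketch of such a consistency check follows, assuming the typical pureScale db2nodes.cfg layout of node number, hostname, logical port, netname, resource set name, and node type (MEMBER or CF). Both the file path and the assumed layout are placeholders to verify against your own configuration.

    from collections import defaultdict

    NODES_CFG = "/home/db2sdin1/sqllib/db2nodes.cfg"  # placeholder instance path

    netnames = defaultdict(set)    # hostname -> set of netnames
    node_types = defaultdict(set)  # hostname -> set of node types (MEMBER, CF)

    with open(NODES_CFG) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 6:
                continue  # skip blank or unexpected lines
            _, hostname, _, netname, _, node_type = fields[:6]
            netnames[hostname].add(netname)
            node_types[hostname].add(node_type)

    for host, types in node_types.items():
        if {"MEMBER", "CF"} <= types and len(netnames[host]) > 1:
            print("%s: member and CF netnames differ: %s" % (host, sorted(netnames[host])))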