Installation prerequisites for Db2 pureScale Feature (Power Linux)

Before you install IBM® Db2 pureScale Feature on Power®, you must ensure that your system meets the installation prerequisites.

Ensure that you have created your Db2 pureScale Feature installation plan. Your installation plan helps ensure that your system meets the prerequisites and that you have performed the preinstallation tasks. Detailed requirements are described in the following sections: software prerequisites (including the operating system, IBM Storage Scale, and Pacemaker), storage hardware requirements, network prerequisites, and hardware and firmware prerequisites.

Software prerequisites

The Db2 pureScale Feature is supported on Power with a little endian architecture.

The libraries and additional packages that are listed for the Linux® distribution in the following table are required on the cluster caching facilities and members. Before you install Db2 pureScale Feature or update to the latest fix pack, update the hosts with the required software.

Table 1. Power Linux software requirements
Linux distribution: Red Hat® Enterprise Linux (RHEL) 9.4
Required packages:
cpp
gcc
gcc-c++
kernel-devel
sg3_utils
ntp or chrony
patch
perl-Sys-Syslog
python3 (version 3.6+)
binutils
ksh
lsscsi
m4
openssh
libibverbs
libibverbs-utils
librdmacm
librdmacm-utils
rdma-core (see note 1)
ibacm
libibumad
infiniband-diags
mstflint
perftest
qperf
mksh
psmisc
python3-dnf-plugin-versionlock
Note:
  1. To install the base packages for RoCE, run the group installation of the "InfiniBand Support" package.
  2. Ensure that the kernel level used is the kernel level that is included in the RHEL release.
Important: The required levels of Pacemaker and IBM Spectrum Scale, along with any necessary fixes for a particular Db2 release and fix pack, are included in the Db2 installation images for that release and fix pack. They must be obtained, installed, and updated through the standard Db2 installation and upgrade procedures. Do not download and install individual fixes manually without guidance from Db2 Service.
Note: Secure Copy Protocol (SCP) is used to copy files between cluster caching facilities and members.
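For example, on RHEL 9.4 the packages in Table 1 can be installed with dnf. The following is a minimal sketch that assumes the host has access to the appropriate repositories and uses chrony for time synchronization; the exact group name for the RoCE base packages can vary by release.

  # Install the packages listed in Table 1 (chrony is chosen here instead of ntp)
  dnf install cpp gcc gcc-c++ kernel-devel sg3_utils chrony patch \
      perl-Sys-Syslog python3 binutils ksh lsscsi m4 openssh \
      libibverbs libibverbs-utils librdmacm librdmacm-utils rdma-core \
      ibacm libibumad infiniband-diags mstflint perftest qperf mksh \
      psmisc python3-dnf-plugin-versionlock
  # Install the base packages for RoCE (see note 1)
  dnf group install "InfiniBand Support"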

Storage hardware requirements

Db2 pureScale Feature supports all storage area network (SAN) and directly attached shared block storage. For more information about Db2 cluster services support, see the “Shared storage support for Db2 pureScale environments” topic. The following storage hardware requirements must be met for Db2 pureScale Feature support:

Table 2. Minimum and recommended free disk space per host
                               Recommended free disk space   Minimum required free disk space
Disk to extract installation   3 GB                          3 GB
Installation path              6 GB                          6 GB
/tmp directory                 5 GB                          2 GB
/var directory                 5 GB                          2 GB
/usr directory                 2 GB                          512 MB
Instance home directory        6 GB                          5 GB (2)

(2) The disk space that is required for the instance home directory is calculated at run time and varies. Approximately 1 GB to 1.5 GB is normally required.

The following shared disk space must be free for each file system:
  • Instance shared directory: 10 GB
  • Data: Dependent on your specific application needs
  • Logs: Dependent on the expected number of transactions and the logging requirements of the application.

An additional shared disk is required and must be configured as the Db2 cluster services tiebreaker disk.
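As a quick preinstallation check, the available space on the relevant file systems can be verified from the command line. This is a minimal sketch that assumes an installation path under /opt and instance home directories under /home; adjust the paths to match your environment.

  # Report free space on the file systems used during installation (paths are examples)
  df -h /opt /tmp /var /usr /home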

Network prerequisites

Db2 pureScale on the Power Linux platform currently supports only the TCP/IP and RoCE network protocols.

On a TCP/IP over Ethernet (TCP/IP) protocol network, a Db2 pureScale environment is supported on any POWER9 or POWER10 rack-mounted server. On a TCP/IP protocol network, the Db2 pureScale Feature requires only one high-speed network for the Db2 cluster interconnect. Running your Db2 pureScale environment on a Transmission Control Protocol/Internet Protocol network can provide a faster setup for testing the technology. However, for the most demanding write-intensive data sharing workloads, an RDMA protocol over Converged Ethernet (RoCE) network can offer better performance.

A RoCE network, which uses the RDMA protocol, requires two networks: one (public) Ethernet network and one (private) high-speed communication network for communication between members and cluster caching facilities (CFs). The high-speed communication network must be either a RoCE network or a Transmission Control Protocol/Internet Protocol network.
Attention: A mixture of these high-speed communication networks is not supported.
Tip: Although only a single Ethernet adapter is required for a Db2 pureScale environment, you should set up Ethernet bonding for the network if you have two Ethernet adapters. Ethernet bonding (also known as channel bonding) is a setup in which two or more network interfaces are combined. Ethernet bonding provides redundancy and better resilience if an Ethernet network adapter fails. Refer to your Ethernet adapter documentation for instructions on configuring Ethernet bonding. Bonding of RoCE networks is not supported.
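As an illustration, an active-backup bond can be set up with NetworkManager's nmcli on RHEL. This is a minimal sketch that assumes hypothetical interface names eth0 and eth1 and the active-backup mode, not a definitive configuration; refer to your adapter and distribution documentation for the options that apply to your environment.

  # Create an active-backup bond and add two Ethernet interfaces as ports (interface names are examples)
  nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100"
  nmcli connection add type ethernet con-name bond0-port1 ifname eth0 master bond0
  nmcli connection add type ethernet con-name bond0-port2 ifname eth1 master bond0
  nmcli connection up bond0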

The maximum transmission unit (MTU) size of the network interfaces must also be kept at the default value of 1500. For more information about configuring the MTU size on Linux, see How do you change the MTU value on the Linux and Windows operating systems?
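As a quick check, the current MTU of an interface can be displayed from the command line; a minimal sketch, assuming a hypothetical interface name eth0:

  # Show the configured MTU for an interface (interface name is an example)
  ip link show eth0 | grep -o 'mtu [0-9]*'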

Hardware and firmware prerequisites

The Db2 pureScale Feature is supported on POWER10 compatible rack-mounted servers that support one of the following Ethernet RoCE adapters:
  • PCIe3 2-port 10 Gb NIC & RoCE SR/Cu adapter with feature code EC2U, EC2T
  • PCIe4 2-port 10 GbE RoCE SFP28 adapter with feature code EC71, EC72
The Db2 pureScale Feature is supported on POWER9 compatible rack-mounted servers that support one of the following Ethernet RoCE adapters:
  • PCIe3 2-port 100 GbE (NIC and RoCE) QSFP28 Adapter with feature code EC3L, EC3M
  • PCIe3 2-port 10 Gb NIC & RoCE SR/Cu Adapter with feature code EC2S, EC2R
  • PCIe3 2-port 10 Gb NIC & RoCE SR/Cu adapter with feature code EC2U, EC2T
Attention: Given the widely varying nature of such systems, IBM cannot practically guarantee to have tested on all possible systems or variations of systems. In the event of problem reports for which IBM deems reproduction necessary, IBM reserves the right to attempt problem reproduction on a system that may not match the system on which the problem was reported.

Cable Information

Table 3. 10GE cable information (1, 3, and 5 meters)
                      1 meter   3 meters   5 meters
Feature Code number   EN01      EN02       EN03
Note: IBM Qualified Copper SFP+ cables or standard 10-Gb SR optical cabling (up to 300 meter cable length) can be used for connecting RoCE adapters to the 10GE switches.
For a list of 100GE cables that are compatible with your adapter, see the corresponding document in the Power documentation. For example, the cables that are available for the EC3L/EC3M adapter on a POWER9® machine are listed under that adapter's entry.
Note: A server using a 100G adapter must use a 100G type switch and the corresponding cables.

Switches

For the configuration and the features that are required to be enabled and disabled, see: Configuring switch failover for a DB2® pureScale environment on a RoCE network (Linux)

Table 4. IBM validated 10GE switches for RDMA
IBM Validated Switch
Blade Network Technologies RackSwitch G8264
Lenovo RackSwitch G8272
Table 5. IBM validated 100GE switches for RDMA
IBM Validated Switch
Cisco Nexus C9336C-FX2
Note:
  • IBM Qualified Copper SFP+ cables or standard 10-Gb SR optical cabling (up to 300 meter cable length) can be used as inter-switch links.
  • For the configuration and the features that are required to be enabled and disabled, see Configuring switch failover for a DB2 pureScale environment on a RoCE network (Linux). Any Ethernet switch that supports the listed configuration and features is supported. The exact setup instructions might differ from what is documented in the switch section, which is based on the IBM validated switches. Refer to the switch user manual for details.
  • For cable considerations on RoCE: the QSFP 4 x 4 QDR cables are used to connect hosts to the switch and are also used for inter-switch links. If two switches are used, two or more inter-switch links are required. The maximum number of inter-switch links that are required can be determined as half of the total number of communication adapter ports that are connected from cluster caching facilities (CFs) and members to the switches. For example, in a two-switch Db2 pureScale environment where the primary and secondary cluster caching facility (CF) each have four communication adapter ports, and there are four members, the maximum number of inter-switch links required is 6 (6 = (2 * 4 + 4) / 2).

You must not intermix 10-Gigabit, 40-Gigabit, and 100-Gigabit Ethernet network switch types. The same type of switch, adapter, and cables must be used in a cluster. A server that uses a 10G adapter must use a 10G type switch and the corresponding cables. A server that uses a 40G adapter must use a 40G type switch and the corresponding cables. A server that uses a 100G adapter must use a 100G type switch and the corresponding cables.

Remember: If a member exists on the same host as a cluster caching facility (CF), the cluster interconnect netname in db2nodes.cfg for the member and CF must be the same.
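As an illustration only, entries for a host that runs both a member and a CF might look like the following sketch; the node numbers, the host name db2host01, and the netname db2host01-roce0 are hypothetical, and the point is that both entries carry the same cluster interconnect netname (see the db2nodes.cfg file reference for the exact format):

  0 db2host01 0 db2host01-roce0 - MEMBER
  128 db2host01 0 db2host01-roce0 - CF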