Installation prerequisites for Db2 pureScale Feature (Power Linux)
Before you install IBM® Db2 pureScale Feature on Power®, you must ensure that your system meets the installation prerequisites.
Ensure that you have created your Db2 pureScale Feature installation plan. Your installation plan helps ensure that your system meets the prerequisites and that you have performed the preinstallation tasks. Detailed requirements are provided in the following sections: software prerequisites (including operating system, IBM Storage Scale, and Tivoli® SA MP), storage hardware requirements, network prerequisites, and hardware and firmware prerequisites.
Software prerequisites
The Db2 pureScale Feature is supported on Power with a little endian architecture.
The libraries and additional packages that are listed for each Linux® distribution in the following table are required on the cluster caching facilities and members. Before you install Db2 pureScale Feature or update to the latest fix pack, update hosts with the required software; a package-verification sketch follows the table notes.
| Linux distribution | Required packages | OpenFabrics Enterprise Distribution (OFED) package |
|---|---|---|
| Red Hat® Enterprise Linux (RHEL) 8.8 | For RoCE network type, run the group installation of the InfiniBand Support package. | |
| Red Hat Enterprise Linux (RHEL) 8.6⁶ | For RoCE network type, run the group installation of the InfiniBand Support package. | |
| Red Hat Enterprise Linux (RHEL) 8.1⁴ | General: cpp, gcc, gcc-c++, kernel-devel, sg3_utils, ntp or chrony, patch, perl-Sys-Syslog, Python 3.6+, binutils, ksh, lsscsi, m4, openssh | For RoCE network type, run the group installation of the InfiniBand Support package. |
| Red Hat Enterprise Linux (RHEL) 7.9³ | libibverbs, Python 3.6+, librdmacm, rdma-core, dapl, ibacm, ibutils, libstdc++ (both x86_64 and i686), glibc (both x86_64 and i686), gcc-c++, gcc, kernel, kernel-devel, kernel-headers, linux-firmware, ntp or chrony, ntpdate, sg3_utils, sg3_utils-libs, binutils, binutils-devel, m4, openssh, cpp, ksh, libgcc (both x86_64 and i686), file, libgomp, make, patch, perl-Sys-Syslog, mksh⁵, psmisc⁵ | For InfiniBand network type or RoCE network type, run the group installation of the InfiniBand Support package. |
| Red Hat Enterprise Linux (RHEL) 7.8² | General: cpp, gcc, gcc-c++, kernel-devel, sg3_utils, ntp or chrony, patch, perl-Sys-Syslog, Python 3.6+, binutils, ksh, lsscsi, m4, openssh | For RoCE network type, run the group installation of the InfiniBand Support package. |
| Red Hat Enterprise Linux (RHEL) 7.6¹ | General: cpp, gcc, gcc-c++, kernel-devel, sg3_utils, ntp or chrony, patch, perl-Sys-Syslog, Python 3.6+, binutils, ksh, lsscsi, m4, openssh. For RoCE network: libibverbs, libibverbs-utils, librdmacm, librdmacm-utils, rdma-core, dapl, ibacm, ibutils, libibumad, infiniband-diags, mstflint, perftest, qperf | For RoCE network type, run the group installation of the InfiniBand Support package. |
| Red Hat Enterprise Linux (RHEL) 7.5 | General: cpp, gcc, gcc-c++, kernel-devel, sg3_utils, ntp or chrony, patch, perl-Sys-Syslog, Python 3.6+, binutils, ksh, lsscsi, m4, openssh. For RoCE network: libibcm, libibverbs, libibverbs-utils, librdmacm, librdmacm-utils, rdma-core, dapl, ibacm, ibutils, libibumad, infiniband-diags, mstflint, perftest, qperf | For RoCE network type, run the group installation of the InfiniBand Support package. |
¹ Db2 APAR IT29745 is required when running RHEL 7.6 or higher.
³ When using RHEL 7.9, the Db2 version must be 11.5.6 or later.
⁴ When using RHEL 8.1, the Db2 version must be 11.5.5 or later. RHEL 8.1 currently supports TCP only (no RDMA), and might only be able to run on POWER8 systems (no POWER9 support).
⁵ This package is required for Db2 11.5.7 and later.
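As a convenience only (this is not part of the documented installation procedure), the following Python sketch shows one way to confirm that the general packages from the table are installed on a RHEL host. The package list and the use of `rpm -q` are assumptions based on the table above; adjust the list for your distribution and network type, and remember that RoCE hosts also need the group installation of the InfiniBand Support package.

```python
# Illustrative sketch: check whether the "General" packages listed in the
# table above are installed on a RHEL host by querying the RPM database.
# The package list is the general set for RHEL 7.6/7.8/8.1; adjust it for
# your distribution. Either ntp or chrony (only one is needed) must also
# be present.
import subprocess

REQUIRED_RPMS = [
    "cpp", "gcc", "gcc-c++", "kernel-devel", "sg3_utils", "patch",
    "perl-Sys-Syslog", "binutils", "ksh", "lsscsi", "m4", "openssh",
]

def rpm_installed(name):
    """Return True if `rpm -q <name>` reports the package as installed."""
    result = subprocess.run(
        ["rpm", "-q", name],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

missing = [pkg for pkg in REQUIRED_RPMS if not rpm_installed(pkg)]
time_sync_ok = rpm_installed("ntp") or rpm_installed("chrony")

if missing:
    print("Missing packages:", ", ".join(missing))
if not time_sync_ok:
    print("Neither ntp nor chrony is installed.")
if not missing and time_sync_ok:
    print("All general packages found.")
```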
Storage hardware requirements
Db2 pureScale Feature supports all storage area network (SAN) and directly attached shared block storage. For more information about Db2 cluster services support, see the “Shared storage support for Db2 pureScale environments” topic. The following storage hardware requirements must be met for Db2 pureScale Feature support:
| | Recommended free disk space | Minimum required free disk space |
|---|---|---|
| Disk to extract installation | 3 GB | 3 GB |
| Installation path | 6 GB | 6 GB |
| /tmp directory | 5 GB | 2 GB |
| /var directory | 5 GB | 2 GB |
| /usr directory | 2 GB | 512 MB |
| Instance home directory | 5 GB | 1.5 GB¹ |
¹ The disk space that is required for the instance home directory is calculated at run time and varies. Approximately 1 to 1.5 GB is normally required.
- Instance shared directory: 10 GB¹
- Data: dependent on your specific application needs
- Logs: dependent on the expected number of transactions and the logging requirements of the application

An additional shared disk is required to configure as the Db2 cluster services tiebreaker disk. A sketch for checking local free space against the minimum values in the table follows.
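The following sketch is an illustrative example, not an official tool; it compares free space on a few local paths against the minimum values from the table. The installation path and instance home directory shown are placeholders, so substitute your own locations.

```python
# Illustrative sketch: compare free disk space against the minimum values
# from the table above. /opt/ibm/db2 and /home/db2inst1 are placeholder
# locations for the installation path and instance home directory.
import shutil

GIB = 1024 ** 3
MIB = 1024 ** 2

MINIMUMS = {
    "/tmp": 2 * GIB,                    # 2 GB minimum
    "/var": 2 * GIB,                    # 2 GB minimum
    "/usr": 512 * MIB,                  # 512 MB minimum
    "/opt/ibm/db2": 6 * GIB,            # installation path: 6 GB minimum
    "/home/db2inst1": int(1.5 * GIB),   # instance home: about 1.5 GB
}

for path, minimum in MINIMUMS.items():
    try:
        free = shutil.disk_usage(path).free
    except FileNotFoundError:
        print(f"{path}: not found, skipping")
        continue
    status = "OK" if free >= minimum else "TOO SMALL"
    print(f"{path}: {free / GIB:.1f} GiB free "
          f"(minimum {minimum / GIB:.1f} GiB) -> {status}")
```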
Network prerequisites
Db2 pureScale on the Power Linux platform currently supports only the TCP/IP and RoCE network protocols.
On a TCP/IP over Ethernet (TCP/IP) network, a Db2 pureScale environment is supported on any POWER8 or POWER9 rack-mounted server, and the Db2 pureScale Feature requires only one high-speed network for the Db2 cluster interconnect. Running your Db2 pureScale environment on a TCP/IP network can provide a faster setup for testing the technology. However, for the most demanding write-intensive data sharing workloads, an RDMA over Converged Ethernet (RoCE) network can offer better performance.
It is also a requirement to keep the maximum transmission unit (MTU) size of the network interfaces at the default value of 1500. For more information on configuring the MTU size on Linux, see How do you change the MTU value on the Linux and Windows operating systems?
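As a quick way to confirm that interfaces are still at the default MTU of 1500, a sketch along these lines reads the MTU that the kernel reports in sysfs; it is a generic check with no Db2-specific logic implied.

```python
# Illustrative sketch: report the MTU of each network interface by reading
# /sys/class/net/<iface>/mtu and flag anything that is not the default 1500.
# The loopback interface (lo) normally reports a larger MTU and can be ignored.
from pathlib import Path

EXPECTED_MTU = 1500

for iface in sorted(Path("/sys/class/net").iterdir()):
    mtu = int((iface / "mtu").read_text().strip())
    note = "" if mtu == EXPECTED_MTU else "  <-- not at the default value"
    print(f"{iface.name}: MTU {mtu}{note}")
```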
Hardware and firmware prerequisites
The following network adapters are supported:
- PCIe3 2-port 100 GbE (NIC and RoCE) QSFP28 Adapter with feature code EC3L, EC3M
- PCIe3 2-port 10 Gb NIC & RoCE SR/Cu Adapter with feature code EC2S, EC2R
- PCIe3 2-port 10 Gb NIC & RoCE SR/Cu adapter with feature code EC2U, EC2T
- PCIe2 2-Port 10GbE RoCE SFP+/SR Adapter with feature code EC27, EC28, EC29, EC30, EL27, EL2Z
- PCIe3 2-Port 40GbE NIC RoCE QSFP+ Adapter with feature code EC3A, EC3B
Cable Information
| | 1 meter (copper) | 3 meter (copper) | 5 meter (copper) | 10 meter (optical) | 30 meter (optical) |
|---|---|---|---|---|---|
| Feature Code number | EB2B | EB2H | ECBN | EB2J | EB2K |

| | 1 meter | 3 meter | 5 meter |
|---|---|---|---|
| Feature Code number | EN01 | EN02 | EN03 |
Switches
For the configuration and the features that are required to be enabled and disabled, see: Configuring switch failover for a DB2® pureScale environment on a RoCE network (Linux)
| IBM Validated Switch |
|---|
| Blade Network Technologies RackSwitch G8264 |
| Lenovo RackSwitch G8272 |

| IBM Validated Switch |
|---|
| Blade Network Technologies RackSwitch G8316 |

| IBM Validated Switch |
|---|
| Cisco Nexus C9336C-FX2 |
- IBM Qualified Copper SFP+ cables or standard 10-Gb SR optical cabling (up to 300 meter cable length) can be used as inter-switch links.
- For the configuration and the features that are required to be enabled and disabled, see Configuring switch failover for a DB2 pureScale environment on a RoCE network (Linux). Any Ethernet switch that supports the listed configuration and features is supported. The exact setup instructions might differ from what is documented in the switch section, which is based on the IBM validated switches. Refer to the switch user manual for details.
- For cable considerations on RoCE: the QSFP 4 x 4 QDR cables are used to connect hosts to the switch and also for inter-switch links. If you use two switches, two or more inter-switch links are required. The maximum number of inter-switch links required can be determined by taking half of the total number of communication adapter ports connected from cluster caching facilities (CFs) and members to the switches. For example, in a two-switch Db2 pureScale environment where the primary and secondary cluster caching facility (CF) each have four communication adapter ports and there are four members, the maximum number of inter-switch links required is 6 (6 = (2 * 4 + 4) / 2), as shown in the sketch after this list.
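To make the arithmetic in the example explicit, here is a small sketch of the rule described above (half of the total communication adapter ports that CFs and members connect to the switches). The function name is illustrative, and rounding up when the total is odd is an assumption made here for safety.

```python
# Illustrative sketch of the inter-switch link (ISL) rule described above:
# the maximum number of ISLs is half of the total communication adapter
# ports that the CFs and members connect to the switches. Rounding up for
# an odd total is an assumption made here for safety.
import math

def max_inter_switch_links(cf_ports, member_ports):
    """cf_ports and member_ports are lists of adapter-port counts per host."""
    total_ports = sum(cf_ports) + sum(member_ports)
    return math.ceil(total_ports / 2)

# Example from the text: primary and secondary CF with 4 ports each, plus
# four members with one port each -> (2 * 4 + 4) / 2 = 6 inter-switch links.
print(max_inter_switch_links(cf_ports=[4, 4], member_ports=[1, 1, 1, 1]))  # 6
```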
You must not intermix 10-Gigabit, 40-Gigabit, and 100-Gigabit Ethernet network switch types. The same type of switch, adapter, and cables must be used in a cluster. A server that uses a 10G adapter must use a 10G type switch and the corresponding cables. A server that uses a 40G adapter must use a 40G type switch and the corresponding cables. A server that uses a 100G adapter must use a 100G type switch and the corresponding cables.