Installation prerequisites for the Db2 pureScale Feature (Intel Linux)
This document applies only to Linux® distributions on Intel-based hardware. Before you install the Db2 pureScale Feature, you must ensure that your system meets the installation prerequisites.
For the specific Linux distribution versions that are supported, refer to the web page listed in the references.
Ensure that you created your Db2 pureScale® Feature installation plan. Your installation plan helps ensure that your system meets the prerequisites and that you performed the pre-installation tasks. The following requirements are described in detail: software prerequisites (including operating system, IBM Storage Scale, and Pacemaker), storage hardware requirements, network prerequisites, and hardware and firmware prerequisites.
Software prerequisites
In Db2 12.1, the Db2 pureScale Feature supports Linux virtual machines.
| Linux distribution | Required packages |
|---|---|
| Red Hat® Enterprise Linux (RHEL) 9.4 | libstdc++ (both x86_64 and i686), glibc (both x86_64 and i686), gcc-c++, gcc, kernel, kernel-devel, kernel-headers, linux-firmware, ntp or chrony, Python 3.6+, sg3_utils, sg3_utils-libs, binutils, m4, openssh, cpp, libgcc (both x86_64 and i686), file, libgomp, make, patch, perl-Sys-Syslog, mksh, psmisc, libibverbs, libibverbs-utils, librdmacm, librdmacm-utils, rdma-core, ibacm, infiniband-diags, iwpmd, libibumad, libpsm2, mstflint, opa-address-resolution, opa-basic-tools, opa-fastfabric, opa-libopamgt, perftest, qperf, srp_daemon |
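To verify these packages ahead of installation, a quick script can query rpm for each name. The following Python sketch is illustrative only: the package list is abridged from the table above, and exact package names can vary by distribution release.

```python
import subprocess

# Abridged list of required RHEL packages from the table above; extend as needed.
REQUIRED_PACKAGES = [
    "libstdc++", "glibc", "gcc-c++", "gcc", "kernel-devel", "kernel-headers",
    "sg3_utils", "binutils", "m4", "openssh", "cpp", "libgomp", "make",
    "patch", "mksh", "psmisc", "libibverbs", "librdmacm", "rdma-core",
]

def missing_packages(packages):
    """Return the subset of packages that rpm reports as not installed."""
    missing = []
    for name in packages:
        result = subprocess.run(
            ["rpm", "-q", name],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode != 0:
            missing.append(name)
    return missing

if __name__ == "__main__":
    for name in missing_packages(REQUIRED_PACKAGES):
        print(f"missing: {name}")
```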
Usage notes for software requirements:
- The required levels of Pacemaker and IBM® Storage Scale, along with any necessary fixes for a particular Db2 release and fix pack, are included in the Db2 images for that release and fix pack. They must be obtained, installed, and updated through the standard Db2 installation and upgrade procedures. Do not download and install individual fixes manually without guidance from Db2 Service.
- SLES 15 SP6 is supported on Db2 12.1.1 and later.
- To install the base packages for RoCE, run the group installation of the InfiniBand Support package (a sketch follows this list).
- Secure Copy Protocol (SCP) is used to copy files between cluster caching facilities and members.
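As a minimal sketch of the RoCE group installation noted above, assuming a RHEL system where dnf is available and the group is named "Infiniband Support" (verify the exact group name with dnf group list):

```python
import subprocess

# Run the InfiniBand Support group installation on RHEL.
# The "dnf" command and the "Infiniband Support" group name are assumptions;
# confirm the group name on your system before running.
def install_roce_base_packages() -> None:
    subprocess.run(
        ["dnf", "-y", "group", "install", "Infiniband Support"],
        check=True,  # raise CalledProcessError if the installation fails
    )

if __name__ == "__main__":
    install_roce_base_packages()
```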
Storage hardware requirements
| Directory or path | Recommended free disk space | Minimum required free disk space |
|---|---|---|
| Disk to extract installation | 3 GB | 3 GB |
| Installation path | 6 GB | 6 GB |
| /tmp directory | 5 GB | 2 GB |
| /var directory | 5 GB | 2 GB |
| /usr directory | 2 GB | 512 MB |
| Instance home directory | 5 GB | 1.5 GB¹ |
| root home directory | 300 MB | 200 MB |
- Instance shared directory: 10 GB¹
- Data: dependent on your specific application needs
- Logs: dependent on the expected number of transactions and the applications' logging requirements
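As an aid to verifying the minimums in the table above, the following Python sketch compares free space in a few key paths against the minimum requirements; the instance home path is a hypothetical example to replace with your own.

```python
import shutil

# Minimum required free space (in MB) from the table above.
# "/home/db2inst1" is a hypothetical instance home path; substitute your own.
MINIMUMS_MB = {
    "/tmp": 2048,
    "/var": 2048,
    "/usr": 512,
    "/home/db2inst1": 1536,
}

def check_free_space(minimums):
    """Print each path whose free space falls below its minimum."""
    for path, required_mb in minimums.items():
        free_mb = shutil.disk_usage(path).free // (1024 * 1024)
        if free_mb < required_mb:
            print(f"{path}: {free_mb} MB free, {required_mb} MB required")

if __name__ == "__main__":
    check_free_space(MINIMUMS_MB)
```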
Network prerequisites
On a TCP/IP protocol over Ethernet (TCP/IP) network, a Db2 pureScale environment requires only one high-speed network for the Db2 cluster interconnect. Running your Db2 pureScale environment on a TCP/IP network can provide a faster setup for testing the technology. However, for the most demanding write-intensive data sharing workloads, an RDMA protocol over Converged Ethernet (RoCE) network can offer better performance.
A Db2 pureScale environment that uses the RDMA protocol requires two networks: one (public) Ethernet network and one (private) high-speed communication network for communication between members and CFs. The high-speed communication network must be an IB network, a RoCE network, or a TCP/IP network; a mixture of these high-speed communication networks is not supported.
You must also keep the maximum transmission unit (MTU) size of the network interfaces at the default value of 1500. For more information about configuring the MTU size on Linux, see How do you change the MTU value on the Linux and Windows operating systems?
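As a minimal check, assuming a Linux host that exposes interface MTUs under /sys/class/net, the following Python sketch flags any interface whose MTU differs from the required default of 1500:

```python
import os

REQUIRED_MTU = 1500  # required default MTU for all network interfaces

def interfaces_with_wrong_mtu(sysfs_root="/sys/class/net"):
    """Return (interface, mtu) pairs whose MTU differs from REQUIRED_MTU."""
    wrong = []
    for iface in os.listdir(sysfs_root):
        mtu_path = os.path.join(sysfs_root, iface, "mtu")
        try:
            with open(mtu_path) as f:
                mtu = int(f.read().strip())
        except OSError:
            continue  # skip interfaces without a readable MTU value
        if mtu != REQUIRED_MTU:
            wrong.append((iface, mtu))
    return wrong

if __name__ == "__main__":
    for iface, mtu in interfaces_with_wrong_mtu():
        print(f"{iface}: MTU {mtu} (expected {REQUIRED_MTU})")
```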
The rest of this network prerequisites section applies to configurations that use the RDMA protocol.
| Communication adapter type | Switch | IBM Validated Switch | Cabling |
|---|---|---|---|
| 10-Gigabit Ethernet (10GE) | 10GE | | Small Form-factor Pluggable Plus (SFP+) cables |
| 40-Gigabit Ethernet (40GE) | 40GE | | QSFP cables |
| 100-Gigabit Ethernet (100GE) | 100GE | Cisco Nexus C9336C-FX2 | QSFP28 cables |
Cable considerations:
- On a RoCE network, two or more inter-switch links (ISLs) are required if you are using two switches. The maximum number of inter-switch links required can be determined by taking half of the total communication adapter ports that are connected from CFs and members to the switches. For example, in a two-switch Db2 pureScale environment where the primary and secondary CF each have four communication adapter ports and there are four members, the maximum number of inter-switch links required is 6 (6 = (2 * 4 + 4) / 2). The maximum number of ISLs can be further limited by the number of ports that are supported by the Link Aggregation Control Protocol (LACP), which is required for switch failover. Because this limit can differ between switch vendors, refer to the switch manual for any such limitation. For example, the Blade Network Technologies G8124 24-port switch, with Blade OS 6.3.2.0, supports a maximum of eight ports in each LACP trunk between the two switches, which effectively caps the maximum number of ISLs at four (four ports on each switch). The sketch after this list makes this calculation concrete.
- For the configuration and features that must be enabled or disabled for switch support on RoCE networks, see Configuring switch failover for a Db2 pureScale environment on a RoCE network (Linux). IEEE 802.3x global pause flow control is required. Any Ethernet switch that supports the listed configuration and features is supported. The exact setup instructions might differ from what is documented in the switch section, which is based on the IBM validated switches. Refer to the switch user manual for details.
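To make the ISL arithmetic above concrete, here is a small Python sketch; the function and parameter names are illustrative, and the LACP trunk limit is an example value to check against your switch manual.

```python
# Maximum number of inter-switch links (ISLs) for a two-switch RoCE setup,
# following the rule described above: half of the total communication adapter
# ports connected from CFs and members, further capped by the switch's LACP
# trunk port limit.

def max_inter_switch_links(cf_ports_total, member_ports_total, lacp_trunk_port_limit):
    """Return the number of ISLs to configure between the two switches."""
    by_port_count = (cf_ports_total + member_ports_total) // 2
    # A trunk limit of N ports spread over two switches allows N // 2 ISLs.
    by_lacp_limit = lacp_trunk_port_limit // 2
    return min(by_port_count, by_lacp_limit)

# Example from the text: two CFs with four ports each (8 total), four members
# with one port each, and a switch limited to eight ports per LACP trunk.
print(max_inter_switch_links(8, 4, 8))  # -> 4 (port count allows 6; LACP caps it at 4)
```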
Hardware and firmware prerequisites
For TCP/IP architecture, the Db2 pureScale Feature is supported on any rack-mounted server or blade server. For an RDMA protocol over Converged Ethernet (RoCE) network, the following communication adapter cards are supported:
- Mellanox ConnectX-3 generation card that supports RDMA over Converged Ethernet (RoCE)
- Mellanox ConnectX-4 generation card that supports RDMA over Converged Ethernet (RoCE)
- Mellanox ConnectX-5 generation card that supports RDMA over Converged Ethernet (RoCE)
- Mellanox ConnectX-6 generation card that supports RDMA over Converged Ethernet (RoCE) (RHEL only)
- Mellanox ConnectX-3 FDR VPI IB/E Adapter for Lenovo x-Series (00D9550)
- Mellanox ConnectX-3 10 GbE Adapter for Lenovo x-Series (00D9690)
- Mellanox ConnectX-4 40GbE Adapter for Lenovo x-Series (00YK367)
- Mellanox ConnectX-5 100Gb Adapter (MCX556A-ECAT)
- Mellanox ConnectX-5 100Gb Adapter (MCX516A-CCAT)