Installation prerequisites for Db2 pureScale Feature (AIX)

Before you install a Db2 pureScale environment for the first time, ensure that you have your Db2 pureScale Feature installation plan created. Your installation plan helps ensure that your system meets the prerequisites and that preinstallation tasks are done.

When you plan your Db2 pureScale installation, review the software, hardware, firmware, and storage hardware configuration options to ensure that you meet the requirements.

For the most up-to-date installation requirements for data server products, see System requirements for IBM® Db2 for Linux®, UNIX, and Windows. This techdoc uses IBM Software Product Compatibility Reports (SPCR). With the SPCR tool, you can find complete lists of supported operating systems, system requirements, prerequisites, and optional supported software for these database products.

This topic details the software prerequisites (including operating system, OpenSSH, IBM Spectrum® Scale, and Tivoli® SA MP), the storage hardware requirements, and the hardware and firmware prerequisites (network adapters, cables, and switches).

Software prerequisites

Before you run the installation or apply a fix pack with the installFixPack command, ensure that fixes are applied for your operating system.
Table 1. Software requirements - AIX operating system version and technology levels
AIX version Technology Level Minimum Service Pack (SP) level AIX APAR
AIX 7.1 3 5 IV72952
AIX 7.1 4 1  
AIX 7.1 5 3 IJ07998, IJ13283
AIX 7.2 0 1  
AIX 7.2 1 1  
AIX 7.2 2 1  
AIX 7.2 3 1 IJ11241, IJ11146, IJ11326, IJ15063
AIX 7.2 4 1  
Note:
  1. InfiniBand networks and RoCE networks require uDAPL. The required uDAPL level is the level that is included in the AIX image. The uDAPL level that is included in a Technology Level (TL) and Service Pack (SP) can change when the TL or SP level changes, but it does not change while the TL and SP level remain the same.
  2. If the AIX system is running on a Technology Level at the minimum Service Pack that is specified in the table, all APARs listed in that row must be installed. For a system that runs on a Technology Level with a later Service Pack, verify whether the APAR fix is already included in that Service Pack level. To obtain fixes for the APARs on a system that is running a Service Pack higher than the minimum required but lower than the Service Pack in which the fix was first included, see IBM Support Fix Central. A sample check is shown after these notes.
  3. On AIX 7.2 installations, only TCP- and RDMA-based configurations can be used. RDMA can be used only with RoCE with IP support.
  4. To use AIX 7.2 TL04 SP1 or higher, the Db2 version must be Db2 11.1.4.6 or later.
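
To quickly confirm the AIX level of a host and whether a specific APAR fix is installed, you can use standard AIX commands; the APAR number below is only an example taken from Table 1:
  oslevel -s             # shows the AIX version, TL, and SP (for example, 7200-03-01-1838)
  instfix -ik IJ11241    # reports whether the fix for this APAR is installed
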
The AIX level must be one of the following when EC2M, EC2N, EC37, or EC38 cards are used with POWER8® systems:
  • AIX 7.1 TL04 SP3 (or higher SP or TL within the supported range in Table 1)
  • AIX 7.2 TL01 SP1 (or higher SP or TL within the supported range in Table 1)

When EC3L or EC3M cards are used, the AIX level must be AIX 7.2 TL01 SP1 or higher, with APAR IV91161 applied on the system before it is used with Db2.

When used with POWER9™ systems, the AIX level must be one of the following:
  • AIX 7.2 TL02 SP2 (7200-02-02-1832), or higher SP, with APAR IJ12133 applied to the system before it is used with Db2
  • AIX 7.2 TL03 SP1 (with the required APARs for this level), or higher SP, with the additional APAR IJ12134 applied to the system before it is used with Db2
At a minimum, Db2 V11.1.4.4 is required.
The AIX level must be one of the following when EC2S or EC2R cards are used with POWER9 systems:
  • AIX 7.2 TL02 SP2 (7200-02-02-1832), or higher SP, with APAR IJ12133 applied to the system before it is used with Db2
  • AIX 7.2 TL03 SP1 (with the required APARs for this level), or higher SP, with the additional APAR IJ12134 applied to the system before it is used with Db2
At a minimum, Db2 V11.1.4.4 is required.

Starting with Db2 Version 11.1.4.4, AIX 7.1 TL04 SP1 is the new minimum supported AIX operating system level.

In Db2 V11.1.4.4 and newer versions, POWER9 machines can be used with Db2 pureScale in POWER8 compatibility mode. Starting in Db2 11.1.4.5, POWER9 systems can also be used in POWER9 processor compatibility mode. AIX 7.2 TL03 support on POWER9 also requires the fix for APAR IJ12134. There is no difference in Db2 pureScale performance between POWER8 compatibility mode and POWER9 native mode.
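
To confirm the processor compatibility mode that an LPAR presents and the installed Db2 level, you can use commands such as the following; on AIX, proc0 is typically the first processor device, and db2level is run as the instance owner:
  lsattr -El proc0       # the "type" attribute shows, for example, PowerPC_POWER8 in POWER8 compatibility mode
  db2level               # shows the installed Db2 version and fix pack level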

The following Db2 configuration must be set when EC2M, EC2N, EC37, EC38, EC2S, EC2R, EC3L, or EC3M cards are used.
Note: These settings must be applied before the db2start command is issued.
- db2set DB2_SAL_INITIAL_TIMEOUT_FOR_CONNECT_SEC=10 
- db2set DB2_SAL_CONNECT_MAX_TIMEOUT=10
In Db2 V11.1.4.4 and newer versions, only DB2_SAL_INITIAL_TIMEOUT_FOR_CONNECT_SEC=10 is required.
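
To verify these registry settings before you issue the db2start command, you can list the Db2 registry variables as the instance owner; filtering the output with grep is optional:
  db2set -all | grep DB2_SAL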

When EC2M, EC2N, EC37, or EC38 cards are used with POWER7® systems, the AIX level must be AIX 7.1 TL04 SP1 or higher, or AIX 7.2 TL01 SP1 or higher.

The following list includes the required software:
Note:
  • IBM Spectrum Scale and Tivoli SA MP:
    • The required levels of IBM Tivoli System Automation for Multiplatforms (Tivoli SA MP) and IBM Spectrum Scale, along with any necessary fixes for a particular Db2 release and fix pack, are included in the Db2 images for that release and fix pack. They must be obtained, installed, and updated through the standard Db2 installation and upgrade procedures. Do not download and install individual fixes manually without guidance from Db2 service.
  • RSCT:
    • The IBM Reliable Scalable Cluster Technology (RSCT) level that is included within a Tivoli SA MP package is always the minimum supported level.
    • On Db2 11.1 and later fix packs, if IBM Tivoli System Automation for Multiplatforms (Tivoli SA MP) is already installed, it must be at the level that is packaged inside the Db2 image. A higher RSCT level might be installed as part of an AIX update. A sample check of the installed levels is shown after this list.
  • AIX:
    • Workload partitions (WPARs) are not supported in a Db2 pureScale environment.
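
To review the cluster software levels that are currently installed on a host, you can query the installed filesets with lslpp; the fileset name patterns shown are typical but might differ on your system:
  lslpp -l "sam.*"       # Tivoli SA MP filesets
  lslpp -l "rsct.*"      # RSCT filesets
  lslpp -l "gpfs.*"      # IBM Spectrum Scale (GPFS) filesets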

Storage hardware requirements

The Db2 pureScale Feature supports all storage area network (SAN) and directly attached shared block storage. For more information about Db2 cluster services support, see the “Shared storage support for Db2 pureScale environments” topic. The following storage hardware requirements must be met for Db2 pureScale Feature support.
Table 2. Minimum and recommended free disk space per host
  Recommended free disk space Minimum required free disk space
Disk to extract installation 3 GB 3 GB
Installation path 6 GB 6 GB
/tmp directory 5 GB 2 GB
/var directory 5 GB 2 GB
/usr directory 2 GB 512 MB
Instance home directory 5 GB 1.5 GB1
root home directory 300 MB 200 MB
  1. The disk space that is required for the instance home directory is calculated at run time and varies. Approximately 1 to 1.5 GB is normally required.
The following shared disk space must be free for each file system:
  • Instance shared directory: 10 GB1
  • Data: dependent on your specific application needs
  • Logs: dependent on the expected number of transactions and the application's logging requirements
A fourth shared disk is required for configuration as the Db2 cluster services tiebreaker disk.
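
To check the free space that is currently available in the local file systems from Table 2, you can use the AIX df command with gigabyte units; the paths shown are typical defaults and might differ in your environment:
  df -g / /tmp /var /usr /home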

Hardware and firmware prerequisites

Note: Given the widely varying nature of such systems, IBM cannot practically guarantee to have tested on all possible systems or variations of systems. For problem reports for which IBM deems reproduction necessary, IBM reserves the right to attempt problem reproduction on a system that might not match the system on which the problem was reported.
The Db2 pureScale Feature is supported on POWER9® compatible rack-mounted servers that support one of the Ethernet RoCE adapters listed below:
  • PCIe3 2-port 10 Gb NIC & RoCE SR/Cu adapter with feature code EC2S, EC2R
    Note:

    When used with POWER9 systems, the AIX level must be AIX 7.2 TL02 SP2 (7200-02-02-1832), or higher SP, with the additional APAR IJ12133, or AIX 7.2 TL03 SP1 (with the required APARs for this level), or higher SP, with the additional APAR IJ12134 applied to the systems before they are used with Db2. At a minimum, Db2 V11.1.4.4 is required.

    The following Db2 configuration settings must be used (which must be set before the db2start command is issued):
    db2set DB2_SAL_INITIAL_TIMEOUT_FOR_CONNECT_SEC=10
    db2set DB2_SAL_CONNECT_MAX_TIMEOUT=20
    Starting in 11.1.4.5, the above settings are not required.
  • PCIe3 2-port 100 GbE (NIC and RoCE) QSFP28 Adapter with feature code EC3L, EC3M
    Note: EC3L and EC3M cards must be used on the AIX 7.2 TL01 SP1 level or higher with APAR IV91161 applied on the system before it is used with Db2.
    When used with POWER9 systems, the AIX level must be one of the following:
    • AIX 7.2 TL02 SP2 (7200-02-02-1832), or higher SP, with APAR IJ12133 applied to the system before it is used with Db2
    • AIX 7.2 TL03 SP1 (with the required APARs for this level), or higher SP, with the additional APAR IJ12134 applied to the system before it is used with Db2
    At a minimum, Db2 V11.1.4.4 is required.
    The following Db2 configuration settings must be used (which must be set before the db2start command is issued):
    db2set DB2_SAL_INITIAL_TIMEOUT_FOR_CONNECT_SEC=10
    db2set DB2_SAL_CONNECT_MAX_TIMEOUT=20
    Starting in 11.1.4.5, the above settings are not required.
  • PCIe4 2-port 100 GbE (NIC & RoCE) QSFP28 Adapter with feature code EC66, EC67
    Note: EC66 and EC67 cards must be used on the AIX 7.2 TL03 SP4 level or higher on POWER9 systems. At a minimum, Db2 V11.1.4.5 is required.
The Db2 pureScale Feature is supported on POWER8 compatible rack-mounted servers that support one of the Ethernet RoCE adapters in the following list:
  • PCIe2 2-Port 10 GbE RoCE SFP+/SR Adapter with feature code EC27, EC28, EC29, EC30
  • PCIe3 2-Port 40 GbE NIC RoCE QSFP+ Adapter with feature code EC3A, EC3B
  • PCIe3 2-port 10 GbE NIC and RoCE SFP+/SR adapter with feature code EC2M, EC2N, EC37, EC38
    Note: When used with POWER8 systems, the AIX level used must be AIX 7.1 TL04 SP3 (or higher SP or TL within the supported range in Table 1), or AIX 7.2 TL01 SP1 (or higher SP or TL within the supported range in Table 1).
    The following Db2 configuration settings must be used (which must be set before the db2start command is issued):
    db2set DB2_SAL_INITIAL_TIMEOUT_FOR_CONNECT_SEC=10
    db2set DB2_SAL_CONNECT_MAX_TIMEOUT=20
    Starting in 11.1.4.5, the above settings are not required.
  • PCIe3 2-port 100 GbE (NIC and RoCE) QSFP28 Adapter with feature code EC3L, EC3M
    Note:
    The following Db2 configuration settings must be used (which must be set before the db2start command is issued):
    db2set DB2_SAL_INITIAL_TIMEOUT_FOR_CONNECT_SEC=10
    db2set DB2_SAL_CONNECT_MAX_TIMEOUT=20
    Starting in 11.1.4.5, the above settings are not required.

On a TCP/IP over Ethernet (TCP/IP) protocol network, a Db2 pureScale environment is supported on any rack-mounted server or blade server.

On an RDMA protocol network, a Db2 pureScale environment is supported on any POWER7 compatible rack-mounted server that supports one of these Ethernet RoCE or InfiniBand QDR adapters:
  • PCIe2 2-Port 10 GbE RoCE SFP+/SR Adapter with feature code EC27, EC28, EC29, EC30
  • PCIe2 2-port 4X InfiniBand QDR Adapter with feature code 5283, 5285
  • PCIe3 2-port 10 GbE NIC and RoCE SFP+/SR adapter with feature code EC2M, EC2N, EC37, EC38
    Note: The following Db2 configuration settings must be used (which must be set before the db2start command is issued):
    db2set DB2_SAL_INITIAL_TIMEOUT_FOR_CONNECT_SEC=10
    db2set DB2_SAL_CONNECT_MAX_TIMEOUT=20
    Starting in 11.1.4.5, the above settings are not required.
On an RDMA protocol network, a Db2 pureScale environment is supported on POWER7 IBM Flex System® p260 and p460 Compute Nodes (7895-22X, 7895-23X, 7895-42X, and 7895-43X) that support one of these Ethernet RoCE adapters:
  • EN4132 2-port 10 Gb RoCE Adapter with feature code EC26
On an RDMA protocol network, a Db2 pureScale environment is supported on any POWER6® or POWER7 compatible rack-mounted server in the DDR InfiniBand support table (Table 5), and on newer equivalent POWER models.

On a TCP/IP protocol over Ethernet (TCP/IP) network, a Db2 pureScale environment requires only one high-speed network for the Db2 cluster interconnect. Running your Db2 pureScale environment on a TCP/IP network can provide a faster setup for testing the technology. However, for the most demanding write-intensive data sharing workloads, an RDMA protocol over Converged Ethernet (RoCE) network can offer better performance.

InfiniBand networks and RoCE networks that use RDMA protocol require two networks: one (public) Ethernet network and one (private) high-speed communication network for communication between members and CFs. The high-speed communication network must be an InfiniBand network, a RoCE network, or a TCP/IP network. A mixture of these high-speed communication networks is not supported.

Note: Although a single Ethernet adapter is required on a host for the public network in a Db2 pureScale environment, set up Ethernet bonding for the network if you have two Ethernet adapters. Ethernet bonding (also known as channel bonding) is a setup where two or more network interfaces are combined. Ethernet bonding provides redundancy and better resilience in the event of Ethernet network failures. Refer to your Ethernet documentation for instructions on configuring Ethernet bonding. Network interfaces that are used for a Db2 pureScale cluster interconnect with RDMA must not be bonded.

It is also a requirement to keep the maximum transmission unit (MTU) size of the network interfaces at the default value of 1500 for both TCP and RDMA network types. For more information on configuring the MTU size on AIX, see Change MTU of Host Ethernet Adapter.
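
To confirm the MTU that is currently configured on an interface, you can query the interface attributes; en0 is used only as an example interface name:
  lsattr -El en0 -a mtu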

The rest of this hardware and firmware prerequisites section applies to using RDMA protocol.

Cables and switches: a Db2 pureScale environment is supported on any Ethernet (10GbE or other supported speed) cable and switch and any QDR InfiniBand cable and switch that is supported by a POWER7 or POWER8 server.

The communication adapter port can be connected to one of the following networks:
  • a RoCE network
  • an InfiniBand network
To use a RoCE network, all network adapters and switches must support remote direct memory access (RDMA) over Converged Ethernet (RoCE).
Server-specific hardware details:

Servers in a Db2 pureScale environment must use both an Ethernet network and a high-speed communication adapter port.

Table 3. Server-specific hardware details - IBM validated RDMA over Converged Ethernet (RoCE) support and required firmware level
Server Minimum Required Platform Firmware level PCIe Support for RoCE network adapters
IBM POWER7 780/HE (9179-MHC) AM740_042_042 PCIe2 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC28) (Copper)

PCIe2 2-Port 10 GbE RoCE SR Adapter (Feature code EC30) (Optical)

IBM POWER7 770/MR (9117-MMC) AM740_042_042 PCIe2 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC28) (Copper)

PCIe2 2-Port 10 GbE RoCE SR Adapter (Feature code EC30) (Optical)

IBM POWER7 780/HE (9179-MMD) AM760_034_034 PCIe2 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC28) (Copper)

PCIe2 2-Port 10 GbE RoCE SR Adapter (Feature code EC30) (Optical)

IBM POWER7 770/MR (9117-MMD) AM760_034_034 PCIe2 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC28) (Copper)

PCIe2 2-Port 10 GbE RoCE SR Adapter (Feature code EC30) (Optical)

IBM POWER7 720 1S (8202-E4C with optional low-profile slots) AL740_043_042 PCIe2 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC28) (Copper)

PCIe2 2-Port 10 GbE RoCE SR Adapter (Feature code EC30) (Optical)

PCIe2 Low Profile 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC27) (Copper) in the PCIe Newcombe Riser Card (Feature code 5685)

PCIe2 Low Profile 2-Port 10 GbE RoCE SR Adapter (Feature code EC29) (Optical) in the PCIe Newcombe Riser Card (Feature code 5685)

IBM POWER7 740 2S (8205-E6C with optional low-profile slots) AL740_043_042 PCIe2 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC28) (Copper)

PCIe2 2-Port 10 GbE RoCE SR Adapter (Feature code EC30) (Optical)

PCIe2 Low Profile 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC27) (Copper) in the PCIe Newcombe Riser Card (Feature code 5685)

PCIe2 Low Profile 2-Port 10 GbE RoCE SR Adapter (Feature code EC29) (Optical) in the PCIe Newcombe Riser Card (Feature code 5685)

IBM POWER7 710 1S (8231-E1C) AL740_043_042 PCIe2 Low Profile 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC27) (Copper)

PCIe2 Low Profile 2-Port 10 GbE RoCE SR Adapter (Feature code EC29) (Optical)

IBM POWER7 730 2S (8231-E2C) AL740_043_042 PCIe2 Low Profile 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC27) (Copper)

PCIe2 Low Profile 2-Port 10 GbE RoCE SR Adapter (Feature code EC29) (Optical)

IBM Flex System p260 Compute Node (7895-22X) AF763_042 EN4132 2-port 10 Gb RoCE Adapter (Feature code EC26)
IBM Flex System p260 Compute Node (7895-23X) AF763_042 EN4132 2-port 10 Gb RoCE Adapter (Feature code EC26)
IBM Flex System p460 Compute Node (7895-42X) AF763_042 EN4132 2-port 10 Gb RoCE Adapter (Feature code EC26)
IBM Flex System p460 Compute Node (7895-43X) AF763_042 EN4132 2-port 10 Gb RoCE Adapter (Feature code EC26)
Note:
  • RoCE adapters do not support virtualization. Each LPAR requires a dedicated RoCE adapter. For example, if a machine has two LPARs (one for CF and one for member), each of these LPARs must have its own dedicated RoCE adapter.
  • For production clusters, it is strongly recommended that each Db2 member and each CF be deployed into their own LPAR. The CF should have a dedicated CPU to provide the low-latency responses required for communication with the Db2 members. Db2 members will also benefit from dedicated CPUs for best-in-class performance.
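
To confirm which adapters are assigned to an LPAR (for example, to verify that each member and CF LPAR has its own dedicated RoCE adapter), you can list the adapter devices; ent0 is used only as an example device name:
  lsdev -Cc adapter      # lists the adapters that are visible to this LPAR
  lscfg -vl ent0         # shows the location code and vital product data for one adapter
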
Table 4. Server-specific hardware details for IBM validated QDR - InfiniBand support and required firmware level
Server Minimum Required Platform Firmware level PCIe2 Dual-port QDR InfiniBand Channel adapter
IBM POWER7 780/HE (9179-MHC) AM740_042_042 PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285)
IBM POWER7 770/MR (9117-MMC) AM740_042_042 PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285)
IBM POWER7 740 2S (8205-E6C with optional low-profile slots) AL740_043_042 PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285), or PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685), or both
IBM POWER7 740 (8205-E6B) with Newcombe (optional low-profile Gen2 slots) AL720_102 PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685)
IBM POWER7 710 (8231-E1C) AL740_043_042 PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature Code: 5685)
IBM POWER7 720 (8202-E4B) AL730_066_035 PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature Code: 5685)
IBM POWER7 720 (8202-E4C) AL740_043_042 PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285), or PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685), or both
IBM POWER7 730 2S (8231-E2C) AL740_043_042 PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature Code: 5685)
Note:
  • Although QDR InfiniBand switches can no longer be purchased through IBM, Db2 still supports configurations with QDR InfiniBand switches supported by Intel.
  • QDR InfiniBand adapters do not support virtualization. Each LPAR requires a dedicated QDR InfiniBand adapter. For example, if a machine has two LPARs (one for CF and one for member), each of these LPARs must have its own dedicated QDR InfiniBand adapter.
Table 5. Server-specific hardware details for IBM validated DDR - InfiniBand support1 and required firmware level
Server Minimum Required Platform Firmware level InfiniBand network adapter, GX Dual-port 12x Channel Attach - DDR InfiniBand Channel adapter InfiniBand Channel conversion cables
IBM POWER7 795 (9119-FHB) * AH720_102 or higher Feature Code 1816 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 780 (9179-MHB) * AM720_102 or higher Feature Code 1808 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 780 (9179-MHC) * AM740_042 or higher Feature Code 1808 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 770 (9117-MMB) * AM720_102 or higher Feature Code 1808 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 770 (9117-MMC) * AM740_042 or higher Feature Code 1808 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 750 (8233-E8B) AL730_049 or higher Feature Code 5609 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 740 (8205-E6C) AL720_102 or higher Feature Code EJ04 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 740 (8205-E6B) AL720_102 or higher Feature Code 5615 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 730 (8231-E2B) AL720_102 or higher Feature Code 5266 4x to 4x cables (Feature Code 3246)
IBM POWER7 720 (8202-E4C) AL720_102 or higher Feature Code EJ04 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 720 (8202-E4B) AL720_102 or higher Feature Code 5615 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 710 (8231-E2B) AL720_102 or higher Feature Code 5266 4x to 4x cables (Feature Code 3246)
IBM POWER6 595 (9119-FHA) EH350_071 or higher Feature Code 1816 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER6 550 Express (8204-E8A) EL350_071 or higher Feature Code 5609 12x to 4x (Feature Code 1828, 1841, or 1854)
Note:
  1. Although DDR InfiniBand hardware can no longer be purchased through IBM, Db2 still supports configurations with DDR InfiniBand.
  2. When you acquire systems, consider the I/O ports available and future workloads for greater flexibility and scalability. The servers that are marked with an asterisk (*) are designed for enterprise applications. For more information about selecting the hardware, see Site and hardware planning in the IBM System Hardware documentation: http://www.ibm.com/support/knowledgecenter/POWER8/p8hdx/POWER8welcome.htm.
  3. InfiniBand Channel conversion cables are available in multiple lengths, each with a different product feature code (FC). Some different 12x to 4x InfiniBand Channel conversion cable lengths available are 1.5 m (FC 1828), 3 m (FC 1841), and 10 m (FC 1854). Your data center layout and the relative location of the hardware in the Db2 pureScale environment are factors that must be considered when you select the cable length.

Cable Information:

For 100Gb cables, refer to the POWER documentation for cable options that correspond to the adapter you are using.
Note: A server that uses a 100Gb adapter must use a 100Gb-type switch and the corresponding cables.
Table 6. 40GE cable information (1, 3, 5, 10, and 30 meters)
  1 m (copper) 3 m (copper) 5 m (copper) 10 m (optical) 30 m (optical)
Feature Code number EB2B EB2H ECBN EB2J EB2K
Table 7. 10GE cable information (1, 3, and 5 meters)
  1 m 3 m 5 m
Feature Code number EN01 EN02 EN03
Note:
  • IBM Qualified Copper SFP+ cables or standard 10-Gb SR optical cabling (up to 300 m cable length) can be used for connecting RoCE adapters to the 10GE switches.
Table 8. IBM Qualified QSFP+ Cable Information for 10GE RoCE
  1 m 3 m
Feature Code number EB2B EB2H
Note:
  • IBM Qualified QSFP+ cables can be used as inter-switch links between POWER Flex System 10GE switches.
Table 9. QDR InfiniBand cable information (1, 3, 5, 10, 30 meters)
  1 m (copper) 3 m (copper) 5 m (copper) 10 m (optical) 30 m (optical)
Feature Code number 3287 3288 3289 3290 3293
Switches:
In general, any 10GE or 40GE switch that supports global pause flow control, as specified by IEEE 802.3x, is also supported for RoCE networks. For more information about required switch configurations, see Switch configuration on a RoCE network (AIX).
Table 10. IBM validated 100GE switches for RDMA
IBM Validated Switch
Cisco Nexus C9336C-FX2
Table 11. IBM validated 10GE switches for RDMA
IBM Validated Switch
Blade Network Technologies RackSwitch G8124
Blade Network Technologies RackSwitch G8264
Juniper Networks QFX3500 Switch
Lenovo RackSwitch G8272
Table 12. IBM validated 40GE switches for RDMA
IBM Validated Switch
Blade Network Technologies RackSwitch G8316
Note:
  • IBM Qualified Copper SFP+ cables or standard 10-Gb SR optical cabling (up to 300 m cable length) can be used as inter-switch links. The 3 m or 7 m SFP+ cables that are supplied by Juniper can be used between Juniper switches.
  • For the configuration and the features that are required to be enabled and disabled, see Switch configuration on a RoCE network (AIX). Any Ethernet switch that supports the listed configuration and features is supported. The exact setup instructions might differ from what is documented in the switch section, which is based on the IBM validated switches. Refer to the switch user manual for details.
You must not intermix 10-Gigabit and 40-Gigabit Ethernet network switch types. The same type of switch, adapter, and cables must be used in a cluster. A server that uses a 10G adapter must use a 10G type switch and the corresponding cables. A server that uses a 40G adapter must use a 40G type switch and the corresponding cables.
Table 13. Supported InfiniBand network switches
InfiniBand switch Intel model number Number of ports Type Required rack space
IBM 7874-024 9024 24 4x DDR InfiniBand Edge Switch 1U
IBM 7874-040 9040 48 4x DDR InfiniBand Fabric Director Switch 4U
IBM 7874-120 9102 128 4x DDR InfiniBand Fabric Director Switch 7U
IBM 7874-240 9240 288 4x DDR InfiniBand Fabric Director Switch 14U
IBM 7874-036 12200 36 QDR InfiniBand Switch 1U
IBM 7874-072 12800-040 72 QDR InfiniBand Switch 5U
IBM 7874-324 12800-180 324 QDR InfiniBand Switch 14U
Note:
  • All of the InfiniBand switches listed in the previous table must use the embedded subnet management functions. When you order InfiniBand switches from Intel, management modules must be purchased for the switch.
  • Although InfiniBand switches can no longer be purchased through IBM, Db2 still supports configurations with InfiniBand switches supported by Intel.
  • If you are using two switches in the Db2 pureScale environment, two or more 4x to 4x inter-switch links (ISL) are required. To help with performance and fault tolerance to inter-switch link failures, use half as many inter-switch link cables as the total number of communication adapter ports that are connected from CFs and members to the switches. For example, in a two-switch Db2 pureScale environment where the primary and secondary CF each have four cluster interconnect netnames and there are four members, use six inter-switch links (6 = (2 x 4 + 4) / 2). Choose 4x to 4x InfiniBand ISL cables of appropriate length for your network environment.

DDR and QDR InfiniBand network switch types cannot be intermixed. The same type of switch, adapter, and cables must be used in a cluster. A server that uses a DDR InfiniBand adapter must use a DDR type switch and the corresponding cables. A server that uses a QDR InfiniBand adapter must use a QDR type switch and the corresponding cables.

1 For better I/O performance, create a separate IBM Spectrum Scale file system to hold your database, and specify this shared disk on the CREATE DATABASE command.
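
As an illustration only, assuming a separate IBM Spectrum Scale file system is mounted at /db2fs/datafs1 (a hypothetical path), the storage path can be named when the database is created:
  db2 "CREATE DATABASE MYDB ON /db2fs/datafs1"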