DB2 10.5 for Linux, UNIX, and Windows

Installation prerequisites for DB2 pureScale Feature (AIX)

Before you install a DB2® pureScale® environment for the first time, ensure you have created your DB2 pureScale Feature installation plan. Your installation plan helps ensure that your system meets the prerequisites and that you have performed the preinstallation tasks.

When planning your DB2 pureScale installation, review the software, hardware, firmware, and storage hardware configuration options to ensure you meet the requirements.

This topic details the software prerequisites (including operating system, OpenSSH, GPFS™, and Tivoli® SA MP), the storage hardware requirements, and the hardware and firmware prerequisites (network adapters, cables, and switches).

Software prerequisites

Before running the installation, or applying a fix pack with the installFixPack command, ensure that the required fixes are applied to your operating system.
Table 1. Software requirements - AIX operating system version and technology levels
AIX version | Technology Level | Minimum Service Pack (SP) level | Required uDAPL level | AIX APAR
AIX 6.1 | 7 | 6 | 6.1.7.15 | IV35889
AIX 6.1 | 8 | 2 | 6.1.8.15 | IV35889
AIX 6.1 | 9 | 1 | 6.1.9.03 |
AIX 7.1 | 1 | 6 | 7.1.1.15 | IV35894
AIX 7.1 | 2 | 2 | 7.1.2.15 | IV35894
AIX 7.1 | 3 | 1 | 7.1.3.03 |
Note:
  1. IB networks and RoCE networks require uDAPL. Download and install the uDAPL package at the base Technology Level (not the uDAPL packages specific to a fix pack) from the AIX Web Download Pack Programs website: https://www14.software.ibm.com/webapp/iwm/web/reg/signup.do?source=aixbp&lang=en_US&S_PKG=udapl. After installing the base uDAPL package, apply the appropriate uDAPL fix for the Technology Level from the IBM® Support portal: https://www-304.ibm.com/support/docview.wss?q1=U830315&dc=DB510&rs=1209&uid=isg1fileset664799651&cs=UTF-8&lang=en&loc=en_US.
  2. If the AIX system is running on a Technology Level with the minimum Service Pack specified in the table, all APARs listed in the row must be installed, except for AIX APARs marked with an asterisk (*); the asterisk-marked APARs are required only for DB2 pureScale environments with multiple switches. For a system that runs on a Technology Level with a later Service Pack, verify whether the APAR fix is included in that Service Pack level. The first Service Pack that includes the APAR fix is shown in the table, in parentheses next to the APAR. To obtain fixes for the APARs on a system running a Service Pack higher than the minimum required but lower than the Service Pack in which the fix was first included, see IBM Support Fix Central: http://www-933.ibm.com/support/fixcentral/.
  3. Starting with AIX 6.1 TL9 and AIX 7.1 TL3, the required uDAPL level is the uDAPL level that is included in the AIX image. This is subject to change with respect to Technology Level and Service Pack level.
  4. In Version 10.5 Fix Pack 5 and later fix packs, DB2 pureScale supports 40 Gigabit Ethernet as a cluster interconnect transport. A minimum AIX level of AIX 7.1 TL3 SP3 is required for 40 Gigabit Ethernet support.
  5. AIX Version 6.1 cannot use uDAPL version 7.x.x.x.
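For example, to verify that a host meets the levels in Table 1, you can query the operating system directly. This is an illustrative sketch only; it assumes the default AIX command set and a common uDAPL fileset name (udapl.rte), and you should substitute the APAR number listed for your Technology Level.
    # Report the AIX version, Technology Level, and Service Pack (for example, 7100-03-03-1415)
    oslevel -s
    # Report the installed uDAPL level (udapl.rte is an assumed fileset name; adjust if your fileset differs)
    lslpp -l udapl.rte
    # Check whether a required APAR fix is installed (IV35894 is the APAR listed for AIX 7.1 TL1 and TL2)
    instfix -ik IV35894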
Required Software:
Note:
  • GPFS:
    • On Version 10.5 Fix Pack 8 and later fix packs, if you have IBM General Parallel File System (GPFS) already installed, it must be GPFS 4.1.1.4 efix 14.
    • On DB2 Cancun Release 10.5.0.4 and later fix packs, if you have IBM General Parallel File System (GPFS) already installed, it must be GPFS 3.5.0.17. The installation of DB2 pureScale Feature performs the update to the required level automatically.
    • On Version 10.5 Fix Pack 3 and earlier fix packs, if you have IBM General Parallel File System (GPFS) already installed, it must be GPFS 3.5.0.7.
  • Tivoli SA MP:
    • On DB2 Cancun Release 10.5.0.4 and later fix packs, if you have IBM Tivoli System Automation for Multiplatforms (Tivoli SA MP) already installed, it must be Tivoli SA MP 3.2.2.8. The installation of DB2 pureScale Feature upgrades existing Tivoli SA MP installations to this version level.
    • On Version 10.5 Fix Pack 3 and earlier fix packs, if you have Tivoli SA MP already installed, it must be Tivoli SA MP Version 3.2.2.5.
  • RSCT:
    • The IBM Reliable Scalable Cluster Technology (RSCT) level that is included within a Tivoli SA MP package is always the minimum supported level.
    • On DB2 Cancun Release 10.5.0.4 and later fix packs, if you have IBM Tivoli System Automation for Multiplatforms (Tivoli SA MP) already installed, it must be Tivoli SA MP 3.2.2.8. In that case, you can install any higher RSCT level.
    • On Version 10.5 Fix Pack 3 and earlier fix packs, if you have Tivoli SA MP already installed, it must be Tivoli SA MP Version 3.2.2.5. In that case, you can install any higher RSCT level.
  • AIX workload partitions (WPARs) are not supported in a DB2 pureScale environment.
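Before installing, you can check which of these products are already present on a host. The following sketch assumes the standard AIX lslpp command and the usual fileset name prefixes for GPFS (gpfs.*), Tivoli SA MP (sam.*), and RSCT (rsct.*); it only reports installed levels and does not change anything.
    # List any installed GPFS filesets and their levels
    lslpp -l "gpfs.*"
    # List any installed Tivoli SA MP filesets and their levels
    lslpp -l "sam.*"
    # List installed RSCT filesets and their levels
    lslpp -l "rsct.*"
The db2prereqcheck command that is included in the DB2 installation image can also validate DB2 pureScale prerequisites before you run the installation.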

Storage hardware requirements

DB2 pureScale Feature supports all storage area network (SAN) and directly attached shared block storage. Configuring shared storage that is managed by DB2 cluster services is recommended for better resiliency. For more information about DB2 cluster services support, see the “Shared storage considerations” topic. The following storage hardware requirements must be met for DB2 pureScale Feature support.
Table 2. Minimum and recommended free disk space
Location | Recommended free disk space | Minimum required free disk space
Disk to extract installation | 3 GB | 3 GB
Installation path | 6 GB | 6 GB
/tmp directory | 5 GB | 2 GB
/var directory | 5 GB | 2 GB
/usr directory | 2 GB | 512 MB
Instance home directory | 5 GB | 1.5 GB (see note 1)
  1. The disk space that is required for the instance home directory is calculated at run time and varies. Approximately 1 to 1.5 GB is normally required.
The following shared disk space must be free for each file system:
  • Instance shared files: 10 GB (see footnote 1 at the end of this topic)
  • Data: dependent on your specific application needs
  • Logs: dependent on the expected number of transactions and the application's logging requirements
A fourth shared disk is required for use as the DB2 cluster services tiebreaker disk.
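A quick way to confirm the local free space in Table 2, and to see which disks a host can access, is shown in the following sketch. It assumes the standard AIX df and lspv commands; the instance home path /home/db2sdin1 is only an example.
    # Show free space, in GB, for the local file systems listed in Table 2
    df -g /tmp /var /usr /home/db2sdin1
    # List the physical volumes (including shared LUNs) that this host can see
    lspv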

Hardware and firmware prerequisites

Note: Given the widely varying nature of such systems, IBM cannot practically guarantee to have tested on all possible systems or variations of systems. In the event of problem reports for which IBM deems reproduction necessary, IBM reserves the right to attempt problem reproduction on a system that may not match the system on which the problem was reported.
The DB2 pureScale Feature is supported on POWER8®-compatible rack-mounted servers that support one of the following Ethernet RoCE adapters:
  • PCIe2 2-Port 10GbE RoCE SFP+ Adapter with feature code EC27, EC28, EC29, EC30
  • PCIe3 2-Port 40GbE NIC RoCE QSFP+ Adapter with feature code EC3A, EC3B

In DB2 Cancun Release 10.5.0.4 and later fix packs, a DB2 pureScale environment is supported on any rack-mounted server or blade server.

On an RDMA protocol network, a DB2 pureScale environment is supported on any POWER7®-compatible rack-mounted server that supports one of these Ethernet RoCE or InfiniBand QDR adapters:
  • PCIe2 2-Port 10GbE RoCE SFP+ Adapter with feature code EC27, EC28, EC29, EC30
  • PCIe2 2-port 4X InfiniBand QDR Adapter with feature code 5283, 5285

On an RDMA protocol network, a DB2 pureScale environment is also supported on any POWER6® or POWER7-compatible rack-mounted server listed in the DDR - InfiniBand support table, and on newer equivalent models supported by POWER®.

On a TCP/IP protocol over Ethernet (TCP/IP) network, a DB2 pureScale environment requires only one high-speed network for the DB2 cluster interconnect. Running your DB2 pureScale environment on a TCP/IP network can provide a faster setup for testing the technology. However, for the most demanding write-intensive data sharing workloads, an RDMA protocol over Converged Ethernet (RoCE) network can offer better performance.

InfiniBand (IB) networks and RoCE networks that use the RDMA protocol require two networks: one (public) Ethernet network and one (private) high-speed communication network for communication between members and CFs. The high-speed communication network must be an IB network, a RoCE network, or a TCP/IP network. A mixture of these high-speed communication networks is not supported.

Note: Although a single Ethernet adapter is required on a host for the public network in a DB2 pureScale environment, you should set up Ethernet bonding for the network if you have two Ethernet adapters. Ethernet bonding (also known as channel bonding) is a setup where two or more network interfaces are combined. Ethernet bonding provides redundancy and better resilience in the event of Ethernet network failures. Refer to your Ethernet documentation for instructions on configuring Ethernet bonding. Network interfaces used for a DB2 pureScale cluster interconnect with RDMA must not be bonded.

The rest of this hardware and firmware prerequisites section applies to environments that use the RDMA protocol.

Cables and switches: a DB2 pureScale environment is supported on any 10GE or QDR cable and switch that is supported by a POWER7 or POWER8 server.

The communication adapter port can connect to:
  • a RoCE network, or
  • an InfiniBand (IB) network.
To use a RoCE network, all network adapters and switches must be capable of remote direct memory access (RDMA) over Converged Ethernet (RoCE).
The hardware and firmware requirements for IBM validated servers are listed in the tables later in this section. Servers in a DB2 pureScale environment must use both an Ethernet network and a high-speed communication adapter port.
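To confirm which communication adapters are physically present on a host before comparing them against these tables, you can query the device configuration. This is an illustrative sketch that assumes the standard AIX lsdev and lscfg commands; the device name ent0 is only an example.
    # List all adapters that AIX has configured on this host
    lsdev -Cc adapter
    # Show vital product data, including the feature code and part number, for one adapter
    lscfg -vpl ent0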
Table 3. Server-specific hardware details - IBM validated RDMA over Converged Ethernet (RoCE) support and required firmware level
Server | Minimum Required Platform Firmware level | PCIe Support for RoCE network adapters
IBM POWER7 780/HE (9179-MHC) | AM740_042_042 | PCIe2 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC28) (Copper); PCIe2 2-Port 10GbE RoCE SR Adapter (Feature code EC30) (Optical)
IBM POWER7 770/MR (9117-MMC) | AM740_042_042 | PCIe2 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC28) (Copper); PCIe2 2-Port 10GbE RoCE SR Adapter (Feature code EC30) (Optical)
IBM POWER7 780/HE (9179-MMD) | AM760_034_034 | PCIe2 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC28) (Copper); PCIe2 2-Port 10GbE RoCE SR Adapter (Feature code EC30) (Optical)
IBM POWER7 770/MR (9117-MMD) | AM760_034_034 | PCIe2 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC28) (Copper); PCIe2 2-Port 10GbE RoCE SR Adapter (Feature code EC30) (Optical)
IBM POWER7 720 1S (8202-E4C with optional low-profile slots) | AL740_043_042 | PCIe2 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC28) (Copper); PCIe2 2-Port 10GbE RoCE SR Adapter (Feature code EC30) (Optical); PCIe2 Low Profile 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC27) (Copper) in the PCIe Newcombe Riser Card (Feature code 5685); PCIe2 Low Profile 2-Port 10GbE RoCE SR Adapter (Feature code EC29) (Optical) in the PCIe Newcombe Riser Card (Feature code 5685)
IBM POWER7 740 2S (8205-E6C with optional low-profile slots) | AL740_043_042 | PCIe2 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC28) (Copper); PCIe2 2-Port 10GbE RoCE SR Adapter (Feature code EC30) (Optical); PCIe2 Low Profile 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC27) (Copper) in the PCIe Newcombe Riser Card (Feature code 5685); PCIe2 Low Profile 2-Port 10GbE RoCE SR Adapter (Feature code EC29) (Optical) in the PCIe Newcombe Riser Card (Feature code 5685)
IBM POWER7 710 1S (8231-E1C) | AL740_043_042 | PCIe2 Low Profile 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC27) (Copper); PCIe2 Low Profile 2-Port 10GbE RoCE SR Adapter (Feature code EC29) (Optical)
IBM POWER7 730 2S (8231-E2C) | AL740_043_042 | PCIe2 Low Profile 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC27) (Copper); PCIe2 Low Profile 2-Port 10GbE RoCE SR Adapter (Feature code EC29) (Optical)
IBM Flex System® p260 Compute Node (7895-22X) | AF763_042 | EN4132 2-port 10Gb RoCE Adapter (Feature code EC26)
IBM Flex System p260 Compute Node (7895-23X) | AF763_042 | EN4132 2-port 10Gb RoCE Adapter (Feature code EC26)
IBM Flex System p460 Compute Node (7895-42X) | AF763_042 | EN4132 2-port 10Gb RoCE Adapter (Feature code EC26)
IBM Flex System p460 Compute Node (7895-43X) | AF763_042 | EN4132 2-port 10Gb RoCE Adapter (Feature code EC26)
Note: RoCE adapters do not support virtualization. Each LPAR requires a dedicated RoCE adapter. For example, if a machine has two LPARs (one for CF and one for member), each of these LPARs must have its own dedicated RoCE adapter.
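To compare a server against the firmware levels in these tables, you can read the platform firmware level from AIX. The following sketch is illustrative and assumes the standard lsmcode and prtconf commands; the output format varies by machine type.
    # Show the currently active platform firmware level (concise output)
    lsmcode -c
    # Alternative: the system configuration report also includes the firmware version
    prtconf | grep -i firmware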
Table 4. Server-specific hardware details for IBM validated QDR - InfiniBand support and required firmware level
Server | Minimum Required Platform Firmware level | PCIe2 Dual port QDR InfiniBand Channel adapter
IBM POWER7 780/HE (9179-MHC) | AM740_042_042 | PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285)
IBM POWER7 770/MR (9117-MMC) | AM740_042_042 | PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285)
IBM POWER7 740 2S (8205-E6C with optional low-profile slots) | AL740_043_042 | PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285), or PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685), or both
IBM POWER7 740 (8205-E6B) with Newcombe (optional low-profile Gen2 slots) | AL720_102 | PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685)
IBM POWER7 710 (8231-E1C) | AL740_043_042 | PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685)
IBM POWER7 720 (8202-E4B) | AL730_066_035 | PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685)
IBM POWER7 720 (8202-E4C) | AL740_043_042 | PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285), or PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685), or both
IBM POWER7 730 2S (8231-E2C) | AL740_043_042 | PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685)
Note:
  • Although QDR IB switches can no longer be purchased through IBM, DB2 for Linux, UNIX, and Windows still supports configurations with QDR IB switches supported by Intel.
  • QDR IB adapters do not support virtualization. Each LPAR requires a dedicated QDR IB adapter. For example, if a machine has two LPARs (one for CF and one for member), each of these LPARs must have its own dedicated QDR IB adapter.
Table 5. Server-specific hardware details for IBM validated DDR - InfiniBand support1 and required firmware level
Server | Minimum Required Platform Firmware level | InfiniBand network adapter, GX Dual-port 12x Channel Attach - DDR InfiniBand Channel adapter | InfiniBand Channel conversion cables
IBM POWER7 795 (9119-FHB) * | AH720_102 or higher | Feature Code 1816 | 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 780 (9179-MHB) * | AM720_102 or higher | Feature Code 1808 | 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 780 (9179-MHC) * | AM740_042 or higher | Feature Code 1808 | 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 770 (9117-MMB) * | AM720_102 or higher | Feature Code 1808 | 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 770 (9117-MMC) * | AM740_042 or higher | Feature Code 1808 | 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 750 (8233-E8B) | AL730_049 or higher | Feature Code 5609 | 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 740 (8205-E6C) | AL720_102 or higher | Feature Code EJ04 | 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 740 (8205-E6B) | AL720_102 or higher | Feature Code 5615 | 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 730 (8231-E2B) | AL720_102 or higher | Feature Code 5266 | 4x to 4x cables (Feature Code 3246)
IBM POWER7 720 (8202-E4C) | AL720_102 or higher | Feature Code EJ04 | 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 720 (8202-E4B) | AL720_102 or higher | Feature Code 5615 | 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 710 (8231-E2B) | AL720_102 or higher | Feature Code 5266 | 4x to 4x cables (Feature Code 3246)
IBM POWER6 595 (9119-FHA) | EH350_071 or higher | Feature Code 1816 | 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER6 550 Express (8204-E8A) | EL350_071 or higher | Feature Code 5609 | 12x to 4x (Feature Code 1828, 1841, or 1854)
Note:
  1. Although DDR IB hardware can no longer be purchased through IBM, DB2 for Linux, UNIX, and Windows still supports configurations with DDR IB.
  2. When acquiring systems, consider the I/O ports available and future workloads for greater flexibility and scalability. The servers marked with an asterisk (*) are designed for enterprise applications. For more information about selecting the hardware, see "Site and hardware planning" in the IBM System Hardware documentation: http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp.
  3. InfiniBand Channel conversion cables are available in multiple lengths, each with a different product feature code (FC). Some different 12x to 4x InfiniBand Channel conversion cable lengths available are 1.5 m (FC 1828), 3 m (FC 1841), and 10 m (FC 1854). Your data center layout and the relative location of the hardware in the DB2 pureScale environment are factors that must be considered when selecting the cable length.

Cable Information:

Table 6. 40GE cable information (1, 3, 5, 10, and 30 meters)
Cable length | 1 meter (copper) | 3 meter (copper) | 5 meter (copper) | 10 meter (optical) | 30 meter (optical)
Feature Code number | EB2B | EB2H | ECBN | EB2J | EB2K
Table 7. 10GE cable information (1, 3 and 5 meters)
Cable length | 1 meter | 3 meter | 5 meter
Feature Code number | EN01 | EN02 | EN03
Note:
  • IBM Qualified Copper SFP+ cables or standard 10-Gb SR optical cabling (up to 300 meter cable length) can be used for connecting RoCE adapters to the 10GE switches.
Table 8. IBM Qualified QSFP+ Cable Information for 10GE RoCE
Cable length | 1 meter | 3 meter
Feature Code number | EB2B | EB2H
Note:
  • IBM Qualified QSFP+ cables can be used as inter-switch links between POWER Flex System 10GE switches.
Table 9. QDR IB cable information (1, 3, 5, 10, 30 meters)
Cable length | 1 meter (copper) | 3 meter (copper) | 5 meter (copper) | 10 meter (optical) | 30 meter (optical)
Feature Code number | 3287 | 3288 | 3289 | 3290 | 3293
In general, any 10GE or 40GE switch that supports global pause flow control, as specified by IEEE 802.3x, is also supported. For details on required switch configurations, see Switch configuration on a RoCE network (AIX).
Table 10. IBM validated 10GE switches for RDMA
IBM Validated Switch
Blade Network Technologies RackSwitch G8124
Juniper Networks QFX3500 Switch
Table 11. IBM validated 40GE switches for RDMA
IBM Validated Switch
Blade Network Technologies RackSwitch G8318
Note:
  • IBM Qualified Copper SFP+ cables or standard 10-Gb SR optical cabling (up to 300 meter cable length) can be used as inter-switch links. SFP+ cables of 3 meters or 7 meters supplied by Juniper can be used between Juniper switches.
  • For the switch configuration and the features that must be enabled or disabled, see Switch configuration on a RoCE network (AIX). The exact setup instructions might differ from what is documented in the switch section, which is based on the IBM validated switches; refer to the switch user manual for details.
You must not intermix 10 Gigabit and 40 Gigabit Ethernet network switch types. The same type of switch, adapter and cables must be used in a cluster. A server using a 10G adapter must use a 10G type switch and the corresponding cables. A server using a 40G adapter must use a 40G type switch and the corresponding cables.
Table 12. Supported InfiniBand network switches
InfiniBand switch | Intel model number | Number of ports | Type | Required rack space
IBM 7874-024 | 9024 | 24 | 4x DDR InfiniBand Edge Switch | 1U
IBM 7874-040 | 9040 | 48 | 4x DDR InfiniBand Fabric Director Switch | 4U
IBM 7874-120 | 9102 | 128 | 4x DDR InfiniBand Fabric Director Switch | 7U
IBM 7874-240 | 9240 | 288 | 4x DDR InfiniBand Fabric Director Switch | 14U
IBM 7874-036 | 12200 | 36 | QDR InfiniBand Switch | 1U
IBM 7874-072 | 12800-040 | 72 | QDR InfiniBand Switch | 5U
IBM 7874-324 | 12800-180 | 324 | QDR InfiniBand Switch | 14U
Note:
  • All of the InfiniBand switches listed in the previous table must use the embedded subnet management functionality. When ordering InfiniBand switches from Intel, management modules must be purchased for the switch.
  • Although IB switches can no longer be purchased through IBM, DB2 for Linux, UNIX, and Windows still supports configurations with IB switches supported by Intel.
  • If you use two switches in the DB2 pureScale environment, two or more 4x to 4x inter-switch links (ISLs) are required. To help with performance and fault tolerance to inter-switch link failures, use a number of inter-switch links equal to half the total number of communication adapter ports connected from the CFs and members to the switches. For example, in a two-switch DB2 pureScale environment where the primary and secondary CF each have four cluster interconnect netnames and there are four members, use 6 inter-switch links (6 = (2 x 4 + 4) / 2), as shown in the sample calculation after this list. Choose 4x to 4x InfiniBand ISL cables of an appropriate length for your network environment.
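To reproduce that arithmetic for your own topology, the following shell sketch computes the recommended number of inter-switch links; the counts of CFs, cluster interconnect netnames, and members are the hypothetical values from the example in the previous note.
    # Hypothetical topology: 2 CFs with 4 cluster interconnect netnames each, and 4 members with 1 port each
    CF_PORTS=$(( 2 * 4 ))
    MEMBER_PORTS=4
    # Recommended ISLs = half of all communication adapter ports connected to the switches
    echo "Recommended inter-switch links: $(( (CF_PORTS + MEMBER_PORTS) / 2 ))"   # prints 6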

DDR and QDR InfiniBand network switch types cannot be intermixed. The same type of switch, adapter and cables must be used in a cluster. A server using a DDR IB adapter must use a DDR type switch and the corresponding cables. A server using a QDR IB adapter must use a QDR type switch and the corresponding cables.

1 For better I/O performance, create a separate GPFS file system to hold your database and specify this shared disk on the create database command.
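For example, assuming a GPFS file system mounted at /db2fs/salesdata was created for this purpose (both the database name and the path in this sketch are hypothetical), the database can be placed on it as follows:
    db2 "CREATE DATABASE SALESDB ON /db2fs/salesdata"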