Installation prerequisites for DB2 pureScale Feature (AIX)
When planning your DB2 pureScale installation, review the software, hardware, firmware, and storage hardware configuration options to ensure you meet the requirements.
This topic details the software prerequisites (operating system, OpenSSH, GPFS™, and Tivoli® SA MP), storage hardware requirements, and hardware and firmware prerequisites (network adapters, cables, and switches).
Software prerequisites
AIX version | Technology Level | Minimum Service Pack (SP) level | Required uDAPL level | AIX APAR |
---|---|---|---|---|
AIX 6.1 | 7 | 6 | 6.1.7.15 | IV35889 |
AIX 6.1 | 8 | 2 | 6.1.8.15 | IV35889 |
AIX 6.1 | 9 | 1 | 6.1.9.03 | |
AIX 7.1 | 1 | 6 | 7.1.1.15 | IV35894 |
AIX 7.1 | 2 | 2 | 7.1.2.15 | IV35894 |
AIX 7.1 | 3 | 1 | 7.1.3.03 | |
- IB networks and RoCE networks require uDAPL. Download and install the uDAPL package at the base Technology Level (not the uDAPL packages specific to a fix pack) from the AIX Web Download Pack Programs website: https://www14.software.ibm.com/webapp/iwm/web/reg/signup.do?source=aixbp&lang=en_US&S_PKG=udapl. After installing the base uDAPL package, apply the appropriate uDAPL fix for the Technology Level from the IBM® Support portal: https://www-304.ibm.com/support/docview.wss?q1=U830315&dc=DB510&rs=1209&uid=isg1fileset664799651&cs=UTF-8&lang=en&loc=en_US.
- If the AIX system runs a Technology Level at the minimum Service Pack listed in the table, all APARs in that row must be installed, except for AIX APARs marked with an asterisk (*); asterisk-marked APARs are required only for DB2 pureScale environments with multiple switches. If the system runs a Technology Level with a later Service Pack, verify whether the APAR fix is already included in that Service Pack level; the first Service Pack that includes an APAR fix is shown next to the APAR in parentheses. To obtain APAR fixes for a system running a Service Pack higher than the minimum required but lower than the Service Pack that first includes the fix, see IBM Support Fix Central: http://www-933.ibm.com/support/fixcentral/.
- Starting with AIX 6.1 TL9 and AIX 7.1 TL3, the required uDAPL level is the uDAPL level that is included in the AIX image. This level is subject to change with the Technology Level and Service Pack level.
- In Version 10.5 Fix Pack 5 and later fix packs, DB2 pureScale supports 40 Gigabit Ethernet as a cluster interconnect transport. 40 Gigabit Ethernet support requires a minimum AIX level of AIX 7.1 TL3 SP3.
- AIX Version 6.1 cannot use uDAPL version 7.x.x.x.
- OpenSSH level 4.5.0.5302 or higher
- For minimum C++ runtime level required, see Additional installation considerations (AIX).
- GPFS:
- On Version 10.5 Fix Pack 8 and later fix packs, if you have IBM General Parallel File System (GPFS) already installed, it must be GPFS 4.1.1.4 efix 14.
- On DB2 Cancun Release 10.5.0.4 and later fix packs, if you have GPFS already installed, it must be GPFS 3.5.0.17. The DB2 pureScale Feature installation updates GPFS to the required level automatically.
- On Version 10.5 Fix Pack 3 and earlier fix packs, if you have GPFS already installed, it must be GPFS 3.5.0.7.
- Tivoli SA MP:
- On DB2 Cancun Release 10.5.0.4 and later fix packs, if you have IBM Tivoli System Automation for Multiplatforms (Tivoli SA MP) already installed, it must be Tivoli SA MP 3.2.2.8. The installation of DB2 pureScale Feature upgrades existing Tivoli SA MP installations to this version level.
- On Version 10.5 Fix Pack 3 and earlier fix packs, if you have Tivoli SA MP already installed, it must be Tivoli SA MP Version 3.2.2.5.
- RSCT:
- The IBM Reliable Scalable Cluster Technology (RSCT) level included in a Tivoli SA MP package is always the minimum supported level.
- On DB2 Cancun Release 10.5.0.4 and later fix packs, if you have Tivoli SA MP already installed, it must be Tivoli SA MP 3.2.2.8; customers can install any later RSCT level.
- On Version 10.5 Fix Pack 3 and earlier fix packs, if you have Tivoli SA MP already installed, it must be Tivoli SA MP Version 3.2.2.5; customers can install any later RSCT level.
- AIX workload partitions (WPARs) are not supported in a DB2 pureScale environment.
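The AIX level requirements in the table above can be checked mechanically. The sketch below is illustrative only (it is not part of the DB2 installer): it parses an `oslevel -s` style string, assumed to be in the `VVRR-TL-SP-YYWW` format (for example, `7100-03-01-1341` for AIX 7.1 TL3 SP1), and compares it against the minimum Technology Level / Service Pack pairs from the table.

```python
# Illustrative sketch: validate an AIX "oslevel -s" string against the
# minimum TL/SP combinations from the prerequisites table above.
# Assumption: the string is formatted VVRR-TL-SP-YYWW, e.g. "7100-03-01-1341".

# Minimum (Technology Level, Service Pack) pairs per AIX release, from the table.
MINIMUMS = {
    "6100": [(7, 6), (8, 2), (9, 1)],   # AIX 6.1: TL7 SP6, TL8 SP2, TL9 SP1
    "7100": [(1, 6), (2, 2), (3, 1)],   # AIX 7.1: TL1 SP6, TL2 SP2, TL3 SP1
}

def meets_minimum(oslevel_s: str) -> bool:
    """Return True if the level matches a listed TL at or above its minimum
    SP, or is a TL newer than any TL listed in the table."""
    base, tl, sp = oslevel_s.split("-")[:3]
    tl, sp = int(tl), int(sp)
    pairs = MINIMUMS.get(base)
    if not pairs:
        return False
    return any(tl == t and sp >= s for t, s in pairs) \
        or tl > max(t for t, _ in pairs)

print(meets_minimum("7100-03-01-1341"))  # AIX 7.1 TL3 SP1 -> True
print(meets_minimum("6100-07-05-1228"))  # AIX 6.1 TL7 SP5 -> False (TL7 needs SP6)
```

Note that this only checks the operating system level; APAR and uDAPL verification (for example, with `instfix -ik <APAR>`) must still be done separately.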
Storage hardware requirements
| Recommended free disk space | Minimum required free disk space |
---|---|---|
Disk to extract installation | 3 GB | 3 GB |
Installation path | 6 GB | 6 GB |
/tmp directory | 5 GB | 2 GB |
/var directory | 5 GB | 2 GB |
/usr directory | 2 GB | 512 MB |
Instance home directory | 5 GB | 1.5 GB¹ |
- ¹ The disk space that is required for the instance home directory is calculated at run time and varies. Approximately 1 to 1.5 GB is normally required.
- Instance shared files: 10 GB¹
- Data: dependent on your specific application needs
- Logs: dependent on the expected number of transactions and the applications' logging requirements
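Before installing, you can compare the free space on each file system against the minimums in the table above. The sketch below is a minimal illustration, not part of the DB2 installer; the paths are examples, and you should substitute your actual extraction directory, installation path, and instance home directory.

```python
# Illustrative sketch: check free space on the file systems from the storage
# requirements table. Paths here are examples; adjust to your environment.
import shutil

GB = 1024 ** 3
MB = 1024 ** 2

# (path, minimum required free space) pairs taken from the table above.
MINIMUMS = [
    ("/tmp", 2 * GB),
    ("/var", 2 * GB),
    ("/usr", 512 * MB),
]

def check_free_space(minimums):
    """Return (path, free_bytes, required_bytes) for each path that falls short."""
    shortfalls = []
    for path, required in minimums:
        free = shutil.disk_usage(path).free
        if free < required:
            shortfalls.append((path, free, required))
    return shortfalls

for path, free, required in check_free_space(MINIMUMS):
    print(f"{path}: {free / GB:.1f} GB free, {required / GB:.1f} GB required")
```

An empty result means every listed path meets its minimum; remember that the minimums are lower bounds, and the recommended values in the table give more headroom.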
Hardware and firmware prerequisites
In DB2 Cancun Release 10.5.0.4 and later fix packs, a DB2 pureScale environment is supported on any rack mounted server or blade server with one of the following adapters:
- PCIe2 2-Port 10GbE RoCE SFP+ Adapter with feature code EC27, EC28, EC29, EC30
- PCIe3 2-Port 40GbE NIC RoCE QSFP+ Adapter with feature code EC3A, EC3B

On an RDMA protocol network, a DB2 pureScale environment is supported on any POWER6® or POWER7 compatible rack mounted server in the DDR InfiniBand support table, and newer equivalent models supported by POWER®, with one of the following adapters:
- PCIe2 2-Port 10GbE RoCE SFP+ Adapter with feature code EC27, EC28, EC29, EC30
- PCIe2 2-port 4X InfiniBand QDR Adapter with feature code 5283, 5285
On a TCP/IP protocol over Ethernet (TCP/IP) network, a DB2 pureScale environment requires only one high speed network for the DB2 cluster interconnect. Running your DB2 pureScale environment on a TCP/IP network can provide a faster setup for testing the technology. However, for the most demanding write-intensive data sharing workloads, an RDMA protocol over Converged Ethernet (RoCE) network can offer better performance.
InfiniBand (IB) networks and RoCE networks using RDMA protocol require two networks: one (public) Ethernet network and one (private) high speed communication network for communication between members and CFs. The high speed communication network must be an IB network, a RoCE network, or a TCP/IP network. A mixture of these high speed communication networks is not supported.
The rest of this hardware and firmware prerequisites section applies to using RDMA protocol.
Cables and switches: a DB2 pureScale environment is supported on any 10GE and QDR cable and switch that is supported by a POWER7 or POWER8 server. The RDMA high speed communication network can be either:
- a RoCE network, or
- an InfiniBand (IB) network.

The following sections provide server-specific hardware details, cable information, and switch information.
Server | Minimum Required Platform Firmware level | PCIe Support for RoCE network adapters |
---|---|---|
IBM POWER7 780/HE (9179-MHC) | AM740_042_042 | PCIe2 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC28) (Copper); PCIe2 2-Port 10GbE RoCE SR Adapter (Feature code EC30) (Optical) |
IBM POWER7 770/MR (9117-MMC) | AM740_042_042 | PCIe2 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC28) (Copper); PCIe2 2-Port 10GbE RoCE SR Adapter (Feature code EC30) (Optical) |
IBM POWER7 780/HE (9179-MMD) | AM760_034_034 | PCIe2 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC28) (Copper); PCIe2 2-Port 10GbE RoCE SR Adapter (Feature code EC30) (Optical) |
IBM POWER7 770/MR (9117-MMD) | AM760_034_034 | PCIe2 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC28) (Copper); PCIe2 2-Port 10GbE RoCE SR Adapter (Feature code EC30) (Optical) |
IBM POWER7 720 1S (8202-E4C with optional low-profile slots) | AL740_043_042 | PCIe2 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC28) (Copper); PCIe2 2-Port 10GbE RoCE SR Adapter (Feature code EC30) (Optical); PCIe2 Low Profile 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC27) (Copper) in the PCIe Newcombe Riser Card (Feature code 5685); PCIe2 Low Profile 2-Port 10GbE RoCE SR Adapter (Feature code EC29) (Optical) in the PCIe Newcombe Riser Card (Feature code 5685) |
IBM POWER7 740 2S (8205-E6C with optional low-profile slots) | AL740_043_042 | PCIe2 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC28) (Copper); PCIe2 2-Port 10GbE RoCE SR Adapter (Feature code EC30) (Optical); PCIe2 Low Profile 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC27) (Copper) in the PCIe Newcombe Riser Card (Feature code 5685); PCIe2 Low Profile 2-Port 10GbE RoCE SR Adapter (Feature code EC29) (Optical) in the PCIe Newcombe Riser Card (Feature code 5685) |
IBM POWER7 710 1S (8231-E1C) | AL740_043_042 | PCIe2 Low Profile 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC27) (Copper); PCIe2 Low Profile 2-Port 10GbE RoCE SR Adapter (Feature code EC29) (Optical) |
IBM POWER7 730 2S (8231-E2C) | AL740_043_042 | PCIe2 Low Profile 2-Port 10GbE RoCE SFP+ Adapter (Feature code EC27) (Copper); PCIe2 Low Profile 2-Port 10GbE RoCE SR Adapter (Feature code EC29) (Optical) |
IBM Flex System® p260 Compute Node (7895-22X) | AF763_042 | EN4132 2-port 10Gb RoCE Adapter (Feature code EC26) |
IBM Flex System p260 Compute Node (7895-23X) | AF763_042 | EN4132 2-port 10Gb RoCE Adapter (Feature code EC26) |
IBM Flex System p460 Compute Node (7895-42X) | AF763_042 | EN4132 2-port 10Gb RoCE Adapter (Feature code EC26) |
IBM Flex System p460 Compute Node (7895-43X) | AF763_042 | EN4132 2-port 10Gb RoCE Adapter (Feature code EC26) |
Server | Minimum Required Platform Firmware level | PCIe2 Dual port QDR InfiniBand Channel adapter |
---|---|---|
IBM POWER7 780/HE (9179-MHC) | AM740_042_042 | PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285) |
IBM POWER7 770/MR (9117-MMC) | AM740_042_042 | PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285) |
IBM POWER7 740 2S (8205-E6C with optional low-profile slots) | AL740_043_042 | PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285), or PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685), or both |
IBM POWER7 740 (8205-E6B) with Newcombe (optional low-profile Gen2 slots) | AL720_102 | PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685) |
IBM POWER7 710 (8231-E1C) | AL740_043_042 | PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature Code: 5685) |
IBM POWER7 720 (8202-E4B) | AL730_066_035 | PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature Code: 5685) |
IBM POWER7 720 (8202-E4C) | AL740_043_042 | PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285), or PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685), or both |
IBM POWER7 730 2S (8231-E2C) | AL740_043_042 | PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature Code: 5685) |
- Although QDR IB switches can no longer be purchased through IBM, DB2 for Linux, UNIX, and Windows still supports configurations with QDR IB switches supported by Intel.
- QDR IB adapters do not support virtualization. Each LPAR requires a dedicated QDR IB adapter. For example, if a machine has two LPARs (one for CF and one for member), each of these LPARs must have its own dedicated QDR IB adapter.
Server | Minimum Required Platform Firmware level | InfiniBand network adapter, GX Dual-port 12x Channel Attach - DDR InfiniBand Channel adapter | InfiniBand Channel conversion cables |
---|---|---|---|
IBM POWER7 795 (9119-FHB) * | AH720_102 or higher | Feature Code 1816 | 12x to 4x (Feature Code 1828, 1841, or 1854) |
IBM POWER7 780 (9179-MHB) * | AM720_102 or higher | Feature Code 1808 | 12x to 4x (Feature Code 1828, 1841, or 1854) |
IBM POWER7 780 (9179-MHC) * | AM740_042 or higher | Feature Code 1808 | 12x to 4x (Feature Code 1828, 1841, or 1854) |
IBM POWER7 770 (9117-MMB) * | AM720_102 or higher | Feature Code 1808 | 12x to 4x (Feature Code 1828, 1841, or 1854) |
IBM POWER7 770 (9117-MMC) * | AM740_042 or higher | Feature Code 1808 | 12x to 4x (Feature Code 1828, 1841, or 1854) |
IBM POWER7 750 (8233-E8B) | AL730_049 or higher | Feature Code 5609 | 12x to 4x (Feature Code 1828, 1841, or 1854) |
IBM POWER7 740 (8205-E6C) | AL720_102 or higher | Feature Code EJ04 | 12x to 4x (Feature Code 1828, 1841, or 1854) |
IBM POWER7 740 (8205-E6B) | AL720_102 or higher | Feature Code 5615 | 12x to 4x (Feature Code 1828, 1841, or 1854) |
IBM POWER7 730 (8231-E2B) | AL720_102 or higher | Feature Code 5266 | 4x to 4x cables (Feature Code 3246) |
IBM POWER7 720 (8202-E4C) | AL720_102 or higher | Feature Code EJ04 | 12x to 4x (Feature Code 1828, 1841, or 1854) |
IBM POWER7 720 (8202-E4B) | AL720_102 or higher | Feature Code 5615 | 12x to 4x (Feature Code 1828, 1841, or 1854) |
IBM POWER7 710 (8231-E2B) | AL720_102 or higher | Feature Code 5266 | 4x to 4x cables (Feature Code 3246) |
IBM POWER6 595 (9119-FHA) | EH350_071 or higher | Feature Code 1816 | 12x to 4x (Feature Code 1828, 1841, or 1854) |
IBM POWER6 550 Express (8204-E8A) | EL350_071 or higher | Feature Code 5609 | 12x to 4x (Feature Code 1828, 1841, or 1854) |
- Although DDR IB hardware can no longer be purchased through IBM, DB2 for Linux, UNIX, and Windows still supports configurations with DDR IB.
- When acquiring systems, consider the I/O ports available and future workloads for greater flexibility and scalability. The servers marked with an asterisk (*) are designed for enterprise applications. For more information about selecting the hardware, see "Site and hardware planning" in the IBM System Hardware documentation: http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp.
- InfiniBand Channel conversion cables are available in multiple lengths, each with a different product feature code (FC). Some different 12x to 4x InfiniBand Channel conversion cable lengths available are 1.5 m (FC 1828), 3 m (FC 1841), and 10 m (FC 1854). Your data center layout and the relative location of the hardware in the DB2 pureScale environment are factors that must be considered when selecting the cable length.
Cable Information:
1 meter (copper) | 3 meter (copper) | 5 meter (copper) | 10 meter (optical) | 30 meter (optical) | |
---|---|---|---|---|---|
Feature Code number | EB2B | EB2H | ECBN | EB2J | EB2K |
1 meter | 3 meter | 5 meter | |
---|---|---|---|
Feature Code number | EN01 | EN02 | EN03 |
- IBM Qualified Copper SFP+ cables or standard 10-Gb SR optical cabling (up to 300 meter cable length) can be used for connecting RoCE adapters to the 10GE switches.
1 meter | 3 meter | |
---|---|---|
Feature Code number | EB2B | EB2H |
- IBM Qualified QSFP+ cables can be used as inter-switch links between POWER Flex System 10GE switches.
1 meter (copper) | 3 meter (copper) | 5 meter (copper) | 10 meter (optical) | 30 meter (optical) | |
---|---|---|---|---|---|
Feature Code number | 3287 | 3288 | 3289 | 3290 | 3293 |
IBM Validated Switch |
---|
Blade Network Technologies RackSwitch G8124 |
Juniper Networks QFX3500 Switch |
IBM Validated Switch |
---|
Blade Network Technologies RackSwitch G8318 |
- IBM Qualified Copper SFP+ cables or standard 10-Gb SR optical cabling (up to 300 meter cable length) can be used as inter-switch links. Between Juniper switches, 3 meter or 7 meter SFP+ cables supplied by Juniper can be used.
- For the required switch configuration, including the features that must be enabled and disabled, see Switch configuration on a RoCE network (AIX). The exact setup instructions might differ from what is documented there, which is based on the IBM validated switches; refer to the switch user manual for details.
InfiniBand switch | Intel model number | Number of ports | Type | Required rack space |
---|---|---|---|---|
IBM 7874-024 | 9024 | 24 | 4x DDR InfiniBand Edge Switch | 1U |
IBM 7874-040 | 9040 | 48 | 4x DDR InfiniBand Fabric Director Switch | 4U |
IBM 7874-120 | 9102 | 128 | 4x DDR InfiniBand Fabric Director Switch | 7U |
IBM 7874-240 | 9240 | 288 | 4x DDR InfiniBand Fabric Director Switch | 14U |
IBM 7874-036 | 12200 | 36 | QDR InfiniBand Switch | 1U |
IBM 7874-072 | 12800-040 | 72 | QDR InfiniBand Switch | 5U |
IBM 7874-324 | 12800-180 | 324 | QDR InfiniBand Switch | 14U |
- All of the InfiniBand switches listed in the previous table must use the embedded subnet management functionality. When ordering InfiniBand switches from Intel, management modules must be purchased for the switch.
- Although IB switches can no longer be purchased through IBM, DB2 for Linux, UNIX, and Windows still supports configurations with IB switches supported by Intel.
- If using two switches in the DB2 pureScale environment, two or more 4x to 4x inter-switch links (ISL) are required. For performance and fault tolerance to inter-switch link failures, use a number of inter-switch links equal to half the total number of communication adapter ports connected from the CFs and members to the switches. For example, in a two-switch DB2 pureScale environment where the primary and secondary CFs each have four cluster interconnect netnames and there are four members, use six inter-switch links ((2 x 4 + 4) / 2 = 6). Choose 4x to 4x InfiniBand ISL cables of appropriate length for your network environment.
DDR and QDR InfiniBand network switch types cannot be intermixed. The same type of switch, adapter, and cables must be used in a cluster: a server with a DDR IB adapter must use a DDR switch and the corresponding cables, and a server with a QDR IB adapter must use a QDR switch and the corresponding cables.
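The inter-switch link sizing rule above can be expressed as a short calculation. This is a minimal sketch of that rule only, under the assumptions stated in the text (two switches, a minimum of two ISLs, and one ISL per two adapter ports); it is not an IBM-supplied tool.

```python
# Worked example of the two-switch inter-switch link (ISL) sizing rule:
# use half the total number of communication adapter ports connected from
# the CFs and members to the switches, with a minimum of two ISLs.
import math

def required_isls(cf_ports_each, num_cfs, member_ports_each, num_members):
    """Half the total adapter ports, rounded up, never fewer than two."""
    total_ports = cf_ports_each * num_cfs + member_ports_each * num_members
    return max(2, math.ceil(total_ports / 2))

# Primary and secondary CFs with four cluster interconnect netnames each,
# plus four members with one port each: (2 * 4 + 4) / 2 = 6 ISLs.
print(required_isls(cf_ports_each=4, num_cfs=2,
                    member_ports_each=1, num_members=4))  # -> 6
```

The `max(2, ...)` floor reflects the requirement that a two-switch environment always has at least two inter-switch links.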