Installation prerequisites for Db2 pureScale Feature (AIX)

Before you install a Db2 pureScale environment for the first time, ensure that you have your Db2 pureScale Feature installation plan created. Your installation plan helps ensure that your system meets the prerequisites and that preinstallation tasks are done.

Important: Starting from version 11.5.5, support for InfiniBand (IB) adapters as the high-speed communication network between members and CFs in Db2 pureScale on all supported platforms is deprecated and will be removed in a future release. Use a Remote Direct Memory Access over Converged Ethernet (RoCE) network as the replacement.

When you plan your Db2 pureScale installation, review the software, hardware, firmware, and storage hardware configuration options to ensure that you meet the requirements.

For the most up-to-date installation requirements for data server products, see System requirements for IBM® Db2 for Linux®, UNIX, and Windows. This techdoc uses IBM Software Product Compatibility Reports (SPCR). With the SPCR tool, you can locate complete lists of supported operating systems, system requirements, prerequisites, and optional supported software for these database products.

This topic details the requirements for: software prerequisites (including operating system, OpenSSH, IBM Spectrum® Scale, and Tivoli® SA MP), storage hardware, and hardware and firmware (network adapters, cables, switches).

Software prerequisites

Before you run the installation or apply a fix pack with the installFixPack command, ensure that fixes are applied for your operating system.
Table 1. Software prerequisites - AIX operating system version and technology levels
AIX version Technology level Minimum Service Pack (SP) level AIX APAR
AIX 7.1 5 3 IJ13283
AIX 7.2 3 2 IJ11241, IJ15063
AIX 7.2 4 1 IT31924
AIX 7.2 5 1 IJ46384
AIX 7.3 0 1 IJ48201
AIX 7.3 1 1 IJ45513 (fixed in SP2)
Note:
  • InfiniBand networks and RoCE networks require uDAPL. The required uDAPL level is the level that is included in the AIX image. The uDAPL level that is included in a Technology Level (TL) and Service Pack (SP) can change when the TL or SP level changes; it does not change if the TL and SP level remain unchanged.
  • If the AIX system is running a Technology Level with the minimum Service Pack that is specified in the table, all APARs listed in that row must be installed. For a system that runs a Technology Level with a later Service Pack, verify whether the APAR fix is already included in that Service Pack level. To obtain fixes for the APARs for a system that is running a Service Pack higher than the minimum required but lower than the Service Pack in which the fix was first included, see IBM Support Fix Central.
  • On installations with AIX 7.2, only TCP and RDMA-based configurations can be used. RDMA must be used only with RoCE with IP support.
  • When POWER9™ systems are used with version 11.5, they must run AIX 7.2 in POWER8® processor compatibility mode. Starting in version 11.5.5, POWER9 systems can be used in POWER9 processor compatibility mode. AIX 7.2 TL03 support on POWER9 also requires the fix for IJ12134.
  • To use AIX 7.2 TL04 SP1 or higher, the Db2 level used must be version 11.5.5 or later with Db2 APAR IT31924.
  • When you use version 11.5.5 or later, AIX 7.2 TL03 SP3 is the minimum AIX operating system level.
  • The AIX level must be AIX 7.2 TL03 SP4 or higher when EC66 or EC67 cards are used.
  • To use AIX 7.2 TL05 SP1 or higher SP, the Db2 level used must be Db2 11.5.7.0 or higher with AIX APAR IJ30808. For AIX with RoCE networks using Db2 11.5.8.0 or higher, the fix for AIX APAR IJ46384 is required.
  • To use AIX 7.3 TL00 SP1 or higher SP, the Db2 level used must be Db2 11.5.8.0 or higher. For AIX with RoCE networks, the fix for AIX APAR IJ48201 is required.
  • When POWER10 systems are used with version 11.5, users must have AIX 7.2 TL05 SP3 or higher, or AIX 7.3 TL00 SP1 or higher, based on what is listed in Table 1.
  • AIX 7.3 TL01 must be used with Db2 11.5.9.0 or higher.
  • Db2 11.5.9.0 requires AIX APAR IJ40428 for the corresponding supported AIX level on which Db2 is installed.
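For example, the following standard AIX commands show the installed operating system level, whether a particular APAR fix from Table 1 is installed, and the uDAPL fileset level. This is only a sketch: the APAR number is an example taken from Table 1, and the uDAPL fileset name udapl.rte is an assumption to confirm for your AIX level.

  # Display the AIX release, Technology Level, and Service Pack (for example, 7200-05-03)
  oslevel -s

  # Check whether a required APAR fix from Table 1 is already installed (IJ46384 is an example)
  instfix -ik IJ46384

  # Display the level of the uDAPL fileset that ships with the AIX image (fileset name assumed)
  lslpp -l udapl.rte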
The following list includes the required software:
Note:
  • IBM Spectrum Scale and Tivoli SA MP:
    • The required levels of IBM Tivoli System Automation for Multiplatforms (Tivoli SA MP) and IBM Spectrum Scale, along with any necessary fixes for a particular Db2 release and fix pack, must be obtained from the Db2 images for that release and fix pack. They must be obtained, installed, and updated through the standard Db2 installation and upgrade procedures. Do not download and install individual fixes manually without guidance from IBM Db2 service.
  • RSCT:
    • The IBM Reliable Scalable Cluster Technology (RSCT) peer domain level that is included within a Tivoli SA MP package is always the minimum supported level.
  • AIX:
    • Workload partitions (WPARs) are not supported in a Db2 pureScale environment.
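As a hedged sketch of how you might confirm which bundled cluster software levels are present on a host, the following AIX commands list the installed Db2 copies and the related filesets. The fileset name patterns gpfs*, sam*, and rsct* are assumptions; confirm the exact names for your release.

  # List installed Db2 copies and their levels
  db2ls

  # List the IBM Spectrum Scale, Tivoli SA MP, and RSCT filesets installed on this host
  # (fileset name patterns are assumptions; adjust for your release)
  lslpp -l 'gpfs*' 'sam*' 'rsct*'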

Storage hardware requirements

Db2 pureScale Feature supports all storage area network (SAN) and directly attached shared block storage. For more information about Db2 cluster services support, see the “Shared storage support for Db2 pureScale environments” topic. The following storage hardware requirements must be met for Db2 pureScale Feature support.
Table 2. Minimum and recommended free disk space per host
  Recommended free disk space Minimum required free disk space
Disk to extract installation 3 GB 3 GB
Installation path 6 GB 6 GB
/tmp directory 5 GB 2 GB
/var directory 5 GB 2 GB
/usr directory 2 GB 512 MB
Instance home directory 5 GB 1.5 GB1
root home directory 300 MB 200 MB
Note: 1 The disk space that is required for the instance home directory is calculated at run time and varies. Approximately 1 to 1.5 GB is normally required.
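For example, a minimal check of the free space listed in Table 2 on each host can be done with the standard AIX df command; the instance home path shown is a placeholder.

  # Show free space, in GB, for the key file systems from Table 2
  df -g /tmp /var /usr

  # Show free space for the instance home directory (path is a placeholder)
  df -g /home/db2sdin1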

The following shared disk space must be free for each file system:
  • Instance shared directory: 10 GB1
  • Data: dependent on your specific application needs
  • Logs: dependent on the expected number of transactions and the application's logging requirements

A fourth shared disk is required; it is configured as the Db2 cluster services tiebreaker disk.
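The following sketch shows how you might identify the shared disks on a host and, after the instance is created, designate the tiebreaker disk. The disk name hdisk10 is a placeholder, and the db2cluster option syntax should be confirmed against the db2cluster command reference for your release.

  # List the physical volumes visible to this host; the shared LUNs appear on every host
  lspv

  # After instance creation, designate the fourth shared disk as the cluster services tiebreaker
  # (disk name is a placeholder; confirm the option syntax for your release)
  db2cluster -cm -set -tiebreaker -disk hdisk10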

Hardware and firmware prerequisites

Note: Given the widely varying nature of such systems, IBM cannot practically guarantee to have tested on all possible systems or variations of systems. For problem reports for which IBM deems reproduction necessary, IBM reserves the right to attempt problem reproduction on a system that might not match the system on which the problem was reported.

On a TCP/IP over Ethernet (TCP/IP) protocol network, a Db2 pureScale environment is supported on any POWER7®, POWER8, POWER9 or POWER10 rack-mounted server or blade server.

On an RDMA protocol network, the Db2 pureScale environment is supported on a POWER10-compatible rack-mounted server that supports one of the following Ethernet RoCE adapters:
  • PCIe4 2-port 100 GbE RoCE x16 adapter with feature code EC66, EC67
  • PCIe3 2-port 10 Gb NIC & RoCE SR/Cu adapter with feature code EC2U, EC2T
On an RDMA protocol network, the Db2 pureScale environment is supported on a POWER9 compatible rack-mounted server that supports one of the Ethernet RoCE adapters in the following list:
  • PCIe3 2-port 10 Gb NIC & RoCE SR/Cu adapter with feature code EC2S, EC2R
  • PCIe3 2-port 100 GbE (NIC and RoCE) QSFP28 adapter with feature code EC3L, EC3M
  • PCIe4 2-port 100 GbE RoCE x16 adapter with feature code EC66, EC67
On an RDMA protocol network, the Db2 pureScale environment is supported on a POWER8 compatible rack-mounted server that supports one of the Ethernet RoCE adapters in the following list:
  • PCIe2 2-Port 10 GbE RoCE SFP+/SR Adapter with feature code EC27, EC28, EC29, EC30
  • PCIe3 2-Port 40 GbE NIC RoCE QSFP+ Adapter with feature code EC3A, EC3B
  • PCIe3 2-port 10 GbE NIC and RoCE SFP+/SR adapter with feature code EC2M, EC2N, EC37, EC38
  • PCIe3 2-port 100 GbE (NIC and RoCE) QSFP28 Adapter with feature code EC3L, EC3M
On an RDMA protocol network, a Db2 pureScale environment is supported on any POWER7 compatible rack-mounted server that supports one of these Ethernet RoCE or InfiniBand QDR adapters:
  • PCIe2 2-Port 10 GbE RoCE SFP+/SR Adapter with feature code EC27, EC28, EC29, EC30
  • PCIe2 2-port 4X InfiniBand QDR Adapter with feature code 5283, 5285
  • PCIe3 2-port 10 GbE NIC and RoCE SFP+/SR adapter with feature code EC2M, EC2N, EC37, EC38

On an RDMA protocol network, Db2 pureScale is supported on POWER7 IBM Flex System® p260 and p460 Compute Nodes (7895-22X, 7895-23X, 7895-42X, and 7895-43X) that support the EN4132 2-port 10 Gb RoCE Adapter with feature code EC26.

On an RDMA protocol network, a Db2 pureScale environment is supported on any POWER7 compatible rack-mounted server that is listed in the DDR InfiniBand support table (Table 5), and on newer equivalent models supported by POWER®.

On a TCP/IP protocol over Ethernet (TCP/IP) network, a Db2 pureScale environment requires only one high-speed network for the Db2 cluster interconnect. Running your Db2 pureScale environment on a TCP/IP network can provide a faster setup for testing the technology. However, for the most demanding write-intensive data sharing workloads, an RDMA protocol over Converged Ethernet (RoCE) network can offer better performance.

InfiniBand networks and RoCE networks that use RDMA protocol require two networks: one (public) Ethernet network and one (private) high-speed communication network for communication between members and CFs. The high-speed communication network must be an InfiniBand network, a RoCE network, or a TCP/IP network. A mixture of these high-speed communication networks is not supported.

Note: Although a single Ethernet adapter is required on a host for the public network in a Db2 pureScale environment, set up Ethernet bonding for the network if you have two Ethernet adapters. Ethernet bonding (also known as channel bonding) is a setup where two or more network interfaces are combined. Ethernet bonding provides redundancy and better resilience to Ethernet network failures. Refer to your Ethernet documentation for instructions on configuring Ethernet bonding. Network interfaces that are used for a Db2 pureScale cluster interconnect with RDMA must not be bonded.
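On AIX, Ethernet bonding for the public network is typically implemented as an EtherChannel or IEEE 802.3ad link aggregation. The following is a minimal sketch only, assuming two spare adapters named ent0 and ent1; follow your Ethernet and switch documentation for the configuration that applies to your environment.

  # Create an EtherChannel pseudo-device that aggregates ent0 and ent1 (adapter names are placeholders)
  mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names='ent0,ent1' -a mode='8023ad'

  # The new aggregated adapter (for example, ent2) is then configured with the public IP address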

It is also a requirement to keep the maximum transmission unit (MTU) size of the network interfaces at the default value of 1500 for both TCP and RDMA network types. For more information on configuring the MTU size on AIX, see Change MTU of Host Ethernet Adapter.
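As a quick check, the following AIX commands display and, if necessary, reset the MTU of an interface; en0 is a placeholder for the interface that is used by the cluster.

  # Display the current MTU of the en0 interface
  lsattr -El en0 -a mtu

  # Reset the MTU to the default of 1500 (use -P and reboot if the interface is in use)
  chdev -l en0 -a mtu=1500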

The rest of this hardware and firmware prerequisites section applies to using RDMA protocol.

Cables and switches: a Db2 pureScale environment is supported on any Ethernet (10GbE or other supported speed) and QDR InfiniBand cable and switch that is supported by a POWER7 or POWER8 server.

The communication adapter port can connect to:
  • a RoCE network,
  • an InfiniBand network.
To use a RoCE network, all network adapters and switches must support remote direct memory access (RDMA) over Converged Ethernet (RoCE).
Server-specific hardware details:

Servers in a Db2 pureScale environment must use both an Ethernet network and a high-speed communication adapter port.

Table 3. Server-specific hardware details - IBM validated RDMA over Converged Ethernet (RoCE) support and required firmware level
Server Minimum Required Platform Firmware level PCIe Support for RoCE network adapters
IBM POWER7 780/HE (9179-MHC) AM740_042_042 PCIe2 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC28) (Copper)

PCIe2 2-Port 10 GbE RoCE SR Adapter (Feature code EC30) (Optical)

IBM POWER7 770/MR (9117-MMC) AM740_042_042 PCIe2 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC28) (Copper)

PCIe2 2-Port 10 GbE RoCE SR Adapter (Feature code EC30) (Optical)

IBM POWER7 780/HE (9179-MMD) AM760_034_034 PCIe2 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC28) (Copper)

PCIe2 2-Port 10 GbE RoCE SR Adapter (Feature code EC30) (Optical)

IBM POWER7 770/MR (9117-MMD) AM760_034_034 PCIe2 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC28) (Copper)

PCIe2 2-Port 10 GbE RoCE SR Adapter (Feature code EC30) (Optical)

IBM POWER7 720 1S (8202-E4C with optional low-profile slots) AL740_043_042 PCIe2 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC28) (Copper)

PCIe2 2-Port 10 GbE RoCE SR Adapter (Feature code EC30) (Optical)

PCIe2 Low Profile 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC27) (Copper) in the PCIe Newcombe Riser Card (Feature code 5685)

PCIe2 Low Profile 2-Port 10 GbE RoCE SR Adapter (Feature code EC29) (Optical) in the PCIe Newcombe Riser Card (Feature code 5685)

IBM POWER7 740 2S (8205-E6C with optional low-profile slots) AL740_043_042 PCIe2 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC28) (Copper)

PCIe2 2-Port 10 GbE RoCE SR Adapter (Feature code EC30) (Optical)

PCIe2 Low Profile 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC27) (Copper) in the PCIe Newcombe Riser Card (Feature code 5685)

PCIe2 Low Profile 2-Port 10 GbE RoCE SR Adapter (Feature code EC29) (Optical) in the PCIe Newcombe Riser Card (Feature code 5685)

IBM POWER7 710 1S (8231-E1C) AL740_043_042 PCIe2 Low Profile 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC27) (Copper)

PCIe2 Low Profile 2-Port 10 GbE RoCE SR Adapter (Feature code EC29) (Optical)

IBM POWER7 730 2S (8231-E2C) AL740_043_042 PCIe2 Low Profile 2-Port 10 GbE RoCE SFP+ Adapter (Feature code EC27) (Copper)

PCIe2 Low Profile 2-Port 10 GbE RoCE SR Adapter (Feature code EC29) (Optical)

IBM Flex System p260 Compute Node (7895-22X) AF763_042 EN4132 2-port 10 Gb RoCE Adapter (Feature code EC26)
IBM Flex System p260 Compute Node (7895-23X) AF763_042 EN4132 2-port 10 Gb RoCE Adapter (Feature code EC26)
IBM Flex System p460 Compute Node (7895-42X) AF763_042 EN4132 2-port 10 Gb RoCE Adapter (Feature code EC26)
IBM Flex System p460 Compute Node (7895-43X) AF763_042 EN4132 2-port 10 Gb RoCE Adapter (Feature code EC26)
Note:
  • RoCE adapters do not support virtualization. Each LPAR requires a dedicated RoCE adapter. For example, if a machine has two LPARs (one for CF and one for member), each of these LPARs must have its own dedicated RoCE adapter.
  • For production clusters, it is strongly recommended that each Db2 member and each CF be deployed into their own LPAR. The CF should have a dedicated CPU to provide the low-latency responses required for communication with the Db2 members. Db2 members will also benefit from dedicated CPUs for best-in-class performance.
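To compare a server against Table 3, you can display the installed platform firmware level and the vital product data of a candidate RoCE adapter with standard AIX commands; ent0 is a placeholder for the adapter's device name.

  # Display the current platform (system) firmware level
  lsmcode -c

  # Display vital product data, including the feature code and adapter firmware, for the adapter
  lscfg -vpl ent0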
Table 4. Server-specific hardware details for IBM validated QDR - InfiniBand support and required firmware level
Server Minimum Required Platform Firmware level PCIe2 Dual port QDR InfiniBand Channel adapter
IBM POWER7 780/HE (9179-MHC) AM740_042_042 PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285)
IBM POWER7 770/MR (9117-MMC) AM740_042_042 PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285)
IBM POWER7 740 2S (8205-E6C with optional low-profile slots) AL740_043_042 PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285), or PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685), or both
IBM POWER7 740 (8205-E6B) with Newcombe (optional low-profile Gen2 slots) AL720_102 PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685)
IBM POWER7 710 (8231-E1C) AL740_043_042 PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature Code: 5685)
IBM POWER7 720 (8202-E4B) AL730_066_035 PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature Code: 5685)
IBM POWER7 720 (8202-E4C) AL740_043_042 PCIe2 2-port 4X InfiniBand QDR Adapter (Feature code: 5285), or PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature code: 5685), or both
IBM POWER7 730 2S (8231-E2C) AL740_043_042 PCIe2 Low Profile 2-port 4X InfiniBand QDR Adapter (Feature code: 5283) in the PCIe Newcombe Riser Card (Feature Code: 5685)
Note:
  • Although QDR InfiniBand switches can no longer be purchased through IBM, Db2 still supports configurations with QDR InfiniBand switches that are supported by Intel.
  • QDR InfiniBand adapters do not support virtualization. Each LPAR requires a dedicated QDR InfiniBand adapter. For example, if a machine has two LPARs (one for CF and one for member), each of these LPARs must have its own dedicated QDR InfiniBand adapter.
Table 5. Server-specific hardware details for IBM validated DDR - InfiniBand support1 and required firmware level
Server Minimum Required Platform Firmware level InfiniBand network adapter, GX Dual-port 12x Channel Attach - DDR InfiniBand Channel adapter InfiniBand Channel conversion cables
IBM POWER7 795 (9119-FHB) * AH720_102 or higher Feature Code 1816 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 780 (9179-MHB) * AM720_102 or higher Feature Code 1808 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 780 (9179-MHC) * AM740_042 or higher Feature Code 1808 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 770 (9117-MMB) * AM720_102 or higher Feature Code 1808 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 770 (9117-MMC) * AM740_042 or higher Feature Code 1808 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 750 (8233-E8B) AL730_049 or higher Feature Code 5609 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 740 (8205-E6C) AL720_102 or higher Feature Code EJ04 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 740 (8205-E6B) AL720_102 or higher Feature Code 5615 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 730 (8231-E2B) AL720_102 or higher Feature Code 5266 4x to 4x cables (Feature Code 3246)
IBM POWER7 720 (8202-E4C) AL720_102 or higher Feature Code EJ04 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 720 (8202-E4B) AL720_102 or higher Feature Code 5615 12x to 4x (Feature Code 1828, 1841, or 1854)
IBM POWER7 710 (8231-E2B) AL720_102 or higher Feature Code 5266 4x to 4x cables (Feature Code 3246)
Note:
  • 1 Although DDR InfiniBand hardware can no longer be purchased through IBM, Db2 still supports configurations with DDR InfiniBand.
  • When you acquire systems, consider the I/O ports available and future workloads for greater flexibility and scalability. The servers that are marked with an asterisk (*) are designed for enterprise applications. For more information about selecting the hardware, see Site and hardware planning in the IBM System Hardware documentation.
  • InfiniBand Channel conversion cables are available in multiple lengths, each with a different product feature code (FC). Some different 12x to 4x InfiniBand Channel conversion cable lengths available are 1.5 m (FC 1828), 3 m (FC 1841), and 10 m (FC 1854). Your data center layout and the relative location of the hardware in the Db2 pureScale environment are factors that must be considered when you select the cable length.

Cable Information:

Table 6. 40GE cable information (1, 3, 5, 10, and 30 meters)
  1 m (copper) 3 m (copper) 5 m (copper) 10 m (optical) 30 m (optical)
Feature Code number EB2B EB2H ECBN EB2J EB2K
Table 7. 10GE cable information (1, 3 and 5 meters)
  1 m 3 m 5 m
Feature Code number EN01 EN02 EN03
Note:
  • IBM Qualified Copper SFP+ cables or standard 10-Gb SR optical cabling (up to 300 m cable length) can be used for connecting RoCE adapters to the 10GE switches.
Table 8. IBM Qualified QSFP+ Cable Information for 40GE RoCE
  1 m 3 m
Feature Code number EB2B EB2H
Note:
  • IBM Qualified QSFP+ cables can be used as inter-switch links between POWER Flex System 10GE switches.

For a list of 100GE cables that are compatible with your adapter, see the IBM Power9 documentation. For example, the cables that are available for the EC3L/EC3M adapter on POWER9 are listed under that adapter's entry in the documentation.

Table 9. QDR InfiniBand cable information (1, 3, 5, 10, 30 meters)
  1 m (copper) 3 m (copper) 5 m (copper) 10 m (optical) 30 m (optical)
Feature Code number 3287 3288 3289 3290 3293
Switches:
For the configuration and the features that are required to be enabled and disabled, see Switch configuration on an RoCE network (AIX).
Table 10. IBM validated 10GE switches for RDMA
IBM Validated Switch
Lenovo RackSwitch G8124
Lenovo RackSwitch G8264
Juniper Networks QFX3500 Switch
Lenovo RackSwitch G8272
Table 11. IBM validated 40GE switches for RDMA
IBM Validated Switch
Lenovo RackSwitch G8316
Table 12. IBM validated 100GE switches for RDMA
IBM Validated Switch
Cisco Nexus C9336C-FX2
Note:
  • IBM Qualified Copper SFP+ cables or standard 10-Gb SR optical cabling (up to 300 m cable length) can be used as inter-switch links. The 3 m or 7 m SFP+ cables that are supplied by Juniper can be used between Juniper switches.
  • For the configuration and the features that are required to be enabled and disabled, see Switch configuration on an RoCE network (AIX). Any Ethernet switch that supports the listed configuration and features is supported. The exact setup instructions might differ from what is documented in the switch section, which is based on the IBM validated switches. Refer to the switch user manual for details.
You must not intermix 10-Gigabit, 40-Gigabit, and 100-Gigabit Ethernet network switch types. The same type of switch, adapter, and cables must be used in a cluster. A server that uses a 10G adapter must use a 10G type switch and the corresponding cables. A server that uses a 40G adapter must use a 40G type switch and the corresponding cables. A server that uses a 100G adapter must use a 100G type switch and the corresponding cables.
Table 13. Supported InfiniBand network switches
InfiniBand switch Intel model number Number of ports Type Required rack space
IBM 7874-024 9024 24 4x DDR InfiniBand Edge Switch 1U
IBM 7874-040 9040 48 4x DDR InfiniBand Fabric Director Switch 4U
IBM 7874-120 9102 128 4x DDR InfiniBand Fabric Director Switch 7U
IBM 7874-240 9240 288 4x DDR InfiniBand Fabric Director Switch 14U
IBM 7874-036 12200 36 QDR InfiniBand Switch 1U
IBM 7874-072 12800-040 72 QDR InfiniBand Switch 5U
IBM 7874-324 12800-180 324 QDR InfiniBand Switch 14U
Note:
  • All of the InfiniBand switches listed in the previous table must use the embedded subnet management functions. When you order InfiniBand switches from Intel, management modules must be purchased for the switch.
  • Although InfiniBand switches can no longer be purchased through IBM, Db2 still supports configurations with InfiniBand switches that are supported by Intel.
  • If you are using two switches in the Db2 pureScale environment, two or more 4x to 4x inter-switch links (ISL) are required. To help with performance and fault tolerance to inter-switch link failures, use half the number of inter-switch link cables as there are total communication adapter ports that are connected from CFs and members to the switches. For example, in a two switch Db2 pureScale environment where the primary and secondary CF each has four cluster interconnect netname(s), and there are four members, use six inter-switch links (6 = (2 x 4 + 4) /2). Choose 4x to 4x InfiniBand ISL cables of appropriate length for your network environment.

DDR and QDR InfiniBand network switch types cannot be intermixed. The same type of switch, adapter, and cables must be used in a cluster. A server that uses a DDR InfiniBand adapter must use a DDR type switch and the corresponding cables. A server that uses a QDR InfiniBand adapter must use a QDR type switch and the corresponding cables.

1 For better I/O performance, create a separate IBM Spectrum Scale file system to hold your database and specify this shared disk on the create database command.
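As a hedged illustration of that footnote, the following commands create a dedicated Db2-managed IBM Spectrum Scale file system on a free shared disk and then create the database on it. The file system name, disk, and mount point are placeholders; confirm the db2cluster options in the command reference for your release.

  # Create a Db2-managed Spectrum Scale file system on a free shared disk (names are placeholders)
  db2cluster -cfs -create -filesystem db2datafs -disk /dev/hdisk10 -mount /db2fs/db2data

  # Create the database on that file system
  db2 "CREATE DATABASE MYDB ON /db2fs/db2data"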