The DB2® pureScale™ feature helps reduce the risk and cost of business growth by providing nearly unlimited capacity, continuous availability, and application transparency. DB2 pureScale benefits from a low-latency interconnect, such as InfiniBand, and is built on top of a shared-disk architecture. To achieve the low latency, Power Systems InfiniBand Host Channel Adapters (HCAs) and switches are used, and a Fiber Channel SAN provides access to the shared disks.
This article addresses the following questions:
- How are the DB2 pureScale members connected?
- How are a member and cluster caching facility connected?
- How are the AIX® LPARs for the member and cluster caching facility connected in a cluster?
- Is every AIX LPAR on one host connected to every LPAR on the other hosts?
- How is the SAN storage connected to the cluster?
This article explains how the DB2 pureScale cluster hardware is coupled together for a DB2 pureScale production system. The article also clarifies concepts related to setting up a DB2 pureScale cluster. Setting up and configuring the cluster requires expertise in UNIX(R), InfiniBand, and SAN storage.
For the detailed list of DB2 pureScale prerequisites, consult the DB2 for Linux, UNIX, and Windows Information Center.
The DB2 pureScale feature is based on IBM DB2 RDBMS shared-disk technology. A DB2 pureScale solution is typically deployed as a cluster architecture made up of several tightly coupled components:
- At least two DB2 members
- Cluster caching facility (CF)
- A high-speed communication network, such as InfiniBand
- IBM Tivoli® System Automation for Multiplatforms (Tivoli SA MP) software
- IBM Reliable Scalable Clustering Technology (RSCT) software
- IBM General Parallel File System (GPFS™) software
The DB2 pureScale feature addresses capacity and availability issues by providing an easier way to scale up or scale down while keeping the entire database available. The shared disk enables all members to access the same data set. A member failure, or a CF failure when the CF is duplexed, does not impact database availability. With DB2 pureScale, additional capacity is added by simply adding new members to the existing cluster. The cluster caching facility's group buffer pool (GBP) and global lock manager (GLM) provide centralized data access synchronization.
Figure 1 shows a high-level view of a DB2 pureScale instance with four members and two CFs. It shows DB2 clients connected to the data server. DB2 members are processing database requests, and cluster caching facilities provide centralized synchronization services. Data is stored on shared-disk storage, which is accessible by all members.
Figure 1. A view of the major components in a DB2 pureScale environment
Following is a list of the hardware required for the DB2 pureScale environment described in this article:
- IBM POWER6® or POWER7® Servers with AIX
- Fiber Channel SAN storage, SAN switches, and Host Bus Adapters (HBAs)
- InfiniBand switch, InfiniBand Host Channel Adapters (HCA), and cables
- Ethernet adapters
- Hardware Management Console (HMC)
The sections below briefly explain each element of the solution.
The servers are POWER6 or POWER7 computers with AIX Logical Partitions (LPARs) on which the DB2 pureScale binaries are deployed. A minimum of two members and two cluster caching facilities is advised. It is recommended that each member and cluster caching facility be deployed on its own LPAR and that the LPARs be spread across a minimum of two POWER6 or POWER7 computers. Currently, the following POWER systems are supported:
- POWER6 550
- POWER6 595
- POWER7 710
- POWER7 720
- POWER7 730
- POWER7 740
- POWER7 750
- POWER7 755
- POWER7 770
- POWER7 780
- POWER7 795
Fiber Channel-attached SAN storage is shared among all DB2 members. DB2 pureScale benefits from storage with SCSI-3 Persistent Reserve (SCSI-3 PR) support. DB2 pureScale uses this technology to quickly fence off errant members from the storage in case of a failure, which ensures that the database files remain consistent. For a list of storage with SCSI-3 PR support that has been tested and is supported by GPFS, see the online GPFS FAQ in Resources.
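As a quick sanity check before creating the instance, you can confirm how the reserve policy is set on the shared hdisks from each AIX LPAR. The following is a minimal sketch rather than part of the official installation procedure; the disk names are placeholders, and the value that reserve_policy must hold depends on your storage subsystem, multipath driver, and GPFS configuration.

```python
import subprocess

SHARED_DISKS = ["hdisk2", "hdisk3", "hdisk4"]  # hypothetical shared LUN names

for disk in SHARED_DISKS:
    result = subprocess.run(
        ["lsattr", "-El", disk, "-a", "reserve_policy"],
        capture_output=True, text=True, check=False,
    )
    # A typical output line looks like: "reserve_policy no_reserve Reserve Policy True"
    print(disk, "->", (result.stdout or result.stderr).strip())
```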
Because the shared data is at the heart of a DB2 pureScale system, a RAID configuration is recommended to provide maximum redundancy and availability. Some of the more fault-tolerant RAID levels, such as RAID10 and RAID6, help provide an extra assurance that the storage subsystem can survive various disk failures.
SAN switches are typically used to connect the servers to the storage controller. For a DB2 pureScale deployment, SAN switches should be redundant and also connected to different power supplies for maximum availability.
A Host Bus Adapter (HBA) connects each server to the SAN storage, typically through a SAN switch using Fiber Channel cables. Redundant HBAs on each DB2 member, combined with multipath software such as IBM AIX MPIO or device drivers that support multipath access to the LUNs, are recommended. Some multipath drivers also provide load balancing, which increases throughput when multiple HBAs are used.
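To verify that multipathing is actually in effect, you can count the enabled paths per shared LUN. The sketch below is an illustration only; the disk names are placeholders, and it assumes AIX MPIO manages the devices.

```python
import subprocess

SHARED_DISKS = ["hdisk2", "hdisk3"]  # hypothetical shared LUN names

for disk in SHARED_DISKS:
    result = subprocess.run(["lspath", "-l", disk], capture_output=True, text=True, check=False)
    # lspath prints one line per path, for example "Enabled hdisk2 fscsi0"
    enabled = [line for line in result.stdout.splitlines() if line.startswith("Enabled")]
    print(f"{disk}: {len(enabled)} enabled path(s)")
```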
InfiniBand is a low-latency, high-bandwidth interconnect used for communication among DB2 members and cluster caching facilities. The InfiniBand Host Channel Adapter (HCA) is the device that connects each server to the fabric. HCAs are connected to an InfiniBand switch fabric using InfiniBand cables to form a subnet. InfiniBand connectivity is described further under Using InfiniBand (IB).
Ethernet adapters are typically connected to the corporate network and enable DB2 clients to connect to the DB2 pureScale instance; technologies such as EtherChannel or Network Interface Backup can make this connectivity redundant. The DB2 pureScale feature automatically routes connection requests to the member with the lowest workload. Alternatively, you can specify that DB2 clients connect to specific active members in the DB2 pureScale instance.
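As an illustration of client connectivity, the following hedged sketch uses the ibm_db Python driver to connect over the Ethernet network and report which member served the connection. The hostname, port, database name, and credentials are placeholders for your environment.

```python
import ibm_db

# All connection details below are placeholders for your own environment.
conn_str = (
    "DATABASE=SAMPLE;HOSTNAME=member1.example.com;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=passw0rd;"
)
conn = ibm_db.connect(conn_str, "", "")

# CURRENT MEMBER reports the member number that is serving this connection.
stmt = ibm_db.exec_immediate(conn, "SELECT CURRENT MEMBER FROM SYSIBM.SYSDUMMY1")
row = ibm_db.fetch_tuple(stmt)
print("Connected to member:", row[0])

ibm_db.close(conn)
```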
The IBM Hardware Management Console (HMC) provides systems administrators a tool to plan, deploy, and manage IBM System p® servers. The HMC provides server hardware management and virtualization (partition) management.
The HCAs, InfiniBand cables, and InfiniBand switch form a subnet. The performance of this network is critical, because it is used to communicate locking and caching information across the cluster. All hosts in the instance must use the same type of interconnect. DB2 pureScale exploits InfiniBand, which provides Remote Direct Memory Access (RDMA) support. The use of RDMA enables direct updates in member host memory without requiring member processor time. Each of the IB components and their part numbers are described in the following sections.
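One simple way to confirm that a host has an address on the InfiniBand subnet is to look at its network interfaces. The sketch below is an assumption-laden illustration: it presumes the IB IP interfaces are named ib0, ib1, and so on, which may differ on your system.

```python
import subprocess

# List all network interfaces and keep the ones whose names start with "ib"
# (the usual naming for AIX InfiniBand IP interfaces; adjust if yours differ).
result = subprocess.run(["netstat", "-in"], capture_output=True, text=True, check=False)
ib_lines = [line for line in result.stdout.splitlines() if line.startswith("ib")]
print("\n".join(ib_lines) or "No ib* interfaces found on this host")
```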
The IBM GX++ HCA is installed in the POWER system servers that are used as part of the DB2 pureScale cluster. DB2 pureScale supports only the GX++ HCA adapters. The list of supported adapters with their feature codes is shown in Table 1.
Table 1. POWER system server models and supported HCA adapters
| POWER system server model | HCA feature codes |
| --- | --- |
The HCAs are connected to the IB switch using a 12x-to-4x IB cable, such as the 10-meter copper cable under FC 1854, or using a 4x-to-4x IB cable, such as FC 3246 (the 4x-to-4x cable applies only to FC 5266).
There are multiple ways to connect the LPARs, depending on how many LPARs there are and how many HCAs are supported for that server model. Some of the options include the following (a sketch for checking which HCAs an LPAR can see appears after the list):
- POWER 750 with one LPAR: The HCA is assigned to the LPAR. One IB cable is connected to the IB switch.
- POWER 750 with two LPARs: The HCA is logically partitioned using the POWER hypervisor, and each LPAR is assigned a portion of the HCA bandwidth and resources. One IB cable is connected to the IB switch.
- POWER 770 with two LPARs: Two HCAs are installed, and each LPAR has a dedicated HCA. Two IB cables are connected to the IB switch.
- POWER 770 with multiple LPARs: One or more HCAs are installed. Either every LPAR has a dedicated HCA, or some or all LPARs share the HCAs. The same number of IB cables as HCAs are connected to the IB switch.
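The following minimal sketch, run on an individual AIX LPAR, lists the InfiniBand adapters that LPAR can see; it is an illustration only, and the device descriptions it matches on may vary by adapter model and AIX level.

```python
import subprocess

# List the adapters this LPAR can see and keep the InfiniBand ones.
result = subprocess.run(["lsdev", "-Cc", "adapter"], capture_output=True, text=True, check=False)
hcas = [line for line in result.stdout.splitlines() if "infiniband" in line.lower()]
print(f"{len(hcas)} InfiniBand adapter(s) visible on this LPAR:")
for line in hcas:
    print(" ", line)
```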
At the center of the InfiniBand fabric is the IB switch, which ties all of the DB2 pureScale servers into a subnet. The IBM line of 7874 IB switches provides a wide range of port counts from 24 to 240.
Table 2 lists the supported IBM POWER Systems InfiniBand switches.
Table 2. Supported IBM POWER Systems InfiniBand switches
| Feature codes | Supported switches |
| --- | --- |
| 7874-024 | 1U, 24-port 4x DDR IB Edge Switch (QLogic 9024CU) |
| 7874-040 | 4U, 48-port 4x DDR IB Director Switch (QLogic 9040) |
| 7874-120 | 7U, 120-port 4x DDR IB Director Switch (QLogic 9120) |
| 7874-240 | 14U, 240-port 4x DDR IB Director Switch (QLogic 9240) |
There are various combinations of servers for DB2 pureScale feature deployment. This section describes a few common deployment models.
- Two-server deployment
- Three-server deployment
- Four-and-more-server deployment
Table 3 shows the configurations for the three models.
Table 3. Three configuration models
| Configuration | Number of servers | Number of LPARs | IBM IB switch | IBM IB HCAs | IBM IB cables | FC SAN HBA | FC SAN switch | FC SAN cables | FC SAN storage controller |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2-server model | 2 | 4 (2 LPARs on each) | Mandatory | Minimum 2 | Minimum 2 | Minimum 2 dual port | Optional | 4 cables, 2 from each server | Mandatory |
| 3-server model | 3 | 5 (2 LPARs on two servers and 1 LPAR on one server) | Mandatory | Minimum 3 | Minimum 3 | Minimum 3 dual port | Optional | Minimum 6 cables, 2 from each server | Mandatory |
| 4-and-more-server model | 4 or more | 4 or more | Mandatory | Minimum 1 per server | Minimum 1 per server | Minimum 2 per server dual port | Optional | Minimum 2 from each server | Mandatory |
To maintain high availability (HA) characteristics, two servers are the minimum configuration. In such a configuration, each server has two LPARs (one DB2 member LPAR and one cluster caching facility LPAR). The loss of one physical server in this configuration still leaves the DB2 pureScale instance available, because one DB2 member and one cluster caching facility remain on the surviving physical server.
While any one server is down for a hardware failure or a hardware maintenance window, however, high availability is no longer preserved, because only a single member and a single cluster caching facility remain. The IB cards can be either dedicated to each LPAR (if a server supports more than one HCA) or shared. Similarly, the HBAs can be either dedicated to each LPAR or shared using Virtual I/O Server (VIOS). Each of the IB HCAs is connected to the IB switch with IB cables. Similarly, the HBA adapters are connected to the FC SAN switch with FC SAN cables. Figure 2 shows this configuration; a short sketch for verifying the resulting instance layout follows the figure.
Figure 2. A four LPAR, two POWER server configuration with cabling
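Once such a cluster is running, one way to confirm that the members and cluster caching facilities are placed across the two servers as intended is to list the instance layout. The sketch below is illustrative and assumes it is run as the instance owner with the DB2 environment set up so that db2instance is on the PATH.

```python
import subprocess

# db2instance -list prints one row per member and cluster caching facility,
# including its home host, current host, and state.
result = subprocess.run(["db2instance", "-list"], capture_output=True, text=True, check=False)
print(result.stdout or result.stderr)
```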
The three-server deployment enables high availability during hardware failure or hardware maintenance of one server (such as the one without the cluster caching facility LPAR). In this configuration each server has one member LPAR (for a total of three members) and two cluster caching facility LPARs on two different servers. The description of the IB and FC SAN connectivity is the same as for the two-server setup except that the server hosting only the member LPAR has a dedicated HCA. Figure 3 shows this configuration.
Figure 3. A five LPAR, three POWER server configuration with cabling
The four-and-more-server deployment enables additional members and an option to isolate the cluster caching facilities on dedicated servers. The cluster is scaled out simply by adding servers, while making sure that storage input/output capacity is increased proportionally and that cluster caching facility LPAR capacity is increased as the cluster grows.
The configuration is the same as for the three-server deployment, except that an additional LPAR with a member is added on each additional server. It is also possible to deploy one LPAR per server, in which case each DB2 pureScale member and cluster caching facility uses a dedicated HCA and HBA. Figure 4 shows this configuration; a dry-run sketch of the scale-out flow follows the figure.
Figure 4. A four-and-more POWER servers configuration with cabling
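The following dry-run sketch outlines the scale-out flow at a high level. It only prints the steps; the host name, netname, and instance name are placeholders, and the db2iupdt options shown are an assumption that should be verified against the Information Center for your DB2 release.

```python
NEW_HOST = "newhost01"          # hypothetical new POWER server / LPAR
NEW_NETNAME = "newhost01-ib0"   # hypothetical InfiniBand netname on that host
INSTANCE = "db2sdin1"           # hypothetical pureScale instance name

steps = [
    # 1. Cable the new server to the IB switch and zone its HBAs to the SAN.
    # 2. As root, register the new host as an additional member (verify the
    #    exact db2iupdt options for your release before running this):
    f"db2iupdt -add -m {NEW_HOST} -mnet {NEW_NETNAME} {INSTANCE}",
    # 3. As the instance owner, confirm the new member joined the instance:
    "db2instance -list",
]
for step in steps:
    print(step)
```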
The IBM DB2 pureScale feature and IBM POWER servers provide a tightly coupled solution that addresses business growth and continuous availability needs. This article has shown various sample deployment models built from industry-standard components. These models illustrate a flexible infrastructure that can range from a 2-member cluster up to a 128-member cluster and thus satisfy a wide range of business requirements.
- Get more information about GPFS in the General Parallel File System FAQ in the IBM Cluster Information Center.
- Refer to the DB2 for Linux, UNIX, and Windows Information Center for more information about the DB2 pureScale feature.
- Read "InfiniBand usage" for more information about InfiniBand usage on IBM POWER servers.
- Scan through "IBM HMC" for complete information about the IBM Hardware Management Console.
- Explore "IBM QLogic" for more information on the IBM QLogic IB switch.