Deploy IBM DB2 pureScale feature on IBM Power Systems

Benefit from the shared disk architecture of DB2 pureScale

The DB2® pureScale™ feature for Enterprise Server Edition builds on familiar and proven design features from the IBM® DB2 for z/OS® database software. This article describes different deployment methods of the DB2 pureScale feature on IBM Power Systems™ and how the hardware pieces come together to create a pureScale cluster.


Miso Cilimdzic (cilimdzi@ca.ibm.com), DB2 Performance Manager, IBM

Miso has been with IBM since 2000. He has worked on various DB2 performance-related activities, with a recent focus on DB2 pureScale.



Sanjeeva Kumar Ogirala (sogirala@in.ibm.com), Software Engineer, IBM

Sanjeeva Kumar Ogirala is a Software Engineer on the DB2 performance team. He is a postgraduate with an M.Tech in Power Systems from IIT Delhi. He has been with IBM since July 2007, and he is an IBM-certified DB2 for Linux, UNIX, and Windows database administrator.



09 September 2010


Introduction

The DB2® pureScale™ feature helps reduce the risk and cost of business growth by providing nearly unlimited capacity, continuous availability, and application transparency. DB2 pureScale benefits from a low-latency interconnect, such as InfiniBand, and is built on top of a shared-disk architecture. To achieve the low latency, Power Systems InfiniBand Host Channel Adapters (HCAs) and switches are used, and a Fiber Channel SAN provides access to the shared disks.

This article addresses the following questions:

  • How are the DB2 pureScale members connected?
  • How are a member and cluster caching facility connected?
  • How are the AIX® LPARs for the member and cluster caching facility connected in a cluster?
  • Is every AIX LPAR on one host connected to every LPAR on the other hosts?
  • How is the SAN storage connected to the cluster?

This article explains how the DB2 pureScale cluster hardware is coupled together for a DB2 pureScale production system. The article also clarifies concepts related to setting up a DB2 pureScale cluster. Setting up and configuring the cluster requires expertise in UNIX®, InfiniBand, and SAN storage.

For the detailed list of DB2 pureScale prerequisites, consult the DB2 for Linux, UNIX, and Windows Information Center.

Understanding the DB2 pureScale feature

The DB2 pureScale feature is based on IBM DB2 RDBMS shared-disk technology. When you hear about DB2 pureScale, it is usually in the context of a solution based on a cluster architecture made up of several tightly coupled components:

  • At least two DB2 members
  • Cluster caching facility (CF)
  • A high-speed communication network, such as InfiniBand
  • IBM Tivoli® System Automation for Multiplatforms (Tivoli SA MP) software
  • IBM Reliable Scalable Clustering Technology (RSCT) software
  • IBM General Parallel File System (GPFS™) software

The DB2 pureScale feature addresses capacity and availability issues by providing an easier way to scale up or scale down while making sure that the entire database is always available. The shared disk enables all members to access the same data set. A member failure, or a CF failure in the case of a duplexed CF, does not impact database availability. With DB2 pureScale, additional capacity is added by simply adding new members to the existing cluster. The cluster caching facility's group buffer pool (GBP) and global lock manager (GLM) provide centralized data access synchronization.
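To make the idea of centralized synchronization concrete, the following Python sketch models a toy coordinator that grants page locks and publishes updated page images, in the spirit of the GLM and GBP. It is a conceptual illustration only, not DB2 code; the class name, lock modes, and page identifiers are invented for this example.

```python
# Conceptual sketch only: a toy model of how a central coordinator
# (in the spirit of the CF's GLM and GBP) serializes access to shared pages.
# Names and behavior are invented for illustration; this is not DB2 code.

class ToyClusterCachingFacility:
    def __init__(self):
        self.global_locks = {}   # page_id -> set of (member, mode)
        self.group_buffer = {}   # page_id -> latest committed page image

    def request_lock(self, member, page_id, mode):
        """Grant a lock if it does not conflict with locks held by other members."""
        holders = self.global_locks.setdefault(page_id, set())
        conflict = any(
            m != member and ("X" in (mode, held_mode))
            for m, held_mode in holders
        )
        if conflict:
            return False                      # the caller would wait or retry
        holders.add((member, mode))
        return True

    def release_lock(self, member, page_id, mode, new_image=None):
        """Release a lock; on exclusive release, publish the updated page."""
        self.global_locks[page_id].discard((member, mode))
        if mode == "X" and new_image is not None:
            self.group_buffer[page_id] = new_image  # other members read this copy

cf = ToyClusterCachingFacility()
assert cf.request_lock("member0", "page-42", "X")      # member0 updates the page
assert not cf.request_lock("member1", "page-42", "S")  # member1 must wait
cf.release_lock("member0", "page-42", "X", new_image=b"new row data")
assert cf.request_lock("member1", "page-42", "S")      # now readable cluster-wide
```

The point of the sketch is simply that every member consults one central authority before touching a shared page, which is why the latency of the interconnect to the CF matters so much.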

Figure 1 shows a high-level view of a DB2 pureScale instance with four members and two CFs. It shows DB2 clients connected to the data server. DB2 members are processing database requests, and cluster caching facilities provide centralized synchronization services. Data is stored on shared-disk storage, which is accessible by all members.

Figure 1. A view of the major components in a DB2 pureScale environment
Clients connect to the data server; DB2 members process requests using the primary and secondary cluster caching facilities and store data on shared-disk storage.

Understanding the hardware components that make up the solution

Following is a list of the hardware required for the DB2 pureScale environment described in this article:

  • IBM POWER6® or POWER7® Servers with AIX
  • Fiber Channel SAN storage, SAN switch and Host Bus Adapters (HBA)
  • InfiniBand switch, InfiniBand Host Channel Adapters (HCA), and cables
  • Ethernet adapters
  • Hardware Management Console (HMC)

The sections below briefly explain each element of the solution.

IBM POWER6 or POWER7 Servers

The servers are POWER6 or POWER7 computers with AIX Logical Partitions (LPARs) on which the DB2 pureScale binaries are deployed. A minimum of two members and two cluster caching facilities is advised. It is recommended that each member and cluster caching facility be deployed on its own LPAR, and that the LPARs be spread across a minimum of two POWER6 or POWER7 computers. Currently, the following POWER systems are supported:

  • POWER6 550
  • POWER6 595
  • POWER7 710
  • POWER7 720
  • POWER7 730
  • POWER7 740
  • POWER7 750
  • POWER7 755
  • POWER7 770
  • POWER7 780
  • POWER7 795

Fiber Channel SAN storage, switches, and HBA

Fiber Channel-attached SAN storage is shared among all DB2 members. DB2 pureScale benefits from storage with SCSI-3 Persistent Reserve (SCSI-3 PR) support. DB2 pureScale uses this technology to quickly fence off errant members from the storage in case of a failure, which ensures that the database files remain consistent. For a list of storage with SCSI-3 PR support that has been tested and is supported by GPFS, see the online GPFS FAQ in Resources.
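As a quick sanity check while preparing the shared LUNs, you can report how each AIX host sees the disks' reservation setting. The sketch below shells out to the standard AIX lsattr command; the hdisk names are placeholders, and the value you should expect for reserve_policy depends on your storage driver and the GPFS documentation.

```python
# Sketch: report the reserve_policy attribute of candidate shared disks on AIX.
# The disk names below are placeholders; consult the GPFS FAQ for the settings
# required by your specific storage subsystem.
import subprocess

SHARED_DISKS = ["hdisk2", "hdisk3"]  # hypothetical shared LUNs

for disk in SHARED_DISKS:
    result = subprocess.run(
        ["lsattr", "-El", disk, "-a", "reserve_policy"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"{disk}: could not query attributes ({result.stderr.strip()})")
    else:
        print(f"{disk}: {result.stdout.strip()}")
```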

Because the shared data is at the heart of a DB2 pureScale system, a RAID configuration is recommended to provide maximum redundancy and availability. Some of the more fault-tolerant RAID levels, such as RAID10 and RAID6, help provide an extra assurance that the storage subsystem can survive various disk failures.

SAN switches are typically used to connect the servers to the storage controller. For a DB2 pureScale deployment, SAN switches should be redundant and also connected to different power supplies for maximum availability.

A Host Bus Adapter (HBA) connects a server to the SAN storage, typically through a SAN switch using Fiber Channel cables. Redundant HBAs on each DB2 member, together with multipath software such as IBM AIX MPIO or device drivers that support multipath access to the LUNs, are recommended. Some multipath drivers also provide load balancing, which can increase throughput when multiple HBAs are used.
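A minimal sketch, assuming the standard AIX MPIO lspath command is available on the host, that counts the enabled paths for each shared disk so you can confirm that the redundant HBAs really do provide more than one path. The disk names are placeholders.

```python
# Sketch: count MPIO paths per shared disk on AIX using lspath.
# hdisk names are placeholders for the shared LUNs in your configuration.
import subprocess

def enabled_paths(disk: str) -> int:
    """Return the number of 'Enabled' MPIO paths reported by lspath for a disk."""
    out = subprocess.run(
        ["lspath", "-l", disk], capture_output=True, text=True, check=True
    ).stdout
    return sum(1 for line in out.splitlines() if line.startswith("Enabled"))

for disk in ("hdisk2", "hdisk3"):
    paths = enabled_paths(disk)
    status = "OK" if paths >= 2 else "WARNING: no path redundancy"
    print(f"{disk}: {paths} enabled path(s) - {status}")
```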

InfiniBand switch, HCA, and cables

InfiniBand is a low-latency, high-bandwidth interconnect used for communication among DB2 members and cluster caching facilities. The InfiniBand Host Channel Adapter (HCA) is the device that connects each server to the fabric. HCAs are connected to an InfiniBand switch fabric using InfiniBand cables to form a subnet. InfiniBand connectivity is further described under Using InfiniBand (IB).

Ethernet adapters

Ethernet adapters connect the servers to the corporate network and enable DB2 clients to connect to the DB2 pureScale instance; for network redundancy, technologies such as EtherChannel or Network Interface Backup can be used. The DB2 pureScale feature automatically routes connection requests to the member with the lowest workload. Alternatively, you can specify that DB2 clients connect to specific active members in the DB2 pureScale instance.
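For illustration, the following sketch uses the ibm_db Python driver to open a client connection over the Ethernet network. The database name, host name, port, and credentials are placeholders; the cluster routes the connection to the least-loaded member by default, and pointing HOSTNAME at a particular member's address is one simple way to target that member.

```python
# Sketch: connect a DB2 client to a pureScale instance with the ibm_db driver.
# Database name, hostname, port, and credentials below are placeholders.
import ibm_db

conn_str = (
    "DATABASE=SAMPLE;"               # hypothetical database name
    "HOSTNAME=member1.example.com;"  # a member address (or a cluster alias)
    "PORT=50000;"
    "PROTOCOL=TCPIP;"
    "UID=db2inst1;"
    "PWD=secret;"
)

conn = ibm_db.connect(conn_str, "", "")
stmt = ibm_db.exec_immediate(conn, "SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1")
row = ibm_db.fetch_tuple(stmt)
print("Connected, server time:", row[0])
ibm_db.close(conn)
```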

Hardware Management Console

The IBM Hardware Management Console (HMC) provides systems administrators a tool to plan, deploy, and manage IBM System p® servers. The HMC provides server hardware management and virtualization (partition) management.


Using InfiniBand (IB)

The HCAs, InfiniBand cables, and InfiniBand switch form a subnet. The performance of this network is critical, because it is used to communicate locking and caching information across the cluster. All hosts in the instance must use the same type of interconnect. DB2 pureScale exploits InfiniBand, which provides Remote Direct Memory Access (RDMA) support. The use of RDMA enables direct updates in member host memory without requiring member processor time. Each of the IB components and their part numbers are described in the following sections.
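Because the IB subnet is so performance-critical, it is worth checking port state on each host before running the cluster. The sketch below is a rough health check, assuming the ibstat utility installed with the AIX InfiniBand support is on the path; the output format varies by adapter and driver level, so the parsing here is an assumption to adapt.

```python
# Sketch: a rough InfiniBand port check, assuming the AIX ibstat utility is
# installed with the InfiniBand support. The output parsing is a guess and
# should be adapted to the format your driver level actually produces.
import subprocess

result = subprocess.run(["ibstat"], capture_output=True, text=True)
if result.returncode != 0:
    print("ibstat not available or failed:", result.stderr.strip())
else:
    active = [line.strip() for line in result.stdout.splitlines()
              if "Active" in line]
    print("Lines reporting an Active state:")
    for line in active:
        print(" ", line)
```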

Host Channel Adapters (HCA)

The IBM GX++ HCA is installed in the POWER servers that are used as part of the DB2 pureScale cluster. DB2 pureScale supports only the GX++ HCA adapters. The supported adapters and their feature codes are shown in Table 1.

Table 1. POWER system server models and supported HCA adapters
POWER system server model | HCA feature code
550, 750 | 5609
595, 795 | 1816
710, 730 | 5266
720, 740 | 5615
770, 780 | 1808

HCAs connected to the IB switch

The HCAs are connected to the IB switch using a 12x-to-4x IB cable, such as the 10-meter copper cable (FC 1854), or a 4x-to-4x IB cable, such as FC 3246 (the 4x-to-4x cable applies only to the FC 5266 HCA).

Multiple LPARs on a server connected to the IB fabric

There are multiple ways to connect the LPARs, depending on how many LPARs there are and how many HCAs are supported for that server model. Some of the options include the following:

POWER 750 with one LPAR
The HCA is assigned to the LPAR. One IB cable is connected to the IB switch.
POWER 750 with two LPARs
The HCA is logically partitioned using the POWER hypervisor, and each LPAR is assigned a portion of the HCA bandwidth and resources. One IB cable is connected to the IB switch.
POWER 770 with two LPARs
Two HCAs are installed, and each LPAR has a dedicated HCA. Two IB cables are connected to the IB switch.
POWER 770 with multiple LPARs
One or more HCAs are installed. Either every LPAR has a dedicated HCA, or some or all LPARs share the HCAs. The same number of IB cables as HCAs are connected to the IB switch.

InfiniBand switch

At the center of the InfiniBand fabric is the IB switch, which ties all of the DB2 pureScale servers into a subnet. The IBM line of 7874 IB switches provides a wide range of port counts from 24 to 240.

Table 2 lists the supported IBM POWER Systems InfiniBand switches.

Table 2. Supported IBM POWER Systems InfiniBand switches
Feature code | Supported switch
7874-024 | 1U, 24-port 4x DDR IB Edge Switch (QLogic 9024CU)
7874-040 | 4U, 48-port 4x DDR IB Director Switch (QLogic 9040)
7874-120 | 7U, 120-port 4x DDR IB Director Switch (QLogic 9120)
7874-240 | 14U, 240-port 4x DDR IB Director Switch (QLogic 9240)

Exploring sample deployment models

There are various combinations of servers for DB2 pureScale feature deployment. This section describes a few common deployment models.

  • Two-server deployment
  • Three-server deployment
  • Four-and-more-server deployment

Table 3 shows the configurations for the three models.

Table 3. Three configuration models
Component | 2-server model | 3-server model | 4-and-more-server model
Number of servers | 2 | 3 | 4 or more
Number of LPARs | 4 (2 on each server) | 5 (2 LPARs on two servers, 1 LPAR on one server) | 4 or more
IBM IB switch | Mandatory | Mandatory | Mandatory
IBM IB HCAs | Minimum 2 | Minimum 3 | Minimum 1 per server
IBM IB cables | Minimum 2 | Minimum 3 | Minimum 1 per server
FC SAN HBAs (dual port) | Minimum 2 | Minimum 3 | Minimum 2 per server
FC SAN switch | Optional | Optional | Optional
FC SAN cables | 4 (2 from each server) | Minimum 6 (2 from each server) | Minimum 2 from each server
FC SAN storage controller | Mandatory | Mandatory | Mandatory
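The minimum part counts in Table 3 can be restated as a small planning aid. The following sketch simply encodes the table above: given a planned server count, it reports the minimum LPAR, HCA, cable, and HBA counts for the matching model (the IB switch and storage controller are mandatory in every model, and the FC SAN switch is optional).

```python
# Sketch: derive minimum component counts from Table 3 for a planned server count.
# This simply restates the table above; it is a planning aid, not a sizing tool.

def minimum_components(servers: int) -> dict:
    if servers < 2:
        raise ValueError("A DB2 pureScale cluster needs at least two servers for HA")
    if servers == 2:
        return {"LPARs": 4, "IB HCAs": 2, "IB cables": 2,
                "dual-port FC HBAs": 2, "FC cables": 4}
    if servers == 3:
        return {"LPARs": 5, "IB HCAs": 3, "IB cables": 3,
                "dual-port FC HBAs": 3, "FC cables": 6}
    # 4-and-more-server model: per-server minimums from the last column of Table 3
    return {"LPARs": servers, "IB HCAs": servers, "IB cables": servers,
            "dual-port FC HBAs": 2 * servers, "FC cables": 2 * servers}

for n in (2, 3, 6):
    print(n, "servers ->", minimum_components(n))
```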

Two-server deployment

To maintain high availability (HA) characteristics, two servers are the minimum configuration. In such a configuration, each server would have two LPARs (one DB2 LPAR, one cluster caching facility LPAR). The loss of one physical server in this configuration enables the DB2 pureScale instance to continue to be available, because one DB2 member and one cluster caching facility will be available on the surviving physical server.

In this configuration, however, high availability is not preserved while any one server is down for a hardware failure or a hardware maintenance window. The IB cards can be either dedicated to each LPAR (if the server supports more than one HCA) or shared. Similarly, the HBAs can be either dedicated to each LPAR or shared using a Virtual I/O Server (VIOS). Each IB HCA is connected to the IB switch with IB cables, and the HBA adapters are connected to the FC SAN switch with FC SAN cables. Figure 2 shows this configuration.

Figure 2. A four LPAR, two POWER server configuration with cabling
Member 1 and Member 2 are connected through the IB switch; each is also connected through SAN FC switches to the storage controller.

Three-server deployment

The three-server deployment maintains high availability during a hardware failure or hardware maintenance of one server (for example, the server without a cluster caching facility LPAR). In this configuration, each server has one member LPAR (for a total of three members), and two cluster caching facility LPARs reside on two different servers. The IB and FC SAN connectivity is the same as for the two-server setup, except that the server hosting only the member LPAR has a dedicated HCA. Figure 3 shows this configuration.

Figure 3. A five LPAR, three POWER server configuration with cabling
Three member LPARs and two cluster caching facility LPARs are connected through the IB switch and to the storage controller.

Four-and-more-server deployment

The four-and-more-server deployment enables additional members and an option to isolate the cluster caching facilities on dedicated servers. The cluster is scaled out by simply adding servers, while making sure that storage input/output capacity is increased proportionally and that cluster caching facility LPAR capacity is increased accordingly.
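To make the scale-out step concrete, the sketch below shows how adding a member might be scripted. The db2iupdt -add options shown are an assumption based on the DB2 pureScale documentation and vary by DB2 release, and the host name, cluster interconnect netname, and instance name are placeholders; confirm the exact syntax in the Information Center for your DB2 level before using it.

```python
# Sketch: scripted scale-out of a pureScale instance by adding a member.
# The db2iupdt options are an assumption and differ between releases; the host,
# netname, and instance names are placeholders. Verify the syntax first.
import subprocess

NEW_MEMBER_HOST = "db2host5"          # hypothetical new server
NEW_MEMBER_NETNAME = "db2host5-ib0"   # hypothetical IB interface name
INSTANCE = "db2sdin1"                 # hypothetical pureScale instance name

cmd = ["db2iupdt", "-add", "-m", NEW_MEMBER_HOST,
       "-mnet", NEW_MEMBER_NETNAME, INSTANCE]
print("Running:", " ".join(cmd))       # log the command for the change record
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print("db2iupdt failed:", result.stderr)
```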

The configuration is the same as for the three-server deployment, except that an additional LPAR and member are added on each additional server. It is also possible to deploy one LPAR per server, in which case each DB2 pureScale member and cluster caching facility uses a dedicated HCA and HBA. Figure 4 shows this configuration.

Figure 4. A four-and-more POWER servers configuration with cabling
Member LPARs on each of four or more servers are connected through the IB switch and through SAN FC switches to the storage controller.

Conclusion

The IBM DB2 pureScale feature and IBM POWER servers provide a tightly coupled solution that addresses business growth and continuous availability needs. This article has shown various sample deployment models, all built from industry-standard components. These deployment models illustrate a flexible infrastructure that can range from a 2-member cluster up to a 128-member cluster, and thus satisfy a wide range of business requirements.

Resources

Learn

Get products and technologies

  • Build your next development project with IBM trial software, available for download directly from developerWorks.

Discuss
