Set up DB2 for Linux, UNIX, and Windows for high availability using Microsoft Cluster Server

Virtualize DB2 on MSCS using VMware

IBM® DB2® for Linux®, UNIX®, and Windows® has a number of options to provide high availability for production systems such as high availability and disaster recovery (HADR), DB2 pureScale, and DB2 MSCS. This article shows you how to set up a DB2 cluster in a virtual Microsoft Windows environment, including how to quickly set up a training and testing system in a virtual environment such as the cloud. This article will also help you to learn and practice DB2 MSCS without special hardware. Finally, simple troubleshooting methods are introduced to help you with common issues. This article is an update of a previously published white paper on IBM.com.

Jack Xia (zxia@au1.ibm.com), MDM Advance Support Engineer, IBM

Jack Xia is the team leader for IBM Master Data Management Technical Support, AP team. He has 10 years of experience in the database and software technical support field. He is an experienced support engineer in areas of problem determination and problem solving, with extensive skills in DataStage, DB2 for Linux, UNIX, and Windows, IBM Master Data Management, and IBM Information Server. He joined IBM in 2002 and holds a Master's degree in Computer Science.



Anthony Corrente (Anthony.Corrente@au1.ibm.com), Rational Technical Support Manager, IBM

Anthony Corrente is a Technical Support Manager for IBM Rational Client Support AP. He has 18 years of experience in the database and software technical support field. His areas of expertise include problem determination, down systems support, systems scripting, and heterogeneous systems interoperability, with extensive skills in Information Server, DataStage, DB2 for Linux, UNIX, and Windows, as well as UniData and UniVerse. He has written a Redbook titled "Deploying a Grid Solution with the IBM InfoSphere Information Server". He holds a degree in Computer Science and Pure Mathematics from the University of Sydney, Australia.



06 December 2012

Introduction

DB2 for Linux, UNIX, and Windows has been recognized for providing very resilient data management to achieve high data integrity, reliability, consistency, and availability. It has the following options to provide high availability for production systems.

  • DB2 high availability and disaster recovery (HADR) is a data replication feature that provides a high availability solution for both partial and complete site failures. HADR protects against data loss by replicating data changes from a source database (the primary) to a target database (the standby). HADR is available in all editions of DB2 10, and supports up to three standby nodes for high availability and disaster recovery.
  • DB2 pureScale, an optional feature, offers a shared-disk clustering technology that helps provide continuous availability and scalability.
  • DB2 MSCS is another option for users who require a high availability solution in a Windows environment.

This article concentrates on the last option, which can be used for training, development, or testing scenarios running VMware images on a single physical Microsoft Windows machine. Figure 1 shows the typical topology of DB2 Enterprise Server Edition (ESE) and MSCS.

Figure 1. Typical topology
Typical topology of DB2 ESE and MSCS: it contains at least two nodes with shared disks, and each node runs a DB2 MSCS instance.

Environment used in this article

In this article, a single physical personal computer was used. Three virtual systems running Windows Server 2008 were created in this environment as follows.

  • One of the virtual machines is the domain controller and Microsoft iSCSI Software Target server.
  • The other two are domain member servers with iSCSI initiator installed.

Note: Only the Windows Server 2008 Enterprise and Datacenter editions support failover clustering.

Figure 2 shows the db2cls.com domain consisting of two virtual Windows Server 2008 Enterprise servers (HOST1 - cluster node 1, HOST2 - cluster node 2) and one virtual Windows Server 2008 Datacenter server (HOST0 - domain controller), all connected through a TCP/IP network.

Figure 2. Environment used in this article
Environment used in this article: a single domain with three virtual Windows servers running on one physical machine, with shared disks created on one of the virtual machines using iSCSI technology.

Note: The following tables are used as reference throughout this article. Please refer to these details in subsequent sections.

Table 1. Physical machine one
Component            Hardware used in this article
CPU                  Intel Core i7 Q720 1.6GHz
Memory               8GB
Disk                 500GB
Network              1 network card
Operating system     Windows 7 Professional, 64 bit with SP1
Table 2. Virtual machine one
Item                 Detail
Role                 Domain controller and Microsoft iSCSI Software Target server
Domain name          db2cls.com
Memory               1GB
Disk                 1 local OS disk of 15GB
Operating system     Windows Server 2008 R2 Datacenter, 64 bit
IP addresses         Total of 2 IP addresses
  • 192.168.64.10
  • 192.168.109.1
Table 3. Virtual machine two
Item                 Detail
Role                 Member of domain: db2cls.com
Cluster name         db2cluster1
Memory               1GB
Disks                1 local OS disk of 20GB, plus 2 shared virtual disks:
  • Virtual disk 1 for MSCS quorum of 512MB
  • Virtual disk 2 for DB2 instance and database of 4GB
Operating system     Windows Server 2008 R2 Enterprise, 64 bit
IP addresses         Total of 2 IP addresses
  • 192.168.64.11
  • 192.168.109.2
Table 4. Virtual machine three
Item                 Detail
Role                 Member of domain: db2cls.com
Cluster name         db2cluster1
Memory               1GB
Disks                1 local OS disk of 20GB, plus 2 shared virtual disks:
  • Virtual disk 1 for MSCS quorum of 512MB
  • Virtual disk 2 for DB2 instance and database of 4GB
Operating system     Windows Server 2008 R2 Enterprise, 64 bit
IP addresses         Total of 2 IP addresses
  • 192.168.64.12
  • 192.168.109.3
Table 5. MSCS cluster
Item                     Detail
MSCS cluster name        db2cluster1.db2cls.com
Quorum configuration     Node and Disk Majority
Cluster IP address       192.168.64.100
Disks                    One shared disk for quorum
Table 6. DB2 cluster
Item                     Detail
DB2 MSCS cluster name    db2cluster1.db2cls.com
DB2 cluster instance     db3
Resource group name      db2Group
DB2 cluster host name    db2clus
Disks                    1 for both db3 instance and database
IP address               192.168.64.89

Set up virtual machines and domain

  1. Install VMware player version 4 on your physical machine as per the details in Table 1.
  2. Create three Windows Server 2008 R2 VMware virtual machine images as per Table 2, Table 3, and Table 4.
  3. Create the db2cls.com domain on those three Windows Server 2008 R2 VMware images (a command-line sketch follows this list). From HOST0, the two member servers (HOST1 and HOST2) can be found in the Active Directory Users and Computers window, as shown in Figure 3.
    Figure 3. db2cls.com domain with two nodes HOST1, HOST2
    domain db2cls.com with two nodes HOST1, HOST2 added into it.
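
The domain creation in step 3 can also be done from the command line. The following is a minimal sketch, assuming the built-in Administrator account of the new db2cls.com domain and that the netdom tool (part of the Active Directory management tools) is available on the member servers; the dcpromo wizard prompts vary by environment, so treat this as an illustration only.

    REM On HOST0: promote the server to a domain controller for db2cls.com
    dcpromo

    REM On HOST1 and HOST2: join the new domain, then reboot
    netdom join %COMPUTERNAME% /domain:db2cls.com /userd:db2cls\Administrator /passwordd:*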

Set up shared disks for the cluster on HOST0

For a typical MSCS cluster, at least one shared disk is required for all nodes in the cluster. In this VMware environment, each normal VMware virtual disk can only be accessed by a single VMware image. The iSCSI storage overcomes this limit, so in this article Microsoft iSCSI technology is used to create an iSCSI target with two virtual disks on HOST0, and HOST1 and HOST2 are then configured to connect to it. Two virtual disks are needed: one for the cluster quorum, and one for the DB2 instance and database.

Microsoft iSCSI Target software can be downloaded from the Resources section. The version used in this article is 3.3.

  1. As shown in Figure 4, a single iSCSI target host0-disk0 with two virtual disks is used.
    Figure 4. Microsoft iSCSI target on HOST0
    Microsoft iSCSI target on HOST0, which is the host of two shared disks for HOST1 and HOST2.
  2. Right-click iSCSI Targets and click Create iSCSI Target, as shown in Figure 5.
    Figure 5. Creating a single iSCSI Target
    Create a single iSCSI Target, later two virtual disks will be added into it.
  3. Type host0-disk0 in the iSCSI target name field, and type clustOS in the Description field, as shown in Figure 6.
    Figure 6. Set the iSCSI target name as host0-disk0
    Set the iSCSI target name as host0-disk0.
  4. Make sure the IP addresses for both HOST1 and HOST2 are added, which allows these two servers to access the virtual disks of the target, as shown in Figure 7.
    Figure 7. Add the IP addresses of HOST1, HOST2 to the target
    Add the IP addresses of HOST1, HOST2 to the target, this allows HOST1 and HOST2 to access the shared disks in this target.
  5. Create two virtual disks, as shown in Figure 8. Name one Virtual Disk 0 with a size of 500MB, and the other Virtual Disk 1 with a size of 4GB.
    Figure 8. Create virtual disk for the target
    Create virtual disk for this target.
  6. For the first virtual disk, Virtual Disk 0, add the description clusterOS, which is used by the cluster itself. Then specify F:\share-0.vhd, as shown in Figure 9.
    Figure 9. Set the virtual disk as local file share-0.vhd
    Set the virtual disk as local file called share-0.vhd.
  7. For the second virtual disk, Virtual Disk 1, add the description DB2 Shared Disk, which is used by db2 instance and database, and then specify G:\DB2Data\DB2Disk1.vhd, as shown in Figure 10.
    Figure 10. Local file DB2Disk1.vhd for Virtual Disk 1
    Local file DB2Disk1.vhd for the second virtual disk Virtual Disk 1
  8. After Virtual Disk 1 is created, there are two files created on HOST0, as shown in Figure 11.
    Figure 11. Two files created for Virtual Disk 1
    After Virtual Disk 1 created, there are two files created on HOST0.
  9. As shown in Figure 12, the two virtual disks are ready for the cluster.
    Figure 12. Two virtual disks created under host0-disk0
    Two virtual disks created under this target.

Set up the shared disks on HOST1 and HOST2

  1. Select the Microsoft iSCSI Initiator from the Start menu of the Windows operating system, as shown in Figure 13.
    Figure 13. Start iSCSI Initiator on HOST1
    Start iSCSI Initiator on HOST1.
  2. In the Connect To Target dialog box, make sure that both check boxes are selected, and then click OK, as shown in Figure 14.
    Figure 14. Connect To Target dialog box
    This figure shows the Connect To Target dialog box.
  3. Type host0 in the Target field, and then click Quick Connect, as shown in Figure 15.
    Figure 15. Connecting to the target on HOST0
    Connecting to the target on HOST0
  4. Upon successful connection, click Computer Management -> Storage -> Disk Management; two new disks, Disk 1 and Disk 2, are added to the system, as shown in Figure 16.
    Figure 16. Two disks G: and E: added on HOST1
    On HOST1, there are two disks G: and E: added.
  5. Change Disk 1 and Disk 2 from offline to online, initialize them, and perform a quick format using the NTFS file system (a command-line alternative using diskpart is sketched after this list).
  6. Repeat Steps 1 through 4 to connect HOST2 to the same two shared disks; they are already initialized and formatted.
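
As an alternative to the Disk Management GUI in step 5, the two shared disks can be brought online and formatted from an elevated command prompt with diskpart. This is a minimal sketch that assumes the iSCSI disks enumerate as Disk 1 and Disk 2 and that the drive letters E: and G: are free; confirm the disk numbers with list disk before running it.

    diskpart
    list disk
    select disk 1
    online disk
    attributes disk clear readonly
    create partition primary
    format fs=ntfs quick label=ClusterQuorum
    assign letter=E
    select disk 2
    online disk
    attributes disk clear readonly
    create partition primary
    format fs=ntfs quick label=DB2Data
    assign letter=G
    exit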

Set up a MSCS cluster on HOST1 and HOST2

  1. Make sure that the Failover Clustering feature is installed on HOST1 and HOST2. If it is not, you can install it from Server Manager, as shown in Figure 17.
    Figure 17. Installing Failover Cluster Manager
    Install Failover Cluster Manager if it is not installed. It is under Start -> Administrative Tools -> Server Manager.
  2. From the Select one or more features to install on this server list, ensure that Failover Clustering is selected or installed, as shown in Figure 18.
    Figure 18. Failover Clustering
    Make sure 'Failover Clustering' is checked or installed.
  3. In Failover Cluster Manager, click Validate a Configuration to validate that all resources required for the MSCS cluster are available, as shown in Figure 19 (a PowerShell equivalent for validating and creating the cluster is sketched after this list).
    Figure 19. Validating all resources for a cluster
    Click Validate a Configuration first to check we have all the resources for MSCS
  4. Add both nodes for the cluster, and then click Next, as shown in Figure 20.
    Figure 20. Adding HOST1 and HOST2
    Add HOST1 and HOST2 for the checking.
  5. Ensure that Run all tests (recommended) is selected, and then click Next, as shown in Figure 21.
    Figure 21. Run all tests
    Run all the tests.
  6. On the Summary page, ensure that the results contain no errors. If there are errors or warnings, click View Report to examine the details, as shown in Figure 22.
    Figure 22. Ensuring that the tests are passed
    Make sure the tests are passed for MSCS.
  7. Create a cluster called db2cluster1.db2cls.com, using HOST1 and HOST2. Figure 23 shows servers host1.db2cls.com and host2.db2cls.com. Click Next.
    Figure 23. Add HOST1 and HOST2 for the cluster
    Add HOST1 and HOST2 for the cluster.
  8. Assign cluster IP address 192.168.64.100, as shown in Figure 24.
    Figure 24. Assigning IP for the cluster
    Assign IP for this cluster.
  9. Assign a cluster quorum disk. On the right panel, click More Actions, as shown in Figure 25, and then select Configure Cluster Quorum Settings.
    Figure 25. Assigning cluster quorum disk
    Assign the cluster quorum disk from More Actions on the right panel.
  10. Click Node and Disk Majority (recommended for your current number of nodes), and then click Next, as shown in Figure 26.
    Figure 26. Node and Disk Majority
    Select quorum Type as 'Node and Disk Majority'.
  11. Select the Cluster Disk 1 check box as the quorum disk for cluster db2cluster1.db2cls.com, and then click Next, as shown in Figure 27.
    Figure 27. Select Cluster Disk 1
    Choose 'Cluster Disk 1' for the quorum disk.
  12. Click Failover Cluster Manager -> db2cluster1.db2cls.com -> Storage to check that the shared disks can fail over between HOST1 and HOST2. There are two disks available in the cluster. These disks are not local disks and can be accessed by both HOST1 and HOST2, as shown in Figure 28.
    Figure 28. Two cluster disks are shown as online from HOST1
    Two cluster disks are shown as online and from HOST1.
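
If you prefer the command line, the validation, cluster creation, and quorum configuration above can also be performed with the FailoverClusters PowerShell module that ships with Windows Server 2008 R2. The following is a minimal sketch using the host names and IP address from the tables above; it is an illustration under those assumptions, not a replacement for the wizard.

    Import-Module FailoverClusters

    # Validate both nodes (equivalent to clicking Validate a Configuration)
    Test-Cluster -Node host1,host2

    # Create the db2cluster1 cluster with its static IP address
    New-Cluster -Name db2cluster1 -Node host1,host2 -StaticAddress 192.168.64.100

    # Configure Node and Disk Majority quorum on the small shared disk
    Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"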

Set up DB2 MSCS

  1. Create a standalone instance named db3 on HOST1, as shown in Figure 29.
    Figure 29. Standard instance db3 on HOST1
    On HOST1, create a standard instance: DB3.
  2. Create a simple db2mscs.cfg configuration file, as shown in Figure 30 (the complete file is also reproduced in the sketch after this list).
    Figure 30. A simple db2mscs.cfg configuration file
    A simple db2mscs.cfg configuration file

    See the following details about the parameters in this file.

    • DB2_INSTANCE = DB3

      The existing instance to be migrated into the cluster.

    • CLUSTER_NAME=db2cluster1

      Cluster name (db2cluster1.db2cls.com).

    • GROUP_NAME=db2Group

      The new resource group name that will hold all of the DB2 cluster resources.

    • IP_NAME=db2cluster1

      The host name of the DB2 cluster. It must not already exist in the network.

    • IP_ADDRESS=192.168.64.89

      The IP address of the DB2 cluster. It must not already be in use on the network.

    • IP_SUBNET=255.255.255.0

      Subnet mask for the IP_ADDRESS above.

    • IP_NETWORK=Cluster Network 1

      The network existing in the cluster, as shown in Figure 31.

      Figure 31. Ensuring that Cluster Network 1 exists in the cluster networks list
      Ensuring that Cluster Network 1 exists in the cluster networks list
    • DISK_NAME = Cluster Disk 2

      Cluster disk, as shown in Figure 32.

      Figure 32. Disk existing in the cluster storage list
      Disk existing in the cluster storage list
    • INSTPROF_DISK = Cluster Disk 2

      The cluster disk that will hold the DB2 instance profile directory.

  3. Run the db2mscs command to migrate the instance into the cluster. Use the following command, as shown in Figure 33: db2mscs -f c:\download\db2mscs.cfg
    Figure 33. Run db2mscs command to migrate db3 from standard instance to db2mscs instance
    Run db2mscs command to migrate db3 from standard instance to db2MSCS instance.
  4. When the db2mscs command is completed successfully, a new application called db2Group shows up under Failover Cluster Manager -> db2cluster1.db2cls.com -> Services and applications, as shown in Figure 34.
    Figure 34. Creation of db2Group application on successful completion of db2mscs command
    After db2mscs command runs successfully, a new application called db2Group shows up under 'Services and applications'.
  5. The new application db2Group is currently running on HOST1 and this group has the following three resources.
    • One disk: Cluster Disk 2
    • One IP: 192.168.64.89
    • One db2 cluster instance: DB3
  6. On the DB2Data (G:) disk, you can see that a folder DB2PROFS is created; it holds the instance profile of db3. The instance profile is moved from the C: drive to the G: drive, under the G:\DB2PROFS folder, as shown in Figure 35.
    Figure 35. Folder DB2PROFS created on DB2Data(G:)
    On G:, there is a folder called DB2PROFS created, it is the instance folder of DB3.
  7. You can also use the db2set -all command to confirm that DB2INSTPROF now points to G:\DB2PROFS on the G: drive, as shown in Figure 36.
    Figure 36. db2set -all command to show DB2INSTPROF on G:
    db2set -all shows DB2INSTPROF on G: too.
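
To recap steps 1 through 3 in command form, the following sketch shows the complete db2mscs.cfg file assembled from the parameter values above, together with the instance creation and migration commands. The db2icrt options (the ese instance type and the DB2CLS\db2admin service account with a placeholder password) are assumptions for illustration only; adjust them to your environment.

    Contents of c:\download\db2mscs.cfg:

        DB2_INSTANCE = DB3
        CLUSTER_NAME=db2cluster1
        GROUP_NAME=db2Group
        IP_NAME=db2cluster1
        IP_ADDRESS=192.168.64.89
        IP_SUBNET=255.255.255.0
        IP_NETWORK=Cluster Network 1
        DISK_NAME = Cluster Disk 2
        INSTPROF_DISK = Cluster Disk 2

    Commands run from a DB2 command window on HOST1:

        REM Create the standalone instance DB3 (service account and password are placeholders)
        db2icrt -s ese -u DB2CLS\db2admin,password DB3

        REM Migrate DB3 into the cluster, then verify the new instance profile location
        db2mscs -f c:\download\db2mscs.cfg
        db2set -all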

Failover test

After the DB2 MSCS cluster is built, you can run some quick tests to check whether it was created successfully.

  1. Right-click db2Group, click Move this service or application to another node, and then click 1 – Move to node host2 to move db2Group from HOST1 to HOST2, as shown in Figure 37.
    Figure 37. Moving db2Group to node HOST2
    Move db2Group to node HOST2.
  2. Now the owner of db2Group is changed to HOST2, as shown in Figure 38.
    Figure 38. db2Group running on HOST2
    Now db2Group is running on HOST2.
  3. On HOST2, as shown in Figure 39, you can see that before the failover the G:\ drive is controlled by HOST1 and is not visible on HOST2, and that after the failover G:\ and its data are visible on HOST2.
    Figure 39. Before and after failover
    Before the failover, G: is controlled by HOST1 and HOST2 cannot see it. After the failover, HOST2 can see it.
  4. On HOST2, create a sample database using the following db2 command: db2 create database sample on G:, as shown in Figure 40.

    Figure 40. Creating a sample database on G:
    Create a database called sample on G:, we can see more folders and files created on G:.
  5. A new folder called DB3 is created under G:\ where the database is located, as shown in Figure 41.
    Figure 41. Folder DB3 where the database is located
    G:\DB3 is where the database is located.
  6. Now, HOST1 can no longer access the G:\ drive. For HOST1 to access this drive, you have to fail over from HOST2 back to HOST1, as shown in Figure 42.
    Figure 42. HOST1 losing control of G:
    HOST1 loses control of G: and can no longer see it.
  7. Perform the failover from HOST2 back to HOST1, as shown in Figure 43.
    Figure 43. Failover back to HOST1
    Failover back to HOST1 again.
  8. Disk G: is moved to HOST1, and the database SAMPLE can now be accessed locally from HOST1, while G:\ is no longer visible on HOST2. On HOST1, the db2 list db directory command returns different results before and after the failover, as shown in Figure 44. Because clients connect to the DB2 cluster IP address rather than to an individual host, the database stays reachable through that address after a failover (see the catalog sketch after this list).
    Figure 44. Disk G: moved to HOST1
    On HOST1, two db2 list db directory commands have different results.
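
To see the benefit of the highly available IP address from a client, catalog the database against the cluster address 192.168.64.89 rather than against HOST1 or HOST2 directly. This is a minimal sketch that assumes the db3 instance listens on the default port 50000; check the actual port with db2 get dbm cfg (SVCENAME) before using it.

    db2 catalog tcpip node db2clus remote 192.168.64.89 server 50000
    db2 catalog database sample as sample at node db2clus
    db2 terminate
    db2 connect to sample user db2admin

After db2Group fails over to the other node, the same connect command continues to work, because the IP address moves together with the resource group.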

Troubleshooting

Check the following to help you troubleshoot any problems.

  1. First of all, note that only DB2 Version 10, 64 bit, is supported on a Windows Server 2008 cluster.
  2. Make sure that the instance owner, DB2CLS\db2admin, belongs to the local Administrators group on both HOST1 and HOST2, as shown in Figure 45.
    Figure 45. Check Administrators group
    Check that db2cls\db2admin belongs to the local Administrators group.
  3. Always check to see if all resources are available at the MSCS level, and that they can failover between HOST1 and HOST2. Also make sure that the iSCSI target is connected, as shown in Figure 46.
    Figure 46. Resource availability
    On HOST1 and HOST2, make sure the iSCSI target is connected all the time.
  4. Check Cluster Events from the Failover Cluster Manager panel, as shown in Figure 47.
    Figure 47. Check Cluster Events
    Check Cluster Events from the Failover Cluster Manager Panel.
  5. Use the cluster command to collect Windows cluster-level traces, as shown in Figure 48.
    Figure 48. Use cluster command
    Use cluster command to collect Windows cluster level traces.
    Microsoft provides a cluster log to capture any cluster activities. You can view more details from the Resources section.
  6. By default, the cluster.log is generated under C:\Windows\Cluster\Reports, as shown in Figure 49.
    Figure 49. Finding cluster.log
    Find cluster.log under c:\windows\Cluster\Reports.
  7. Use the following command syntax to collect the db2mscs trace, which is also shown in Figure 50: db2mscs -f [db2mscs configuration file] -d:[trace filename]. A summary of useful troubleshooting commands follows this list.
    Figure 50. Generation of db2mscs.trc
    db2mscs.trc is generated.
    Note that db2mscs trace can be difficult to read and understand. This trace may be requested by an IBM DB2 support engineer for further analysis and research.
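
The troubleshooting steps above can also be driven from the command line. The following is a short summary sketch; the cluster log /g command and the C:\Windows\Cluster\Reports location apply to Windows Server 2008 failover clustering, and the trace file name is only an example.

    REM Add the instance owner to the local Administrators group (run on HOST1 and HOST2)
    net localgroup Administrators DB2CLS\db2admin /add

    REM Generate cluster.log on every node under C:\Windows\Cluster\Reports
    cluster log /g

    REM List the cluster resources and their current states
    cluster res

    REM Collect a db2mscs trace for IBM support
    db2mscs -f c:\download\db2mscs.cfg -d:c:\download\db2mscs.trc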

Conclusion

This article showed you how to set up a DB2 cluster in a virtual environment, including how to quickly set up a training and testing system in a virtual environment such as the cloud. This article also helped you to learn and practice DB2 MSCS without special hardware. Finally, you learned simple troubleshooting methods to help you with common issues.

Resources

Learn

Get products and technologies

Discuss
