Install the DB2 UDB data partitioning feature on Linux

How to configure DB2 UDB Enterprise Server Edition for a partitioned environment

This article provides a straightforward example of an actual installation of IBM® DB2® Universal Database™ Enterprise Server Edition V8.1 with the data partitioning feature (DPF) on Red Hat Linux Advanced Server 2.1 using the scripted db2_install method. Using db2_install is convenient for installing DB2 through a telnet session. Learn the steps that form a good basis for installing and configuring DB2 with DPF in large multi-server and multi-partitioned environments.

Priti Desai (pritid@us.ibm.com), DB2 UDB Consultant, IBM US Ltd.

Priti Desai is a DB2 Technical Consultant with the Information Management Partner Enablement organization at IBM Silicon Valley Lab, where her main focus is assisting IBM Business Partners with high-water benchmarks, performance tuning, application migration, and training.



Fraser McArthur (fgmcarth@ca.ibm.com), DB2 UDB Consultant, IBM Canada

Fraser McArthur is a DB2 Technical Consultant with the Information Management Partner Enablement organization at the IBM Toronto Lab, where he has worked for the last five years. He focuses on assisting IBM Business Partners with performing application migrations and performance tuning.



14 April 2005

Overview

To enable scaling of a single database to more than one server, you will need the database partitioning feature (DPF) of IBM’s DB2 UDB V8.1 Enterprise Server Edition (DB2 ESE V8.1). In addition, to enable DPF in DB2 UDB ESE, you will need a valid DPF license. DPF offers the necessary scalability to distribute a large database over multiple partitions (logical and physical) using a shared-nothing architecture. DPF can be beneficial for both standalone SMP servers and environments consisting of more than one server. With the DPF "divide and conquer" processing, scalability can be enhanced within a single server (scaling up) and across a cluster of servers (scaling out).

The goal of this article is to provide straightforward step-by-step instructions on how to install and configure DB2 ESE V8.1 with DPF on Red Hat Linux Advanced Server (AS) 2.1 using the scripted db2_install approach (as opposed to the GUI Setup wizard). This article is geared towards the novice user, but can be used by IT architects and specialists already familiar with this architecture. The db2_install utility installs the DB2 file sets, but does not create an instance, users, or perform any other configuration tasks performed by the DB2 Setup wizard. Some people prefer to use db2_install when installing DB2 on a large, complex database system that has special requirements or when installing DB2 using a telnet session to the DB2 server.

Following the steps below, you could potentially design and set up as many as 1000 partitions, but in our setup we used two Intel-based IBM Blade Server machines to set up two physical partitions (and two logical partitions on each physical server).

Note: The following easy-to-follow step-by-step instructions were tested at a partner’s site during a skill transfer engagement.

Figure 1. Partitioned database across two machines

Operating system

Red Hat Advanced Server 2.1 (RHAS)

Hardware

Two 2-way Intel-based IBM Blade Servers:

  • HS20 Blade Server
  • 2 GB RAM
  • 2 x 2.4 GHz
  • One machine as NFS server (ServerA)
  • Second machine as NFS client (ServerB)

IBM middleware

DB2 UDB Enterprise Server Edition V8.1 with FixPak 5a

File system configurations

Two 2-way Intel-based IBM Blade Servers configured for a DB2 V8.1 partitioned environment:

  • For partitioned environments, it is recommended that you create a separate DB2 home file system on one of the machines in the cluster to be used as the instance home directory. This file system is shared among all machines in the cluster via NFS; that is, NFS-exported from the NFS server machine (in our setup, ServerA) and NFS-mounted on the remaining machines (in our setup, ServerB). In our setup we created the file system /db2home with mount point /db2home on ServerA, which is the DB2 instance owner's home directory.
  • For partitioned database systems, there should be a separate database file system on each machine that participates in the partitioned database. In our setup, we created a file system called /data1 with mount point /data1 on ServerA and ServerB.
  • The DB2 install and fixpak images must be made available to all the participating machines. We simply created a directory called images within the /db2home file system on ServerA. This directory will then contain the DB2 install image and fixpak and will be made available to all participating machines through NFS.

Installation and configuration instructions

The following were the two machines used in our setup:

  • ServerA: The instance-owning machine (there can be only one instance-owning machine), where ServerA is the actual hostname of the machine.
  • ServerB (and each remaining machine in the cluster): The participating machine(s), where ServerB is the actual hostname of the machine.

What follows are the steps used to successfully install and run DB2 in the DPF environment.

Step 1 - Verify the file system mounts

As root, verify that the required file systems are mounted. We assume that the file systems were created ahead of time (see File system configurations, above).

ServerA:

  • Verify that the file system /db2home, which was created ahead of time, is indeed mounted. Issue the following command and ensure that /db2home appears in the 'Mounted on' column of the output:
    • df

ServerA and ServerB (and each remaining machine in the cluster):

  • Verify that the file system /data1, which was created ahead of time, is mounted. Issue the following command and ensure that /data1 appears in the 'Mounted on' column of the output:
    • df
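The two df checks above can be folded into a small guard. The require_mount helper below is a sketch of our own (the function name and wrapper are not part of the article); it succeeds only when the given path appears in the 'Mounted on' column of df output.

```shell
# Hypothetical helper: fail fast when a required file system is missing
# from the 'Mounted on' column of `df -P` output.
require_mount() {
  if df -P | awk 'NR > 1 {print $6}' | grep -qx "$1"; then
    echo "$1 is mounted"
  else
    echo "$1 is NOT mounted" >&2
    return 1
  fi
}

# On ServerA:        require_mount /db2home
# On every machine:  require_mount /data1
```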

Step 2 - Required Files and packages

As root, make the appropriate DB2 images available and verify that Red Hat has all of the required packages installed.

ServerA:

  1. Copy the Install Image C48THML.tar file for DB2 UDB Enterprise Server Edition V8.1 Linux for Intel 32-bit to the /db2home/images directory.
  2. Copy the PSoft_10533_v81_fp5a_linuxintel_32.tar file for FixPak 5a to the /db2home/images directory.

ServerA and ServerB (and each remaining machine in the cluster)

  1. Verify that the required Red Hat packages are installed by issuing the following commands:
    • rpm -qa | grep pdksh
    • rpm -qa | grep rsh
    • rpm -qa | grep nfs

If any of the above packages are not installed, you should be able to download the missing ones from http://fr.rpmfind.net/linux/RPM/. You can then install each one using a command similar to rpm -ivh pdkshxxxxxx.rpm, where pdkshxxxxxx.rpm is the package file name. Otherwise, contact your Linux system administrator.
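Rather than eyeballing three separate rpm -qa | grep pipelines, the check can be scripted. The missing_pkgs helper below is a hypothetical sketch (the function name is our own, and the exact package names may vary by distribution); it reads an rpm -qa listing on stdin and prints each required package prefix that is absent.

```shell
# Hypothetical helper: print each required package prefix that does not
# appear in an `rpm -qa` listing supplied on stdin.
missing_pkgs() {
  installed=$(cat)                      # capture the rpm -qa listing
  for pkg in "$@"; do
    printf '%s\n' "$installed" | grep -q "^$pkg" || echo "$pkg"
  done
}

# Typical use on each machine (as root):
#   rpm -qa | missing_pkgs pdksh rsh nfs
```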

Step 3 - Modifying the kernel parameters

As root, modify the kernel parameters on both ServerA and ServerB (and each remaining machine in the cluster) by adding the following entries to the default system control configuration file, /etc/sysctl.conf:

  • kernel.msgmni = 1024
  • kernel.sem = 250 256000 32 1024

where the four kernel.sem values are SEMMSL (max semaphores per array), SEMMNS (max semaphores system-wide), SEMOPM (max operations per semop call), and SEMMNI (max number of arrays). Note that max semaphores system-wide = max number of arrays x max semaphores per array (250 x 1024 = 256000).

Now you need to load in the newly updated sysctl settings from the default file /etc/sysctl.conf. Issue the following command to do this:

  • sysctl -p
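Since this edit must be repeated on every machine, it helps to make it idempotent. The add_db2_sysctl function below is a sketch of our own (the name is hypothetical); it appends the two settings only if they are not already present, so re-running it is safe. The file defaults to /etc/sysctl.conf, and a different path can be passed for testing.

```shell
# Hypothetical helper: append the DB2 kernel settings to a sysctl
# configuration file only when they are not already present.
add_db2_sysctl() {
  conf=${1:-/etc/sysctl.conf}
  grep -q '^kernel\.msgmni' "$conf" 2>/dev/null || \
    echo 'kernel.msgmni = 1024' >> "$conf"
  grep -q '^kernel\.sem' "$conf" 2>/dev/null || \
    echo 'kernel.sem = 250 256000 32 1024' >> "$conf"
}

# As root on each machine, apply and then load the settings:
#   add_db2_sysctl && sysctl -p
```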

Step 4 - Starting NFS

As root, verify that NFS is running on both ServerA and ServerB (and each remaining machine in the cluster). Issue the following command:

  • showmount -e hostname

where hostname is the actual hostname of the machine. In our setup, our two hostnames were ServerA and ServerB.

Entering the showmount command without the hostname parameter checks the local system. If NFS is not active, you will receive a message similar to the following:

  • showmount: hostname: RPC: Program not registered

When NFS is running, the showmount command lists the exported file systems. If the command fails, the NFS server may not have been started. Run the following command as root on the NFS server to start the server manually:

  • /etc/rc.d/init.d/nfs restart

Step 5 - Enable rsh

As root, enable rsh on both ServerA and ServerB (and each remaining machine in the cluster).

  1. Edit the /etc/xinetd.d/rsh file and change the disable flag to no
  2. Restart the xinetd server by entering:
    • /etc/init.d/xinetd restart

Step 6 - Configuring the NFS server

As root on ServerA, perform the following:

  1. Ensure the /db2home directory is mounted by issuing:
    • mount /db2home
  2. Ensure that your new file system will be mounted at startup by checking that /etc/fstab contains an entry for it. If there is no entry for /db2home, add one. In our setup, we used:
    • /dev/hda7 /db2home ext3 defaults 1 2
  3. Add an entry to the /etc/exports file to automatically export the NFS file system at boot time. Note that there must be no space between a hostname and its parenthesized options; a space would apply those options to all hosts. In our setup, our entry was:
    • /db2home ServerA(rw,sync,no_root_squash) ServerB(rw,sync,no_root_squash)
    If you had more machines, then your entry would look more like:
    • /db2home ServerA(permissions) ServerB(permissions) ServerC(permissions)...
    By default, exports are read-only (ro) with root_squash applied. The root_squash setting means that the root user on the client is not treated as root when accessing files on the NFS server. Although this mode of operation is typically desirable in a production environment, you need to turn it off in order to create users on each of the remaining machines in the cluster. To do this, specify no_root_squash in the permissions section.
  4. Export the NFS directory by running:
    • /usr/sbin/exportfs -a
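Because the exports line grows with the cluster, it can be generated rather than typed. The exports_entry function below is a sketch of our own (the name and wrapper are assumptions); it emits a single /etc/exports line granting rw,sync,no_root_squash to each listed host, with no space before the parenthesized options.

```shell
# Hypothetical helper: build the /etc/exports entry for /db2home for any
# list of client hosts. There is deliberately no space between a hostname
# and its options; a space would apply the options to all hosts.
exports_entry() {
  line="/db2home"
  for host in "$@"; do
    line="$line $host(rw,sync,no_root_squash)"
  done
  echo "$line"
}

# Review, then append as root:  exports_entry ServerA ServerB >> /etc/exports
```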

Step 7 - Configuring the NFS client

As root on ServerB (and each remaining machine in the cluster) perform the following:

  1. Create the new directory that will be mapped to the NFS shared directory /db2home by issuing:
    • mkdir /db2home
  2. Add an entry to /etc/fstab to NFS-mount the file system automatically at boot time:
    • ServerA:/db2home /db2home nfs rw,timeo=300,retrans=5,hard,intr,bg,suid
    where,
    • ServerA - NFS server machine name
    • rw - read and write access
    • timeo=300 - time (in tenths of a second) the client waits before retransmitting an NFS request
    • retrans=5 - number of retransmissions attempted before an error is returned
    • hard - if the server hangs, the client blocks until the server comes back
    • intr - allows the user to interrupt a blocked operation (it then returns an error)
    • bg - if a mount fails, the system keeps retrying in the background
    • suid - allows set-user-identifier or set-group-identifier bits to take effect
  3. NFS mount the exported file system:
    • mount ServerA:/db2home /db2home
    If the mount fails with an RPC error similar to the following, the NFS server may not have been started:
    • mount: RPC: Program not registered

    Run the following command as root on the NFS server to start the server manually:

    • /etc/rc.d/init.d/nfs restart
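The fstab entry is identical on every client except for the server name, so it too can be generated. The nfs_fstab_line helper below is a hypothetical sketch (the name is our own) that prints the line with the exact mount options used in this article.

```shell
# Hypothetical helper: print the /etc/fstab line for NFS-mounting /db2home
# from a given NFS server, using the mount options described above.
nfs_fstab_line() {
  echo "$1:/db2home /db2home nfs rw,timeo=300,retrans=5,hard,intr,bg,suid"
}

# On each NFS client, as root:  nfs_fstab_line ServerA >> /etc/fstab
```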

Step 8 - Creating required groups and users

As root on ServerA and ServerB (and each remaining machine in the cluster) perform the following to create the appropriate OS users and groups for DB2. (Note: user and group IDs on each machine must be the same. The -p flag of useradd expects an already-encrypted password, so it is omitted here; passwords are set with passwd in steps 7-9.)

  1. groupadd -g 999 db2iadm1
  2. groupadd -g 998 db2fadm1
  3. groupadd -g 997 dasadm1
  4. useradd -u 1004 -g db2iadm1 -m -d /db2home/db2inst1 db2inst1
  5. useradd -u 1003 -g db2fadm1 -m -d /db2home/db2fenc1 db2fenc1
  6. useradd -u 1002 -g dasadm1 -m -d /home/dasusr1 dasusr1
  7. passwd db2inst1
  8. passwd db2fenc1
  9. passwd dasusr1
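Because the numeric IDs must match cluster-wide, it can be safer to generate the commands once and run the identical text on every machine. The db2_user_cmds function below is a sketch of our own; it prints the creation commands for review before they are piped to a shell as root (passwords are then set interactively with passwd, as in steps 7-9).

```shell
# Hypothetical helper: print the group/user creation commands so the same
# text can be reviewed and then run (as root) on every machine, keeping
# numeric IDs consistent across the cluster.
db2_user_cmds() {
  cat <<'EOF'
groupadd -g 999 db2iadm1
groupadd -g 998 db2fadm1
groupadd -g 997 dasadm1
useradd -u 1004 -g db2iadm1 -m -d /db2home/db2inst1 db2inst1
useradd -u 1003 -g db2fadm1 -m -d /db2home/db2fenc1 db2fenc1
useradd -u 1002 -g dasadm1 -m -d /home/dasusr1 dasusr1
EOF
}

# Review: db2_user_cmds        Apply (as root): db2_user_cmds | sh
```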

Step 9 - Install DB2 UDB V8.1 ESE and FixPak on instance-owning machine

As root on ServerA, perform the following to install IBM DB2 UDB Enterprise Server Edition V8.1 and FixPak:

  1. cd /db2home/images/
  2. tar xvf C48THML.tar
  3. tar xvf PSoft_10533_v81_fp5a_linuxintel_32.tar
  4. Install DB2:
    • cd /db2home/images/009_ESE_LNX_32_NLV
    • ./db2_install
  5. When db2_install prompts you for the product keyword, enter:
    • DB2.ESE
  6. Install the FixPak:
    • cd /db2home/images/FP5a
    • ./installFixPak

The installation directory for DB2 on Linux is /opt/IBM/db2/V8.1.

Step 10 - Post-installation tasks on instance-owning machine

As root on ServerA, perform the following:

  1. Create DB2 instance:
    • cd /opt/IBM/db2/V8.1/instance
    • ./db2icrt -u db2fenc1 db2inst1
  2. Configure TCP/IP for the DB2 instance by updating the /etc/services file to specify the service name and port number that the DB2 server will listen on for client requests. Our file looked like the following:
    • db2c_db2inst1 50000/tcp # DB2 connections service port
    • DB2_db2inst1 60000/tcp
    • DB2_db2inst1_1 60001/tcp
    • DB2_db2inst1_2 60002/tcp
    • DB2_db2inst1_END 60003/tcp
  3. Update the database manager configuration file and .profile on the server:
    1. su - db2inst1
    2. Set up the instance environment by sourcing the db2profile script (note the leading dot and space; executing the script directly would not affect the current shell):
      • . /db2home/db2inst1/sqllib/db2profile

      To set up the user environment every time the user logs on to the Linux system, add the same command to .profile:

      • vi /db2home/db2inst1/.profile
      • . /db2home/db2inst1/sqllib/db2profile
    3. Update SVCENAME parameter in DBM configuration file:
      • db2 update dbm cfg using SVCENAME db2c_db2inst1
    4. Set DB2COMM registry variable to tcpip:
      • db2set DB2COMM=tcpip
    5. Stop and re-start the instance for these changes to take effect
      • db2stop
      • db2start
  4. As root, create DB2 Administration Server (DAS)
    • su -
    • cd /opt/IBM/db2/V8.1/instance
    • ./dascrt -u dasusr1

Note: DAS is required if you plan to use the DB2 GUI tools, such as Control Center and Task Center.

Step 11 - Install DB2 UDB V8.1 ESE and FixPak on participating machine(s)

As root on ServerB (and each remaining machine in the cluster), perform the following to install IBM DB2 UDB Enterprise Server Edition V8.1 and FixPak:

  1. Install DB2:
    • cd /db2home/images/009_ESE_LNX_32_NLV
    • ./db2_install
  2. When db2_install prompts you for the product keyword, enter:
    • DB2.ESE
  3. Now install the FixPak:
    • cd /db2home/images/FP5a
    • ./installFixPak

Step 12 - Partition configuration

In a partitioned database system, each database partition server must have the authority to perform remote commands on all of the other database partition servers participating in an instance. As root on ServerA, perform the following to prepare the DB2 instance to run on more than one partition:

  1. Open the .rhosts file in an editor:
    • vi /db2home/db2inst1/.rhosts
  2. Add the following two lines to .rhosts and save:
    • ServerA db2inst1
    • ServerB db2inst1
  3. Restrict the .rhosts file so that only its owner can read and write it:
    • chmod 600 /db2home/db2inst1/.rhosts
  4. To define which hosts participate in the partitioned environment, db2nodes.cfg must be updated:
    • vi /db2home/db2inst1/sqllib/db2nodes.cfg
  5. Add the following lines (which represent the partition number, hostname, and logical-port) to db2nodes.cfg and save:
    • 0 ServerA 0
    • 1 ServerA 1
    • 2 ServerB 0
    • 3 ServerB 1
    In this case, we have two logical partitions running on each physical server.
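The db2nodes.cfg layout is regular: partition numbers increase across the whole cluster, while logical ports restart at 0 on each host. For larger clusters the file can therefore be generated. The nodes_cfg function below is a hypothetical sketch (the name and wrapper are our own).

```shell
# Hypothetical helper: emit db2nodes.cfg lines (partition number, hostname,
# logical port) for any list of hosts, each running the same number of
# logical partitions.
nodes_cfg() {
  nparts=$1; shift                    # logical partitions per host
  part=0
  for host in "$@"; do
    port=0
    while [ "$port" -lt "$nparts" ]; do
      echo "$part $host $port"
      part=$((part + 1))
      port=$((port + 1))
    done
  done
}

# nodes_cfg 2 ServerA ServerB reproduces the four lines shown above.
```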

To enable communication between the database partition servers that participate in your partitioned database system, you must reserve TCP/IP ports in the /etc/services file. As root on ServerB (and each remaining machine in the cluster), make sure each machine's /etc/services file contains the same set of DB2 entries as on ServerA.

  • Our /etc/services file contained the following DB2 entries:
    • # Local services
    • db2c_db2inst1 50000/tcp
    • DB2_db2inst1 60000/tcp
    • DB2_db2inst1_1 60001/tcp
    • DB2_db2inst1_2 60002/tcp
    • DB2_db2inst1_END 60003/tcp
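The FCM port block follows a fixed pattern: a base entry, numbered entries, and an _END entry one port past the last numbered one. The fcm_entries helper below is a sketch of our own (the name is an assumption) that reproduces the DB2_ entries used in this setup from the instance name, base port, and total number of ports to reserve.

```shell
# Hypothetical helper: generate the DB2 FCM entries for /etc/services from
# the instance name, base port, and total number of ports to reserve.
fcm_entries() {
  inst=$1; base=$2; nports=$3
  echo "DB2_${inst} ${base}/tcp"
  i=1
  while [ "$i" -le "$((nports - 2))" ]; do
    echo "DB2_${inst}_${i} $((base + i))/tcp"
    i=$((i + 1))
  done
  echo "DB2_${inst}_END $((base + nports - 1))/tcp"
}

# fcm_entries db2inst1 60000 4 reproduces the four DB2_ entries above.
```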

Once all the updates have been made, you can check your settings using the db2_all command, which will query the date on all partitions (in our setup, two logical partitions on each of two physical servers). As db2inst1 on ServerA or ServerB:

  • db2_all date
Listing 1. db2_all test
$ db2_all date

Fri Jul 16 18:57:57 EDT 2004
ServerA: date completed ok

Fri Jul 16 18:57:57 EDT 2004
ServerA: date completed ok

Fri Jul 16 18:57:49 EDT 2004
ServerB: date completed ok

Fri Jul 16 18:57:49 EDT 2004
ServerB: date completed ok

$

This verifies the successful installation and configuration of DB2 UDB ESE V8.1 with DPF on Red Hat Advanced Server 2.1 using db2_install.
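A quick way to sanity-check the db2_all output mechanically is to count the success lines and compare the count against the expected number of partitions. The ok_count helper below is a trivial sketch of our own.

```shell
# Hypothetical helper: count how many partitions reported success in
# db2_all output by counting 'completed ok' lines on stdin.
ok_count() {
  grep -c 'completed ok'
}

# db2_all date | ok_count     # should match the partition count (4 here)
```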

Step 13 - Testing configuration

Now you are ready to work with DB2 UDB ESE V8.1 with the DPF environment. Try the following:

  1. db2stop
  2. db2start
  3. cd /db2home/db2inst1/sqllib/bin
  4. ./db2sampl
  5. db2 connect to sample
  6. db2 "select * from sales"
