DB2 Version 10.1 for Linux, UNIX, and Windows

Preinstallation checklist for DB2 pureScale Feature (Linux)

Perform the following preinstallation steps and verify them on each host before installing the IBM® DB2® pureScale® Feature.

Before you install

The following steps must be performed on all hosts:

  1. All hosts must use the same Linux distribution.
  2. DB2 pureScale instances require specific users and groups, including fenced users. You can create the users before starting the DB2 Setup wizard or have the wizard create them for you as you progress through the panels. If you are not creating or modifying instances, you can create the required users after completing the installation.
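    For example, a minimal sketch of creating the required users and groups manually, run as root on every host. The group names, user names, and numeric IDs shown here match the examples in the preinstallation cheat sheet but are otherwise placeholders; keep the IDs identical across all hosts:
      groupadd -g 999 db2iadm1
      groupadd -g 998 db2fadm1
      useradd -u 1004 -g db2iadm1 -m -d /home/db2sdin1 db2sdin1
      useradd -u 1003 -g db2fadm1 -m -d /home/db2sdfe1 db2sdfe1
      passwd db2sdin1
      passwd db2sdfe1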
  3. Ensure that the required Linux version and service pack are installed.
    • SUSE Linux Enterprise Server (SLES) 10 Service Pack (SP) 3 - The minimum required level is the x64 version of SUSE SLES 10 SP3, kernel 2.6.16.60-0.69.1-smp and the matching kernel source. Check the /etc/SuSE-release file for the operating system level and service pack. The following sample output should be returned:
      cat /etc/SuSE-release
      SUSE Linux Enterprise Server 10 (x86_64)
      VERSION = 10
      PATCHLEVEL = 3
      Enter the following command:
      cat /proc/version
      Linux version 2.6.16.60-0.69.1-smp (geeko@buildhost)(gcc version 4.1.2 20070115 (SUSE Linux)) #1 SMP Fri May 28 12:10:21 UTC 2010
    • For a single InfiniBand communication adapter port on Red Hat Enterprise Linux (RHEL) 5.6 - The minimum required level is the x64 version of RHEL 5.6 and the matching kernel source. Check the /etc/redhat-release file for the operating system level and service pack. For example, the following output is returned for RHEL 6.1:
      cat /etc/redhat-release
      Red Hat Enterprise Linux Server release 6.1 (Santiago)
      and for RHEL 5.7:
      cat /etc/redhat-release
      Red Hat Enterprise Linux Server release 5.7 (Tikanga)
      Enter the following command for RHEL 6.1:
      cat /proc/version
      Linux version 2.6.32-131.0.15.el6.x86_64 (mockbuild@x86-007.build.bos.redhat.com) (gcc version 4.4.4 20100726 (Red Hat 4.4.4-13) (GCC) ) #1 SMP Tue May 10 15:42:40 EDT 2011
      and for RHEL 5.7:
      cat /proc/version
      Linux version 2.6.18-274.7.1.el5 (mockbuild@x86-004.build.bos.redhat.com) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-50)) #1 SMP Mon Oct 17 11:57:14 EDT 2011
      Note: If kernel modules (such as RDAC) have not been rebuilt after the kernel upgrade, the kernel modules must be rebuilt before proceeding.
    • For multiple InfiniBand communication adapter ports, or for single or multiple 10GE communication adapter ports, Red Hat Enterprise Linux (RHEL) 6.1 is required - The minimum required level is the x64 version of RHEL 6.1 and the matching kernel source. Check the /etc/redhat-release file for the operating system level and service pack. The following sample output is returned:
      cat /etc/redhat-release
      Red Hat Enterprise Linux Server release 6.1 (Santiago)
      Enter the following command:
      cat /proc/version
      Linux version 
      Ensure the following 32-bit IB/ROCEE packages are installed as part of the RSCT requirements:
      • libibcm.i686
      • libibverbs-rocee.i686
      • librdmacm.i686
      • libcxgb3.i686
      • libibmad.i686
      • libibumad.i686
      • libmlx4-rocee.i686
      • libmthca.i686
      As root, run the following command on each of the package names (listed above):
      yum list | grep package_name
      For example, if the package names are saved one per line in the file /tmp/list:
      [root]# for i in `cat /tmp/list`; do yum list | grep $i; done
      libibcm.i686            1.0.5-2.el6    @rhel-x86_64-server-6
      libibverbs-rocee.i686   1.1.4-4.el6    @rhel-x86_64-server-hpn-6
      librdmacm.i686          1.0.10-2.el6   @rhel-x86_64-server-6
      libcxgb3.i686           1.3.0-1.el6    @rhel-x86_64-server-6
      libibmad.i686           1.3.4-1.el6    @rhel-x86_64-server-6
      libibumad.i686          1.3.4-1.el6    @rhel-x86_64-server-6
      libmlx4-rocee.i686      1.0.1-8.el6    @rhel-x86_64-server-hpn-6
      libmthca.i686           1.0.5-7.el6    @rhel-x86_64-server-6
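      If any of these packages are missing, they can typically be installed with yum, provided that the required RHEL repositories are available to the host (repository names depend on your subscription). For example:
      yum install libibcm.i686 libibverbs-rocee.i686 librdmacm.i686 libcxgb3.i686 \
                  libibmad.i686 libibumad.i686 libmlx4-rocee.i686 libmthca.i686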
  4. For single and multiple communication adapter ports on an InfiniBand network on SLES, and for a single communication adapter port on an InfiniBand network on RHEL 5.6, ensure that the OpenFabrics Enterprise Distribution (OFED) software is installed and configured. See Configuring the networking settings of hosts on a 10GE network (Linux) and Configuring the networking settings of hosts on an InfiniBand network (Linux) for more information.
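    One way to confirm that the OFED stack is present, assuming your OFED installation provides the ofed_info utility (packaging varies by distribution and OFED release), is:
    ofed_info | head -1
    The first line of the output typically identifies the installed OFED release.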
  5. Ensure that OpenSSH is installed from the SLES10 media or RHEL media, as appropriate.
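    For example, on RPM-based distributions you can confirm that the OpenSSH packages are installed by querying the RPM database:
    rpm -qa | grep -i openssh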
  6. For an InfiniBand network on both SLES and RHEL 5.5, and for a 10GE network on RHEL 5.5, ensure that the openibd service is enabled.
    # chkconfig --list | grep -i openibd
    openibd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
    The openibd service must be enabled. To enable the service:
    # chkconfig openibd on
    # chkconfig --list | grep -i openibd
    openibd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
    For 10GE network on RHEL 6.1, ensure that the rdma service is enabled:
    # chkconfig --list | grep -i rdma
    rdma 0:off 1:off 2:off 3:off 4:off 5:off 6:off
    The rdma service must be enabled. To enable the service:
    # chkconfig rdma on
    # chkconfig --list | grep -i rdma
    rdma 0:off 1:off 2:on 3:on 4:on 5:on 6:off
  7. DB2 pureScale Feature requires libstdc++.so.6. Verify that the files exist with the following commands:
    ls /usr/lib/libstdc++.so.6* 
    ls /usr/lib64/libstdc++.so.6* 
  8. Optional. To use a specific set of ports, ensure that the ports are free on all hosts. Otherwise, the installer selects a unique set of ports across all hosts. The Fast Communications Manager (FCM) requires a port range of three mandatory ports plus the value provided in the logical members field. This port range can designate up to 130 hosts (128 members + 2 cluster caching facilities). The default FCM start port is 60000 and must be in the range 1024 - 65535. In addition, two ports are required for the cluster caching facilities. These two ports are chosen automatically.

    Use the grep command on the /etc/services file to ensure that a contiguous range of ports is available.
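    For example, assuming the default FCM start port of 60000 and a range of five ports (60000 - 60004; these port numbers are illustrative), the following command returns no output when the range is not already assigned:
    grep -E '6000[0-4]/(tcp|udp)' /etc/services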

  9. Confirm that the required network adapters are installed on each server. Ensure that an Ethernet network (eth0) and either an InfiniBand network (ib0) or a 10 Gigabit Ethernet network (eth1) are displayed. The following sample output from the netstat -i command lists all available network adapters on a host with an InfiniBand communication adapter port.
    root@host1:/> netstat -i 
    Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR   TX-OK TX-ERR TX-DRP TX-OVR Flg
    eth0   1500   0 6876034      0      0      0 5763121      0      0      0 BMRU
    ib0   65520   0  106972      0      0      0       9      0      0      0 BMRU
    lo    16436   0  180554      0      0      0  180554      0      0      0 LRU
    Note: The DB2 pureScale Feature does not support a mixed environment of InfiniBand and 10 Gigabit Ethernet networks; all servers must use the same type of communication adapter port.
  10. As root, validate ssh access between all hosts. From the current host, use the ssh command to run the hostname command on the current host and on every other host in the cluster. If the host name that is returned matches the host name specified in the ssh command, ssh access between the two hosts is verified.
    $ ssh host1 hostname
    host1
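    To check every host from the current host in one pass, you can loop over the host names; host1 through host4 here are placeholders for your own host names:
    # for h in host1 host2 host3 host4; do ssh $h hostname; done
    Each line of output should match the host name that was passed to ssh.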
  11. Optional. For DB2 managed GPFS™ installations, verify the remote shell and remote file copy settings default to db2locssh and db2scp. For example:
    /usr/lpp/mmfs/bin/mmlscluster
      Remote shell command:      /var/db2/db2ssh/db2locssh
      Remote file copy command:  /var/db2/db2ssh/db2scp
  12. If upgrading from DB2 Version 9.8 Fix Pack 2 or earlier, ensure that the .update file, located at <db2 instance shared directory>/sqllib_shared/.update, is synchronized correctly after adding or dropping a member or cluster caching facility (CF). An example of the file location is /db2sd_20110126085343/db2sdin1/sqllib_shared/.update, where <db2 instance shared directory>=db2sd_20110126085343.

    To ensure correct synchronization, check that all hosts are listed in the .update file in the format hostname=install path. If the file is incorrectly formatted, update it. For example, the entry machineA=/opt/IBM/db2/V9.8 has the hostname machineA and the install path /opt/IBM/db2/V9.8.
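    For example, a quick way to inspect the file (the shared directory name matches the example above, and the host entries shown are placeholders):
    cat /db2sd_20110126085343/db2sdin1/sqllib_shared/.update
    machineA=/opt/IBM/db2/V9.8
    machineB=/opt/IBM/db2/V9.8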

  13. As root, ensure that the /tmp directory has at least 5 GB of free space. The following command shows the free space in the /tmp directory.
    $ cd /tmp
    $ df -k . 
    Filesystem 1K-blocks Used Available Use% Mounted on
    /dev/sda7 2035606 164768 1870838 9% /tmp
  14. Determine the number of paths to your device, depending on whether the system uses the IBM RDAC, DM-MP, or EMC PowerPath driver:
    On systems with the IBM RDAC driver, run the following commands (sample output is shown):
    1. Determine the LUN mapping by using the lsvdev command:
      host1:~ # /opt/mpp/lsvdev
              Array Name      Lun    sd device
              -------------------------------------
              DS5300SVT1      0     -> /dev/sdc
              DS5300SVT1      1     -> /dev/sdd
              DS5300SVT1      2     -> /dev/sde
              DS5300SVT1      3     -> /dev/sdf
              DS5300SVT1      4     -> /dev/sdg
    2. Get a list of storage arrays seen by the host:
      host1:~ # /usr/sbin/mppUtil -a
      Hostname    = host1
      Domainname  = N/A
      Time        = GMT 08/06/2010 16:27:59
      
      ---------------------------------------------------------------
      Info of Array Module's seen by this Host.
      ---------------------------------------------------------------
      ID              WWN                      Type     Name
      ---------------------------------------------------------------
       0      600a0b800012abc600000000402756fc FC     FASTSVT1
       1      600a0b800047bf3c000000004a9553b8 FC     DS5300SVT1
      ---------------------------------------------------------------
    3. For the storage array you are interested in, get the path information (for example for DS5300SVT1).
      host1:~ # /usr/sbin/mppUtil -a DS5300SVT1 | awk '/Status/ || /NumberOfPaths/'
      Controller 'A' Status:
         NumberOfPaths: 1                                          FailoverInProg: N
      Controller 'B' Status:
         NumberOfPaths: 1                                          FailoverInProg: N
      When the disk is configured with a single path, only one controller is listed and the value of NumberOfPaths is 1.
    On systems with the DM-MP driver, run the following commands (sample output is shown):
    1. Check the Linux SCSI devices:
      [root@host1 ~]# lsscsi
      [3:0:1:0]    disk    IBM      2107900          .450  /dev/sdk
      [3:0:1:2]    disk    IBM      2107900          .450  /dev/sdl
      [3:0:1:3]    disk    IBM      2107900          .450  /dev/sdm
      [3:0:1:4]    disk    IBM      2107900          .450  /dev/sdn
      [3:0:2:0]    disk    IBM      2107900          .450  /dev/sdo
      [3:0:2:2]    disk    IBM      2107900          .450  /dev/sdp
      [3:0:2:3]    disk    IBM      2107900          .450  /dev/sdq
      [3:0:2:4]    disk    IBM      2107900          .450  /dev/sdr
      [4:0:0:0]    disk    IBM      2107900          .450  /dev/sdc
      [4:0:0:2]    disk    IBM      2107900          .450  /dev/sdd
      [4:0:0:3]    disk    IBM      2107900          .450  /dev/sde
      [4:0:0:4]    disk    IBM      2107900          .450  /dev/sdf
      [4:0:1:0]    disk    IBM      2107900          .450  /dev/sdg
      [4:0:1:2]    disk    IBM      2107900          .450  /dev/sdh
      [4:0:1:3]    disk    IBM      2107900          .450  /dev/sdi
      [4:0:1:4]    disk    IBM      2107900          .450  /dev/sdj
    2. List the LUN device mappings:
      [root@host1 ~]# sg_map -x
      /dev/sg9  4 0 0 0  0  /dev/sdc
      /dev/sg10  4 0 0 2  0  /dev/sdd
      /dev/sg11  4 0 0 3  0  /dev/sde
      /dev/sg12  4 0 0 4  0  /dev/sdf
      /dev/sg13  4 0 1 0  0  /dev/sdg
      /dev/sg14  4 0 1 2  0  /dev/sdh
      /dev/sg15  4 0 1 3  0  /dev/sdi
      /dev/sg16  4 0 1 4  0  /dev/sdj
      /dev/sg17  3 0 1 0  0  /dev/sdk
      /dev/sg18  3 0 1 2  0  /dev/sdl
      /dev/sg19  3 0 1 3  0  /dev/sdm
      /dev/sg20  3 0 1 4  0  /dev/sdn
      /dev/sg21  3 0 2 0  0  /dev/sdo
      /dev/sg22  3 0 2 2  0  /dev/sdp
      /dev/sg23  3 0 2 3  0  /dev/sdq
      /dev/sg24  3 0 2 4  0  /dev/sdr
    3. List the multipath devices:
      [root@host1 ~]# multipath -l
      mpath2 (36005076304ffc21f000000000000111f) dm-0 IBM,2107900
      '               '                            '          '-- Vendor,Product 
      '               '                            '------------- device-mapper or 
      '               '                                           disk name
      '               '
      '               '------------------------------------------ WWID 
      '
      '---------------------------------------------------------- user friendly 
                                                                  name
      [size=100G][features=1 queue_if_no_path][hwhandler=0][rw]
      '                     '                    '
      '                     '                    '--------------- hardware handler, 
      '                     '                                     if any (seen in 
      '                     '                                     cases of FastT,EMC)  
      '                     '
      '                     '------------------------------------ features supported 
      '                                                           or configured
      '
      '---------------------------------------------------------- Size of the disk
      
      \_ round-robin 0 [prio=0][active]
       '      '          '       '------------------------------- Path Group State
       '      '          '--------------------------------------- Path Group Priority
       '      '
       '      '-------------------------------------------------- Path Selector and 
       '                                                          repeat count
       '
       '--------------------------------------------------------- Path Group Level
      
       \_ 4:0:0:0 sdc 8:32  [active][ready]
          ------- --- ----  ------- ------
            '      '   '       '     '--------------------------- Physical Path State
            '      '   '       '--------------------------------- Device Mapper State
            '      '   '----------------------------------------- Major, Minor number 
            '      '                                              of disk
            '      '--------------------------------------------- Linux SCSI device name
            '
            '---------------------------------------------------- SCSI Information: 
                                                                  Host_ID, Channel_ID, 
                                                                  SCSI_ID, LUN_ID
      
       \_ 4:0:1:0 sdg 8:96  [active][ready]
       \_ 3:0:1:0 sdk 8:160 [active][ready]
       \_ 3:0:2:0 sdo 8:224 [active][ready]
      
      mpath6 (36005076304ffc21f0000000000001123) dm-3 IBM,2107900
      [size=100G][features=1 queue_if_no_path][hwhandler=0][rw]
      \_ round-robin 0 [prio=0][active]
       \_ 4:0:0:4 sdf 8:80  [active][ready]
       \_ 4:0:1:4 sdj 8:144 [active][ready]
       \_ 3:0:1:4 sdn 8:208 [active][ready]
       \_ 3:0:2:4 sdr 65:16 [active][ready]
      mpath5 (36005076304ffc21f0000000000001122) dm-2 IBM,2107900
      [size=1.0G][features=0][hwhandler=0][rw]
      \_ round-robin 0 [prio=0][enabled]
       \_ 4:0:0:3 sde 8:64  [active][ready]
       \_ 4:0:1:3 sdi 8:128 [active][ready]
       \_ 3:0:1:3 sdm 8:192 [active][ready]
       \_ 3:0:2:3 sdq 65:0  [active][ready]
      mpath4 (36005076304ffc21f0000000000001121) dm-1 IBM,2107900
      [size=100G][features=1 queue_if_no_path][hwhandler=0][rw]
      \_ round-robin 0 [prio=0][active]
       \_ 4:0:0:2 sdd 8:48  [active][ready]
       \_ 4:0:1:2 sdh 8:112 [active][ready]
       \_ 3:0:1:2 sdl 8:176 [active][ready]
       \_ 3:0:2:2 sdp 8:240 [active][ready]
    The block device name is listed as the Linux SCSI device name. If there are multiple paths, multiple block devices are displayed under each pseudo name.
    On systems with the EMC PowerPath driver, run the following command (sample output is shown):
    1. Run the powermt command to display all path and device mappings. This command lists the block devices and paths that are mapped to each device path (for example, /dev/emcpowerd, for which the EMC pseudo name is emcpowerd):
      host1:~ # powermt display dev=all
      Pseudo name=emcpowerd
      Symmetrix ID=000194900547
      Logical device ID=0040
      state=alive; policy=BasicFailover; priority=0; queued-IOs=0
      ==============================================================================
      ---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
      ###  HW Path                I/O Paths    Interf.   Mode    State  Q-IOs Errors
      ==============================================================================
         3 qla2xxx                   sdg       FA  7eB   active  alive      0      0
      
      Pseudo name=emcpowerc
      Symmetrix ID=000194900547
      Logical device ID=0041
      state=alive; policy=BasicFailover; priority=0; queued-IOs=0
      ==============================================================================
      ---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
      ###  HW Path                I/O Paths    Interf.   Mode    State  Q-IOs Errors
      ==============================================================================
         3 qla2xxx                   sdh       FA  7eB   active  alive      0      0
      
      Pseudo name=emcpowerb
      Symmetrix ID=000194900547
      Logical device ID=0126
      state=alive; policy=BasicFailover; priority=0; queued-IOs=0
      ==============================================================================
      ---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
      ###  HW Path                I/O Paths    Interf.   Mode    State  Q-IOs Errors
      ==============================================================================
         3 qla2xxx                   sdi       FA  7eB   active  alive      0      0
      
      Pseudo name=emcpowera
      Symmetrix ID=000194900547
      Logical device ID=013C
      state=alive; policy=BasicFailover; priority=0; queued-IOs=0
      ==============================================================================
      ---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
      ###  HW Path                I/O Paths    Interf.   Mode    State  Q-IOs Errors
      ==============================================================================
         3 qla2xxx                   sdj       FA  7eB   active  alive      0      0
    The block device name is listed under the I/O Paths column. If there are multiple paths, multiple block devices are displayed under each pseudo name.
  15. Increase the Mellanox HCA driver mlx4_core parameter log_mtts_per_seg value from 3 (the default) to 7 on the host where the cluster caching facility (CF) resides. To increase the size, issue the following command as root:
    • On SUSE:
      echo "options mlx4_core log_mtts_per_seg=7" >> /etc/modprobe.conf.local
    • On RHEL 6.x:
      echo "options mlx4_core log_mtts_per_seg=7" >> /etc/modprobe.d/modprobe.conf
      This command appends the line options mlx4_core log_mtts_per_seg=7 to the file.
    For this change to take effect, you must reboot the server. To check whether your change is effective on the module, issue the following command:
    <host-name>/sys/module/mlx4_core/parameters # cat /sys/module/mlx4_core/parameters/log_mtts_per_seg 
    7
  16. In some installations, the Intel TCO WatchDog Timer Driver modules are loaded by default. These modules should be blacklisted so that they do not start automatically or conflict with RSCT. To blacklist the modules:
    1. Verify whether the modules are loaded:
      lsmod | grep -i iTCO_wdt; lsmod | grep -i iTCO_vendor_support
    2. Edit the configuration files:
      • On RHEL 5.x and RHEL 6.1, edit file /etc/modprobe.d/blacklist.conf and add the following lines:
        # RSCT hatsd
        blacklist iTCO_wdt
        blacklist iTCO_vendor_support
      • On SLES, edit file /etc/modprobe.d/blacklist and add the following lines:
        blacklist iTCO_wdt 
        blacklist iTCO_vendor_support 
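    After you update the blacklist file, a reboot applies the change. If the modules are already loaded and you want to unload them without rebooting, one option (assuming no process is holding the watchdog device) is to run the following command as root:
    # modprobe -r iTCO_wdt iTCO_vendor_support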
  17. Optional. If you are doing a root installation of the DB2 pureScale Feature, you must set the ulimit value of filesize to unlimited. You must also set the value of umask to 022. If you do not set the values of ulimit and umask correctly, your DB2 pureScale Feature installation might fail.

    You can view the current values of ulimit and umask by issuing the following command:

    id root; ulimit -f; umask

    You must have root authority to use these commands.
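
    For example, a minimal sketch of setting the required values for the current root session; to make them persistent, add the equivalent settings to root's profile or to /etc/security/limits.conf, as appropriate for your distribution:

    # ulimit -f unlimited
    # umask 022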

Using the DB2 Setup wizard

To install the DB2 pureScale Feature, you must know the items listed in the preinstallation cheat sheet that follows. You can enter your value for each item in the cheat sheet before starting the wizard.

Preinstallation cheat sheet

Enter the appropriate required item value in the "Your Value" field.
Table 1. Preinstallation cheat sheet
Required Item Your Value Example
Instance owner/group name   db2sdin1/db2iadm1
Fenced user/group name   db2sdfe1/db2fadm1
Installation directory name   /opt/IBM/db2/V10.1
Shared file system disk   /dev/hdisk12
Hosts to include   db2_host01 - db2_host04.
Netname interconnect for each member and CF  

InfiniBand network example: db2_<hostname>_ib0

10 Gigabit Ethernet network example: db2_<hostname>_en1

Note: db2_<hostname>_en1 does not map to a regular Ethernet adapter. It must map to the pseudo IP address for the 10GE communication adapter port.
For a configuration with multiple RoCE adapters, ensure that the third octet of the pseudo IP address of each RoCE adapter on the same host is different. For example:
9.43.1.40 test-en0
9.43.2.40 test-en1
9.43.3.40 test-en2
9.43.4.40 test-en3
The pseudo IP addresses of all RoCE adapters are stored in the /etc/hosts file.
Table 2. Preinstallation cheat sheet - optional items
Optional Item Your Value Example
Tiebreaker disk   On AIX®: /dev/hdisk13

On Linux: /dev/dm-0 or /dev/sdc

FCM port range   60000 - 60004
cluster caching facilities port range   56000 - 56001
DB2 communication port   50001
Hosts to set up as cluster caching facilities   db2_host03 and db2_host04
On InfiniBand, the cluster interconnect netnames of the cluster caching facilities  

Primary: db2_<hostname1>_ib0,db2_<hostname1>_ib1,db2_<hostname1>_ib2,db2_<hostname1>_ib3

Secondary: db2_<hostname2>_ib0,db2_<hostname2>_ib1,db2_<hostname2>_ib2,db2_<hostname2>_ib3

On 10GE, the cluster interconnect netnames of the cluster caching facilities  

Primary: db2_<hostname1>_en1,db2_<hostname1>_en2,db2_<hostname1>_en3,db2_<hostname1>_en4

Secondary: db2_<hostname2>_en1,db2_<hostname2>_en2,db2_<hostname2>_en3,db2_<hostname2>_en4

Hosts to set up as members   db2_host01 and db2_host02

What to do next

If you completed all the steps in the preinstallation checklist and filled out the cheat sheet, you can proceed directly to the installation section.