Install and configure General Parallel File System (GPFS) on xSeries

A file system defines how information is stored on a hard disk; examples include ext2, ext3, ReiserFS, and JFS. The General Parallel File System (GPFS) is a file system designed for clustered environments, offering high throughput and strong fault tolerance. This article walks through a simple GPFS implementation. To keep things easy, each machine has two hard disks: the first holds the Linux® installation, and the second is left "as is" (in raw format).

Hardware, software, and setup

The required hardware and software are listed below. Figure 1 shows the system setup.

  • eServer™ x336
  • Red Hat Enterprise Linux Version 4.0 Advanced Server with Update 2
  • GPFS Version 2.3
  • GPFS Version 2.3.0.9 Fix
Figure 1. Sample setup
Setup

Installation and configuration

Before you start, please note the following assumptions:

  • All machines have Red Hat Enterprise Linux AS installed (for example, RHEL4 AS with Update 2).
  • The /etc/hosts file is updated on all the nodes.
  • SSH is configured so that "root" can log in to any machine without a password prompt.
  • The nodes gpfs1.my.com and gpfs2.my.com act as GPFS servers and offer /dev/sdb for storage. Node gpfs3.my.com acts as a GPFS client.
  • GPFS code is available in tar format in the /dump folder.
  • Steps 1, 2, 3, 4, and 5 are required on all the nodes (gpfs1.my.com, gpfs2.my.com, and gpfs3.my.com).
  • Steps 6 and 7 are only required on gpfs1.my.com, because it has gpfs.gpl-2.3.0-9.noarch.rpm installed.
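The prerequisites above can be sanity-checked with a short script. This is an illustrative sketch: the IP addresses are hypothetical (substitute your own), and it appends the name entries to a scratch file rather than the real /etc/hosts.

```shell
# Sample /etc/hosts entries for the three nodes (hypothetical addresses --
# adjust to your network). On the real nodes, append these to /etc/hosts.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.gpfs-sample}"
cat >> "$HOSTS_FILE" <<'EOF'
192.168.1.101   gpfs1.my.com   gpfs1
192.168.1.102   gpfs2.my.com   gpfs2
192.168.1.103   gpfs3.my.com   gpfs3
EOF

# Confirm that root can reach every node without a password prompt.
# BatchMode makes ssh fail immediately instead of asking for a password.
for node in gpfs1.my.com gpfs2.my.com gpfs3.my.com; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$node" true 2>/dev/null; then
    echo "$node: passwordless ssh OK"
  else
    echo "$node: passwordless ssh NOT working"
  fi
done
```

Run the check as root on each node in turn; every node must report OK for all three hosts before you continue.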
  1. If you have the code available in tar format, extract the GPFS files, or mount the CD-ROM or DVD containing the GPFS RPMs, as follows:
    #cd /dump
    #tar zxvf gpfs_code.tar.gz
    Figure 2 below illustrates some of the extracted contents:

    Figure 2. Contents of GPFS
    gpfs_files
  2. Extract the GPFS RPMs, as follows:
    #./gpfs_install-2.3.0-0_i386 --dir /dump

    This command displays the license text. Accept it to continue; to suppress the license prompt, use the --silent option.

    Figure 3. GPFS extraction
    gpfs_rpms

    After extraction, you should see:

    Figure 4. Extracted files
    gpfs_rpms
  3. For GPFS installation, use:
    #cd /dump
    #rpm -ivh gpfs.msg.en_US-2.3.0-0.noarch.rpm gpfs.base-2.3.0-0.i386.rpm \
     gpfs.docs-2.3.0-0.noarch.rpm gpfs.gpl-2.3.0-0.noarch.rpm

    (or simply: #rpm -ivh *.rpm)
    Figure 5. GPFS installation
    rpm_install

    The gpfs.gpl-2.3.0-0.noarch.rpm install is only required on gpfs1.my.com.

  4. After the GPFS RPMs are installed, the binaries are available under /usr/lpp/mmfs, as shown in the sample below.
    #cd /usr/lpp/mmfs
    Figure 6. Binary paths
    binary_path
  5. For GPFS fix installation, use:
    #tar zxvf gpfs-2.3.0-9.i386.update.tar.gz
    #rpm -Uvh gpfs.msg.en_US-2.3.0-9.noarch.rpm gpfs.base-2.3.0-9.i386.rpm \
     gpfs.docs-2.3.0-9.noarch.rpm gpfs.gpl-2.3.0-9.noarch.rpm

    (or simply: #rpm -Uvh *.rpm)
    Figure 7. GPFS fix install
    rpm_fix_install

    The gpfs.gpl-2.3.0-9.noarch.rpm upgrade is required only on gpfs1.my.com.

  6. You are now ready to build the GPFS GPL (portability layer) modules on the node selected for this activity (gpfs1.my.com). You build them on just one node and then distribute the generated binaries to the other nodes (gpfs2.my.com and gpfs3.my.com).
    #cd /usr/lpp/mmfs/src/config
    #cp site.mcr.proto site.mcr 
    #vi site.mcr

    Refer to the README in /usr/lpp/mmfs/src for the desired changes.

    Figure 8. GPFS GPL modules
    site_mcr

    In the sample above, LINUX_DISTRIBUTION, LINUX_DISTRIBUTION_LEVEL, and LINUX_KERNEL_VERSION were specifically updated to match RHEL4 Update 2.
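    For orientation, a site.mcr for this environment might contain lines like the following. The values are illustrative only: take the exact defines for your distribution and kernel from the README in /usr/lpp/mmfs/src.

```
/* Illustrative site.mcr excerpt for RHEL4 AS Update 2 on x86.
   Verify every value against the README for your kernel level. */
#define GPFS_ARCH_I386
LINUX_DISTRIBUTION = REDHAT_AS_LINUX
#define LINUX_DISTRIBUTION_LEVEL 40
#define LINUX_KERNEL_VERSION 2060922
```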

    #cd /usr/lpp/mmfs/src
    #export SHARKCLONEROOT=/usr/lpp/mmfs/src
    #make World
    #make InstallImages
    Figure 9. Install images
    make_InstallImages
    #cd /usr/lpp/mmfs/bin
    #scp mmfslinux mmfs26 lxtrace dumpconv tracedev gpfs2:/usr/lpp/mmfs/bin
    #scp mmfslinux mmfs26 lxtrace dumpconv tracedev gpfs3:/usr/lpp/mmfs/bin

    Repeat the scp command for every other node in the cluster.
  7. At this point, you're ready to create a GPFS cluster and configure the Network Shared Disk (NSD).
    1. Start the GPFS daemons on all the nodes using the following command. (If mmstartup complains that the node does not belong to a cluster, create the cluster with the mmcrcluster step below first and then start the daemons.)
      #/usr/lpp/mmfs/bin/mmstartup -a
      Figure 10. GPFS daemons
      mmstartu
    2. Create a file that identifies the nodes participating in the GPFS cluster. For details on the file format, see the redbook Configuration and Tuning GPFS for Digital Media Environments, listed in Related topics.
      #vi  /dump/gpfs.nodes
      Figure 11. GPFS nodes
      gpfs_nodelist
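      As a sketch, a gpfs.nodes file for this setup might look like the lines below, with the two server nodes marked as quorum nodes. The exact node descriptor syntax (including designations such as manager or client) is documented with the mmcrcluster command and in the redbook mentioned above, so verify it there.

```
gpfs1.my.com:quorum
gpfs2.my.com:quorum
gpfs3.my.com
```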
    3. Create a separate file that contains the disk information for creating the NSD. Again, see the redbook Configuration and Tuning GPFS for Digital Media Environments in Related topics for details.
      #vi  /dump/gpfs.disks
      Figure 12. GPFS disks
      gpfs_disks
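      A plausible gpfs.disks for the setup in Figure 1 is sketched below, assuming the disk descriptor format DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup used by this GPFS release; each server node exports its own /dev/sdb, in a different failure group. Check the exact descriptor fields against the mmcrnsd documentation before using them.

```
/dev/sdb:gpfs1.my.com::dataAndMetadata:1
/dev/sdb:gpfs2.my.com::dataAndMetadata:2
```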
    4. Create the GPFS cluster using:
      #/usr/lpp/mmfs/bin/mmcrcluster -p gpfs1.my.com -s gpfs2.my.com \
       -n /dump/gpfs.nodes -r /usr/bin/ssh -R /usr/bin/scp
      Figure 13. GPFS cluster
      mmcrcluster

      List the GPFS cluster details using:

      #/usr/lpp/mmfs/bin/mmlscluster
      Figure 14. GPFS mmls cluster
      mmlscluster
    5. Create the NSD using:
      #/usr/lpp/mmfs/bin/mmcrnsd -F /dump/gpfs.disks -v yes
      Figure 15. Creating the NSD
      mmcrnsd

      List the NSD details using:

      #/usr/lpp/mmfs/bin/mmlsnsd
      Figure 16. NSD details
      mmlsnsd

      The contents of the /dump/gpfs.disks file change after you execute the mmcrnsd command.

      Figure 17. Modified GPFS disks
      modified files
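      For illustration, mmcrnsd typically comments out each original disk descriptor and replaces it with the generated NSD name, so the rewritten file might look something like the lines below (the NSD names here are illustrative; your output will show the actual generated names):

```
# /dev/sdb:gpfs1.my.com::dataAndMetadata:1
gpfs1nsd:::dataAndMetadata:1
# /dev/sdb:gpfs2.my.com::dataAndMetadata:2
gpfs2nsd:::dataAndMetadata:2
```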
    6. Create the GPFS file system using:
      #/usr/lpp/mmfs/bin/mmcrfs /gpfs gpfsdev -F /dump/gpfs.disks \
       -B 1024K -m 1 -M 2 -r 1 -R 2
      Figure 18. Creating the GPFS file system
      mmcrfs

      The above command creates a GPFS file system (in this example, 286GB) and mounts it in the /gpfs folder. The -B option sets the file system block size (1024K here); -m and -M set the default and maximum number of metadata replicas, and -r and -R set the default and maximum number of data replicas. The command also makes an entry in the /etc/fstab file on all the nodes so that the file system is automatically mounted on a reboot.

      Figure 19. fstab file
      fstab file
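      The /etc/fstab entry that GPFS adds generally looks something like the line below. The exact mount options vary by release, so treat this as orientation only and compare it with the entry on your own nodes:

```
/dev/gpfsdev    /gpfs    gpfs    rw,mtime,atime,dev=gpfsdev,autostart 0 0
```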
    7. Your GPFS environment is now ready for use.

Uninstallation process

To uninstall the GPFS RPMs, first unmount the file system and stop GPFS on all the nodes (for example, with mmshutdown -a), and then do the following:

#rpm -e gpfs.msg.en_US gpfs.docs gpfs.base
   (on gpfs2.my.com and gpfs3.my.com)
#rpm -e gpfs.msg.en_US gpfs.docs gpfs.base gpfs.gpl
   (on gpfs1.my.com)
#rm -rf /usr/lpp/mmfs
   (on all the nodes: gpfs1.my.com, gpfs2.my.com, and gpfs3.my.com)

Related topics



Zone=Linux
ArticleID=106334
ArticleTitle=Install and configure General Parallel File System (GPFS) on xSeries
publish-date=03212006