Install and configure General Parallel File System (GPFS) on xSeries

Walk through a simple General Parallel File System (GPFS) implementation. In the Linux® world today, you have a variety of file systems available, such as ext2, ext3, ReiserFS, JFS, and so on. Similarly, in the clustered environment, you need a file system that can scale well, give better throughput, and provide high fault tolerance. The IBM GPFS fits the bill. It has large block size support with wide striping, parallel access to files from multiple nodes, token management, and more.


Harish Chauhan, Linux Architect, IBM

Harish has been with IBM since 1998 and has 13 years of experience. During his last seven years with IBM, he has spent five years at the India Research Lab and over a year and a half at the IBM T.J. Watson Research Center. Harish currently leads the Linux Center of Competency in Bangalore, India.

21 March 2006



A file system describes the way information is stored on a hard disk; ext2, ext3, ReiserFS, and JFS are familiar examples. The General Parallel File System (GPFS) is a file system for clustered environments, designed to scale well, deliver high throughput, and tolerate faults. This article walks through a simple GPFS implementation. To keep things easy, you'll use machines with two hard disks each -- the first hard disk holds the Linux® installation and the second is left "as is" (in raw format).

Hardware, software, and setup

The required hardware and software are listed below. Figure 1 shows the system setup.

  • eServer™ x336
  • Red Hat Enterprise Linux Version 4.0 Advanced Server with Update 2
  • GPFS Version 2.3
  • GPFS 2.3.0-9 fix pack
Figure 1. Sample setup

Installation and configuration

Before you start, please note the following assumptions:

  • All machines have Red Hat Enterprise Linux AS installed (for example, RHEL4 AS with Update 2).
  • The /etc/hosts file is updated on all the nodes.
  • SSH is configured so that "root" can log in to any machine without a password prompt (see the sketch after this list).
  • Two of the nodes act as GPFS servers and offer /dev/sdb for storage; the third node acts as a GPFS client.
  • GPFS code is available in tar format in the /dump folder.
  • Steps 1 through 5 are required on all three nodes.
  • Steps 6 and 7 are required only on the build node, the one node on which gpfs.gpl-2.3.0-9.noarch.rpm is installed.
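The hostnames gpfs1, gpfs2, and gpfs3 used in the illustrative snippets below are hypothetical; substitute your own node names throughout. As a minimal sketch of the passwordless-SSH assumption, run on each node:

    #ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
       /* generate a key with no passphrase */
    #for node in gpfs1 gpfs2 gpfs3; do ssh-copy-id -i /root/.ssh/id_rsa.pub root@$node; done
       /* push the public key to every node in the cluster */

Afterward, a command such as ssh gpfs2 date should run from any node without prompting for a password.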
  1. If you have the code available in tar format, extract the GPFS files; otherwise, mount the CD-ROM or DVD containing the GPFS RPMs, as follows:
    #cd /dump
    #tar zxvf gpfs_code.tar.gz
    Figure 2 illustrates some of the extracted contents:

    Figure 2. Contents of GPFS
  2. Extract the GPFS RPMs, as follows:
    #./gpfs_install-2.3.0-0_i386 --dir /dump

    This command displays the license agreement; accept it to continue. To suppress the license prompt, use --silent.
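    For a fully unattended extraction, the two options can be combined; a minimal example, assuming --silent and --dir work together as described above:

    #./gpfs_install-2.3.0-0_i386 --silent --dir /dump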

    Figure 3. GPFS extraction

    After extraction, you should see:

    Figure 4. Extracted files
  3. For GPFS installation, use:
    #cd /dump
    #rpm -ivh gpfs.msg.en_US-2.3.0-0.noarch.rpm gpfs.base-2.3.0-0.i386.rpm gpfs.gpl-2.3.0-0.noarch.rpm
       /* or, equivalently: rpm -ivh *.rpm */
    Figure 5. GPFS installation

    The gpfs.gpl-2.3.0-0.noarch.rpm install is required only on the build node.

  4. After the GPFS RPMs are installed, the binaries are available under /usr/lpp/mmfs, as shown in the sample below.
    #cd /usr/lpp/mmfs
    Figure 6. Binary paths
  5. To install the GPFS fix pack, use:
    #tar zxvf gpfs-2.3.0-9.i386.update.tar.gz
    #rpm -Uvh gpfs.msg.en_US-2.3.0-9.noarch.rpm gpfs.base-2.3.0-9.i386.rpm gpfs.gpl-2.3.0-9.noarch.rpm
       /* or, equivalently: rpm -Uvh *.rpm */
    Figure 7. GPFS fix install

    The gpfs.gpl-2.3.0-9.noarch.rpm upgrade is required only on the build node.

  6. You are now ready to build the GPFS GPL portability modules on the build node. You'll do this activity on just one node; after that, you'll distribute the generated binaries to the other two nodes.
    #cd /usr/lpp/mmfs/src/config
    #cp site.mcr.proto site.mcr 
    #vi site.mcr

    Refer to the README in /usr/lpp/mmfs/src for the desired changes.

    Figure 8. GPFS GPL modules

    In the sample above, LINUX_DISTRIBUTION, LINUX_DISTRIBUTION_LEVEL, and LINUX_KERNEL_VERSION were specifically updated to match RHEL4 Update 2.
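    As a rough illustration only (the exact macro names and value encodings differ between GPFS levels, so confirm everything against the README), the edited lines in site.mcr for RHEL4 Update 2 with kernel 2.6.9-22 might resemble:

    LINUX_DISTRIBUTION = REDHAT_AS_LINUX
    #define LINUX_DISTRIBUTION_LEVEL 40
    #define LINUX_KERNEL_VERSION 2060922
       /* hypothetical values; verify against /usr/lpp/mmfs/src/README */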

    #cd /usr/lpp/mmfs/src
    #export SHARKCLONEROOT=/usr/lpp/mmfs/src
    #make World
    #make InstallImages
    Figure 9. Install images
    #cd /usr/lpp/mmfs/bin
    #scp mmfslinux mmfs26 lxtrace dumpconv tracedev gpfs2:/usr/lpp/mmfs/bin
     /* Copy to all the other nodes */
  7. At this point, you're ready to create a GPFS cluster and configure the Network Shared Disk (NSD).
    1. Start the GPFS daemons on all the nodes using:
      #/usr/lpp/mmfs/bin/mmstartup -a
      Figure 10. GPFS daemons
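      To confirm that the daemons are active on every node, you can query their state (a quick check, assuming the mmgetstate command is available at this GPFS level):

      #/usr/lpp/mmfs/bin/mmgetstate -a
         /* each node should report a state of "active" */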
    2. Create a file that identifies the nodes participating in the GPFS cluster. See Resources for detailed information on the Configuration and Tuning GPFS for Digital Media Environments redbook.
      #vi  /dump/gpfs.nodes
      Figure 11. GPFS nodes
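      As an illustration, using the hypothetical hostnames from earlier, a three-node gpfs.nodes file contains one NodeName:NodeDesignations descriptor per line (check the redbook for the designations valid at your GPFS level):

      gpfs1:quorum
      gpfs2:quorum
      gpfs3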
    3. Create a separate file that contains the disk information for creating the NSDs. Again, see the Configuration and Tuning GPFS for Digital Media Environments redbook in Resources for details.
      #vi  /dump/gpfs.disks
      Figure 12. GPFS disks
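      Each line of gpfs.disks is a disk descriptor of the form DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup. As an illustration with the hypothetical hostnames, the two server-attached disks might be described as:

      /dev/sdb:gpfs1::dataAndMetadata:1
      /dev/sdb:gpfs2::dataAndMetadata:2
         /* distinct failure groups, since the disks live in different servers */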
    4. Create the GPFS cluster using:
      #/usr/lpp/mmfs/bin/mmcrcluster -p <primary node> -s <secondary node> \
        -n /dump/gpfs.nodes -r /usr/bin/ssh -R /usr/bin/scp
         /* -p and -s name the primary and secondary cluster configuration servers */
      Figure 13. GPFS cluster

      List the GPFS cluster details using:

      #/usr/lpp/mmfs/bin/mmlscluster
      Figure 14. GPFS mmlscluster output
    5. Create the NSD using:
      #/usr/lpp/mmfs/bin/mmcrnsd -F /dump/gpfs.disks -v yes
      Figure 15. Creating the NSD

      List the NSD details using:

      #/usr/lpp/mmfs/bin/mmlsnsd
      Figure 16. NSD details

      The contents of /dump/gpfs.disks change after you execute the mmcrnsd command.

      Figure 17. Modified GPFS disks
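      For reference, mmcrnsd typically comments out each original descriptor and substitutes the generated NSD name; with the hypothetical descriptors above, the rewritten file might resemble:

      # /dev/sdb:gpfs1::dataAndMetadata:1
      gpfs1nsd:::dataAndMetadata:1
      # /dev/sdb:gpfs2::dataAndMetadata:2
      gpfs2nsd:::dataAndMetadata:2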
    6. Create the GPFS file system using:
      #/usr/lpp/mmfs/bin/mmcrfs /gpfs gpfsdev -F /dump/gpfs.disks \
        -B 1024K -m 1 -M 2 -r 1 -R 2
      Figure 18. Creating the GPFS file system

      The above command creates a GPFS file system (in this setup, roughly 286GB) and mounts it in the /gpfs folder. The -B flag sets the block size (1024K here); -m and -M set the default and maximum number of metadata replicas, and -r and -R do the same for data replicas. The command also adds an entry to the /etc/fstab file on all the nodes so that the file system is mounted automatically on reboot.

      Figure 19. fstab file
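      The generated entry usually resembles the following (mount options vary by GPFS level; treat this as a sketch rather than the exact line):

      /dev/gpfsdev   /gpfs   gpfs   rw,mtime,atime,dev=gpfsdev,autostart 0 0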
    7. Your GPFS environment is now ready for use.
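As a quick sanity check that the new file system is mounted and sized as expected:

    #df -h /gpfs
    #/usr/lpp/mmfs/bin/mmlsfs gpfsdev
       /* lists file system attributes such as block size and replication settings */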

Uninstallation process

To uninstall GPFS RPMs, do the following:

#rpm -e gpfs.msg.en_US gpfs.base   
   /* on the nodes that do not have gpfs.gpl installed */
#rpm -e gpfs.msg.en_US gpfs.base gpfs.gpl   
   /* on the build node */
#rm -rf /usr/lpp/mmfs   
   /* on all the nodes */
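Before removing the RPMs, you would normally dismantle the GPFS configuration first. A minimal sketch, assuming the standard GPFS administration commands:

#umount /gpfs   
   /* on every node */
#/usr/lpp/mmfs/bin/mmdelfs gpfsdev
#/usr/lpp/mmfs/bin/mmdelnsd -F /dump/gpfs.disks
#/usr/lpp/mmfs/bin/mmshutdown -a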





