
IBM Spectrum Scale version 4.2.1 Quick Start Guide for Linux


Before you start

  • Check the GPFS FAQ for the latest on supported Linux distributions and levels
  • GPFS requires at least some nodes in the cluster to communicate with all of the other nodes without being prompted for a password. The default is rsh/rcp, though the most common tools used are ssh/scp. When configuring the remote execution and file copy commands, ensure that:
    1. Proper authorization is granted to all nodes in the GPFS cluster.
    2. The nodes in the GPFS cluster can communicate without the use of a password as user 'root', and without any extraneous output being displayed by the remote shell.
    3. For example, to test ssh in an environment with 2 nodes:
        # Edit node list
        nodes="node1 node2"
        # Test ssh configuration
        for i in $nodes
        do for j in $nodes
           do echo -n "Testing ${i} to ${j}: "
              ssh ${i} "ssh ${j} date"
           done
        done

        The output should look similar to:

        Testing fs122-data to fs122-data: Wed Oct 15 10:14:35 CDT 2008
        Testing fs122-data to fs121-data: Wed Oct 15 10:14:35 CDT 2008
        Testing fs121-data to fs122-data: Wed Oct 15 10:14:36 CDT 2008
        Testing fs121-data to fs121-data: Wed Oct 15 10:14:36 CDT 2008

        Repeat this for the list of short names (e.g. node1), fully qualified names, and IP addresses, from all nodes to all nodes.
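
        The passwordless access described above is commonly set up by generating an ssh key pair for root and pushing the public key to every node. A minimal sketch, using node1 and node2 as placeholder names:

        ```shell
        # Generate a passwordless key for root if one does not already exist
        [ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
        # Push the public key to every node in the cluster (including this one),
        # so any node can reach any other without a password prompt
        for n in node1 node2; do
            ssh-copy-id root@"$n"
        done
        ```

        After distributing the keys, rerun the all-pairs ssh test to confirm that no password prompt or extraneous output remains.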


Installing Spectrum Scale


These instructions include installing the base Spectrum Scale software (no protocols, no GUI) using the yum method of Linux package installation.

  1. Obtain and install Spectrum Scale from the product media. (example for x86_64 Linux)
    • Extract the RPMs and accept the license:


    • By default the software is extracted to a version-specific directory under /usr/lpp/mmfs.


    • Install the gpfs.base, gpfs.gpl, and gpfs.msg RPMs (plus gpfs.ext, gpfs.gskit, and optionally gpfs.callhome). In the directory where the software resides, use the yum command to install the GPFS software:
      yum install gpfs.base-4.2.1-1.x86_64.rpm gpfs.callhome-4.2.1-1.000.el7.noarch.rpm 
      gpfs.ext-4.2.1-1.x86_64.rpm gpfs.gpl-4.2.1-1.noarch.rpm
      gpfs.gskit-8.0.50-57.x86_64.rpm gpfs.msg.en_US-4.2.1-1.noarch.rpm


    • Note: It is recommended that you place the base RPMs in a different directory from the patch RPMs to simplify installation (since it is common to do rpm -ivh *.rpm). This is important because you need to install the base RPMs completely before installing a patch level. If you are applying a patch during the initial installation, you only need to build the portability layer once, after the patch RPMs are installed.


  2. Download and upgrade GPFS to the latest service level (rpm -Uvh gpfs*.rpm). For the latest GPFS service levels, go to IBM Fix Central - GPFS.

    rpm -Uvh *.rpm

    Verify all the GPFS RPMs have been upgraded to the same version (rpm -qa | grep gpfs).
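
    One quick way to spot a mixed install is to reduce the package list to its distinct versions. A sketch of the check, demonstrated here on sample rpm -qa output (on a real node, pipe rpm -qa | grep gpfs in instead; note that gpfs.gskit carries its own version number and should be excluded):

    ```shell
    # Sample 'rpm -qa | grep gpfs' output (hypothetical); gskit omitted since
    # it is versioned independently of the other GPFS packages
    printf '%s\n' \
        gpfs.base-4.2.1-1.x86_64 \
        gpfs.ext-4.2.1-1.x86_64 \
        gpfs.gpl-4.2.1-1.noarch \
        gpfs.msg.en_US-4.2.1-1.noarch |
    cut -d- -f2 | sort -u
    # A single line of output means every package is at the same level
    ```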

  3. Build the portability layer

    IBM Spectrum Scale 4.1 and later can build the portability layer by running the mmbuildgpl command:

    /usr/lpp/mmfs/bin/mmbuildgpl


    On GPFS 3.5 and earlier, as well as on RHEL clones that don't identify themselves as RHEL, you may need to do this:

    cd /usr/lpp/mmfs/src
    make Autoconfig
    make World
    make InstallImages

    # Cut and paste version

    cd /usr/lpp/mmfs/src; make Autoconfig; make World; make InstallImages

    If make Autoconfig fails for some reason, and you are running a RHEL-compatible Linux distribution, you can try telling Autoconfig to treat the distribution as Red Hat using the LINUX_DISTRIBUTION flag (for example, make Autoconfig LINUX_DISTRIBUTION=REDHAT_AS_LINUX).


    You will have to perform this build step on each node in the cluster. Alternatively, if all the nodes are running the same version of Linux, you can run make rpm to generate a gpfs.gplbin-`uname -r` RPM and install that RPM on the other GPFS nodes to install the portability layer modules.

    make rpm

    make rpm generates gpfs.gplbin-`uname -r`-<GPFS Version>.<CPU Arch>.rpm; by default the RPM is stored in the /root/rpmbuild/RPMS/x86_64 directory. With GPFS 3.2.1 and earlier there is no rpm option to the GPFS GPL build process; in that case you can distribute the kernel module binaries by copying the files manually. You can see a list of these files when the make InstallImages process completes. Five files are generated by make InstallImages: three are installed as /lib/modules/`uname -r`/{mmfslinux, mmfs26, tracedev}.ko and two are installed as /usr/lpp/mmfs/bin/{kdump,lxtrace}-`uname -r`. Ensure these five files are installed properly on the rest of the nodes (on each remaining node, either build the portability layer or install the gpfs.gplbin RPM generated by make rpm).
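
    The build-once-and-distribute flow can be sketched as follows, assuming all nodes run the same kernel and using node2 and node3 as placeholder names:

    ```shell
    # On the build node: build the portability layer and package it
    cd /usr/lpp/mmfs/src
    make Autoconfig && make World && make InstallImages
    make rpm    # writes gpfs.gplbin-$(uname -r)-<version>.<arch>.rpm under /root/rpmbuild/RPMS/x86_64

    # Copy the gplbin RPM to the remaining nodes and install it there
    for n in node2 node3; do
        scp /root/rpmbuild/RPMS/x86_64/gpfs.gplbin-*.rpm root@"$n":/tmp/
        ssh root@"$n" 'rpm -ivh /tmp/gpfs.gplbin-*.rpm'
    done
    ```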

  4. Create the cluster using the mmcrcluster command. Choose a primary and/or secondary configuration server, feed in the list of nodes (NodeName:NodeDesignations:AdminNodeName), and define ssh and scp as the remote shell and remote file copy commands.
    mmcrcluster -N node01:quorum-manager,node02:quorum-manager 
    -r /usr/bin/ssh -R /usr/bin/scp
  5. Set the license mode for each node using the mmchlicense command. Use a server or client license setting where appropriate. In this example both nodes are servers.
    mmchlicense server --accept -N node01,node02
  6. Start GPFS on all nodes using the mmstartup command:
    mmstartup -a
  7. Verify GPFS state is "active" (mmgetstate -a)
  8. Determine the disks to be used to create a file system and create a stanza file (for example, stanza.txt) that will be used as input to the mmcrnsd command to create the NSDs.

    The format for the file is:
                   %nsd: device=DiskName
                      usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}

    You only need to populate the fields required to create the NSDs; in this example all NSDs use the default failure group and pool definitions.
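
    For example, a two-disk stanza file could be generated like this (the device paths and NSD names are placeholders; adjust them to your disks):

    ```shell
    # Write a minimal stanza file; /dev/sdb and /dev/sdc are hypothetical devices
    cat > stanza.txt <<'EOF'
    %nsd: device=/dev/sdb
      nsd=mynsd1
      usage=dataAndMetadata
    %nsd: device=/dev/sdc
      nsd=mynsd2
      usage=dataAndMetadata
    EOF
    # Sanity check: two NSD stanzas defined
    grep -c '%nsd:' stanza.txt
    ```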

  9. Create the NSDs using the mmcrnsd command:
    mmcrnsd -F stanza.txt
    # List the NSDs just created
    mmlsnsd
    # For a two-quorum-node configuration it is necessary to
    # define one or three tie-breaker disks from the NSD list.
    # Configuring tie-breaker disks requires Spectrum Scale to
    # be shut down across the cluster unless CCR is enabled.
    mmshutdown -a
    mmchconfig tiebreakerDisks="mynsd1"
    mmstartup -a
  10. Create a file system using the NSDs you just created, using the mmcrfs command:
    mmcrfs gpfs1 -F stanza.txt -A yes -T /gpfs/fs1

    This command creates a file system named gpfs1 and mounts it at /gpfs/fs1 automatically when GPFS starts.

  11. Mount the file system on all nodes using the mmmount command:
    mmmount gpfs1 -a

Adding GPFS Nodes

  1. To add additional nodes to an existing GPFS cluster, start by repeating steps 1, 2, and 3 from the Installing Spectrum Scale section above.
  2. Ensure the existing nodes in the GPFS cluster can ssh (or use whatever tool you are using for admin commands) to the new nodes without a password as user 'root'.
  3. Create a file (for example, client-all.txt) with the client node names listed one per line. The following example adds four client nodes (placeholder names):
    # cat client-all.txt
    client01
    client02
    client03
    client04
  4. Run the mmaddnode command using that file to add the new nodes to the cluster.
    mmaddnode -N client-all.txt
  5. Set the GPFS client license for the new nodes using the mmchlicense command:
    mmchlicense client --accept -N client-all.txt
  6. Start GPFS on the newly added client nodes using the mmstartup command.
    mmstartup -N client-all.txt
  7. Verify the state of GPFS on all nodes in the cluster using the mmgetstate command. Ensure GPFS is in "active" state.
    mmgetstate -a
  8. You can list all of the nodes that have a GPFS file system mounted using the mmlsmount command.
    mmlsmount all -L