Install and configure a GPFS cluster on AIX



  • Verify the system environment
  • Create a GPFS cluster
  • Define NSDs
  • Create a GPFS file system

You will need


Requirements for this lab (not necessarily GPFS minimum requirements):

  • Two AIX 6.1 or 7.1 operating systems (LPARs)
    • Installation is very similar to Linux: AIX LPP packages replace the Linux RPMs, and some of the administrative commands differ.
  • At least 4 hdisks

Step 1: Verify Environment

  1. Verify the nodes are properly installed
    1. Check that the operating system level is supported

      On each system, run the oslevel command

      Check the GPFS FAQ for supported levels:
    2. Is the installed OS level supported by GPFS? Yes No
    3. Is there a specific GPFS patch level required for the installed OS? Yes No
    4. If so what patch level is required? ___________
  2. Verify the nodes are properly configured on the network(s)
    1. Write the name of Node1: ____________
    2. Write the name of Node2: ____________
    3. From node1, ping node2
    4. From node2, ping node1

      If the pings fail, resolve the issue before continuing.
  3. Verify node-to-node ssh communications (For this lab you will use ssh and scp for secure remote commands/copy)
    1. On each node create an ssh key pair. To do this use the ssh-keygen command; if you do not specify a blank passphrase with -N "", press Enter at each prompt to create a key with no passphrase until you are returned to the shell prompt. The result should look something like this:

      # ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
      Generating public/private rsa key pair.
      Created directory '/.ssh'.
      Your identification has been saved in /.ssh/id_rsa.
      Your public key has been saved in /.ssh/id_rsa.pub.
      The key fingerprint is:
    2. On node1 copy the $HOME/.ssh/id_rsa.pub file to $HOME/.ssh/authorized_keys
      # cp $HOME/.ssh/id_rsa.pub $HOME/.ssh/authorized_keys
    3. From node1 copy the $HOME/.ssh/id_rsa.pub file from node2 to /tmp/
      # scp node2:/.ssh/id_rsa.pub /tmp/
    4. Add the public key from node2 to the authorized_keys file on node1
      # cat /tmp/id_rsa.pub >> $HOME/.ssh/authorized_keys
    5. Copy the authorized key file from node1 to node2
      # scp $HOME/.ssh/authorized_keys node2:/.ssh/authorized_keys
    6. To test your ssh configuration, ssh as root from each node to itself and to the other node (node1 to node1, node1 to node2, node2 to node1, node2 to node2) until you are no longer prompted for a password or for addition to the known_hosts file.

      node1# ssh node1 date
      node1# ssh node2 date
      node2# ssh node1 date
      node2# ssh node2 date
    7. Suppress ssh banners by creating a .hushlogin file in the root home directory
      # touch $HOME/.hushlogin
  4. Verify the disks are available to the system

    For this lab you should have 4 disks available for use, hdiskw-hdiskz.
    1. Use lspv to verify the disks exist
    2. Ensure you see 4 unused disks besides the existing rootvg disks and/or other volume groups (see the example lspv output below).
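
    As a point of reference, unused disks show no physical volume ID and no volume group assignment in lspv output. A sketch with hypothetical disk names and PVIDs (your hdisk numbers will differ):

    # lspv
    hdisk0          00f63c7d8a7b01a2                    rootvg          active
    hdisk1          none                                None
    hdisk2          none                                None
    hdisk3          none                                None
    hdisk4          none                                None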

Step 2: Install the GPFS software


On node1

  1. Locate the GPFS software in /yourdir/gpfs/base/
    # cd /yourdir/gpfs/base/
  2. Run the inutoc command to create the table of contents, if not done already
    # inutoc .
  3. Install the base GPFS code using the installp command
    # installp -aXY -d/yourdir/gpfs/base all
  4. Locate the latest GPFS updates in /yourdir/gpfs/fixes/
    # cd /yourdir/gpfs/fixes/
  5. Run the inutoc command to create the table of contents, if not done already
    # inutoc .
  6. Install the GPFS PTF updates using the installp command
    # installp -aXY -d/yourdir/gpfs/fixes all
  7. Repeat steps 1-6 on node2. Then, on node1 and node2, confirm GPFS is installed using the lslpp command
    # lslpp -L gpfs.\*

    The output should look similar to this:

    Fileset                      Level  State  Type  Description (Uninstaller)
    -------------------------------------------------------------------------
    gpfs.base                           A      F     GPFS File Manager
    gpfs.docs.data                      A      F     GPFS Server Manpages and Documentation
    gpfs.gskit                          A      F     GPFS GSKit Cryptography Runtime
    gpfs.msg.en_US                      A      F     GPFS Server Messages U.S. English

    Note 1: The above example is from GPFS V4.1 Express Edition. The important part is that the base, docs, and msg filesets are present.

    If you have GPFS Standard Edition, you should also have the following:

    gpfs.ext                            A      F     GPFS Extended Features

    If you have GPFS Advanced Edition, in addition to gpfs.ext, you should also have the following entry:

    gpfs.crypto                         A      F     GPFS Cryptographic Subsystem

    Note 2: The gpfs.gnr fileset is used by the Power 775 HPC cluster only, and there is no need to install this fileset on any other AIX cluster. This fileset does not ship on the V4.1 media.

  8. Confirm the GPFS binaries are in your $PATH using the mmlscluster command
    # mmlscluster
    mmlscluster: This node does not belong to a GPFS cluster.
    mmlscluster: Command failed.  Examine previous error messages to determine cause.

    Note: The path to the GPFS binaries is: /usr/lpp/mmfs/bin
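
    If you prefer not to type the full path for every GPFS command, one option (a convenience step, not a lab requirement) is to add that directory to root's PATH for the current session, and persist it in root's .profile if desired:

    # export PATH=$PATH:/usr/lpp/mmfs/bin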


Step 3: Create the GPFS cluster


For this exercise the cluster is initially created with a single node. When creating the cluster make node1 the primary configuration server and give node1 the designations quorum and manager. Use ssh and scp as the remote shell and remote file copy commands.

Primary Configuration server (node1): __________

Verify fully qualified path to ssh and scp:

ssh path__________

scp path_____________
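
One way to confirm the fully qualified paths (the output shown is typical for AIX with OpenSSH installed; your paths may differ):

  # which ssh
  /usr/bin/ssh
  # which scp
  /usr/bin/scp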

  1. Use the mmcrcluster command to create the cluster
    # mmcrcluster -N node1:manager-quorum -p node1 -r /usr/bin/ssh -R /usr/bin/scp
    Thu Mar  1 09:04:33 CST 2012: mmcrcluster: Processing node node1
    mmcrcluster: Command successfully completed
    mmcrcluster: Warning: Not all nodes have proper GPFS license designations.
        Use the mmchlicense command to designate licenses as needed.
  2. Run the mmlscluster command again to see that the cluster was created
    # mmlscluster
    | Warning:                                                                    |
    |   This cluster contains nodes that do not have a proper GPFS license        |
    |   designation.  This violates the terms of the GPFS licensing agreement.    |
    |   Use the mmchlicense command and assign the appropriate GPFS licenses      |
    |   to each of the nodes in the cluster.  For more information about GPFS     |
    |   license designation, see the Concepts, Planning, and Installation Guide.  |
    GPFS cluster information
      GPFS cluster name:
      GPFS cluster id:           13882390374179224464
      GPFS UID domain: 
      Remote shell command:      /usr/bin/ssh
      Remote file copy command:  /usr/bin/scp
    GPFS cluster configuration servers:
      Primary server:
      Secondary server:  (none)
    Node Daemon node name            IP address       Admin node name             Designation
       1               quorum-manager


  3. Set the license mode for the node using the mmchlicense command. Use a server license for this node.
    # mmchlicense server --accept -N node1
    The following nodes will be designated as possessing GPFS server licenses:
    mmchlicense: Command successfully completed
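
    To double-check the license designations, GPFS also provides the mmlslicense command (a quick verification step; output not shown here):

    # mmlslicense -L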

Step 4: Start GPFS and verify the status of all nodes

  1. Start GPFS on all the nodes in the GPFS cluster using the mmstartup command
    # mmstartup -a
  2. Check the status of the cluster using the mmgetstate command
    # mmgetstate -a
    Node number  Node name        GPFS state
      1          node1            active
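
    If a node does not reach the active state, the GPFS log mentioned later in this lab is the first place to look; for example:

    # tail /var/adm/ras/mmfs.log.latest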



Step 5: Add the second node to the cluster

  1. On node1, use the mmaddnode command to add node2 to the cluster
    # mmaddnode -N node2
  2. Confirm the node was added to the cluster using the mmlscluster command
    # mmlscluster
  3. Use the mmchcluster command to set node2 as the secondary configuration server
    # mmchcluster -s node2
  4. Set the license mode for the node using the mmchlicense command. Use a server license for this node.
    # mmchlicense server --accept -N node2
  5. Start node2 using the mmstartup command
    # mmstartup -N node2
  6. Use the mmgetstate command to verify that both nodes are in the active state

    # mmgetstate -a
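
    With both nodes started, the output should look similar to this (node names will match your cluster):

     Node number  Node name        GPFS state
       1          node1            active
       2          node2            active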

Step 6: Collect information about the cluster


Now we will take a moment to check a few things about the cluster. Examine the cluster configuration using the mmlscluster command

  1. What is the cluster name? ______________________
  2. What is the IP address of node2? _____________________
  3. What date was this version of GPFS "Built"? ________________

    Hint: look in the GPFS log file: /var/adm/ras/mmfs.log.latest
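
    One way to pull the build date out of the log without reading the whole file (assuming the log contains a line with the word "Built", which current GPFS releases record at daemon startup):

    # grep Built /var/adm/ras/mmfs.log.latest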

Step 7: Create NSDs


You will use the 4 hdisks identified in Step 1.

  • Each disk will store both data and metadata
  • The NSD server field (ServerList) can be left blank if both nodes have direct access to the shared LUNs.
  1. On node1 create the directory /yourdir/data
  2. Create a disk stanza file /yourdir/data/diskdesc.txt using your favorite text editor.
  3. The format for the file is:

     %nsd: device=DiskName
           nsd=NsdName
           servers=ServerList
           usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}
           failureGroup=FailureGroup
           pool=StoragePool

    You only need to populate the fields required to create the NSDs; in this example all NSDs use the default failure group and pool definitions. An example stanza file is shown after this list.

    Note: hdisk numbers will vary per system.

  4. Create the NSDs using the mmcrnsd command
    # mmcrnsd -F /yourdir/data/diskdesc.txt
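
    For illustration, a diskdesc.txt for four directly attached disks might look like the following (the hdisk and NSD names are placeholders; substitute the unused disks you identified in Step 1):

    %nsd: device=/dev/hdiskw
          nsd=nsd1
          usage=dataAndMetadata
    %nsd: device=/dev/hdiskx
          nsd=nsd2
          usage=dataAndMetadata
    %nsd: device=/dev/hdisky
          nsd=nsd3
          usage=dataAndMetadata
    %nsd: device=/dev/hdiskz
          nsd=nsd4
          usage=dataAndMetadata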

Step 8: Collect information about the NSDs


Now collect some information about the NSDs you have created.

  1. Examine the NSD configuration using the mmlsnsd command
    1. What mmlsnsd flag do you use to see the operating system device (/dev/hdisk?) associated with an NSD? _______
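
    For reference, running mmlsnsd with no options lists each NSD and the file system it belongs to; a sketch of what that might look like before any file system exists (NSD names will match whatever you put in your stanza file):

    # mmlsnsd

     File system   Disk name    NSD servers
    ---------------------------------------------------------------
     (free disk)   nsd1         (directly attached)
     (free disk)   nsd2         (directly attached)
     (free disk)   nsd3         (directly attached)
     (free disk)   nsd4         (directly attached)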

Step 9: Create a file system


Now that there is a GPFS cluster and some NSDs available, you can create a file system.

  • Set the file system block size to 64 KB
  • Mount the file system at /gpfs
  1. Create the file system using the mmcrfs command
    # mmcrfs /gpfs fs1 -F /yourdir/data/diskdesc.txt -B 64k
  2. Verify the file system was created correctly using the mmlsfs command
    # mmlsfs fs1

    Will the file system be automatically mounted when GPFS starts? _________________

  3. Mount the file system using the mmmount command
    # mmmount all -a
  4. Verify the file system is mounted using the df command
    # df -k
    Filesystem    1024-blocks      Free %Used    Iused %Iused Mounted on
    /dev/hd4            65536      6508   91%     3375    64% /
    /dev/hd2          1769472    465416   74%    35508    24% /usr
    /dev/hd9var        131072     75660   43%      620     4% /var
    /dev/hd3           196608    192864    2%       37     1% /tmp
    /dev/hd1            65536     65144    1%       13     1% /home
    /proc                   -         -    -         -     -  /proc
    /dev/hd10opt       327680     47572   86%     7766    41% /opt
    /dev/fs1        398929107 398929000    1%        1     1% /gpfs
  5. Use the mmdf command to get information on the file system.
    # mmdf fs1

    How many inodes are currently used in the file system? ______________
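
    Hint: mmlsfs also accepts individual attribute flags if you want to check a single setting rather than scan the full listing; for example, to confirm the block size set at creation time (output abbreviated):

    # mmlsfs fs1 -B
    flag                value                    description
    ------------------- ------------------------ -----------------------------------
     -B                 65536                    Block size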