

GPFS V3.5 Quick Start Guide for Linux

 

Before you start:

 
  • The GPFS FAQ should be checked periodically. Question #1 under "Software-specific questions" ("What are the latest distributions and fix or kernel levels that GPFS has been tested with?") points to a table that lists the latest GPFS versions, kernel levels, and distributions that have been tested. Verify that the software levels you will use are compatible with that table before you begin.
  • This document covers the basic installation of GPFS. If your situation requires a non-basic configuration (for example, single-node quorum), please consult the GPFS documentation: Concepts, Planning, and Installation Guide.
  • There are a number of Linux packages required to install and run GPFS. The dependencies are listed in the RPMs; in addition, you can check the Linux Package List for GPFS page for packages required beyond the defaults.
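
    For example, you can list the dependencies of a GPFS package before installing it by querying the RPM directly (the filename below matches the base package named later in this guide; adjust it to your media):

      rpm -qpR gpfs.base-3.5.0-0.x86_64.rpm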
  • A prerequisite of GPFS is that all nodes in the cluster can communicate with the other nodes without being prompted for a password. The default is rsh/rcp, though the most common tools used are ssh/scp. If you choose to use a different remote shell and remote file copy command, ssh and scp for example, you must specify the fully qualified pathname of the program to be used by GPFS. You must also ensure:
    1. Proper authorization is granted to all nodes in the GPFS cluster.
    2. The nodes in the GPFS cluster can communicate without the use of a password as user 'root', and without any extraneous output being displayed by the remote shell.
    3. For example, to test ssh in an environment with two nodes:
      1. node1.mydomain.com:10.0.0.1
      2. node2.mydomain.com:10.0.0.2
        #!/bin/bash
        
        # Edit node list
        nodes="node1 node2"
        
        # Test passwordless ssh between every pair of nodes
        for i in $nodes; do
          for j in $nodes; do
            echo -n "Testing ${i} to ${j}: "
            ssh ${i} "ssh ${j} date"
          done
        done

        The output should look similar to:

        Testing node1 to node1: Wed Oct 15 10:14:35 CDT 2008
        Testing node1 to node2: Wed Oct 15 10:14:35 CDT 2008
        Testing node2 to node1: Wed Oct 15 10:14:36 CDT 2008
        Testing node2 to node2: Wed Oct 15 10:14:36 CDT 2008

        Repeat this for the list of short names (node1), fully qualified names (node1.mydomain.com) and IP addresses (10.0.0.1) from all nodes to all nodes.
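
        If passwordless root ssh is not yet configured, a minimal sketch of setting it up (assuming the two example nodes above, that root logins over ssh are permitted, and that ssh-copy-id is available):

        # Generate a key once on each node, then copy it to every node.
        # Repeat the loop for every name/IP form you intend to use.
        ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
        for n in node1 node2 node1.mydomain.com node2.mydomain.com 10.0.0.1 10.0.0.2
        do ssh-copy-id root@${n}
        done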

 

Installing GPFS

 
  1. Obtain and install the GPFS RPMs from the product media. (example for x86_64 Linux)
    • Extract the GPFS RPMs and accept the license:



      sh gpfs_install-3.5.0-0_x86_64 --text-only



      In GPFS 3.5 you can start with gpfs.base-3.5.0-0.x86_64.rpm or gpfs.base-3.5.0-3.x86_64.rpm.

       
    • Install the gpfs.base, gpfs.docs, gpfs.gpl and gpfs.msg RPMs. In the directory where the software resides, use the rpm command to install the GPFS software.
      rpm -ivh *.rpm
    • Note: It is recommended that you place the base RPMs in a different directory from the patch RPMs to simplify installation (since it is common to run rpm -ivh *.rpm). This is important because you need to install the base RPMs completely before installing a patch level. If you are applying a patch during the initial installation, you only need to build the portability layer once, after the patch RPMs are installed.
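
      A minimal sketch of that layout (the directory names are placeholders, not GPFS defaults):

      # Install the base level first, then apply the patch level
      cd /path/to/base-rpms  && rpm -ivh *.rpm
      cd /path/to/patch-rpms && rpm -Uvh *.rpm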

 

  2. Download and upgrade GPFS to the latest service level (rpm -Uvh gpfs*.rpm). For the latest GPFS service levels go to IBM Fix Central - GPFS.



    rpm -Uvh *.rpm


    Verify all the GPFS RPMs have been upgraded to the same version (rpm -qa | grep gpfs).



     
  3. Build the portability layer:



    cd /usr/lpp/mmfs/src
    make Autoconfig
    make World
    make InstallImages

    # Cut and paste version

    cd /usr/lpp/mmfs/src; make Autoconfig; make World; make InstallImages

    If make Autoconfig fails for some reason, and you are running a Red Hat distribution with GPFS 3.4.0.4 or later, you can try telling Autoconfig that the Linux distribution is Red Hat by using the LINUX_DISTRIBUTION flag.

    make LINUX_DISTRIBUTION=REDHAT_AS_LINUX Autoconfig


    You will have to perform this build step on each node in the cluster. Alternatively, if all the nodes are running the same version of Linux, you can run make rpm to generate a gpfs.gplbin-`uname -r` RPM and install that gpfs.gplbin RPM on the other GPFS nodes to provide the portability layer modules.



    make rpm

    make rpm generates gpfs.gplbin-`uname -r`-<GPFS Version>.<CPU Arch>.rpm; by default the RPM is stored in the /root/rpmbuild/RPMS/x86_64 directory. With GPFS 3.2.1 and earlier there is no rpm option in the GPFS GPL build process, so to distribute the kernel module binaries you can copy the files manually. You can see a list of these files when the make InstallImages process completes. Five files are generated during make InstallImages: three are installed in /lib/modules/`uname -r`/{mmfslinux, mmfs26, tracedev}.ko and two are installed in /usr/lpp/mmfs/bin/{kdump,lxtrace}-`uname -r`. Ensure these five files are installed properly on the rest of the nodes (on the rest of the nodes, either build the portability layer or install the gpfs.gplbin RPM generated by make rpm).
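
    A minimal sketch of distributing the generated portability layer RPM to the remaining nodes (the node names are placeholders; use the exact RPM filename that make rpm produced on your build node):

    for n in node02 node03
    do scp /root/rpmbuild/RPMS/x86_64/gpfs.gplbin-*.rpm ${n}:/tmp/
       ssh ${n} "rpm -ivh /tmp/gpfs.gplbin-*.rpm"
    done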

  4. Create the cluster using the mmcrcluster command. Choose a primary and/or secondary configuration server, feed in the list of nodes (NodeName:NodeDesignations:AdminNodeName), and define ssh and scp as the remote shell and remote file copy commands.
    mmcrcluster -N node01:quorum-manager,node02:quorum-manager -p node01 -s node02 \
    -r /usr/bin/ssh -R /usr/bin/scp
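
    To confirm the cluster was created as expected, you can display the cluster configuration:

    mmlscluster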
  5. Set the license mode for each node using the mmchlicense command. Use a server or client license setting where appropriate. In this example both nodes are servers.
    mmchlicense server --accept -N node01,node02
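
    To review how licenses have been assigned, you can use the mmlslicense command:

    mmlslicense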
  6. Start GPFS on all nodes using the mmstartup command:
    mmstartup -a
  7. Verify the GPFS state is "active" (mmgetstate -a).
  8. Determine the disks to be used to create a file system and create a stanza file (stanza.txt in this example) that will be used as input to the mmcrnsd command to create the NSDs:



    The format for the file is:
                   %nsd: device=DiskName
                      nsd=NsdName
                      servers=ServerList
                      usage={dataOnly | metadataOnly | dataAndMetadata | descOnly}
                      failureGroup=FailureGroup
                      pool=StoragePool
    
    

    You only need to populate the fields required to create the NSDs; in this example all NSDs use the default failure group and pool definitions.

    %nsd:
      device=/dev/sdc
      nsd=mynsd1
      usage=dataAndMetadata
    
    %nsd:
      device=/dev/sdd
      nsd=mynsd2
      usage=dataAndMetadata
    
    
  9. Create the NSDs using the mmcrnsd command:
    mmcrnsd -F stanza.txt
    
    # List NSDs
    mmlsnsd
     
    # For a two-quorum-node configuration, it is necessary to
    #   define one or three tiebreaker disks from the NSD list.
    # Configuring tiebreaker disks requires GPFS to be shut down
    #   across the cluster.
    mmshutdown -a
    mmchconfig tiebreakerDisks="mynsd1"
    mmstartup -a
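    
    # Optionally confirm the setting; mmlsconfig lists the current
    #   cluster configuration values
    mmlsconfig | grep tiebreakerDisks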
  10. Create a file system using the NSDs you just created, with the mmcrfs command:
    mmcrfs gpfs1 -F stanza.txt -A yes -T /gpfs

    This command creates a file system named gpfs1, mounted at /gpfs, and sets it to mount automatically when GPFS starts.
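
    To review the attributes of the new file system, you can use the mmlsfs command:

    mmlsfs gpfs1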

  11. Mount the file system on all nodes using the mmmount command:
    mmmount gpfs1 -a
 

Add GPFS Nodes

 
  1. To add additional nodes to an existing GPFS cluster, start by repeating steps 1, 2 and 3 from the Installing GPFS section above on the new nodes.
  2. Ensure the existing nodes in the GPFS cluster can ssh (or whatever tool you are using for admin commands) to the new nodes without the use of a password as user 'root'.
  3. Create a file (for example, client-all.txt) with the client node names listed one per line, then join the new nodes to the cluster as shown below. The following example lists four client nodes:
    #cat client-all.txt

    node10

    node11

    node12

    node13
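
    The new nodes also need to be joined to the existing cluster; a minimal sketch using the mmaddnode command with the node file above (run from an existing cluster node):

    mmaddnode -N client-all.txt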
  4. Set the GPFS client license using the mmchlicense command.
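
    For example, using a client license designation with the same node file:

    mmchlicense client --accept -N client-all.txt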
  5. Start GPFS on the newly added client nodes using the mmstartup command.

    mmstartup -N client-all.txt
  6. Verify the state of GPFS on all nodes in the cluster using the mmgetstate command. Ensure GPFS is in "active" state.

    mmgetstate -a
    
  7. You can list all of the nodes that have a GPFS file system mounted using the mmlsmount command.

    mmlsmount all -L