Mounting a remote GPFS file system

Explore an example of how to mount a file system that is owned and served by another IBM Spectrum Scale cluster.

The package gpfs.gskit must be installed on all the nodes of the owning cluster and the accessing cluster. For more information, see the installation chapter for your operating system, such as Installing IBM Spectrum Scale on Linux nodes and deploying protocols.
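To confirm that the package is present before you begin, you can query the package manager on each node. This is a minimal check for RPM-based Linux distributions; use your platform's package manager elsewhere.

    # Verify that gpfs.gskit is installed on this node (RPM-based Linux)
    rpm -q gpfs.gskit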
The procedure to set up remote file system access involves generating and exchanging authorization keys between the two clusters. In addition, the administrator of the GPFS cluster that owns the file system must authorize the remote clusters that are to access it, and the administrator of the GPFS cluster that seeks access must define to GPFS the remote cluster and file system to be accessed.
Note: For more information about CES cluster setup, see CES cluster setup.
In this example, owningCluster is the cluster that owns and serves the file system to be mounted and accessingCluster is the cluster that accesses owningCluster.
Note:
  • The following example uses AUTHONLY as the authorization setting. When you specify AUTHONLY for authentication, GPFS checks network connection authorization. However, data sent over the connection is not protected.
  • Clusters that are created on IBM Spectrum Scale version 4.2 or later are already created with AUTHONLY as the authentication mode. If the authentication mode used for owningCluster is AUTHONLY or a cipher other than empty, skip steps 1 and 2. If the authentication mode used for accessingCluster is AUTHONLY or a cipher other than empty, skip steps 4 and 5. You can use the mmlsconfig cipherList command to list the current cipher list that is being used by the local cluster.
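To confirm the current authentication mode before deciding whether to skip these steps, query the cipher list on each cluster. A minimal sketch:

    # Display the cipher list in effect on the local cluster;
    # AUTHONLY or a nonempty cipher means the genkey/update steps can be skipped
    mmlsconfig cipherList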
  1. On owningCluster, the system administrator issues the mmauth genkey command to generate a public/private key pair. The key pair is placed in /var/mmfs/ssl. The public key file is id_rsa.pub.
    mmauth genkey new
  2. On owningCluster, the system administrator enables authorization by entering the following command:
    mmauth update . -l AUTHONLY
  3. The system administrator of owningCluster gives the file /var/mmfs/ssl/id_rsa.pub to the system administrator of accessingCluster. This operation requires the two administrators to coordinate their activities and must occur outside of the GPFS command environment.

    The system administrator of accessingCluster can rename the key file and put it in any directory of the node that the administrator is working on, so long as the administrator provides the correct path and file name in the mmremotecluster add command in Step 9. In this example, the system administrator renames the key file to owningCluster_id_rsa.pub.

  4. On accessingCluster, the system administrator issues the mmauth genkey command to generate a public/private key pair. The key pair is placed in /var/mmfs/ssl. The public key file is id_rsa.pub.
    mmauth genkey new
  5. On accessingCluster, the system administrator enables authorization by entering the following command:
    mmauth update . -l AUTHONLY
    
  6. The system administrator of accessingCluster gives key file /var/mmfs/ssl/id_rsa.pub to the system administrator of owningCluster. This operation requires the two administrators to coordinate their activities and must occur outside of the GPFS command environment.

    The system administrator of owningCluster can rename the key file and put it in any directory of the node that they are working on, so long as the administrator provides the correct path and file name in the mmauth add command in Step 7. In this example, the system administrator renames the key file to accessingCluster_id_rsa.pub.

  7. On owningCluster, the system administrator issues the mmauth add command to authorize accessingCluster to mount file systems that are owned by owningCluster by using the key file that was received from the administrator of accessingCluster:
    mmauth add accessingCluster -k accessingCluster_id_rsa.pub
    
  8. On owningCluster, the system administrator issues the mmauth grant command to authorize accessingCluster to mount specific file systems that are owned by owningCluster:
    mmauth grant accessingCluster -f gpfs
    Note: If the accessing cluster mounts the remote file system in read-only mode, only a subset of the file audit logging events is generated. For more information, see File audit logging events.
  9. On accessingCluster, the system administrator must define the cluster name, contact nodes, and public key for owningCluster:
    mmremotecluster add owningCluster -n node1,node2,node3 -k owningCluster_id_rsa.pub 

    This command provides the system administrator of accessingCluster with a means to locate the serving cluster and mount its file systems.

  10. On accessingCluster, the system administrator issues one or more mmremotefs commands to identify the file systems in owningCluster that are to be accessed by nodes in accessingCluster:
    mmremotefs add mygpfs -f gpfs -C owningCluster -T /mygpfs 
    
    where:
    mygpfs
    Is the device name under which the file system is known in accessingCluster.
    gpfs
    Is the device name for the file system in owningCluster.
    owningCluster
    Is the name of owningCluster as given by the mmlscluster command on a node in owningCluster.
    /mygpfs
    Is the local mount point in accessingCluster.

  11. On accessingCluster, the system administrator enters the mmmount command to mount the file system (verification sketches follow this list):
    mmmount mygpfs 
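After the mount, the completed setup can be checked from both sides. On owningCluster, the mmauth show command displays the remote clusters that are authorized and the file systems that each one can mount. A minimal verification sketch; the exact output layout varies by release:

    # On owningCluster: confirm that accessingCluster is authorized and
    # that the grant for file system gpfs is in place
    mmauth show all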
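On accessingCluster, the remote cluster definition, the remote file system definition, and the mount itself can be confirmed in the same way. A minimal sketch that uses the names from this example:

    # On accessingCluster: confirm the definition of the serving cluster
    mmremotecluster show all

    # Confirm the remote file system definition (device name, mount point)
    mmremotefs show all

    # Confirm that the file system is mounted and list the mounting nodes
    mmlsmount mygpfs -L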
Table 1 summarizes the commands that the administrators of the two clusters need to issue so that the nodes in accessingCluster can mount the remote file system fs1, which is owned by owningCluster, assigning rfs1 as the local name with a mount point of /rfs1.
Table 1. Summary of commands to set up cross-cluster file system access.

  accessingCluster                                       owningCluster
  ----------------------------------------------------   ----------------------------------------------------
  mmauth genkey new                                      mmauth genkey new
  mmauth update . -l AUTHONLY                            mmauth update . -l AUTHONLY
  Exchange public keys (file /var/mmfs/ssl/id_rsa.pub)   Exchange public keys (file /var/mmfs/ssl/id_rsa.pub)
  mmremotecluster add owningCluster ...                  mmauth add accessingCluster ...
  mmremotefs add rfs1 -f fs1 -C owningCluster -T /rfs1   mmauth grant accessingCluster -f fs1 ...

Note: The mmauth genkey new and mmauth update commands can be skipped if the given cluster (accessingCluster or owningCluster) is already operating with the authentication mode AUTHONLY (it is the default authentication mode on the clusters that are created on version 4.2 or later) or a cipher other than empty.
Note: The configuration of remote clusters might impact the notify RPCs that are related to deadlock detection and amelioration, node overload, and node expel. The authentication of the notify RPCs is controlled by the value of the sdrNotifyAuthEnabled configuration parameter, which is local to a cluster. However, the notify RPCs can be used between remote clusters. Hence, it is recommended that remote clusters have a consistent setting of sdrNotifyAuthEnabled to avoid failures. For example, if the home cluster was created prior to version 5.1.1, it may have the sdrNotifyAuthEnabled configuration parameter set to no. If a new client cluster is created with version 5.1.1 or later, and that cluster has sdrNotifyAuthEnabled set to yes, then either the value should be set to yes on the home cluster (after the nodes of the home cluster are upgraded to version 5.1.1 or later), or the value should be set to no on the client cluster.

For more information about how to configure the sdrNotifyAuthEnabled parameter, see mmchconfig command.
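For example, to check the current value on a cluster and to align it with the recommended consistent setting, the parameter can be queried and changed as follows. A minimal sketch, assuming that yes is the value that you standardize on:

    # Display the current value of the parameter on the local cluster
    mmlsconfig sdrNotifyAuthEnabled

    # Change the value; run on the cluster whose setting must change
    mmchconfig sdrNotifyAuthEnabled=yes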