Mounting a remote GPFS file system

Explore an example of how to mount a file system that is owned and served by another IBM Spectrum Scale™ cluster.

The package gpfs.gskit must be installed on all the nodes of the owning cluster and the accessing cluster. For more information, see the installation chapter for your operating system, such as Installing IBM Spectrum Scale on Linux nodes and deploying protocols.
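For example, on an RPM-based Linux node, you can confirm that the package is present with a query such as the following (the package query syntax varies by operating system):
    rpm -q gpfs.gskit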
The procedure to set up remote file system access involves generating and exchanging authorization keys between the two clusters. In addition, the administrator of the GPFS™ cluster that owns the file system must authorize the remote clusters that are to access it, and the administrator of the GPFS cluster that seeks access must define to GPFS the remote cluster and the file systems to be accessed.
Note: For more information on CES cluster setup, see CES cluster setup.
In this example, owningCluster is the cluster that owns and serves the file system to be mounted and accessingCluster is the cluster that accesses owningCluster.
Note: The following example uses AUTHONLY as the authorization setting. When you specify AUTHONLY for authentication, GPFS checks network connection authorization. However, data sent over the connection is not protected.
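Before starting, either administrator can review the local cluster's current authorization settings; a quick check is the mmauth show command run against the local cluster:
    mmauth show .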
  1. On owningCluster, the system administrator issues the mmauth genkey command to generate a public/private key pair. The key pair is placed in /var/mmfs/ssl. The public key file is id_rsa.pub.
    mmauth genkey new
  2. On owningCluster, the system administrator enables authorization by entering the following command:
    mmauth update . -l AUTHONLY
  3. The system administrator of owningCluster gives the file /var/mmfs/ssl/id_rsa.pub to the system administrator of accessingCluster. This operation requires the two administrators to coordinate their activities and must occur outside of the GPFS command environment.

    The system administrator of accessingCluster can rename the key file and put it in any directory of the node being used, as long as the correct path and file name are given in the mmremotecluster add command in Step 9. In this example, the system administrator renames the key file to owningCluster_id_rsa.pub.

  4. On accessingCluster, the system administrator issues the mmauth genkey command to generate a public/private key pair. The key pair is placed in /var/mmfs/ssl. The public key file is id_rsa.pub.
    mmauth genkey new
  5. On accessingCluster, the system administrator enables authorization by entering the following command:
    mmauth update . -l AUTHONLY
  6. The system administrator of accessingCluster gives the file /var/mmfs/ssl/id_rsa.pub to the system administrator of owningCluster. This operation requires the two administrators to coordinate their activities and must occur outside of the GPFS command environment.

    The system administrator of owningCluster can rename the key file and put it in any directory of the node being used, as long as the correct path and file name are given in the mmauth add command in Step 7. In this example, the system administrator renames the key file to accessingCluster_id_rsa.pub.

  7. On owningCluster, the system administrator issues the mmauth add command to authorize accessingCluster to mount file systems that are owned by owningCluster, using the key file that was received from the administrator of accessingCluster:
    mmauth add accessingCluster -k accessingCluster_id_rsa.pub
  8. On owningCluster, the system administrator issues the mmauth grant command to authorize accessingCluster to mount specific file systems that are owned by owningCluster:
    mmauth grant accessingCluster -f gpfs
  9. On accessingCluster, the system administrator must define the cluster name, contact nodes, and public key for owningCluster:
    mmremotecluster add owningCluster -n node1,node2,node3 -k owningCluster_id_rsa.pub 

    This command gives accessingCluster the information that it needs to locate the serving cluster and mount its file systems.

  10. On accessingCluster, the system administrator issues one or more mmremotefs commands to identify the file systems in owningCluster that are to be accessed by nodes in accessingCluster:
    mmremotefs add mygpfs -f gpfs -C owningCluster -T /mygpfs 
    where:
    mygpfs
    Is the device name under which the file system is known in accessingCluster.
    gpfs
    Is the device name for the file system in owningCluster.
    owningCluster
    Is the name of owningCluster as given by the mmlscluster command on a node in owningCluster.
    /mygpfs
    Is the local mount point in accessingCluster.

  11. On accessingCluster, the system administrator enters the mmmount command to mount the file system (commands for verifying the completed setup follow this procedure):
    mmmount mygpfs 
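
After the mount completes, the setup can be verified from either side; the following commands are a minimal check, and the exact output varies by release and configuration. On owningCluster, mmauth show displays the authorization granted to accessingCluster:
    mmauth show accessingCluster
On accessingCluster, mmremotecluster show and mmremotefs show display the remote cluster and file system definitions, and mmlsmount reports where the file system is mounted:
    mmremotecluster show all
    mmremotefs show all
    mmlsmount mygpfs -L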
Table 1 summarizes the commands that the administrators of the two clusters need to issue so that the nodes in accessingCluster can mount the remote file system fs1, which is owned by owningCluster, assigning rfs1 as the local name with a mount point of /rfs1.
Table 1. Summary of commands to set up cross-cluster file system access

accessingCluster                                        owningCluster
----------------------------------------------------    ----------------------------------------------------
mmauth genkey new                                       mmauth genkey new
mmauth update . -l AUTHONLY                             mmauth update . -l AUTHONLY
Exchange public keys (file /var/mmfs/ssl/id_rsa.pub)    Exchange public keys (file /var/mmfs/ssl/id_rsa.pub)
mmremotecluster add owningCluster ...                   mmauth add accessingCluster ...
mmremotefs add rfs1 -f fs1 -C owningCluster -T /rfs1    mmauth grant accessingCluster -f fs1 ...
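
After these commands complete, a node in accessingCluster can mount the remote file system with the mmmount command, for example:
    mmmount rfs1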