Starting and stopping GPFS

You can use the mmstartup and mmshutdown commands to start and stop GPFS on new or existing clusters.

For new GPFS clusters, see Steps for establishing and starting your IBM Storage Scale cluster.

For existing GPFS clusters, before starting GPFS, ensure that you have:
  1. Verified the installation of all prerequisite software.
  2. Compiled the GPL layer, if Linux® is being used.
    Tip: You can configure a cluster to rebuild the GPL automatically whenever a new level of the Linux kernel is installed or whenever a new level of IBM Storage Scale is installed. This feature is available only on the Linux operating system. For more information, see the description of the autoBuildGPL attribute in the topic mmchconfig command.
  3. Properly configured and tuned your system for use by GPFS. For more information, see Configuring and tuning your system for GPFS.
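On Linux, the GPL layer for step 2 can be built manually, or the cluster can be configured to rebuild it automatically as described above. A sketch of both approaches (consult the mmbuildgpl and mmchconfig command topics for the full option syntax):

```shell
# Build the GPFS portability layer (GPL) against the currently running
# kernel on this node (Linux only)
mmbuildgpl

# Or configure the cluster to rebuild the GPL automatically whenever a
# new kernel level or a new IBM Storage Scale level is installed
mmchconfig autoBuildGPL=yes
```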
Start the daemons on all of the nodes in the cluster by issuing the mmstartup -a command:
mmstartup -a
The output is similar to this:
Thu Nov 26 06:35:49 MST 2020: mmstartup: Starting GPFS ...
Check the messages recorded in /var/adm/ras/mmfs.log.latest on one node for verification. Look for messages similar to this:
2020-11-26_06:36:13.534-0700: [N] mmfsd ready

This message indicates that quorum has been formed, the node has successfully joined the cluster, and the node is ready to mount file systems.
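In addition to checking the log, you can confirm the daemon state across the cluster with the mmgetstate command; the node names and counts in your output will differ:

```shell
# Show the GPFS daemon state on every node in the cluster;
# each node should report "active" once startup completes
mmgetstate -a

# Add quorum information and a cluster-wide state summary
mmgetstate -a -L -s
```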

If GPFS does not start, see GPFS daemon does not come up.

For more information, see mmstartup command.

If it becomes necessary to stop GPFS on all nodes, you can do so from the command line by issuing the mmshutdown -a command:
mmshutdown -a
The system displays information similar to:
Thu Nov 26 06:32:43 MST 2020: mmshutdown: Starting force unmount of GPFS file systems
Thu Nov 26 06:32:48 MST 2020: mmshutdown: Shutting down GPFS daemons
Thu Nov 26 06:32:59 MST 2020: mmshutdown: Finished

For more information, see mmshutdown command.
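GPFS does not have to be stopped cluster-wide; mmshutdown also accepts a node list. A sketch, with placeholder node names:

```shell
# Stop GPFS only on the node where the command is issued
mmshutdown

# Stop GPFS on specific nodes (node1 and node2 are hypothetical names)
mmshutdown -N node1,node2
```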

Note: Before you shut down the current cluster manager node, run the mmchmgr command to move the cluster manager role to another node to avoid unexpected I/O interruption.
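For example, you can identify the current cluster manager and move the role before shutting that node down; node2 below is a placeholder for a node in your cluster:

```shell
# Display the node that is currently the cluster manager
mmlsmgr -c

# Move the cluster manager role to another node before shutting
# down the current cluster manager
mmchmgr -c node2
```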