mmcrcluster command
Creates a GPFS cluster from a set of nodes.
Synopsis
mmcrcluster -N {NodeDesc[,NodeDesc...] | NodeFile}
[ [-r RemoteShellCommand] [-R RemoteFileCopyCommand] |
--use-sudo-wrapper [--sudo-user UserName] ]
[-C ClusterName] [-U DomainName] [-A]
[-c ConfigFile | --profile ProfileName] [--port PortNumber]
Availability
Available on all IBM Storage Scale editions.
Description
Use the mmcrcluster command to create a GPFS cluster.
Upon successful completion of the mmcrcluster command, the /var/mmfs/gen/mmsdrfs and the /var/mmfs/gen/mmfsNodeData files are created on each of the nodes in the cluster. Do not delete these files under any circumstances. For more information, see Quorum.
Follow these rules when creating your GPFS cluster:
- While a node may mount file systems from multiple clusters, the node itself may only be added to a single cluster using the mmcrcluster or mmaddnode command.
- The nodes must be available for the command to be successful. If any of the nodes listed are not available when the command is issued, a message listing those nodes is displayed. You must correct the problem on each node and issue the mmaddnode command to add those nodes.
- Designate at least one but not more than eight nodes as quorum nodes. The total number of quorum nodes that you need depends on whether you intend to use the node quorum with tiebreaker algorithm or the regular node-based quorum algorithm. For more information, see Quorum.
- After the nodes are added to the cluster, use the mmchlicense command to designate appropriate GPFS licenses to the new nodes.
- Clusters that will include both UNIX and Windows nodes must use ssh and scp for the remote shell and copy commands. For more information, see Installing and configuring OpenSSH on Windows nodes.
- Carefully consider the remote execution and remote copy tooling you want to use within your cluster. After a cluster has been created, these settings are complicated to change, especially if additional nodes are added. The defaults for -r RemoteShellCommand and -R RemoteFileCopyCommand are /usr/bin/ssh and /usr/bin/scp, respectively. For more information, see GPFS cluster creation considerations.
- A cluster is created with the sdrNotifyAuthEnabled parameter set to yes. This parameter specifies whether to authenticate the notify RPCs related to deadlock detection and amelioration, node overload, and node expel. System administrators can use the mmchconfig command to change the value of the sdrNotifyAuthEnabled parameter to no. For more details about the sdrNotifyAuthEnabled parameter, see mmchconfig command.
- A cluster is created with the tscCmdAllowRemoteConnections parameter set to no. This parameter specifies whether the ts* commands in /usr/lpp/mmfs/bin (which are used by the mm* commands) can use the remote TCP/IP connections when communicating with the local or other mmfsd daemons. The system administrators can use the mmchconfig command to set the value of the tscCmdAllowRemoteConnections parameter to yes. For more details about the tscCmdAllowRemoteConnections parameter, see mmchconfig command.
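For example, the two parameters described in the preceding rules can be changed after cluster creation by issuing commands such as the following (a minimal sketch of the mmchconfig usage that the rules describe):
mmchconfig sdrNotifyAuthEnabled=no
mmchconfig tscCmdAllowRemoteConnections=yes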
Parameters
- -N NodeDesc[,NodeDesc...] | NodeFile
- Specifies node descriptors, which provide information
about nodes to be added to the cluster.
- NodeFile
- Specifies a file containing a list of node descriptors, one per line, to be added to the cluster.
- NodeDesc[,NodeDesc...]
- Specifies the list of nodes and node designations to be
added to the GPFS cluster. Node descriptors
are defined as:
NodeName:NodeDesignations:AdminNodeName:NodeComment
where:
- NodeName
- Specifies the host name or IP address of the node for GPFS daemon-to-daemon communication. For hosts with multiple adapters, see the IBM Storage Scale: Administration Guide and search on Using remote access with public and private IP addresses. The host name or IP address must refer to the communication adapter over which the GPFS daemons communicate. Aliased interfaces are not allowed. Use the original address or a name that is resolved by the host command to that original address. You can specify a node using any of these forms:
- Short host name (for example, h135n01)
- Long, fully-qualified, host name (for example, h135n01.ibm.com)
- IP address (for example, 7.111.12.102). IPv6 addresses must be enclosed in brackets (for example, [2001:192::192:168:115:124]).
Regardless of which form you use, GPFS will resolve the input to a host name and an IP address and will store these in its configuration files. It is expected that those values will not change while the node belongs to the cluster.
- NodeDesignations
- An optional, "-" separated list of node roles:
- manager | client – Indicates whether a node is part of the node pool from which file system managers and token managers can be selected. The default is client.
- quorum | nonquorum – Indicates whether a node is counted as a quorum node. The default is nonquorum.
- AdminNodeName
- Specifies an optional field that consists of a node name to be used by the administration
commands to communicate between nodes. If AdminNodeName is not specified,
the NodeName value is used.
Note: AdminNodeName must be a resolvable network host name. For more information, see GPFS node adapter interface names.
You must provide a NodeDesc for each node to be added to the GPFS cluster.
- NodeComment
- Specifies an optional field that provides additional information on the node. The comment field
might also be included in the NodeFile file as a parameter.
Note:
- Comments can only contain the following characters: 0-9 A-Z a-z (space) # - . @ _
- While running a command with the -N option or using a NodeFile attribute to provide multiple node descriptors, all comments that include one or more spaces must be enclosed within single quotation marks (') or double quotation marks ("). Always use quotation marks with comments to avoid possible parsing errors.
- Comments must not begin with a hyphen (-).
- Comments cannot be longer than 32 characters.
- Comments that include spaces at the beginning or end will have the spaces trimmed.
The comment field adds new data that is stored in the cluster configuration file and can be read and configured only on nodes running IBM Storage Scale 5.1.4 or later.
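For example, the following node descriptors (host names taken from the Examples section later in this topic) designate a quorum node that can also be selected as a manager and that carries a comment, and a second node that takes all of the defaults:
k164n04.kgn.ibm.com:quorum-manager:k164n04.kgn.ibm.com:"Bld 1. RM 205. Row 7a."
k164n06.kgn.ibm.com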
- -r RemoteShellCommand
- Specifies the fully-qualified path name for the remote shell program
to be used by GPFS. The default
value is /usr/bin/ssh.
The remote shell command must adhere to the same syntax format as the ssh command, but may implement an alternate authentication mechanism.
- -R RemoteFileCopyCommand
- Specifies the fully-qualified path name for the remote file copy
program to be used by GPFS.
The default value is /usr/bin/scp.
The remote copy command must adhere to the same syntax format as the scp command, but may implement an alternate authentication mechanism.
- --use-sudo-wrapper [--sudo-user UserName]
- Causes the nodes in the cluster to call the ssh and scp sudo wrapper scripts as the remote shell
program and the remote copy program. For more information, see Running IBM Storage Scale commands without remote root login.
- --sudo-user UserName
- Specifies a non-root admin user ID to be used when sudo wrappers are
enabled and a root-level background process calls an administration command directly instead of
through sudo. The GPFS
daemon that processes the administration command specifies this non-root user ID instead of the root
ID when it needs to run internal commands on other nodes. For more
information, see Root-level processes that call administration commands directly.
To disable this feature, specify the keyword DELETE instead of a user name, as in the following example:
mmchcluster --sudo-user DELETE
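For example (a sketch in which gpfsadmin is a hypothetical non-root administrative user and /u/admin/nodelist is a hypothetical node file), sudo wrappers can be enabled when the cluster is created:
mmcrcluster -N /u/admin/nodelist --use-sudo-wrapper --sudo-user gpfsadmin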
- -C ClusterName
- Specifies a name for the cluster. If the user-provided name
contains dots then the command assumes that the user-provided name is a fully qualified domain name.
Otherwise, to make the cluster name unique, the command appends the domain of a quorum node to the
user-provided name. The maximum length of the cluster name including any appended domain name is 115
characters.
If the -C flag is omitted, the cluster name defaults to the name of a quorum node within the cluster definition.
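For example (a sketch; cluster1 is a hypothetical name), the following command creates a cluster with the user-provided name cluster1. Because the name contains no dots, the domain of a quorum node is appended to it to make the cluster name unique:
mmcrcluster -N /u/admin/nodelist -C cluster1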
- -U DomainName
- Specifies the UID domain name for the cluster.
- -A
- Specifies that GPFS daemons are to be automatically started when nodes come up. The default is not to start daemons automatically.
- -c ConfigFile
- Specifies a file containing GPFS configuration
parameters with values different than the documented defaults. A sample
file can be found in /usr/lpp/mmfs/samples/mmfs.cfg.sample.
See the mmchconfig command for a detailed
description of the different configuration parameters.
The -c ConfigFile parameter should be used only by experienced administrators. Use this file to set up only those parameters that appear in the mmfs.cfg.sample file. Changes to any other values may be ignored by GPFS. When in doubt, use the mmchconfig command instead.
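For example (a sketch; the file path is hypothetical), an administrator could copy /usr/lpp/mmfs/samples/mmfs.cfg.sample, change only the parameters that appear in it, and pass the result at cluster creation:
mmcrcluster -N /u/admin/nodelist -c /u/admin/mmfs.cfg.custom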
- --profile ProfileName
- Specifies a predefined profile of attributes to be applied. System-defined profiles are located in /usr/lpp/mmfs/profiles/. All the configuration attributes listed under a cluster stanza are changed as a result of this command. A profile file has the following format:
%cluster:
[CommaSeparatedNodesOrNodeClasses:]ClusterConfigurationAttribute=Value
...
%filesystem:
FilesystemConfigurationAttribute=Value
See the mmchconfig command for a detailed description of the different configuration parameters. A sample file can be found in /usr/lpp/mmfs/samples/sample.profile.
- --port PortNumber
- Specifies the IBM Storage Scale communication port number. The default port number is 1191.
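For example (a sketch; MyProfile is a hypothetical profile name and 1192 is a hypothetical non-default port), a cluster can be created with a predefined profile and a specific communication port:
mmcrcluster -N /u/admin/nodelist --profile MyProfile --port 1192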
Exit status
- 0
- Successful completion.
- nonzero
- A failure has occurred.
Security
You must have root authority to run the mmcrcluster command.
The node on which the command is issued must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. For more information, see Requirements for administering a GPFS file system.
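For example (a quick check, assuming the default /usr/bin/ssh remote shell and the node k164n05.kgn.ibm.com from the examples below), the following command should print the date from the remote node without prompting for a password and without producing any extra output:
ssh k164n05.kgn.ibm.com date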
Examples
- To create a GPFS cluster made of all of the nodes that are listed in the file /u/admin/nodelist, issue the following command:
mmcrcluster -N /u/admin/nodelist
where the file /u/admin/nodelist has the following contents:
k164n04.kgn.ibm.com:quorum:k164n04.kgn.ibm.com:"Bld 1. RM 205. Row 7a."
k164n05.kgn.ibm.com:quorum
k164n06.kgn.ibm.com
The command displays output as in the following example:
Mon May 10 10:59:09 EDT 2010: mmcrcluster: Processing node k164n04.kgn.ibm.com
Mon May 10 10:59:09 EDT 2010: mmcrcluster: Processing node k164n05.kgn.ibm.com
Mon May 10 10:59:09 EDT 2010: mmcrcluster: Processing node k164n06.kgn.ibm.com
mmcrcluster: Command successfully completed
mmcrcluster: Warning: Not all nodes have proper GPFS license designations.
    Use the mmchlicense command to designate licenses as needed.
To confirm the creation, issue the following command:
mmlscluster
The command displays information as in the following example:
GPFS cluster information
========================
  GPFS cluster name:         k164n05.kgn.ibm.com
  GPFS cluster id:           680681562214606028
  GPFS UID domain:           k164n05.kgn.ibm.com
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name     IP address     Admin node name      Designation
-----------------------------------------------------------------------------
   1   k164n04.kgn.ibm.com  198.117.68.68  k164n04.kgn.ibm.com  quorum
   2   k164n05.kgn.ibm.com  198.117.68.71  k164n05.kgn.ibm.com  quorum
   3   k164n06.kgn.ibm.com  198.117.68.70  k164n06.kgn.ibm.com
- To view comments, issue the following
command:
mmlscluster --comment
The command displays output as in the following example:
GPFS cluster information
========================
  GPFS cluster name: k164n05.kgn.ibm.com
  GPFS cluster id:   680681562214606028

 Node  Daemon node name      Comment
--------------------------------------------
   1   k164n04.kgn.ibm.com   Bld 1. RM 205. Row 7a.
   2   k164n05.kgn.ibm.com
   3   k164n06.kgn.ibm.com
Location
/usr/lpp/mmfs/bin