mmcrcluster command
Creates a GPFS™ cluster from a set of nodes.
Synopsis
mmcrcluster -N {NodeDesc[,NodeDesc...] | NodeFile}
[--ccr-enable | {--ccr-disable -p PrimaryServer [-s SecondaryServer]}]
[ [-r RemoteShellCommand] [-R RemoteFileCopyCommand] |
--use-sudo-wrapper [--sudo-user UserName] ]
[-C ClusterName] [-U DomainName] [-A]
[-c ConfigFile | --profile ProfileName]
Availability
Available on all IBM Spectrum Scale™ editions.
Description
Use the mmcrcluster command to create a GPFS cluster.
Upon successful completion of the mmcrcluster command, the /var/mmfs/gen/mmsdrfs and the /var/mmfs/gen/mmfsNodeData files are created on each of the nodes in the cluster. Do not delete these files under any circumstances.
Follow these rules when creating your GPFS cluster:
- While a node may mount file systems from multiple clusters, the node itself may only be added to a single cluster using the mmcrcluster or mmaddnode command.
- The nodes must be available for the command to be successful. If any of the nodes listed are not available when the command is issued, a message listing those nodes is displayed. You must correct the problem on each node and issue the mmaddnode command to add those nodes.
- Designate at least one but not more than seven nodes as quorum nodes. The total number of quorum nodes that you define depends on whether you intend to use the node quorum with tiebreaker algorithm or the regular node-based quorum algorithm. For more information, see Quorum.
- After the nodes are added to the cluster, use the mmchlicense command to designate appropriate GPFS licenses to the new nodes.
- Clusters that will include both UNIX and Windows nodes must use ssh and scp for the remote shell and copy commands. For more information, see Installing and configuring OpenSSH on Windows nodes.
- Carefully consider the remote execution and remote copy tools that you want to use within your cluster. After a cluster is created, changing these tools is complicated, especially if additional nodes have been added. The default remote shell and remote file copy commands, described under -r RemoteShellCommand and -R RemoteFileCopyCommand, are /usr/bin/ssh and /usr/bin/scp respectively. For more information, see GPFS cluster creation considerations.
Parameters
- -N NodeDesc[,NodeDesc...] | NodeFile
- Specifies node descriptors, which provide information about nodes to be added to the cluster.
- NodeFile
- Specifies a file containing a list of node descriptors, one per line, to be added to the cluster.
- NodeDesc[,NodeDesc...]
- Specifies the list of nodes and node designations to be added to the GPFS cluster.
Node descriptors are defined as:
NodeName:NodeDesignations:AdminNodeName
where:
- NodeName
- Specifies the host name or IP address of the node for GPFS daemon-to-daemon communication. For hosts with multiple adapters, see the IBM Spectrum Scale: Administration Guide and search on Using remote access with public and private IP addresses. The host name or IP address must refer to the communication adapter over which the GPFS daemons communicate. Aliased interfaces are not allowed. Use the original address or a name that is resolved by the host command to that original address. You can specify a node using any of these forms:
- Short host name (for example, h135n01)
- Long, fully-qualified, host name (for example, h135n01.ibm.com)
- IP address (for example, 7.111.12.102). IPv6 addresses must be enclosed in brackets (for example, [2001:192::192:168:115:124]).
Regardless of which form you use, GPFS will resolve the input to a host name and an IP address and will store these in its configuration files. It is expected that those values will not change while the node belongs to the cluster.
- NodeDesignations
- An optional, "-" separated list of node roles:
- manager | client – Indicates whether a node is part of the node pool from which file system managers and token managers can be selected. The default is client.
- quorum | nonquorum – Indicates whether a node is counted as a quorum node. The default is nonquorum.
- AdminNodeName
- Specifies an optional field that consists of a node name to be used by the administration commands to communicate between nodes. If AdminNodeName is not specified, the NodeName value is used.
Note: AdminNodeName must be a resolvable network host name. For more information, see GPFS node adapter interface names.
You must provide a NodeDesc for each node to be added to the GPFS cluster.
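For example, a node descriptor file passed with -N might contain lines such as the following (the host names and the administration network name are placeholders used only for illustration):
node01:quorum-manager
node02:quorum
node03:client:node03-admin
The first two descriptors designate quorum nodes (node01 also as a manager node), and the third descriptor adds node03 as a nonquorum client node that uses the separate host name node03-admin for administration command traffic.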
- --ccr-enable
- Enables the configuration server repository (CCR), which stores
redundant copies of configuration data files on all quorum nodes.
All GPFS administration commands,
as well as file system mounts and daemon startups, work normally as
long as a majority of quorum nodes are accessible. This is the default.
The CCR operation requires the use of the GSKit toolkit for authenticating network connections. As such, the gpfs.gskit package, which is available on all Editions, should be installed.
- --ccr-disable
- Indicates that the traditional primary/backup server-based configuration
repository (non-CCR, earlier than GPFS 4.1)
is to be used.
When using this option you must also specify a primary configuration server (-p option). It is suggested that you also specify a secondary GPFS cluster configuration server (-s option) to prevent the loss of configuration data in the event your primary GPFS cluster configuration server goes down. When the GPFS daemon starts up, at least one of the two GPFS cluster configuration servers must be accessible.
If your primary GPFS cluster configuration server fails and you have not designated a secondary server, the GPFS cluster configuration files are inaccessible, and any GPFS administration commands that are issued fail. File system mounts or daemon startups also fail if no GPFS cluster configuration server is available.
You are strongly advised to designate the cluster configuration servers as quorum nodes.
- -p PrimaryServer
- Specifies the primary GPFS cluster configuration server node used to store the GPFS configuration data. This node must be a member of the GPFS cluster. This option is necessary only when --ccr-disable is specified.
- -s SecondaryServer
- Specifies the secondary GPFS cluster configuration server node used to store the GPFS cluster data. This node must be a member of the GPFS cluster. This option is necessary only when --ccr-disable is specified.
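For example, a cluster that uses the traditional primary/backup configuration repository could be created with a command similar to the following, where /u/admin/nodelist is a node descriptor file and the server names are placeholders:
mmcrcluster -N /u/admin/nodelist --ccr-disable -p nodeA -s nodeB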
- -r RemoteShellCommand
- Specifies the fully-qualified path name for the remote shell program
to be used by GPFS. The default
value is /usr/bin/ssh.
The remote shell command must adhere to the same syntax format as the ssh command, but may implement an alternate authentication mechanism.
- -R RemoteFileCopyCommand
- Specifies the fully-qualified path name for the remote file copy
program to be used by GPFS.
The default value is /usr/bin/scp.
The remote copy command must adhere to the same syntax format as the scp command, but may implement an alternate authentication mechanism.
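For example, to state the remote shell and remote file copy programs explicitly, you could issue a command similar to the following (the values shown are the defaults, and /u/admin/nodelist is a node descriptor file):
mmcrcluster -N /u/admin/nodelist -r /usr/bin/ssh -R /usr/bin/scp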
- --use-sudo-wrapper [--sudo-user UserName]
- Causes the nodes in the cluster to call the ssh and scp sudo wrapper scripts as the remote shell
program and the remote copy program. For more information, see Running IBM Spectrum Scale commands without remote root login.
- --sudo-user UserName
- Specifies a non-root admin user ID to be used when sudo wrappers are
enabled and a root-level background process calls an administration command directly instead of
through sudo. The GPFS daemon that
processes the administration command specifies this non-root user ID instead of the root ID when it
needs to run internal commands on other nodes. For more information, see
Root-level processes that call administration commands directly.
To disable this feature, specify the keyword DELETE instead of a user name, as in the following example:
mmchcluster --sudo-user DELETE
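For example, to create a cluster that uses the sudo wrapper scripts together with a non-root admin user ID, you could issue a command similar to the following (gpfsadmin is a placeholder user name and /u/admin/nodelist is a node descriptor file):
mmcrcluster -N /u/admin/nodelist --use-sudo-wrapper --sudo-user gpfsadmin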
- -C ClusterName
- Specifies a name for the cluster. If the user-provided name contains
dots, it is assumed to be a fully qualified domain name. Otherwise,
to make the cluster name unique, the domain of the primary configuration
server will be appended to the user-provided name.
If the -C flag is omitted, the cluster name defaults to the name of the primary GPFS cluster configuration server.
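For example, assuming the primary configuration server is in the domain kgn.ibm.com, specifying -C cluster1 results in the cluster name cluster1.kgn.ibm.com, while specifying -C cluster1.example.com leaves the name unchanged because it already contains dots. The names shown here are illustrative only.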
- -U DomainName
- Specifies the UID domain name for the cluster.
- -A
- Specifies that GPFS daemons are to be automatically started when nodes come up. The default is not to start daemons automatically.
- -c ConfigFile
- Specifies a file containing GPFS configuration
parameters with values different than the documented defaults. A sample
file can be found in /usr/lpp/mmfs/samples/mmfs.cfg.sample.
See the mmchconfig command for a detailed
description of the different configuration parameters.
The -c ConfigFile parameter should be used only by experienced administrators. Use this file to set up only those parameters that appear in the mmfs.cfg.sample file. Changes to any other values may be ignored by GPFS. When in doubt, use the mmchconfig command instead.
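For example, you could copy the sample file, edit the copy, and then pass it to mmcrcluster; the path of the copy shown here is arbitrary:
cp /usr/lpp/mmfs/samples/mmfs.cfg.sample /tmp/mmfs.cfg.custom
mmcrcluster -N /u/admin/nodelist -c /tmp/mmfs.cfg.custom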
- --profile ProfileName
- Specifies a predefined profile of attributes to be applied. System-defined profiles are located in /usr/lpp/mmfs/profiles/. All the configuration attributes listed under a cluster stanza will be changed as a result of this command.
- The following system-defined profile names are accepted:
- gpfsProtocolDefaults
- gpfsProtocolRandomIO
A user's profiles must be installed in /var/mmfs/etc/. The profile file specifies GPFS configuration parameters with values different than the documented defaults. A user-defined profile must not begin with the string 'gpfs' and must have the .profile suffix.
%cluster:
[CommaSeparatedNodesOrNodeClasses:]ClusterConfigurationAttribute=Value
...
%filesystem:
FilesystemConfigurationAttribute=Value
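For example, a user-defined profile installed as /var/mmfs/etc/custom.profile might look like the following sketch. The attribute names and values are illustrative only; see the mmchconfig command for the attributes that are valid in your environment:
%cluster:
pagepool=4G
%filesystem:
blockSize=1M
The profile name passed to --profile would then be custom, that is, the file name without the .profile suffix.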
See the mmchconfig command for a detailed description of the different configuration parameters. A sample file can be found in /usr/lpp/mmfs/samples/sample.profile.
Exit status
- 0
- Successful completion.
- nonzero
- A failure has occurred.
Security
You must have root authority to run the mmcrcluster command.
The node on which the command is issued must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. For more information, see Requirements for administering a GPFS file system.
Examples
To create a GPFS cluster made of all of the nodes listed in the file /u/admin/nodelist, using k164n05 as the primary server and k164n04 as the secondary server, issue:
mmcrcluster -N /u/admin/nodelist -p k164n05 -s k164n04
where /u/admin/nodelist contains:
k164n04.kgn.ibm.com:quorum
k164n05.kgn.ibm.com:quorum
k164n06.kgn.ibm.com
The system displays output similar to:
Mon May 10 10:59:09 EDT 2010: mmcrcluster: Processing node k164n04.kgn.ibm.com
Mon May 10 10:59:09 EDT 2010: mmcrcluster: Processing node k164n05.kgn.ibm.com
Mon May 10 10:59:09 EDT 2010: mmcrcluster: Processing node k164n06.kgn.ibm.com
mmcrcluster: Command successfully completed
mmcrcluster: Warning: Not all nodes have proper GPFS license designations.
    Use the mmchlicense command to designate licenses as needed.
To confirm the creation of the cluster, issue this command:
mmlscluster
The system displays information similar to:
GPFS cluster information
========================
GPFS cluster name: k164n05.kgn.ibm.com
GPFS cluster id: 680681562214606028
GPFS UID domain: k164n05.kgn.ibm.com
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
GPFS cluster configuration servers:
-----------------------------------
Primary server: k164n05.kgn.ibm.com
Secondary server: k164n04.kgn.ibm.com
Node Daemon node name IP address Admin node name Designation
---------------------------------------------------------------------
1 k164n04.kgn.ibm.com 198.117.68.68 k164n04.kgn.ibm.com quorum
2 k164n05.kgn.ibm.com 198.117.68.71 k164n05.kgn.ibm.com quorum
3 k164n06.kgn.ibm.com 198.117.68.70 k164n06.kgn.ibm.com