mmchnode command

Changes node attributes.

Synopsis

mmchnode change-options -N {Node[,Node...] | NodeFile | NodeClass} 
     [--cloud-gateway-nodeclass CloudGatewayNodeClass] 

or

mmchnode {-S Filename | --spec-file=Filename} 

Availability

Available on all IBM Spectrum Scale editions.

Description

Use the mmchnode command to change one or more attributes on a single node or on a set of nodes. If conflicting node designation attributes are specified for a given node, the last value is used. If any of the attributes represent a node-unique value, the -N option must resolve to a single node.

Do not use the mmchnode command to change the gateway node role while I/O is in progress on the fileset. Before you run mmchnode to change the gateway node role, run the flushpending command to flush any pending messages from the queues.

Parameters

-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes whose states are to be changed.

For general information on how to specify node names, see Specifying nodes as input to GPFS commands.

[--cloud-gateway-nodeclass CloudGatewayNodeClass]
Use this option together with the -N option to specify a node class for Transparent cloud tiering management: -N specifies the individual node names and --cloud-gateway-nodeclass specifies the node class. Both options are required together.
-S Filename | --spec-file=Filename
Specifies a file with a detailed description of the changes to be made. Each line represents the changes to an individual node and has the following format:
node-identifier change-options
change-options
A blank-separated list of attribute[=value] pairs. The following attributes can be specified:
--admin-interface={hostname | ip_address}
Specifies the name of the node to be used by GPFS administration commands when communicating between nodes. The admin node name must be specified as an IP address or a hostname that is resolved by the host command to the desired IP address. If the keyword DEFAULT is specified, the admin interface for the node is set to be equal to the daemon interface for the node.
--client
Specifies that the node should not be part of the pool of nodes from which cluster managers, file system managers, and token managers are selected.
--cloud-gateway-enable
Enables one or more nodes as Transparent cloud tiering nodes on the cluster based on the -N option parameters.
--cloud-gateway-disable
Disables one or more Transparent cloud tiering nodes from the cluster based on the -N option parameters. Only disable a Transparent cloud tiering node if you no longer need it to migrate or recall data from the configured cloud.
--ces-enable
Enables Cluster Export Services (CES) on the node.
--ces-disable
Disables CES on the node.
--ces-group=Group[,Group...]
Adds one or more groups to the specified nodes. Each group that is listed is added to all the specified nodes.
--noces-group=Group[,Group...]
Removes one or more groups from the specified nodes.
--cnfs-disable
Disables the CNFS functionality of a CNFS member node.
--cnfs-enable
Enables a previously disabled CNFS member node.
--cnfs-groupid=groupid
Specifies a failover recovery group for the node. If the keyword DEFAULT is specified, the CNFS recovery group for the node is set to zero.

For more information, see Implementing a clustered NFS environment on Linux.

--cnfs-interface=ip_address_list
A comma-separated list of host names or IP addresses to be used for GPFS cluster NFS serving.

The specified IP addresses can be real or virtual (aliased). These addresses must be configured to be static (not DHCP) and to not start at boot time.

The GPFS daemon interface for the node cannot be a part of the list of CNFS IP addresses.

If the keyword DEFAULT is specified, the CNFS IP address list is removed and the node is no longer considered a member of CNFS.

If adminMode central is in effect for the cluster, all CNFS member nodes must be able to execute remote commands without the need for a password.

For more information, see Implementing a clustered NFS environment on Linux.

--daemon-interface={hostname | ip_address}
Specifies the host name or IP address to be used by the GPFS daemons for node-to-node communication. The host name or IP address must refer to the communication adapter over which the GPFS daemons communicate. Alias interfaces are not allowed. Use the original address or a name that is resolved by the host command to the original address.

Before you specify this option, you must stop GPFS on all the nodes in the cluster. You cannot use the keyword DEFAULT with this option.

You cannot specify the --daemon-interface option for a quorum node if CCR is enabled. Temporarily change the node to a nonquorum node. Then run the mmchnode command with the --daemon-interface option against the nonquorum node. Finally, change the node back into a quorum node.
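The workaround above can be sketched as the following dry-run sequence. The node name and new daemon interface are illustrative placeholders, and each mmchnode call is echoed rather than executed; remember that GPFS must be stopped on all nodes before the --daemon-interface step:

```shell
# Dry-run sketch: change the daemon interface of a quorum node when CCR
# is enabled. NODE and NEW_IF are illustrative placeholders.
NODE="k145n04"
NEW_IF="k145n04n.kgn.ibm.com"
RUN="echo"   # set RUN="" to execute the commands for real

$RUN mmchnode --nonquorum -N "$NODE"                  # 1. demote to nonquorum
$RUN mmchnode --daemon-interface="$NEW_IF" -N "$NODE" # 2. change the interface
$RUN mmchnode --quorum -N "$NODE"                     # 3. promote back to quorum
```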

--gateway | --nogateway
Specifies whether the node is to be designated as a gateway node or not.
--manager | --nomanager
Designates the node as part of the pool of nodes from which file system managers and token managers are selected.
--nonquorum
Designates the node as a non-quorum node. If two or more quorum nodes are downgraded at the same time, GPFS must be stopped on all nodes in the cluster. GPFS does not have to be stopped if the nodes are downgraded one at a time.
--perfmon | --noperfmon
Specifies whether the node is to be designated as a perfmon node or not.
--nosnmp-agent
Stops the SNMP subagent and specifies that the node should no longer serve as an SNMP collector node. For more information, see GPFS SNMP support.
--quorum
Designates the node as a quorum node.
Note: If you are designating a node as a quorum node, and adminMode central is in effect for the cluster, you must ensure that GPFS is up and running on that node (mmgetstate reports the state of the node as active).
--snmp-agent
Designates the node as an SNMP collector node. If the GPFS daemon is active on this node, the SNMP subagent will be started as well. For more information, see GPFS SNMP support.
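As a sketch of the spec-file format that -S accepts (one node-identifier followed by its change-options per line), a file like the one used in Example 2 below could be generated like this; the node names are placeholders:

```shell
# Build a spec file: one "node-identifier change-options" entry per line.
SPEC=$(mktemp)
cat > "$SPEC" <<'EOF'
k145n04 --quorum --manager
k145n05 --quorum --manager
k145n06 --nonquorum
EOF
cat "$SPEC"              # three nodes, three lines
# mmchnode -S "$SPEC"    # then apply all changes in one invocation
```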

Exit status

0
Successful completion.
nonzero
A failure has occurred.

Security

You must have root authority to run the mmchnode command.

The node on which the command is issued must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. For more information, see Requirements for administering a GPFS file system.
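The passwordless remote shell requirement can be checked with a loop like the following sketch. The node names are illustrative placeholders, and the leading echo makes this a dry run that only prints each probe command; remove the echo to actually test access to your nodes:

```shell
# Dry-run sketch: verify passwordless remote shell to every node.
# NODES is an illustrative placeholder list.
NODES="k145n04 k145n05 k145n06"
RSH="echo ssh -o BatchMode=yes -o ConnectTimeout=5"  # drop "echo" to probe for real
for n in $NODES; do
    $RSH "$n" true   # BatchMode=yes fails fast instead of prompting for a password
done
```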

Examples

  1. To change nodes k145n04 and k145n05 to be both quorum and manager nodes, issue this command:
    mmchnode --quorum --manager -N k145n04,k145n05
    The system displays information similar to:
    Wed May 16 04:50:24 EDT 2007: mmchnode: Processing node k145n04.kgn.ibm.com
    Wed May 16 04:50:24 EDT 2007: mmchnode: Processing node k145n05.kgn.ibm.com
    mmchnode: Propagating the cluster configuration data to all
      affected nodes.  This is an asynchronous process.
    After completion, mmlscluster displays information similar to:
    GPFS cluster information
    ========================
    GPFS cluster name: mynodes.kgn.ibm.com
    GPFS cluster id: 680681553700098206
    GPFS UID domain: mynodes.kgn.ibm.com
    Remote shell command: /usr/bin/ssh
    Remote file copy command: /usr/bin/scp
    
    GPFS cluster configuration servers:
    -----------------------------------
    Primary server: k145n04.kgn.ibm.com
    Secondary server: k145n06.kgn.ibm.com
    
    Node Daemon node name  IP address  Admin node name     Designation
    ---------------------------------------------------------------------
    1  k145n04.kgn.ibm.com 9.114.68.68 k145n04.kgn.ibm.com quorum-manager
    2  k145n05.kgn.ibm.com 9.114.68.69 k145n05.kgn.ibm.com quorum-manager
    3  k145n06.kgn.ibm.com 9.114.68.70 k145n06.kgn.ibm.com
  2. To change nodes k145n04 and k145n05 to be both quorum and manager nodes, and node k145n06 to be a non-quorum node, issue this command:
    mmchnode -S /tmp/specFile
    Where the contents of /tmp/specFile are:
    k145n04 --quorum --manager
    k145n05 --quorum --manager
    k145n06 --nonquorum
    The system displays information similar to:
    Wed May 16 05:23:31 EDT 2007: mmchnode: Processing node k145n04
    Wed May 16 05:23:32 EDT 2007: mmchnode: Processing node k145n05
    Wed May 16 05:23:32 EDT 2007: mmchnode: Processing node k145n06
    Verifying GPFS is stopped on all nodes ...
    mmchnode: Propagating the cluster configuration data to all
      affected nodes.  This is an asynchronous process.
    And mmlscluster displays information similar to:
    GPFS cluster information
    ========================
    GPFS cluster name: mynodes.kgn.ibm.com
    GPFS cluster id: 680681553700098206
    GPFS UID domain: mynodes.kgn.ibm.com
    Remote shell command: /usr/bin/rsh
    Remote file copy command: /usr/bin/rcp
    
    GPFS cluster configuration servers:
    -----------------------------------
    Primary server: k145n04.kgn.ibm.com
    Secondary server: k145n06.kgn.ibm.com
    
    Node Daemon node name   IP address   Admin node name     Designation
    -----------------------------------------------------------------------
    1  k145n04.kgn.ibm.com  9.114.68.68  k145n04.kgn.ibm.com quorum-manager
    2  k145n05.kgn.ibm.com  9.114.68.69  k145n05.kgn.ibm.com quorum-manager
    3  k145n06.kgn.ibm.com  9.114.68.70  k145n06.kgn.ibm.com
  3. To enable all the nodes specified in the node class TCTNodeClass1 as Transparent cloud tiering nodes, issue this command:
    mmchnode --cloud-gateway-enable -N TCTNodeClass1
    The system displays output similar to this:
    Wed May 11 12:51:37 EDT 2016: mmchnode: Processing node c350f2u18
    mmchnode: Verifying media for Transparent Cloud Tiering nodes...
    mmchnode: node c350f2u18 media checks passed.
    
    Wed May 11 12:51:38 EDT 2016: mmchnode: Processing node c350f2u22.pk.labs.ibm.com
    mmchnode: node c350f2u22.pok.stglabs.ibm.com media checks passed.
    
    Wed May 11 12:51:41 EDT 2016: mmchnode: Processing node c350f2u26.pk.labs.ibm.com
    mmchnode: node c350f2u26.pok.stglabs.ibm.com media checks passed.
    
    mmchnode: Propagating the cluster configuration data to all
    affected nodes.  This is an asynchronous process.
    You can verify the Transparent cloud tiering nodes by issuing this command:
    mmcloudgateway node list
  4. To designate only a few nodes (node1 and node2) in the node class, TCTNodeClass1, as Transparent cloud tiering server nodes, issue this command:
    mmchnode --cloud-gateway-enable -N node1,node2 --cloud-gateway-nodeclass TCTNodeClass1
    Note: It only designates node1 and node2 as Transparent cloud tiering server nodes from the node class, TCTNodeClass1. Administrators can continue to use the node class for other purposes.
  5. To disable all Transparent cloud tiering nodes from the node class, TCTNodeClass1, issue this command:
    mmchnode --cloud-gateway-disable -N TCTNodeClass1
    The system displays output similar to this:
    Thu May 12 16:10:11 EDT 2016: mmchnode: Processing node c350f2u18
    mmchnode: Verifying Transparent Cloud Tiering node c350f2u18 can be disabled...
    mmchnode: Node c350f2u18 passed disable checks.
    
    Thu May 12 16:10:11 EDT 2016: mmchnode: Processing node c350f2u22.pk.labs.ibm.com
    mmchnode: Verifying Transparent Cloud Tiering node c350f2u22.pk.labs.ibm.com can be disabled...
    mmchnode: Node c350f2u22.pok.stglabs.ibm.com passed disable checks.
    
    Thu May 12 16:10:14 EDT 2016: mmchnode: Processing node c350f2u26.pk.labs.ibm.com
    mmchnode: Verifying Transparent Cloud Tiering node c350f2u26.pk.labs.ibm.com can be disabled...
    mmchnode: Node c350f2u26.pk.labs.ibm.com passed disable checks.
    
    mmchnode: Propagating the cluster configuration data to all
    affected nodes.  This is an asynchronous process.
  6. To disable only a few nodes (node1 and node2) from the node class, TCTNodeClass1, as Transparent cloud tiering server nodes, issue this command:
    mmchnode --cloud-gateway-disable -N node1,node2 --cloud-gateway-nodeclass TCTNodeClass1
    Note: It only disables node1 and node2 as Transparent cloud tiering server nodes from the node class, TCTNodeClass1.
  7. To add groups to specified nodes, issue the mmchnode --ces-group command. For example:
    mmchnode --ces-group g1,g2 -N 2,3 
    Note: This command adds groups g1 and g2 to both nodes 2 and 3. Run the mmces node list command to view the group allocation:
    [root@cluster-12 ~]# mmces node list
    Node   Name                      Node Groups   Node Flags   
    ------ ------------------------- ------------- ------------ 
    2      cluster-12.localnet.com   g1,g2         Suspended    
    3      cluster-13.localnet.com   g1,g2         none         
    4      cluster-14.localnet.com                 none 
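The group allocation shown in the mmces node list output above can be extracted with a short filter; this is a sketch that pastes the sample table in as a heredoc. Note that only rows with a non-empty Node Groups column parse reliably with whitespace splitting (a node with no groups, like node 4 above, would shift the columns):

```shell
# Print the CES groups of one node, given `mmces node list`-style output
# on stdin. The node id is matched against the first column.
groups_for_node() {
    awk -v id="$1" '$1 == id { print $3 }'
}

groups_for_node 3 <<'EOF'
Node   Name                      Node Groups   Node Flags
------ ------------------------- ------------- ------------
2      cluster-12.localnet.com   g1,g2         Suspended
3      cluster-13.localnet.com   g1,g2         none
EOF
```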

Location

/usr/lpp/mmfs/bin