mmchnode command

Changes node attributes.

Synopsis

mmchnode change-options -N {Node[,Node...] | NodeFile | NodeClass} 
     [--cloud-gateway-nodeclass CloudGatewayNodeClass] 

or

mmchnode {-S Filename | --spec-file=Filename} 

Availability

Available on all IBM Spectrum Scale editions.

Description

Use the mmchnode command to change one or more attributes on a single node or on a set of nodes. If conflicting node designation attributes are specified for a given node, the last value is used. If any of the attributes represent a node-unique value, the -N option must resolve to a single node.

Do not use the mmchnode command to change the gateway node role while I/O is in progress on the fileset. Before you run mmchnode to change the gateway node role, run the flushPending command to flush any pending messages from the queues.
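The sequence above can be sketched as a short session. The file system name fs1, fileset name fileset1, and gateway node name gwnode1 are illustrative assumptions; flushPending is typically issued through the mmafmctl command:

```
# mmafmctl fs1 flushPending -j fileset1
# mmchnode --nogateway -N gwnode1
```

Change the gateway designation only after flushPending returns and the queues are drained.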

Parameters

-N {Node[,Node...] | NodeFile | NodeClass}
Specifies the nodes whose states are to be changed.

For general information on how to specify node names, see Specifying nodes as input to GPFS commands.

[--cloud-gateway-nodeclass CloudGatewayNodeClass]
Use this option together with the -N option to specify a node class for Transparent cloud tiering management; -N specifies the individual node names. Both -N with a node list and --cloud-gateway-nodeclass with a node class are required.
-S Filename | --spec-file=Filename
Specifies a file with a detailed description of the changes to be made. Each line represents the changes to an individual node and has the following format:
node-identifier change-options
change-options
A blank-separated list of attribute[=value] pairs. The following attributes can be specified:
--admin-interface={hostname | ip_address}
Specifies the name of the node to be used by GPFS administration commands when communicating between nodes. The admin node name must be specified as an IP address or a hostname that is resolved by the host command to the desired IP address. If the keyword DEFAULT is specified, the admin interface for the node is set to be equal to the daemon interface for the node.
--client
Specifies that the node should not be part of the pool of nodes from which cluster managers, file system managers, and token managers are selected.
--cloud-gateway-enable
Enables one or more nodes as Transparent cloud tiering nodes on the cluster based on the -N option parameters.
--cloud-gateway-disable
Disables one or more Transparent cloud tiering nodes from the cluster based on the -N option parameters. Only disable a Transparent cloud tiering node if you no longer need it to migrate or recall data from the configured cloud.
--ces-enable
Enables Cluster Export Services (CES) on the node.
--ces-disable
Disables CES on the node.
--ces-group=Group[,Group...]
Adds one or more groups to the specified nodes. Each group that is listed is added to all the specified nodes.
--noces-group=Group[,Group...]
Removes one or more groups from the specified nodes.
--cnfs-disable
Disables the CNFS functionality of a CNFS member node.
--cnfs-enable
Enables a previously disabled CNFS member node.
--cnfs-groupid=groupid
Specifies a failover recovery group for the node. If the keyword DEFAULT is specified, the CNFS recovery group for the node is set to zero.

For more information, see Implementing a clustered NFS environment on Linux.

--cnfs-interface=ip_address_list
A comma-separated list of host names or IP addresses to be used for GPFS cluster NFS serving.

The specified IP addresses can be real or virtual (aliased). These addresses must be configured to be static (not DHCP) and must not be brought up at boot time.

The GPFS daemon interface for the node cannot be a part of the list of CNFS IP addresses.

If the keyword DEFAULT is specified, the CNFS IP address list is removed and the node is no longer considered a member of CNFS.

If adminMode central is in effect for the cluster, all CNFS member nodes must be able to execute remote commands without the need for a password.

For more information, see Implementing a clustered NFS environment on Linux.

--daemon-interface={hostname | ip_address}
Specifies the host name or IP address to be used by the GPFS daemons for node-to-node communication. The host name or IP address must refer to the communication adapter over which the GPFS daemons communicate. Alias interfaces are not allowed. Use the original address or a name that is resolved by the host command to the original address. You cannot set --daemon-interface=DEFAULT.

You must stop all the nodes in the cluster with mmshutdown before you set this attribute. This requirement holds true even if you are changing only one node.

See examples 8 and 9 at the end of this topic.

If the minimum release level of the cluster is IBM Spectrum Scale 5.1.0 or later, the following features are available:
  • You can specify the --daemon-interface option for a quorum node even when CCR is enabled. For earlier versions of IBM Spectrum Scale, temporarily change the quorum node to a nonquorum node, issue the mmchnode command with the --daemon-interface option for the nonquorum node, and change the nonquorum node back to a quorum node.
  • You can change the IP addresses or host names of cluster nodes when a node quorum is not available. For more information, see Changing IP addresses or host names of cluster nodes.
--gateway | --nogateway
Specifies whether the node is to be designated as a gateway node or not.
--manager | --nomanager
Designates the node as part of the pool of nodes from which file system managers and token managers are selected.
--nonquorum
Designates the node as a non-quorum node. If two or more quorum nodes are downgraded at the same time, GPFS must be stopped on all nodes in the cluster. GPFS does not have to be stopped if the nodes are downgraded one at a time.
--perfmon | --noperfmon
Specifies whether the node is to be designated as a perfmon node or not.
--nosnmp-agent
Stops the SNMP subagent and specifies that the node should no longer serve as an SNMP collector node. For more information, see GPFS SNMP support.
--quorum
Designates the node as a quorum node.
Note: If you are designating a node as a quorum node, and adminMode central is in effect for the cluster, you must ensure that GPFS is up and running on that node (mmgetstate reports the state of the node as active).
--snmp-agent
Designates the node as an SNMP collector node. If the GPFS daemon is active on this node, the SNMP subagent will be started as well. For more information, see GPFS SNMP support.
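The spec file accepted by the -S option can be sketched as follows. This is a minimal illustration; the node names and the path /tmp/specFile are assumptions, and applying the file requires a live cluster:

```shell
# Build an illustrative spec file; node names and path are assumptions.
# Each line has the format: node-identifier change-options
cat > /tmp/specFile <<'EOF'
node1 --quorum --manager
node2 --quorum --manager
node3 --nonquorum
EOF
# The file would then be applied with: mmchnode -S /tmp/specFile
wc -l < /tmp/specFile   # one node per line
```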

Exit status

0
Successful completion.
nonzero
A failure has occurred.

Security

You must have root authority to run the mmchnode command.

The node on which the command is issued must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. For more information, see Requirements for administering a GPFS file system.

Examples

  1. To change nodes k145n04 and k145n05 to be both quorum and manager nodes, issue the following command:
    # mmchnode --quorum --manager -N k145n04,k145n05
    A sample output is as follows:
    Wed May 16 04:50:24 EDT 2007: mmchnode: Processing node k145n04.kgn.ibm.com
    Wed May 16 04:50:24 EDT 2007: mmchnode: Processing node k145n05.kgn.ibm.com
    mmchnode: Propagating the cluster configuration data to all
      affected nodes.  This is an asynchronous process.
    After completion, mmlscluster displays information similar to:
    GPFS cluster information
    ========================
    GPFS cluster name: mynodes.kgn.ibm.com
    GPFS cluster id: 680681553700098206
    GPFS UID domain: mynodes.kgn.ibm.com
    Remote shell command: /usr/bin/ssh
    Remote file copy command: /usr/bin/scp
    
    GPFS cluster configuration servers:
    -----------------------------------
    Primary server: k145n04.kgn.ibm.com
    Secondary server: k145n06.kgn.ibm.com
    
    Node Daemon node name  IP address  Admin node name     Designation
    ---------------------------------------------------------------------
    1  k145n04.kgn.ibm.com 9.114.68.68 k145n04.kgn.ibm.com quorum-manager
    2  k145n05.kgn.ibm.com 9.114.68.69 k145n05.kgn.ibm.com quorum-manager
    3  k145n06.kgn.ibm.com 9.114.68.70 k145n06.kgn.ibm.com
  2. To change nodes k145n04 and k145n05 to be both quorum and manager nodes, and node k145n06 to be a non-quorum node, issue the following command:
    # mmchnode -S /tmp/specFile
    Where the contents of /tmp/specFile are:
    k145n04 --quorum --manager
    k145n05 --quorum --manager
    k145n06 --nonquorum
    
    A sample output is as follows:
    Wed May 16 05:23:31 EDT 2007: mmchnode: Processing node k145n04
    Wed May 16 05:23:32 EDT 2007: mmchnode: Processing node k145n05
    Wed May 16 05:23:32 EDT 2007: mmchnode: Processing node k145n06
    Verifying GPFS is stopped on all nodes ...
    mmchnode: Propagating the cluster configuration data to all
      affected nodes.  This is an asynchronous process.
    
    After completion, mmlscluster displays information similar to:
    GPFS cluster information
    ========================
    GPFS cluster name: mynodes.kgn.ibm.com
    GPFS cluster id: 680681553700098206
    GPFS UID domain: mynodes.kgn.ibm.com
    Remote shell command: /usr/bin/rsh
    Remote file copy command: /usr/bin/rcp
    
    GPFS cluster configuration servers:
    -----------------------------------
    Primary server: k145n04.kgn.ibm.com
    Secondary server: k145n06.kgn.ibm.com
    
    Node Daemon node name   IP address   Admin node name     Designation
    -----------------------------------------------------------------------
    1  k145n04.kgn.ibm.com  9.114.68.68  k145n04.kgn.ibm.com quorum-manager
    2  k145n05.kgn.ibm.com  9.114.68.69  k145n05.kgn.ibm.com quorum-manager
    3  k145n06.kgn.ibm.com  9.114.68.70  k145n06.kgn.ibm.com
  3. To enable all the nodes specified in the node class TCTNodeClass1 as Transparent cloud tiering nodes, issue the following command:
    # mmchnode --cloud-gateway-enable -N TCTNodeClass1
    A sample output is as follows:
    
    Wed May 11 12:51:37 EDT 2016: mmchnode: Processing node c350f2u18
    mmchnode: Verifying media for Transparent Cloud Tiering nodes...
    mmchnode: node c350f2u18 media checks passed.
    
    Wed May 11 12:51:38 EDT 2016: mmchnode: Processing node c350f2u22.pk.labs.ibm.com
    mmchnode: node c350f2u22.pok.stglabs.ibm.com media checks passed.
    
    Wed May 11 12:51:41 EDT 2016: mmchnode: Processing node c350f2u26.pk.labs.ibm.com
    mmchnode: node c350f2u26.pok.stglabs.ibm.com media checks passed.
    
    mmchnode: Propagating the cluster configuration data to all
    affected nodes.  This is an asynchronous process.
    
    You can verify the Transparent cloud tiering nodes by issuing the following command:
    # mmcloudgateway node list
  4. To designate only a few nodes (node1 and node2) in the node class TCTNodeClass1 as Transparent cloud tiering server nodes, issue the following command:
    # mmchnode --cloud-gateway-enable -N node1,node2 --cloud-gateway-nodeclass TCTNodeClass1
    Note: This command designates only node1 and node2 from the node class TCTNodeClass1 as Transparent cloud tiering server nodes. Administrators can continue to use the node class for other purposes.
  5. To disable all Transparent cloud tiering nodes in the node class TCTNodeClass1, issue the following command:
    # mmchnode --cloud-gateway-disable -N TCTNodeClass1
    A sample output is as follows:
    
    Thu May 12 16:10:11 EDT 2016: mmchnode: Processing node c350f2u18
    mmchnode: Verifying Transparent Cloud Tiering node c350f2u18 can be disabled...
    mmchnode: Node c350f2u18 passed disable checks.
    
    Thu May 12 16:10:11 EDT 2016: mmchnode: Processing node c350f2u22.pk.labs.ibm.com
    mmchnode: Verifying Transparent Cloud Tiering node c350f2u22.pk.labs.ibm.com can be disabled...
    mmchnode: Node c350f2u22.pok.stglabs.ibm.com passed disable checks.
    
    Thu May 12 16:10:14 EDT 2016: mmchnode: Processing node c350f2u26.pk.labs.ibm.com
    mmchnode: Verifying Transparent Cloud Tiering node c350f2u26.pk.labs.ibm.com can be disabled...
    mmchnode: Node c350f2u26.pk.labs.ibm.com passed disable checks.
    
    mmchnode: Propagating the cluster configuration data to all
    affected nodes.  This is an asynchronous process.
  6. To disable only a few nodes (node1 and node2) in the node class TCTNodeClass1 as Transparent cloud tiering server nodes, issue the following command:
    # mmchnode --cloud-gateway-disable -N node1,node2 --cloud-gateway-nodeclass TCTNodeClass1
    Note: This command disables only node1 and node2 from the node class TCTNodeClass1 as Transparent cloud tiering server nodes.
  7. To add groups to specified nodes, issue the mmchnode --ces-group command. For example:
    # mmchnode --ces-group g1,g2 -N 2,3 
    Note: This command adds groups g1 and g2 to both nodes 2 and 3. Run the mmces node list command to view the group allocation:
    
    [root@cluster-12 ~]# mmces node list
    Node   Name                      Node Groups   Node Flags   
    ------ ------------------------- ------------- ------------ 
    2      cluster-12.localnet.com   g1,g2         Suspended    
    3      cluster-13.localnet.com   g1,g2         none         
    4      cluster-14.localnet.com                 none 
  8. The following example changes the daemon interface of a cluster node:
    1. The mmlscluster command displays the state of the cluster:
      # mmlscluster
      GPFS cluster information
      ========================
        GPFS cluster name:         small_cluster.localnet.com
        GPFS cluster id:           5072947464461061246
        GPFS UID domain:           small_cluster.localnet.com
        Remote shell command:      /usr/bin/ssh
        Remote file copy command:  /usr/bin/scp
        Repository type:           CCR
      GPFS cluster configuration servers:
      -----------------------------------
        Primary server:    node-6.localnet.com (not in use)
        Secondary server:  (none)
       Node  Daemon node name     IP address      Admin node name      Designation
      -----------------------------------------------------------------------------
         1   node-6.localnet.com  192.168.124.36  node-6.localnet.com  quorum
         2   node-7.localnet.com  192.168.124.37  node-7.localnet.com  quorum
         3   node-8.localnet.com  192.168.124.38  node-8.localnet.com
      
    2. The mmchnode command changes the daemon interface of node-6 from node-6.localnet.com/192.168.124.36 to node-6-2.new-localnet.com/10.20.40.36:
      # mmchnode --daemon-interface=node-6-2.new-localnet.com -N node-6
      Wed Sep 23 20:01:35 CEST 2020: mmchnode: Processing node node-6.localnet.com
      Verifying GPFS is stopped on all nodes ...
      Wed Sep 23 20:01:36 CEST 2020: mmchnode: Collecting ccr.nodes file version from all quorum nodes ...
      Wed Sep 23 20:01:37 CEST 2020: mmchnode: Applying new change to ccr.nodes version 2 on all available quorum nodes ...
      Wed Sep 23 20:01:40 CEST 2020: mmchnode: Committing new version of ccr.nodes file ...
      mmchnode: Propagating the cluster configuration data to all
        affected nodes.  This is an asynchronous process.
      
    3. The mmlscluster command now shows the new daemon interface for node-6. Some lines of output are omitted:
      # mmlscluster
      GPFS cluster information
      ========================
      ...
       Node  Daemon node name           IP address      Admin node name      Designation
      -----------------------------------------------------------------------------------
         1   node-6-2.new-localnet.com  10.20.40.36     node-6.localnet.com  quorum
         2   node-7.localnet.com        192.168.124.37  node-7.localnet.com  quorum
         3   node-8.localnet.com        192.168.124.38  node-8.localnet.com
      
  9. The following example changes the daemon interface and the administration interface of multiple nodes:
    1. The data file /tmp/specFile contains the following lines:
      node-6 --daemon-interface=node-6-2.new-localnet.com --admin-interface=node-6-2.new-localnet.com
      node-7 --daemon-interface=node-7-2.new-localnet.com --admin-interface=node-7-2.new-localnet.com
      node-8 --daemon-interface=node-8-2.new-localnet.com --admin-interface=node-8-2.new-localnet.com
      
    2. The mmchnode command changes the daemon interfaces and the administration interfaces of node-6, node-7, and node-8 based on the information in the data file:
      # mmchnode -S /tmp/specFile
      Wed Sep 23 20:19:48 CEST 2020: mmchnode: Processing node node-6
      Wed Sep 23 20:19:49 CEST 2020: mmchnode: Processing node node-7
      Wed Sep 23 20:19:50 CEST 2020: mmchnode: Processing node node-8
      Verifying GPFS is stopped on all nodes ...
      Wed Sep 23 20:19:52 CEST 2020: mmchnode: Collecting ccr.nodes file version from all quorum nodes ...
      Wed Sep 23 20:19:53 CEST 2020: mmchnode: Applying new change to ccr.nodes version 4 on all available quorum nodes ...
      Wed Sep 23 20:19:56 CEST 2020: mmchnode: Committing new version of ccr.nodes file ...
      mmchnode: Propagating the cluster configuration data to all
        affected nodes.  This is an asynchronous process.
      
    3. The mmlscluster command now shows the new daemon interfaces and administration interfaces for the three nodes. Some lines of output are omitted:
      # mmlscluster
      GPFS cluster information
      ========================
      GPFS cluster name:         small_cluster.localnet.com
      ...
      Node  Daemon node name             IP address      Admin node name               Designation
      --------------------------------------------------------------------------------------
         1   node-6-2.new-localnet.com    10.20.40.36    node-6-2.new-localnet.com     quorum
         2   node-7-2.new-localnet.com    10.20.40.37    node-7-2.new-localnet.com     quorum
         3   node-8-2.new-localnet.com    10.20.40.38    node-8-2.new-localnet.com
      

Location

/usr/lpp/mmfs/bin