
Initiate configuration node failover from CLI

How To


Summary

If the configuration node fails or is taken offline, the system chooses a new configuration node. This action is called configuration node failover. The new configuration node takes over the management IP addresses, so you can continue to access the system through the same IP addresses even though the original configuration node failed. During the failover, there is a short period when you cannot use the command-line tools or the management GUI.
For correct failover operation, all nodes of the system must be connected to the same subnet and the management IP configuration must be valid on the subnet.
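To check the management IP configuration before initiating a failover, you can list the system management IP addresses (a minimal sketch; lssystemip is available on current Storage Virtualize code levels, but check your release if unsure):
lssystemip
The output lists each configured management IP address together with its port, subnet mask or prefix, and gateway, all of which must be valid on the subnet shared by all nodes.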

Objective

This document describes the steps to initiate a configuration node failover from the CLI on any Storage Virtualize-based device.

Environment

Any Storage Virtualize-based device

Steps

1. SSH to the cluster management IP address to reach the CLI.
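For example, using the superuser account and the cluster management IP from the example output below (both are illustrative values from this document):
ssh superuser@9.42.162.193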

2. Identify the configuration node with the command "svcinfo lsnode".
The configuration node shows config_node yes.
For example:
IBM_FlashSystem:Cluster_9.42.162.193:superuser>svcinfo lsnode
id name UPS_serial_number WWNN      status IO_group_id IO_group_name config_node UPS_unique_id hardware iscsi_name                    iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number site_id site_name
1 node1         500507681200017D online 0     io_grp0   no            6H2   iqn.1986-03.com.ibm:2145.cluster9.42.162.193.node1      01-1   1      1     78F10NV
3 node2         500507681200017E online 0     io_grp0   yes           6H2   iqn.1986-03.com.ibm:2145.cluster9.42.162.193.node2      01-2   1      2     78F10NV
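If the default column layout is hard to read, most information commands accept a -delim flag that prints the fields with a separator of your choice, which makes the config_node column easier to spot (a sketch; the colon delimiter is just an example):
IBM_FlashSystem:Cluster_9.42.162.193:superuser>lsnode -delim :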
3. Verify in the output of "svcinfo lsnode" that the partner node of the current config node (the node in the same I/O group) is Online. In addition, make sure that no volumes depend on the configuration node: the command "lsdependentvdisks -node <ID OF THE CONFIG NODE>" returns a list of dependent volumes, as shown in the example after this step. Proceed to the next step only if the command returns no output.
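For example, using node ID 3, the config node in the example output above:
IBM_FlashSystem:Cluster_9.42.162.193:superuser>lsdependentvdisks -node 3
No output here means no volumes depend on the configuration node, so it is safe to continue.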
 
4. Warmstart the current config node. This restarts the I/O process on the node, so the node is unavailable for I/O for a couple of minutes.
During that time the partner node handles all of the traffic:
satask stopnode -warmstart <PANEL_NAME of the config node>
For example:
IBM_FlashSystem:Cluster_9.42.162.193:superuser>satask stopnode -warmstart 01-2

5. You will lose access to the CLI because the SSH service stops on this node and restarts on the new configuration node. Reconnect after a short wait and verify the failover, as shown below.
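A minimal verification sketch, assuming the management IP has moved to the new configuration node: reconnect over SSH and rerun lsnode. In the example above, config_node would now show yes for node1 and no for node2, the node that was warmstarted.
ssh superuser@9.42.162.193
IBM_FlashSystem:Cluster_9.42.162.193:superuser>svcinfo lsnode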

Document Location

Worldwide

[{"Type":"MASTER","Line of Business":{"code":"LOB26","label":"Storage"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SSK3NF","label":"IBM Spectrum Virtualize Software for SAN Volume Controller"},"ARM Category":[{"code":"a8m0z000000bqPHAAY","label":"Node Issues"}],"ARM Case Number":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions"}]

Document Information

Modified date:
21 November 2023

UID

ibm17080171