Configuration node
At any one time, a single node in the system provides a focal point for configuration commands. If the configuration node fails, another node in the system takes over its responsibilities. The configuration node performs the following tasks:
- Accepts user logins to the management GUI and CLI via the IP address that is bound to Ethernet port 1 and optionally to the second management IP address that is bound to Ethernet port 2.
- Sends system configuration changes to the other nodes in the system.
- Sends call home and system notifications on behalf of the system.
If the configuration node fails or is taken offline, the system chooses a new configuration node. This action is called configuration node failover. The new configuration node takes over the management IP addresses, so you can access the system through the same IP addresses even though the original configuration node failed. During the failover, there is a short period when you cannot use the command-line tools or the management GUI.
For correct failover operation, all nodes of the system must be connected to the same subnet and the management IP configuration must be valid on the subnet.
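The same-subnet requirement above can be checked mechanically. The following is an illustrative sketch, not part of the product CLI: `ip_to_int`, `same_subnet`, and all addresses are placeholder names and values chosen for the example.

```shell
#!/bin/sh
# Illustrative helper (not a product command): check that two IPv4 addresses
# fall in the same subnet, as the failover requirement demands.
ip_to_int() {
  # Convert a dotted-quad IPv4 address to a 32-bit integer.
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

same_subnet() {
  # $1, $2: IPv4 addresses; $3: netmask, for example 255.255.255.0.
  mask=$(ip_to_int "$3")
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

# Example with placeholder addresses:
if same_subnet 192.168.10.5 192.168.10.42 255.255.255.0; then
  echo "same subnet"
fi
```

If the node service IPs and the management IPs do not all pass this kind of check against the subnet's netmask, failover can move the management IPs to a node where they are unreachable.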
Ethernet Link Failures
If the Ethernet link to the system fails because of an event that is unrelated to the system itself, such as a disconnected cable or a failed Ethernet router, the system does not attempt to fail over the configuration node to restore management IP access. To protect against this type of failure, the system lets you assign a management IP address to each of two Ethernet ports. If you cannot connect through one IP address, attempt to access the system through the alternative IP address.
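The try-one-address-then-the-other behavior described above can be scripted. This is a hedged sketch: `pick_mgmt_ip` and `probe_https` are illustrative names (not product commands), and the addresses in the usage comment are placeholders.

```shell
#!/bin/sh
# Sketch of the fallback described in the text: try the primary management IP,
# then the alternative management IP. All names and addresses are illustrative.
pick_mgmt_ip() {
  # $1: probe command; $2: primary IP; $3: alternative IP.
  # Prints the first address that answers, or fails if neither does.
  if "$1" "$2"; then
    echo "$2"
  elif "$1" "$3"; then
    echo "$3"
  else
    return 1
  fi
}

probe_https() {
  # Treat the address as reachable if its HTTPS port answers within 5 seconds.
  curl -k -s -o /dev/null --connect-timeout 5 "https://$1/"
}

# Usage (placeholder addresses):
# pick_mgmt_ip probe_https 192.168.10.20 192.168.20.20
```

Separating the probe from the selection logic keeps the sketch testable without a live system; any command that succeeds when its argument is reachable can stand in for `probe_https`.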
Identifying the configuration node using the GUI
To identify the configuration node, follow these steps:
- In a web browser, navigate to the Service Assistant Tool (SAT) of any node in the system by entering the following address:
  https://<service_ip>/service
  where service_ip is the service IP address of the chosen node.
- Log in to the SAT by authenticating with the superuser credentials.
- After you log in, the dashboard displays a list of all the nodes in the system. The configuration node is labelled CONFIG.
- Click the LED toggle to turn on the identification LED on the node canister.
- Check for the node canister with the identification LED lit and perform the required service action.
- When finished, click the LED toggle to turn off the identification LED on the node canister.
Identifying the configuration node using the command-line interface
To identify the configuration node, follow these steps:
- In a terminal window, use Secure Shell (SSH) software to connect to the service IP address of any node in the system and authenticate with the superuser credentials:
  ssh superuser@service_ip
  where service_ip is the service IP address of the chosen node.
- Once logged in, run the sainfo lsservicenodes command to display a list of all the nodes that can be serviced by using the service assistant CLI. From the list of nodes, check the panel_name field to determine which nodes are in the same system as the node that is running the command.
- For each of those nodes, run the following command to display the service status of the node:
  sainfo lsservicestatus panel_name
  where panel_name is the panel name of the node. The configuration node is the one that reports a config_node value of Yes.
- Turn on the identification LED for the configuration node canister by running the following command:
  satask chnodeled -on panel_name
  where panel_name is the panel name of the configuration node.
- Check for the node canister with the identification LED lit and perform the required service action.
- When finished, turn off the identification LED for the configuration node canister by running the following command:
  satask chnodeled -off panel_name
  where panel_name is the panel name of the configuration node.
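The CLI steps above can be scripted. The sketch below hedges heavily: the ssh loop is left as a comment because it needs a live system, and `is_config_node` assumes that lsservicestatus prints one "field value" pair per line with config_node reported as Yes or No, which is an assumption about the output format at your code level.

```shell
#!/bin/sh
# Sketch of automating the configuration-node search from the steps above.
is_config_node() {
  # Succeeds if the lsservicestatus-style output on stdin reports
  # a config_node value of Yes (assumed "field value" line format).
  awk '$1 == "config_node" && $2 == "Yes" { found = 1 } END { exit !found }'
}

# Usage against a live system (placeholder address; assumes the panel name
# is the first column of lsservicenodes output after its header line):
# for panel in $(ssh superuser@service_ip sainfo lsservicenodes | awk 'NR > 1 { print $1 }'); do
#   if ssh superuser@service_ip sainfo lsservicestatus "$panel" | is_config_node; then
#     echo "configuration node: $panel"
#   fi
# done
```

Keeping the parsing in a small function means it can be verified against captured command output before being pointed at a live system.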