To enable concurrent maintenance, a system is configured as pairs of nodes. Each pair is
called an I/O group. If one system node is shut down or disconnected for maintenance, the other node
can keep the I/O group operational.
With concurrent maintenance, all field-replaceable units (FRUs) can be removed, replaced, and
tested on one system node while the network and host systems are powered on and doing productive
work through the second node.
Attention: Do not remove the power from any nodes unless the procedures instruct you to
do so.
Verify that concurrent maintenance is enabled before you shut down a node that is part of a
system or when you delete the node from a system. To do so, complete the following checks.
- Confirm that no volumes have dependencies on the node by completing these steps:
- In the management GUI, open the System -- Overview page.
- Use the directional arrow near the enclosure that contains the node canister to open the Enclosure Details page.
- Under Rear View of the system, right-click the canister and select Dependent Volumes from the Actions menu to display all volumes that become unavailable to hosts if the canister is powered off. You can also use the node parameter with the lsdependentvdisks CLI command to view dependent volumes.
If dependent volumes exist, determine whether the volumes are being
used. If the volumes are being used, either restore the redundant configuration or suspend the host
application. If a dependent quorum disk is reported, repair the access to the quorum disk or modify
the quorum disk configuration.
- Ensure that the host multipathing device drivers can fail over to the partner node.
Some host multipathing device drivers can take time to update after changes are made on the fabric. Do not shut down a node or delete it from a clustered system if the partner node in its I/O group has been online for less than 30 minutes. If possible, check the status of the host multipathing device drivers before you shut down a node to confirm that they can fail over to the partner node.
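The dependent-volume check above can also be scripted around the CLI output. The following is a minimal sketch only; the node ID and the sample lsdependentvdisks output are hypothetical, and real output columns vary by product release.

```shell
#!/bin/sh
# Hedged sketch: decide whether a node canister is safe to power off based on
# output from "lsdependentvdisks -node <id>" run on the system CLI.
# The sample output below is hypothetical, stored inline for illustration.
sample_output="vdisk_id vdisk_name
12       db_vol_0
13       db_vol_1"

# Count data rows (skip the header line); each row is a dependent volume.
dep_count=$(printf '%s\n' "$sample_output" | tail -n +2 | grep -c .)

if [ "$dep_count" -eq 0 ]; then
  echo "No dependent volumes: the canister can be powered off"
else
  echo "$dep_count dependent volumes: restore redundancy or quiesce hosts first"
fi
```

In practice you would replace the inline sample with the live command output and act only when the count is zero, as the procedure above requires.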
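For the multipathing check above, a host-side inspection can be sketched as follows. This assumes a Linux host using device-mapper multipath, whose multipath -ll output lists one line per path; the sample output here is hypothetical and formats differ between driver versions.

```shell
#!/bin/sh
# Hedged sketch: count active paths in "multipath -ll"-style output to judge
# whether failover to the partner node is possible. The sample text is a
# hypothetical stand-in for live command output.
sample_output='mpatha (3600507680c000000d8000000000000f0) dm-0 IBM,2145
size=100G features=1 queue_if_no_path hwhandler=0
|-+- policy=service-time prio=50 status=active
| `- 3:0:0:1 sdb 8:16 active ready running
`-+- policy=service-time prio=10 status=enabled
  `- 4:0:0:1 sdc 8:32 active ready running'

# Each path in the ready state appears as "active ready running".
active_paths=$(printf '%s\n' "$sample_output" | grep -c 'active ready running')

if [ "$active_paths" -ge 2 ]; then
  echo "Redundant paths present ($active_paths active): failover is possible"
else
  echo "Only $active_paths active path(s): do not shut down the node yet"
fi
```

With at least one healthy path through each node of the I/O group, the host can continue I/O through the partner node while the other canister is serviced.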
When you shut down the node, follow the procedure that is described in Procedure: Powering off a node canister in the relevant Hardware Guide.
Attention: Do not power off any expansion enclosures when you power
off a node.
When you delete a node from the clustered system, retain the node information that is described
in Deleting a node from a clustered system by using the management GUI. This information helps you avoid
data corruption when you add the node back to the system. The topic describes how to ensure that the
multipathing device driver does not rediscover any paths that are manually removed. Other
considerations about dependent volumes are also provided.
For more information about working with dependent volumes, see the following topics: