Changing partitioned nodes to failed
Sometimes a partitioned condition is reported when there was actually a node outage. This can occur when cluster resource services loses communications with one or more nodes but cannot detect whether those nodes are still operational. When this condition occurs, a simple mechanism exists for you to indicate that the node has failed.
When the status of a node is changed to Failed, the roles of the nodes in the recovery domain for each cluster resource group in the partition may be reordered. The node that is set to Failed is assigned as the last backup. If multiple nodes have failed and their status must be changed, the order in which you change them affects the final order of the backup nodes in the recovery domain. If the failed node was the primary node for a CRG, the first active backup is reassigned as the new primary node.
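The reordering described above can be sketched as a small model. The function name and node names here are hypothetical illustrations, not part of cluster resource services; the actual reordering is performed internally by the operating system:

```python
def mark_node_failed(recovery_domain, failed_node):
    """Illustrative model of recovery-domain reordering on node failure.

    recovery_domain is an ordered list: the primary node first, followed
    by the backup nodes in order. The failed node is demoted to last
    backup; if it was the primary, the first remaining backup implicitly
    becomes the new primary. This is a sketch, not the real implementation.
    """
    domain = [n for n in recovery_domain if n != failed_node]
    domain.append(failed_node)  # the failed node becomes the last backup
    return domain

# Primary NODE1 fails: NODE2 (first active backup) becomes the new
# primary, and NODE1 is reordered to last backup.
print(mark_node_failed(["NODE1", "NODE2", "NODE3"], "NODE1"))
# A backup fails: the primary is unchanged; the failed backup moves last.
print(mark_node_failed(["NODE1", "NODE2", "NODE3"], "NODE2"))
```

Because each failure appends the failed node to the end of the list, applying the function for multiple failed nodes in different orders yields different final backup orders, matching the behavior described above.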
When cluster resource services has lost communications with a node but cannot detect whether the node is still operational, the cluster node has a status of Not communicating. You might need to change the status of the node from Not communicating to Failed; you can then restart the node.
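The status change described above can be modeled as a simple state machine. This is a hedged sketch: the transition table below is an assumption drawn only from this text, not an exhaustive list of cluster node statuses, and the function names are hypothetical:

```python
# Simplified transition table inferred from the text (an assumption,
# not the complete set of cluster node status transitions):
ALLOWED_TRANSITIONS = {
    ("Not communicating", "Failed"),  # operator marks the partitioned node failed
    ("Failed", "Active"),             # the node can then be restarted
}

def change_status(current, target):
    """Return the new status if the transition is allowed, else raise."""
    if (current, target) not in ALLOWED_TRANSITIONS:
        raise ValueError(f"cannot change status from {current!r} to {target!r}")
    return target

status = "Not communicating"
status = change_status(status, "Failed")  # change the status to Failed
status = change_status(status, "Active")  # restart the node
print(status)
```

The key point the model captures is the ordering: the node cannot be restarted directly from Not communicating; its status must first be changed to Failed.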