CephMonQuorumAtRisk

Multiple MONs work together to provide redundancy; each MON keeps a copy of the cluster metadata. The cluster is deployed with three MONs and requires two or more MONs to be up and running for quorum, so that storage operations can run. If quorum is lost, access to data is at risk.
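The quorum rule is a strict majority: at least floor(n/2) + 1 of the deployed MONs must be up. A minimal sketch of that arithmetic (illustrative only, not a product command):

```shell
# Majority quorum size for n monitors: floor(n/2) + 1.
mons=3
quorum=$(( mons / 2 + 1 ))
echo "With ${mons} MONs deployed, ${quorum} must be up for quorum."
```

With the default deployment of 3 MONs this yields 2, which is why losing two MONs at once takes the cluster out of quorum.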

Impact: High

Diagnosis

Restore the Ceph MON Quorum. For more information, see Restoring ceph-monitor quorum in Fusion Data Foundation in the Troubleshooting guide.

If the restoration of the Ceph MON quorum fails, use the following general pod troubleshooting steps to resolve the issue:
pod status: pending
  1. Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems, using the following commands:
    • oc project openshift-storage
    • oc get pod | grep rook-ceph-mon
  2. Examine the output for a rook-ceph-mon pod that is in the pending state, not running, or not ready. Set MYPOD as the variable for that pod, specifying its name for <pod_name>:
    MYPOD=<pod_name>
  3. Look for resource limitations or pending PVCs. Otherwise, check the node assignment, using the oc get pod/${MYPOD} -o wide command.
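Steps 1 and 2 can be sketched offline. The pod names and statuses below are hypothetical sample output; in practice the input comes from oc get pod | grep rook-ceph-mon, and the status is the third column:

```shell
# Sketch: pick the first rook-ceph-mon pod that is not Running from
# 'oc get pod' style output. Pod names and states here are made up.
sample='rook-ceph-mon-a-5f7d9   1/1   Running   0   3d
rook-ceph-mon-b-6c8e1   0/1   Pending   0   5m
rook-ceph-mon-c-7a2b4   1/1   Running   0   3d'

# Column 3 is STATUS; take the first pod whose status is not Running.
MYPOD=$(echo "$sample" | awk '$3 != "Running" {print $1; exit}')
echo "$MYPOD"
```

Here the sketch would select rook-ceph-mon-b-6c8e1, the pod stuck in Pending.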
pod status: NOT pending, running, but NOT ready
Check the readiness probe, using the oc describe pod/${MYPOD} command.
pod status: NOT pending, but NOT running
Check for application or image issues, using the oc logs pod/${MYPOD} command.
Important: If a node was assigned, check the kubelet on the node.
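The three status cases above can be summarized as a small dispatch sketch. The next_step helper is hypothetical, not part of the product; it only echoes which diagnostic command from the cases above applies to a given pod phase:

```shell
# Hypothetical helper: map a pod phase to the next diagnostic command
# from the troubleshooting cases above.
next_step() {
  case "$1" in
    Pending)  echo 'oc get pod/${MYPOD} -o wide' ;;  # resources, PVCs, node assignment
    Running)  echo 'oc describe pod/${MYPOD}' ;;     # check the readiness probe
    *)        echo 'oc logs pod/${MYPOD}' ;;         # application or image issues
  esac
}
next_step Pending
```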

Mitigation

(Optional) Debugging log information
Run the following command to gather the debugging information for the Ceph cluster:
oc adm must-gather --image=registry.redhat.io/ocs4/ocs-must-gather-rhel8:v4.6