CephOSDFlapping

A Ceph storage daemon (OSD) has restarted 5 times in the last 5 minutes. Check the pod events or the Ceph status to find the cause.

Impact: High

Diagnosis

Follow the steps in the Flapping OSDs section of the IBM Storage Ceph documentation.
Use the following steps for general pod troubleshooting:
pod status: pending
  1. Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems, using the following commands:
    • oc project openshift-storage
    • oc get pod | grep rook-ceph
  2. Examine the output for a rook-ceph pod that is in the pending state, not running, or not ready. Set MYPOD as the variable for that pod, specifying its name for <pod_name>:
    MYPOD=<pod_name>
  3. Look for resource limitations or pending PVCs. Otherwise, check the node assignment, using the oc get pod/${MYPOD} -o wide command.
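The pending-pod steps above can be sketched as a short shell session. The awk readiness filter and the pod name rook-ceph-osd-1 are illustrative assumptions, not part of the documented procedure; substitute the pod name from your own output:

```shell
# Work in the storage namespace and list the rook-ceph pods.
oc project openshift-storage
oc get pod | grep rook-ceph

# Optional: print only pods that are not Running or not fully Ready
# (the 1/1 ready count is an assumption; OSD pods normally run one
# container).
oc get pod | grep rook-ceph | awk '$3 != "Running" || $2 != "1/1" {print $1}'

# Set MYPOD to the problem pod; rook-ceph-osd-1 is a hypothetical name.
MYPOD=rook-ceph-osd-1

# Wide output shows the assigned node, or <none> if the pod is
# unscheduled, which separates scheduling problems from kubelet problems.
oc get pod/${MYPOD} -o wide

# Events at the end of the describe output report resource limits
# and pending PVCs.
oc describe pod/${MYPOD}
```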
pod status: NOT pending, running, but NOT ready
Check the readiness probe, using the oc describe pod/${MYPOD} command.
pod status: NOT pending, but NOT running
Check for application or image issues, using the oc logs pod/${MYPOD} command.
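The two remaining states can be checked with a short sketch, assuming MYPOD is already set as above. The readiness grep and the --previous flag are conveniences added here, not part of the documented steps:

```shell
# Running but NOT ready: inspect the readiness probe definition and
# recent probe failures in the describe output.
oc describe pod/${MYPOD} | grep -i readiness

# NOT pending and NOT running: check the container logs for
# application or image issues.
oc logs pod/${MYPOD}

# After a restart, the previous container's logs usually hold the
# crash reason.
oc logs pod/${MYPOD} --previous
```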
Important:
  • If a node was assigned, check the kubelet on the node.

  • If the basic health of the running pods, the node affinity, and the resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components.
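
As a sketch of that last step, assuming the toolbox deployment uses the usual rook-ceph-tools name:

```shell
# Open a shell in the Ceph toolbox pod (deployment name assumed to be
# rook-ceph-tools, the usual Rook/ODF convention).
oc rsh -n openshift-storage deploy/rook-ceph-tools

# Inside the toolbox: overall health, and the OSD tree, where a
# flapping OSD shows repeated up/down transitions.
ceph status
ceph osd tree
ceph health detail

# Extract just the health state from the ceph status output.
ceph status | awk '/health:/ {print $2}'
```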

Mitigation
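
The Ceph documentation on flapping OSDs describes a temporary measure while the root cause (commonly network or heartbeat failures) is investigated: setting the noup and nodown flags so that the monitors stop marking OSDs up or down. A minimal sketch, run from the Ceph toolbox pod:

```shell
# Stop the monitors from marking OSDs up or down while the cause
# is investigated.
ceph osd set noup
ceph osd set nodown

# ...investigate and fix the underlying cause...

# Clear the flags once the OSDs are stable again.
ceph osd unset noup
ceph osd unset nodown
```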

(Optional) Debugging log information
Run the following command to gather the debugging information for the Ceph cluster:
oc adm must-gather --image=registry.redhat.io/ocs4/ocs-must-gather-rhel8:v4.6