Restoring ceph-monitor quorum in Fusion Data Foundation

In some circumstances, the ceph-mons might lose quorum. If the mons cannot form quorum again, you can restore quorum manually. The only requirement is that at least one mon is still healthy. Use this information to remove the unhealthy mons from quorum, re-form quorum with the single healthy mon, and then grow the quorum back to its original size.

About this task

For example, if you have three mons and lose quorum, you must remove the two bad mons from quorum, notify the good mon that it is the only mon in quorum, and then restart the good mon.

Procedure

  1. Stop the rook-ceph-operator so that the mons are not failed over when you are modifying the monmap.
    oc -n openshift-storage scale deployment rook-ceph-operator --replicas=0
  2. Inject a new monmap.
    Note: Inject the monmap with extreme caution. If it is injected incorrectly, your cluster could be permanently destroyed. The Ceph monmap keeps track of the mon quorum. The monmap is updated to contain only the healthy mon. In this example, the healthy mon is rook-ceph-mon-b, while the unhealthy mons are rook-ceph-mon-a and rook-ceph-mon-c.
    1. Take a backup of the current rook-ceph-mon-b deployment:
      oc -n openshift-storage get deployment rook-ceph-mon-b -o yaml > rook-ceph-mon-b-deployment.yaml
    2. Open the YAML file and copy the command and args (arguments) from the mon container.
      These are part of the containers list, as shown in the following example. This is needed for the monmap changes.
      [...]
        containers:
        - args:
          - --fsid=41a537f2-f282-428e-989f-a9e07be32e47
          - --keyring=/etc/ceph/keyring-store/keyring
          - --log-to-stderr=true
          - --err-to-stderr=true
          - --mon-cluster-log-to-stderr=true
          - '--log-stderr-prefix=debug '
          - --default-log-to-file=false
          - --default-mon-cluster-log-to-file=false
          - --mon-host=$(ROOK_CEPH_MON_HOST)
          - --mon-initial-members=$(ROOK_CEPH_MON_INITIAL_MEMBERS)
          - --id=b
          - --setuser=ceph
          - --setgroup=ceph
          - --foreground
          - --public-addr=10.100.13.242
          - --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db
          - --public-bind-addr=$(ROOK_POD_IP)
          command:
          - ceph-mon
      [...]
    3. Clean up the copied command and args fields to form a pastable command, as follows:
      # ceph-mon \
          --fsid=41a537f2-f282-428e-989f-a9e07be32e47 \
          --keyring=/etc/ceph/keyring-store/keyring \
          --log-to-stderr=true \
          --err-to-stderr=true \
          --mon-cluster-log-to-stderr=true \
          --log-stderr-prefix=debug \
          --default-log-to-file=false \
          --default-mon-cluster-log-to-file=false \
          --mon-host=$ROOK_CEPH_MON_HOST \
          --mon-initial-members=$ROOK_CEPH_MON_INITIAL_MEMBERS \
          --id=b \
          --setuser=ceph \
          --setgroup=ceph \
          --foreground \
          --public-addr=10.100.13.242 \
          --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db \
          --public-bind-addr=$ROOK_POD_IP
      Note: Make sure to remove the single quotes around the --log-stderr-prefix flag and the parentheses around the variables being passed: ROOK_CEPH_MON_HOST, ROOK_CEPH_MON_INITIAL_MEMBERS, and ROOK_POD_IP.
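The manual cleanup in this step can also be sketched as a small text transformation. The following is illustrative only (the file /tmp/mon-args.txt and the sample entries are hypothetical stand-ins for your copied args list); it shows the three edits described in the note:

```shell
# Sketch: normalize copied "args" lines into pastable ceph-mon flags.
# The file name and sample entries below are illustrative only.
cat > /tmp/mon-args.txt <<'EOF'
- --fsid=41a537f2-f282-428e-989f-a9e07be32e47
- '--log-stderr-prefix=debug '
- --mon-host=$(ROOK_CEPH_MON_HOST)
EOF
# Strip the leading "- ", drop the single quotes, and turn $(VAR) into $VAR.
sed -e 's/^- //' -e "s/^'//" -e "s/ *'$//" -e 's/\$(\([A-Z_]*\))/$\1/' /tmp/mon-args.txt
```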
    4. Patch the rook-ceph-mon-b deployment to stop this mon from running, without deleting the mon pod.
      oc -n openshift-storage patch deployment rook-ceph-mon-b  --type='json' -p '[{"op":"remove", "path":"/spec/template/spec/containers/0/livenessProbe"}]'
      oc -n openshift-storage patch deployment rook-ceph-mon-b -p '{"spec": {"template": {"spec": {"containers": [{"name": "mon", "command": ["sleep", "infinity"], "args": []}]}}}}'
    5. Perform the following steps on the mon-b pod:
      1. Connect to the pod of a healthy mon and run the following command:
        oc -n openshift-storage exec -it <mon-pod> -- bash
      2. Set a variable for the monmap path:
        monmap_path=/tmp/monmap
      3. Extract the monmap to a file by pasting the ceph-mon command from the good mon deployment and adding the --extract-monmap=${monmap_path} flag.
        # ceph-mon \
               --fsid=41a537f2-f282-428e-989f-a9e07be32e47 \
               --keyring=/etc/ceph/keyring-store/keyring \
               --log-to-stderr=true \
               --err-to-stderr=true \
               --mon-cluster-log-to-stderr=true \
               --log-stderr-prefix=debug \
               --default-log-to-file=false \
               --default-mon-cluster-log-to-file=false \
               --mon-host=$ROOK_CEPH_MON_HOST \
               --mon-initial-members=$ROOK_CEPH_MON_INITIAL_MEMBERS \
               --id=b \
               --setuser=ceph \
               --setgroup=ceph \
               --foreground \
               --public-addr=10.100.13.242 \
               --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db \
               --public-bind-addr=$ROOK_POD_IP \
               --extract-monmap=${monmap_path}
      4. Review the contents of the monmap.
         monmaptool --print ${monmap_path}
      5. Remove the bad mons from the monmap.
         monmaptool ${monmap_path} --rm <bad_mon>
        In this example, the bad mons are a and c:
         monmaptool ${monmap_path} --rm a
         monmaptool ${monmap_path} --rm c
      6. Inject the modified monmap into the good mon by pasting the ceph-mon command and adding the --inject-monmap=${monmap_path} flag as follows:
        # ceph-mon \
               --fsid=41a537f2-f282-428e-989f-a9e07be32e47 \
               --keyring=/etc/ceph/keyring-store/keyring \
               --log-to-stderr=true \
               --err-to-stderr=true \
               --mon-cluster-log-to-stderr=true \
               --log-stderr-prefix=debug \
               --default-log-to-file=false \
               --default-mon-cluster-log-to-file=false \
               --mon-host=$ROOK_CEPH_MON_HOST \
               --mon-initial-members=$ROOK_CEPH_MON_INITIAL_MEMBERS \
               --id=b \
               --setuser=ceph \
               --setgroup=ceph \
               --foreground \
               --public-addr=10.100.13.242 \
               --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db \
               --public-bind-addr=$ROOK_POD_IP \
               --inject-monmap=${monmap_path}
      7. Exit the shell to continue.
  3. Edit the Rook configmap and secret.
    1. Edit the configmap that the operator uses to track the mons.
      oc -n openshift-storage edit configmap rook-ceph-mon-endpoints
    2. Verify that the data element lists three mons, such as the following (or more, depending on your mon count):
      data: a=10.100.35.200:6789;b=10.100.13.242:6789;c=10.100.35.12:6789
    3. Delete the bad mons from the list to end up with a single good mon. For example:
      data: b=10.100.13.242:6789
    4. Save the file and exit.
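The edit above reduces the data string to the single good mon's entry. As a local sketch of that transformation, using the example endpoint values from this procedure (the shell filter itself is illustrative; editing the configmap by hand works just as well):

```shell
# Sketch: keep only the good mon's entry in the endpoints data string.
# The endpoint values are the examples from this procedure.
data='a=10.100.35.200:6789;b=10.100.13.242:6789;c=10.100.35.12:6789'
good_mon_id=b
new_data=$(echo "$data" | tr ';' '\n' | grep "^${good_mon_id}=")
echo "$new_data"   # -> b=10.100.13.242:6789
```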
    5. Update the Secret that is used by the mons and other components.
      1. Set a value for the variable good_mon_id.

        For example:
         good_mon_id=b
      2. Use the oc patch command to patch the rook-ceph-config secret and update the two key-value pairs, mon_host and mon_initial_members:
         mon_host=$(oc -n openshift-storage get svc rook-ceph-mon-b -o jsonpath='{.spec.clusterIP}')
        oc -n openshift-storage patch secret rook-ceph-config -p '{"stringData": {"mon_host": "[v2:'"${mon_host}"':3300,v1:'"${mon_host}"':6789]", "mon_initial_members": "'"${good_mon_id}"'"}}'
        Note: If you are using hostNetwork: true, replace the mon_host variable with the node IP that the mon is pinned to (nodeSelector), because no rook-ceph-mon-* service is created in that mode.
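The mon_host value written by the patch uses Ceph's v2/v1 address syntax: messenger v2 on port 3300 and the legacy v1 protocol on port 6789. A local sketch of the string being constructed, with the example IP from this procedure:

```shell
# Sketch: build the mon_host value in Ceph's v2/v1 address syntax.
# 10.100.13.242 is the example mon Service clusterIP from this procedure.
mon_host=10.100.13.242
good_mon_id=b
printf 'mon_host=[v2:%s:3300,v1:%s:6789]\nmon_initial_members=%s\n' \
    "$mon_host" "$mon_host" "$good_mon_id"
```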
  4. Restart the good mon pod with the original ceph-mon command to pick up the changes.
    1. Use the oc replace command on the backup of the mon deployment YAML file:
      oc replace --force -f rook-ceph-mon-b-deployment.yaml
      Note: The --force option deletes the existing deployment and creates a new one.
    2. Verify the status of the cluster. The status should show one mon in quorum. If the status looks good, your cluster should be healthy again.
  5. Delete the two mon deployments that are no longer expected to be in quorum.

    For example:

    oc -n openshift-storage delete deploy <rook-ceph-mon-1>
    oc -n openshift-storage delete deploy <rook-ceph-mon-2>

    In this example, the deployments to be deleted are rook-ceph-mon-a and rook-ceph-mon-c.

  6. Restart the operator.
    1. Start the rook operator again to resume monitoring the health of the cluster.
      Note: It is safe to ignore the errors that a number of resources already exist.
      oc -n openshift-storage scale deployment rook-ceph-operator --replicas=1
      The operator automatically adds more mons to increase the quorum size again depending on the mon count.