Activating applications in the destination cluster (Portworx asynchronous disaster recovery)

If the source cluster becomes unavailable, activate the applications in the destination cluster.

About this task

It is important to maintain a steady connection to your OpenShift® cluster for the following steps. When you run these commands from a workstation, ensure that the connection remains active, or use a terminal multiplexer such as tmux over SSH to decouple the running commands from your terminal session.
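
For example, a tmux session can be started once over the SSH connection and reattached later if the connection drops (the session name cpd-dr is only an illustration):

  tmux new-session -s cpd-dr      # start a named session and run the activation commands inside it
  tmux attach-session -t cpd-dr   # reattach to the same session after a disconnect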

Procedure

  1. If the source cluster still exists, complete the following substeps.
    1. Suspend the migration schedule:
      storkctl suspend migrationschedule cpd-tenant-migrationschedule -n ${PX_ADMIN_NS}
    2. Ensure that no migration is in progress:
      storkctl get migration -n ${PX_ADMIN_NS}
      The following example output shows a migration that is still in progress. Do not proceed to the next step until that migration finishes.
      NAME                                                CLUSTERPAIR       STAGE     STATUS       VOLUMES   RESOURCES   CREATED       ELAPSED                                    TOTAL BYTES TRANSFERRED
      cpd-tenant-migrationschedule-interval-<timestamp>   mig-clusterpair   Final     Successful   44/44     129/129     <timestamp>   Volumes (21m37s) Resources (2m4s)          0
      cpd-tenant-migrationschedule-interval-<timestamp>   mig-clusterpair   Volumes   InProgress   0/44      0/0         <timestamp>   Volumes (4m24.546815892s) Resources (NA)   0
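      If you prefer to wait from the command line, the following sketch polls storkctl until no migration reports InProgress (the 60-second interval is only an example):
      while storkctl get migration -n ${PX_ADMIN_NS} | grep -q InProgress; do
          echo "A migration is still in progress; checking again in 60 seconds..."
          sleep 60
      done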
  2. Activate migration for the instance (tenant) projects (namespaces):
    storkctl activate migrations -n ${PROJECT_CPD_INST_OPERATORS}
    storkctl activate migrations -n ${PROJECT_CPD_INST_OPERANDS}
    The cpdbr-tenant-service pod starts in the ${PROJECT_CPD_INST_OPERATORS} project.
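    To confirm that the pod is running before you continue, you can list it:
    oc get po -n ${PROJECT_CPD_INST_OPERATORS} | grep cpdbr-tenant-service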
  3. Run post-restore (migration) steps.
    1. If you are using Cloud Pak for Data 4.8.0, 4.8.1, 4.8.2, or 4.8.3, delete the following PVCs:
      for i in $(oc get persistentvolumeclaim -n ${PROJECT_CPD_INST_OPERANDS} -l icpdsupport/cpdbr=true,icpdsupport/ignore-on-nd-backup=true | awk '{print $1}' | grep -v NAME); do oc delete persistentvolumeclaim $i -n ${PROJECT_CPD_INST_OPERANDS}; done
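      To confirm that the PVCs were removed, you can re-run the query; it should return no resources:
      oc get persistentvolumeclaim -n ${PROJECT_CPD_INST_OPERANDS} -l icpdsupport/cpdbr=true,icpdsupport/ignore-on-nd-backup=true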
    2. Run the post-restore script:
      CPDBR_TENANT_SVC_POD=$(oc get po -n ${PROJECT_CPD_INST_OPERATORS} | grep cpdbr-tenant-service- | grep "Running" | awk '{ print $1 }')
      echo "cpdbr-tenant-service pod name=$CPDBR_TENANT_SVC_POD"
      oc exec -it -n ${PROJECT_CPD_INST_OPERATORS} ${CPDBR_TENANT_SVC_POD} -- \
        bash -c "/cpdbr-scripts/cpdbr/cpdbr-post-restore.sh ${PROJECT_CPD_INST_OPERATORS} 30m"
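      The steps in What to do next depend on whether this script succeeds, so you can capture the exit status of the oc exec command immediately after it returns (0 indicates success):
      echo "cpdbr-post-restore exit status: $?"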

What to do next

If the post-restore script fails and the source cluster no longer exists, run the post-restore script again.

If the post-restore script fails and the source cluster still exists, complete the following steps:

  1. Delete non-namespaced resources.
    1. Get the list of PersistentVolumes (PVs) from the Cloud Pak for Data instance project that are in the Released state:
      oc get pv --no-headers | grep "Released.*${PROJECT_CPD_INST_OPERANDS}/.*" | awk '{print $1}'
    2. Delete those PVs:
      oc get pv --no-headers | grep "Released.*${PROJECT_CPD_INST_OPERANDS}/.*" | awk '{print $1}' | xargs oc delete pv
    3. Delete outdated security context constraints (SCCs):
      oc get scc | grep ${PROJECT_CPD_INST_OPERANDS} | awk '{print $1}' | xargs oc delete scc
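      To confirm the cleanup, the following sketch counts any remaining Released PVs and leftover SCCs for the instance project; both commands should print 0:
      oc get pv --no-headers | grep -c "Released.*${PROJECT_CPD_INST_OPERANDS}/.*"
      oc get scc --no-headers | grep -c ${PROJECT_CPD_INST_OPERANDS}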
  2. Delete the Cloud Pak for Data instance projects (namespaces) in the destination cluster.
    1. Locate and remove finalizers that might block the deletion of the Cloud Pak for Data instance operand project, and then delete the operand project by running the following commands:
      oc project ${PROJECT_CPD_INST_OPERANDS}
      while read -r resource_type
      do
          echo "${resource_type}"
          while read -r resource
          do
              if [ -z "${resource}" ]; then
                  continue
              fi
              kubectl delete "${resource}" -n "${PROJECT_CPD_INST_OPERANDS}" --timeout=10s \
              || kubectl patch "${resource}" -n "${PROJECT_CPD_INST_OPERANDS}" \
                  --type=merge \
                  --patch '{"metadata":{"finalizers":[]}}'
          done <<< "$(kubectl get "${resource_type}" -n "${PROJECT_CPD_INST_OPERANDS}" -o name  | sort)"
      done <<< "$(kubectl api-resources --namespaced=true -o name | grep ibm.com | sort)"
      oc delete project ${PROJECT_CPD_INST_OPERANDS}
    2. When all finalizers are removed, check that the Cloud Pak for Data instance operand project was deleted by running the following command:
      oc get project ${PROJECT_CPD_INST_OPERANDS} -o yaml
      If the project was deleted, the command returns the following message:
      Error from server (NotFound): namespaces "${PROJECT_CPD_INST_OPERANDS}" not found
    3. If some services were installed in tethered projects, repeat substeps 1 and 2 for each tethered project.

      In the commands, replace the PROJECT_CPD_INST_OPERANDS environment variable with PROJECT_CPD_INSTANCE_TETHERED.

      Tip: If you set the PROJECT_CPD_INSTANCE_TETHERED_LIST environment variable, print the list of tethered projects to the terminal:
      echo $PROJECT_CPD_INSTANCE_TETHERED_LIST

      Use this information to set the PROJECT_CPD_INSTANCE_TETHERED environment variable before you re-run the commands.
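
      If you prefer to script this, the following is a minimal sketch; it assumes that PROJECT_CPD_INSTANCE_TETHERED_LIST is a comma-separated list of project names and repeats the same finalizer removal and project deletion for each entry:
      for PROJECT_CPD_INSTANCE_TETHERED in $(echo "${PROJECT_CPD_INSTANCE_TETHERED_LIST}" | tr ',' ' ')
      do
          echo "Cleaning up tethered project: ${PROJECT_CPD_INSTANCE_TETHERED}"
          while read -r resource_type
          do
              while read -r resource
              do
                  if [ -z "${resource}" ]; then
                      continue
                  fi
                  kubectl delete "${resource}" -n "${PROJECT_CPD_INSTANCE_TETHERED}" --timeout=10s \
                  || kubectl patch "${resource}" -n "${PROJECT_CPD_INSTANCE_TETHERED}" \
                      --type=merge \
                      --patch '{"metadata":{"finalizers":[]}}'
              done <<< "$(kubectl get "${resource_type}" -n "${PROJECT_CPD_INSTANCE_TETHERED}" -o name | sort)"
          done <<< "$(kubectl api-resources --namespaced=true -o name | grep ibm.com | sort)"
          oc delete project ${PROJECT_CPD_INSTANCE_TETHERED}
      done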

    4. Delete the Cloud Pak for Data instance operator project:
      oc project ${PROJECT_CPD_INST_OPERATORS}
      while read -r resource_type
      do
          echo "${resource_type}"
          while read -r resource
          do
              if [ -z "${resource}" ]; then
                  continue
              fi
              kubectl delete "${resource}" -n "${PROJECT_CPD_INST_OPERATORS}" --timeout=10s \
              || kubectl patch "${resource}" -n "${PROJECT_CPD_INST_OPERATORS}" \
                  --type=merge \
                  --patch '{"metadata":{"finalizers":[]}}'
          done <<< "$(kubectl get "${resource_type}" -n "${PROJECT_CPD_INST_OPERATORS}" -o name  | sort)"
      done <<< "$(kubectl api-resources --namespaced=true -o name | grep ibm.com | sort)"
      oc delete project ${PROJECT_CPD_INST_OPERATORS}
    5. If the Cloud Pak for Data scheduling service was installed, uninstall it.
  3. Resume the migration schedule:
    storkctl resume migrationschedule cpd-tenant-migrationschedule -n ${PX_ADMIN_NS}
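    To confirm that the schedule resumed, check that it is no longer marked as suspended:
    storkctl get migrationschedules -n ${PX_ADMIN_NS}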