Activating applications in the destination cluster (Portworx asynchronous disaster recovery)
If the source cluster becomes unavailable, activate the applications in the destination cluster.
About this task
It is important to maintain a steady connection to your OpenShift® cluster for the following steps. When running these commands from a workstation, ensure that the connection remains active, or consider using a terminal multiplexer like tmux over SSH to decouple the running commands from your main terminal. For more information, see the following articles:
Procedure
What to do next
If running the post-restore script is not successful and the source cluster no longer exists, run the post-restore script again.
If running the post-restore script is not successful and the source cluster still exists, complete the following steps:
- Delete non-namespaced resources.
  - Get the list of PersistentVolumes (PVs) from the Cloud Pak for Data instance project that are in the Released state:
    oc get pv --no-headers | grep "Released.*${PROJECT_CPD_INST_OPERANDS}/.*" | awk '{print $1}'
  - Delete those PVs:
    oc get pv --no-headers | grep "Released.*${PROJECT_CPD_INST_OPERANDS}/.*" | awk '{print $1}' | xargs oc delete pv
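As a sanity check, the grep and awk filter in the commands above can be exercised against sample output without a cluster. The project and PV names in this sketch are made up for illustration only:

```shell
# Illustrative only: sample lines that mimic `oc get pv --no-headers`
# output; the project name and PV names are invented for this sketch.
PROJECT_CPD_INST_OPERANDS="cpd-instance"
sample='pvc-aaa   10Gi   RWO   Delete   Released   cpd-instance/data-pvc-0   portworx-sc   5d
pvc-bbb   10Gi   RWO   Delete   Bound      cpd-instance/data-pvc-1   portworx-sc   5d'

# Only PVs in the Released state that were claimed by the instance
# project are selected; Bound PVs are left alone.
released="$(printf '%s\n' "$sample" | grep "Released.*${PROJECT_CPD_INST_OPERANDS}/.*" | awk '{print $1}')"
echo "$released"
```

Running the filter against live output first (without `| xargs oc delete pv`) lets you confirm the selection before deleting anything.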
  - Delete outdated security context constraints (SCCs):
    oc get scc | grep ${PROJECT_CPD_INST_OPERANDS} | awk '{print $1}' | xargs oc delete scc
- Delete the Cloud Pak for Data instance projects (namespaces) in the destination cluster.
- Locate and remove finalizers that might block the deletion of the Cloud Pak for Data instance operand project, and then delete the operand project by running the following commands:

oc project ${PROJECT_CPD_INST_OPERANDS}

while read -r resource_type; do
    echo "${resource_type}"
    while read -r resource; do
        if [ -z "${resource}" ]; then
            continue
        fi
        kubectl delete "${resource}" -n "${PROJECT_CPD_INST_OPERANDS}" --timeout=10s \
            || kubectl patch "${resource}" -n "${PROJECT_CPD_INST_OPERANDS}" \
                --type=merge \
                --patch '{"metadata":{"finalizers":[]}}'
    done <<< "$(kubectl get "${resource_type}" -n "${PROJECT_CPD_INST_OPERANDS}" -o name | sort)"
done <<< "$(kubectl api-resources --namespaced=true -o name | grep ibm.com | sort)"
oc delete project ${PROJECT_CPD_INST_OPERANDS}
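The nested while read loops above follow a common pattern: the outer loop reads API resource types from a here-string, and the inner loop reads each resource of that type. A minimal local sketch of the same control flow, with made-up resource types and names standing in for kubectl output:

```shell
# Local sketch of the nested while-read pattern (no cluster needed).
# The made-up resource types and names stand in for kubectl output.
count=0
while read -r resource_type; do
    while read -r resource; do
        # Skip the empty line that a type with no resources produces.
        if [ -z "${resource}" ]; then
            continue
        fi
        count=$((count + 1))
    done <<< "$(printf '%s/a\n%s/b' "${resource_type}" "${resource_type}")"
done <<< "$(printf 'foos.ibm.com\nbars.ibm.com')"
echo "${count} resources visited"
```

The here-strings (`<<<`) are a bash feature; they avoid piping into `while`, which would run the loop body in a subshell and lose variable assignments.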
- When all finalizers are removed, check that the Cloud Pak for Data instance operand project was deleted by running the following command:

oc get project ${PROJECT_CPD_INST_OPERANDS} -o yaml

If the project was deleted, the command returns the following message:

Error from server (NotFound): namespaces "${PROJECT_CPD_INST_OPERANDS}" not found
- If some services were installed in tethered projects, repeat steps 3 and 4 for each tethered project. In the commands, replace the PROJECT_CPD_INST_OPERANDS environment variable with PROJECT_CPD_INSTANCE_TETHERED.
Tip: If you set the PROJECT_CPD_INSTANCE_TETHERED_LIST environment variable, print the list of tethered projects to the terminal:
echo $PROJECT_CPD_INSTANCE_TETHERED_LIST
Use this information to set the PROJECT_CPD_INSTANCE_TETHERED environment variable before you re-run the commands.
- Delete the Cloud Pak for Data instance operator project:
oc project ${PROJECT_CPD_INST_OPERATORS}
while read -r resource_type; do
    echo "${resource_type}"
    while read -r resource; do
        if [ -z "${resource}" ]; then
            continue
        fi
        kubectl delete "${resource}" -n "${PROJECT_CPD_INST_OPERATORS}" --timeout=10s \
            || kubectl patch "${resource}" -n "${PROJECT_CPD_INST_OPERATORS}" \
                --type=merge \
                --patch '{"metadata":{"finalizers":[]}}'
    done <<< "$(kubectl get "${resource_type}" -n "${PROJECT_CPD_INST_OPERATORS}" -o name | sort)"
done <<< "$(kubectl api-resources --namespaced=true -o name | grep ibm.com | sort)"
oc delete project ${PROJECT_CPD_INST_OPERATORS}
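The earlier instruction to repeat the cleanup for each tethered project can be scripted as a loop. This is a sketch only: it assumes PROJECT_CPD_INSTANCE_TETHERED_LIST holds a comma-separated list (the example value below is invented), and it only echoes each project name where the real cleanup commands would go:

```shell
# Sketch only: assumes PROJECT_CPD_INSTANCE_TETHERED_LIST is a
# comma-separated list of tethered project names. Replace the echo
# with the cleanup commands from steps 3 and 4, using
# PROJECT_CPD_INSTANCE_TETHERED instead of PROJECT_CPD_INST_OPERANDS.
PROJECT_CPD_INSTANCE_TETHERED_LIST="tether-a,tether-b"   # example value

# Split the list on commas into an array of project names.
IFS=',' read -r -a tethered_projects <<< "${PROJECT_CPD_INSTANCE_TETHERED_LIST}"
for PROJECT_CPD_INSTANCE_TETHERED in "${tethered_projects[@]}"; do
    echo "Cleaning up tethered project: ${PROJECT_CPD_INSTANCE_TETHERED}"
done
```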
- If the Cloud Pak for Data scheduling service was installed, uninstall it.
- Resume the migration schedule:
storkctl resume migrationschedule cpd-tenant-migrationschedule -n ${PX_ADMIN_NS}