Preparing to migrate Cloud Pak for Data projects for Portworx asynchronous disaster recovery
Complete the following prerequisite tasks before you migrate Cloud Pak for Data projects for Portworx asynchronous disaster recovery. Service-specific tasks are needed only when the corresponding services are installed.
Ensure that you source the environment variables before you run the commands in this task.
Check that the primary instance of every PostgreSQL cluster is in sync with its replicas
The replicas for Cloud Native PostgreSQL and EDB Postgres clusters occasionally get out of sync with the primary node. For information about diagnosing and fixing this problem, see PostgreSQL cluster replicas get out of sync.
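As a quick command-line check, the following sketch lists each PostgreSQL cluster custom resource in the instance project and compares the requested instances with the ready instances. The resource group (postgresql.k8s.enterprisedb.io) and the status fields shown here are assumptions and might differ in your environment.
# Sketch only: list EDB Postgres / Cloud Native PostgreSQL clusters and show
# the current primary and how many instances are ready versus requested.
# The resource name and status fields are assumptions; adjust them if your
# operator registers the Cluster resource differently.
oc get clusters.postgresql.k8s.enterprisedb.io -n ${PROJECT_CPD_INST_OPERANDS} \
  -o custom-columns=NAME:.metadata.name,INSTANCES:.spec.instances,READY:.status.readyInstances,PRIMARY:.status.currentPrimary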
Check the deployment profile of IBM Cloud Pak foundational services
The minimum deployment profile of IBM Cloud Pak foundational services that is required by the backup and restore process is Small. For more information about sizing IBM Cloud Pak foundational services, see Hardware requirements and recommendations for foundational services.
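As a quick check, the following sketch reads the size that is set on the CommonService custom resource. The resource name common-service and the operators project variable are assumptions based on a typical installation.
# Sketch only: show the deployment profile (size) of foundational services.
# "common-service" and ${PROJECT_CPD_INST_OPERATORS} are assumptions; adjust
# them to match your installation.
oc get commonservice common-service -n ${PROJECT_CPD_INST_OPERATORS} \
  -o jsonpath='{.spec.size}{"\n"}'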
Prepare IBM Knowledge Catalog
If large metadata enrichment jobs are running when an online backup operation is triggered, the Db2 pre-backup hooks might fail because the database cannot be put into a write-suspended state. Schedule online backups for times when the metadata enrichment workload is minimal.
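One rough way to confirm that no large enrichment workload is active before the backup window is to look for running pods whose names mention enrichment. The name pattern below is a heuristic assumption, not a documented label or selector.
# Heuristic sketch: list running pods whose names suggest metadata enrichment work.
# The "enrich" pattern is an assumption, not an official selector.
oc get pods -n ${PROJECT_CPD_INST_OPERANDS} --field-selector=status.phase=Running \
  | grep -i enrich || echo "No running metadata enrichment pods found"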
Prepare watsonx Assistant
5.0.0-5.0.2 If you upgraded Cloud Pak for Data from a previous release, some labels on PostgreSQL Persistent Volume Claims (PVCs) must be removed before a backup is taken. Do the following steps:
- Log in to Red Hat® OpenShift® Container Platform as a cluster administrator.
${OC_LOGIN}
Remember: OC_LOGIN is an alias for the oc login command.
- Set the watsonx Assistant instance name and Cloud Pak for Data instance project (namespace) environment variables:
export INSTANCE=<watsonx Assistant instance name>
export NAMESPACE=<Cloud Pak for Data namespace>
- Remove the labels:
for pvc in $(oc get pvc -n $NAMESPACE -l app=$INSTANCE-postgres -o jsonpath='{.items[*].metadata.name}'); do
  if [ "X$(oc get pvc $pvc -o jsonpath='{.metadata.labels.velero\.io/exclude-from-backup}' -n $NAMESPACE)" != "X" ]; then
    oc patch pvc $pvc -p '{"metadata": {"labels": {"velero.io/exclude-from-backup": null}}}' -n $NAMESPACE
    echo "Label 'velero.io/exclude-from-backup' removed for PVC: $pvc"
  else
    echo "Label 'velero.io/exclude-from-backup' not found for PVC: $pvc"
  fi
  if [ "X$(oc get pvc $pvc -o jsonpath='{.metadata.labels.icpdsupport/empty-on-nd-backup}' -n $NAMESPACE)" != "X" ]; then
    oc patch pvc $pvc -p '{"metadata": {"labels": {"icpdsupport/empty-on-nd-backup": null}}}' -n $NAMESPACE
    echo "Label 'icpdsupport/empty-on-nd-backup' removed for PVC: $pvc"
  else
    echo "Label 'icpdsupport/empty-on-nd-backup' not found for PVC: $pvc"
  fi
done
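Optionally, confirm that the labels were removed by listing the PVC labels again. This verification step is a suggestion and not part of the documented procedure.
# Optional verification: the watsonx Assistant PostgreSQL PVCs should no longer
# show the velero.io/exclude-from-backup or icpdsupport/empty-on-nd-backup labels.
oc get pvc -n $NAMESPACE -l app=$INSTANCE-postgres --show-labels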
Prepare watsonx Orchestrate
5.0.0-5.0.2 Create a PersistentVolumeClaim (PVC) to store MongoDB backups.
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-wo-mongo-backups
  namespace: ${PROJECT_CPD_INST_OPERANDS}
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
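You can then confirm that the new PVC is Bound before you continue. This check is a suggestion rather than a documented requirement.
# Optional check: confirm that the MongoDB backup PVC created above is Bound.
oc get pvc pvc-wo-mongo-backups -n ${PROJECT_CPD_INST_OPERANDS}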
Check the status of installed services
Ensure that the status of all installed services is Completed. Do the following steps:
- Log the cpd-cli in to the Red Hat OpenShift Container Platform cluster:
${CPDM_OC_LOGIN}
Remember: CPDM_OC_LOGIN is an alias for the cpd-cli manage login-to-ocp command.
- Run the following command to get the status of all services:
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}
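If the output is long, you can filter it to show only custom resources that are not yet in the Completed state. The grep filter is a convenience added here and is not a cpd-cli option.
# Convenience sketch: hide rows that report Completed so that any service that
# is still installing or failing stands out. The grep filter is an assumption,
# not part of the cpd-cli command.
cpd-cli manage get-cr-status \
--cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} | grep -iv completed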
Separately back up services that do not support online backups
Before you migrate a Cloud Pak for Data instance, back up services that do not support online backups by using their service-specific backup processes. For more information about which services do not support online backups, see Services that support backup and restore.