Creating a Disaster Recovery Policy on the Hub cluster
The Disaster Recovery Policy (DRPolicy) resource specifies the OpenShift Container Platform clusters participating in the disaster recovery solution. DRPolicy is a cluster-scoped resource that users can apply to applications that require a disaster recovery solution.
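For reference, a DRPolicy created through this procedure looks roughly like the following sketch (viewable with oc get drpolicy <drpolicy_name> -o yaml on the Hub cluster). The field names follow the Ramen DRPolicy API, and the policy name, cluster names, and interval shown here are the illustrative values used later in this procedure:
# Illustrative DRPolicy sketch; your names and interval will differ
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPolicy
metadata:
  name: ocp4bos1-ocp4bos2-5m
spec:
  drClusters:
  - ocp4bos1
  - ocp4bos2
  schedulingInterval: 5m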
Before you begin
About this task
The Fusion Data Foundation MultiCluster Orchestrator Operator facilitates the creation of each DRPolicy and the corresponding DRClusters through the Multicluster Web console.
Procedure
- On the OpenShift console, navigate to All Clusters > Data Services > Disaster recovery.
- On the Overview tab, click Create a disaster recovery policy, or navigate to the Policies tab and click Create DRPolicy.
- Enter the Policy name. Ensure that each DRPolicy has a unique name (for example: `ocp4bos1-ocp4bos2-5m`).
- Select two clusters from the list of managed clusters with which this new policy will be associated. Note: If you get an error message "OSDs not migrated" after selecting the clusters, follow the instructions in the knowledgebase article on Migration of existing OSD to the optimized OSD in OpenShift Data Foundation for Regional-DR cluster before proceeding with the next step.
- Replication policy is automatically set to Asynchronous (async) based on the OpenShift clusters selected, and a Sync schedule option becomes available.
- Set the Sync schedule. Important: For every desired replication interval, a new DRPolicy must be created with a unique name (such as `ocp4bos1-ocp4bos2-10m`). The same clusters can be selected, but the Sync schedule can be configured with a different replication interval in minutes, hours, or days. The minimum is one minute.
- Click Create.
- Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created, where <drpolicy_name> is replaced with your unique name.
oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{"\n"}'
Example output: Succeeded
When a DRPolicy is created, along with it, two DRCluster resources are also created. It might take up to 10 minutes for all three resources to be validated and for the status to show as Succeeded.
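Because validation can take up to 10 minutes, a small polling loop such as the following sketch avoids repeated manual checks. It reuses the validation command above; the 30-second interval is an arbitrary choice:
# Poll until the DRPolicy reports Succeeded (illustrative sketch)
until [ "$(oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}')" = "Succeeded" ]; do
  echo "Waiting for DRPolicy validation..."
  sleep 30
done
echo "DRPolicy <drpolicy_name> validated"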
Note: Editing of the SchedulingInterval, ReplicationClassSelector, VolumeSnapshotClassSelector, and DRClusters field values is not supported in the DRPolicy.
- Verify the object bucket access from the Hub cluster to both the Primary-managed cluster and the Secondary-managed cluster.
- Get the names of the DRClusters on the Hub cluster.
oc get drclusters
Example output:
NAME       AGE
ocp4bos1   4m42s
ocp4bos2   4m42s
- Check S3 access to each bucket created on each managed cluster. Use the DRCluster validation command, where <drcluster_name> is replaced with your unique name. Note: Editing of the Region and S3ProfileName field values is not supported in DRClusters.
oc get drcluster <drcluster_name> -o jsonpath='{.status.conditions[2].reason}{"\n"}'
Example output: Succeeded
Note: Make sure to run this command for both DRClusters on the Hub cluster.
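Rather than repeating the command by hand, you can loop over all DRClusters on the Hub cluster with a sketch like the following, which reuses the validation command above:
# Run the DRCluster validation command for every DRCluster (illustrative sketch)
for c in $(oc get drclusters -o jsonpath='{.items[*].metadata.name}'); do
  echo -n "$c: "
  oc get drcluster "$c" -o jsonpath='{.status.conditions[2].reason}{"\n"}'
done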
- Verify that the IBM Storage Fusion Data Foundation DR Cluster operator installation was successful on the Primary-managed cluster and the Secondary-managed cluster.
oc get csv,pod -n openshift-dr-system
Example output:
NAME                                                                       DISPLAY                         VERSION   REPLACES   PHASE
clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.14.0   Openshift DR Cluster Operator   4.14.0               Succeeded
clusterserviceversion.operators.coreos.com/volsync-product.v0.8.0         VolSync                         0.8.0                Succeeded

NAME                                             READY   STATUS    RESTARTS   AGE
pod/ramen-dr-cluster-operator-6467cf5d4c-cc8kz   2/2     Running   0          3d12h
You can also verify that the IBM Storage Fusion Data Foundation DR Cluster Operator is installed successfully in the OperatorHub of each managed cluster.
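If you prefer a scripted check to the console, a jsonpath query such as the following prints each ClusterServiceVersion with its phase; both should report Succeeded:
oc get csv -n openshift-dr-system -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'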
Note: On the initial run, the VolSync operator is installed automatically. VolSync is used to set up volume replication between two clusters to protect CephFS-based PVCs. The replication feature is enabled by default.
- Verify the status of the IBM Storage Fusion Data Foundation mirroring daemon health on the Primary-managed cluster and the Secondary-managed cluster.
oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage -o jsonpath='{.status.mirroringStatus.summary}{"\n"}'
Example output:
{"daemon_health":"OK","health":"OK","image_health":"OK","states":{}}
CAUTION: It could take up to 10 minutes for daemon_health and health to change from Warning to OK. If the status does not eventually become OK, use the RHACM console to verify that the Submariner connection between the managed clusters is still in a healthy state. Do not proceed until all values are OK.
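Because daemon_health and health can remain in Warning for several minutes, a polling sketch like the following (run on each managed cluster; the 30-second interval is an arbitrary choice) waits for the mirroring summary to settle before printing it:
# Poll until the mirroring daemon health reports OK (illustrative sketch)
until oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage \
    -o jsonpath='{.status.mirroringStatus.summary.daemon_health}' | grep -qx 'OK'; do
  echo "Waiting for mirroring daemon health..."
  sleep 30
done
oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage \
    -o jsonpath='{.status.mirroringStatus.summary}{"\n"}'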