Verifying Regional-DR configuration

Verify that all required components for regional disaster recovery (Regional-DR) are properly configured and operational.

Procedure

  1. Verify that the DRPolicy is created successfully.
    Run this command on the Hub cluster for each DRPolicy resource that you created, replacing <drpolicy_name> with your unique name.
    oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{"\n"}'

    Example output: Succeeded

    When a DRPolicy is created, two DRCluster resources are also created along with it. It might take up to 10 minutes for all three resources to be validated and for the status to show as Succeeded.

    Note: Editing the SchedulingInterval, ReplicationClassSelector, VolumeSnapshotClassSelector, and DRClusters field values is not supported in the DRPolicy.
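    While the validation is in progress, you can optionally list every condition on the DRPolicy instead of only the first reason. The following command is a sketch that assumes the standard status.conditions layout shown above:
    oc get drpolicy <drpolicy_name> -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'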
  2. Verify the object bucket access from the Hub cluster to both the Primary-managed cluster and the Secondary-managed cluster.
    1. Get the names of the DRClusters on the Hub cluster.
      oc get drclusters
      Example output:
      NAME        AGE
      ocp4bos1    4m42s
      ocp4bos2    4m42s
    2. Check S3 access to each bucket created on each managed cluster.
      Run the DRCluster validation command, replacing <drcluster_name> with your unique name.
      Note: Editing the Region and S3ProfileName field values is not supported in DRClusters.
      oc get drcluster <drcluster_name> -o jsonpath='{.status.conditions[2].reason}{"\n"}'

      Example output: Succeeded

      Note: Make sure to run this command for both DRClusters on the Hub cluster.
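      As a convenience, you can check both DRClusters in one pass with a small shell loop. This is an optional sketch that assumes your current context points at the Hub cluster and that the validation condition sits at index 2, as in the command above:
      for drcluster in $(oc get drclusters -o jsonpath='{.items[*].metadata.name}'); do
        oc get drcluster "$drcluster" -o jsonpath='{.metadata.name}{": "}{.status.conditions[2].reason}{"\n"}'
      done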
  3. Verify that the IBM Fusion Data Foundation DR Cluster operator installation was successful on the Primary-managed cluster and the Secondary-managed cluster.
    oc get csv,pod -n openshift-dr-system
    Example output:
    NAME                                                                                          DISPLAY                         VERSION                 REPLACES                                PHASE
    clusterserviceversion.operators.coreos.com/metallb-operator.v4.18.0-202505081435              MetalLB Operator                4.18.0-202505081435     metallb-operator.v4.18.0-202504291304   Succeeded
    clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.18.4-rhodf                 Openshift DR Cluster Operator   4.18.4-rhodf            odr-cluster-operator.v4.18.3-rhodf      Succeeded
    clusterserviceversion.operators.coreos.com/openshift-gitops-operator.v1.16.0-0.1746014725.p   Red Hat OpenShift GitOps        1.16.0+0.1746014725.p   openshift-gitops-operator.v1.16.0       Succeeded
    
    NAME                                             READY   STATUS    RESTARTS   AGE
    pod/ramen-dr-cluster-operator-5458dbcb5c-jtzgm   2/2     Running   0          15s

    You can also verify that the IBM Fusion Data Foundation DR Cluster Operator is installed successfully in the OperatorHub of each managed cluster.

    Note: On the initial run, the VolSync operator is installed automatically. VolSync is used to set up volume replication between the two clusters to protect CephFS-based PVCs. The replication feature is enabled by default.
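    You can also optionally confirm the VolSync installation from the command line. This is a sketch; the grep simply filters the cluster-wide CSV list and assumes the ClusterServiceVersion name contains "volsync":
    oc get csv -A | grep -i volsync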
  4. Verify the status of the IBM Fusion Data Foundation mirroring daemon health on the Primary-managed cluster and the Secondary-managed cluster.
    oc get cephblockpoolradosnamespaces -n openshift-storage -o jsonpath='{range .items[*]}{.status.mirroringStatus.summary}{"\n"}{end}'
    Example output:
    {"daemon_health":"OK","health":"OK","image_health":"OK","states":{}}
    CAUTION:
    It could take up to 10 minutes for the daemon_health and health fields to go from Warning to OK. If the status does not eventually become OK, use the RHACM console to verify that the Submariner connection between the managed clusters is still in a healthy state. Do not proceed until all values are OK.
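    Rather than rerunning the command manually, you can optionally poll until the daemon health reports OK. This is a sketch that assumes a single CephBlockPoolRadosNamespace resource; adjust the index if your cluster has more than one:
    until oc get cephblockpoolradosnamespaces -n openshift-storage -o jsonpath='{.items[0].status.mirroringStatus.summary.daemon_health}' | grep -q OK; do
      echo "Waiting for mirroring daemon_health to reach OK..."
      sleep 30
    done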