ApplicationSet-based application failover between managed clusters
Failover is a process that transitions an application from a primary cluster to a secondary cluster when the primary cluster fails. While failover lets the application run on the secondary cluster with minimal interruption, making an uninformed failover decision can have adverse consequences, such as complete data loss if replication from the primary to the secondary cluster failed unnoticed. If a significant amount of time has passed since the last successful replication, it is best to wait until the failed primary is recovered.

lastGroupSyncTime is a critical metric that records the time of the last successful replication for all PVCs associated with an application; in essence, it measures the synchronization health between the primary and secondary clusters. Before initiating a failover from one cluster to another, check this metric and initiate the failover only if lastGroupSyncTime is acceptably recent. During failover, the Ceph-RBD mirror deployment on the failover cluster is scaled down to ensure a clean failover for volumes backed by the Ceph-RBD storage provisioner.
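The following is a minimal sketch of the lastGroupSyncTime check described above, run against the Hub Cluster. It assumes a bash shell with GNU date, that lastGroupSyncTime is reported under .status of each DRPlacementControl resource (as suggested by the oc get drpc output shown later in this section), and uses an illustrative threshold of 600 seconds; the acceptable window in practice depends on your recovery point objective and is not prescribed here.

  #!/usr/bin/env bash
  # Sketch: warn when the last successful group sync is older than an example
  # threshold. MAX_AGE_SECONDS=600 is illustrative; set it to match your RPO.
  MAX_AGE_SECONDS=600
  now=$(date +%s)

  # Read lastGroupSyncTime from every DRPlacementControl on the hub.
  for ts in $(oc get drpc -A -o jsonpath='{range .items[*]}{.status.lastGroupSyncTime}{"\n"}{end}'); do
    age=$(( now - $(date -d "$ts" +%s) ))
    if (( age > MAX_AGE_SECONDS )); then
      echo "WARNING: lastGroupSyncTime is ${age}s old; failing over now risks data loss."
    else
      echo "OK: lastGroupSyncTime is ${age}s old."
    fi
  done

If any timestamp is missing or older than your acceptable window, wait for the failed primary to recover or for replication to resume before initiating the failover.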
Before you begin
- If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management [Technology preview].
- When the primary cluster is in a state other than Ready, check the actual status of the cluster, as it might take some time to update.
- In the RHACM console, check the status of both managed clusters individually before performing a failover operation. However, the failover operation can still be run when the cluster you are failing over to is in a Ready state. A command-line alternative is sketched after this list.
- Run the following command on the Hub Cluster to check whether lastGroupSyncTime is within an acceptable data loss window when compared to the current time:
  oc get drpc -o yaml -A | grep lastGroupSyncTime
Example output:
  [...]
  lastGroupSyncTime: "2023-07-10T12:40:10Z"
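For the managed cluster status check mentioned earlier in this list, the availability reported by the hub can also be read from the command line. This is a sketch only; it assumes cluster-admin access to the Hub Cluster and the ManagedClusterConditionAvailable condition that RHACM sets on ManagedCluster resources, and <cluster-name> is a placeholder for one of your managed cluster names.

  # List all managed clusters and the availability reported by the hub.
  oc get managedclusters

  # Inspect the availability condition of a single cluster (replace <cluster-name>).
  oc get managedcluster <cluster-name> \
    -o jsonpath='{.status.conditions[?(@.type=="ManagedClusterConditionAvailable")].status}'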