PowerHA SystemMirror and GDR coexistence
The IBM® PowerHA® SystemMirror® software provides cluster-based high availability (Standard Edition) and disaster recovery (Enterprise Edition) solutions. The GDR solution can operate with PowerHA SystemMirror Version 7.1.0 if you follow the guidelines that are required to deploy both solutions together.
Disaster recovery by using PowerHA SystemMirror Enterprise Edition
If you are using the PowerHA SystemMirror Enterprise Edition to perform disaster recovery for some of the virtual machines in your environment, you do not need to deploy the GDR solution for those virtual machines. In this case, you must exclude those virtual machines from the GDR disaster recovery management. Use the ksysmgr unmanage command to exclude the PowerHA virtual machines from the GDR configuration before you discover resources in the GDR solution.
High availability by using PowerHA SystemMirror Standard Edition
PowerHA SystemMirror Standard Edition is deployed within a site. PowerHA creates a cluster of a set of virtual machines within the active site for high availability management. If you are configuring such a cluster within the active site of the GDR environment, consider the following guidelines:
- Include all the virtual machines in the cluster in the GDR disaster recovery management.
- Perform a test failover of the cluster to the backup site to validate whether that cluster starts correctly on the backup site.
- Save the following information about each repository disk of the cluster:
  - Name
  - Physical volume identifier (PVID)
  - Universal Unique ID (UUID)
To obtain the disk information of the cluster nodes, complete the following steps on any one of the cluster nodes in the source site immediately after the GDR solution is implemented in your environment:
- Run the following command to obtain the name and UUID of each disk in the active site cluster:
  /usr/lib/cluster/clras dumprepos
  The same information is available on each node of the cluster.
- Run the following command to obtain the PVIDs:
  lspv -u | egrep -w "<disk1>|<disk2>|..."
  where disk1 and disk2 are the disk names as displayed in the previous output.
- Identify and differentiate the disks in the primary repository from the disks in the backup repository.
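The bookkeeping that these steps produce can be sketched as a small inventory of repository disks per site; all disk names, PVIDs, and UUIDs below are invented placeholders, not real clras or lspv output:

```python
# Minimal sketch of the per-site disk inventory collected in the steps above.
# Every identifier here is a made-up placeholder.
from dataclasses import dataclass

@dataclass(frozen=True)
class RepoDisk:
    role: str   # "primary" or "backup" repository disk
    name: str   # disk name, as shown by clras dumprepos / lspv
    pvid: str   # physical volume identifier, from lspv -u
    uuid: str   # UUID, from clras dumprepos

active_site = [
    RepoDisk("primary", "hdisk1", "00f6db0aaaaa0001", "uuid-active-1"),
    RepoDisk("backup",  "hdisk2", "00f6db0aaaaa0002", "uuid-active-2"),
]

# Differentiate the primary repository disk from the backup repository disks.
primary = [d for d in active_site if d.role == "primary"]
backups = [d for d in active_site if d.role == "backup"]
print([d.name for d in primary])  # ['hdisk1']
print([d.name for d in backups])  # ['hdisk2']
```

Recording the role together with the name, PVID, and UUID at discovery time is what makes the later backup-site matching step possible.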
To restart the PowerHA SystemMirror Version 7.1.0 or Cluster Aware AIX® (CAA) clusters on the backup site after the disaster recovery operation is complete, complete the following steps on any one of the recovered virtual machines that is used to start the cluster. You can use a single node to perform all the following steps:
- If the recovered virtual machines are not active, run the following command with the identified PVIDs to get the corresponding disks:
  lspv -u | egrep -w "<pvid1>|<pvid2>|..."
  where pvid1 and pvid2 are the PVIDs that are listed in the active site.
  Note: The disks in the active site and the backup site share common PVIDs.
- Save the type (primary or backup), name, PVID, and UUID for each disk in the active and backup sites and identify the corresponding disks between sites. For example, hdiskA with PVID B and UUID C mirrors to hdiskX with PVID B and UUID Y.
- Identify the disks that must be used as the primary repository and the backup repository in the backup site.
- Remove any CAA information from the backup site repository by using the following command:
  CAA_FORCE_ENABLED=true rmcluster -fr <backup_repos_name>
  This command removes only the CAA data.
- Run the following command to write the repository disk information by using the information from the CAA repository backup file:
  chrepos -c <backup_repos_name>
- If you were using non-native AIX Multipath I/O (MPIO) disks (for example, EMC PowerPath), run the following command to register the disks with the CAA and AIX disk drivers:
  clusterconf -d
- Run the following command for each backup repository disk in the previous site:
  chrepos -x <old_backup_UUID>,<new_backup_name | new_backup_UUID>
  For example, if the backup repository disk hdisk1 with PVID X and UUID Y mirrors to the backup repository disk hdisk5 with PVID X and UUID Z, run the following command:
  chrepos -x Y,hdisk5
  or
  chrepos -x Y,Z
- Run the following command to start the cluster in the backup site:
  clusterconf
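The disk-matching step above (pairing each active-site disk with its backup-site counterpart through the shared PVID, then feeding the old UUID and new name into chrepos -x) can be sketched as follows; all names, PVIDs, and UUIDs are hypothetical placeholders:

```python
# Sketch: pair active-site disks with backup-site replicas by shared PVID.
# Identifiers mirror the hdiskA/hdiskX example in the text; none are real output.
active_site = [
    {"name": "hdiskA", "pvid": "B", "uuid": "C"},
]
backup_site = [
    {"name": "hdiskX", "pvid": "B", "uuid": "Y"},
]

# The PVID is the key that is common to both sites.
by_pvid = {d["pvid"]: d for d in backup_site}
pairs = [(a, by_pvid[a["pvid"]]) for a in active_site if a["pvid"] in by_pvid]

for old, new in pairs:
    # These are the values you would substitute into:
    #   chrepos -x <old_backup_UUID>,<new_backup_name | new_backup_UUID>
    print(f"{old['uuid']} -> {new['name']} (UUID {new['uuid']})")
```

This prints C -> hdiskX (UUID Y) for the placeholder data, matching the shape of the chrepos -x example in the procedure.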
