Recovering from a disaster using Global Mirror

Use this information to complete the high-level steps that are required to recover from a disaster by using Global Mirror processing.

About this task

A failure at the local or primary site stops all I/O to and from the local storage server, and the local server can no longer communicate with the remote site. This can affect the formation of consistency groups, because the entire process is managed and controlled by the master storage server, which is the primary storage server.

Your initial goal is to swap operations from the local site to the remote site and then restart the applications. This requires that you make a set of consistent volumes available at the remote site before the applications can restart there.

When the local site is operational again, you want to return processing to the local site. Before you can return processing to the local site, you must apply changes from the remote site to the local site. These changes are the transactions that occurred after you started failover processing to the remote site.

The following considerations can help you determine where transactions are being processed:
  • The local site contains the A volumes (the source volumes), which are copied to the recovery site by using Global Copy.
  • The recovery (or remote) site contains the B volumes (the target volumes, which also serve as the FlashCopy source volumes) and the C volumes (the FlashCopy target volumes).
  • A storage unit at the local site is designated as the Global Mirror master, and all other local (or production) storage units are designated as subordinate storage units. The master storage unit sends commands to its subordinate storage units. The master and subordinate storage units work together to create a consistency group and to communicate the FlashCopy commands to the recovery (or remote) site. All status is relayed back to the Global Mirror master.
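If you need to confirm which storage unit holds the Global Mirror master role and check the state of the session, the DS CLI query commands showgmir and lssession can help. The following commands are a minimal sketch; the storage image ID, LSS ID, and session number are placeholders that you must replace with the values for your configuration:

    # Display Global Mirror status for an LSS on the master storage unit,
    # including the current copy state and consistency group information.
    showgmir -dev IBM.2107-75ABC12 10

    # List the Global Mirror session that is defined on that LSS and the
    # volumes that belong to it.
    lssession -dev IBM.2107-75ABC12 10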
To recover from a disaster, you must complete the following high-level tasks by using the Global Mirror function and DS CLI commands:
Note: For a scenario that applies these steps, see Running Global Mirror for an unplanned failover and failback. Illustrative DS CLI command sketches for these steps follow the procedure.

Procedure

  1. End Global Mirror processing when a disaster occurs.
  2. Check the status of the current processing for Global Mirror transactions.
  3. Initiate the failover process of A volumes to B volumes.
  4. Analyze the consistency group status.
  5. Use the revertflash command to correct FlashCopy relationships.
  6. Use the commitflash command to correct FlashCopy relationships.
  7. Initiate the fast reverse restore process.
  8. Wait for the background copy to complete.
    See Waiting for the background copy to complete for additional substeps.
  9. Reestablish the FlashCopy relationships, B volumes to C volumes.
  10. Prepare to reinstate production at the local site.
  11. Resynchronize the volumes.
    See Resynchronizing the volumes for additional substeps.
  12. Query until the first pass is complete and the out-of-sync track count drains to zero within the drain time, and then quiesce your system.
    See Querying, quiescing, and re-querying for additional substeps.
  13. Reestablish the remote mirror and copy paths, A site to B site.
  14. Run Global Copy failover processing to the A volumes.
  15. Run Global Copy failback processing for the A volumes.
  16. Resume Global Mirror processing at site A.
    See Resuming Global Mirror processing at site A for additional substeps.
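
The following DS CLI commands sketch what the failover portion of this procedure (steps 1 through 9) can look like. This is a minimal, illustrative sequence, not a replacement for the substep topics that the procedure references: the storage image IDs (IBM.2107-75ABC12 for the local site, IBM.2107-75XYZ99 for the remote site), the LSS ID, the session number, and the volume IDs for the A, B, and C volumes are all placeholders, and the exact parameters depend on your configuration.

    # Step 1: end Global Mirror processing for the session, if the master
    # storage unit at the local site can still be reached.
    rmgmir -dev IBM.2107-75ABC12 -lss 10 -session 01

    # Step 2: check the status of Global Mirror processing.
    showgmir -dev IBM.2107-75ABC12 10

    # Step 3: at the remote site, fail over the B volumes so that they
    # become suspended Global Copy primaries (B volume:A volume pair).
    failoverpprc -dev IBM.2107-75XYZ99 -remotedev IBM.2107-75ABC12 -type gcp 1000:1000

    # Step 4: analyze the consistency group status of the B:C FlashCopy pairs.
    lsflash -l -dev IBM.2107-75XYZ99 1000:2000

    # Steps 5 and 6: depending on that analysis, either revert to the last
    # consistency group or commit the in-flight data to the C volumes.
    revertflash -dev IBM.2107-75XYZ99 1000
    commitflash -dev IBM.2107-75XYZ99 1000

    # Step 7: fast reverse restore, which copies the consistent data from
    # the C volumes back onto the B volumes (see the reverseflash command
    # reference for the required volume pair order).
    reverseflash -fast -tgtpprc -dev IBM.2107-75XYZ99 1000:2000

    # Step 8: query the relationships until the background copy completes.
    lsflash -l -dev IBM.2107-75XYZ99 1000:2000

    # Step 9: reestablish the B to C FlashCopy relationships with the
    # options that Global Mirror requires.
    mkflash -dev IBM.2107-75XYZ99 -tgtinhibit -record -persist -nocp 1000:2000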
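
The return to the local site (steps 10 through 16) might involve commands along these lines. Again, this is only a sketch under the same placeholder assumptions; the remote WWNN and the I/O port pair used with mkpprcpath are also placeholders, and the referenced substep topics describe the checks that belong between the commands.

    # Step 11: resynchronize by failing back from the B volumes, which copies
    # the changes made at the remote site back to the A volumes.
    failbackpprc -dev IBM.2107-75XYZ99 -remotedev IBM.2107-75ABC12 -type gcp 1000:1000

    # Step 12: query until the first pass is complete and the out-of-sync
    # track count reaches zero, and then quiesce your applications.
    lspprc -l -dev IBM.2107-75XYZ99 1000

    # Step 13: reestablish the remote mirror and copy paths from the A site
    # to the B site.
    mkpprcpath -dev IBM.2107-75ABC12 -remotedev IBM.2107-75XYZ99 -remotewwnn 5005076303FFD123 -srclss 10 -tgtlss 10 I0000:I0100

    # Steps 14 and 15: fail over and then fail back the A volumes so that
    # the original A to B Global Copy direction is restored.
    failoverpprc -dev IBM.2107-75ABC12 -remotedev IBM.2107-75XYZ99 -type gcp 1000:1000
    failbackpprc -dev IBM.2107-75ABC12 -remotedev IBM.2107-75XYZ99 -type gcp 1000:1000

    # Step 16: resume Global Mirror processing at site A; mkgmir starts the
    # session again (resumegmir applies if the session was only paused).
    mkgmir -dev IBM.2107-75ABC12 -lss 10 -session 01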