Prerequisites for the recovery site

This topic describes the preparation steps that must be done at the secondary (restore) site.

The detailed preparation steps are as follows:
  1. Allocate temporary restore staging space for the file system backup image:
    • This space is referred to as the global_directory_path on the recovery site; in the examples, it is given the same name as on the primary site.
    • It is recommended to use a separate dedicated file system.
    • Use standard GPFS methodology or the install toolkit to allocate storage for this file system.
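    • For example, the following is a minimal sketch of creating a dedicated staging file system with standard GPFS commands. The device path, NSD name, server names, file system name (gpfs_stage), and mount point are placeholders; substitute values appropriate for your recovery cluster:
      # stage_nsd.stanza - example stanza file describing one NSD for the staging file system
      %nsd: device=/dev/sdx nsd=stage_nsd01 servers=recnode1,recnode2 usage=dataAndMetadata failureGroup=1 pool=system

      # Create the NSD, create the file system, and mount it on all nodes
      mmcrnsd -F stage_nsd.stanza
      mmcrfs gpfs_stage -F stage_nsd.stanza -T /gpfs_stage -A yes
      mmmount gpfs_stage -a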
  2. Verify that sufficient back-end storage space exists on the recovery site for the recovered file systems:
    Note: Each file system on the recovery site needs at least as much capacity as its corresponding primary-site file system. (Actual file system creation takes place in a later step.)
    1. On the primary cluster, run the mmdf command for each file system to determine the amount of space required for the matching recovery-site file system (look for the total blocks in the second column).
    2. If you need to size separate metadata and data disks, look at the data and metadata distribution in the second column of the primary-site output. For example,
      mmdf gpfs_tctbill1 | egrep '(data)|(metadata)|failure|fragments|total|- ----- -|====='
      
      disk                disk size  failure holds    holds           free in KB          free in KB
      name                    in KB    group metadata data        in full blocks        in fragments
      --------------- ------------- -------- -------- ----- -------------------- -------------------
      (pool total)      15011648512                           13535170560 ( 90%)      53986848 ( 0%)
      =============                         ==================== ===================
      (data)            12889330688                           12675219456 ( 98%)      53886584 ( 0%)
      (metadata)         2122317824                             859951104 ( 41%)        100264 ( 0%)
      =============                         ==================== ===================
      (total)           15011648512                           13535170560 ( 90%)      53986848 ( 0%)
      Note: NSD details are filtered out of this example. Sizes are displayed in 1 KB blocks; use '--block-size auto' to show them in a human-readable format.
    3. Use the previous information as a guide for allocating NSDs on the recovery site and preparing stanza files for each file system.
      Note: It is preferable, but not required, to have the same number, size, and type of NSDs for each file system on the recovery site as on the primary site. Matching the primary site simply makes the auto-generated stanza file easier to modify in the recovery portion of this process.
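      For example, the following is a sketch of a stanza file for a recovery-site file system that uses separate metadata and data NSDs. The device paths, NSD names, and server names are placeholders for your environment:
        %nsd: device=/dev/sdm1 nsd=rec_meta01 servers=recnode1,recnode2 usage=metadataOnly failureGroup=1
        %nsd: device=/dev/sdd1 nsd=rec_data01 servers=recnode1,recnode2 usage=dataOnly failureGroup=1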
  3. Ensure that there are no preexisting cloud services node classes on the recovery site, and that the node classes that you create are clean and unused.
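    For example, you can list the node classes that are currently defined on the recovery cluster with mmlsnodeclass; if a stale cloud services node class exists, remove it with mmdelnodeclass before proceeding (node class name shown is illustrative):
      mmlsnodeclass
      mmdelnodeclass TCTNodeClassPowerLE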
  4. Create cloud services node classes on the recovery site by using the same node class name as the primary site. For more information, see Creating a user-defined node class for transparent cloud tiering or cloud data sharing.
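    For example, assuming the primary site uses the node class name TCTNodeClassPowerLE and the recovery-site cloud services nodes are tctnode_ip1 through tctnode_ip4 (placeholders):
      mmcrnodeclass TCTNodeClassPowerLE -N tctnode_ip1,tctnode_ip2,tctnode_ip3,tctnode_ip4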
  5. Install (or update) the cloud services server RPM on all cloud services nodes on the recovery site. For more information, see Installation steps.
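    For example, assuming the server package file is named gpfs.tct.server-*.rpm (the exact file name depends on the release and platform you are installing), run the following on each cloud services node:
      rpm -Uvh gpfs.tct.server-*.rpm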
  6. Enable cloud services on the appropriate nodes of the recovery site. For example,
    
    mmchnode --cloud-gateway-enable -N <tctnode_ip1,tctnode_ip2,tctnode_ip3,tctnode_ip4> --cloud-gateway-nodeclass TCTNodeClassPowerLE
    For more information, see Designating the cloud services nodes.
  7. Ensure that there is no active cloud services configuration on the recovery site.
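    For example, assuming the mmcloudgateway command set available in your release, the following checks should report no configured cloud accounts and no running cloud services on the recovery site:
      mmcloudgateway account list
      mmcloudgateway service status -N TCTNodeClassPowerLE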
  8. If this is an actual disaster and you are transferring ownership of the cloud services to the recovery cluster, ensure that all write activity from the primary site is suspended while the recovery site has ownership of cloud services.