Restore file protocol configuration for new primary

The following example shows how the file protocol configuration is restored. To restore the file exports on the old primary during restore, run the following command: mmcesdr primary restore --file-config --restore. The completion of failback on the secondary, where the NFS transport export is re-created, can also be performed by running this command: mmcesdr secondary failback --post-failback-complete --file-config --restore. Use the following steps to fail back to a new primary cluster in an IBM Spectrum Scale™ cluster with protocols.

  1. On the old secondary cluster, use the following command to prepare recovery snapshots that contain data that will be transferred to the new primary cluster.
    mmcesdr secondary failback --generate-recovery-snapshots --output-file-path "/root/" 
    --input-file-path "/root/"

    The system displays output similar to the following:

    Performing step 1/2, generating recovery snapshots for all AFM DR acting primary filesets.
    Transfer all data under snapshot located on acting primary cluster at: 
    /gpfs/fs0/combo1/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-1 to fileset link point of 
    fileset fs0:combo1 on new primary cluster.
    Transfer all data under snapshot located on acting primary cluster at: 
    /gpfs/fs0/combo2/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-2 to fileset link point of 
    fileset fs0:combo2 on new primary cluster.
    Transfer all data under snapshot located on acting primary cluster at: 
    /gpfs/fs0/nfs-ganesha1/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-3 to fileset link
    point of fileset fs0:nfs-ganesha1 on new primary cluster.
    Transfer all data under snapshot located on acting primary cluster at: 
    /gpfs/fs0/nfs-ganesha2/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-4 to fileset link
    point of fileset fs0:nfs-ganesha2 on new primary cluster.
    Transfer all data under snapshot located on acting primary cluster at: 
    /gpfs/fs0/smb1/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-5 to fileset link point of
    fileset fs0:smb1 on new primary cluster.
    Transfer all data under snapshot located on acting primary cluster at: 
    /gpfs/fs0/smb2/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-6 to fileset link point of
    fileset fs0:smb2 on new primary cluster.
    Transfer all data under snapshot located on acting primary cluster at: 
    /gpfs/fs1/.async_dr/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DECB-2 to fileset link
    point of fileset fs1:async_dr on new primary cluster.
    Transfer all data under snapshot located on acting primary cluster at: 
    /gpfs/fs1/obj_sofpolicy1/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DECB-3 to fileset link
    point of fileset fs1:obj_sofpolicy1 on new primary cluster.
    Transfer all data under snapshot located on acting primary cluster at: 
    /gpfs/fs1/obj_sofpolicy2/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DECB-4 to fileset link
    point of fileset fs1:obj_sofpolicy2 on new primary cluster.
    Transfer all data under snapshot located on acting primary cluster at: 
    /gpfs/fs1/object_fileset/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DECB-1 to fileset link 
    point of fileset fs1:object_fileset on new primary cluster.
    Successfully completed step 1/2, generating recovery snapshots for all AFM DR acting primary
    filesets.
    Performing step 2/2, creation of recovery output file for failback to new primary.
    Successfully completed step 2/2, creation of recovery output file for failback to new primary.
    
    File to be used with new primary cluster in next step of failback to new primary cluster:
    /root//DR_Config
  2. Transfer the newly created DR configuration file to the new primary cluster.
    scp /root//DR_Config zippleback-vm1:/root/

    The system displays output similar to the following:

    root@zippleback-vm1's password:
    DR_Config 100% 1996 2.0KB/s 00:00
  3. On the new primary cluster, use the following command to create the independent filesets that will receive the data transferred from the recovery snapshots.
    mmcesdr primary failback --prep-outband-transfer --input-file-path "/root/"

    The system displays output similar to the following:

    Creating independent filesets to be used as recipients of AFM DR outband transfer of data.
    Successfully completed creating independent filesets to be used as recipients of AFM DR outband
    transfer of data.

Transfer data from the recovery snapshots through outband trucking to the newly created independent filesets before proceeding to the next step.
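Each transfer listed in the recovery-snapshot output above follows the same pattern: copy everything under the snapshot directory to the matching fileset link point on the new primary cluster. The following sketch prints one rsync command per fileset rather than running anything; the target host name, the fileset list, and the use of rsync -X/-A to attempt extended-attribute and ACL preservation are illustrative assumptions, and -X/-A may still not cover all GPFS extended attributes.

```shell
# Hypothetical helper (not part of mmcesdr): print one rsync command per
# AFM DR fileset, based on the snapshot paths shown in the example output.
# Arguments: target host, then fileset names in fs:fileset form.
print_transfer_cmds() {
    target=$1
    shift
    for fs_fileset in "$@"; do
        fs=${fs_fileset%%:*}          # file system, e.g. fs0
        fileset=${fs_fileset#*:}      # fileset, e.g. combo1
        # Plain `rsync -av` does not carry extended attributes; -X and -A
        # attempt xattrs and ACLs but may not preserve every GPFS attribute.
        echo "rsync -aXA /gpfs/${fs}/${fileset}/.snapshots/psnap0-newprimary-base-rpo-*/ ${target}:/gpfs/${fs}/${fileset}/"
    done
}

print_transfer_cmds zippleback-vm1 fs0:combo1 fs0:combo2 fs0:smb1 fs0:smb2
```

Review the printed commands against the actual snapshot names from the mmcesdr output before running them.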

  1. Transfer data from within the recovery snapshots of the secondary cluster to the new primary cluster.
    Note: Only one transfer is shown in the example below.
    rsync -av /gpfs/fs0/smb2/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-6/*
    zippleback-vm1:/gpfs/fs0/smb2/

    The system displays output similar to the following:

    root@zippleback-vm1's password:
    sending incremental file list
    test
    
    sent 68 bytes received 31 bytes 15.23 bytes/sec
    total size is 0 speedup is 0.00
    Attention: Extra steps are required when the GPFS™ extended attributes of the transferred files must also be transferred. This example uses standard rsync, which does not transfer extended attributes.
  2. On the new primary cluster, use the following command to convert the independent filesets to primary filesets and generate a new DR configuration file. This file is used on the primary cluster for the next steps and is then transferred to the secondary cluster for use in a later step.
    mmcesdr primary failback --convert-new --output-file-path /root/ --input-file-path /root/

    The system displays output similar to the following:

    Performing step 1/2, conversion of independent filesets into new primary filesets to be used for AFM DR.
    Successfully completed step 1/2, failback to primary on all AFM DR protected filesets.
    Performing step 2/2, creation of output file for remaining failback to new primary steps.
    Successfully completed step 2/2, creation of output file for remaining failback to new primary steps.
    
    File to be used with new primary cluster in next step of failback to new primary cluster: /root//DR_Config
  3. On the new primary cluster, use the following command to start failback on all AFM DR protected filesets.
    mmcesdr primary failback --start --input-file-path "/root/"

    The system displays output similar to the following:

    Performing failback to primary on all AFM DR protected filesets.
    Successfully completed failback to primary on all AFM DR protected filesets.
    Note: The --input-file-path parameter is optional but it might be needed if access to the configuration file is not available in the configuration fileset.
  4. On the new primary cluster, use the following command one or more times, until the operation completes in less time than the RPO value that you have set.
    mmcesdr primary failback --apply-updates --input-file-path "/root/"

    The system displays output similar to the following:

    Performing apply updates on all AFM DR protected filesets.
    Longest elapsed time is for fileset fs1:obj_sofpolicy1 and is 0 Hrs. 45 Mins. 10 Secs.
    Successfully completed failback update on all AFM DR protected filesets.
    Depending on user load on the acting primary, this step may need to be performed again before
    stopping failback.
  5. On the secondary cluster (acting primary), quiesce all client operations.
  6. On the new primary cluster, use the following command one more time.
    mmcesdr primary failback --apply-updates --input-file-path "/root/"

    The system displays output similar to the following:

    Performing apply updates on all AFM DR protected filesets.
    Longest elapsed time is for fileset fs1:obj_sofpolicy1 and is 0 Hrs. 0 Mins. 16 Secs.
    Successfully completed failback update on all AFM DR protected filesets.
    Depending on user load on the acting primary, this step may need to be performed again before
    stopping failback.
    Note: The --input-file-path parameter is optional but it might be needed if access to the configuration file is not available in the configuration fileset.
  7. On the new primary cluster, use the following command to stop the failback process and convert the new primary filesets to read/write.
    mmcesdr primary failback --stop --input-file-path "/root/"

    The system displays output similar to the following:

    Performing stop of failback to primary on all AFM DR protected filesets.
    Successfully completed stop failback to primary on all AFM DR protected filesets.
  8. On the new primary cluster, use the following command to restore the protocol and export services configuration information.
    mmcesdr primary restore --new-primary --file-config --restore
    Note: The --new-primary option must be used to ensure protocol configuration is restored correctly.

    The system displays output similar to the following:

    Restoring cluster and enabled protocol configurations/exports.
    Successfully completed restoring cluster and enabled protocol configurations/exports.
    
    ================================================================================
    =  If all steps completed successfully, remove and then re-create file
    =  authentication on the Primary cluster.
    =  Once this is complete, Protocol Cluster Configuration Restore will be complete.
    ================================================================================
  9. On the primary cluster, remove the file authentication and then add it again.
  10. Transfer the updated DR configuration file from the new primary cluster to the secondary cluster.
    scp /root//DR_Config windwalker-vm1:/root/

    The system displays output similar to the following:

    root@windwalker-vm1's password:
    DR_Config 100% 2566 2.5KB/s 00:00
  11. On the secondary cluster, use the following command to register the new primary AFM IDs with the independent filesets on the secondary cluster that act as part of the AFM DR pairs.
    mmcesdr secondary failback --post-failback-complete --new-primary --input-file-path "/root/" --file-config --restore

    The system displays output similar to the following:

    Performing step 1/2, converting protected filesets back into AFM DR secondary filesets.
    Successfully completed step 1/2, converting protected filesets back into AFM DR secondary filesets.
    Performing step 2/2, restoring AFM DR-based NFS share configuration.
    Successfully completed step 2/2, restoring AFM DR-based NFS share configuration.
    
    ================================================================================
    = If all steps completed successfully, remove and then re-create file
    = authentication on the Secondary cluster.
    = Once this is complete, Protocol Cluster Failback will be complete.
    ================================================================================
  12. On the secondary cluster, remove the file authentication and then add it again.
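As a quick reference, the command sequence above can be summarized in one place. The helper below only prints an annotated cheat sheet; it does not run anything, because the steps alternate between clusters and require manual data transfer, quiescing, and authentication changes in between. The commands and "/root/" paths are taken from the example above and should be adjusted for your environment.

```shell
# Hypothetical recap helper: print the mmcesdr failback sequence from the
# procedure above, tagged with the cluster each command runs on.
failback_cheatsheet() {
    cat <<'EOF'
[old secondary] mmcesdr secondary failback --generate-recovery-snapshots --output-file-path "/root/" --input-file-path "/root/"
(manual)        transfer DR_Config to the new primary cluster
[new primary]   mmcesdr primary failback --prep-outband-transfer --input-file-path "/root/"
(manual)        transfer snapshot data to the new filesets, e.g. with rsync
[new primary]   mmcesdr primary failback --convert-new --output-file-path /root/ --input-file-path /root/
[new primary]   mmcesdr primary failback --start --input-file-path "/root/"
[new primary]   mmcesdr primary failback --apply-updates --input-file-path "/root/"   # repeat until under RPO
(manual)        quiesce client operations on the acting primary
[new primary]   mmcesdr primary failback --apply-updates --input-file-path "/root/"   # final pass
[new primary]   mmcesdr primary failback --stop --input-file-path "/root/"
[new primary]   mmcesdr primary restore --new-primary --file-config --restore
(manual)        remove and re-create file authentication on the primary
(manual)        transfer updated DR_Config to the secondary cluster
[old secondary] mmcesdr secondary failback --post-failback-complete --new-primary --input-file-path "/root/" --file-config --restore
(manual)        remove and re-create file authentication on the secondary
EOF
}

failback_cheatsheet
```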