mmcesdr command
Manages protocol cluster disaster recovery.
Synopsis
mmcesdr primary config --output-file-path FilePath --ip-list IPAddress[,IPAddress,...]
[--allowed-nfs-clients {--all | --gateway-nodes | IPAddress[,IPAddress,...]}]
[--rpo RPOValue] [--inband] [-v]
or
mmcesdr primary backup [-v]
or
mmcesdr primary restore [--new-primary] [--input-file-path FilePath] [--file-config {--recreate | --restore}] [-v]
or
mmcesdr primary update {--obj | --nfs | --smb | --ces} [-v]
or
mmcesdr primary failback --prep-outband-transfer --input-file-path FilePath [-v]
or
mmcesdr primary failback --convert-new --output-file-path FilePath --input-file-path FilePath [-v]
or
mmcesdr primary failback {--start | --apply-updates | --stop [--force]} [--input-file-path FilePath] [-v]
or
mmcesdr secondary config --input-file-path FilePath [--prep-outband-transfer] [--inband] [-v]
or
mmcesdr secondary failover [--input-file-path FilePath]
[--file-config {--recreate | --restore}] [-v]
or
mmcesdr secondary failback --generate-recovery-snapshots --output-file-path FilePath
[--input-file-path FilePath] [-v]
or
mmcesdr secondary failback --post-failback-complete [--input-file-path FilePath]
[--file-config {--recreate | --restore}] [-v]
or
mmcesdr secondary failback --post-failback-complete --new-primary --input-file-path FilePath
[--file-config {--recreate | --restore}] [-v]
Availability
Available with IBM Spectrum Scale™ Advanced Edition.
Description
Use the mmcesdr command to manage protocol cluster disaster recovery.
Use the mmcesdr primary config command to perform the initial configuration for protocols disaster recovery on the primary cluster and to generate a configuration file for use on the secondary cluster. Back up the protocol configuration data with the mmcesdr primary backup command, and restore the backed-up data with the mmcesdr primary restore command. Update the backed-up configuration information for the primary cluster with the mmcesdr primary update command. Use the mmcesdr primary failback command to fail back client operations to the primary cluster.
Use the mmcesdr secondary config command to perform the initial configuration for protocols disaster recovery on the secondary cluster, using the configuration file generated on the primary cluster. Convert the secondary read-only filesets into read-write primary filesets with the mmcesdr secondary failover command. Use the mmcesdr secondary failback command either to generate a snapshot for each acting primary fileset, or to complete the failback process and convert the acting primary filesets on the secondary cluster back into secondary filesets.
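The division of work between the two clusters described above can be sketched as a shell sequence. This is an illustrative dry run only: the run helper merely prints each invocation instead of executing it, and the paths and IP address are placeholders, not values from a real cluster.

```shell
#!/bin/sh
# Illustrative protocols-DR setup and failover flow; 'run' only prints
# each command. Replace 'echo' with direct execution on a live cluster.
run() { echo "+ $*"; }

# 1. On the primary: initial configuration, producing a DR_Config file.
run mmcesdr primary config --output-file-path /root/ --ip-list 203.0.113.10 --rpo 15

# 2. On the secondary: configure from the generated DR_Config file.
run mmcesdr secondary config --input-file-path /root/

# 3. Routinely back up protocol and CES configuration on the primary.
run mmcesdr primary backup

# 4. If the primary fails: convert secondary filesets to acting primary.
run mmcesdr secondary failover
```

The failback path (returning client operations to the primary) is covered by the mmcesdr primary failback and mmcesdr secondary failback options described under Parameters.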
For information on detailed steps for protocols disaster recovery, see Protocols disaster recovery in IBM Spectrum Scale: Advanced Administration Guide.
Parameters
- primary
- This command is run on the primary cluster.
- config
- Perform initial configuration of protocol cluster disaster recovery.
- --output-file-path FilePath
- Specifies the path to store output of the generated configuration file, which is always named DR_Config.
- --ip-list IPAddress[,IPAddress,...]
- Specifies a comma-separated list of public IP addresses on the secondary cluster to be used for active file management (AFM) DR related NFS exports.
- --allowed-nfs-clients {--all | --gateway-nodes | IPAddress[,IPAddress,...]}
- Optional. Specifies the entities that can connect to the AFM DR related NFS shares, where:
- --all
- Specifies that all clients must be allowed to connect to the AFM DR related NFS shares. If omitted, the default value of --all is used.
- --gateway-nodes
- Specifies the gateway nodes currently defined on the primary that must be allowed to connect to the AFM DR related NFS shares.
- IPAddress[,IPAddress,...]
- Specifies the comma separated list of IP addresses that must be allowed to connect to the AFM DR related NFS shares.
- --rpo RPOValue
- Optional. Specifies the integer value of recovery point objective (RPO) to use for AFM DR filesets. If omitted, the default value of 15 is used. The valid range is: 5 <= RPO <= 2147483647.
- --inband
- Optional. Specifies to use the inband (across the WAN) method of initial data transfer from primary to secondary cluster. If omitted, the default value of outband is used.
- backup
- Backs up all protocol configuration and CES configuration into a dedicated, independent fileset with each protocol in its own subdirectory.
- restore
- Restores the object, NFS, and SMB protocol configuration and the CES configuration from the backed-up configuration data.
- --new-primary
- Optional. Performs the restore operation on a newly failed back primary cluster.
- --input-file-path FilePath
- Optional. Specifies the original configuration file that was used to set up the secondary cluster. If not specified, the file saved in the configuration independent fileset is used by default.
- --file-config {--recreate | --restore}
- Optional. Specifies whether the SMB and NFS exports are re-created or the entire protocol configuration is restored. If not specified, the SMB and NFS exports are re-created by default.
- update
- Updates the backed up copy of the protocol configuration or CES
configuration.
- --obj
- Specifies the backed up copy of the object protocol configuration to be updated with the current object configuration.
- --nfs
- Specifies the backed up copy of the IBM® NFSv4 stack protocol configuration to be updated with the current IBM NFSv4 stack configuration.
- --smb
- Specifies the backed up copy of the SMB protocol configuration to be updated with the current SMB configuration.
- --ces
- Specifies the backed up copy of the CES configuration to be updated with the current CES configuration.
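As a sketch, after changing an export or a cluster setting you would refresh the corresponding backed-up copy with the matching flag. This is a dry run for illustration: the run helper only prints the commands, and the comments describe typical (assumed) triggers rather than prescribed procedure.

```shell
#!/bin/sh
# Dry-run sketch of refreshing the backed-up configuration after changes;
# 'run' only prints the command that would be executed.
run() { echo "+ $*"; }

run mmcesdr primary update --nfs   # after modifying an NFS export
run mmcesdr primary update --smb   # after modifying an SMB share
run mmcesdr primary update --obj   # after changing object configuration
run mmcesdr primary update --ces   # after changing CES settings
```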
- failback
- Used with several options to fail back client operations to a primary cluster. Failback involves transferring data from the acting primary (secondary) cluster to the old primary cluster, restoring protocol and possibly CES configuration information, and converting protected filesets into primary filesets.
- --prep-outband-transfer
- Creates the independent filesets to which out-of-band data will be transferred.
- --input-file-path FilePath
- Specifies the configuration file that is the output from the mmcesdr secondary failback --generate-recovery-snapshots command.
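When failing back to a replacement primary, the step above prepares the new cluster to receive the out-of-band data. A minimal dry-run sketch, with the path as a placeholder for wherever the DR_Config produced by the recovery-snapshot step was copied:

```shell
#!/bin/sh
# Dry-run sketch: on the new primary, create the independent filesets
# that will receive the out-of-band data; 'run' only prints the command.
run() { echo "+ $*"; }

run mmcesdr primary failback --prep-outband-transfer --input-file-path /root/
```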
- failback
- Used with several options to fail back client operations to a primary cluster. Failback involves transferring data from the acting primary (secondary) cluster to the old primary cluster, restoring protocol and possibly CES configuration information, and converting protected filesets into primary filesets.
- --convert-new
- Specifies that failback targets a new primary cluster rather than the old primary; this step converts the newly created independent filesets into primary AFM DR filesets.
- --output-file-path FilePath
- Specifies the path to store output of generated configuration file, DR_Config, with the new AFM primary IDs.
- --input-file-path FilePath
- Specifies the configuration file that is the output from the mmcesdr secondary failback --generate-recovery-snapshots command.
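After the out-of-band data transfer, the conversion step above runs on the new primary and emits an updated DR_Config. A dry-run sketch with placeholder paths (both directories are assumptions for illustration):

```shell
#!/bin/sh
# Dry-run sketch: convert the freshly populated independent filesets into
# primary AFM DR filesets and write a DR_Config with the new primary IDs;
# 'run' only prints the command.
run() { echo "+ $*"; }

run mmcesdr primary failback --convert-new --output-file-path /root/ --input-file-path /root/
```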
- failback
- Used with several options to fail back client operations to a primary cluster. Failback involves transferring data from the acting primary (secondary) cluster to the old primary cluster, restoring protocol and possibly CES configuration information, and converting protected filesets into primary filesets.
- --start
- Begins the failback process and restores the data to the last RPO snapshot.
- --apply-updates
- Transfers data that was written to the secondary cluster while failover was in place. Note: Depending on the system load, this option might need to be run more than once.
- --stop [--force]
- Completes the data transfer process and puts the filesets in read-write mode. If this step fails, you can retry it with the --force option. Note: In addition to using these options, after stopping the data transfer you must use the mmcesdr primary restore command to restore the protocol and CES configuration.
- --input-file-path FilePath
- Optional. Specifies the original configuration file that was used to set up the secondary cluster. If not provided, the default will be to use the one saved in the configuration independent fileset.
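The --start, --apply-updates, and --stop options above form an ordered sequence. A dry-run sketch of that order (the run helper only prints each command; the repeat count for --apply-updates depends on system load, as noted above):

```shell
#!/bin/sh
# Dry-run sketch of the failback data-transfer sequence on the primary;
# 'run' only prints each command instead of executing it.
run() { echo "+ $*"; }

run mmcesdr primary failback --start          # restore data to the last RPO snapshot
run mmcesdr primary failback --apply-updates  # repeat as needed until caught up
run mmcesdr primary failback --stop           # finish; add --force only if this fails
run mmcesdr primary restore                   # then restore protocol/CES configuration
```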
- secondary
- This command is run on the secondary cluster.
- config
- Perform initial configuration of protocol cluster disaster recovery.
- --prep-outband-transfer
- Creates the independent filesets to which out-of-band data will be transferred as part of the initial configuration. If out-of-band data transfer is used for the DR configuration, this option must be used before data is transferred from the primary to the secondary. In that case, this command is run once with this option, and then again without it after the data is transferred.
- --input-file-path FilePath
- Specifies the path of the configuration file generated from the configuration step of the primary cluster.
- --inband
- Optional. Specifies to use the inband (across the WAN) method of initial data transfer from the primary to the secondary cluster. If omitted, the default of outband is used. Note: If --inband is used for the primary configuration, it must also be used for the secondary configuration.
- failover
- Converts secondary filesets from read-only to read-write primary
filesets and converts the secondary protocol configurations to those
of the failed primary.
- --input-file-path FilePath
- Optional. Specifies the original configuration file that was used to set up the secondary cluster. If not specified, the file saved in the configuration independent fileset is used by default.
- --file-config {--recreate | --restore}
- Optional. Specifies whether the SMB and NFS exports are re-created or the entire protocol configuration is restored. If not specified, the SMB and NFS exports are re-created by default.
- failback
- Runs one of the two failback options: either generates a snapshot for each acting primary fileset, or completes the failback process and converts the acting primary filesets on the secondary cluster back into secondary filesets.
- --generate-recovery-snapshots
- Generates the psnap0 snapshot for each acting primary fileset and stores it in the default snapshot location, for use in creating a new primary cluster with new primary filesets to fail back to. The files within the snapshot must be manually transferred to the new primary.
- --output-file-path FilePath
- Specifies the path to store output of generated snapshot recovery configuration file.
- --input-file-path FilePath
- Optional. Specifies the path of the original configuration file that was used to set up the secondary cluster. If not provided, the default is to use the one saved in the configuration independent fileset.
- failback
- Runs one of the two failback options: either generates a snapshot
for each acting primary fileset or completes the failback process
and convert the acting primary filesets on the secondary cluster back
into secondary filesets
- --post-failback-complete
- Completes the failback process by converting the acting primary filesets back into secondary, read-only filesets and ensures that the proper NFS exports for AFM DR exist.
- --new-primary
- Performs the failback operation on a newly failed back primary cluster.
- --input-file-path FilePath
- Specifies the path of the updated configuration file created from the mmcesdr primary failback --convert-new command which includes updated AFM primary IDs.
- --file-config {--recreate | --restore}
- Optional. Specifies whether the SMB and NFS exports are re-created or the entire protocol configuration is restored. If not specified, the SMB and NFS exports are re-created by default.
Exit status
- 0
- Successful completion.
- nonzero
- A failure has occurred.
Security
You must have root authority to run the mmcesdr command.
The node on which the command is issued must be able to execute remote shell commands on any other node in the cluster without the use of a password and without producing any extraneous messages. See the following IBM Spectrum Scale: Administration and Programming Reference topic: Requirements for administering a GPFS file system.
Examples
- Issue the following command on the primary cluster to configure
independent fileset exports as AFM DR filesets and backup configuration
information:
mmcesdr primary config --output-file-path /root/ --ip-list "9.11.102.211,9.11.102.210" --rpo 10 --inband
The system displays output similar to this:
Performing step 1/5, configuration fileset creation/verification.
Successfully completed step 1/5, configuration fileset creation/verification.
Performing step 2/5, protocol and export services configuration backup.
Successfully completed step 2/5, protocol and export services configuration backup.
Performing step 3/5, determination of protocol exports to protect with AFM DR.
WARNING: Export /gpfs/fs0/nfs-ganesha-dep of type nfs will NOT be protected through AFM DR because it is a dependent fileset.
Not all exports of type NFS-ganesha will be protected through AFM DR, rc: 2
WARNING: Export /gpfs/fs0/smb-dep of type smb will NOT be protected through AFM DR because it is a dependent fileset.
Not all exports of type SMB will be protected through AFM DR, rc: 2
Completed with errors step 3/5, determination of protocol exports to protect with AFM DR.
Performing step 4/5, conversion of protected filesets into AFM DR primary filesets.
Successfully completed step 4/5, conversion of protected filesets into AFM DR primary filesets.
Performing step 5/5, creation of output DR configuration file.
Successfully completed step 5/5, creation of output DR configuration file.
File to be used with secondary cluster in next step of cluster DR setup: /root//DR_Config
- Issue the following command on the secondary cluster to create
the independent filesets that will be a part of the pair of AFM DR
filesets associated with those on the primary cluster:
mmcesdr secondary config --input-file-path /root/ --inband
In addition to fileset creation, this command also creates the necessary NFS exports and converts the independent filesets to AFM DR secondary filesets.
The system displays output similar to this:
Performing step 1/3, creation of independent filesets to be used for AFM DR.
Successfully completed step 1/3, creation of independent filesets to be used for AFM DR.
Performing step 2/3, creation of NFS exports to be used for AFM DR.
Successfully completed step 2/3, creation of NFS exports to be used for AFM DR.
Performing step 3/3, conversion of independent filesets to AFM DR secondary filesets.
Successfully completed step 3/3, conversion of independent filesets to AFM DR secondary filesets.
- Issue the following command on the primary cluster to configure
independent fileset exports as AFM DR filesets, back up configuration
information, and facilitate outband data transfer (which is the default):
mmcesdr primary config --output-file-path /root/ --ip-list "9.11.102.211,9.11.102.210" --rpo 10
The system displays output similar to this:
Performing step 1/5, configuration fileset creation/verification.
Successfully completed step 1/5, configuration fileset creation/verification.
Performing step 2/5, protocol and export services configuration backup.
Successfully completed step 2/5, protocol and export services configuration backup.
Performing step 3/5, determination of protocol exports to protect with AFM DR.
Successfully completed step 3/5, determination of protocol exports to protect with AFM DR.
Performing step 4/5, conversion of protected filesets into AFM DR primary filesets.
Successfully completed step 4/5, conversion of protected filesets into AFM DR primary filesets.
Performing step 5/5, creation of output DR configuration file.
Successfully completed step 5/5, creation of output DR configuration file.
File to be used with secondary cluster in next step of cluster DR setup: /root//DR_Config
- Issue the following command on the secondary cluster to create
the independent filesets that will later be paired with those on the
primary cluster to form AFM DR pairs as part of failing back to a
new primary cluster:
mmcesdr secondary config --input-file-path /root --prep-outband-transfer
The system displays output similar to this:
Creating independent filesets to be used as recipients of AFM DR outband transfer of data.
Transfer all data on primary cluster for fileset fs0:combo1 to fileset fs0:combo1 on secondary cluster.
Transfer all data on primary cluster for fileset fs0:combo2 to fileset fs0:combo2 on secondary cluster.
Transfer all data on primary cluster for fileset fs0:nfs-ganesha1 to fileset fs0:nfs-ganesha1 on secondary cluster.
Transfer all data on primary cluster for fileset fs0:nfs-ganesha2 to fileset fs0:nfs-ganesha2 on secondary cluster.
Transfer all data on primary cluster for fileset fs0:smb1 to fileset fs0:smb1 on secondary cluster.
Transfer all data on primary cluster for fileset fs0:smb2 to fileset fs0:smb2 on secondary cluster.
Transfer all data on primary cluster for fileset fs1:async_dr to fileset fs1:async_dr on secondary cluster.
Transfer all data on primary cluster for fileset fs1:obj_sofpolicy1 to fileset fs1:obj_sofpolicy1 on secondary cluster.
mmcesdr: CES Object protocol is not enabled but there is an object related export present. Skipping clearing out the object related files and directories from export.
Transfer all data on primary cluster for fileset fs1:obj_sofpolicy2 to fileset fs1:obj_sofpolicy2 on secondary cluster.
mmcesdr: CES Object protocol is not enabled but there is an object related export present. Skipping clearing out the object related files and directories from export.
Transfer all data on primary cluster for fileset fs1:object_fileset to fileset fs1:object_fileset on secondary cluster.
mmcesdr: CES Object protocol is not enabled but there is an object related export present. Skipping clearing out the object related files and directories from export.
Successfully completed creating independent filesets to be used as recipients of AFM DR outband transfer of data.
Transfer data from primary cluster through outbound trucking to the newly created independent filesets before proceeding to the next step.
- After all the data has been transferred to the secondary, issue
the following command to complete the setup on the secondary:
mmcesdr secondary config --input-file-path /root
The system displays output similar to this:
Performing step 1/3, verification of independent filesets to be used for AFM DR.
Successfully completed step 1/3, creation of independent filesets to be used for AFM DR.
Successfully completed 1/3, verification of independent filesets to be used for AFM DR.
Performing step 2/3, creation of NFS exports to be used for AFM DR.
Successfully completed step 2/3, creation of NFS exports to be used for AFM DR.
Performing step 3/3, conversion of independent filesets to AFM DR secondary filesets.
Successfully completed step 3/3, conversion of independent filesets to AFM DR secondary filesets.
- Issue the following command on the secondary cluster after the
primary cluster has failed:
mmcesdr secondary failover
The system displays output similar to this:
Performing step 1/4, saving current NFS configuration to restore after failback.
Successfully completed step 1/4, saving current NFS configuration to restore after failback.
Performing step 2/4, failover of secondary filesets to primary filesets.
Successfully completed step 2/4, failover of secondary filesets to primary filesets.
Performing step 3/4, protocol configuration/exports restore.
Successfully completed step 3/4, protocol configuration/exports restore.
Performing step 4/4, create/verify NFS AFM DR transport exports.
Successfully completed step 4/4, create/verify NFS AFM DR transport exports.
- Issue the following command on the secondary cluster to prepare
recovery snapshots that contain data that will be transferred to the
new primary cluster:
mmcesdr secondary failback --generate-recovery-snapshots --output-file-path "/root/" --input-file-path "/root/"
The system displays output similar to this:
Performing step 1/2, generating recovery snapshots for all AFM DR acting primary filesets.
Transfer all data under snapshot located on acting primary cluster at: /gpfs/fs0/combo1/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-1 to fileset link point of fileset fs0:combo1 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at: /gpfs/fs0/combo2/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-2 to fileset link point of fileset fs0:combo2 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at: /gpfs/fs0/nfs-ganesha1/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-3 to fileset link point of fileset fs0:nfs-ganesha1 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at: /gpfs/fs0/nfs-ganesha2/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-4 to fileset link point of fileset fs0:nfs-ganesha2 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at: /gpfs/fs0/smb1/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-5 to fileset link point of fileset fs0:smb1 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at: /gpfs/fs0/smb2/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DEBF-6 to fileset link point of fileset fs0:smb2 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at: /gpfs/fs1/.async_dr/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DECB-2 to fileset link point of fileset fs1:async_dr on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at: /gpfs/fs1/obj_sofpolicy1/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DECB-3 to fileset link point of fileset fs1:obj_sofpolicy1 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at: /gpfs/fs1/obj_sofpolicy2/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DECB-4 to fileset link point of fileset fs1:obj_sofpolicy2 on new primary cluster.
Transfer all data under snapshot located on acting primary cluster at: /gpfs/fs1/object_fileset/.snapshots/psnap0-newprimary-base-rpo-090B66F65623DECB-1 to fileset link point of fileset fs1:object_fileset on new primary cluster.
Successfully completed step 1/2, generating recovery snapshots for all AFM DR acting primary filesets.
Performing step 2/2, creation of recovery output file for failback to new primary.
Successfully completed step 2/2, creation of recovery output file for failback to new primary.
File to be used with new primary cluster in next step of failback to new primary cluster: /root//DR_Config
- Issue the following command on the primary cluster to restore
the protocol and export services configuration information:
mmcesdr primary restore --new-primary
The system displays output similar to this:
Restoring cluster and enabled protocol configurations/exports.
Successfully completed restoring cluster and enabled protocol configurations/exports.
- Issue the following command on the secondary cluster to restore
the protocol and export services configuration information:
mmcesdr secondary failback --post-failback-complete --new-primary --input-file-path "/root"
The system displays output similar to this:
Performing step 1/2, converting protected filesets back into AFM DR secondary filesets.
Successfully completed step 1/2, converting protected filesets back into AFM DR secondary filesets.
Performing step 2/2, restoring AFM DR-based NFS share configuration.
Successfully completed step 2/2, restoring AFM DR-based NFS share configuration.
- Issue the following command on the primary cluster to back up
configuration:
mmcesdr primary backup
The system displays output similar to this:
Performing step 1/2, configuration fileset creation/verification.
Successfully completed step 1/2, configuration fileset creation/verification.
Performing step 2/2, protocol and export services configuration backup.
Successfully completed step 2/2, protocol and export services configuration backup.
- Issue the following command on the primary cluster to restore
configuration when the primary cluster is not in a protocols DR relationship
with another cluster:
mmcesdr primary restore --file-config --restore
The system displays output similar to this:
Restoring cluster and enabled protocol configurations/exports.
Successfully completed restoring cluster and enabled protocol configurations/exports.
================================================================================
= If all steps completed successfully, remove and then re-create file
= authentication on the Primary cluster.
= Once this is complete, Protocol Cluster Configuration Restore will be complete.
================================================================================