Backing up the object storage

IBM Storage Scale Object Nodes and IBM Storage Protect client nodes must be available, with the object file system mounted on each node, when the backup is created. The IBM Storage Protect server must also be available.

Store any relevant cluster and file system configuration data in a safe location outside your GPFS cluster environment. This data is essential for restoring your object storage quickly, so consider keeping a copy at a site in a different geographic location for added safety.

Follow these steps to back up the object storage manually:
Remember: The sample file system used throughout this procedure is called smallfs. Replace this value with your file system name wherever necessary.
  1. Back up the cluster configuration information.
    The administrator must back up the cluster configuration. The following cluster configuration information is needed for the backup:
    • IP addresses
    • Node names
    • Roles
    • Quorum and server roles
    • Cluster-wide configuration settings from the mmchconfig command
    • Cluster manager node roles
    • Remote shell configuration
    • Mutual Secure Shell (SSH) and Remote Shell (RSH) authentication setup
    • Cluster UID
    Note: Comprehensive configuration information can be found in the mmsdrfs file.
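    For example, much of this information can be gathered with commands similar to the following and copied to your safe location (the output file names are placeholders, and the mmsdrfs file is typically located at /var/mmfs/gen/mmsdrfs):
    mmlscluster > /tmp/cluster.nodes.out
    mmlsconfig > /tmp/cluster.mmchconfig.out
    cp /var/mmfs/gen/mmsdrfs /tmp/cluster.mmsdrfs.out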
  2. Preserve disk configuration information.
    The disk configuration also needs to be preserved to recover a file system. The fundamental disk configuration information needed for a backup intended for disaster recovery is as follows:
    • The number of disk volumes that were previously available
    • The sizes of those volumes
    Important: To recover from a total file system loss, at least as much disk space as was previously available is needed for restoration.
    The image of a file system can be restored onto replacement disks only if the available disk volumes are close enough in size to the originals that all data can be restored to the new disks. The following disk configuration information is needed for the recovery:
    • Disk device names
    • Disk device sizes
    • The number of disk volumes
    • NSD server configuration
    • Disk RAID configurations
    • Failure group designations
    • The mmsdrfs file contents
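    For example, the disk and NSD configuration can be collected with commands similar to the following (the output file names are placeholders):
    mmlsnsd -X > /tmp/smallfs.nsd.out
    mmlsdisk smallfs > /tmp/smallfs.disk.out
    mmdf smallfs > /tmp/smallfs.df.out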
  3. Back up the GPFS™ file system configuration information.
    In addition to the disk information, the file system that is built on those disks has configuration information of its own, which can be captured by using the mmbackupconfig command:
    • Block size
    • Replication factors
    • Number and size of disks
    • Storage pool layout
    • Filesets and junction points
    • Policy rules
    • Quota information
    • Other file system attributes
    The file system configuration information can be backed up into a single file using a command similar to the following:
    mmbackupconfig smallfs -o /tmp/smallfs.bkpcfg.out925
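    Copy the resulting file to the safe location outside the cluster that was described earlier, for example with a command similar to the following (the remote host and destination path are placeholders):
    scp /tmp/smallfs.bkpcfg.out925 admin@backuphost.example.com:/safe/location/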
  4. Save the following IBM Storage Protect configuration files for each IBM Storage Protect client node in the same safe location outside of your GPFS cluster.
    /etc/adsm/TSM.PWD
    Contains the client password that is needed to access IBM Storage Protect. This file is present only when the IBM Storage Protect server setting of authentication is set to on.
    /opt/tivoli/tsm/client/ba/bin/dsm.sys and
    /opt/tivoli/tsm/client/ba/bin/dsm.opt
    These files contain the IBM Storage Protect client configuration.
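    For example, these files can be packed and copied to the safe location with commands similar to the following (the archive name, remote host, and destination path are placeholders):
    tar -czf /tmp/tsmclient.config.tar.gz /etc/adsm/TSM.PWD \
      /opt/tivoli/tsm/client/ba/bin/dsm.sys \
      /opt/tivoli/tsm/client/ba/bin/dsm.opt
    scp /tmp/tsmclient.config.tar.gz admin@backuphost.example.com:/safe/location/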
  5. Back up the object storage content to an IBM Storage Protect server by running the mmbackup command:
    1. Create a global snapshot by running the following command:
      mmcrsnapshot <file system device> <snapshot name>

      For example, create a snapshot that is named objects_globalsnap1 by running the following command:
      mmcrsnapshot smallfs objects_globalsnap1
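      You can verify that the snapshot was created by listing the snapshots of the file system, for example:
      mmlssnapshot smallfs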
    2. Create global and local work directories by running the following commands:
      mkdir -p /smallfs0/.es/mmbackupglobal
      mkdir -p /smallfs0/.es/mmbackuplocal
    3. Run the following command to start the snapshot-based backup:

      mmbackup <file system device> -t incremental -N <TSM client nodes> \
      -g <global work directory> \
      -s <local work directory> \
      -S <global snapshot name> --tsm-servers <tsm server> --noquote

      The \ character indicates a line continuation. For example:

      mmbackup smallfs -t incremental -N node1,node2 \
      -g /smallfs0/.es/mmbackupglobal \
      -s /smallfs0/.es/mmbackuplocal \
      -S objects_globalsnap1 --tsm-servers tsm1 --noquote
      In this example:
      -N
      Specifies the nodes that are involved in the backup process. These nodes need to be configured for the IBM Storage Protect server that is being used.
      -S
      Specifies the global snapshot name to be used for the backup.
      --tsm-servers
      Specifies which IBM Storage Protect server is used as the backup target, as specified in the IBM Storage Protect client configuration dsm.sys file.

      Several other mmbackup parameters influence the backup process and how the command handles the system load. For example, you can increase the number of backup threads per node by using the -m parameter. For the full list of available parameters, see the mmbackup command.
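
      For example, a command similar to the following adds the -m parameter to the backup command shown earlier (the value 4 is only an illustrative choice; tune it to your environment):
      mmbackup smallfs -t incremental -N node1,node2 -m 4 \
      -g /smallfs0/.es/mmbackupglobal \
      -s /smallfs0/.es/mmbackuplocal \
      -S objects_globalsnap1 --tsm-servers tsm1 --noquote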

    4. Run the following command to remove the snapshot that was created in substep 1:
      mmdelsnapshot <file system device> <snapshot name>

      You can use the following example:
      mmdelsnapshot smallfs objects_globalsnap1