Configuring IBM Spectrum Protect using a Spectrum Scale (GPFS) file system

The Spectrum Protect Client writes backup data through the Spectrum Protect Storage Agent (which resides on the same server as the Spectrum Protect Client) directly to the SAN storage, bypassing the Spectrum Protect Server Instance. In this topic, the SAN storage is a Spectrum Scale (GPFS) file system that is shared between the Spectrum Protect Server Instance and the Spectrum Protect Storage Agent. You can use such a shared file system to configure IBM Spectrum Protect for LAN-free backups.

Before you begin

Important: To run LAN-free operations, contact IBM Support for the initial setup, and then continue with the following steps.
  1. Run the following commands from either the administrative command-line client (dsmadmc) on the IBM Spectrum Protect server or the IBM Spectrum Protect Operations Center, by using the command builder:
    https://<IBM_Spectrum_Protect_Server_IP>:11090/oc/login
    If you do not have the IBM Spectrum Protect Operations Center configured, configure it as described in Configuring the Operations Center.

    For the installation process, see Installing the Operations Center.

  2. Only the IBM Spectrum Protect server administrator should configure the server for LAN-free operations.
  3. Do not enable client-side data compression or client-side data deduplication when you perform LAN-free backups.

Procedure

  1. Log in to the IBM Spectrum Protect server as an administrator with system privilege.
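    For example, you can open an administrative session with the dsmadmc command-line client. This is a minimal sketch; the administrator ID admin is a placeholder for your own administrator ID:

      # Start an administrative session on the IBM Spectrum Protect server
      dsmadmc -id=admin -password=<admin_password>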
  2. Ensure that the external storage is mounted as a GPFS file system and is accessible, with read/write permissions, by both the IBM Spectrum Protect server and the IBM Spectrum Protect storage agent, which runs on the client.
    1. On the host node (outside of the container), run:
      mmlsconfig
    2. Scroll down to File systems in cluster. You should see a file system named /dev/external_mnt/SAN listed.
    3. Repeat these steps on the IBM Spectrum Protect server.
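    To confirm the nodes on which the file system is mounted, you can also use the GPFS mmlsmount command. This is a minimal check; the file system name external_mnt/SAN is taken from the example above:

      # List every node on which each GPFS file system is mounted
      mmlsmount all -L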
  3. Run the following commands to enable IBM Spectrum Protect server-to-server communication:
    
    set servername <IBMSpectrumProtectServerName>         
    set serverpassword <IBMSpectrumProtectServerPassword>     
    set serverhladdress <IBMSpectrumProtectServerIPaddress>      
    set serverlladdress <IBMSpectrumProtectServerTCPport>
    The server name, password, high-level address, and low-level address must be set. The server password is not the same as the client node password. After these values are set, do not change them.
    Example:
    
    set servername tsminst1 
    set serverpa tsm
    set serverhla <IP address>
    set serverlla 1500
    Then, verify that the server was added by running the following command:
    query server <IBM_Spectrum_Protect_server_name>
  4. Define a device class and storage pool:
    1. Run the following command to define the device class:
      DEFINE DEVCLASS <device_class_name> devtype=<type> directory=<newly mounted file system on the IBM Spectrum Protect server> shared=yes mountlimit=<maximum number of files that can be opened simultaneously for input and output> maxcap=<maximum size of any data storage files that are assigned to a storage pool in this device class>

      In the example, the device class is called sscale1.

      The parameter devtype must be set to file.

      The parameter shared must be set to yes to allow the Storage Agents to write directly to the volume.

      The directory parameter specifies the location of the directory, or directories, in which the volumes reside.

      Note: Multiple directories can enhance performance. Separate directories with commas.
      Note: The mountlimit value should be at least the total number of MLNs in the IBM® Integrated Analytics System (IAS) when you plan to run backups from IAS with one session. To run more sessions, multiply the number of MLNs by the number of sessions; for example, with 128 MLNs and 2 sessions, set mountlimit to at least 256. IAS currently supports 1-2 sessions.

      You can change the maxcap parameter later on.

      Example:
      
      def dev sscale1 devt=file shared=yes mountl=256 dir=/sscale11/db2bkup,/sscale12/db2bkup maxcap=64g
      The command logs the following output:
      
      ANR8400I Library SSCALE1 defined.
      ANR8404I Drive SSCALE11 defined in library SSCALE1.
      ANR8404I Drive SSCALE12 defined in library SSCALE1.
      .
      .
      ANR8404I Drive SSCALE1256 defined in library SSCALE1.
      ANR2203I Device class SSCALE1 defined.

      The device class, the FILE library, and 256 logical drives are defined.

      To verify that all the logical drives and the library are defined, issue q drive or q libr.

      For more information: DEFINE DEVCLASS (Define a FILE device class)
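      To confirm the device class definition itself, you can query it in detail (a minimal check):

      q devclass sscale1 f=d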

    2. Define a storage pool.
      DEFINE STGPOOL <storage_pool_name> <device_class_name> maxscratch=<maximum number of scratch volumes> pooltype=primary collocate=<how data is collocated on volumes> reusedelay=<number of days that must elapse after all files are deleted from a volume before the volume can be rewritten>
      Example:
      def stg db2-disk sscale1 maxscr=2000 colloc=node reclaimpr=4 reclaim=100
      In the example, a storage pool named db2-disk is defined with the sscale1 device class. It uses a maximum of 2000 file volumes.
      The maxscratch parameter controls how many volumes can be created on the file system. Each volume has the size that is specified in the maxcap parameter of the define devclass command. Set maxscratch so that the volumes use around 85-90% of the free file system space; for example, with maxcap=64g, maxscratch=2000 allows up to 128 TB of volumes, which is about 85% of a 150 TB file system. That way, if the file system begins to fill, you can increase maxscratch to fit a little more data into the storage pool while you either add capacity or clean it up. After the file system capacity is fixed, set maxscratch so that no more than 85-90% of the free space is used.
      Note: To enhance performance and manage disk space, define the storage pool to run four reclamation processes (reclaimpr=4) instead of automatic reclaim with a single process. reclaim=100 means that a volume must be 100% reclaimable (empty of active data) before it is automatically reclaimed by the Server Instance.
      For more information, see: DEFINE STGPOOL (Define a primary storage pool assigned to sequential access devices).
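      To confirm the storage pool definition (a minimal check):

      q stgpool db2-disk f=d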
  5. Define a backup domain and its retention policies that the LAN-free client nodes can use.
    1. Define a db2_backups backup domain:
      def do db2_backups

      For more information, see DEFINE DOMAIN (Define a new policy domain).

    2. Define a policy set under the db2_backups domain.
      Name the policy set the same as the domain:

      def policyset db2_backups db2_backups

      For more information, see DEFINE POLICYSET (Define a policy set).

    3. Under the db2_backups db2_backups policy set, define a default management class named database.
      Add the migdest parameter to avoid a warning from being displayed when the policy set is activated:
      def mg db2_backups db2_backups database migdest=db2-disk

      Under the default management class, there are two copy groups: backup and archive. These copy groups are where the retention rules are defined.
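      Note: A policy set cannot be activated without a default management class. If database is not yet the default for this policy set, assign it first (a standard Spectrum Protect command, shown here as a suggested additional step):

      assign defmgmtclass db2_backups db2_backups database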

    4. Activate the policy set:
      act po db2_backups db2_backups
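      To verify the retention rules that are now in effect, you can query the copy groups of the ACTIVE policy set (a minimal check):

      q copygroup db2_backups active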
  6. Register IBM Spectrum Protect Client to IBM Spectrum Protect Server as described in REGISTER NODE (Register a node).

    Command:

    register node <node_name> <node_password> do=<domain> backdel=yes maxnummp=<number of sessions that are allowed to access the server from this node>

    Use the password passw0rd. By default, the password expires after 90 days.

    Set do to db2_backups to ensure that the nodes are in the db2_backups domain.

    Set backdel to yes so that Db2 Warehouse can control the deletion of backups.

    Set maxnummp to 8 or higher to allow 8 or more simultaneous mount points.

    Example:
    
    reg node node0101-fab passw0rd do=db2_backups backdel=yes maxnummp=8
    reg node node0102-fab passw0rd do=db2_backups backdel=yes maxnummp=8
    reg node node0103-fab passw0rd do=db2_backups backdel=yes maxnummp=8
    reg node node0104-fab passw0rd do=db2_backups backdel=yes maxnummp=8
    Important: You must register all the nodes that are part of the container.
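    To confirm a registration and its maxnummp setting, you can query the node in detail (a minimal check):

    q node node0101-fab f=d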
  7. Define each Storage Agent on the IBM Spectrum Protect server by running the following command:
    def ser <storage_agent_name> serverpassword=<password> hladdress=<storage agent IP address> lladdress=<TCP port on which the storage agent listens>
    
    def ser node0101_sta serverpa=passw0rd hla=<IP address> ssl=yes lla=1500
    def ser node0102_sta serverpa=passw0rd hla=<IP address> ssl=yes lla=1500
    def ser node0103_sta serverpa=passw0rd hla=<IP address> ssl=yes lla=1500
    def ser node0104_sta serverpa=passw0rd hla=<IP address> ssl=yes lla=1500
    Note: DEFINE SERVER refers to the storage agent name of each node, not to the IBM Spectrum Protect server name.
  8. Define a library.
    Note: This step is optional because you can use an existing library.
    Follow DEFINE LIBRARY (Define a library) according to the type of library that you use.
    Example: define a FILE library by running the following command:
    define library SHARED_FILE_DC libtype=file shared=yes
  9. Define path of the Storage Agents.
    The IBM Spectrum Protect server tells each Storage Agent how to write to the storage pool by telling it which path it can use to write to the shared location. For every logical drive, you need a path definition. The server keeps track of the paths to all the shared logical and physical drives. See DEFINE PATH (Define a path when the destination is a drive).
    DEFINE PATH <storage_agent_name> <drive_name_#> srctype=<source type> desttype=<destination type> device=<device type> library=<library name> directory=<shared directory>
    Example:
    
    def path node0101_sta sscale11 srct=server destt=drive libr=sscale1 devi=file dir=/external_mnt/SAN
    def path node0101_sta sscale12 srct=server destt=drive libr=sscale1 devi=file dir=/external_mnt/SAN
    .
    .
    def path node0101_sta sscale1256 srct=server destt=drive libr=sscale1 devi=file dir=/external_mnt/SAN
    Note: The preceding command must be run once for each drive that is defined under the mountlimit parameter. For example, if you set the parameter to 30, you must run the command 30 times, once per drive.
    You must define the paths on the IBM Spectrum Protect server by using the disk device names as seen by the storage agent on each client system, which in this example is /external_mnt/SAN.
    Important: On IBM Integrated Analytics System, the path to use is the mount point of the IBM Spectrum Scale (GPFS) file system inside each container.
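    Because the number of path definitions matches the mountlimit value, you can generate them with a short shell loop instead of typing each command. This is a sketch under the example's assumptions: 256 drives named sscale11 through sscale1256, storage agent node0101_sta, library sscale1, and directory /external_mnt/SAN:

    # Generate one DEFINE PATH command per logical drive into a macro file
    for i in $(seq 1 256); do
      echo "def path node0101_sta sscale1${i} srct=server destt=drive libr=sscale1 devi=file dir=/external_mnt/SAN"
    done > define_paths.macro
    # From an administrative dsmadmc session, run the macro:
    # macro define_paths.macro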
  10. Configure the Storage Agent on the IBM Spectrum Protect client side.
    1. Log in to the Docker container:
      ssh bluadmin@<nodename> -p 50022
    2. Run the following command to validate that the server and the storage agent are communicating:
      netstat -an | grep :1500 

      This command shows the connections on the storage agent's TCP/IP port (1500).
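      You can also confirm that the storage agent process is running inside the container (a minimal check; dsmsta is the storage agent daemon):

      # Verify that the storage agent daemon is running
      ps -ef | grep dsmsta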

  11. On the IBM Spectrum Protect server, run the following commands to test whether the configuration completed successfully:
    1. Show the number of sessions from the Spectrum Storage Agents. The number depends on the number of configured Spectrum Storage Agents.
      q sess
    2. Show the settings from the other side for the IBM Spectrum Protect server and the Spectrum Storage Agent. This command routes the q sess command to the Spectrum Storage Agent that is specified by sta_name.
      sta_name:q sess
    3. Check whether the Spectrum Storage Agent can be pinged successfully, where sta_name is the name of the Spectrum Storage Agent.
      ping server sta_name
    4. Check whether the IBM Spectrum Protect server can be pinged successfully, where sta_name is the name of the Spectrum Storage Agent, and name is the name of the IBM Spectrum Protect server.
      sta_name:ping server name

    If any of these tests fail, update the session security setting for each client node by running the following command:

    update node <node_name> sessionsecurity=transitional
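    For the four example nodes, you can generate the update commands with a short loop (a sketch that uses the node names registered earlier):

    # Generate one UPDATE NODE command per client node into a macro file
    for i in 01 02 03 04; do
      echo "update node node01${i}-fab sessionsecurity=transitional"
    done > update_nodes.macro
    # From an administrative dsmadmc session, run: macro update_nodes.macro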