Installation overview for the space management client for IBM Spectrum Scale AIX systems

Before you install the space management client on IBM Spectrum Scale AIX® systems, review both the general and the system-specific requirements. If you are installing the product for the first time, use the steps for an initial installation; otherwise, use the steps for an upgrade.

The following installation limitation applies to the space management client for IBM Spectrum Scale AIX systems:

  • On AIX 6.1 and 7.1, the space management client can be installed only in the global partition; you cannot install it in a local workload partition (WPAR). Transparent recall is supported for both the global partition and local WPARs, but using HSM commands from a local WPAR is not supported.

When you install the space management client on GPFS file systems, the installation process completes the following tasks; a verification example follows the list:

  • Stops any space management daemons that are running.
  • Removes any statement from the /etc/inittab file that loads the dsmwatchd command at system startup.
  • Removes any statement from the /var/mmfs/etc/gpfsready script file that loads the other space management daemons at IBM Spectrum Scale system startup.
  • Extracts the HSM modules.
  • Adds a statement to the /etc/inittab file that loads the dsmwatchd daemon at system startup.
  • Adds a statement to the /var/mmfs/etc/gpfsready script file that loads the other space management daemons at IBM Spectrum Scale system startup.
  • Starts the space management daemons.
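You can verify these results after installation. The following commands are a minimal sketch; the exact wording of the /etc/inittab entry can vary by release, so the grep patterns are assumptions:

    # Confirm that an inittab entry loads the watch daemon at system startup:
    grep dsmwatchd /etc/inittab
    # Confirm that the space management daemons are running:
    ps -ef | grep dsm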
Table 1 lists the packages that are available on the installation media in the /usr/sys/inst.images directory; an installation example follows the table.

Table 1. The space management client for IBM Spectrum Scale AIX systems installation packages

Package                        Installs                                                  Into this directory
tivoli.tsm.client.ba64.gpfs    The backup-archive client for IBM Spectrum Scale AIX      /usr/tivoli/tsm/client/ba/bin
tivoli.tsm.client.hsm.gpfs     The space management client for IBM Spectrum Scale AIX    /usr/tivoli/tsm/client/hsm/bin
tivoli.tsm.client.api.64bit    The API for AIX                                           /usr/tivoli/tsm/client/api/bin
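Assuming that the packages are standard installp filesets, the following sketch installs the space management client and its requisite filesets from the installation directory. The installp flags shown (-a apply, -c commit, -g install requisites, -X expand file systems, -d input directory) are standard AIX options; verify them against your AIX level:

    # Install the HSM client fileset and its requisites:
    installp -acgX -d /usr/sys/inst.images tivoli.tsm.client.hsm.gpfs
    # Confirm the installed fileset level:
    lslpp -l tivoli.tsm.client.hsm.gpfs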

For an initial installation, follow these steps:

  1. If you want the GPFS policy engine to control automatic migration, you can disable the dsmmonitord and dsmscoutd automatic migration daemons. Disabling these daemons conserves system resources. To disable the automatic migration daemons, enter the following command in a shell:
    export HSMINSTALLMODE=SCOUTFREE

    For information about configuring IBM Spectrum Scale integration with the space management client, see Technote 7018848.
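    The installation process reads this variable from the environment, so set it in the same shell session that you use to install the client. A quick check:

    export HSMINSTALLMODE=SCOUTFREE
    echo $HSMINSTALLMODE    # expected output: SCOUTFREE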

  2. Edit the dsm.opt and dsm.sys files that are installed with the backup-archive client to configure the space management client.
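    For example, a minimal dsm.sys server stanza and a matching dsm.opt entry might look like the following sketch. The server name, address, and port are placeholders, not values from this document:

    * dsm.sys: minimal server stanza (placeholder values)
    SErvername          myserver
       COMMMethod          TCPip
       TCPPort             1500
       TCPServeraddress    myserver.example.com
       PASSWORDAccess      generate

    * dsm.opt: point the client at the stanza above
    SErvername          myserver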

  3. Install the space management client on each node. For AIX clients, see Installing the space management client for AIX systems. For Linux® clients, see Installing the space management client for IBM Spectrum Scale Linux systems.

  4. After the installation, verify that the dsmrecalld daemon is running on at least one node.
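    For example, the following command lists the recall daemon processes on a node:

    ps -ef | grep dsmrecalld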

  5. Enable the Data Management Application Programming Interface (DMAPI) for GPFS on all file systems to which you plan to add space management. Enable DMAPI only once for each file system. A combined command sketch follows these substeps.
    1. Unmount all GPFS file systems on all nodes within the IBM Spectrum Scale cluster to which you plan to add space management.
    2. Activate DMAPI management for the GPFS file systems with the following command:

      mmchfs device -z yes

      For information about IBM Spectrum Scale commands and IBM Spectrum Scale requirements for the IBM Spectrum Protect for Space Management client, go to the IBM Spectrum Scale product information and see mmbackup command: requirements.

    3. Remount all GPFS file systems on all nodes within the IBM Spectrum Scale cluster.

      The HSM daemons detect the initial state of each node and assign all nodes an instance number in relation to the IBM Spectrum Scale cluster definition.
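    The following sketch combines substeps 1 through 3 for a file system with the assumed device name gpfs1:

    # Unmount the file system on all nodes in the cluster:
    mmumount gpfs1 -a
    # Enable DMAPI for the file system (needed only once per file system):
    mmchfs gpfs1 -z yes
    # Verify the setting; "Is DMAPI enabled?" must report yes:
    mmlsfs gpfs1 -z
    # Remount the file system on all nodes:
    mmmount gpfs1 -a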

  6. On the HSM owner nodes, add space management to each GPFS file system with the dsmmigfs command.
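    For example, assuming a file system that is mounted at /gpfs1, run the following command on the node that is the HSM owner of that file system:

    dsmmigfs add /gpfs1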

  7. Use the dsmmigfs enablefailover command to enable failover of space management on the owner and source cluster nodes that participate in the failover group.
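    For example, run the following commands on each owner and source node in the failover group. The query shown is a sketch to confirm the result; verify the exact option syntax against your client level:

    dsmmigfs enablefailover
    dsmmigfs query -detail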