Installation overview for the space management client for IBM Spectrum Scale Linux systems

Before you install the space management client on IBM Spectrum Scale™ Linux systems, review both the general and the system-specific requirements. If you are installing the product for the first time, use the steps for an initial installation. Otherwise, use the steps for an upgrade.

Note:
  • HSM cluster installations are certified on the IBM® Linux Cluster 1350. See the IBM Redbooks® publication Linux Clustering with CSM and GPFS.
  • Also, review the recommendations that are provided with IBM Spectrum Scale Linux systems.

When you install the space management client on GPFS™ file systems, the installation process installs the packages that are listed in the following tables.

Table 1 lists the packages available on the installation media for Linux on x86_64 systems:

Table 1. IBM Spectrum Scale Linux x86_64 available packages
Package | What it installs | Installation directory
TIVsm-API64.x86_64.rpm | The API for Linux x86_64 (64-bit only) | /opt/tivoli/tsm/client/api/bin64
TIVsm-BA.x86_64.rpm | The IBM Spectrum Protect™ backup-archive client (command line), the administrative client (command line), and the web backup-archive client (64-bit only) | /opt/tivoli/tsm/client/ba/bin
TIVsm-HSM.x86_64.rpm | The space management client for Linux x86_64 (64-bit only) | /opt/tivoli/tsm/client/hsm/bin

Table 2 lists the packages available on the installation media for Linux on Power Systems Little Endian systems:

Table 2. Linux on Power Systems Little Endian available packages
Package | What it installs | Installation directory
TIVsm-API64.ppc64le.rpm | The API for Linux on Power Systems Little Endian systems (64-bit only) | /opt/tivoli/tsm/client/api/bin64
TIVsm-BA.ppc64le.rpm | The IBM Spectrum Protect backup-archive client (command line), the administrative client (command line), and the web backup-archive client (64-bit only) | /opt/tivoli/tsm/client/ba/bin
TIVsm-HSM.ppc64le.rpm | The space management client for Linux on Power Systems Little Endian systems (64-bit only) | /opt/tivoli/tsm/client/hsm/bin

For an initial installation, complete the following steps:

  1. If you want the GPFS policy engine to control automatic migration, you can disable the dsmmonitord and dsmscoutd automatic migration daemons. Disabling these daemons conserves system resources. To disable the automatic migration daemons, run the following command in a shell:
    export HSMINSTALLMODE=SCOUTFREE

    For information about configuring IBM Spectrum Scale integration with the space management client, see Technote 7018848.
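
    For example, assuming the client is installed with the rpm command from the same shell session (the package file name is shown only as an illustration), the variable must be exported before the installation command runs:

      export HSMINSTALLMODE=SCOUTFREE
      rpm -ivh TIVsm-HSM.x86_64.rpm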

  2. Edit the dsm.opt and dsm.sys files that are installed with the backup-archive client to configure the space management client.
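
    A minimal sketch of the two option files follows; the server name, address, port, and node name are placeholders for values from your environment, and an existing dsm.sys file might already contain a suitable server stanza:

      * dsm.sys
      SERVERNAME       myserver
        COMMMETHOD       TCPip
        TCPSERVERADDRESS spectrum-protect.example.com
        TCPPORT          1500
        NODENAME         hsmnode1
        PASSWORDACCESS   generate

      * dsm.opt
      SERVERNAME       myserver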

  3. Install the space management client on each node. For AIX® clients, see Installing the space management client for AIX systems. For Linux clients, see Installing the space management client for IBM Spectrum Scale Linux systems.
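
    For example, on a Linux x86_64 node you might install the packages from Table 1 with the rpm command; the HSM package depends on the API and backup-archive packages, so they are typically installed first (the exact file names depend on your installation media):

      rpm -ivh TIVsm-API64.x86_64.rpm
      rpm -ivh TIVsm-BA.x86_64.rpm
      rpm -ivh TIVsm-HSM.x86_64.rpm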

  4. After installation, make sure that the dsmrecalld daemon is running on at least one node.
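
    For example, one way to verify that the daemon is active is a standard process listing on the node (output format varies by distribution):

      ps -ef | grep dsmrecalld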

  5. Enable the Data Management Application Programming Interface (DMAPI) for GPFS for all file systems to which you plan to add space management. Enable DMAPI only once for each file system.
    1. Unmount all GPFS file systems on all nodes within the IBM Spectrum Scale cluster to which you plan to add space management.
    2. Activate DMAPI management for the GPFS file systems with the following command: mmchfs device -z yes, where device is the device name of the file system.

      For information about IBM Spectrum Scale commands and IBM Spectrum Scale requirements for the IBM Spectrum Protect for Space Management client, go to the IBM Spectrum Scale product information and see mmbackup command: IBM Spectrum Protect requirements.

    3. Remount all GPFS file systems on all nodes within the IBM Spectrum Scale cluster.

      The HSM daemons detect the initial state of each node and assign each node an instance number in relation to the IBM Spectrum Scale cluster definition.
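
    For example, for a file system with the device name gpfs1 (a placeholder for your own device name), the unmount, enable, and remount sequence might look like this:

      mmumount gpfs1 -a
      mmchfs gpfs1 -z yes
      mmmount gpfs1 -a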

  6. On the HSM owner nodes, add space management to each GPFS file system with the dsmmigfs command.
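
    For example, to add space management to the file system that is mounted at /gpfs1 (a placeholder mount point), you might run the following command on the designated owner node:

      dsmmigfs add /gpfs1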

  7. Use the dsmmigfs enablefailover command to enable failover of space management on the owner and source cluster nodes that participate in the failover group.
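
    For example, a minimal invocation on each participating node might look like this:

      dsmmigfs enablefailover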