Installation overview for the space management client for AIX GPFS systems
Before you install the space management client on AIX® General Parallel File Systems (GPFS™) systems, review both the general and the system-specific requirements. If you are installing the product for the first time, use the steps for an initial installation; otherwise, use the steps for an upgrade.
There are several installation limitations for the space management client for AIX GPFS systems:
- The space management client for AIX GPFS systems is not compatible with the space management client for AIX JFS2 systems or the backup-archive client for JFS2. If you have either of these clients installed and want to install the space management client for AIX GPFS systems, you must remove the JFS2 clients.
- On AIX 6.1 and 7.1, the space management client can be installed in the global partition and supports transparent recall for both global and local workstation partitions (WPARs). Using HSM commands from a local WPAR is not supported. You cannot install the space management client in a local WPAR.
When you install the space management client on GPFS file systems, the installation process does the following tasks:
- Stops any space management daemons that are running.
- Removes any statement from the /etc/inittab file that loads the dsmwatchd command at system startup.
- Removes any statement from the /var/mmfs/etc/gpfsready script file that loads the other space management daemons at GPFS system startup.
- Extracts the HSM modules.
- Adds a statement to the /etc/inittab file that loads the dsmwatchd daemon at system startup.
- Adds a statement to the /var/mmfs/etc/gpfsready script file that loads the other space management daemons at GPFS system startup.
- Starts the space management daemons.
The installation process installs the following packages:

Package | Installs | Into this directory |
---|---|---|
tivoli.tsm.client.ba64.gpfs | The backup-archive client for AIX GPFS | /usr/tivoli/tsm/client/ba/bin |
tivoli.tsm.client.hsm.gpfs | The space management client for AIX GPFS | /usr/tivoli/tsm/client/hsm/bin |
tivoli.tsm.client.api.64bit | The API for AIX | /usr/tivoli/tsm/client/api/bin |
For an initial installation, follow these steps:
- If you want the GPFS policy engine to control automatic migration, you can disable the dsmmonitord and dsmscoutd automatic migration daemons. Disabling these daemons conserves system resources. To disable the automatic migration daemons, run this command in a shell:
export HSMINSTALLMODE=SCOUTFREE
For information about configuring GPFS integration with the space management client, see technote 7018848.
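Put together, the variable must be exported in the same shell session before the installer runs. The sketch below is illustrative: the installp invocation is shown as a comment because it must run on the AIX system with the installation images available, and the image directory /tmp/tsmcli is a hypothetical path:

```shell
# Disable the dsmmonitord and dsmscoutd automatic migration daemons so
# the GPFS policy engine controls automatic migration.
export HSMINSTALLMODE=SCOUTFREE

# Then run the installer in the same shell session, for example
# (the image directory is hypothetical):
#   installp -a -d /tmp/tsmcli tivoli.tsm.client.hsm.gpfs
echo "HSMINSTALLMODE=$HSMINSTALLMODE"
```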
- Edit the dsm.opt and dsm.sys files that are installed with the backup-archive client to configure the space management client.
- Install the space management client on each node. For AIX clients, see Installing the space management client for AIX systems. For Linux clients, see Installing the space management client for Linux GPFS systems.
- After installation, make sure that the dsmrecalld daemon is running on at least one node.
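A hypothetical check for the recall daemon on one node might look like this; it is a sketch, not a documented verification command:

```shell
# Check whether the dsmrecalld daemon is running on this node.
# The bracket pattern keeps grep from matching its own command line.
ps -ef | grep '[d]smrecalld' || echo "dsmrecalld is not running on this node"
```

Repeat the check on other nodes until you find at least one running dsmrecalld daemon.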
- Enable the Data Management Application Programming Interface (DMAPI) for GPFS for all file systems to which you plan to add space management. Enable DMAPI only once for each file system.
- Unmount all GPFS file systems on all nodes within the GPFS cluster to which you plan to add space management.
- Activate DMAPI management for the GPFS file systems with the following command: mmchfs device -z yes.
For information about GPFS commands and GPFS requirements for the Tivoli® Storage Manager space management client, go to the General Parallel File Systems product information and see mmbackup command: Tivoli Storage Manager requirements.
- Remount all GPFS file systems on all nodes within the GPFS cluster.
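The unmount, DMAPI activation, and remount steps above can be sketched as follows. The device name gpfs1 is an assumption, and the GPFS commands are shown as comments because they must run on the cluster itself:

```shell
# Assumed GPFS file system device name; substitute your own.
device=gpfs1

# Run on the GPFS cluster (shown as comments in this sketch):
#   mmunmount "$device" -a    # unmount the file system on all nodes
#   mmchfs "$device" -z yes   # activate DMAPI management
#   mmmount "$device" -a      # remount the file system on all nodes
echo "DMAPI activation sequence for $device"
```

Remember that DMAPI needs to be enabled only once per file system.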
The HSM daemons detect the initial state of each node and assign all nodes an instance number in relation to the GPFS cluster definition.
- On the HSM owner nodes, add space management to each GPFS file system with the dsmmigfs command.
- Use the dsmmigfs enablefailover command to enable failover of space management on the owner and source cluster nodes that participate in the failover group.
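For example, adding space management and enabling failover on an owner node might look like the following sketch. The file system mount point /gpfs/fs1 is hypothetical, and the commands are shown as comments because they must run on a configured HSM node:

```shell
# Run on the HSM owner node (/gpfs/fs1 is a hypothetical mount point):
#   dsmmigfs add /gpfs/fs1     # add space management to the file system
#   dsmmigfs enablefailover    # enable failover within the failover group
echo "space management and failover sketch"
```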