IBM Tivoli Storage Manager, Version 7.1

Configuring shared access example

Shared access must be configured according to the nodes on the server and the relationships between the nodes.

About this task

The following example shows how to set up proxy node authority for shared access. In the example, client agent nodes NODE_1, NODE_2, and NODE_3 all share the same General Parallel File System (GPFS™). Because the file space is so large, it is neither practical nor cost effective to back up this file system from a single client node. By using Tivoli® Storage Manager proxy node support, the very large file system can be backed up by the three agent nodes for the target NODE_GPFS. The backup effort is divided among the three nodes. The end result is that NODE_GPFS has a backup from a given point in time.
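Dividing the backup effort among the agent nodes means pointing each node at a different part of the file system. As a minimal sketch of that idea (the directory names and the round-robin split are illustrative assumptions, not part of the product), the work assignment might look like this:

```python
# Sketch: divide top-level GPFS directories among three agent nodes.
# The directory names and the round-robin strategy are assumptions
# for illustration; any non-overlapping partition would work.
AGENT_NODES = ["node_1", "node_2", "node_3"]

def assign_directories(directories, agents):
    """Assign each directory to an agent node in round-robin order."""
    assignment = {agent: [] for agent in agents}
    for i, directory in enumerate(directories):
        assignment[agents[i % len(agents)]].append(directory)
    return assignment

dirs = ["/gpfs/projects", "/gpfs/home", "/gpfs/scratch", "/gpfs/archive"]
work = assign_directories(dirs, AGENT_NODES)
# work["node_1"] -> ["/gpfs/projects", "/gpfs/archive"]
```

Because every agent backs up its subset under the single target node NODE_GPFS, the server ends up with one coherent point-in-time backup of the whole file system.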

All settings used in the proxy node session are determined by the definitions of the target node, in this case NODE_GPFS. For example, any settings for DATAWRITEPATH or DATAREADPATH are determined by the target node, not the agent nodes (NODE_1, NODE_2, NODE_3).

Assume that NODE_1, NODE_2, and NODE_3 each need to run an incremental backup and store all of the data under NODE_GPFS on the server.

Procedure

Perform the following steps to set up proxy node authority for shared access:

  1. Define four nodes on the server: NODE_1, NODE_2, NODE_3, and NODE_GPFS. Issue the following commands:
    register node node_1 mysecretpa5s
    register node node_2 mysecret9pas
    register node node_3 mypass1secret
    register node node_gpfs myhiddp3as
  2. Define a proxy node relationship among the nodes by issuing the following command:
    grant proxynode target=node_gpfs agent=node_1,node_2,node_3
  3. Specify the node name and the proxied node name in the dsm.sys file on each agent node. See the Backup-Archive Clients Installation and User's Guide for more information about the NODENAME and ASNODENAME client options. For example, the dsm.sys file on NODE_1 contains the following options:
    nodename	node_1
    asnodename	node_gpfs
  4. Optionally, define a schedule:
    define schedule standard gpfs-sched action=macro options="gpfs_script"
  5. Assign a schedule to each client node by issuing the following commands:
    define association standard gpfs-sched node_1
    define association standard gpfs-sched node_2 
    define association standard gpfs-sched node_3 
  6. Start the client scheduler on each agent node by issuing the following command:
    dsmc schedule
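In step 3, the NODENAME and ASNODENAME options belong in the server stanza of each agent node's dsm.sys file. A minimal sketch of such a stanza for NODE_1 follows; the server name, address, and port are assumptions for illustration, and NODE_2 and NODE_3 would each use their own NODENAME value with the same ASNODENAME:

```
SErvername       tsmserver
   COMMMethod        TCPip
   TCPPort           1500
   TCPServeraddress  tsm.example.com
   NODename          node_1
   ASNODename        node_gpfs
```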
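The gpfs-sched schedule in step 4 invokes a client macro named gpfs_script. A macro is a plain-text file of backup-archive client commands; a minimal sketch, assuming the file system is mounted at /gpfs, might contain a single command:

```
incremental /gpfs
```

In practice, each agent node's macro would name only the portion of the file system that node is responsible for, so that the three nodes do not back up the same files.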
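Outside of the scheduler, the same proxied backup can also be run by hand from any agent node by passing the ASNODENAME option on the command line; here the /gpfs mount point is an assumption:

```
dsmc incremental /gpfs -asnodename=node_gpfs
```

The data is stored under NODE_GPFS regardless of which agent node issues the command.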

