With Microsoft Failover Cluster Manager, you can place Tivoli® Storage Manager server cluster resources into a cluster group. The Tivoli Storage Manager cluster group has a network name, an IP address, one or more physical disks, a DB2 server, and a Tivoli Storage Manager server service.
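The resources in such a cluster group can be pictured with the following minimal Python sketch. The values are hypothetical examples, not required names or addresses:

```python
# Illustrative only: the resource types that a Tivoli Storage Manager
# cluster group contains, modeled as a simple structure. All values are
# hypothetical examples.
tsm_cluster_group = {
    "network_name": "SATURN",
    "ip_address": "192.0.2.45",  # documentation address (RFC 5737)
    "physical_disks": ["S:"],
    "services": ["DB2 server", "Tivoli Storage Manager server"],
}
```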
The Tivoli Storage Manager instance network name is independent of the name of the physical node on which the Tivoli Storage Manager cluster group runs, so the name remains the same as the cluster group migrates from node to node. Clients connect to a Tivoli Storage Manager server by using the Tivoli Storage Manager instance network name, rather than the Windows node name. The instance network name maps to the primary or backup node, depending on which node owns the Tivoli Storage Manager cluster group. Any client that uses Windows Internet Name Service (WINS) or directory services to locate servers can automatically track the Tivoli Storage Manager clustered server as it moves between nodes, without any modification or reconfiguration of the client.
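The following Python sketch illustrates this name-based addressing: the client resolves and connects to the instance network name, never to a physical node name. The host name is hypothetical, and the port shown is the usual Tivoli Storage Manager server TCP port (1500), which might differ in your configuration:

```python
import socket

# Hypothetical instance network name and port; 1500 is the usual
# Tivoli Storage Manager server TCP port, but your setup may differ.
INSTANCE_NAME = "saturn.example.com"
TCP_PORT = 1500

# The name resolves to the cluster IP address of the group, not to a
# physical node, so the same lookup works before and after a failover.
cluster_ip = socket.gethostbyname(INSTANCE_NAME)
print(f"{INSTANCE_NAME} currently resolves to {cluster_ip}")

# Connect by instance network name; whichever node currently owns the
# cluster group accepts the connection.
with socket.create_connection((INSTANCE_NAME, TCP_PORT), timeout=10) as conn:
    print("Connected to", conn.getpeername())
```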
Each Tivoli Storage Manager cluster group has its own disk as part of its cluster resource group, and cluster groups cannot share data with one another. Each Tivoli Storage Manager server that is configured in a cluster group keeps its database, active logs, recovery logs, and set of storage pool volumes on a separate disk that is owned by that cluster group.
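A hypothetical per-instance layout might look like the following sketch; the drive letters and paths are illustrative assumptions, not the product's required directory structure:

```python
# Illustrative only: each cluster group owns its own shared disk
# (here J: and S:), and no paths are shared between the two instances.
INSTANCE_LAYOUT = {
    "JUPITER": {
        "disk": "J:",
        "database": r"J:\tsminst1\db",
        "active_log": r"J:\tsminst1\activelog",
        "recovery_log": r"J:\tsminst1\recoverylog",
        "storage_pools": r"J:\tsminst1\stg",
    },
    "SATURN": {
        "disk": "S:",
        "database": r"S:\tsminst2\db",
        "active_log": r"S:\tsminst2\activelog",
        "recovery_log": r"S:\tsminst2\recoverylog",
        "storage_pools": r"S:\tsminst2\stg",
    },
}

# Sanity check: the instances must not share a disk.
disks = {cfg["disk"] for cfg in INSTANCE_LAYOUT.values()}
assert len(disks) == len(INSTANCE_LAYOUT), "instances must not share a disk"
```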
The following example demonstrates how Microsoft Failover Cluster Manager works for a clustered Tivoli Storage Manager server.
Assume that a clustered Tivoli Storage Manager server that is named JUPITER is running on Node Z and a clustered Tivoli Storage Manager server that is named SATURN is running on Node X. Clients connect to the Tivoli Storage Manager server JUPITER and the Tivoli Storage Manager server SATURN without knowing which node hosts their server.

If Node X fails, Node Z assumes the role of running SATURN. To a client, it is exactly as if Node X were turned off and immediately turned back on again. Clients lose all connections to SATURN, and all active transactions are rolled back to the client. Clients must reconnect to SATURN after the connection is lost. The location of SATURN is not apparent to the client.
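This reconnection behavior can be pictured with a small Python sketch. The host name, port, and retry schedule are hypothetical assumptions (the actual backup-archive client handles reconnection itself); the point is that the client simply retries the same instance network name and never needs to know which node now owns the cluster group:

```python
import socket
import time

# Hypothetical values; 1500 is the usual Tivoli Storage Manager server
# TCP port, but your configuration may differ.
INSTANCE_NAME = "saturn.example.com"
TCP_PORT = 1500
RETRY_DELAYS = [5, 10, 30, 60]  # seconds between reconnection attempts

def connect_with_retry():
    """Reconnect to the SATURN instance network name after a failover
    drops the connection. Any transaction that was rolled back must be
    resubmitted over the new connection."""
    for delay in RETRY_DELAYS:
        try:
            return socket.create_connection((INSTANCE_NAME, TCP_PORT),
                                            timeout=10)
        except OSError as exc:
            # During failover the group is briefly offline; wait and retry.
            print(f"Connection failed ({exc}); retrying in {delay}s")
            time.sleep(delay)
    raise ConnectionError(f"Could not reach {INSTANCE_NAME} after failover")

conn = connect_with_retry()
print("Reconnected to", conn.getpeername())
conn.close()
```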