
Recommendations for Using LAN-free with Tivoli Storage Manager for Virtual Environments

White Papers


Abstract

Tivoli Storage Manager for Virtual Environments - Data Protection for VMware provides the ability to back up and restore virtual machine data using SAN-based data movement. SAN-based data movement is possible on two different data paths. The first data path is from the VMware datastore to the vStorage backup server, and the second is from the vStorage backup server to the TSM server. For the remainder of this document, these two data paths are referred to as the transport data path and the backup data path, respectively. This document covers setup considerations for enabling SAN data transfer on both of these data paths. The backup data path uses the Tivoli Storage Manager for Storage Area Networks feature, which is referred to as LAN-free backup for the remainder of this document.

Content

The DP for VMware product stores virtual machine full backup images as a collection of control and data files. The data files store the contents of virtual machine disk files, and the control files are small metadata files that are used during full VM restore operations and full VM incremental backups. For optimal performance, the control files need to be placed in a disk-based TSM server storage pool that is not allowed to migrate to a non-disk-based storage pool (this restriction includes virtual tape libraries, even though they are backed by disk). A typical LAN-free configuration uses a physical tape or virtual tape based storage pool. TSM also supports some LAN-free configurations with disk-based storage pools. With the disk-based configurations, the separation of control and data files is not necessary as long as the storage pools are not configured to migrate data to a tape or virtual tape storage pool.
The following configuration steps will be covered in this document:
  1. Preparation steps for the vStorage backup server.
  2. TSM server storage pool configuration.
  3. TSM server policy configuration.
  4. Setting up the TSM Storage Agent.
  5. Setting up LAN-free options for each backup instance.
  6. Testing the configuration.
An example that implements the architecture depicted in the diagram below is presented throughout the remainder of this document. The following document should also be reviewed for additional information regarding setting up multiple TSM client instances, scheduling backups, and recommendations for creating TSM node definitions.
Recommendations for Scheduling with Tivoli Storage Manager for Virtual Environments
Figure 1: Recommendations for Scheduling with Tivoli Storage Manager for Virtual Environments
Preparation steps for the vStorage backup server (Windows)
The ideal configuration will use SAN data movement for both of the data paths described in the introduction. To accomplish this, the vStorage backup server must have access to the disk LUNs which hold the VMware datastores, in addition to the tape devices which will be the backup target.
The following tasks must be completed to set up the vStorage backup server. Since the procedures involved vary significantly depending on the specific hardware in use, these steps are intentionally kept at a high level.
Please note:
The diskpart step below is necessary to prevent the potential for the backup server to damage SAN volumes which are used for raw disk mapping (RDM) virtual disks.
  1. Use the Windows diskpart.exe program to disable automatic mounting of volumes.
    diskpart -> automount disable -> exit
  2. For Windows 2008 systems, use the Windows diskpart.exe command to set the SAN policy to OnlineAll, which will automatically bring newly discovered SAN volumes on-line.
    diskpart -> san policy=OnlineAll -> exit
  3. Modify the SAN zone configuration so that the vStorage backup server has visibility to the LUNs hosting the VMware datastores and to all of the tape drives used by the TSM server storage pool that will be the LAN-free backup target. If your vStorage backup server has multiple HBAs, you can separate the VMware disk and tape traffic by placing the HBA ports on the backup server in different zones.
  4. Install the multi-path driver support package provided by the vendor of the disk subsystem which is used to back your VMware datastores.
  5. Install the tape device driver required for the specific tape technology which is used in your environment.
  6. Determine the tape drive device addresses, which will be used later with the define path command on the TSM server. (See the screen shot below.)
  7. On the storage subsystem which hosts the LUNs backing the VMware datastores, update the host assignments so that the vStorage backup server has access to the LUNs. For most subsystems, this involves defining a cluster or host group that allows multiple hosts to be assigned to the same group of disks. Many subsystems will give a warning that hosts of different types have been granted access to the same disks.
  8. Install the TSM Backup-Archive client including the VMware Backup Tools feature.
  9. Install the TDP for VMware mount interface to enable full VM incremental backups.
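Steps 1 and 2 above can also be run non-interactively. As a sketch, assuming a script file named disksetup.txt (a hypothetical file name), diskpart can consume it with the /s switch, for example diskpart /s disksetup.txt:

```text
rem disksetup.txt -- run with: diskpart /s disksetup.txt
rem Disable automatic mounting of newly discovered volumes.
automount disable
rem Windows 2008 and later only: bring new SAN volumes online automatically.
san policy=OnlineAll
```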
The Windows device manager and disk management interfaces can be used to confirm that the setup steps have been completed successfully. In the device manager, the disk drives and tape drives sections should contain the expected number of disk LUNs and tape drives.
LAN-free disk
LAN-free tape
By right-clicking and opening the properties for each of these tape drives, the tape address can be determined as shown below. In the example shown, the device address for this drive would be \\.\tape4.
LAN-free device
From the disk management view, the disk LUNs containing VMware datastores should be Online, but should not be mounted to drive letters. It is normal for the partitions to be listed as "Healthy (Primary Partition)" on Windows 2008 or "Healthy (Unknown Partition)" on Windows 2003.
Windows 2008:
Windows 2008 disk management

Windows 2003:
LAN-free disk manager

TSM server storage pool and policy configuration

Two storage pools are required on the TSM server. The first storage pool named VTLPOOL will be the primary container of virtual machine data files. The second pool named VMCTLPOOL will be created for storing control files that are used during full VM incremental backup and virtual machine restore. The amount of space used in each of these storage pools will vary based on the size of the virtual disks.
Note: you will likely have some of these items already defined on your TSM server, but they are shown for completeness of the example. All of the following commands are issued to the TSM server using the dsmadmc interface.
Create the primary LAN-free storage pool
  1. Set the TSM server name and password:
      set servername scorpio2
      set serverpassword pass4server
  2. Create a library definition on the TSM server:
      define library VTLLIB LIBTYPE=scsi SHARED=yes AUTOLABEL=overwrite RELABELSCRATCH=yes
  3. Define a path from the server to the library:
      define path scorpio2 VTLLIB SRCT=server DESTT=library DEVICE=/dev/smc0 online=yes
  4. Define each of the 10 tape drives in the virtual tape library:
      define drive VTLLIB drivea ELEMENT=autodetect SERIAL=autodetect
      define drive VTLLIB driveb ELEMENT=autodetect SERIAL=autodetect
      < ... >
      define drive VTLLIB drivej ELEMENT=autodetect SERIAL=autodetect
  5. Define paths from the server to each of the 10 tape drives:
      define path scorpio2 drivea SRCT=server DESTT=drive LIBR=vtllib DEVICE=/dev/rmt0
      define path scorpio2 driveb SRCT=server DESTT=drive LIBR=vtllib DEVICE=/dev/rmt1
      < ... >
      define path scorpio2 drivej SRCT=server DESTT=drive LIBR=vtllib DEVICE=/dev/rmt9
  6. Define the device class and storage pool:
      define devclass vtl_class DEVTYPE=lto LIBRARY=vtllib
      define stgpool  vtlpool vtl_class MAXSCRATCH=100
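The `< ... >` elisions in steps 4 and 5 follow the regular pattern shown (drivea through drivej, mapped to /dev/rmt0 through /dev/rmt9). As a convenience, a small script such as the following hypothetical Python sketch can generate the full command list for pasting into dsmadmc or saving as a server macro; the names and device files are assumptions taken from the example above:

```python
# Hypothetical helper (not part of TSM): expand the "< ... >" elision by
# generating the ten "define drive" and "define path" commands, assuming
# the naming pattern shown (drivea -> /dev/rmt0 ... drivej -> /dev/rmt9).
def generate_drive_commands(server="scorpio2", library="vtllib", count=10):
    commands = []
    for i in range(count):
        letter = chr(ord("a") + i)   # drivea, driveb, ..., drivej
        commands.append(
            f"define drive VTLLIB drive{letter} "
            "ELEMENT=autodetect SERIAL=autodetect")
        commands.append(
            f"define path {server} drive{letter} SRCT=server DESTT=drive "
            f"LIBR={library} DEVICE=/dev/rmt{i}")
    return commands

for cmd in generate_drive_commands():
    print(cmd)
```

Adjust the server name, library name, and device file pattern to match your own environment before running the generated commands.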
Create the control file storage pool
  1. Create the file device class:
      define devc vmctlfile DEVT=file MOUNTLIMIT=150 MAXCAP=1024m DIR=/tsmfile
  2. Create the storage pool to contain control files:
      def stg VMCTLPOOL vmctlfile MAXSCRATCH=200
Create policy domain and node definitions
A policy domain with two management classes needs to be created to allow for separation of VMware data and control files. The default management class will be used for the data files and will write directly to the LAN-free capable tape storage pool. The second management class will be used for the control files and will write to the storage pool created for this purpose. All of the following commands are entered through the dsmadmc interface.
  define domain  vmfullbackup
  define pol     vmfullbackup policy1
  define mgmt    vmfullbackup policy1 lanfree
  assign defmgmt vmfullbackup policy1 lanfree
  define mgmt    vmfullbackup policy1 control
  define copy    vmfullbackup policy1 lanfree TYPE=backup DEST=vtlpool VERE=3 VERD=1 RETE=30 RETO=10
  define copy    vmfullbackup policy1 control TYPE=backup DEST=vmctlpool VERE=3 VERD=1 RETE=30 RETO=10
  activate pol   vmfullbackup policy1
  register node  zergling password02 DOMAIN=vmfullbackup MAXNUMMP=8

Setting up the TSM Storage Agent

After the initial setup of the vStorage backup server and the TSM server has been completed, the next step is to set up the TSM Storage Agent on the vStorage backup server. The storage agent is the component that allows LAN-free data movement between the TSM backup-archive client and the TSM server. The following steps outline the setup procedure.
  • Install the TSM storage agent which is available as one of the sub-components in the installation package for the TSM server.
  • Set up the required definitions on the TSM server for the storage agent. All of the following commands are entered through the dsmadmc interface:
  1. Create a server definition on the TSM server for the storage agent:
      define server zergling_sta hla=zergling.acme.com lla=1500 serverpa=password01
  2. Define paths on the TSM server for the storage agent to all of the tape drives. This step requires the device addresses that were collected in the previous section.
      define path zergling_sta DRIVEA SRCT=server DESTT=drive LIBR=vtllib device=\\.\tape0define path zergling_sta DRIVEB SRCT=server DESTT=drive LIBR=vtllib device=\\.\tape1< ... >define path zergling_sta DRIVEJ SRCT=server DESTT=drive LIBR=vtllib device=\\.\tape9
  3. Customize the storage agent's options file named dsmsta.opt which is installed by default in C:\Program Files\tivoli\tsm\storageagent.
    DEVCONFIG       devconfig.out
    COMMMETHOD      tcpip
    TCPPORT         1500
    COMMMETHOD      sharedmem
    SHMPORT         1512
  4. Run the setstorageserver command on the storage agent to complete the storage agent configuration.
    Note: The storage agent password was set when issuing the define server command, and the TSM server password was set during initial server configuration using the set serverpassword command.
      c:\> cd \program files\tivoli\tsm\storageagent
      > dsmsta setstorageserver myname=zergling_sta mypass=password01 myhla=zergling.acme.com servername=scorpio2 serverpass=pass4server hladdress=scorpio2.acme.com lladdress=1500
  5. Start the storage agent in the foreground to verify that it starts correctly. In the startup messages, confirm that both communication methods initialize successfully and that the shared library initializes successfully.
  6. Create a service so that the storage agent runs as a background task. After creating the service, update the service properties from the services management console so that the service starts automatically at boot. Also, it might be necessary to grant the specified user ID the right to run as a service within Windows.
      c:\> cd \program files\tivoli\tsm\storageagent
      > install.exe "TSM StorageAgent1" "e:\program files\tivoli\tsm\storageagent\dstasvc.exe" zergling\administrator secretPW
Configure the backup-archive client options file

The example backup-archive client option file shown below includes the required options for one backup instance on the vStorage backup server. The enablelanfree, lanfreecommmethod, and lanfreeshmport options enable LAN-free and identify the communication parameters required to pair with the storage agent. The vmmc and vmctlmc options force the separation of data and control files via the two different management classes previously defined on the TSM server.
The vmcpw option shown is the result of generating the option from the preferences editor of the TSM client interface. Alternatively, the vCenter password can be stored with the dsmc set password command, for example:
      dsmc set password -type=vm vcenter.acme.com administrator <password>

 
Note: The vmvstortransport option does not need to be set, since SAN is the preferred transport by default, but it is included as an example of how the fallback to LAN transports can be disabled. The example option is commented out; if it is enabled, backups will fail when the SAN path is not available rather than switching to a LAN-based transport.
Example dsm.opt file:
  * TSM server communication options
  TCPSERVERADDRESS  scorpio2.acme.com
  TCPP              1500
  NODENAME          zergling
  PASSWORDACCESS    GENERATE

  * VMware related options
  VMCHOST           vcenter.acme.com
  VMCUSER           administrator
  VMCPW             ****
  VMBACKUPTYPE      full
  VMFULLTYPE        vstor

  * LAN-free options
  enablelanfree     yes
  lanfreecommm      sharedmem
  lanfreeshmport    1512

  * Management class control options
  VMMC              lanfree
  VMCTLMC           control

  * Transport control (optionally uncomment one of the following)
  * CAUTION: use of non-default settings for the VMVSTORTRANSPORT
  * option can result in undesirable backup failures.
  * VMVSTORTRANSPORT san:nbd   * prevent the use of NBDSSL
  * VMVSTORTRANSPORT san       * prevent the use of all LAN transports
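Because a missing or mistyped option is a common reason for backups silently falling back to the LAN, it can help to sanity-check the option file before running a backup. The following hypothetical Python sketch (not part of the product) parses a dsm.opt-style file and verifies that the LAN-free and management class options discussed above are present:

```python
# Hypothetical sanity check for a dsm.opt file: verify the options that
# enable LAN-free data movement and control/data file separation are set.
# Option names here match the (abbreviated) spellings used in the example.
REQUIRED = {"ENABLELANFREE": "yes", "LANFREECOMMM": "sharedmem",
            "VMMC": None, "VMCTLMC": None}   # None = any value accepted

def parse_opt(text):
    opts = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("*"):   # skip blanks and comments
            continue
        parts = line.split(None, 1)
        if len(parts) == 2:
            opts[parts[0].upper()] = parts[1].strip()
    return opts

def check_lanfree(opts):
    problems = []
    for name, expected in REQUIRED.items():
        value = opts.get(name)
        if value is None:
            problems.append(f"missing option: {name}")
        elif expected is not None and value.lower() != expected:
            problems.append(f"{name} is '{value}', expected '{expected}'")
    return problems

sample = """\
enablelanfree     yes
lanfreecommm      sharedmem
lanfreeshmport    1512
VMMC              lanfree
VMCTLMC           control
"""
print(check_lanfree(parse_opt(sample)))   # an empty list means all checks pass
```

Note that the real client also accepts other abbreviations of these option names, so a checker like this is only a rough guard, not a substitute for validate lanfree.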
Testing the setup

Validate the LAN-free configuration
The validate lanfree command on the TSM server verifies that the required LAN-free configuration steps are in place for a given TSM node and storage agent combination. The command takes the node name and storage agent name as parameters. In the example shown below, it is important that there is at least one LAN-free capable storage pool and that the ping test between the storage agent and the server completes successfully.
  tsm: SCORPIO2> validate lanfree zergling zergling_sta
  ANR0387I Evaluating node ZERGLING using storage agent ZERGLING_STA for
  LAN-free data movement.

  Node     Storage     Operation    Mgmt Class    Destination     LAN-Free    Explanation
  Name     Agent                    Name          Name            capable?
  -----    --------    ---------    ----------    ------------    --------
  ZERG-    ZERGLIN-    BACKUP       CONTROL       VMCTLPOOL       No
  LING     G_STA
  ZERG-    ZERGLIN-    BACKUP       LANFREE       VTLPOOL         Yes
  LING     G_STA

  ANR1706I Ping for server 'ZERGLING_STA' was able to establish a connection.
  ANR0388I Node ZERGLING using storage agent ZERGLING_STA has 1 storage pools
  capable of LAN-free data movement and 1 storage pools not capable of
  LAN-free data movement.
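When validating many nodes, it can be convenient to scrape the ANR0388I summary message rather than read the full table. The following hypothetical Python sketch (the message text is taken from the example output above) extracts the capable and not-capable pool counts so a script can flag nodes with no LAN-free capable pool:

```python
import re

# Hypothetical log check: pull the LAN-free capable / not capable storage
# pool counts out of the ANR0388I message printed by "validate lanfree".
def parse_anr0388i(message):
    m = re.search(
        r"has (\d+) storage pools\s*capable of LAN-free data movement and "
        r"(\d+) storage pools not capable",
        message)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2))

msg = ("ANR0388I Node ZERGLING using storage agent ZERGLING_STA has 1 storage "
       "pools capable of LAN-free data movement and 1 storage pools not "
       "capable of LAN-free data movement.")
capable, not_capable = parse_anr0388i(msg)
print(capable, not_capable)   # at least one capable pool is required
```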

Run a test backup

After the validate lanfree command has completed successfully, you are ready to perform some test backups. In the example output below, two points of note are outlined in red. The first indicates that the SAN transport was used for the data path between the VMware datastore and the vStorage backup server. The second indicates that LAN-free data movement was used for the backup data path between the vStorage backup server and the TSM server.
LAN-free backup
Confirm separation of control and data files

Use the query occupancy command on the TSM server to confirm that backup files have been stored in both of the storage pools that were configured. If query occupancy does not report this, then something is wrong with the storage pool definitions, management class definitions, or client options. It is normal to see two more files in the control file storage pool than in the data storage pool for each backup version.
  tsm: SCORPIO2> q occ zergling

  Node Name  Type Filespace   FSID Storage    Number of   Physical
                  Name             Pool Name      Files      Space
                                                              (MB)
  ---------- ---- ---------- ----- ---------- ---------- ----------
  ZERGLING   Bkup \VMFULL-w-     1 TAPEPOOL          110  12,782.30
                  in2003x64
                  -host3
  ZERGLING   Bkup \VMFULL-w-     1 VMCTLPOOL         112       7.77
                  in2003x64
                  -host3
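The same check can be automated when many nodes are involved. The following hypothetical Python sketch takes (storage pool, file count) pairs, such as those read from the query occupancy output above, and confirms that backup data landed in both pools, i.e. that control and data files were actually separated:

```python
# Hypothetical check (not a TSM command): given (storage pool, file count)
# pairs taken from "query occupancy" output, confirm that backup data
# landed in BOTH the data pool and the control file pool.
def pools_populated(occupancy, data_pool, ctl_pool):
    counts = dict(occupancy)
    return counts.get(data_pool, 0) > 0 and counts.get(ctl_pool, 0) > 0

# File counts from the example output above.
occ = [("TAPEPOOL", 110), ("VMCTLPOOL", 112)]
print(pools_populated(occ, "TAPEPOOL", "VMCTLPOOL"))   # True when separation worked
```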


Document Information

Modified date:
19 March 2020

UID

ibm13433611