IBM Tivoli Storage Manager, Version 7.1

UPDATE STGPOOL (Update a primary sequential access pool)

Use this command to update a primary sequential access storage pool.

Restrictions:
  1. You cannot use this command to change the data format for the storage pool.
  2. If the value for DATAFORMAT is NETAPPDUMP, CELERRADUMP, or NDMPDUMP, you can modify only the following attributes:
    • DESCRIPTION
    • ACCESS
    • COLLOCATE
    • MAXSCRATCH
    • REUSEDELAY

Privilege class

To issue this command, you must have system privilege, unrestricted storage privilege, or restricted storage privilege for the storage pool to be updated.

Syntax

>>-UPDate STGpool--pool_name--+-----------------------------+--->
                              '-DESCription--=--description-'   

>--+----------------------------+------------------------------->
   '-ACCess--=--+-READWrite---+-'   
                +-READOnly----+     
                '-UNAVailable-'     

>--+-------------------------------------------+---------------->
   |                                   (1) (2) |   
   '-MAXSIze--=--+-maximum_file_size-+---------'   
                 '-NOLimit-----------'             

>--+-------------------------+---------------------------------->
   |                     (1) |   
   '-CRCData--=--+-Yes-+-----'   
                 '-No--'         

>--+-----------------------------------+------------------------>
   |                           (1) (2) |   
   '-NEXTstgpool--=--pool_name---------'   

>--+-----------------------------+------------------------------>
   |                     (1) (2) |   
   '-HIghmig--=--percent---------'   

>--+----------------------------+------------------------------->
   |                    (1) (2) |   
   '-LOwmig--=--percent---------'   

>--+-----------------------------+------------------------------>
   |                     (1) (2) |   
   '-REClaim--=--percent---------'   

>--+-----------------------------------+------------------------>
   |                           (1) (2) |   
   '-RECLAIMPRocess--=--number---------'   

>--+--------------------------------------+--------------------->
   |                              (1) (2) |   
   '-RECLAIMSTGpool--=--pool_name---------'   

>--+---------------------------------+-------------------------->
   |                             (2) |   
   '-COLlocate--=--+-No--------+-----'   
                   +-GRoup-----+         
                   +-NODe------+         
                   '-FIlespace-'         

>--+---------------------------+--+-------------------------+--->
   |                       (2) |  |                     (2) |   
   '-MAXSCRatch--=--number-----'  '-REUsedelay--=--days-----'   

>--+----------------------------------+------------------------->
   |                          (1) (2) |   
   '-OVFLOcation--=--location---------'   

>--+---------------------------+-------------------------------->
   |                   (1) (2) |   
   '-MIGDelay--=--days---------'   

>--+---------------------------------+-------------------------->
   |                         (1) (2) |   
   '-MIGContinue--=--+-Yes-+---------'   
                     '-No--'             

>--+-------------------------------+---------------------------->
   |                       (1) (2) |   
   '-MIGPRocess--=--number---------'   

>--+----------------------------+------------------------------->
   '-AUTOCopy--=--+-None------+-'   
                  +-CLient----+     
                  +-MIGRation-+     
                  '-All-------'     

>--+-------------------------------------------+---------------->
   |                  .-,--------------------. |   
   |                  V              (1) (2) | |   
   '-COPYSTGpools--=----copypoolname---------+-'   

>--+----------------------------------+------------------------->
   |                          (1) (2) |   
   '-COPYContinue--=--+-Yes-+---------'   
                      '-No--'             

>--+-----------------------------------------------+------------>
   |                     .-,---------------------. |   
   |                     V                       | |   
   '-ACTIVEDATApools--=----active-data_pool_name-+-'   

>--+-----------------------------+------------------------------>
   '-DEDUPlicate--=--+-No------+-'   
                     |     (3) |     
                     '-Yes-----'     

>--+--------------------------------+--------------------------><
   |                            (4) |   
   '-IDENTIFYPRocess--=--number-----'   

Notes:
  1. This parameter is not available for storage pools that use the data formats NETAPPDUMP, CELERRADUMP, or NDMPDUMP.
  2. This parameter is not available for Centera storage pools.
  3. This parameter is valid only for storage pools that are defined with a FILE-type device class.
  4. This parameter is only available if the value of the DEDUPLICATE parameter is YES.

Parameters

pool_name (Required)
Specifies the name of the storage pool to be updated.
DESCription
Specifies a description of the storage pool. This parameter is optional. The maximum length of the description is 255 characters. Enclose the description in quotation marks if it contains any blank characters. To remove an existing description, specify a null string ("").
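
For example, a command like the following sets a new description for a pool named TAPEPOOL1 (an illustrative pool name):
update stgpool tapepool1 description="Primary tape pool for nightly backups"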
ACCess
Specifies how client nodes and server processes (such as migration and reclamation) can access files in the storage pool. This parameter is optional. Possible values are:
READWrite
Specifies that client nodes and server processes can read and write to files stored on volumes in the storage pool.
READOnly
Specifies that client nodes can only read files from the volumes in the storage pool.

Server processes can move files within the volumes in the storage pool. However, no new writes are permitted to volumes in the storage pool from volumes outside the storage pool.

If this storage pool has been specified as a subordinate storage pool (with the NEXTSTGPOOL parameter) and is defined as readonly, the storage pool is skipped when server processes attempt to write files to the storage pool.

UNAVailable
Specifies that client nodes cannot access files stored on volumes in the storage pool.

Server processes can move files within the volumes in the storage pool and can also move or copy files from this storage pool to another storage pool. However, no new writes are permitted to volumes in the storage pool from volumes outside the storage pool.

If this storage pool has been specified as a subordinate storage pool (with the NEXTSTGPOOL parameter) and is defined as unavailable, the storage pool is skipped when server processes attempt to write files to the storage pool.
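
For example, to prevent new client writes to a pool named TAPEPOOL1 (an illustrative name) while still allowing reads, you might issue:
update stgpool tapepool1 access=readonly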

MAXSIze
Specifies the maximum size for a physical file that the server can store in the storage pool. This parameter is optional. Possible values are:
NOLimit
Specifies that there is no maximum size limit for physical files stored in the storage pool.
maximum_file_size
Limits the maximum physical file size. Specify an integer from 1 to 999999, followed by a scale factor. For example, MAXSIZE=5G specifies that the maximum file size for this storage pool is 5 gigabytes. Scale factors are:
Scale factor Meaning
K kilobyte
M megabyte
G gigabyte
T terabyte

If a file exceeds the maximum size and no pool is specified as the next storage pool in the hierarchy, the server does not store the file. If a file exceeds the maximum size and a pool is specified as the next storage pool, the server stores the file in the next storage pool that can accept the file size. If you specify the next storage pool parameter, at least one storage pool in your hierarchy should have no limit on the maximum size of a file. By having no limit on the size for at least one pool, you ensure that the server can store a file regardless of its size.

For logical files that are part of an aggregate, the server considers the size of the aggregate to be the file size. Therefore, the server does not store logical files that are smaller than the maximum size limit if the files are part of an aggregate that is larger than the maximum size limit.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
CRCData
Specifies whether a cyclic redundancy check (CRC) validates storage pool data when audit volume processing occurs on the server. This parameter is only valid for NATIVE data format storage pools. This parameter is optional. The default value is NO. By setting CRCDATA to YES and scheduling an AUDIT VOLUME command you can continually ensure the integrity of data that is stored in your storage hierarchy. Possible values are:
Yes
Specifies that data is stored containing CRC information, allowing for audit volume processing to validate storage pool data. This mode impacts performance because more overhead is required to calculate and compare CRC values between the storage pool and the server.
No
Specifies that data is stored without CRC information.
Tip: For storage pools that are associated with the 3592, LTO, or ECARTRIDGE device type, logical block protection provides better protection against data corruption than CRC validation for a storage pool. If you specify CRC validation for a storage pool, data is validated only during volume auditing operations. Errors are identified after data is written to tape.
To enable logical block protection, specify a value of READWRITE for the LBPROTECT parameter on the DEFINE DEVCLASS and UPDATE DEVCLASS commands for the 3592, LTO, or ECARTRIDGE device types. Logical block protection is supported only on the following types of drives and media:
  • IBM® LTO5 and later.
  • IBM 3592 Generation 3 drives and later with 3592 Generation 2 media and later.
  • Oracle StorageTek T10000C drives.
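
For example, to enable CRC validation for a NATIVE data format pool named TAPEPOOL1 (an illustrative name), you might issue the following command and then schedule AUDIT VOLUME processing:
update stgpool tapepool1 crcdata=yes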
NEXTstgpool
Specifies a primary storage pool to which files are migrated. You cannot migrate data from a sequential access storage pool to a random access storage pool. This parameter is optional. The next storage pool must be a primary storage pool.

To remove an existing value, specify a null string ("").

If this storage pool does not have a next storage pool, the server cannot migrate files from this storage pool and cannot store files that exceed the maximum size for this storage pool in another storage pool.

When there is insufficient space available in the current storage pool, the NEXTSTGPOOL parameter for sequential access storage pools does not allow data to be stored into the next pool. In this case, the server issues a message and the transaction fails.

For next storage pools with a device type of FILE, the server completes a preliminary check to determine whether sufficient space is available. If space is not available, the server skips to the next storage pool in the hierarchy. If space is available, the server attempts to store data in that pool. However, it is possible that the storage operation might fail because, at the time the actual storage operation is attempted, the space is no longer available.

You cannot create a chain of storage pools that leads to an endless loop through the NEXTSTGPOOL parameter. At least one storage pool in the hierarchy must have no value that is specified for NEXTSTGPOOL.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP

If you specify a sequential access pool as the NEXTSTGPOOL, the pool can be only NATIVE or NONBLOCK data format.
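
For example, assuming two illustrative tape pools named TAPEPOOL1 and TAPEPOOL2, a command like the following makes TAPEPOOL2 the migration target for TAPEPOOL1:
update stgpool tapepool1 nextstgpool=tapepool2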

HIghmig
Specifies that the server starts migration when storage pool utilization reaches this percentage. For sequential-access disk (FILE) storage pools, utilization is the ratio of data in a storage pool to the pool's total estimated data capacity, including the capacity of all scratch volumes specified for the pool. For storage pools that use tape media, utilization is the ratio of volumes that contain data to the total number of volumes in the storage pool. The total number of volumes includes the maximum number of scratch volumes. This parameter is optional. You can specify an integer from 0 to 100.

When the storage pool exceeds the high migration threshold, the server can start migration of files by volume to the next storage pool defined for the storage pool. You can set the high migration threshold to 100 to prevent migration for the storage pool.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
LOwmig
Specifies that the server stops migration when storage pool utilization is at or below this percentage. For sequential-access disk (FILE) storage pools, utilization is the ratio of data in a storage pool to the pool's total estimated data capacity, including the capacity of all scratch volumes specified for the pool. For storage pools that use tape media, utilization is the ratio of volumes that contain data to the total number of volumes in the storage pool. The total number of volumes includes the maximum number of scratch volumes. This parameter is optional. You can specify an integer from 0 to 99.

When the storage pool reaches the low migration threshold, the server does not start migration of files from another volume. You can set the low migration threshold to 0 to permit migration to empty the storage pool.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
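
For example, to start migration when an illustrative pool named TAPEPOOL1 reaches 90 percent utilization and to stop migration at 70 percent, you might issue:
update stgpool tapepool1 highmig=90 lowmig=70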
REClaim
Specifies when the server reclaims a volume, which is based on the percentage of reclaimable space on a volume. Reclaimable space is the amount of space that is occupied by files that are expired or deleted from the Tivoli® Storage Manager database.

Reclamation makes the fragmented space on volumes usable again by moving any remaining unexpired files from one volume to another volume, thus making the original volume available for reuse. This parameter is optional. You can specify an integer from 1 to 100.

When determining which volumes in a storage pool to reclaim, the Tivoli Storage Manager server first determines the reclamation threshold, which is indicated by the value of the RECLAIM parameter. The server then examines the percentage of reclaimable space for each volume in the storage pool. If the percentage of reclaimable space on a volume is greater than the reclamation threshold of the storage pool, the volume is a candidate for reclamation.

For example, suppose storage pool FILEPOOL has a reclamation threshold of 70 percent. This value indicates that the server can reclaim any volume in the storage pool that has a percentage of reclaimable space that is greater than 70 percent. The storage pool has three volumes:
  • FILEVOL1 with 65 percent reclaimable space
  • FILEVOL2 with 80 percent reclaimable space
  • FILEVOL3 with 95 percent reclaimable space

When reclamation begins, the server compares the percent of reclaimable space for each volume with the reclamation threshold of 70 percent. In this example, FILEVOL2 and FILEVOL3 are candidates for reclamation because their percentages of reclaimable space are greater than 70. To determine the percentage of reclaimable space for a volume, issue the QUERY VOLUME command and specify FORMAT=DETAILED. The value in the field Pct. Reclaimable Space is the percentage of reclaimable space for the volume.

Specify a value of 50 percent or greater for this parameter so that files stored on two volumes can be combined onto a single output volume.

AIX, Sun Solaris, and Windows operating systems: For storage pools that use a WORM device class, you can lower the value from the default of 100. Lowering the value allows the server to consolidate data onto fewer volumes when needed. Volumes that are emptied by reclamation can be checked out of the library, freeing slots for new volumes. Because the volumes are write-once, the volumes cannot be reused.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
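
For example, to set the 70 percent reclamation threshold that is described in the FILEPOOL scenario, you might issue:
update stgpool filepool reclaim=70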
RECLAIMPRocess
Specifies the number of parallel processes to use for reclaiming the volumes in this storage pool. This parameter is optional. Enter a value from 1 to 999.

When calculating the value for this parameter, consider the number of sequential storage pools that will be involved with the reclamation and the number of logical and physical drives that can be dedicated to the operation. To access a sequential access volume, IBM Tivoli Storage Manager uses a mount point and, if the device type is not FILE, a physical drive. The number of available mount points and drives depends on other Tivoli Storage Manager and system activity and on the mount limits of the device classes for the sequential access storage pools that are involved in the reclamation.

For example, suppose that you want to reclaim the volumes from two sequential storage pools simultaneously and that you want to specify four processes for each of the storage pools. The storage pools have the same device class. Assuming that the RECLAIMSTGPOOL parameter is not specified or that the reclaim storage pool has the same device class as the storage pool that is being reclaimed, each process requires two mount points and, if the device type is not FILE, two drives. (One of the drives is for the input volume, and the other drive is for the output volume.) To run eight reclamation processes simultaneously, you need a total of at least 16 mount points and 16 drives. The device class for the two storage pools must have a mount limit of at least 16.

If the number of reclamation processes you specify is more than the number of available mount points or drives, the processes that do not obtain mount points or drives will wait for mount points or drives to become available. If mount points or drives do not become available within the MOUNTWAIT time, the reclamation processes will end. For information about specifying the MOUNTWAIT time, see DEFINE DEVCLASS (Define a device class).

The Tivoli Storage Manager server will start the specified number of reclamation processes regardless of the number of volumes that are eligible for reclamation. For example, if you specify ten reclamation processes and only six volumes are eligible for reclamation, the server will start ten processes and four of them will complete without processing a volume.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
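
For example, to run four parallel reclamation processes for an illustrative pool named TAPEPOOL1, as in the scenario above, you might issue:
update stgpool tapepool1 reclaimprocess=4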
RECLAIMSTGpool
Specifies another primary storage pool as a target for reclaimed data from this storage pool. This parameter is optional. When the server reclaims volumes for the storage pool, unexpired data is moved from the volumes that are being reclaimed to the storage pool named with this parameter.

To remove an existing value, specify a null string ("").

A reclaim storage pool is most useful for a storage pool that has only one drive in its library. When you specify this parameter, the server moves all data from reclaimed volumes to the reclaim storage pool regardless of the number of drives in the library.

To move data from the reclaim storage pool back to the original storage pool, use the storage pool hierarchy. Specify the original storage pool as the next storage pool for the reclaim storage pool.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
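
For example, assuming an illustrative single-drive pool named ONEDRIVEPOOL and a reclaim target pool named RECLAIMPOOL, you might issue:
update stgpool onedrivepool reclaimstgpool=reclaimpool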
COLlocate
Specifies whether the server attempts to keep data that belongs to one of the following on as few volumes as possible:
  • A single client node
  • A group of file spaces
  • A group of client nodes
  • A client file space
This parameter is optional.

Collocation reduces the number of sequential access media mounts for restore, retrieve, and recall operations. However, collocation increases both the amount of server time that is needed to collocate files for storing and the number of volumes required. Collocation can also affect the number of processes that migrate data from disk to sequential-access pools.

You can specify one of the following options:
No
Specifies that collocation is disabled. During migration from disk, processes are created at a file space level.
GRoup
Specifies that collocation is enabled at the group level for client nodes or file spaces. For collocation groups, the server attempts to put data for nodes or file spaces that belong to the same collocation group on as few volumes as possible.

If you specify COLLOCATE=GROUP but do not define any collocation groups, or if you do not add nodes or file spaces to a collocation group, data is collocated by node. Consider tape usage when you organize client nodes or file spaces into collocation groups.

For example, if a tape-based storage pool consists of data from nodes and you specify COLLOCATE=GROUP, the server completes the following actions:
  • Collocates the data by group for grouped nodes. Whenever possible, the server collocates data that belongs to a group of nodes on a single tape or on as few tapes as possible. Data for a single node can also be spread across several tapes that are associated with a group.
  • Collocates the data by node for ungrouped nodes. Whenever possible, the server stores the data for a single node on a single tape. All available tapes that already have data for the node are used before available space on any other tape is used.
  • During migration from disk, the server creates migration processes at the collocation group level for grouped nodes, and at the node level for ungrouped nodes.
If a tape-based storage pool consists of data from grouped file spaces and you specify COLLOCATE=GROUP, the server completes the following actions:
  • Collocates the data by group for grouped file spaces only. Whenever possible, the server collocates data that belongs to a group of file spaces on a single tape or on as few tapes as possible. Data for a single file space can also be spread across several tapes that are associated with a group.
  • Collocates the data by node for file spaces that are not explicitly defined to a file space collocation group. For example, node1 has file spaces named A, B, C, D, and E. File spaces A and B belong to a file space collocation group but C, D, and E do not. File spaces A and B are collocated by file space collocation group, while C, D, and E are collocated by node.
  • During migration from disk, the server creates migration processes at the collocation group level for grouped file spaces.

Data is collocated on the fewest possible sequential access volumes.

NODe
Specifies that collocation is enabled at the client node level. The server attempts to put data for one node on as few volumes as possible. If the node has multiple file spaces, the server does not try to collocate those file spaces. For compatibility with an earlier version, COLLOCATE=YES is still accepted by the server to specify collocation at the client node level.

If a storage pool contains data for a node that is a member of a collocation group and you specify COLLOCATE=NODE, the data is collocated by node.

For COLLOCATE=NODE, the server creates processes at the node level when you migrate data from disk.

FIlespace
Specifies that collocation is enabled at the file space level for client nodes. The server attempts to place data for one node and file space on as few volumes as possible. If a node has multiple file spaces, the server attempts to place data for different file spaces on different volumes.

For COLLOCATE=FILESPACE, the server creates processes at the file space level when you migrate data from disk.
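
For example, to collocate data by collocation group for an illustrative pool named TAPEPOOL1, you might issue:
update stgpool tapepool1 collocate=group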

MAXSCRatch
Specifies the maximum number of scratch volumes that the server can request. This parameter is optional. You can specify an integer from 0 to 100000000. By allowing the server to request scratch volumes, you avoid having to define each volume to be used.

The value that is specified for this parameter is used to estimate the total number of volumes available in the storage pool and the corresponding estimated capacity for the storage pool.

Scratch volumes are automatically deleted from the storage pool when they become empty. When scratch volumes with the device type of FILE are deleted, the space that the volumes occupied is freed by the server and returned to the file system.

Tip: For server-to-server operations that use virtual volumes and that store a small amount of data, consider specifying a value for the MAXSCRATCH parameter that is higher than the value you typically specify for write operations to other types of volumes. After a write operation to a virtual volume, Tivoli Storage Manager marks the volume as FULL, even if the value of the MAXCAPACITY parameter on the device-class definition is not reached. The Tivoli Storage Manager server does not keep virtual volumes in FILLING status and does not append to them. If the value of the MAXSCRATCH parameter is too low, server-to-server operations can fail.
REUsedelay
Specifies the number of days that must elapse after all files are deleted from a volume before the volume can be rewritten or returned to the scratch pool. This parameter is optional. You can specify an integer from 0 to 9999. The value 0 means that a volume can be rewritten or returned to the scratch pool as soon as all files are deleted from the volume.

By specifying this parameter, you can ensure that the database can be restored to an earlier level and that database references to files in the storage pool are still valid.
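
For example, if you keep database backups for six days, setting a matching reuse delay on an illustrative pool named TAPEPOOL1 helps ensure that a restored database still references valid volumes:
update stgpool tapepool1 reusedelay=6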

OVFLOcation
Specifies the overflow location for the storage pool. The server assigns this location name to a volume that is ejected from the library by the MOVE MEDIA command. This parameter is optional. The location name can be a maximum length of 255 characters. Enclose the location name in quotation marks if the location name contains any blank characters.

To remove an existing value, specify a null string ("").

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
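
For example, to assign an overflow location to an illustrative pool named TAPEPOOL1, you might issue the following command (the quotation marks are needed because the location name contains blanks):
update stgpool tapepool1 ovflocation="Room 100"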
MIGDelay
Specifies the minimum number of days a file must remain in a storage pool before it becomes eligible for migration. All files on a volume must be eligible for migration before the server selects the volume for migration. To calculate a value to compare to the specified MIGDELAY, the server counts the number of days that the file has been in the storage pool.

This parameter is optional. You can specify an integer from 0 to 9999.

If you want the server to count the number of days based only on when a file was stored and not when it was retrieved, use the NORETRIEVEDATE server option.

Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
MIGContinue
Specifies whether you allow the server to migrate files that do not satisfy the migration delay time. This parameter is optional.

Because you can require that files remain in the storage pool for a minimum number of days, the server may migrate all eligible files to the next storage pool yet not meet the low migration threshold. This parameter allows you to specify whether the server is allowed to continue migration by migrating files that do not satisfy the migration delay time.

Possible values are:
Yes
Specifies that, when necessary to meet the low migration threshold, the server continues to migrate files that have not been stored in the storage pool for the number of days specified by the migration delay period.
No
Specifies that the server stops migration when no eligible files remain to be migrated, even before reaching the low migration threshold. The server does not migrate files unless the files have been stored in the storage pool for the number of days specified by the migration delay period.
Restriction: This parameter is not available for storage pools that use the following data formats:
  • NETAPPDUMP
  • CELERRADUMP
  • NDMPDUMP
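
For example, to require that files remain in an illustrative pool named TAPEPOOL1 for 30 days before they are eligible for migration, while still allowing the server to migrate younger files if the low migration threshold cannot otherwise be met, you might issue:
update stgpool tapepool1 migdelay=30 migcontinue=yes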
MIGPRocess
Specifies the number of parallel processes to use for migrating the files from the volumes in this storage pool. This parameter is optional. Enter a value from 1 to 999.

When calculating the value for this parameter, consider the number of sequential storage pools that will be involved with the migration, and the number of logical and physical drives that can be dedicated to the operation. To access a sequential-access volume, Tivoli Storage Manager uses a mount point and, if the device type is not FILE, a physical drive. The number of available mount points and drives depends on other Tivoli Storage Manager and system activity and on the mount limits of the device classes for the sequential access storage pools that are involved in the migration.

For example, suppose you want to simultaneously migrate the files from volumes in two primary sequential storage pools and that you want to specify three processes for each of the storage pools. The storage pools have the same device class. Assuming that the storage pool to which files are being migrated has the same device class as the storage pool from which files are being migrated, each process requires two mount points and, if the device type is not FILE, two drives. (One drive is for the input volume, and the other drive is for the output volume.) To run six migration processes simultaneously, you need a total of at least 12 mount points and 12 drives. The device class for the storage pools must have a mount limit of at least 12.

If the number of migration processes you specify is more than the number of available mount points or drives, the processes that do not obtain mount points or drives will wait for mount points or drives to become available. If mount points or drives do not become available within the MOUNTWAIT time, the migration processes will end. For information about specifying the MOUNTWAIT time, see DEFINE DEVCLASS (Define a device class).

The Tivoli Storage Manager server will start the specified number of migration processes regardless of the number of volumes that are eligible for migration. For example, if you specify ten migration processes and only six volumes are eligible for migration, the server will start ten processes and four of them will complete without processing a volume.

Note: When you specify this parameter, consider whether the simultaneous-write function is enabled for server data migration. Each migration process requires a mount point and a drive for each copy storage pool and active-data pool that is defined to the target storage pool.
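
For example, to use three parallel migration processes for an illustrative pool named TAPEPOOL1, as in the scenario above, you might issue:
update stgpool tapepool1 migprocess=3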
AUTOCopy
Specifies when Tivoli Storage Manager completes simultaneous-write operations. This parameter affects the following operations:
  • Client store sessions
  • Server import processes
  • Server data-migration processes

If an error occurs while data is being simultaneously written to a copy storage pool or active-data pool during a migration process, the server stops writing to the failing storage pools for the remainder of the process. However, the server continues to store files into the primary storage pool and any remaining copy storage pools or active-data pools. These pools remain active for the duration of the migration process. Copy storage pools are specified using the COPYSTGPOOLS parameter. Active-data pools are specified using the ACTIVEDATAPOOLS parameter.

Possible values are:
None
Specifies that the simultaneous-write function is disabled.
CLient
Specifies that data is written simultaneously to copy storage pools and active-data pools during client store sessions or server import processes. During server import processes, data is written simultaneously only to copy storage pools; data is not written to active-data pools during server import processes.
MIGRation
Specifies that data is written simultaneously to copy storage pools and active-data pools only during migration to this storage pool. During server data-migration processes, data is written simultaneously to copy storage pools and active-data pools only if the data does not exist in those pools. Nodes whose data is being migrated must be in a domain associated with an active-data pool. If the nodes are not in a domain associated with an active-data pool, the data cannot be written to the pool.
All
Specifies that data is written simultaneously to copy storage pools and active-data pools during client store sessions, server import processes, or server data-migration processes. Specifying this value ensures that data is written simultaneously whenever this pool is a target for any of the eligible operations.
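
For example, to enable simultaneous-write operations only during migration to an illustrative pool named TAPEPOOL1, you might issue:
update stgpool tapepool1 autocopy=migration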
COPYSTGpools
Specifies the names of copy storage pools where the server simultaneously writes data. You can specify a maximum of three copy pool names that are separated by commas. Spaces between the names of the copy pools are not permitted. To add or remove one or more copy storage pools, specify the pool name or names that you want to include in the updated list. For example, if the existing copy pool list includes COPY1 and COPY2 and you want to add COPY3, specify COPYSTGPOOLS=COPY1,COPY2,COPY3. To remove all existing copy storage pools that are associated with the primary storage pool, specify a null string ("") for the value (for example, COPYSTGPOOLS="").

When you specify a value for the COPYSTGPOOLS parameter, you can also specify a value for the COPYCONTINUE parameter. For more information, see the COPYCONTINUE parameter.

The combined total number of storage pools that are specified in the COPYSTGPOOLS and ACTIVEDATAPOOLS parameters cannot exceed three.

When a data storage operation switches from a primary storage pool to a next storage pool, the next storage pool inherits the list of copy storage pools and the COPYCONTINUE value from the primary storage pool. The primary storage pool is specified by the copy group of the management class that is bound to the data.

The server can write data simultaneously to copy storage pools during the following operations:
  • Back up and archive operations by Tivoli Storage Manager backup-archive clients or application clients that use the Tivoli Storage Manager API
  • Migration operations by Tivoli Storage Manager for Space Management clients
  • Import operations that involve copying exported file data from external media to a primary storage pool associated with a copy storage pool list
Restrictions:
  1. This parameter is available only to primary storage pools that use NATIVE or NONBLOCK data format. This parameter is not available for storage pools that use the following data formats:
    • NETAPPDUMP
    • CELERRADUMP
    • NDMPDUMP
  2. Simultaneous-write operations take precedence over LAN-free data movement, causing the operations to go over the LAN. However, the simultaneous-write configuration is accepted.
  3. The simultaneous-write function is not supported for NAS backup operations. If the primary storage pool specified in the DESTINATION or TOCDESTINATION in the copy group of the management class has copy storage pools defined, the copy storage pools are ignored and the data is stored into the primary storage pool only.
  4. You cannot use the simultaneous-write function with Centera storage devices.
Attention: The function that is provided by the COPYSTGPOOLS parameter is not intended to replace the BACKUP STGPOOL command. If you use the COPYSTGPOOLS parameter, continue to use the BACKUP STGPOOL command to ensure that the copy storage pools are complete copies of the primary storage pool. There are cases when a copy might not be created. For more information, see the COPYCONTINUE parameter description.
COPYContinue
Specifies how the server should react to a copy storage pool write failure for any of the copy storage pools that are listed in the COPYSTGPOOLS parameter. This parameter is optional. The default is YES. When you specify the COPYCONTINUE parameter, either a COPYSTGPOOLS list must exist or the COPYSTGPOOLS parameter must also be specified.

The COPYCONTINUE parameter has no effect on the simultaneous-write function during migration.

Possible values are:
Yes
If the COPYCONTINUE parameter is set to YES, the server will stop writing to the failing copy pools for the remainder of the session, but continue storing files into the primary pool and any remaining copy pools. The copy storage pool list is active only for the life of the client session and applies to all the primary storage pools in a particular storage pool hierarchy.
No
If the COPYCONTINUE parameter is set to NO, the server will fail the current transaction and discontinue the store operation.
Restrictions:
  • The setting of the COPYCONTINUE parameter does not affect active-data pools. If a write failure occurs for any of the active-data pools, the server stops writing to the failing active-data pool for the remainder of the session, but continues storing files into the primary pool and any remaining active-data pools and copy storage pools. The active-data pool list is active only for the life of the session and applies to all the primary storage pools in a particular storage pool hierarchy.
  • The setting of the COPYCONTINUE parameter does not affect the simultaneous-write function during server import. If data is being written simultaneously and a write failure occurs to the primary storage pool or any copy storage pool, the server import process fails.
  • The setting of the COPYCONTINUE parameter does not affect the simultaneous-write function during server data migration. If data is being written simultaneously and a write failure occurs to any copy storage pool or active-data pool, the failing storage pool is removed and the data migration process continues. Write failures to the primary storage pool cause the migration process to fail.
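
For example, to write data simultaneously to the copy storage pools COPY1 and COPY2 (the pool names from the earlier example) and to continue a store operation if a write to one of them fails, you might issue:
update stgpool tapepool1 copystgpools=copy1,copy2 copycontinue=yes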
ACTIVEDATApools
Specifies the names of active-data pools where the server simultaneously writes data during a client backup operation. The ACTIVEDATAPOOLS parameter is optional. Spaces between the names of the active-data pools are not permitted.

The combined total number of storage pools that are specified in the COPYSTGPOOLS and ACTIVEDATAPOOLS parameters cannot exceed three.

When a data storage operation switches from a primary storage pool to a next storage pool, the next storage pool inherits the list of active-data pools from the destination storage pool specified in the copy group. The primary storage pool is specified by the copy group of the management class that is bound to the data.

The server can write data simultaneously to active-data pools only during backup operations by Tivoli Storage Manager backup-archive clients or application clients that use the Tivoli Storage Manager API.
Restrictions:
  1. This parameter is available only to primary storage pools that use NATIVE or NONBLOCK data format. This parameter is not available for storage pools that use the following data formats:
    • NETAPPDUMP
    • CELERRADUMP
    • NDMPDUMP
  2. Writing data simultaneously to active-data pools is not supported when the operation is using LAN-free data movement. Simultaneous-write operations take precedence over LAN-free data movement, causing the operations to go over the LAN. However, the simultaneous-write configuration is accepted.
  3. The simultaneous-write function is not supported when a NAS backup operation is writing a TOC file. If the primary storage pool specified in the TOCDESTINATION in the copy group of the management class has active-data pools defined, the active-data pools are ignored and the data is stored into the primary storage pool only.
  4. You cannot use the simultaneous-write function with Centera storage devices.
  5. Data being imported cannot be stored in active-data pools. After an import operation, use the COPY ACTIVEDATA command to store the imported data in an active-data pool.
Attention: The function that is provided by the ACTIVEDATAPOOLS parameter is not intended to replace the COPY ACTIVEDATA command. If you use the ACTIVEDATAPOOLS parameter, use the COPY ACTIVEDATA command to ensure that the active-data pools contain all active data of the primary storage pool.
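
For example, to write active backup data simultaneously to an illustrative active-data pool named ADPOOL1, you might issue:
update stgpool tapepool1 activedatapools=adpool1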
DEDUPlicate
Specifies whether the data that is stored in this storage pool will be deduplicated. This parameter is optional and is valid only for storage pools that are defined with a FILE device class.
IDENTIFYPRocess
Specifies the number of parallel processes to use for server-side duplicate identification. This parameter is optional and is valid only for storage pools that are defined with a device class associated with the FILE device type. Enter a value from 1 to 50.

When calculating the value for this parameter, consider the workload on the server and the amount of data requiring data deduplication. Server-side duplicate identification requires disk I/O and processor resources, so the more processes you allocate to data deduplication, the heavier the workload that you place on your system. In addition, consider the number of volumes that require processing. Server-side duplicate-identification processes work on volumes containing data that requires deduplication. If you update a storage pool, specifying that the data in the storage pool is to be deduplicated, all the volumes in the pool require processing. For this reason, you might have to define a high number of duplicate-identification processes initially. Over time, however, as existing volumes are processed, only the volumes containing new data have to be processed. When that happens, you can reduce the number of duplicate-identification processes.

Remember: Duplicate-identification processes can be either active or idle. Processes that are working on files are active. Processes that are waiting for files to work on are idle. Processes remain idle until volumes with data to be deduplicated become available. The output of the QUERY PROCESS command for a duplicate-identification process includes the total number of bytes and files that have been processed since the process first started. For example, if a duplicate-identification process processes four files, becomes idle, and then processes five more files, then the total number of files processed is nine. Processes end only when canceled or when the number of duplicate-identification processes for the storage pool is changed to a value less than the number currently specified.
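
For example, to enable data deduplication with eight duplicate-identification processes on an illustrative FILE-device-class pool named FILEPOOL, you might issue:
update stgpool filepool deduplicate=yes identifyprocess=8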

Example: Update the primary sequential storage pool's mountable scratch volumes

Update the primary sequential storage pool named TAPEPOOL1 to permit as many as 10 scratch volumes to be mounted.
update stgpool tapepool1 maxscratch=10

