Synchronous mode copy

Use synchronous mode copy to enable tape synchronization across two clusters within a grid configuration.

Refer to the glossary for definitions of terms used in the following paragraphs.

Synchronous mode copy is a form of replication similar to host-initiated duplexing that provides a zero recovery point objective (RPO) for sets of records (data sets) or byte-stream data (objects) written to virtual tape. IBM Z® applications such as DFSMShsm and DFSMSdfp OAM Object Support, as well as other workloads, can achieve a zero RPO when using this method of replication. When zero RPO is achieved, any exposure to delayed replication is eliminated and host-initiated duplexing is not required.

Synchronous mode copy duplexes all compressed host writes simultaneously to two library locations. Before any host-initiated explicit or implicit synchronize tape operation can succeed, all content written on virtual tape up to that point must be written to persistent disk cache at both library locations. If a cluster fails, the host application or job can access the secondary copy, which is consistent up to the last completed synchronization point. Because all writes are duplexed, no disk cache read is needed to replicate the written content, which reduces the overhead on the disk cache repository. Additional options enable a user to enforce strict synchronization for a given workload, or permit a write workload to enter a synchronous-deferred state when full synchronization is not possible.
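The duplexing and synchronization-point behavior can be pictured with a short sketch. This is a minimal illustrative model only; CacheLocation, synchronize, and sync_deferred_on_write_failure are hypothetical names, not a TS7700 or z/OS interface.

```python
class CacheLocation:
    """One "S" consistency point: a cluster's persistent disk cache (TVC)."""
    def __init__(self, name):
        self.name = name
        self.persisted = []      # records hardened to disk cache at this location
        self.available = True

    def persist(self, records):
        if not self.available:
            raise IOError(f"{self.name} is unavailable")
        self.persisted.extend(records)

def synchronize(buffered_writes, primary, secondary, sync_deferred_on_write_failure):
    """Implicit or explicit tape synchronization: both "S" locations must harden
    everything written so far, or the operation fails / goes synchronous-deferred."""
    primary.persist(buffered_writes)          # primary "S" location is written first
    try:
        secondary.persist(buffered_writes)    # then the secondary "S" location
    except IOError:
        if not sync_deferred_on_write_failure:
            raise                             # strict default: the tape operation fails
        return "synchronous-deferred"         # copy caught up later by async replication
    return "synchronized"                     # zero RPO up to this synchronization point
```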

Applications such as DFSMShsm and DFSMSdfp OAM Object Support often store data sets or objects on virtual tape. Then, the host application issues an explicit synchronize operation that forces all previously written records to tape volume cache (TVC). Finally, the host application classifies the data set or object as written to tape and discards the primary host source copy before the volume is closed.
Note: Applications that use data set-style stacking and migration are the expected use case for synchronous mode copy. However, any host application that requires near-zero RPOs can benefit from the synchronous mode copy feature. There is no host software dependency other than the ability to issue an implicit or explicit synchronize command after critical data has been written. The synchronous mode copy or duplexing occurs external to the IBM Z® server, within the TS7700, and relies entirely on the TS7700 Grid network. No FICON channel activity occurs to the secondary duplexed location.
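Continuing the hypothetical sketch above, the host-side pattern of write, explicit synchronize, then discard of the source copy looks roughly like this (again, illustrative only, not a DFSMS interface):

```python
primary = CacheLocation("cluster0")      # local "S" location
secondary = CacheLocation("cluster1")    # remote "S" location

records = ["record-1", "record-2"]       # data set or object content staged by the host

# Explicit synchronize: both "S" locations must harden the records in TVC disk cache.
state = synchronize(records, primary, secondary,
                    sync_deferred_on_write_failure=False)

if state == "synchronized":
    # Only now can the application classify the data as written to tape and
    # discard its primary source copy before the volume is closed.
    print("safe to discard the host source copy")
```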

Supported configurations

To enable synchronous mode copy, create a management class that specifies exactly two grid clusters with the synchronized ("S") copy mode. Synchronous mode copy is configured using the following construct options on the Management Class panel of the TS7700 Management Interface:
Synchronous Mode Copy Settings
These settings specify how the library or virtual tape drive operates when the two "S" locations are not synchronized, and whether the library opens the volume on both TVC clusters when a private mount occurs. These options are available only when synchronous mode copy is enabled.
Default Settings
By default, the synchronous-mode-copy clusters fail mount and tape operations if two copies of a volume cannot be maintained during an update (synchronous failure). When this default behavior is used, a zero RPO is provided for the target workload, independent of failures. Consider the following circumstances when using the default strict synchronization behavior:
  • If the failure to synchronize is detected after the mount has already occurred, then tape operations to the targeted volume fail until a rewind-unload (RUN) occurs and a demount command is issued.
  • If content was written before the synchronization failure, then content on the emulated volume up to the last successful tape synchronization point is considered persistently synchronized and can be accessed later from either "S" consistency point.
  • If either "S" consistency point is unavailable, then scratch mount operations fail.
Synchronous Deferred On Write Failure
Enable this option to permit update operations to continue to any valid consistency point in the grid. If there is a write failure, the failed "S" locations are set to the synchronous-deferred state. After the volume is closed, any synchronous-deferred locations are updated to an equivalent consistency point through asynchronous replication. If the Synchronous Deferred On Write Failure option is not checked and a write failure occurs at either of the "S" locations, then host operations fail.
Note: An "R", "D," or "T" site is chosen as the primary consistency point only when both "S" locations are unavailable.
On Private Mount: Always open single copy
By default, synchronous mode copy opens only one TVC during a private mount. The best TVC choice is used to satisfy the mount, with location preferences in this order: synchronized ("S"), RUN ("R"), deferred ("D"), and time-delayed ("T"). If a write operation occurs, the job enters the synchronous-deferred state regardless of whether the Synchronous Deferred On Write Failure option is enabled.
On Private Mount: Always open both copies
Enable this option to open both previously written "S" locations when a private mount occurs. If one or both "S" locations are on back-end tape, the tape copies are first recalled into disk cache at those locations. The Always open both copies option is useful for applications that require synchronous updates during appends. Private mounts can be affected by cache misses when this option is used. Other circumstances to consider include:
  • If a private mount successfully opens both locations, then all read operations use the primary location. If any read fails, the host read also fails; no failover to the secondary source occurs unless a z/OS DDR swap is initiated.
  • If a write operation occurs, both locations receive write data and must synchronize it to TVC disk during each implicit or explicit synchronization command.
  • If either location fails to synchronize, the host job either fails or enters the synchronous-deferred state, depending on whether the Synchronous Deferred On Write Failure option is enabled.
On Private Mount: Open both copies on z/OS implied update
Open both previously written "S" locations only when the host requests it. This occurs when the mount request from the host specifies either write-from-BOT or update intent.
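Taken together, the three On Private Mount options reduce to a small decision about how many "S" locations a mount opens. The following sketch is illustrative only; the option identifiers and the copies_to_open function are paraphrased assumptions, not product code.

```python
def copies_to_open(mount_type, private_mount_option, host_requests_update):
    """Rough summary of how many 'S' locations a mount opens under each option."""
    if mount_type == "scratch":
        return "both"                      # scratch mounts write from BOT to both "S" TVCs
    if private_mount_option == "always_open_single_copy":
        return "single"                    # default: only the best single TVC choice
    if private_mount_option == "always_open_both_copies":
        return "both"                      # may require recalls from back-end tape first
    if private_mount_option == "open_both_on_implied_update":
        # Both copies only when the mount specifies write-from-BOT or update intent.
        return "both" if host_requests_update else "single"
    raise ValueError("unknown On Private Mount option")
```

For example, copies_to_open("private", "open_both_on_implied_update", False) returns "single", which corresponds to the single-copy behavior noted in footnote 4 of Table 1.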
Table 1 displays valid combinations of the options and mount types, and what happens when one or both "S" locations are unavailable at mount time.
Table 1. Expected results when one or both "S" locations are unavailable at mount time
Scenario | Mount type | Synch Deferred On Write Failure | On Private Mount option set | Dual opened? | Mount delay if either or both "S" are paused or out of physical scratch (1) | Mount result (2) | Enter synch defer on mount finish (2) | Fail when write received (2) | Enter synch defer when write received (2)
1 | Scratch | Disabled | N/A | Both | Yes | Failure | No | N/A | N/A
2 | Scratch | Enabled | N/A | Both | Yes | Success (3) | No | No | Yes
3 | Private | Disabled | Always open both copies | Both | Yes | Success (3) | No | Yes | No
4 | Private | Enabled | Always open both copies | Both | Yes | Success (3) | No | No | Yes
5 | Private | Disabled | Open both copies on z/OS implied update | Both (4) | Yes (5) | Success (3) | No | Yes (5, 7) | No (6)
6 | Private | Enabled | Open both copies on z/OS implied update | Both (4) | Yes (5) | Success (3) | No | No | Yes
7 | Private | Disabled | Always open single copy | Single | No | Success (3) | No | No | Yes
8 | Private | Enabled | Always open single copy | Single | No | Success (3) | No | No | Yes
Notes:
  1. Any delay due to a paused state is applicable only if a recall is required.
  2. Assumes at least one healthy (and consistent for private mount) cluster when both "S" locations are unavailable.
  3. The best TVC choice (preferring locations in this order: "S", "R", "D", and "T") is used to satisfy the mount before it continues.
  4. With Open both copies on z/OS implied update specified: Both when requested by the host; Single when not requested by the host.
  5. With Open both copies on z/OS implied update specified: Yes when requested by the host; No when not requested by the host.
  6. With Open both copies on z/OS implied update specified: No when requested by the host; Yes when not requested by the host.
  7. The library always enters the synchronous-deferred state independent of whether the Synchronous Deferred On Write Failure option is specified.

Primary "S" selection

One location is identified as the primary when both "S" locations are available. The primary location receives all inline host operations and can only buffer read and write operations when remote. All implicit or explicit synchronization operations are sent to the primary location first, then to the secondary location when available. The local cluster always takes precedence over remote clusters for primary identification. If both "S" locations are remote, the "S" location within the same cluster family takes precedence. If both "S" locations are within the same family or are external to the mount-point family, then the existing performance-based and latency-based selection criteria are used. If only one "S" location is available, it takes precedence over all "R", "D", and "T" locations. If neither "S" location is available, the "R", then "D", then "T" locations take precedence. All read operations are always sent to the primary location. If a read fails at any time when two instances of a volume are open, the read does not fail over to the secondary location and the host command fails.
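The precedence rules above amount to a ranking over candidate locations. The following sketch is a minimal illustration under assumed data structures; the Location fields and MODE_RANK are hypothetical, not the TS7700 selection algorithm itself.

```python
from dataclasses import dataclass

@dataclass
class Location:
    cluster: str
    mode: str            # copy mode at this location: "S", "R", "D", or "T"
    family: str          # cluster family the location belongs to
    available: bool
    latency_ms: float    # stand-in for the performance/latency selection criteria

MODE_RANK = {"S": 0, "R": 1, "D": 2, "T": 3}

def choose_primary(locations, mount_cluster, mount_family):
    """Pick the primary consistency point for a mount issued on mount_cluster."""
    candidates = [loc for loc in locations if loc.available]

    def rank(loc):
        return (
            MODE_RANK[loc.mode],            # "S" before "R" before "D" before "T"
            loc.cluster != mount_cluster,   # local cluster first
            loc.family != mount_family,     # then a location in the same cluster family
            loc.latency_ms,                 # then performance/latency-based criteria
        )

    return min(candidates, key=rank) if candidates else None
```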

Synchronous-deferred state

If one or both "S" locations are not in an active, synchronized state and a write operation is permitted to continue, the distributed library enters the synchronous-deferred state. The distributed library remains in this state until all "S" copies managed by the distributed library are replicated using an alternative method. The "S" copies managed by the distributed library are those that were initiated through mounts targeting the distributed library. If one or more distributed libraries enter the synchronous-deferred state, the composite library also enters this state. The composite library exits this state when no distributed libraries are in the synchronous-deferred state. The priority of the synchronous-deferred copy is one level above immediate-deferred.

Use with other functions

Scratch mount candidates
There is no direct association between scratch mount candidates and the clusters that are enabled with synchronous mode copy. If neither "S" location is identified as a scratch mount candidate, then neither is included in the candidate list that is returned to the host.

Copy policy override
The synchronous copy mode takes precedence over any copy policy override settings. If the force local copy override is selected on a cluster that has the No Copy copy mode, the override is ignored and up to two clusters are selected as the TVC.