Resynchronization of target data without stopping replication (spilling)
You can resynchronize individual replication mappings within a subscription without stopping replication for the entire subscription by temporarily spilling changes at the target site.
Spilling changes for a replication mapping allows you to fix a problem with one replication mapping with minimal impact on other mappings in the same subscription. You can follow the best practice of using fewer subscriptions while retaining the ability to control specific replication mappings.
Discrete insert, update, and delete operations for replication mappings that are marked for spill are split out of their UOR as the target server receives the UOR. Those changes are written (spilled) into a different storage repository. This repository is considered temporary storage until the target data store object becomes available to replication again. Data Replication for VSAM can then apply the changes to the target data store object to catch up to the rest of the subscription and return the replication mapping to normal, active replication.
Spilling allows replication to handle changes for unavailable replication mapping targets as they are encountered in the replication log that is being read for the subscription. This avoids the complexity and cost of repositioning the log reader to find those changes later, after the target data store object becomes available.
Each replication mapping for which you request spill processing is spilled independently into its own temporary spill storage.
When the spill command is received at the target, all subsequently received changes are written to the spill queue for the replication mapping. When the UOR is committed, the commit might be written to multiple spill queues if changes were spilled for more than one replication mapping in that UOR.
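The fan-out of a commit to every affected spill queue can be sketched as follows. This is a minimal illustration, not the product's implementation; all names (SpillQueue, SpillTarget, receive_uor) are hypothetical.

```python
class SpillQueue:
    """Per-mapping FIFO of spilled change records (stands in for zFS segment files)."""
    def __init__(self, mapping):
        self.mapping = mapping
        self.records = []

    def write(self, record):
        self.records.append(record)

class SpillTarget:
    def __init__(self):
        self.queues = {}       # mapping name -> SpillQueue
        self.spilling = set()  # mappings currently marked for spill

    def start_spill(self, mapping):
        self.spilling.add(mapping)
        self.queues[mapping] = SpillQueue(mapping)

    def receive_uor(self, uor_id, changes):
        """changes: list of (mapping, operation) pairs in one UOR."""
        touched = set()
        for mapping, op in changes:
            if mapping in self.spilling:
                # Changes for spilled mappings are split out of the UOR.
                self.queues[mapping].write(("change", uor_id, op))
                touched.add(mapping)
            # Changes for active mappings would be applied normally (not modeled).
        # The commit is written to the spill queue of every mapping
        # that had changes spilled in this UOR.
        for mapping in touched:
            self.queues[mapping].write(("commit", uor_id))
```

A UOR touching two spilled mappings and one active mapping thus produces a commit record in exactly two spill queues.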
Changes that are received and staged before the spill command reaches the target server are either drained (applied to the target data store object) or discarded (never applied to the target data store object), depending on the SPILL command that you use. The system uses an empty, serialized dependency for the mapping to ensure that all changes are drained or discarded from the apply cache. The empty dependency is a dependency for a manufactured UOR with no changes, which has a single resource in the first-level hash table: the replication mapping object name. When the writer processes the empty dependency, it simply returns it to the apply service, which allows the replication mapping state to transition to spill-only: at that point, no changes for the replication mapping object name remain staged in the apply cache.
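The empty dependency behaves like a barrier behind the staged work: when it comes back from the writer, everything queued ahead of it for that mapping is known to be gone from the apply cache. A minimal sketch of that barrier pattern, with all names (flush_with_barrier, the sentinel tuple) purely illustrative:

```python
from queue import Queue

def flush_with_barrier(apply_cache: Queue, mapping, handle_change):
    """Process staged work until the mapping's empty-dependency sentinel returns.

    handle_change stands in for draining or discarding a staged change,
    per the SPILL command option. Returns the new mapping state.
    """
    # Manufactured UOR with no changes, keyed on the mapping object name.
    barrier = ("empty-dependency", mapping)
    apply_cache.put(barrier)
    while True:
        item = apply_cache.get()
        if item == barrier:
            # Nothing staged ahead of the barrier remains: safe to transition.
            return "spill-only"
        handle_change(item)
```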
Spilled changes are written into the current spill queue segment file until its size exceeds the SPILLQSEGMENTSZ configuration value. After the next commit is received, the apply service moves to the next sequential spill queue segment file. A very large UOR can cause a segment file to be larger than expected; the segment size is approximate and cannot be used to precisely predict a segment file's size. Segment files are written to zFS and named in monotonically increasing order, starting with Spill00000000. Ordered sequentially, the set of segment files makes up the spill queue, which is effectively a FIFO queue of changes for the replication mapping.
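The rotation rule described above (rotate only at a commit boundary, after the size threshold is exceeded) can be sketched like this. The class and naming scheme are illustrative assumptions, not the product's internals:

```python
class SegmentedSpillQueue:
    def __init__(self, segment_size):
        self.segment_size = segment_size   # plays the role of SPILLQSEGMENTSZ
        self.segments = [[]]               # each segment is a list of records
        self._bytes_in_current = 0

    def _segment_name(self, index):
        # Monotonically increasing names: Spill00000000, Spill00000001, ...
        return f"Spill{index:08d}"

    def write(self, record, is_commit=False):
        self.segments[-1].append(record)
        self._bytes_in_current += len(record)
        # Rotate only at a commit, once the size threshold is exceeded;
        # a large UOR can therefore overshoot the configured size.
        if is_commit and self._bytes_in_current > self.segment_size:
            self.segments.append([])
            self._bytes_in_current = 0
```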
By writing multiple spill queue segment files, Data Replication for VSAM returns space to the file system as data is processed from the spill queue. When all of the changes from a spill queue segment file are applied to the target data store object, the target server removes that file from the file system. Because the apply service might still be spilling incoming changes while it catches up on previously spilled changes, this approach helps avoid out-of-space problems. At the end of spill queue processing, all segment files should be removed from the file system. All spill queue segment files are also removed at the start of spilling, in case old spill files exist for any reason.
When the replication mapping is activated at the source, the target server is notified and starts applying changes from the spill queue, even while still spilling newly received changes. Spilled changes that occurred before the activation time might be skipped. New changes continue to be sent to the target server and spilled, while the apply service simultaneously reads and applies the oldest changes from the spill queue.
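The catch-up behavior can be sketched as a FIFO drain that skips changes older than the activation time. The timestamp scheme and function name are illustrative assumptions:

```python
from collections import deque

def catch_up(spill_queue: deque, activation_time, apply_change):
    """Drain the FIFO spill queue; return the list of changes applied.

    The tail of the queue may still be growing while this loop runs,
    because new changes continue to be spilled during catch-up.
    """
    applied = []
    while spill_queue:
        ts, change = spill_queue.popleft()  # oldest change first (FIFO)
        if ts < activation_time:
            continue  # change predates the activation time: skip it
        apply_change(change)
        applied.append(change)
    return applied
```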
Changes are spilled until the replication mapping returns to active state, which occurs when the last spilled UOR is applied.