Offloading log data from interim storage by freeing and/or moving it to DASD
As one or more connectors write data to a log stream, the log stream's interim
storage medium begins to fill up, eventually reaching or exceeding its high threshold. If that
interim storage medium fills completely, the log stream is not able to accept new IXGWRITE
requests.
Log stream offload processing is the mechanism by which system logger deletes and/or
moves log data from interim storage to offload data sets. This frees up space in the interim storage
medium and ensures that the log stream can continue to accept new IXGWRITE requests.
Understanding offload processing is important for the following reasons:
- You can set the high and low thresholds that control offloading.
- You understand when, why, and how often offloading will occur for your installation.
- You understand when log data is physically deleted from the log stream.
- You can plan coupling facility, log data set, and staging data set sizes to control how often offloading occurs.
- For a coupling facility log stream, system logger offloads data from the coupling facility to DASD log data sets when coupling facility usage for the log stream reaches the high threshold. The high threshold is the point, in percent of coupling facility structure usage for the log stream, at which system logger begins offload processing.
- For a DASD-only log stream, system logger offloads data from local storage buffers to DASD log data sets. The high threshold, however, is the point, in percent, of staging data set usage. Thus, for a DASD-only log stream, offload processing moves log data from local storage buffers to DASD log data sets, but the offloading is actually triggered by staging data set usage.
Other events exist that can trigger an offload. For more information, see Other events that trigger offloading.
For either type of log stream, when a log stream reaches or exceeds its high threshold, system logger begins processing to offload enough of the oldest log stream data to get to the low offload threshold point specified in the log stream definition. Note that a log stream might exceed the high threshold before offloading starts, because applications might keep writing data before system logger can begin offload processing.
The low threshold is the target where offloading is to stop, leaving roughly the specified percentage of log data for that log stream in the coupling facility or staging data set.
For a coupling facility log stream, the high threshold you define for the coupling facility structure also applies to the staging data sets for the log stream; offloading is performed when either a staging data set or the coupling facility structure for a log stream hits its high threshold.
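The threshold mechanics described above can be sketched as a simplified model. The following Python is purely illustrative, not logger's implementation: the class, block-based accounting, and the 80/60 defaults are assumptions chosen to mirror typical HIGHOFFLOAD/LOWOFFLOAD percentages.

```python
from collections import deque

class InterimStorage:
    """Toy model of a log stream's interim storage with offload thresholds."""

    def __init__(self, capacity_blocks, high_pct=80, low_pct=60):
        self.capacity = capacity_blocks
        self.high = high_pct            # HIGHOFFLOAD: offload trigger point
        self.low = low_pct              # LOWOFFLOAD: offload stop target
        self.blocks = deque()           # oldest log data at the left
        self.offloaded = []             # stands in for DASD offload data sets

    def usage_pct(self):
        return 100 * len(self.blocks) / self.capacity

    def write(self, block):
        # A completely full interim storage cannot accept new writes.
        if len(self.blocks) >= self.capacity:
            raise RuntimeError("interim storage full; write would be rejected")
        self.blocks.append(block)
        if self.usage_pct() >= self.high:
            self.offload()

    def offload(self):
        # Offload the oldest data until usage drops to the low threshold.
        target = self.low * self.capacity / 100
        while len(self.blocks) > target:
            self.offloaded.append(self.blocks.popleft())

store = InterimStorage(capacity_blocks=10)   # high=80%, low=60%
for i in range(8):                           # the 8th write reaches 80% usage
    store.write(i)
print(store.usage_pct())                     # back down at the low threshold
print(store.offloaded)                       # the oldest blocks moved first
```

As in the real product, the model offloads the oldest data first and stops at the low threshold rather than draining interim storage completely.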
When an offload is triggered, each system connected to the log stream competes to gain ownership of the offload processing. This competition is controlled internally by system logger. The system that "wins" the competition then performs the offload for the log stream, bounded by the log stream's defined HIGHOFFLOAD and LOWOFFLOAD thresholds.
While offload processing is occurring, system logger applications can still write data to the log stream; system logger accepts and completes write requests while offloading is in progress. Because system logger calculates the amount of data to offload at the time offload processing starts, the amount of data in the coupling facility or staging data set might exceed the target low threshold by the time offloading has finished.
Historically, offload processing comprises the following phases:
- Delete-only phase
- Any data marked for deletion by the exploiter can be physically deleted from the interim storage medium without movement to DASD offload data sets. All log data that resides in interim storage that falls into this category is physically deleted, regardless of the log stream LOWOFFLOAD threshold. The only exception is when a RETPD value other than zero (0) is defined for the log stream. In that case, all the log data in the interim storage is moved to DASD before any physical deletion can occur.
- Movement phase
- Following the delete-only phase, if the amount of physical space consumed in interim storage is still above the LOWOFFLOAD threshold, system logger determines that it needs to begin moving data from the interim storage medium to the log stream secondary storage medium, for example DASD offload data sets. This movement process brings in the additional complexity of managing the DASD offload data sets, and has the following sub-phases:
- When the movement phase begins, the offloading system must first determine whether the current offload data set (that is, the data set currently available for logger to write into) is already allocated and available on the system. If not, a request is initiated internally to drive a logger task to allocate and open the offload data set.
- Once the current offload data set is allocated and ready for use, the offloading system begins reading from the interim storage medium (for example, the coupling facility) and writing the corresponding log data to the current offload data set. As logger confirms that the data was written successfully, it can physically delete the hardened data from interim storage. This process continues until the LOWOFFLOAD threshold target is reached.
- Periodically, the current offload data set fills up. The frequency at which this occurs is based
on various factors, including the size of the data set and the amount of data being moved. When this
occurs, a current offload data set switch is initiated. This consists of:
- Deallocating and closing the current offload data set. Note: After this, the data set is still available for read processing, but can no longer be written into.
- Creating a new offload data set. This is done by allocating a data set with a disposition of NEW,KEEP and then deallocating it. The data set is then allocated again with the SHR disposition and opened for processing.
- Offload data set delete phase
- Offload data sets that are eligible for deletion might be physically deleted at this point. This phase can occur at various points during the offload based on certain factors, such as when logger detects that it is constrained on directory entries.

For more information about when log data sets become eligible for physical deletion,
see Deleting log data and log data sets.
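The delete-only and movement phases described above can be sketched as follows. This is a simplified illustrative model in Python, not logger's implementation; the tuple-based interim storage, the block-count `low_target`, and the RETPD handling are assumptions made for the sketch.

```python
def run_offload(interim, low_target, retpd_days, offload_dataset):
    """Toy sketch of the offload phases.

    `interim` is a list of (data, marked_for_deletion) tuples, oldest first.
    """
    # --- Delete-only phase ---
    # Data marked for deletion is physically deleted without movement to
    # DASD, regardless of the LOWOFFLOAD target. With a nonzero RETPD,
    # this shortcut is skipped: the data must be hardened to DASD first.
    if retpd_days == 0:
        interim[:] = [(d, m) for d, m in interim if not m]

    # --- Movement phase ---
    # If interim storage is still above the LOWOFFLOAD target, move the
    # oldest data to the current offload data set, deleting it from
    # interim storage once the write to DASD is confirmed.
    while len(interim) > low_target:
        data, _ = interim.pop(0)
        offload_dataset.append(data)    # hardened to DASD

    return interim, offload_dataset

interim = [("a", True), ("b", False), ("c", True), ("d", False), ("e", False)]
ds = []
interim, ds = run_offload(interim, low_target=2, retpd_days=0, offload_dataset=ds)
print(interim)   # only the newest unmarked data remains in interim storage
print(ds)        # only "b" needed movement; "a" and "c" were deleted in place
```

Note how the delete-only phase alone can bring the log stream under the low threshold, in which case the movement phase (and its data set management overhead) is skipped entirely.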
When the offload completes, system logger issues an ENF 48 signal.
The LS_ALLOCAHEAD log stream attribute can assist in minimizing the impact of
offload data set allocations and related problems that can inhibit offload progress. This keyword
enables logger to keep an available queue of up to three (advanced-current) offload data sets ready
for offload processing.
When a nonzero value is specified for LS_ALLOCAHEAD, systems with an IXGCNFxx
parmlib policy statement of ALLOCAHEAD(YES) behave as described later. See the IXGCNFxx parmlib
member in
z/OS MVS Initialization and Tuning Reference for more information about the MANAGE OFFLOAD ALLOCAHEAD specification. See LOGR keywords and parameters for the administrative data utility and
z/OS MVS Programming: Assembler Services Reference IAR-XCT for additional details about the log stream LS_ALLOCAHEAD attribute.
When the LS_ALLOCAHEAD value is between 1 and 3 (inclusive), all systems that are connected to the log stream and performing offloads for it proactively allocate up to the intended number of advanced-current offload data sets. These systems also proactively open the current offload data set and the first advanced-current offload data set, to make them as ready as possible.
The following characteristics are true of offloading systems:
- Asynchronously to the offload, an offloading system attempts to make the current offload data set ready (for example, by allocating it SHR and opening it) in preparation for the next offload activity.
- In parallel with the offload, an offloading system attempts to acquire the intended number of advanced-current (that is, newly allocated) offload data sets. In addition, the first advanced-current offload data set is allocated again with the SHR disposition and then opened for write processing when it is needed by the offload thread.
- Normal data set switch processing becomes a "fast path" request:
- Logger will still deallocate the existing current offload data set as described previously.
- A new advanced-current offload data set is already allocated and open, and therefore available for use. This means that a significantly cheaper data set switch occurs in-line during the offload.
Note: If offload movement phase processing ever catches up to the allocate-ahead processing, meaning that there is no next advanced-current offload data set available for use, the previously asynchronous data set request becomes synchronous. The offload must wait for it before resuming movement of log data to DASD.
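The allocate-ahead behavior can be modeled as a ready queue of pre-allocated data sets. The following Python sketch is an assumption-laden illustration, not logger code: the class, the naming scheme, and the synchronous fallback are invented for the example, and real replenishment happens asynchronously rather than inline.

```python
from collections import deque

class OffloadDataSetManager:
    """Toy model of the LS_ALLOCAHEAD advanced-current data set queue."""

    def __init__(self, alloc_ahead):
        self.alloc_ahead = alloc_ahead      # 0..3 advanced-current data sets
        self.seq = 0
        self.current = self._allocate()     # current offload data set
        self.advanced = deque()             # advanced-current ready queue
        self.replenish()

    def _allocate(self):
        self.seq += 1
        return f"offload.ds{self.seq:03d}"

    def replenish(self):
        # In the real product this is done asynchronously to the offload.
        while len(self.advanced) < self.alloc_ahead:
            self.advanced.append(self._allocate())

    def switch(self):
        # The filled current data set is closed and deallocated
        # (still readable, but no longer written into).
        if self.advanced:
            # Fast path: the next data set is already allocated and open.
            self.current = self.advanced.popleft()
        else:
            # Offload caught up to allocate-ahead: the request becomes
            # synchronous, and the offload must wait for the allocation.
            self.current = self._allocate()
        self.replenish()
        return self.current

mgr = OffloadDataSetManager(alloc_ahead=3)
print(mgr.current)      # first data set is current
print(mgr.switch())     # switch is a cheap dequeue from the ready queue
```

With `alloc_ahead=0`, every `switch()` call falls through to the synchronous allocation path, which corresponds to the historical, non-allocate-ahead data set switch.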

Other non-offloading systems that connect to the log stream and specify
ALLOCAHEAD(YES) are notified when the offload completes. At that point, if needed, those systems
allocate SHR and open the then-current offload data set. This ensures that the connected systems
have the data sets ready should they gain control of the next or subsequent offload for the log
stream.
Systems that specify ALLOCAHEAD(NO) in the IXGCNFxx parmlib policy behave as follows:
- An offloading system
- Still needs to allocate new current offload data sets on an as-needed basis.
- Data sets already created for the log stream as advanced-current offload data sets can be used.
- However, these systems do not proactively allocate or open any advanced-current offload data sets created for the log stream.
- Other non-offloading systems
- These systems do not proactively allocate or open ahead any new current offload data sets.

LS_ALLOCAHEAD is also intended to assist the installation by providing earlier
awareness of problems that might impede an offload. By allocating data sets before they are needed,
errors can be surfaced earlier, giving the installation time to address the problem.
It should be noted that even though the LS_ALLOCAHEAD option can be specified for
group PRODUCTION and TEST log streams, the logger offload data set management is better optimized
for PRODUCTION group log streams.
A set of messages serves as an indicator that logger is having problems allocating offload resources. For more information, see Offload and service task monitoring, which identifies these messages and the specific actions that can be taken to resolve hung (or at least excessively delayed) log stream offloads.