Use of temporary storage pools
Temporary storage is the primary CICS® facility for storing data that must be available to multiple transactions.
Data items in temporary storage are kept in queues whose names are assigned by the program that stores the data. A temporary storage queue that contains multiple items can be thought of as a small data set whose records can be addressed either sequentially or directly by item number. If a queue contains only a single item, it can be thought of as a named scratch-pad area.
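For example, a COBOL program might append items to a queue with WRITEQ TS and later read a particular item back directly by item number with READQ TS. The following is a minimal sketch; the queue name MYTSQ01 and the working-storage fields are illustrative only:

       01  WS-RECORD       PIC X(100).
       01  WS-LEN          PIC S9(4) COMP VALUE 100.
       01  WS-ITEM         PIC S9(4) COMP.
       01  WS-RESP         PIC S9(8) COMP.

      * Append an item; CICS creates the queue on the first WRITEQ TS
      * if it does not already exist. ITEM returns the item number
      * that was assigned to the new item.
           EXEC CICS WRITEQ TS QUEUE('MYTSQ01')
                     FROM(WS-RECORD) LENGTH(WS-LEN)
                     ITEM(WS-ITEM) RESP(WS-RESP)
           END-EXEC

      * Read the second item in the queue directly by item number.
           MOVE 100 TO WS-LEN
           EXEC CICS READQ TS QUEUE('MYTSQ01')
                     INTO(WS-RECORD) LENGTH(WS-LEN)
                     ITEM(2) RESP(WS-RESP)
           END-EXEC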
Temporary storage can be located in main storage in the CICS region, in auxiliary storage in a VSAM data set, or in a shared temporary storage pool in a z/OS® coupling facility. In a non-Parallel Sysplex environment, temporary storage queues are defined as either nonrecoverable or recoverable. Nonrecoverable queues typically exist within the virtual storage of a CICS region, which provides better performance than performing I/O to DASD. However, if the CICS region becomes inactive, the data in the queue is lost. Recoverable temporary storage queues must be located in a VSAM data set, so access to them is slower. Additionally, because all updates to a recoverable queue must be logged, the overhead of using recoverable queues is higher.
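As a sketch of how a program chooses between these locations, the MAIN and AUXILIARY options of WRITEQ TS request main storage or VSAM auxiliary storage for the queue (AUXILIARY is the default); whether an auxiliary queue is recoverable is determined by the temporary storage model (TSMODEL) definition that matches the queue name, not by the command. The queue names here are illustrative, and the working-storage fields are the same as in the previous example:

      * Nonrecoverable scratch-pad data: request main storage and
      * avoid DASD I/O (the data is lost if the region terminates).
           EXEC CICS WRITEQ TS QUEUE('SCRATCH1')
                     FROM(WS-RECORD) LENGTH(WS-LEN)
                     MAIN RESP(WS-RESP)
           END-EXEC

      * Data that must survive a failure: write to auxiliary storage;
      * the queue is recoverable only if a matching TSMODEL specifies
      * RECOVERY(YES).
           EXEC CICS WRITEQ TS QUEUE('RECOVQ01')
                     FROM(WS-RECORD) LENGTH(WS-LEN)
                     AUXILIARY RESP(WS-RESP)
           END-EXEC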
If temporary storage data must be passed from a task in one region to a task in another region in a multiregion operation (MRO) environment, a dedicated CICS region (a queue-owning region, or QOR) can be defined and specified to each CICS application-owning region (AOR) that wants to use the queues located in that region. Although a QOR removes the affinity between the two transactions that share data in the temporary storage queue, performance is not as good as with a queue held in the same AOR as the transactions. The function shipping associated with communication between the AOR and the QOR generates extra overhead, and the QOR constitutes a single point of failure: if the QOR fails, all data in the queues that it contains is lost.
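For illustration, one way to direct requests to the QOR is a TSMODEL definition in each AOR that maps a queue-name prefix to the connection for that region. The sketch below uses the CSD update utility (DFHCSDUP) command format; the model name, group, prefix, and connection name are all illustrative:

* In each AOR: queues whose names begin with SHRD are function
* shipped to the region reached through the connection named QOR1.
DEFINE TSMODEL(SHRDQS) GROUP(TSQDEFS) PREFIX(SHRD) REMOTESYSTEM(QOR1)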
CICS transactions that run in an AOR access data in a shared temporary storage pool, held in a coupling facility (CF) structure, through a temporary storage server address space that supports that named pool. You must set up one temporary storage server address space in each z/OS image in the sysplex for each pool that is defined in the CF. All temporary storage pool access is performed by cross-memory calls to the temporary storage server for the named pool. The name of the temporary storage pool that the server supports is specified on the POOLNAME parameter in the temporary storage server address space JCL. You must also specify the number of buffers to allocate for the server address space. To avoid the risk of buffer waits and to reduce the number of CF accesses, you can increase the number of buffers above the default of 10 buffers for each CICS region that can connect to the server. Providing a reasonable number of buffers keeps the most recently used queue index entries in storage: when a READ or WRITE request completes, the queue index information is retained in the buffer, and if the current version of a queue index entry is already in storage when a queue item is read, the request requires only one CF access instead of two.
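The following is a sketch of server startup JCL, assuming the queue server program name DFHXQMN, initialization parameters read from SYSIN, and a BUFFERS parameter; the pool name, data set name, and buffer count are illustrative values that must be adjusted for your installation:

//TSQSRV1  JOB ...
//*  Temporary storage server for pool PRODTSQ1; run one such server
//*  in each z/OS image that uses the pool.
//SERVER   EXEC PGM=DFHXQMN,REGION=0M,TIME=NOLIMIT
//STEPLIB  DD DISP=SHR,DSN=CICSTS.CICS.SDFHAUTH
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
POOLNAME=PRODTSQ1
BUFFERS=500
/*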
For more information about temporary storage, see Temporary Storage. For information about the use of temporary storage and the considerations for avoiding transaction affinities, see Programming techniques and affinity. For more information about defining the shared temporary storage structure, see Defining temporary storage pools for temporary storage data sharing.