Hiperspace caching for PDSE

PDSE Hiperspace caching provides a means for PDSEs to use central storage as a substitute for I/O operations at the expense of CPU utilization.

PDSE Hiperspace caching is a means to improve PDSE performance in cases where the same member or members are accessed repeatedly. When members that are eligible for Hiperspace caching are opened, their pages are placed into the Hiperspace. The Hiperspace provides a means for applications to use central storage as a substitute for I/O operations, in much the same way that the LLA/VLF store reduces I/O for program objects. The primary goal of these forms of caching is to improve application efficiency. The BMF (Buffer Management Facility)/Hiperspace caching order of operations is as follows.

When member pages are requested from a PDSE, the request is propagated through the BMF/Hiperspace cache in order to cut down on I/O processing. If the member pages are found in the Hiperspace, they are returned to the caller and DASD I/O processing is avoided.

DASD I/O is initiated if the member is not found in the Hiperspace. The requested member is fetched and returned to the caller, and its pages are placed into the Hiperspace cache so that I/O processing can be avoided on the next lookup.
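
The hit and miss paths can be pictured with a short sketch. The following C program is only an illustration of the look-aside pattern described above; the fixed-slot table and the routine names (get_member_pages, dasd_read_member) are assumptions made for the example and are not the actual BMF or Hiperspace interfaces.

#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 4

typedef struct {
    char member[9];    /* PDSE member name (up to 8 characters)      */
    char pages[64];    /* stands in for the cached member pages      */
    int  in_use;
} cache_entry;

static cache_entry cache[CACHE_SLOTS];

/* Stand-in for the DASD read that caching is meant to avoid. */
static void dasd_read_member(const char *member, char *out, size_t len)
{
    printf("DASD I/O for member %s\n", member);
    snprintf(out, len, "<pages of %s>", member);
}

/* Return the member's pages: a hit avoids DASD I/O, a miss reads from
 * DASD and then populates the cache for the next lookup. */
static const char *get_member_pages(const char *member)
{
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].in_use && strcmp(cache[i].member, member) == 0) {
            return cache[i].pages;                 /* hit: no DASD I/O */
        }
    }
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (!cache[i].in_use) {                    /* miss: fill a slot */
            snprintf(cache[i].member, sizeof cache[i].member, "%s", member);
            dasd_read_member(member, cache[i].pages, sizeof cache[i].pages);
            cache[i].in_use = 1;
            return cache[i].pages;
        }
    }
    /* Cache full: bypass the cache in this sketch (eviction is covered
     * by the LRU discussion below). */
    static char uncached[64];
    dasd_read_member(member, uncached, sizeof uncached);
    return uncached;
}

int main(void)
{
    get_member_pages("PAYROLL");   /* miss: DASD I/O, cache is populated */
    get_member_pages("PAYROLL");   /* hit: served from the cache, no I/O */
    return 0;
}

Running the sketch drives one simulated DASD read for the first request of a member and serves the second request from the cache, which is the effect the Hiperspace is intended to produce for repeatedly accessed members.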

Hiperspace-cached members are subject to LRU processing, and unreferenced member pages will eventually be flushed from the cache.

It is important to recognize the performance trade-offs associated with Hiperspace caching, where reduced DASD I/O costs are offset by increased CPU and real storage usage. The CPU cost of Hiperspace caching is primarily due to the LRU processing, which periodically evaluates the cache and identifies pages that are eligible for reuse. Additionally, Hiperspace pages are the last to be stolen in a real storage-constrained environment. For this reason, the size of the Hiperspace may have to be limited when real storage is constrained.
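
The cost of the LRU evaluation can be illustrated with a similar sketch. The following C program models the periodic sweep with a per-entry referenced bit: entries hit since the last sweep are kept for another interval, and entries that were not referenced are flushed so that their central storage can be reused. The referenced-bit scheme and the names used here are assumptions for illustration only, not the actual BMF LRU implementation.

#include <stdio.h>

#define CACHE_SLOTS 4

typedef struct {
    int cached;        /* entry currently holds member pages            */
    int referenced;    /* set when the entry is hit between sweeps      */
} cache_entry;

static cache_entry cache[CACHE_SLOTS];

/* Record a hit so the next sweep knows the entry is still in use. */
static void note_reference(int slot)
{
    cache[slot].referenced = 1;
}

/* Periodic sweep: entries not referenced since the last sweep are
 * flushed so their central storage can be reused.  Walking every entry
 * on each sweep is the CPU cost that offsets the saved DASD I/O. */
static void lru_sweep(void)
{
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (!cache[i].cached) {
            continue;
        }
        if (cache[i].referenced) {
            cache[i].referenced = 0;   /* keep it for another interval   */
        } else {
            cache[i].cached = 0;       /* unreferenced: flush the pages  */
            printf("slot %d flushed from the cache\n", i);
        }
    }
}

int main(void)
{
    cache[0].cached = 1;
    cache[1].cached = 1;

    note_reference(0);   /* slot 0 is hit during the interval            */
    lru_sweep();         /* slot 1 was not referenced and is flushed     */
    lru_sweep();         /* slot 0 is flushed after an idle interval     */
    return 0;
}

In the same way that CACHE_SLOTS caps the sketch's table, limiting the size of the Hiperspace bounds the real storage the cache can hold when storage is constrained.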
