Analyzing data in memory

Data in memory (DIM) is a strategy to improve system performance by efficiently using all the elements of the IBM storage hierarchy and the latest advancements in hardware and software technology. These statements summarize the DIM recommendations:

  • The fastest I/O is no I/O.
  • Place as much as you can in processor storage, then access the rest as fast as possible.

You can use several techniques to place programs in processor storage to avoid I/O:

PLPA
The pageable link pack area (PLPA) lets you place programs in common virtual storage. z/OS then manages the processor storage residency of these programs in an LRU manner.
Preload
Some subsystems, most notably CICS and IMS, let application programs be preloaded into their private virtual storage. z/OS then manages the residency of these programs with its LRU working set management.
Virtual fetch
Virtual fetch provides services to create a VIO data set for specified load libraries and to retrieve load modules from the VIO data set. This data set can then reside in expanded storage. IMS uses virtual fetch to hold IMS application programs. When an IMS region requests a particular program, a copy of the program is retrieved from expanded storage and moved into the private virtual storage of the requesting region.
LLA
The library lookaside (LLA) facility uses a virtual lookaside facility (VLF) dataspace to hold the most active modules of linklist and user-specified program libraries. When an address space requests an LLA-managed program that is in the dataspace, the load module is retrieved from VLF instead of from the program library on DASD.

This section describes ways you can get more data into processor storage.

Analyzing dataspace usage

A dataspace is a range of up to 2 gigabytes of contiguous virtual storage addresses that a program can directly manipulate through z/OS instructions. Unlike an address space, a dataspace contains only data; it does not contain common areas or system data or programs. Programs cannot execute in a dataspace, although load modules can reside in a dataspace. To reference the data in a dataspace, a program must be in access register (AR) mode. Up to 15 dataspaces can support an address space at one time.

Using dataspaces minimizes the need to store active data on external devices and increases integrity by separating code from data.

Analyzing hiperspace usage

High performance space (hiperspace) is a data buffer that is backed either in expanded storage only or in expanded storage and auxiliary storage. It can be used for temporary or permanent data. Its maximum size is 2 gigabytes. However, unlike dataspaces, which are limited to 15 active at any time, the number of active hiperspaces for an address space is limited only by address space limits defined in the IEFUSI SMF exit. For response-critical applications requiring large data structures, hiperspaces can provide almost unlimited definable storage, provided that the expanded storage is available.

To analyze the hiperspace usage on your system, create a report using data from the MVSPM_PAGING_H table. You can use data in the ESTOR_HIPER_AVG_MB column to create a report that shows hiperspace usage by hour for multiple days. This report would show the trends throughout the day, if hiperspaces are used, and how much storage is used. You can also create a report using the HS_PAGES_FROM_ES and PAGES_HIPER_TO_ES columns to analyze hiperspace movement to and from expanded storage.
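As a minimal sketch of the hourly-trend analysis described above, the following assumes the relevant rows of the MVSPM_PAGING_H table have been exported as (date, hour, ESTOR_HIPER_AVG_MB) tuples; the sample values are illustrative, not measured data.

```python
# Hourly hiperspace-usage trend: average ESTOR_HIPER_AVG_MB per hour
# across multiple days, as exported from the MVSPM_PAGING_H table.
# The row values below are made up for illustration.
from collections import defaultdict

rows = [
    ("1999-10-08", 9, 120.5), ("1999-10-08", 10, 240.0),
    ("1999-10-09", 9, 130.0), ("1999-10-09", 10, 260.0),
]

def hourly_trend(rows):
    """Average hiperspace storage (MB) per hour across all days."""
    by_hour = defaultdict(list)
    for _date, hour, avg_mb in rows:
        by_hour[hour].append(avg_mb)
    return {h: sum(v) / len(v) for h, v in sorted(by_hour.items())}

print(hourly_trend(rows))  # → {9: 125.25, 10: 250.0}
```

The same grouping applied to the HS_PAGES_FROM_ES and PAGES_HIPER_TO_ES columns shows hiperspace page movement to and from expanded storage by hour.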

Analyzing LLA/VLF

z/OS improves system performance through module placement with LLA and VLF. LLA is a component of z/OS that you can use to increase library performance, system availability, and system usability. With LLA, you can eliminate the I/O for directory searches of your often-fetched, read-only production load libraries, and reduce the I/O needed to fetch modules from those libraries.

VLF is a service of z/OS that lets you create and retrieve named data objects, such as members of a partitioned data set, in virtual storage. VLF uses dataspaces to keep large amounts of data in virtual storage. For certain applications, VLF reduces I/O for repeated retrieval of members from frequently used data sets and improves response time. Although VLF is an alternative to existing I/O functions, it does not replace them.

LLA uses VLF. Without VLF, LLA eliminates I/O for directory search and manages the data sets; with VLF, LLA also reduces the fetch I/O. Note that there are other exploiters of VLF, such as TSO/E and RACF. Carefully evaluate the use of LLA for libraries that are managed by other users of VLF.

The MVSPM VLF Dataspace Usage, Hourly Trend report shows various measurements for VLF for a selected period.

Figure 1. MVSPM VLF Dataspace Usage, Hourly Trend report

                   MVSPM  VLF Dataspace Usage, Hourly Trend
                      System: 'NRD1'  Period: PERIOD_NAME
                              Date: '1999-10-09'


                   Storage            Storage   Storage   Storage    Largest
   VLF             used avg  MAXVIRT  used min  used max  used avg   object
  class     Time     (%)      (MB)      (MB)      (MB)      (MB)      (MB)
  --------  -----  --------  -------  --------  --------  --------  --------
  CSVLLA    02:00        92     16.0    14.578    14.961    14.706     2.488
            03:00        91     16.0    14.586    14.602    14.598     2.488
            04:00        92     16.0    14.613    14.668    14.641     2.488
            05:00        91     16.0    13.434    14.992    14.603     2.488
            06:00        84     16.0    13.434    13.434    13.434     2.488
            07:00        89     16.0    13.859    14.297    14.178     2.488
            08:00        87     16.0    12.750    14.984    13.848     2.488
            09:00        92     16.0    14.473    14.809    14.643     2.488
            10:00        16     16.0     0.000     4.832     2.584     2.488
            11:00        36     16.0     5.406     6.148     5.750     2.488
            12:00        41     16.0     6.391     6.867     6.598     2.488
            13:00        57     16.0     8.582     9.242     9.063     2.488
            14:00        62     16.0     9.242    10.879     9.986     2.488
            15:00        79     16.0    12.055    13.609    12.624     2.488
            16:00        85     16.0    13.609    13.625    13.617     2.488
            17:00        85     16.0    13.625    13.633    13.629     2.488
            18:00        85     16.0    13.633    13.633    13.633     2.488
            19:00        85     16.0    13.633    13.633    13.633     2.488
            20:00        85     16.0    13.633    13.633    13.633     2.488
            21:00        85     16.0    13.633    13.633    13.633     2.488
            22:00        85     16.0    13.633    13.633    13.633     2.488
            23:00        85     16.0    13.664    13.664    13.664     2.488


               IBM Z Performance and Capacity Analytics Report: MVSPM65

Note: LLA always shows a hit rate greater than 99%, because LLA itself manages which objects VLF handles and requests from VLF only those objects that it has passed to VLF.
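The "Storage used avg (%)" column in the report is simply the average used storage relative to the MAXVIRT limit of the VLF class. A minimal check of that relationship, using values taken from the CSVLLA rows above:

```python
# Reproduce the "Storage used avg (%)" column of the report:
# percentage = average used storage (MB) / MAXVIRT (MB) * 100, rounded.
def used_pct(used_avg_mb, maxvirt_mb):
    return round(100.0 * used_avg_mb / maxvirt_mb)

print(used_pct(14.706, 16.0))  # 02:00 row → 92
print(used_pct(13.434, 16.0))  # 06:00 row → 84
```

A sustained value near 100% suggests that the MAXVIRT value for the class may be too small to hold its working set.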

Analyzing virtual I/O (VIO)

To improve system performance by eliminating much of the overhead and time required to allocate a device and move data physically between main storage and an I/O device, z/OS provides virtual input/output (VIO). A VIO operation uses the system paging routines to transfer data into and out of a page data set. VIO can be used only for temporary data sets, which store data for the duration of the current job.

VIO uses paging rather than explicit I/O to transfer data. It eliminates the channel program translation and page fixing done by the EXCP driver, and it avoids some of the device allocation and data management overhead. It also allocates DASD space dynamically as it is needed. Another advantage of VIO is that the data set can remain in processor (central or expanded) storage after it is created, because RSM attempts to keep the pages in central storage as long as possible.
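As a sketch, a temporary data set can be directed to VIO by coding a VIO-eligible unit name on its DD statement; this assumes the installation has defined such a unit name (conventionally VIO) in its eligible device table:

```
//TEMPDD   DD  DSN=&&WORK,UNIT=VIO,DISP=(NEW,DELETE),
//             SPACE=(CYL,(5,5)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
```

Because the data set name is a temporary name (&&WORK) and DISP=(NEW,DELETE) is specified, the data exists only for the life of the job and is backed by the paging subsystem rather than by a conventionally allocated DASD data set.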