Processing techniques
The information in this topic is for planning purposes only. It is not absolute or exact regarding storage requirements. You should use it only as a guideline for estimating storage requirements. Individual observations might vary depending on specific implementations and processing.
System-managed buffering (SMB), a feature of DFSMSdfp, supports batch application processing. SMB takes the following actions:
- It changes the defaults for processing VSAM data sets. This enables the system to take better advantage of current and future hardware technology.
- It initiates a buffering technique that the application program does not specify, to improve application performance.
SMB uses one of the following four processing techniques:
- Direct Optimized (DO)
- Sequential Optimized (SO)
- Direct Weighted (DW)
- Sequential Weighted (SW)
Direct Optimized (DO). The DO processing technique optimizes for totally random record access and is appropriate for applications that access records in a data set in random order. This technique overrides the user specification for nonshared resources (NSR) buffering with a local shared resources (LSR) implementation of buffering. You can tune DO processing with the following options:
- SMBVSP. This option specifies the amount of virtual storage to obtain for data component buffers when a data set is opened. Without SMBVSP specified, SMB creates a data component virtual buffer size of up to 100M based on the size of the data set. You can specify the virtual buffer size in kilobytes from 1K to 2048000K, or in megabytes from 1M to 2048M. You can also use SMBVSP to improve performance when too few index buffers were allocated for a data set that grows from small to large over time without being closed and reopened. When SMBVSP is specified without SMBVSPI, it also defines a minimum buffer size for the data set's index records. With SMBVSP specified, but without SMBVSPI, the buffer size for the data set's index records is the larger of the following:
- 20% of the SMBVSP value.
- Enough buffers to contain all of the index records currently in the data set (up to 65535 index records).

A LISTCAT of the data set can be used to determine the size of the index component buffer that will be created, based on the data set's index record count. Based on the LISTCAT's index section:
- If REC-TOTAL is less than 65535, the index component buffer size, in bytes, will be equal to the HI-U-RBA value.
- If REC-TOTAL is greater than or equal to 65535, the index component buffer size, in bytes, will be equal to CISIZE * 65535.

- SMBVSPI. This option specifies the amount of virtual storage to obtain for index buffers when an index is opened. Without SMBVSPI, SMB creates an index component virtual buffer size large enough to accommodate all records in the index component (up to 65535 index records). You can specify the virtual buffer size in kilobytes from 1K to 2048000K, or in megabytes from 1M to 2048M. You can use SMBVSPI to control the pool size that is built for the index so that virtual storage is not exhausted. You can also use SMBVSPI to increase the index pool size so that there are enough buffers for an index that grows significantly after it is initially opened. SMBVSPI can be used by itself or with the SMBVSP parameter. SMBVSPI takes precedence over SMBVSP for controlling the virtual storage for the index buffers. When both parameters are used together, SMBVSPI controls the pool size for the index and SMBVSP controls the pool size for the data component. SMB performs best when enough virtual storage is given to contain all of the index.
- SMBDFR. This option specifies deferred write processing. By using SMBDFR, you can defer writing buffers to the medium until either of the following situations occurs:
- The buffer is required for a different request.
- The data set is closed.
- SMBHWT. This option specifies Hiperspace buffering. You can specify a whole decimal value from 1 to 99 for allocating Hiperspace buffers. The allocation is based on a multiple of the number of virtual buffers that have been allocated.
- You can specify SMBDFR and SMBHWT through the JCL AMP parameter, as shown in the sketch after this list. See z/OS MVS JCL Reference for details.
- You can specify SMBVSP through the ISMF data class. See z/OS DFSMS Using the Interactive Storage Management Facility for details.
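For illustration only, the following sketch shows how these options might be coded on the JCL AMP parameter for a data set that SMB processes with the Direct Optimized technique. The DD and data set names and the values shown are assumptions chosen for the example, not recommendations.

```
//* Request Direct Optimized processing and tune its buffering:
//*   SMBVSP=100M - cap the data component buffer pool at 100 MB
//*   SMBDFR=Y    - defer buffer writes until needed or until CLOSE
//*   SMBHWT=2    - Hiperspace buffers at 2x the virtual buffer count
//KSDSDD   DD DSN=MY.VSAM.KSDS,DISP=SHR,
//         AMP=('ACCBIAS=DO,SMBVSP=100M,SMBDFR=Y,SMBHWT=2')
```

With this combination, SMB applies the buffer pool size, deferred write, and Hiperspace settings when the data set is opened.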
Sequential Optimized (SO). The SO technique optimizes processing for record access that is in sequential order. This is appropriate for backup and for applications that read the entire data set or a large percentage of the records in sequential order.
Direct Weighted (DW). The majority of the processing is direct; some is sequential. DW processing provides the minimum read-ahead buffers for sequential retrieval and the maximum index buffers for direct requests.
Sequential Weighted (SW). The majority of the processing is sequential; some is direct. This technique uses read-ahead buffers for sequential requests and provides additional index buffers for direct requests. The read-ahead is not as large as the amount of data transferred with SO.
To implement SMB, an application program must specify nonshared resources (NSR) buffering, ACB MACRF=(NSR). The system does not apply SMB when any VSAM data set is opened with a request for any other buffering option, MACRF=(LSR|GSR|UBF|RLS).
The basis for the default technique is the application specification for ACB MACRF=(DIR,SEQ,SKP). Also, specification of the following values in the associated storage class (SC) influences the default technique:
- Direct millisecond response
- Direct bias
- Sequential millisecond response
- Sequential bias
You can specify the technique externally by using the ACCBIAS subparameter of the AMP= parameter. The system invokes the function only during data set OPEN processing. After SMB makes the initial decisions during that process, it has no further involvement.
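As a sketch of that external specification (the DD and data set names are assumptions), the ACCBIAS subparameter can name one of the four techniques directly, in this case Sequential Optimized for a job that reads the data set from beginning to end:

```
//* Explicitly request the Sequential Optimized (SO) technique for
//* this open, overriding the technique SMB would otherwise select.
//BACKUP   DD DSN=MY.VSAM.KSDS,DISP=SHR,AMP=('ACCBIAS=SO')
```

When a specific technique is not requested in this way, SMB selects one as shown in the following table.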
BIAS selection based on the ACB MACRF= options (rows) and the MSR/BIAS value specified in the storage class (columns):

| MACRF Options | SEQ | DIR | Both | None |
|---|---|---|---|---|
| DIR | DW | DO | DO | DO |
| SEQ (default) | SO | SW | SO | SO |
| SKP | DW | DW | DW | DW |
| (SEQ,SKP) | SO | SW | SW | SW |
| (DIR,SEQ), (DIR,SKP), or (DIR,SEQ,SKP) | SW | DW | DW | DW |
- DO = Direct Optimized
- DW = Direct Weighted
- SO = Sequential Optimized
- SW = Sequential Weighted
When ACCBIAS=DO is explicitly requested on the JCL AMP parameter, SMB might fall back to DW if there is not enough storage. To avoid this situation, use either of two techniques:
- Allocate more storage for the job.
- Specify SMBVSP=xx on the JCL AMP parameter to limit the amount of storage that SMB uses for DO, as sketched below. For details of how to use SMBVSP, see the related topic.
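The following sketch illustrates the second technique; the DD and data set names and the 300M cap are assumptions chosen for illustration:

```
//* Request Direct Optimized processing but cap the data component
//* buffer pool at 300 MB so that DO does not exhaust virtual storage.
//KSDSDD   DD DSN=MY.VSAM.KSDS,DISP=SHR,
//         AMP=('ACCBIAS=DO,SMBVSP=300M')
```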
If you request SMB and specify MSG=SMBBIAS on the JCL AMP parameter, VSAM OPEN issues message IEC161I 001 to indicate which access bias SMB chose.
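A DD statement of the following general form requests this message. The DD and data set names are taken from the sample job log that follows; the rest of the statement is an assumption for illustration.

```
//* Ask VSAM OPEN to report the access bias that SMB selects
//* (message IEC161I 001).
//DD1      DD DSN=IBMUSER.TEST.BASE1,DISP=SHR,AMP=('MSG=SMBBIAS')
```

The following output is an example of message IEC161I 001: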
J E S 2 J O B L O G -- S Y S T E M 3 0 9 0 -- N O D E S J P L 3 7 2
16.59.12 JOB00019 ---- WEDNESDAY, 26 APR 2006 ----
16.59.12 JOB00019 IRR010I USERID IBMUSER IS ASSIGNED TO THIS JOB.
16.59.12 JOB00019 ICH70001I IBMUSER LAST ACCESS AT 16:57:04 ON WEDNESDAY, APRIL 26, 2006
16.59.12 JOB00019 $HASP373 OPENCLOS STARTED - INIT 1 - CLASS A - SYS 3090
16.59.24 JOB00019 IEC161I 001(DW)- 255,OPENCLOS,TESTIT,DD1,,,IBMUSER.TEST.BASE1,, 547
547 IEC161I SYS1.MVSRES.MASTCAT