Dump Data Set Processing

IPCS determines whether the source data set should be treated as a system dump by comparing it against the following criteria:
  • The dump data must be stored on a data set with sequential (PS), direct (DA), or unidentified (*) organization. With z/OS® R2 and later, IPCS also allows data stored on z/OS UNIX file systems to be accessed.
  • The logical record length (LRECL) must be 4160 bytes.
  • The data set must have one of the following combinations of record format (RECFM) and block size (BLKSIZE):
    • RECFM=F,BLKSIZE=4160
    • RECFM=FB,BLKSIZE=4160
    • RECFM=FBS,BLKSIZE=n*4160   where n=1,2,...

If the data set meets these criteria, IPCS provides dump processing, meaning that IPCS simulates system services, such as dynamic address translation and control block formatting, when processing the source.
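The recognition criteria above can be sketched in Python. This is an illustrative model only; the function name and attribute encodings are invented and are not part of IPCS:

```python
def looks_like_dump(dsorg, lrecl, recfm, blksize):
    """Model of the dump-recognition criteria: organization, record
    length, and the RECFM/BLKSIZE combination must all match."""
    if dsorg not in ("PS", "DA", "*"):   # sequential, direct, or unidentified
        return False
    if lrecl != 4160:                    # LRECL must be exactly 4160 bytes
        return False
    if recfm in ("F", "FB"):
        return blksize == 4160           # one 4160-byte record per block
    if recfm == "FBS":
        # standard blocked: BLKSIZE = n * 4160 for n = 1, 2, ...
        return blksize >= 4160 and blksize % 4160 == 0
    return False
```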

Files stored in a z/OS UNIX file system that may contain dumps can be accessed by IPCS users through their path names. The size of the z/OS UNIX file must be a multiple of 4160 bytes. There are two methods by which IPCS can access these files: the FILE keyword and the PATH keyword.
  • FILE support is extended in z/OS Release 2 to allow the association of a z/OS ddname with a full path name of 3 to 255 characters. IPCS uses the ddname to access the file.
    ALLOCATE FILE(MYDD) PATH('/u/x')
    OPEN FILE(MYDD) DEFAULT

    This support is available also for systems at OS/390® Release 10 and higher with APAR OW44412.

  • PATH support is added in z/OS Release 2. IPCS accepts any valid path name up to 44 characters in length. The limit is applied after implicit qualifiers, if any, are resolved.
    OPEN PATH('/u/x') DEFAULT

    If partially qualified path names are used, IPCS determines the fully qualified path names.
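The size requirement for z/OS UNIX files can be expressed as a small check. This is an illustrative sketch; the function names are invented, not an IPCS interface:

```python
import os

DUMP_RECORD_LENGTH = 4160  # bytes per dump record

def valid_dump_size(size):
    """True when size is a positive whole multiple of 4160 bytes."""
    return size > 0 and size % DUMP_RECORD_LENGTH == 0

def eligible_unix_dump(path):
    """A z/OS UNIX file can hold a dump only if its byte count is a
    whole multiple of the dump record length."""
    return valid_dump_size(os.path.getsize(path))
```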

Dumps that are in extended format data sets instead of basic or large format data sets have these advantages:
  • Greater capacity than sequential data sets.
  • Support for striping.
  • Support for compression.

Some dump data sets are quite large compared with other data sets generated by a system. The capacity of an extended format data set is enough to hold the largest stand-alone dumps, as much as 128 gigabytes.

Striping spreads sections, or stripes, of a data set across multiple volumes and uses independent paths, if available, to those volumes. The multiple volumes and independent paths accelerate sequential reading and writing of the data set, reducing the time during which dump and trace I/O competes with production I/O.

In a striped data set, stripes are placed on the first volume, the second volume, and so on through the last volume; when the last volume receives a stripe, placement returns to the first volume and the cycle repeats. If n volumes are used, striping allows sequential access to the data set at nearly n times the rate at which a single volume data set can be processed. The faster processing speeds up moving dump data from relatively expensive data space storage to less expensive DASD.
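The round-robin placement can be modeled in a few lines. This is a sketch of the access pattern only, not of actual DFSMS striping internals:

```python
def stripe_volume(block_index, n_volumes):
    """Round-robin striping: sequential block i lands on volume
    i mod n, at position i // n within that volume."""
    return block_index % n_volumes

# With 3 volumes, blocks 0..8 map to volumes 0,1,2,0,1,2,0,1,2,
# so a sequential read keeps all 3 volumes busy at once and can
# approach 3 times single-volume throughput.
layout = [stripe_volume(i, 3) for i in range(9)]
```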

Compression allows dump data sets to use less DASD space. Before using compression, however, remember that compression and decompression trade off processing cycles for more efficient use of DASD. If software compression is used because hardware compression is not available, the number of processing cycles is significantly higher and noticeably impacts response time.
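The trade-off can be demonstrated with ordinary software compression. Here zlib stands in for whatever compression service the system provides; the figures are illustrative, not IPCS measurements:

```python
import time
import zlib

def compress_stats(data, level=6):
    """Return (size ratio, CPU seconds) for compressing data:
    a smaller ratio means less DASD used, while more CPU seconds
    means more processing cycles spent."""
    start = time.process_time()
    packed = zlib.compress(data, level)
    cpu = time.process_time() - start
    return len(packed) / len(data), cpu

# Dump storage is often repetitive (zeroed pages, patterned control
# blocks), so it tends to compress well.
sample = (b"\x00" * 2080 + b"DUMP" * 520) * 100   # 100 records of 4160 bytes
ratio, cpu_seconds = compress_stats(sample)
```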