AMAPDUPL: Problem Documentation Upload Utility

The IBM® z/OS® Problem Documentation Upload Utility (PDUU) sends large amounts of diagnostic documentation to IBM sites more efficiently than transferring a single large data set. The utility sections the input data set (such as a stand-alone dump data set) into smaller data sets that are compressed and sent in parallel over multiple, simultaneous transfer sessions, which shortens the transmission time for very large data sets. You can also encrypt the data sets. The sessions can send diagnostic documentation to IBM using File Transfer Protocol (FTP) or Hypertext Transfer Protocol Secure (HTTPS).
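As background for the parameters discussed in this section, a typical invocation is a batch job that points SYSUT1 at the input data set and supplies the transfer parameters in SYSIN. The following is a minimal sketch, not a definitive example: the job statement, data set names, host name, credentials, and directory are placeholders, and keywords such as TARGET_SYS, TARGET_DSN, USERID, PASSWORD, WORK_DSN, and DIRECTORY are drawn from typical AMAPDUPL examples rather than from this section (only CC_FTP, CC_HTTPS, and WORK_SIZE are described here), so verify the exact keywords against the documentation for your z/OS level.

//UPLOAD   JOB (ACCT),'PDUU UPLOAD',MSGCLASS=H
//PDUU     EXEC PGM=AMAPDUPL
//SYSPRINT DD   SYSOUT=*
//SYSUT1   DD   DISP=SHR,DSN=MY.SADUMP.DATA
//SYSIN    DD   *
USERID=anonymous
PASSWORD=user@example.com
TARGET_SYS=host.example.com
TARGET_DSN=TS001234.bigdump
WORK_DSN=MY.UPLOAD.WORK
CC_FTP=04
DIRECTORY=/toibm/mvs/
/*

In this sketch, SYSUT1 identifies the input dump, CC_FTP=04 requests four parallel FTP sessions, and the remaining statements identify the destination system, file name, and directory. Work buffer sizing is discussed later in this section.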

There are two work buffers for each transfer session (the "A" buffer and the "B" buffer). Each "A" work buffer is filled by copying records from the input data set. When the "A" buffers are full, the transfer sessions start sending them in parallel. At the same time, each "B" work buffer is filled by copying records from the input data set. When a "B" buffer is full and the transfer of the corresponding "A" buffer is complete, transfer of that "B" buffer starts. The process alternates between the "A" and "B" buffers until the entire input data set is sent.

You can run up to 20 transfer sessions simultaneously; you specify the number of sessions with the CC_FTP or CC_HTTPS parameter.
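For example, either of the following SYSIN statements (shown in isolation; the value is illustrative) requests three parallel sessions; use the one that matches the transport you are using:

CC_FTP=03
CC_HTTPS=03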

For FTP sessions, data is buffered into work data sets. The work data sets are dynamically allocated and can range in size from 1 MB to 9,999 MB. You can experiment to see what works best in your environment, but here are some guidelines (a sample SYSIN fragment follows the list):
  • Start with three or four parallel FTP sessions. Too many parallel FTP sessions can saturate the network link.
  • Use medium-size work data sets.
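A SYSIN fragment that follows these guidelines might look like the sketch below. The WORK_DSN_SIZE keyword (the work data set size in MB) is taken from typical AMAPDUPL examples rather than from this section, so treat it as an assumption and confirm the keyword and a suitable value for your environment:

CC_FTP=04
WORK_DSN=MY.UPLOAD.WORK
WORK_DSN_SIZE=500

Four parallel sessions with 500 MB work data sets is a medium-size starting point; adjust it based on the capacity of your network link.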

For HTTPS sessions, data is buffered in 31-bit storage. When choosing a WORK_SIZE value, remember that the available private storage may be limited (it is managed on an installation basis) and that the WORK_SIZE value is used as the size of each buffer.
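As an illustrative sketch only (the values are examples, not recommendations, and WORK_SIZE is assumed here to be specified in MB, as for the FTP work data sets), an HTTPS transfer with three parallel sessions and 100 MB buffers might be requested with:

CC_HTTPS=03
WORK_SIZE=100

Because each session uses an "A" and a "B" buffer, three sessions at 100 MB each need roughly 600 MB of 31-bit private storage; size WORK_SIZE so that this total fits comfortably within your installation's region limits.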

Each WORK_SIZE buffer sent to IBM results in the creation of a numbered file that IBM uses to recreate the original data set for diagnosis. If the WORK_SIZE is very small in relation to the input data set, you can end up with too many files on the IBM site. For example, if you are sending a 100 GB z/OS stand-alone dump and make the work data set size 1 MB, PDUU attempts to create 100,000 files on the IBM site, which exceeds the IBM limit of 99,999 files. It also adds considerable delay, because the transfer sessions must be started and stopped for each file.

If the work buffers are very large in relation to the input data set size, the amount of overlap time is decreased. When the program first starts, it must fill the "A" work buffer before it transmits any data, so that initial copy time does not overlap with any transfer. For example, if you send a 1 GB dump and set the work data set size to 1 GB (1,000 MB), there is no overlap between copying the records and sending the work files.
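Putting these two considerations together, choose a work buffer size that is a small fraction of the input data set, but not so small that it produces an excessive number of files. As an illustrative sketch (values are examples, not recommendations, and the WORK_DSN_SIZE keyword is assumed from typical AMAPDUPL examples), a 100 GB stand-alone dump sent over FTP with 1,000 MB work data sets produces roughly 100 numbered files, well under the 99,999-file limit, while still allowing the copy of the next buffer to overlap with the transfer of the current one:

CC_FTP=04
WORK_DSN_SIZE=1000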

If the input data set is a partitioned data set (PDS/PDSE), PDUU first unloads it into a sequential data set by using the IEBCOPY utility.

PDUU typically compresses the input data before it is written to the work buffer; therefore, it is counterproductive to compress the input data set with a tool such as AMATERSE or TRSMAIN before using PDUU to send it to the IBM site. If a file is already tersed, PDUU does not compress it further. Overall, tersing a file with AMATERSE and then sending it with PDUU takes longer than having PDUU compress and send the untersed file.