Creating the System Data Engine batch job for writing DCOLLECT data to data sets

To run the Z Common Data Provider System Data Engine in batch mode so that it writes its output to a file rather than streaming it to the Data Streamer, you must create a job that loads DCOLLECT data in batch. To create this job, copy the sample job HBOJBDCO from the hlq.SHBOSAMP library and update the copy.

Procedure

To create the job, complete the following steps:

  1. Copy the job HBOJBDCO from the hlq.SHBOSAMP library to a user job library.
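    If you prefer to copy the member in batch rather than through ISPF, an IEBCOPY job similar to the following sketch can do it; the job card and the target library USER.JOBLIB are hypothetical values that you replace with your own:
    //COPYJOB  JOB (ACCT),'COPY HBOJBDCO',CLASS=A,MSGCLASS=X
    //* Copy member HBOJBDCO from the sample library to a user job library
    //COPY     EXEC PGM=IEBCOPY
    //SYSPRINT DD SYSOUT=*
    //IN       DD DISP=SHR,DSN=hlq.SHBOSAMP
    //OUT      DD DISP=SHR,DSN=USER.JOBLIB
    //SYSIN    DD *
      COPY OUTDD=OUT,INDD=IN
      SELECT MEMBER=HBOJBDCO
    /*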
  2. Update the job card according to your site standards.
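    For illustration only, a job card might look like the following; the job name, accounting information, and class values are placeholders that you replace according to your site standards:
    //HBOJBDCO JOB (ACCT),'SDE DCOLLECT BATCH',CLASS=A,MSGCLASS=X,
    //         NOTIFY=&SYSUID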
  3. If the job must have an affinity to a specific TCP/IP stack, add the name of the stack to the end of the HBOTCAFF job step, where TPNAME in the following statement is a placeholder for the stack name:
    //HBOTCAFF EXEC PGM=BPXTCAFF,PARM=TPNAME
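    For example, if the stack is named TCPIP1 (a hypothetical name), the statement becomes:
    //HBOTCAFF EXEC PGM=BPXTCAFF,PARM=TCPIP1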
  4. To enable the zIIP offloading function to run eligible code on zIIPs, specify ZIIPOFFLOAD=YES in the PARM parameter of the EXEC statement.
    For more information about the zIIP offloading function, see Offloading the System Data Engine code to System z Integrated Information Processors.
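    As a sketch, assuming that the System Data Engine step in the sample job runs program HBOPDE (the step name HBOSDE is likewise illustrative), the updated EXEC statement might look like this:
    //HBOSDE   EXEC PGM=HBOPDE,PARM='ZIIPOFFLOAD=YES'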
  5. Update the following STEPLIB DD statement so that it refers to the hlq.SHBOLOAD data set, where hlq is the high-level qualifier of your System Data Engine installation:
    //STEPLIB  DD DISP=SHR,DSN=HBOvrm.SHBOLOAD
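    For example, if the high-level qualifier of your installation is HBO210 (a hypothetical value), the updated statement is:
    //STEPLIB  DD DISP=SHR,DSN=HBO210.SHBOLOAD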
  6. The sample job that you copied already contains the output DD statements for the 18 supported DCOLLECT record types. For more information about these record types, see DCOLLECT Data stream reference. If you add a new DCOLLECT record type in the future, you must also add the DD statements for that record type; the SET IBM_FILE statements in HBOUDCOL show the DD names to use, as illustrated after this step.
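    As an illustration only, each output DD statement names a data set to receive one record type, under a DD name that matches a SET IBM_FILE statement in HBOUDCOL; the DD name DCOLD and the data set characteristics below are hypothetical:
    //* Hypothetical output DD for one DCOLLECT record type
    //DCOLD    DD DSN=hlq.SDE.DCOLD,DISP=(NEW,CATLG,DELETE),
    //         UNIT=SYSDA,SPACE=(CYL,(10,10)),
    //         DCB=(RECFM=VB,LRECL=32756,BLKSIZE=0)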
  7. Update the HBOLOG DD statement to specify the DCOLLECT log file name.
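    For example, if the DCOLLECT output is in a data set named IDC.DCOLLECT.OUTPUT (a hypothetical name), the statement becomes:
    //HBOLOG   DD DISP=SHR,DSN=IDC.DCOLLECT.OUTPUT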