Creating the System Data Engine batch job for writing DCOLLECT data to data sets
To run the Z Common Data Provider System Data Engine in batch mode so that it writes its output to a file, rather than streaming it to the Data Streamer, you must create the job for loading DCOLLECT data in batch. You can create this job by copying the sample job HBOJBDCO in the hlq.SHBOSAMP library and updating the copy.
To create the job, complete the following steps:
- Copy the job HBOJBDCO from the hlq.SHBOSAMP library to a user job library.
- Update the job card according to your site standards.
- If the job must have an affinity to a specific TCP/IP stack, add the name of the stack to the end of the HBOTCAFF job step, for example:
//HBOTCAFF EXEC PGM=BPXTCAFF,PARM=TPNAME
- To enable the zIIP offloading function to run eligible code on zIIPs, specify ZIIPOFFLOAD=YES in the PARM parameter of the EXEC statement. For more information about the zIIP offloading function, see Offloading the System Data Engine code to System z Integrated Information Processors.
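As a sketch only, assuming the System Data Engine step in your copied job is named HBOSDE and runs program HBOPDE (check both names against your copy of HBOJBDCO), the EXEC statement with zIIP offloading enabled might look like this:

```
//* Illustrative step and program names; verify against your sample job
//HBOSDE  EXEC PGM=HBOPDE,PARM='ZIIPOFFLOAD=YES'
```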
- Update the following STEPLIB DD statement to refer to the System Data Engine load library:
//STEPLIB DD DISP=SHR,DSN=HBOvrm.LOAD
- The output DD statements related to 18 record types are already specified in the sample job that you copied. For more information about the 18 record types, see DCOLLECT Data stream reference. If you add a new DCOLLECT record type in the future, you must add DD statements for the new record type. Refer to the SET IBM_FILE statements in HBOUDCOL for more information.
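As an illustration (the DD name DCOLNEW and the data set name hlq.DCOLLECT.NEWREC are hypothetical), if a SET IBM_FILE statement in HBOUDCOL names a file DCOLNEW for a new record type, the job would need a matching output DD statement such as:

```
//* Hypothetical DD for a new record type; the DD name must match
//* the file name given in the SET IBM_FILE statement in HBOUDCOL
//DCOLNEW  DD DISP=SHR,DSN=hlq.DCOLLECT.NEWREC
```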
- Update the HBOLOG DD statement to specify the DCOLLECT log file name.
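For example, assuming the DCOLLECT output data set at your site is named hlq.DCOLLECT.DATA (a hypothetical name), the updated statement might be:

```
//* Point HBOLOG at your site's DCOLLECT log data set
//HBOLOG  DD DISP=SHR,DSN=hlq.DCOLLECT.DATA
```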