To run the Z Common Data Provider System Data Engine in batch mode so that it writes its output to a file, rather than streaming it to the Data Streamer, you must create the job for loading DCOLLECT data in batch. You can create this job by copying the sample job HBOJBDCO in the hlq.SHBOSAMP library and updating the copy.
Procedure
To create the job, complete the following steps:
- Copy the job HBOJBDCO in the hlq.SHBOSAMP library to a user job library.
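For example, you can copy the member with IEBCOPY. This is a minimal sketch; USER.JOBLIB is a hypothetical target library, and hlq is your installation's high-level qualifier:
//COPYSAMP EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=hlq.SHBOSAMP
//SYSUT2   DD DISP=SHR,DSN=USER.JOBLIB
//SYSIN    DD *
  COPY OUTDD=SYSUT2,INDD=SYSUT1
  SELECT MEMBER=HBOJBDCO
/*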
- Update the job card according to your site standards.
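For example, a minimal job card; the accounting information, class, and message class shown here are placeholders for your site's values:
//HBOJBDCO JOB (ACCT),'SDE DCOLLECT',CLASS=A,MSGCLASS=H,
//         NOTIFY=&SYSUID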
- If the job must have an affinity to a specific TCP/IP stack, add the name of the stack to the end of the HBOTCAFF job step, for example:
//HBOTCAFF EXEC PGM=BPXTCAFF,PARM=TPNAME
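Here, TPNAME is a placeholder for the stack name. If the stack is named, for example, TCPIP1 (a hypothetical name), the statement becomes:
//HBOTCAFF EXEC PGM=BPXTCAFF,PARM=TCPIP1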
- To enable the zIIP offloading function to run eligible code on zIIPs, specify ZIIPOFFLOAD=YES in the PARM parameter of the EXEC statement.
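For example, assuming that the step in your copy of HBOJBDCO invokes the System Data Engine program HBOPDE (the step name here is illustrative, and the PARM parameter in the sample job may contain other options):
//HBOSDE   EXEC PGM=HBOPDE,PARM='ZIIPOFFLOAD=YES'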
- Update the following STEPLIB DD statement to refer to the hlq.SHBOLOAD data set:
//STEPLIB DD DISP=SHR,DSN=HBOvrm.LOAD
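After the update, the statement refers to your installation's load library, where hlq is your high-level qualifier:
//STEPLIB DD DISP=SHR,DSN=hlq.SHBOLOAD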
- The output DD statements related to the 18 record types are already specified in the sample job that you copied. For more information about the 18 record types, see DCOLLECT Data stream reference. If you add a new DCOLLECT record type in the future, you must add DD statements related to the new record type. Refer to the SET IBM_FILE statements in HBOUDCOL for more information.
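As a hypothetical illustration, if HBOUDCOL defined a new record type whose file name is DCOLNEW (an invented name, as in a statement such as SET IBM_FILE = 'DCOLNEW';), you would add a matching DD statement, with allocation attributes that match the existing output DD statements in the sample job:
//DCOLNEW  DD DISP=SHR,DSN=hlq.SDE.DCOLNEW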
- Update the HBOLOG DD statement to specify the DCOLLECT log file name.
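For example, where hlq.DCOLLECT.OUTPUT is a placeholder for the data set that contains your DCOLLECT output:
//HBOLOG   DD DISP=SHR,DSN=hlq.DCOLLECT.OUTPUT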