CICS EYULOG DMY data stream
This reference lists the configuration values that you can update in the Configure Log Forwarder data stream window for the CICS EYULOG DMY data stream. It also describes how to use wildcard characters in the Job Name field for this data stream. The source for the CICS EYULOG DMY data stream uses the day month year (DMY) date format in the timestamp.
Configuration values that you can update
- Name
- The name that uniquely identifies the data stream to the Configuration Tool. If you want to add more data streams of the same type, you must first rename the last stream that you added.
- Job Name
- The name of the server job from which to gather data. This value can contain wildcard characters. For information about the use of wildcard characters, see Use of wildcard characters in the Job Name field.
- Data Source Name
- The name that uniquely identifies the data source to subscribers. Tip: If you use the Auto-Qualify field in the subscriber configuration to fully qualify the data source name, this dataSourceName value is automatically updated with the fully qualified data source name. For more information about the values that you can select in the Auto-Qualify field, see Subscriber configuration.
- File Path
- A unique identifier, such as jobName/ddName, that represents the data origin.
- Time Zone
- If the timestamp in the collected data does not include a time zone, this value specifies a time zone to the target destination. Specify this value if the time zone is different from the system time zone, which is defined in the Log Forwarder started task, as described in Customizing the Log Forwarder started task to collect z/OS log data. The value must be in the format plus_or_minusHHMM, where plus_or_minus represents the + or - sign, HH represents two digits for the hour, and MM represents two digits for the minute. (A sketch that builds this format appears after this list.) Examples:

If you want this time zone | Specify this value |
---|---|
Coordinated Universal Time (UTC) | +0000 |
5 hours west of UTC | -0500 |
8 hours east of UTC | +0800 |

- Discovery Interval
- In the process of streaming data, the number of minutes that the Log Forwarder waits before it checks for
a new log file in the data stream.
The value must be an integer in the range 0 - 1440. A value of 0 specifies that the Log Forwarder only checks for a new log file once when the data gatherer is started. The default value is the value that is defined in the Log Forwarder properties, as described in LOG FORWARDER properties: Defining your Log Forwarder intervals.
- Customized Data Source Type
- A specification of whether to customize the data source type for Splunk HEC. The default value is No, which means that the subscriber uses the default data source type to identify the type and format of the streamed data. If you set the value to Yes, specify the data source type for Splunk HEC in the Data Source Type for Splunk HEC field that follows.
- Data Source Type for Splunk HEC
- A value that the subscriber can use to uniquely identify the type and format of the streamed data. This field is available only when you set Customized Data Source Type to Yes and the protocol is Splunk HEC with customized field support or Splunk HEC with customized field support secure. The default value is Data Source Type_KV. You can change the value according to your needs.
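The Time Zone value described earlier in this list follows the fixed plus_or_minusHHMM format. The following Python fragment is a minimal sketch of how such a value can be built; it illustrates the format only, it is not part of the Log Forwarder, and the helper name format_utc_offset is hypothetical.

```python
def format_utc_offset(hours, minutes=0):
    """Build a plus_or_minusHHMM string, for example -0500 for 5 hours west of UTC."""
    # Hypothetical helper: sign comes from the offset direction, then zero-padded hours and minutes.
    sign = "-" if (hours, minutes) < (0, 0) else "+"
    return f"{sign}{abs(hours):02d}{abs(minutes):02d}"

print(format_utc_offset(0))   # +0000  Coordinated Universal Time (UTC)
print(format_utc_offset(-5))  # -0500  5 hours west of UTC
print(format_utc_offset(8))   # +0800  8 hours east of UTC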
Use of wildcard characters in the Job Name field
Wildcard character | What the character represents |
---|---|
? | Any single character. There must be one and only one character. |
* | Any sequence of characters, including an empty sequence. |
If you use wildcard characters in the job name, the job name value becomes a pattern, and the data stream definition becomes a template. When the Log Forwarder starts, it searches the Job Entry Subsystem (JES) spool for job names that match the pattern, and it creates a separate data stream for each unique job name that it discovers. After the Log Forwarder initialization is complete, the Log Forwarder continues to monitor the job names on the JES spool. As it discovers new job names that match the pattern, it uses the same template to create more data streams.
For example, assume that the JES spool contains the following jobs:

JOBNAME | JobID |
---|---|
ACMAS5 | STC00556 |
ACMAS51 | STC00557 |
CMAS5 | STC00553 |
CMAS51 | STC00554 |
CMAS512 | STC00555 |
CMAS43 | STC00586 |
CMAS482 | STC00588 |
CMAS53 | STC00587 |
CMAS5862 | STC00589 |
CMAS61 | STC00590 |
CMAS62 | STC00600 |
HBODSPRO | STC00623 |
HBOPROC | STC00661 |
SYSLOG | STC00552 |
- If the job name value is CMAS5?, the job names CMAS51 and CMAS53 match the pattern and are found; the job name CMAS5 is not found.
- If the job name value is CMAS5??, the job name CMAS512 matches the pattern and is found; the job names CMAS5, CMAS51, and CMAS5862 are not found.
- If the job name value is ?CMAS5?, the job name ACMAS51 matches the pattern and is found; the job names ACMAS5 and CMAS51 are not found.
- If the job name value is CMAS5*, the job names CMAS5, CMAS51, CMAS512, CMAS53, and CMAS5862 match the pattern and are found.
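The wildcard semantics in these examples map directly onto regular expressions: ? corresponds to a single character and * to any sequence of characters. The following Python sketch illustrates that matching against the example job names; it demonstrates the pattern semantics only, it is not Log Forwarder code, and the function name jobname_pattern_to_regex is hypothetical.

```python
import re

def jobname_pattern_to_regex(pattern):
    """Translate a Job Name pattern: '?' matches exactly one character,
    '*' matches any sequence of characters, including an empty sequence."""
    parts = ("." if ch == "?" else ".*" if ch == "*" else re.escape(ch) for ch in pattern)
    return re.compile("^" + "".join(parts) + "$")

# Job names from the example table above.
jobs = ["ACMAS5", "ACMAS51", "CMAS5", "CMAS51", "CMAS512", "CMAS43", "CMAS482",
        "CMAS53", "CMAS5862", "CMAS61", "CMAS62", "HBODSPRO", "HBOPROC", "SYSLOG"]

for pattern in ["CMAS5?", "CMAS5??", "?CMAS5?", "CMAS5*"]:
    rx = jobname_pattern_to_regex(pattern)
    print(pattern, "->", [job for job in jobs if rx.match(job)])
```

Running the sketch reproduces the matches in the preceding list; for example, CMAS5? selects only CMAS51 and CMAS53.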
- To avoid gathering data from job logs that you do not intend to gather from, use a job name pattern that is not too broad. If you want the wildcard characters to match a fixed number of characters, use ? as your wildcard.
- The Log Forwarder might discover jobs from other systems if the spool is shared between systems or if JES multi-access spool is enabled. Although such a data stream does not include data for the jobs that run on other systems, the Log Forwarder still creates it. Therefore, ensure that the wildcard pattern does not match jobs that run on other systems.
For each data stream that the Log Forwarder creates from the template, it assigns the following field values:

Template field | Value |
---|---|
Job Name | The discovered job name |
Data Source Name | The value of the Data Source Name field in the template, with _jobName_EYULOG appended to that value. The jobName is the discovered job name. |
File Path | The value of the File Path field in the template, with /jobName/EYULOG appended to that value. The jobName is the discovered job name. |
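As an illustration of the derivation in the preceding table, the following Python sketch builds the per-job field values from a template. The template values and the function name derive_stream_fields are hypothetical; this is a sketch of the appending rules, not the Log Forwarder implementation.

```python
def derive_stream_fields(template_data_source_name, template_file_path, job_name):
    """Apply the rules from the preceding table: append _jobName_EYULOG to the
    Data Source Name and /jobName/EYULOG to the File Path for the discovered job."""
    return {
        "Job Name": job_name,
        "Data Source Name": f"{template_data_source_name}_{job_name}_EYULOG",
        "File Path": f"{template_file_path}/{job_name}/EYULOG",
    }

# Hypothetical template values; CMAS51 is a discovered job name from the example table.
print(derive_stream_fields("CICS_EYULOG_DMY", "CICS_EYULOG_DMY", "CMAS51"))
```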