Deploying the Logstash configuration files in the ingestion kit for raw data

To ingest raw data that is sent from the Z Common Data Provider, deploy the Z Operational Log and Data Analytics Elastic Stack ingestion kit for raw data.

Procedure

To deploy the Z Operational Log and Data Analytics Elastic Stack ingestion kit for raw data, complete the following steps. These steps are for the deployment on a Linux® system. Use comparable steps if you are deploying on a Windows system.

  1. Log in to the Logstash server, and extract the Z Operational Log and Data Analytics Elastic Stack ingestion kit for raw data, which is in the file ZLDA-IngestionKit-raw-v.r.m.f.zip. By default, the files are extracted into the zlda-config-raw directory.
    For more information about how to get the package, see Obtaining and preparing the installation files.
    Note: Logstash processes all *.conf files that are included in the configuration file directory zlda-config-raw, in lexicographical order of the file names.
  2. Define a policy with Logstash as the subscriber.
    Tip: The port that is defined in the Logstash subscriber must match the port that is defined in the Logstash input configuration file for raw data, for example, the B_CDPz_Input.conf file. The curated and raw data streams cannot use the same port.
  3. Generate Elasticsearch index templates from a policy file for raw data.
    Creating index templates for raw (non-curated) data enables Elasticsearch to handle the data correctly. The templates assign explicit data types to fields instead of relying on dynamically mapped default types. For more information, see Generating Elasticsearch index templates from a policy file for raw data.
  4. Update the Logstash configuration files for raw data for your environment.
    Tip:
    Table 1 indicates the prefixes that are used in the file names for the Logstash configuration files in the Z Operational Log and Data Analytics Elastic Stack ingestion kit for raw data. The file name prefix is an indication of the configuration file content.
    Table 1. Mapping of the prefix that is used in a Logstash configuration file name to the content of the file
    Prefix in file name    Content of configuration file with this prefix
    B_                     Input stage
    E_                     Preparation stage
    H_                     Field name annotation stage
    N_                     Timestamp resolution stage
    O_                     Post process filter stage
    Q_                     Output stage
    The following descriptions further explain how to update these Logstash configuration files in the Z Operational Log and Data Analytics Elastic Stack ingestion kit for raw data:
    B_CDPz_Input.conf file
    This file contains the input stage that specifies the TCP/IP port on which Logstash listens for data from the Z Common Data Provider Data Streamer. Copy this file to your Logstash configuration directory. You might need to edit the port number after you copy the file.
    Note: You can have only one input stage in your configuration. If you use B_CDPz_Input.conf, you must remove B_CDPz_Kafka.conf and B_CDPz_Omegamon.conf from the zlda-config-raw folder.
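    The input stage in B_CDPz_Input.conf resembles the following sketch. The port number 8081 is an example only; use the port that is defined in your Logstash subscriber, and confirm the stanza against the file that is shipped in your ingestion kit.
    input {
      tcp {
        port => 8081
      }
    }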
    B_CDPz_Kafka.conf file
    This file is used only when you stream non-OMEGAMON® data from Apache Kafka to Logstash. For example, if you stream non-OMEGAMON data from the Z Data Analytics Platform to Logstash, use this file.
    It contains the input stage that specifies the bootstrap server on which Logstash listens for data from Apache Kafka. Update the bootstrap server in this file as appropriate for your environment.
    Note: You can have only one input stage in your configuration. If you use B_CDPz_Kafka.conf, you must remove B_CDPz_Input.conf and B_CDPz_Omegamon.conf from the zlda-config-raw folder.
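    The input stage in B_CDPz_Kafka.conf resembles the following sketch. The bootstrap server address and topic name are examples only; confirm the stanza against the file that is shipped in your ingestion kit.
    input {
      kafka {
        bootstrap_servers => "kafka.example.com:9092"
        topics => ["zlda-raw"]
      }
    }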
    E_CDPz_Index.conf file
    This file contains the preparation stage. Copy this file to your Logstash configuration directory.
    Files with H_ prefix in file name
    Each of these files contains a unique field name annotation stage that maps to a unique data stream that the Z Common Data Provider can send to Logstash. Copy the H_ files to your Logstash configuration directory.
    Note: Only copy the .conf files that match the data streams that you are sending to Elasticsearch. Copying extra and unneeded .conf files can impact the performance of Logstash.
    Files with N_ prefix in file name
    Each of these files contains a unique timestamp resolution stage that maps to a unique data stream that Z Common Data Provider can send to Logstash. Copy the N_ files to your Logstash configuration directory.
    Note: Only copy the .conf files that match the data streams that you are sending to Elasticsearch. Copying extra and unneeded .conf files can impact the performance of Logstash.
    O_postfilter.conf file
    This file filters the records after the other filter stages have completed. The filter removes the event.original field, which is created by Logstash 8 and later when Logstash runs in ECS compatibility mode (the default) and which can greatly increase the size of indexes.
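    A filter stage that removes the field resembles the following sketch; the actual stage in O_postfilter.conf might differ.
    filter {
      mutate {
        remove_field => [ "[event][original]" ]
      }
    }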
    Q_CDPz_Elastic.conf file
    This file is used when you stream non-OMEGAMON data to Logstash. It contains an output stage that sends all records to a single Elasticsearch server. Copy this file to your Logstash configuration directory.

    After you copy the file, edit it to add the name of the host to which the output stage sends the indexing call. The default name is localhost, which indexes the data on the server that is running the ingestion processing. Change the value of the hosts parameter, not the value of the index parameter: the index value is assigned during ingestion so that the data for each source type is sent to a different index. The hosts value determines the Elasticsearch cluster in which the data is indexed, and the index value determines the index in which the data is held.
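    After the edit, the output stage resembles the following sketch. The host name elastic.example.com is an example only; confirm the stanza against the file that is shipped in your ingestion kit.
    output {
      elasticsearch {
        hosts => ["elastic.example.com:9200"]
        index => "cdp-%{[@metadata][indexname]}-%{+yyyyMMdd}"
      }
    }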

    To split data according to sysplex, you can use the [sysplex] field in an if statement that surrounds an appropriate Elasticsearch output stage.
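    For example, the following sketch routes data from the sysplex PLEX1 to a dedicated Elasticsearch server; the sysplex name and host names are examples only.
    output {
      if [sysplex] == "PLEX1" {
        elasticsearch {
          hosts => ["elastic-plex1.example.com:9200"]
          index => "cdp-%{[@metadata][indexname]}-%{+yyyyMMdd}"
        }
      } else {
        elasticsearch {
          hosts => ["elastic.example.com:9200"]
          index => "cdp-%{[@metadata][indexname]}-%{+yyyyMMdd}"
        }
      }
    }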

    The following files are used only when you stream OMEGAMON data to Logstash. You must copy the following configuration files from the Z Operational Log and Data Analytics ingestion kit to your Logstash configuration directory. For more information about how to configure the files, see Streaming OMEGAMON data from Kafka to the Elastic Stack.
    B_CDPz_Omegamon.conf file
    This file contains the input stage that specifies the TCP/IP port on which Logstash listens for data from the Data Streamer. Specify the port as appropriate for your environment. The default value is 8080.
    Note: You can have only one input stage in your configuration. If you use B_CDPz_Omegamon.conf, you must remove B_CDPz_Input.conf and B_CDPz_Kafka.conf from the zlda-config-raw folder.
    CDPz_Omegamon.conf file
    This file describes how Logstash parses and splits the concatenated JSON data, and it contains a unique field name annotation stage that maps to OMEGAMON data.
    Q_CDPz_Omegamon.conf file
    This file contains an output stage that sends all records to a single Elasticsearch server. Copy this file to your Logstash configuration directory.
    After you copy the file, edit it to change the value of the hosts parameter to the IP address where Elasticsearch is running. The default value is localhost.
  5. If Elasticsearch is set up with Transport Layer Security or user ID and password authentication, you must enable the Logstash pipelines to support this security configuration. For more information, see Enabling Logstash pipelines to use TLS and password authentication.
    Tip: You can use scripted tooling to update the security parameters in the Elasticsearch connection definition in the Logstash configuration files. For more information, see Updating the Elasticsearch connection definition in Logstash configuration files by using scripted tooling.
  6. Verify all index names.
    The Elasticsearch index name is defined by the following parameter in the Logstash configuration file Q_CDPz_Elastic.conf:
    index => "cdp-%{[@metadata][indexname]}-%{+yyyyMMdd}"
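    For example, if the indexname metadata value of a record is zos-syslog (an illustrative value) and the record timestamp resolves to 15 June 2024, the record is indexed into:
    cdp-zos-syslog-20240615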
  7. Create a configuration directory for the raw data for use by Logstash. For example,
    mkdir /etc/logstash/zlda-config-raw
  8. Copy all required files from the installation directory to the configuration directory.
    The following example shows how to copy both the H*.conf and N*.conf files for the SMF_030 sourcetype:
    cp *SMF_030.conf /etc/logstash/zlda-config-raw
    Important: Copy only the files that are needed to support your intended sourcetypes. Including extra sourcetype files can slow down the performance of Logstash.

    Do not copy any raw files into the Logstash configuration directory for curated files.

  9. Configure a Logstash pipeline for the configuration directory for the raw data to be used by Logstash. For more information, see Configuring a pipeline for Logstash.
  10. Start Elasticsearch and Logstash.

Results

When Elasticsearch and Logstash are started successfully, the Z Common Data Provider starts sending data to Elasticsearch.