Streaming OMEGAMON Data from Kafka to Splunk via the HEC
You can use Z Common Data Provider to read and stream OMEGAMON® data from Kafka to Splunk via the HTTP Event Collector (HEC).
About this task
You must create a policy to stream OMEGAMON data. In the policy, select OMEGAMON data stream, specify the Kafka topic of your OMEGAMON data, and add Splunk via HEC as the subscriber. Then, set up and start the Data Streamer to start to stream OMEGAMON data from Kafka to Splunk HEC.
Create a policy to stream OMEGAMON data
For more information, see Creating a policy to stream OMEGAMON data.
- Click the Create a new policy box.
- In the resulting Policy Profile Edit window, type or update the required policy name and, optionally, a policy description.
- Click the Add Data Stream icon.
- Select OMEGAMON data stream and confirm your selection. OMEGAMON data stream is then shown as a node in the graph.
- Click the Configure icon on the OMEGAMON data stream node to configure the Kafka Topic Name. The value of the File Path is automatically updated to be consistent with the Kafka Topic Name that you specify.
- Click the Subscribe icon on the OMEGAMON data stream node. The Policy Profile Edit window opens, where you can select a previously defined subscriber or define a new subscriber by completing the following steps.
- In the Subscribe to a data stream window, click the Add Subscriber icon.
- In the resulting Add subscriber window, update the associated configuration values, and click OK to save the subscriber. You can update the following values in the Add subscriber window:
- Name: The name of the subscriber.
- Description: An optional description for the subscriber.
- Protocol: The streaming protocol that the Data Streamer uses to send data to the subscriber. For example, choose Logstash if you want to use the Elastic Stack as a subscriber, or choose Splunk via Data Receiver if you want to use Splunk as a subscriber. Make sure that you choose the protocol that meets your requirements. For more information about protocols, see Subscriber configuration.
- Host: The host name or IP address of the subscriber.
- Port: The port on which the subscriber listens for data from the Data Streamer.
- This value is ignored.
- Compression Type: A specification of whether to compress data before sending it to a Humio subscriber. The default value is None. You can choose either of the following values:
  - None: The data is not compressed before it is sent to the Humio subscriber.
  - GZIP: The data is GZIP-compressed before it is sent to the Humio subscriber.
- Number of threads: This value is valid only when you choose one of the HEC protocols or the CDP Humio protocols. It specifies the number of threads that send data to the subscriber. The default value is 12, and the value must be in the range 1 - 20. For the Splunk HEC protocols, you generally do not need to change this value. For the CDP Humio protocols, adjust it based on your environment resources. For more information about the environment resources that Humio requires, see Preparation for Installing Humio.
- Token: This value is valid when you choose one of the HEC protocols or one of the CDP Humio protocols. It specifies the token value. For more information about how to create a Splunk HEC token, see Sending data directly to Splunk by using Splunk HEC as the subscriber. For more information about how to create a Humio repository token, see Preparing to send data to Humio via Logstash.
- In the Subscribe to a data stream window, select one or more subscribers, and click Update Subscriptions. The subscribers that you choose are then shown on the graph.
- To save the policy, click Save.
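For orientation, the Splunk HEC subscriber values above (host, port, and token) correspond to an ordinary HTTPS POST to the Splunk HTTP Event Collector. The following is a minimal Python sketch of the event payload and headers that HEC expects; the URL and token below are placeholders, not values from this document:

```python
# Sketch of a Splunk HEC event POST. The host, port, and token are
# hypothetical placeholders; substitute the values from your subscriber.
import json

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder HEC token


def build_hec_request(event, sourcetype):
    """Return the (headers, body) pair for a single HEC event POST."""
    headers = {
        # HEC authenticates with an Authorization header of the form "Splunk <token>".
        "Authorization": "Splunk " + HEC_TOKEN,
        "Content-Type": "application/json",
    }
    # HEC wraps the event data in a JSON envelope with an "event" field.
    body = json.dumps({"event": event, "sourcetype": sourcetype}).encode("utf-8")
    return headers, body


headers, body = build_hec_request({"message": "sample OMEGAMON record"}, "omegamon")
```

In practice the Data Streamer performs this delivery for you; the sketch only illustrates why the subscriber definition needs a host, a port, and a token.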
Set up and start the Data Streamer
- Copy the procedure HBODSPRO from the hlq.SHBOSAMP library to a user procedure library. You can rename this procedure according to your installation conventions.
- In your copy of the procedure HBODSPRO, customize the following parameter values and environment variables for your environment:
- mode=kafka: If you specify this parameter, the Data Streamer consumes OMEGAMON data from the Apache Kafka server that is specified in the KAFKA_SERVER environment variable or in the customized Apache Kafka consumer properties file, and sends the data to the subscribers that are defined in the policy. For more information, see KAFKA_SERVER and KAFKA_PROPER. Note: Even if you specify this parameter, the Data Streamer can still receive data from the data gatherers. In other words, with mode=kafka, the Data Streamer can process data from both the data gatherers and Apache Kafka.
- KAFKA_SERVER: The address list of Apache Kafka bootstrap servers, specified as a comma-separated list of host:port pairs.
- The full path of the customized Apache Kafka consumer properties file. The default path is CDP_HOME/gatherer.consumer.properties. Important: You must specify the address list of Apache Kafka bootstrap servers either in the KAFKA_SERVER environment variable or in the customized Kafka consumer properties file. If you specify both, the value in the KAFKA_SERVER environment variable takes effect.
For more information about customizing HBODSPRO, see Configuring the Data Streamer.
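As an illustration of the consumer properties file, a minimal version might contain only the bootstrap server list. The host names and port below are placeholders, not values from this document; `bootstrap.servers` is the standard Kafka consumer property for this setting.

```
# Hypothetical gatherer.consumer.properties
# Comma-separated host:port pairs of the Apache Kafka bootstrap servers
bootstrap.servers=kafka1.example.com:9092,kafka2.example.com:9092
```

If KAFKA_SERVER is also set, remember that its value takes precedence over this file.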
- Start the Data Streamer by running the following command.
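As a sketch, assuming your copy kept the sample procedure name HBODSPRO, the Data Streamer can be started with the standard MVS START operator command; substitute the name you chose when you copied the procedure:

```
S HBODSPRO
```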