Streaming OMEGAMON Data from Kafka to Splunk via the HEC
You can use Z Common Data Provider to read and stream OMEGAMON® data from Kafka to Splunk via the HTTP Event Collector (HEC).
About this task
You must create a policy to stream OMEGAMON data. In the policy, select the OMEGAMON data stream, specify the Kafka topic of your OMEGAMON data, and add Splunk via HEC as the subscriber. Then, set up and start the Data Streamer to stream OMEGAMON data from Kafka to Splunk HEC.
Procedure
- Create a policy to stream OMEGAMON data.
  - Click the Create a new policy box.
  - In the resulting Policy Profile Edit window, type or update the required policy name and, optionally, a policy description.
  - Click the Add Data Stream icon.
  - Select OMEGAMON data stream and click Select. The OMEGAMON data stream is shown as a node in the graph.
  - Click the Configure icon on the OMEGAMON data stream node and configure the Kafka Topic Name. The File Path value is updated automatically to be consistent with the Kafka Topic Name that you enter.
  - Click the Subscribe icon on the OMEGAMON data stream node. The Policy Profile Edit window opens, where you can select a previously defined subscriber or define a new subscriber by completing the following steps.
    - In the Subscribe to a data stream window, click the Add Subscriber icon.
    - In the resulting Add subscriber window, update the associated configuration values, and click OK to save the subscriber. You can update the following values in the Add subscriber window:
      - Name: The name of the subscriber.
      - Description: An optional description for the subscriber.
      - Protocol: The streaming protocol that the Data Streamer uses to send data to the subscriber. For example, you can choose Logstash if you want to use the Elastic Stack as a subscriber, or Splunk via Data Receiver if you want to use Splunk as a subscriber. Make sure that you choose the protocol that meets your requirements. For more information about protocols, see Subscriber configuration.
      - Host: The host name or IP address of the subscriber.
      - Port: The port on which the subscriber listens for data from the Data Streamer.
      - Auto-Qualify: This value is ignored.
      - Compression Type: A specification of whether to compress data before it is sent to a Humio subscriber. You can choose one of the following values; the default is None.
        - None: The data is not compressed before it is sent to the Humio subscriber.
        - GZIP: The data is compressed with GZIP before it is sent to the Humio subscriber.
      - Number of threads: The number of threads that send data to the subscriber. This value is valid only when you choose one of the HEC protocols or one of the CDP Humio protocols. The default value is 12, and the value must be in the range 1 - 20. For Splunk HEC protocols, you generally do not need to change this value. For CDP Humio protocols, change it based on your environment resources. For more information about the environment resources that Humio requires, see Preparation for Installing Humio.
      - Token: The token value. This value is valid when you choose one of the HEC protocols or one of the CDP Humio protocols. For more information about how to create a Splunk HEC token, see Sending data directly to Splunk by using Splunk HEC as the subscriber. For more information about how to create a Humio repository token, see Preparing to send data to Humio via Logstash. A quick way to verify a Splunk HEC token is sketched after these steps.
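    If you use Splunk via HEC as the subscriber, you can optionally confirm that the HEC endpoint and token accept events before you add them to the policy. The following sketch posts a test event to the standard Splunk HEC event endpoint with curl; the host name and token are placeholders, and 8088 is only the default HEC port.

      # Placeholder host and token; -k skips certificate verification and is for testing only
      curl -k https://splunk.example.com:8088/services/collector/event \
        -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
        -d '{"event": "CDP connectivity test"}'

    A response like {"text":"Success","code":0} indicates that HEC is reachable and the token is valid.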
  - In the Subscribe to a data stream window, select one or more subscribers, and click Update Subscriptions. The subscribers that you choose are then shown in the graph.
  - To save the policy, click Save.
  For more information about creating a policy to stream OMEGAMON data, see Creating a policy to stream OMEGAMON data stream.
- Copy the procedure HBODSPRO in the hlq.SHBOSAMP library to a user procedure library. You can rename this procedure according to your installation conventions.
- In your copy of the procedure HBODSPRO, customize the following parameter values and environment variables for your environment:
  - mode=kafka: If you specify this parameter, the Data Streamer consumes OMEGAMON data from the Apache Kafka server that is specified in the KAFKA_SERVER environment variable or in the customized Apache Kafka consumer properties file, and sends the data to the subscribers that are defined in the policy. For more information, see KAFKA_SERVER and KAFKA_PROPER. Note: Even if you specify this parameter, the Data Streamer can still receive data from the data gatherers. In other words, with mode=kafka, the Data Streamer can process data from both the data gatherers and Apache Kafka.
  - KAFKA_SERVER: The address list of Apache Kafka bootstrap servers, specified as a comma-separated list of host:port pairs.
  - KAFKA_PROPER: The full path of the customized Apache Kafka consumer properties file. The default path is CDP_HOME/gatherer.consumer.properties. Important: You must specify the address list of Apache Kafka bootstrap servers either in the KAFKA_SERVER environment variable or in the customized Kafka consumer properties file. If you specify both, the value in the KAFKA_SERVER environment variable takes effect. Examples of both approaches are sketched below.
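  For illustration, the broker address list can be supplied through the standard Kafka consumer property bootstrap.servers in the customized properties file; the host names below are placeholders, and any other properties in the shipped sample should be left as delivered unless you know you need to change them:

    # In CDP_HOME/gatherer.consumer.properties (host names are placeholders)
    bootstrap.servers=kafka1.example.com:9092,kafka2.example.com:9092

  Equivalently, setting KAFKA_SERVER=kafka1.example.com:9092,kafka2.example.com:9092 in your copy of the procedure overrides the properties file. Before you start the Data Streamer, you can also confirm that OMEGAMON data is actually arriving on the topic by using the console consumer that ships with Apache Kafka; the topic name is a placeholder:

    # Run from the Kafka installation directory; reads up to 5 records from the topic
    bin/kafka-console-consumer.sh --bootstrap-server kafka1.example.com:9092 \
      --topic <your-omegamon-topic> --max-messages 5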
  For more information about HBODSPRO, see Customizing the Data Streamer started task.
- If the value of Enable Multiple Kafka Topics for OMEGAMON data streams in the policy is set to Yes, copy the sample odp-topics-list.json file from the CDP_SMP_HOME/DS/LIB directory to the CDP_HOME directory. After you copy the odp-topics-list.json file, edit it to select the OMEGAMON data streams that you want to collect (a purely illustrative sketch follows). For more information, see Configuring the odp-topics-list.json file to collect OMEGAMON data from multiple Kafka topics.
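  The exact layout of odp-topics-list.json is defined by the shipped sample, so always start from that file rather than from this snippet. Purely as a hypothetical illustration of the idea, namely marking which OMEGAMON data streams to collect, an edited file might look something like the following, where every key and stream name is a placeholder and not the product's actual schema:

    {
      "streams": [
        { "name": "PLACEHOLDER_OMEGAMON_STREAM_1", "collect": true },
        { "name": "PLACEHOLDER_OMEGAMON_STREAM_2", "collect": false }
      ]
    }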
- Start the Data Streamer by running the following command:
  START HBODSPRO
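  The Data Streamer then runs as a started task. As a general z/OS operations note rather than a step from this procedure, you can later stop it with the standard MVS STOP command against the started task name:

    STOP HBODSPRO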