Configuring event output to Kafka

You can configure various output options for Apache Kafka to store the events that the Case event emitter sends.

About this task

The Case emitter emits events in JSON format.

If you do not specify the acks Kafka producer property, the emitter uses a default value of all, not the default of 1 that is mentioned in the Kafka documentation.
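If you need the documented acks=1 behavior instead, you can set the property explicitly. The following is a minimal sketch, assuming that Kafka producer properties such as acks can be set alongside the other properties in the "output" section, as in the samples later in this topic:

    "output" : {
         "default" : {
           "enable" : true,
           "type" : "kafka",
           "bootstrap.servers" : "localhost:9092",
           "topic" : "icp4ba-bai-ingress",
           "acks" : "1"
         }
      }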

For more information:
  - About the configuration file, see Structure of the Case event emitter JSON file.
  - About the Kafka configuration, see Producer configs.

Running Kafka in Cloud Pak foundational services

Procedure

  1. Create a topic in Kafka, for example icp4ba-bai-ingress-test.
  2. Retrieve the username and password for Kafka as instructed in Retrieving information for connection to Kafka.

    Use the retrieved values to replace the <kafka username> and <kafka password> placeholders in the sasl.jaas.config property of the Code sample below.

  3. Retrieve the truststore and its password as instructed in Retrieving the Kafka truststore.
    Use the retrieved values to replace the placeholders in the ssl.truststore.location and ssl.truststore.password properties of the Code sample below. To verify the truststore and create the topic from the command line, see the sketch after the code sample.
    Note: The ssl.protocol, ssl.enabled.protocols, and ssl.endpoint.identification.algorithm fields in this example are optional. If you have problems authenticating to Kafka, try removing these fields; if that resolves the problem, you can omit them.

Code sample

"default" : {
       "enable" : true,
       "bootstrap.servers" : "details of the Cloud Pak foundational services Kafka bootstrap server:port",
       "sasl.mechanism": "SCRAM-SHA-512",
       "topic" : "icp4ba-bai-ingress",
       "security.protocol" : "SASL_SSL",
       "ssl.truststore.location" : "<path to the truststore file>",
       "ssl.truststore.password" : "<password used while creating that truststore file>",
       "ssl.keystore.location" : "<path to the keystore file>",
       "ssl.keystore.password" : "<password used to create the keystore file>",
       "ssl.protocol" : "TLSv1.2",
       "ssl.enabled.protocols" : "TLSv1.2",
       "sasl.jaas.config" : "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"<kafka username>\" password=\"<kafka password>\";"
    }
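
Before you start the emitter, you can optionally verify the connection values from the command line. The following is a minimal sketch, assuming the standard Kafka CLI tools and a client.properties file that mirrors the code sample above; file names and placeholder values are illustrative:

    # Verify that the retrieved truststore is readable with its password.
    keytool -list -keystore <path to the truststore file> -storepass <truststore password>

    # client.properties mirrors the connection values from the code sample:
    #   security.protocol=SASL_SSL
    #   sasl.mechanism=SCRAM-SHA-512
    #   sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<kafka username>" password="<kafka password>";
    #   ssl.truststore.location=<path to the truststore file>
    #   ssl.truststore.password=<truststore password>

    # Create the topic from step 1; adjust partitions and replication for your cluster.
    ./bin/kafka-topics.sh --create --bootstrap-server <bootstrap server>:<port> --command-config client.properties --topic icp4ba-bai-ingress-test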

Running Kafka locally

Procedure

  1. Download Kafka binary files from the Download page.
  2. Install the files as instructed by your Kafka provider.

    For example, Apache instructions are available on the Quickstart page.

  3. To start Kafka, run the following commands.
    ./bin/zookeeper-server-start.sh config/zookeeper.properties
    ./bin/kafka-server-start.sh config/server.properties
  4. Create a Kafka topic, for example icp4ba-bai-ingress.
    ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic icp4ba-bai-ingress
  5. Test the producer and then the consumer by running the following Kafka commands locally, each in a separate window.
    ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic icp4ba-bai-ingress

    ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic icp4ba-bai-ingress
  6. Edit the "output" section of the CaseEventEmitter.json file.
    "output" : {
         "default" : {
           "enable" : true,
           "type" : "kafka",
           "sasl.mechanism" : "PLAIN",
           "bootstrap.servers" : "localhost:9092",
           "topic" : "icp4ba-bai-ingress"
         }
      }

    To see events in the icp4ba-bai-ingress topic, you can run the Kafka consumer command from step 5 before you start the emitter. The kafka-console-producer.sh command produces events directly to the Kafka topic, and the kafka-console-consumer.sh command consumes events from that topic. Use these commands for testing purposes. After the emitter is configured for this Kafka topic and deployed to WebSphere® Application Server, start kafka-console-consumer.sh before you run the emitter to view the events that it emits.
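
    For example, a minimal smoke test, assuming the local setup from steps 3 and 4 (the sample payload is only an illustration, not the Case event format):

    # Window 1: watch the topic for incoming events.
    ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic icp4ba-bai-ingress

    # Window 2: publish a test message; it appears in window 1.
    echo '{"test":"hello"}' | ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic icp4ba-bai-ingress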

Running Kafka on IBM Cloud

Procedure

  1. To find the values for <kafka_brokers_sasl> and <api_key>, access your Message Hub instance in IBM Cloud, go to the Service Credentials tab, and select the credentials that you need.
    Tip: The value for <kafka_brokers_sasl> must be a single string that is enclosed in quotation marks, for example "host1:port1,host2:port2". Include all the Kafka hosts that are listed in the credentials that you selected. For one way to build this string from the credentials, see the sketch after the code sample.
  2. Specify the Kafka topic to use by adding the "topic" field to the "output" section of the JSON configuration file.
    "output" : {
         "default" : {
           "enable" : true,
           "bootstrap.servers" : "kafka04-prod02.messagehub.services.us-south.bluemix.net:9093,kafka01-prod02.messagehub.services.us-south.bluemix.net:9093,kafka05-prod02.messagehub.services.us-south.bluemix.net:9093,kafka02-prod02.messagehub.services.us-south.bluemix.net:9093,kafka03-prod02.messagehub.services.us-south.bluemix.net:9093",
           "sasl.mechanism": "PLAIN",
           "topic" : "icp4ba-bai-ingress",
           "security.protocol" : "SASL_SSL",
           "ssl.protocol" : "TLSv1.2",
           "ssl.enabled.protocols": "TLSv1.2",
           "ssl.endpoint.identification.algorithm" : "HTTPS",
           "api_key" : "pGZzplGE9vyi2lvJz3E7LTgHLxo9lntaXtEA6Q9kfve5aYnE"
         }
      }
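
    A minimal sketch for building the <kafka_brokers_sasl> string, assuming that you saved the selected service credentials to a file named credentials.json (the file name is illustrative; kafka_brokers_sasl is an array field in the Message Hub service credentials):

    # Join the kafka_brokers_sasl array into one comma-separated string.
    jq -r '.kafka_brokers_sasl | join(",")' credentials.json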