Message processing by using a Kafka provider

With a Kafka messaging provider, you can process outbound and inbound messages either sequentially or continuously. In general, Kafka message processing is similar to JMS message processing.

Kafka message processing

Kafka message processing tracks processed messages by maintaining an offset value that represents the sequential order in which messages are received by Kafka topics. The offset value indicates the next message to be processed. After a message is processed, the offset advances past that message, and the message is not processed again.

The processed message is not deleted from the queue. It remains in the queue until the configured retention period for the topic expires.
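
To illustrate how offsets work, the following sketch uses the standard Kafka Java consumer API to read messages from a topic and commit the consumer offset so that processed messages are not delivered to the same consumer group again. The broker address, topic name, and group ID are placeholder values, not Maximo Manage settings.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OffsetExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.com:9092"); // placeholder broker
        props.put("group.id", "example-consumer-group");          // placeholder group
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "false"); // commit offsets explicitly

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                // The offset identifies the position of the message in its partition.
                System.out.printf("partition=%d offset=%d%n", record.partition(), record.offset());
            }
            // Committing moves the stored offset past the processed messages,
            // so they are not delivered to this consumer group again.
            consumer.commitSync();
        }
    }
}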

Maximo® Manage provides an application to help you to reprocess messages that have errors. See more information about error processing.

Message size limits

You configure a message size limit on the Kafka server. When you configure this limit, consider the size of your transactions, including the size of integrated attachments. Large messages can negatively affect the performance of reading and writing messages in the queue.

In Maximo Manage, you can add the mxe.kafka.messagesize system property to specify the maximum message size that can be processed. Configure the property to match the message size, in bytes, that you set on the Kafka server. The default setting is 10 MB.
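
As a minimal sketch of setting the corresponding limit on the Kafka side, the following example uses the Kafka Java AdminClient to set the topic-level max.message.bytes configuration to 10 MB, which matches the default of the mxe.kafka.messagesize property. The broker address and topic name are placeholders; your Kafka provider might expose this setting through its own administration tooling instead.

import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class MessageSizeConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.com:9092"); // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            // Topic-level size limit; the broker-level equivalent is message.max.bytes.
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "example-topic");
            AlterConfigOp op = new AlterConfigOp(
                new ConfigEntry("max.message.bytes", "10485760"), // 10 MB, the mxe.kafka.messagesize default
                AlterConfigOp.OpType.SET);
            Collection<AlterConfigOp> ops = Collections.singletonList(op);
            Map<ConfigResource, Collection<AlterConfigOp>> updates = Collections.singletonMap(topic, ops);
            admin.incrementalAlterConfigs(updates).all().get();
        }
    }
}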

Kafka write timeouts

In Maximo Manage, Kafka write actions wait for acknowledgment from the Kafka brokers. You can control the write timeout for the Kafka producer by using the mxe.kafka.waittimeforack property. The value is specified in milliseconds.
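
The mxe.kafka.waittimeforack property is set in Maximo Manage. As a general illustration of what waiting for broker acknowledgment means, the following sketch uses the Kafka Java producer API to send a record with acks=all and block until the brokers acknowledge the write or a timeout, in milliseconds, expires. The broker address, topic name, and timeout value are placeholders.

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class WriteAckExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.com:9092"); // placeholder broker
        props.put("acks", "all"); // wait for acknowledgment from the brokers
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("example-topic", "{\"payload\":\"...\"}");
            // Block until the brokers acknowledge the write, or fail after 30,000 ms.
            RecordMetadata metadata = producer.send(record).get(30000, TimeUnit.MILLISECONDS);
            System.out.printf("written to partition %d at offset %d%n",
                metadata.partition(), metadata.offset());
        }
    }
}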

Outbound sequential message processing

The integration framework uses Kafka topics as outbound, sequential message queues. When a message is sent from the integration framework, it is routed to your Kafka provider, where it is hosted as a JSON message in a predefined topic. Although publish channels can send messages in XML or JSON format, the product wraps them as JSON formatted messages and writes them to the Kafka topic. An outbound message uses the structure that is shown in the following code example:
{
      "interfacetype": "MAXIMO",
      "INTERFACE": "MXPRInterface",
      "payload": "<xml/json message>",
      "SENDER": "MX",
      "destination": "<external system name>",
      "destjndiname": "<kafka topic name>",
      "compressed": "1",
      "MEAMessageID": "<providername>~<topic>~<partition>~<offset>",
      "mimetype": "application/xml"
}
The payload property contains the payload that is sent to the endpoint. The entire Kafka message is compressed when it is stored in the Kafka partitions. In the Kafka partitions, messages are stored in strict timestamp order. The position of a message in its partition is identified by a sequential ID number that is known as the message offset. Although Kafka topics can be partitioned, sequentially processed topics must be in a single partition.

After the message is written to the Kafka topic, it is processed by the associated Kafka cron task that you configure in the Cron Task Setup application for each topic. Processing occurs according to the external system and endpoint that are configured for the channel.

Inbound sequential processing

Inbound message processing can be sequential or continuous. Inbound sequential processing is similar to outbound sequential processing and requires the same types of configuration, including an instance of the Kafka cron task.

The structure of a sequential inbound message is shown in the following example:
{
     "interfacetype": "MAXIMO",
     "SENDER": "testkafka",
     "destination": "testkafka",
     "USER": "wilson",
     "MEAMessageID": "….",
     "INTERFACE": "MXASSETInterface",
     "payload": "{\"assetnum\":\"AZ163\",\"siteid\":\"BEDFORD\"}",
     "mimetype": "application/json",
     "destjndiname": "anamitratestcont"
}

Configure sequential queues with a single partition and no more than one cron task instance to consume messages from the queue.
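
For example, a single-partition topic for a sequential queue might be created with the Kafka Java AdminClient, as shown in the following sketch. The broker address, topic name, and replication factor are placeholders; create the topic with whatever tooling your Kafka provider supplies.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class SequentialTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.com:9092"); // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            // One partition preserves strict ordering for sequential processing.
            NewTopic topic = new NewTopic("sqout-example", 1, (short) 1); // name, partitions, replication factor
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}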

See more information about error processing.

Outbound and inbound continuous queue processing and scaling

In general, unlike sequential queue processing, continuous queue processing does not stop when a message has an error. Messages with errors are written to the error table and processed from the error table by the KAFKAERROR cron task instance for the respective queue.

A single Kafka queue can be scaled to use multiple threads, or multiple consumers of messages, by defining more partitions for the queue. Each partition represents a separate consumer, and each partition requires its own cron task instance to consume messages from that partition. To scale processing, configure multiple partitions. The number of cron task instances must match the number of partitions.
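
As an illustration of the one-consumer-per-partition model, the following sketch assigns a Kafka Java consumer to a single partition of a topic, which is conceptually what each cron task instance does for its partition. The broker address, topic name, and partition number are placeholders.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PartitionConsumer {
    public static void main(String[] args) {
        int partition = Integer.parseInt(args[0]); // run one instance per partition
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka.example.com:9092"); // placeholder broker
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Assign this consumer instance to exactly one partition of the topic.
            consumer.assign(Collections.singletonList(new TopicPartition("cqin-example", partition)));
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                System.out.printf("partition=%d offset=%d%n", record.partition(), record.offset());
            }
        }
    }
}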

Outbound messages for an interface are written to the outbound continuous queue if that queue is selected for the interface in the External Systems application in Maximo Manage. Multiple external systems can use the same outbound continuous queue.

Inbound continuous message processing with Kafka is similar to JMS message processing; however, there are differences in scaling and in error processing.

See more information about error processing of messages in continuous queues and configuring a redelivery delay.