Implementing event processing
The Processing Application is a programmable framework for developing custom event processing for business automation applications, so that you can monitor them in real time with Business Performance Center and trigger new synthetic events on the Kafka infrastructure.
As a prerequisite, you need to emit events in JSON format on a Kafka topic, called the ingress topic. These events must include the information required to identify the event accurately, which depends on your application domain. Typically this covers:
- information about the system and environment from which the event was sent, for example the identifier of the case, person, claim, incident, or opportunity;
- all the business information of interest for monitoring, starting with the event timestamp, the status reached by an activity or task, the person who performed the task, the raw values of business indicators (such as amounts, quantities, grades, levels), qualitative indicators (such as criticality, severity, confidence, importance, urgency, achievement), as well as locations, markets, categories, and more.
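As a minimal sketch, a hypothetical claim-processing event could look like the following; the field names are illustrative assumptions, not names prescribed by the product:

```python
import json

# A hypothetical claim-processing event as it could be emitted on the
# ingress topic; all field names here are illustrative, not prescribed.
event = {
    "timestamp": "2024-05-17T09:30:00Z",  # event timestamp
    "claimId": "CL-2024-00042",           # identifies the business context
    "activity": "damage-assessment",
    "status": "completed",                # status reached by the activity
    "performer": "alice.smith",           # person who performed the task
    "amount": 1250.0,                     # raw business indicator
    "severity": "medium",                 # qualitative indicator
    "market": "EMEA",
}

# Serialized JSON form, as it would be published on the Kafka ingress topic.
payload = json.dumps(event)
```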
The Processing Application then provides a series of steps to process the events, either in a stateless or in a stateful way. When an event represents a one-step action, change, or decision, and you do not need information from other related events to compute a global picture, stateless processing is sufficient. When you need to collect several events on the same context, arriving progressively, that together build a summary by adding details, updating a status or a current value, or aggregating values (summing costs, durations, and so on), stateful processing is required.
In a stateless Processing Application, each event is processed independently, and the processing flow is the following (most steps are optional, so you may leverage a subset of the capabilities):
- At ingress level:
    - a selector reads the event and decides whether to retain it for the remainder of the flow, so that only relevant events are processed,
    - a transformer implements data transformation: you may want to filter out some elements of the event, convert values into standard units, compute synthetic values from raw values (for example, net price from gross price and VAT), or anonymize personal information.
- The transformed event can be sent to one or more egresses, either to be stored into an OpenSearch index, where it is visible to Business Performance Center, or to be sent to another Kafka topic for other applications to consume. In each egress:
    - a selector can retain only the events that are relevant for this egress,
    - a transformer can apply further transformations to make the event ready to use by its consumer. For instance, for an OpenSearch egress, you need to make sure the JSON structure is flattened so that Business Performance Center can exploit it correctly.
- The Processing Application sends the resulting event, also called a timeseries event, to each egress.
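The stateless flow above can be sketched as follows. This is a simplified illustration under assumed field names (`claimId`, `grossAmount`, `vatRate`, and so on), not the product's actual API: an ingress selector and transformer run first, then each egress applies its own selector and transformer, including JSON flattening for an OpenSearch egress:

```python
# Hypothetical stateless flow: ingress selector/transformer, then a
# per-egress selector/transformer. Names and fields are illustrative.

def ingress_selector(event):
    # Retain only claim events; everything else leaves the flow.
    return "claimId" in event

def ingress_transformer(event):
    # Compute a synthetic value from raw values, and anonymize.
    e = dict(event)
    e["netAmount"] = round(e["grossAmount"] / (1 + e["vatRate"]), 2)
    e.pop("performerEmail", None)  # drop personal information
    return e

def flatten(event, prefix=""):
    # Flatten nested JSON so an OpenSearch egress can index it directly.
    out = {}
    for key, value in event.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, name + "."))
        else:
            out[name] = value
    return out

def process(event, egresses):
    # egresses: list of (selector, transformer, sink) triples.
    if not ingress_selector(event):
        return
    event = ingress_transformer(event)
    for selector, transformer, sink in egresses:
        if selector(event):
            sink.append(transformer(event))  # the timeseries event

opensearch_index, kafka_topic = [], []
process(
    {"claimId": "CL-1", "grossAmount": 120.0, "vatRate": 0.2,
     "performerEmail": "x@y.z", "details": {"market": "EMEA"}},
    [
        (lambda e: True, flatten, opensearch_index),          # OpenSearch egress
        (lambda e: e["netAmount"] > 50, lambda e: e, kafka_topic),  # Kafka egress
    ],
)
```

Note how the OpenSearch egress receives a flattened copy (`"details.market"` instead of a nested object), while the Kafka egress keeps the nested structure for its own consumers.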

In a stateful Processing Application, each event, as it is processed, may create a new context or contribute to an existing one. The processing flow is the following (again, most steps are optional):
- At ingress level:
    - a selector reads the event and decides whether to retain it for the remainder of the flow, so that only relevant events are processed,
    - a transformer implements data transformation: you may want to filter out some elements of the event, convert values into standard units, compute synthetic values from raw values (for example, net price from gross price and VAT), or anonymize personal information.
- One or more contexts take this transformed event into account:
    - by computing a correlation key for the event, the Processing Application creates a new context for a new key, or recognizes an existing context with that key. For instance, for events related to orders, a context may be created for the order itself, based on an order ID (to monitor the progress of the order as a whole until its invoicing), and other contexts may be created, based on a shipment ID, for each distinct shipment of articles within this order (to monitor each shipment until delivery),
    - within a context, one or more stateful operations can be computed from the current event and the state gathered so far. For instance, an operation can compute a duration between the timestamps of a start event and a completion event, update the state with the latest information from the current event, or compute an aggregate of values arriving one by one with each event, for example by summing costs,
    - each operation produces a summary event, which represents the latest up-to-date correlated and aggregated state of the operation.
- For each operation in each context, the summary event can be sent to one or more egresses, either to be stored into an OpenSearch index, where it is visible to Business Performance Center, or to be sent to another Kafka topic for other applications to consume. In each egress:
    - a selector can retain only the summaries that are relevant for this egress,
    - a transformer can apply further transformations to make the event ready to use by its consumer. For instance, for an OpenSearch egress, you need to make sure the JSON structure of the summary is flattened so that Business Performance Center can exploit it correctly.
- The Processing Application takes care of sending the resulting event to each egress.

This full sequence repeats until an event is recognized as completing the operation. When the operation completes, it is cleaned up.
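The stateful flow can be sketched as follows. This is a simplified illustration under assumed field names (`orderId`, `status`, `ts`) and an assumed completion criterion (the `invoiced` status), not the product's actual API: a correlation key selects or creates the context, a stateful operation updates the state and computes a duration, a summary event is produced on every update, and the context is cleaned up on completion:

```python
# Hypothetical stateful flow: contexts keyed by a correlation key, with one
# stateful operation (duration between the start and completion events).
# Field names and the completion criterion are illustrative assumptions.

contexts = {}   # correlation key -> operation state
summaries = []  # summary events that would be sent to the egresses

def correlation_key(event):
    # For order events, the order ID identifies the context.
    return event["orderId"]

def on_event(event):
    key = correlation_key(event)
    # Create a new context for a new key, or recognize the existing one.
    state = contexts.setdefault(key, {"orderId": key})
    # Stateful operation: keep the latest status, and compute the duration
    # between the start event and the completion event.
    state["status"] = event["status"]
    if event["status"] == "started":
        state["startedAt"] = event["ts"]
    if event["status"] == "invoiced":
        state["durationSec"] = event["ts"] - state["startedAt"]
    # Each update produces an up-to-date summary event for the egresses.
    summaries.append(dict(state))
    if event["status"] == "invoiced":
        del contexts[key]  # operation complete: clean up the context

on_event({"orderId": "O-7", "status": "started", "ts": 1000})
on_event({"orderId": "O-7", "status": "shipped", "ts": 1600})
on_event({"orderId": "O-7", "status": "invoiced", "ts": 2500})
```

Each incoming event yields a fresh summary, so downstream consumers always see the latest correlated state; once the completing event arrives, the context is removed and a later event with the same key would start a new one.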