
Resilience (using Kafka) type

Use the Resilience (using Kafka) type to create configurations that contain credentials for connecting to an Apache Kafka service that is used to host event messages that are generated for event-driven flows.

Supported mechanisms for establishing persistence and resiliency

By default, event messages from an endpoint are captured and added to an in-memory queue in the designereventflows container for processing, but these events can be lost if the container crashes. To protect against possible data loss, you can configure resiliency by choosing instead to host event messages from the endpoint in a set of brokers in a Kafka cluster. To consume these event messages for processing in the deployed integration, App Connect automatically creates and then subscribes to a topic (named integrationServerID_topicName or integrationRuntimeID_topicName) in the Kafka cluster when an integration server or integration runtime is deployed with a configuration object of type Resilience (using Kafka).
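
For example, for a hypothetical integration runtime whose ID is ir-orders and an event topic named neworders, the topic that App Connect creates and subscribes to would be named ir-orders_neworders. (These values are illustrative only; the actual IDs are derived from your deployment.)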

App Connect also supports the use of a Redis data store as a mechanism for persisting the state of the event stream. If an interruption or outage occurs, the ID of the event that was last generated from the endpoint can be read from Redis when normal service resumes, so that event streaming and processing can continue from where they left off. This persistence mechanism is configured when an integration server or integration runtime is deployed with a configuration object of type Persistence (using Redis).

For information about configuring persistence by using a Redis data store, see Persistence (using Redis) type.

Note:

Only IBM® Event Streams for IBM Cloud is supported as a Kafka service when you create a configuration object of type Resilience (using Kafka).

To use IBM Event Streams for resilience, you must have a provisioned service instance that you can connect to. The Event Streams Enterprise plan is required if you want to enable the schema registry.

Summary of key details for the configuration type

File name or type: YAML
Contains secrets: Yes
Path extracted/imported to: /home/aceuser/secrets/
Maximum allowed per integration server or integration runtime: 1

Creating the file for a configuration object of type Resilience (using Kafka)

The Resilience (using Kafka) type requires a YAML file that specifies Kafka credentials (or account details) as key/value pairs for connecting to IBM Event Streams. The account details must be specified in the same format as required for the accounts.yaml file, which is used to create configuration objects of type Accounts. The main difference between these two configuration types is that Accounts stores the credentials for connecting to a Kafka instance that is referenced in flows in a deployed integration server or integration runtime, whereas Resilience (using Kafka) provides resiliency as a quality of service for event-driven flows.

You can apply only one configuration object of type Resilience (using Kafka) to an integration server or integration runtime.
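
To illustrate, the following excerpt is a minimal sketch of how a configuration object of this type might be referenced by name from the configurations field of an IntegrationServer or IntegrationRuntime custom resource. The configuration name kafka-resilience is hypothetical, and all other required fields of the custom resource are omitted:

    # Excerpt only, not a complete custom resource. Assumes the App Connect
    # Operator's configurations field, which lists the names of configuration
    # objects to apply when the integration server or runtime is deployed.
    spec:
      configurations:
        - kafka-resilience   # at most one entry of type Resilience (using Kafka)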

To create a YAML file that contains account details for connecting to IBM Event Streams, complete the following steps:

  1. Create a text file with a preferred name (filename.yaml) to contain the account details.
  2. In the file, add the text accounts: as the first line, followed by the account details, as shown in the sample YAML. The account that is used to connect to IBM Event Streams must have read and write access, and only BASIC_SASL_SSL is supported as the authentication type. For help with completing these account details, see Connecting to Event Streams in the IBM Event Streams documentation.

    accounts:
      kafka:
        - name: Account 1
          authType: BASIC_SASL_SSL
          credentials:
            username: Kafka_cluster_username
            password: Kafka_cluster_password
            brokerList: '[commaSeparated_list_of_Kafka_brokers]'
            securityMechanism: PLAIN
            schemaRegistryType: schemaRegistry
          endpoint: {}
    Example:
    accounts:
      kafka:
        - name: Account 1
          authType: BASIC_SASL_SSL
          credentials:
            username: token
            password: 1234qOlflk567r8uBheda99yzLKVtryuquiu543ythD
            brokerList: '["broker-1-x5x15x2x7xxxxxxx.kafka.svc02.us-south.eventstreams.cloud.ibm.com:9093","broker-2-x5x15x2x7xxxxxxx.kafka.svc02.us-south.eventstreams.cloud.ibm.com:9093"]'
            securityMechanism: PLAIN
            schemaRegistryType: Apicurio
          endpoint: {}
    Note: The file must contain valid YAML. Tab characters are not permitted; replace any tabs with spaces. You might find it helpful to use a YAML validation tool to check the content.

  3. Save and close the filename.yaml file.

After you create the file, you can use it to create a configuration object, as described in Configuration reference and in Creating an instance from the Red Hat OpenShift web console and Creating an instance from the Red Hat OpenShift CLI or Kubernetes CLI.
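
As one possible approach, the following sketch shows how such a configuration object might be defined as a Configuration custom resource, with the contents of filename.yaml supplied as base64-encoded data. The object name kafka-resilience is hypothetical, and the apiVersion and type values shown are assumptions; use the values that the linked topics specify for your operator release and for the Resilience (using Kafka) type:

    apiVersion: appconnect.ibm.com/v1beta1   # assumed API version; check your operator release
    kind: Configuration
    metadata:
      name: kafka-resilience                 # hypothetical name
      namespace: my-namespace
    spec:
      type: resiliencekafka                  # assumed type string for Resilience (using Kafka)
      description: Kafka credentials for event-flow resiliency
      data: <base64-encoded contents of filename.yaml>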