Installing Apache Kafka

Apache Kafka provides a buffer for messages that are sent to and received from external interfaces. Apache Kafka is not required if the IBM® Maximo® Manage software does not interface with external systems.

About this task

Apache Kafka is required by IoT and is optional for Manage.

Note: When you update an operator, there might be delays in applying changes to the Kafka configuration.

Procedure

  • Install Kafka from the Red Hat® OpenShift® web console or the command line.
    • The Red Hat AMQ Streams operator, which is based on the Strimzi operator, can be used to install Kafka for on-premises installations. It can also be used to install Kafka in cloud-based Maximo Application Suite installations when a managed Kafka service from the cloud provider is not desired.
      Note: Starting in 8.10.1, to deploy Maximo Application Suite in a FIPS-enabled environment, it is recommended that you install Kafka by using Strimzi Operator 0.33.2.
      To install by using the Red Hat OpenShift web console:
      1. From Home > Projects, click the Create Project button, enter the name kafka, and click Create to provision the new namespace for Kafka.
      2. In the banner, click Import YAML (Plus icon). Enter the following YAML.
        
        ---
        apiVersion: operators.coreos.com/v1
        kind: OperatorGroup
        metadata:
          name: "kafka"
          namespace: "kafka"
        spec:
          targetNamespaces:
            - "kafka"
        
      3. Click Create to create the operator group in the kafka namespace.
      4. In the banner, click Import YAML (Plus icon). Enter the following YAML.
        
        ---
        apiVersion: operators.coreos.com/v1alpha1
        kind: Subscription
        metadata:
          name: amq-streams
          namespace: "kafka"
        spec:
          channel: amq-streams-1.8.x
          installPlanApproval: Automatic
          name: amq-streams
          source: redhat-operators
          sourceNamespace: openshift-marketplace
        
        Tip: For Maximo Application Suite on AWS (BYOL) version 8.7, change amq-streams-1.8.x to amq-streams-1.7.x to match the version of AMQ Streams that is installed in the BAS namespace.
      5. Click Create to create the subscription resources.
      6. From Operators > Installed Operators, search for AMQ Streams and verify that the operator Status is set to Succeeded.
      7. In the banner, click Import YAML (Plus icon). Enter the following YAML.
        
        ---
        apiVersion: kafka.strimzi.io/v1beta2
        kind: Kafka
        metadata:
          name: "maskafka"
          namespace: "kafka"
        spec:
          # -------------------------------------------------------
          kafka:
            version: 2.7.0
            replicas: 3
            resources:
              requests:
                memory: 4Gi
                cpu: "1"
              limits:
                memory: 4Gi
                cpu: "2"
            jvmOptions:
              -Xms: 3072m
              -Xmx: 3072m
            config:
              offsets.topic.replication.factor: 3
              transaction.state.log.replication.factor: 3
              transaction.state.log.min.isr: 2
              log.message.format.version: "2.7"
              log.retention.hours: 24
              log.retention.bytes: 1073741824
              log.segment.bytes: 268435456
              log.cleaner.enable: true
              log.cleanup.policy: delete
              auto.create.topics.enable: false
            storage:
              type: jbod
              volumes:
                - id: 0
                  type: persistent-claim
                  class: "ocs-storagecluster-ceph-rbd"
                  size: 100Gi
                  deleteClaim: true
            authorization:
                type: simple
            listeners:
              - name: tls
                port: 9094
                type: route
                tls: true
                authentication:
                  type: scram-sha-512
          # -------------------------------------------------------
          zookeeper:
            replicas: 3
            resources:
              requests:
                memory: 1Gi
                cpu: "0.5"
              limits:
                memory: 1Gi
                cpu: "1"
            jvmOptions:
              -Xms: 768m
              -Xmx: 768m
            storage:
              type: persistent-claim
              class: "ocs-storagecluster-ceph-rbd"
              size: 10Gi
              deleteClaim: true
          # -------------------------------------------------------
          entityOperator:
            userOperator: {}
            topicOperator: {}
        

        Ensure that you modify the specified storage class ocs-storagecluster-ceph-rbd to use a supported storage class for your cluster.
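
        To list the storage classes that are available on your cluster, run:
        
        oc get storageclass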

      8. Click Create to create the Kafka cluster.
      9. From Workloads > StatefulSets, switch to the kafka project. You should see two StatefulSets: maskafka-kafka (the Kafka brokers) and maskafka-zookeeper (the ZooKeeper nodes). Select each StatefulSet and verify that it has three pods in the Ready state.
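        You can also check the broker and ZooKeeper pods from the CLI by using the strimzi.io/cluster label that the operator applies to the cluster's pods:
        
        oc get pods -n kafka -l strimzi.io/cluster=maskafka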
      10. In the banner, click Import YAML (Plus icon). Enter the following YAML.
        
        ---
        apiVersion: kafka.strimzi.io/v1beta2
        kind: KafkaUser
        metadata:
          name: "maskafkauser"
          labels:
            strimzi.io/cluster: "maskafka"
          namespace: "kafka"
        spec:
          authentication:
            type: scram-sha-512
          authorization:
            type: simple
            acls:
              - host: '*'
                operation: All
                resource:
                  name: '*'
                  patternType: literal
                  type: topic
              - host: '*'
                operation: All
                resource:
                  name: '*'
                  patternType: literal
                  type: group
              - host: '*' 
                operation: All
                resource:
                  name: '*'
                  patternType: literal
                  type: cluster
              - host: '*'
                operation: All
                resource:
                  name: '*'
                  patternType: literal
                  type: transactionalId
        
      11. Click Create to create a Kafka user, which is used by Maximo Application Suite to authenticate connections to Kafka.
      12. From Workloads > Secrets, switch to the kafka project. Verify that the maskafkauser secret was created by the user entity operator.
      13. From Networking > Routes, switch to the kafka project. Verify that the maskafka-kafka-tls-bootstrap route was created.
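        Both checks can also be run from the CLI:
        
        oc get secret maskafkauser -n kafka
        oc get route maskafka-kafka-tls-bootstrap -n kafka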
      14. Get the Kafka information.
        To get the Kafka host and port:
        
        oc get Kafka.kafka.strimzi.io maskafka -n kafka -o jsonpath="{.status.listeners[0].addresses[0]}"
        
        Sample output:
        
        {"host":"maskafka-kafka-tls-bootstrap-kafka.apps.cluster1.example-cluster.com","port":443}
        
        To get the Kafka ca crt:
        
        oc get Kafka.kafka.strimzi.io maskafka -n kafka -o jsonpath="{.status.listeners[0].certificates[0]}"
        
        Sample output: the cluster CA certificate in PEM format. For a complete example, see the CLI procedure later in this topic.
        To get the Kafka username and password:
        
        oc extract secret/maskafkauser -n kafka --keys=sasl.jaas.config --to=-
        Sample output:
        
        # sasl.jaas.config
        org.apache.kafka.common.security.scram.ScramLoginModule required username="maskafkauser" password="KbpatTNjUu5N";
        

        Where the username is maskafkauser and the password is KbpatTNjUu5N.

        To get the SASL Mechanism:
        
        oc get Kafka.kafka.strimzi.io maskafka -n kafka -o jsonpath='{.spec.kafka.listeners[0].authentication}' | jq -r . 
        
        Sample output:
        
        {
          "type": "scram-sha-512"
        }
        

        Where the SASL Mechanism is SCRAM-SHA-512.
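
        With these details collected, you can optionally verify connectivity from outside the Suite by using the Kafka command-line clients. The following is a minimal sketch, not part of the documented procedure: file names and the test topic are illustrative, the placeholder password must be replaced, and the topic must already exist because topic auto-creation is disabled in this configuration.
        
        # Save the cluster CA certificate and import it into a Java truststore.
        oc get Kafka.kafka.strimzi.io maskafka -n kafka \
          -o jsonpath="{.status.listeners[0].certificates[0]}" > ca.crt
        keytool -importcert -alias strimzi-ca -file ca.crt \
          -keystore truststore.jks -storepass changeit -noprompt
        
        # Client properties for the tls listener (SCRAM-SHA-512 over TLS).
        {
          echo 'security.protocol=SASL_SSL'
          echo 'sasl.mechanism=SCRAM-SHA-512'
          echo 'sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="maskafkauser" password="<password>";'
          echo 'ssl.truststore.location=truststore.jks'
          echo 'ssl.truststore.password=changeit'
        } > client.properties
        
        # Produce a test message through the external bootstrap route (port 443).
        echo "hello" | kafka-console-producer.sh \
          --bootstrap-server <bootstrap-host>:443 \
          --topic test --producer.config client.properties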

    • To install by using the Red Hat OpenShift command-line interface (CLI):
      1. From the bastion host, create the YAML file kafka-sub.yaml, containing the Namespace, OperatorGroup, and Subscription resources that are used to install Kafka:
        
        ---
        apiVersion: v1
        kind: Namespace
        metadata:
          name: "kafka"
        ---
        apiVersion: operators.coreos.com/v1
        kind: OperatorGroup
        metadata:
          name: "kafka"
          namespace: "kafka"
        spec:
          targetNamespaces:
            - "kafka"
        ---
        apiVersion: operators.coreos.com/v1alpha1
        kind: Subscription
        metadata:
          name: amq-streams
          namespace: "kafka"
        spec:
          channel: amq-streams-1.8.x
          installPlanApproval: Automatic
          name: amq-streams
          source: redhat-operators
          sourceNamespace: openshift-marketplace
        
      2. Apply the kafka-sub.yaml file to the Red Hat OpenShift Container Platform cluster:
        
        oc apply -f kafka-sub.yaml
        
      3. Verify that the AMQ Streams operator was successfully deployed:
        
        oc get csv -n kafka -l operators.coreos.com/amq-streams.kafka
        
        Sample output: the ClusterServiceVersion for the operator, with PHASE set to Succeeded.
      4. From the bastion host, create the YAML file kafka-cluster.yaml, containing the Kafka resource that describes the configuration of the Kafka cluster:
        
        ---
        apiVersion: kafka.strimzi.io/v1beta2
        kind: Kafka
        metadata:
          name: "maskafka"
          namespace: "kafka"
        spec:
          # -------------------------------------------------------
          kafka:
            version: 2.7.0
            replicas: 3
            resources:
              requests:
                memory: 4Gi
                cpu: "1"
              limits:
                memory: 4Gi
                cpu: "2"
            jvmOptions:
              -Xms: 3072m
              -Xmx: 3072m
            config:
              offsets.topic.replication.factor: 3
              transaction.state.log.replication.factor: 3
              transaction.state.log.min.isr: 2
              log.message.format.version: "2.7"
              log.retention.hours: 24
              log.retention.bytes: 1073741824
              log.segment.bytes: 268435456
              log.cleaner.enable: true
              log.cleanup.policy: delete
              auto.create.topics.enable: false
            storage:
              type: jbod
              volumes:
                - id: 0
                  type: persistent-claim
                  class: "ocs-storagecluster-ceph-rbd"
                  size: 100Gi
                  deleteClaim: true
            authorization:
                type: simple
            listeners:
              - name: tls
                port: 9094
                type: route
                tls: true
                authentication:
                  type: scram-sha-512
          # -------------------------------------------------------
          zookeeper:
            replicas: 3
            resources:
              requests:
                memory: 1Gi
                cpu: "0.5"
              limits:
                memory: 1Gi
                cpu: "1"
            jvmOptions:
              -Xms: 768m
              -Xmx: 768m
            storage:
              type: persistent-claim
              class: "ocs-storagecluster-ceph-rbd"
              size: 10Gi
              deleteClaim: true
          # -------------------------------------------------------
          entityOperator:
            userOperator: {}
            topicOperator: {}
        

        Ensure that you modify the specified storage class ocs-storagecluster-ceph-rbd to use a supported storage class for your cluster.

      5. Apply the kafka-cluster.yaml file to the OCP cluster:
        
        oc apply -f kafka-cluster.yaml
        
      6. Verify that the Kafka cluster was successfully deployed. The Kafka CR should be in Ready state.
        The Kafka CR specified in the following command is fully qualified with its API group name kafkas.kafka.strimzi.io to avoid ambiguity with the Kafka CR provided by kafkas.ibmevents.ibm.com.
        
        oc get kafkas.kafka.strimzi.io -n kafka
        
        Sample output shows the maskafka cluster with the READY column set to True.
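        You can also wait for the cluster to become ready instead of polling; for example:
        
        oc wait kafkas.kafka.strimzi.io/maskafka --for=condition=Ready --timeout=300s -n kafka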
      7. From the bastion host, create the YAML file kafka-user.yaml, containing the KafkaUser resource that describes the configuration of the Kafka user that Maximo Application Suite uses to authenticate connections to Kafka:
        
        ---
        apiVersion: kafka.strimzi.io/v1beta2
        kind: KafkaUser
        metadata:
          name: "maskafkauser"
          labels:
            strimzi.io/cluster: "maskafka"
          namespace: "kafka"
        spec:
          authentication:
            type: scram-sha-512
          authorization:
            type: simple
            acls:
              - host: '*'
                operation: All
                resource:
                  name: '*'
                  patternType: literal
                  type: topic
              - host: '*'
                operation: All
                resource:
                  name: '*'
                  patternType: literal
                  type: group
              - host: '*' 
                operation: All
                resource:
                  name: '*'
                  patternType: literal
                  type: cluster
              - host: '*'
                operation: All
                resource:
                  name: '*'
                  patternType: literal
                  type: transactionalId
        
      8. Apply the kafka-user.yaml file to the OCP cluster:
        
        oc apply -f kafka-user.yaml
        
      9. Verify that the maskafkauser secret was created by the user entity operator:
        
        oc get secret maskafkauser -n kafka
        
        Sample output
      10. Get the Kafka information.
        To get the Kafka host and port:
        
        oc get Kafka.kafka.strimzi.io maskafka -n kafka -o jsonpath="{.status.listeners[0].addresses[0]}"
        
        Sample output:
        
        {"host":"maskafka-kafka-tls-bootstrap-kafka.apps.cluster1.example-cluster.com","port":443}
        
        To get the Kafka ca crt:
        
        oc get Kafka.kafka.strimzi.io maskafka -n kafka -o jsonpath="{.status.listeners[0].certificates[0]}"
        
        Sample output:
        
        -----BEGIN CERTIFICATE-----
        MIIFLTCCAxWgAwIBAgIUTExUl2XrdIPy6vZAtk9toGh2jbEwDQYJKoZIhvcNAQEN
        BQAwLTETMBEGA1UECgwKaW8uc3RyaW16aTEWMBQGA1UEAwwNY2x1c3Rlci1jYSB2
        MDAeFw0yMjA1MTEyMTAyMzFaFw0yMzA1MTEyMTAyMzFaMC0xEzARBgNVBAoMCmlv
        LnN0cmltemkxFjAUBgNVBAMMDWNsdXN0ZXItY2EgdjAwggIiMA0GCSqGSIb3DQEB
        AQUAA4ICDwAwggIKAoICAQDh6bYIudhZQ1/rR9IgSb7pzqTvtRiNOvzmnZPdtVtT
        q7lNLytPqpR6uuCIrhpuR0CPb++Rvjp2QrWgXr5VWBktT1MLk8WzDfX3+qxd5xC8
        B00EKneBZkhohxBdb0co8ipxDpQAFTy+SeXhuROd5vwLEuh3OJeZMEUfTcNfUbvo
        J/IHUIGeDmhK//DumQE79z3vfLc2EcQgenMo0VoBy4ooQ2o4B7Y3plXHuStvtn6h
        lam30rSA+p3nKskrMDDpNKadHtmCrwI/rZZBFYb7DTdUpi69NeW3TEMRXGG3dMdk
        YYTdKN0zkB5BTvRx5FC6GX+cz/Uq3SnxlSmWB1DT+2nlnlwzVAgbNdsW4HiDUIdI
        FBJyQDqWTH9e7aUv3RzlrT4c995YBTfh1Jdvq5mzneMf6lab7iZoW1hGYQLRRC5y
        v8iTycwHd7EEGf/tjGrJ/s5nWPgGv/DEOg95/UvTRz9dZUWRwHCFANd0LaFW/HdF
        qkhuiVZOKNXqfr7zxnCw/F+0408+vcR43HKUTwId7vql+F+EgjT69U5pDF4sh6ep
        SgLTHoCGd/bekq5HHkrylCOty+ZU9EEWp4fQD+wN3RzGxJ080AA3RjkqsXmHbd5e
        aXlnhDB68mWpoHFuJ6YciNBBXlC/2HhDeR7PiMD9Zj0/7A3UHZj4hHXcSQoCnSW7
        mwIDAQABo0UwQzAdBgNVHQ4EFgQU6yQKlZ+FEJyMkjsPxhmHERps1vgwEgYDVR0T
        AQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQENBQADggIB
        AEfcrS4I2xsbTuULMtHlOGLgv7Mo+aJ8Os+vCE+MvSMVrsSvslVnigzE6aSvi7Ys
        TTpstmAhIfOcEEqldRa5GcG6Az6NWlbskZXfftojWtjnZevkuRnn/xICdizX+mj4
        A3WL/GOVpTAWVUa5+lUh1AzFWhBw5kDvMxHyQhmpegt98ptxNpj5n9cHSWwJpjXl
        boNil+Y5kA4raWGa6gEOE0lwmLyS5pjOWCTCTD2MvldNakYPMqObVPE4DNia4qal
        huxOyxdr51KNBc7yVgQ1Fa7ZD+rF1a6aa6GwvwAKYNoxd7VW7fmZBSckpuWer9+R
        YCVvgE2a4vLnc5zLFwOfhjqaZSiIx0PMEmkHx1ZTriVg0GVZ8beU+I9BxUQsJyJU
        S4z9UaHexmYu/YRAQXKODw1xhqqR6oW2+CXYrtUvzN6kamFh8jN3AKf4PKA+TmjL
        maW0M7FVp+0Erne59hBcZhKG0QYx4AkjCwKclRwDBxXcBTcmXduDFeGzLub0napJ
        Uczo2zURQ7L6qPew9Guh0O1dnGp+kgi8T8kt/DniMvQBWDK3GvFi0A5mVjLQqMHQ
        HvAPzshx7Si1O45hepGK4fxQMcwAHw6c1V3j10R8RHh7bckld5mJ5Nh/BjZhk/LK
        N5Klfwoek0QSVAXQfnX1YtJfrHfz5+TYx0NnYTcgX6fE
        -----END CERTIFICATE-----
        
        To get the Kafka username and password:
        
        oc extract secret/maskafkauser -n kafka --keys=sasl.jaas.config --to=-
        Sample output:
        
        # sasl.jaas.config
        org.apache.kafka.common.security.scram.ScramLoginModule required username="maskafkauser" password="KbpatTNjUu5N";
        

        Where the username is maskafkauser and the password is KbpatTNjUu5N.
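
        The password is also available directly in the secret; for example, by decoding the standard password key that the User Operator creates:

        oc get secret maskafkauser -n kafka -o jsonpath='{.data.password}' | base64 -d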

        To get the SASL Mechanism:
        
        oc get Kafka.kafka.strimzi.io maskafka -n kafka -o jsonpath='{.spec.kafka.listeners[0].authentication}' | jq -r . 
        
        Sample output:
        
        {
          "type": "scram-sha-512"
        }
        

        Where the SASL Mechanism is SCRAM-SHA-512.

  • Installing Kafka by using the AMQ Streams Operator UI
    1. Install the AMQ Streams operator, which you then use in the AMQ Streams UI to create a Kafka cluster and user.
      • Log in to the OCP cluster with your username and password
      • Create a new project: kafka
      • Navigate to OperatorHub, search for "AMQ Streams", and select the tile for Red Hat Integration - AMQ Streams
      • Click "Install"
      • On the next page, select the radio button for amq-streams-1.7.x.
      • Select a specific namespace on the cluster: kafka
      • Click the "Install" button
      • The operator is ready for use.
    2. Create a Kafka cluster.
      • In the kafka namespace, click "Installed Operators"
      • Navigate to the Kafka tab, and click "Create Kafka"
      • Enter the name "kafka" for the cluster
      • Expand the Kafka configuration and enter the following values:
        • Kafka Brokers: 3
        • Expand Storage:

          Kafka Storage: jbod

          Expand volumes and add a new volume:
          • id: 0
          • type: persistent-claim
          • Size: 100Gi
          • Storage class: ocs-storagecluster-ceph-rbd (replace with a supported block storage class)
          • Delete claim: true (checked)
        • Expand Listeners, and scroll down to the listener section named "tls"
          • port: 9093
          • type: route
          • tls: true (checked)
            Expand "Authentication":
            • Type: scram-sha-512
        • Expand Authorization
          • Type: Simple
      • Expand the Zookeeper configuration and enter the following values:
        • Zookeeper Nodes: 3
        • Expand Storage:
          • Zookeeper Storage: persistent-claim
          • Size: 10Gi
          • class: ocs-storagecluster-ceph-rbd (replace with a supported block storage class)
          • Delete claim: true (checked)
      Click Create.

      For more information about available and tested storage classes, and for deployment size guidance, see the Monitor and IoT section of the Maximo Application Suite system requirements document.

    3. Create the Kafka user.
      • In the AMQ Streams Operator, navigate to the Kafka User tab. Click "Create KafkaUser".
      • Set name: "masuser".
      • Expand authentication:
        • type: scram-sha-512
      • Expand authorization:
        • type: simple
      • Expand acls and add 4 ACLs as shown:
        
        a:  host: '*'
            operation: All
            > Expand Resource:
              name: '*'
              patternType: prefix
              type: topic
            Type: Allow
        b:  host: '*'
            operation: All
            > Expand Resource:
              name: '*'
              patternType: prefix
              type: group
            Type: Allow
        c:  host: '*'
            operation: All
            > Expand Resource:
              name: '*'
              patternType: literal
              type: topic
            Type: Allow
        d:  host: '*'
            operation: All
            > Expand Resource:
              name: '*'
              patternType: literal
              type: group
            Type: Allow
        
      • Click Remove acl for any extra ACLs that are shown but not configured.
      • Click Create.
    4. Create the Kafka topics.
      1. In the AMQ Streams Operator, navigate to the Kafka Topic tab. Click Create KafkaTopic.
        
        Name: cqin
        Labels:  strimzi.io/cluster=kafka
        Partitions:  1
        Replication factor:  3
        Topic Name: cqin
        
      2. Click Create.
      3. Create another topic by using these values:
        
        Name: cqinerr
        Labels:  strimzi.io/cluster=kafka
        Partitions:  1
        Replication factor:  3
        Topic Name: cqinerr
        
      4. Click Create.
      5. Create another topic by using these values:
        
        Name: sqin
        Labels:  strimzi.io/cluster=kafka
        Partitions:  1
        Replication factor:  3
        Topic Name: sqin
        
      6. Click Create.
      7. Create another topic by using these values:
        
        Name: sqout
        Labels:  strimzi.io/cluster=kafka
        Partitions:  1
        Replication factor:  3
        Topic Name: sqout
        
      8. Click Create.
      9. When you complete this section, a Kafka cluster, user, and topics are created. The topics can also be created from the command line, as shown in the sketch after this procedure. Collect the following details to complete the Apache Kafka configuration in the Maximo Application Suite Administration UI.
        • Kafka bootstrap hosts and ports
        • username and password
        • CA certificate
      10. Next, configure the Suite parameters for Kafka in the Maximo Application Suite UI.
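
        A CLI equivalent for creating the topics is to apply one KafkaTopic resource per topic. The following sketch covers the cqin topic and matches the cluster name kafka that is used in this section; repeat it for cqinerr, sqin, and sqout:
        
        ---
        apiVersion: kafka.strimzi.io/v1beta2
        kind: KafkaTopic
        metadata:
          name: cqin
          namespace: kafka
          labels:
            strimzi.io/cluster: "kafka"
        spec:
          partitions: 1
          replicas: 3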

What to do next

Configure Maximo Application Suite parameters
Now you are ready to configure Apache Kafka details.
  1. In the Maximo Application Suite instance, log in to the Administration dashboard.
  2. In Other > Configurations, select Apache Kafka. The following information is needed to configure the Apache Kafka details: Hosts/Hostnames, Username/Password, and Certificates.

    Hosts - To obtain the bootstrap hosts, use the Red Hat OpenShift console as follows.

  3. In the Kafka project, go to Networking > Routes and search for the route kafka-kafka-tls-bootstrap.
  4. Copy the value in the host field.

    For example, kafka-kafka-tls-bootstrap-kafka.<yourdomain.com>.

  5. The port number in the external route is 443. Enter the host name and port values in the hosts section.
  6. To obtain the Kafka user's password, in the Red Hat OpenShift kafka project, go to Workloads > Secrets. Search for your Kafka user, for example, masuser. The data section contains the user's password.
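
    You can also extract the password from the CLI, as in the earlier procedures; this assumes the standard password key in the secret that the User Operator creates:

    oc extract secret/masuser -n kafka --keys=password --to=-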
  7. To obtain the certificates, when you configure the Apache Kafka parameters, click the Retrieve option to automatically retrieve the certificates from the Kafka bootstrap host.
    Alternatively, you can obtain the certificate details before you configure the Maximo Application Suite parameters and enter the information manually.
    1. To do this, in the Red Hat OpenShift console, switch to the Kafka project, and then navigate to Custom Resource Definitions. Search for Kafka.
    2. Click the Instances tab and select your instance in the Kafka namespace.
    3. Click YAML view. From this view, you can copy the certificate. The certificate has BEGIN CERTIFICATE and END CERTIFICATE tags, which must be included.
      
      -----BEGIN CERTIFICATE-----
      MIIDLTCCAhWgAwIBAgIJANfi6SPho4cIM...
      -----END CERTIFICATE-----
      
  8. Copy the certificate text to be added in the Maximo Application Suite UI.
  9. Log in to the Maximo Application Suite Admin User Interface and go to Administration.
  10. Click Configurations.
  11. Click Apache Kafka.
  12. Add the tls bootstrap host name and port:

    For example, Host: xxx.xxx.xxx.xx.com, Port: 443

  13. Enter the username and password.
  14. Enter an alias name. For example, strimzi.
  15. Add the copied certificate or certificates, set the alias name, and then click Confirm.
  16. Click Save.
  17. To confirm that the configuration is successful, in the OpenShift console, navigate to Administration > Custom Resource Definitions. Search for kafkacfg. Click the Instances tab, click your instance, and then view the YAML for any success or failure messages.
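
    The same check can be run from the CLI; a minimal sketch in which the instance name and namespace placeholders must be replaced:

    oc get kafkacfg --all-namespaces
    oc describe kafkacfg <instance-name> -n <namespace>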
IBM Event Streams
Event Streams is an alternative to AMQ Streams for the Kafka dependency and is available in the IBM Cloud® catalog.
  1. To install an Event Streams instance in IBM Cloud, log in to your IBM Cloud account, go to Catalog, and search for "Event Streams". Click the Event Streams tile and go to the Create tab. On the provisioning details page, enter the information for your Event Streams instance.
  2. Location - It is recommended that you choose a location close to the server or cluster location of your Maximo Application Suite instance for improved network performance.
  3. Pricing Plan - Choose the plan that best fits your expected Kafka usage.
  4. Resource details - Enter a service name (it can be any unique name), and optionally enter more details such as the IBM Cloud resource group and tags.
  5. Review the summary of your Event Streams instance, review and accept the license agreement terms, and click Create.
The Event Streams instance is provisioned, and you are redirected to the Event Streams Home page. Click Service Credentials from the menu. Click "New credential" to create a service credential that contains the details that are used to integrate Event Streams into your Maximo Application Suite instance.
  • Name: Unique name for your service credential
  • Example: Service credentials-1
  • Role: Defines the level of permissions for your Event Streams instance
  • Example: Manager (default)
When the Event Streams service credential is created, expand it to see all the credential details. You need the following information from the Event Streams service credential to configure it properly in Maximo Application Suite:
  • kafka_brokers_sasl - Contains 6 hostnames for available Kafka brokers of your Event Streams instance.
  • user - Kafka username, default is token
  • password - Kafka password
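Before you configure the Suite, you can optionally verify the credentials with a Kafka command-line client. The following is a minimal sketch, not part of the documented procedure: the topic name is illustrative, the broker hostname and password come from your service credential, and it assumes that your JVM already trusts the Let's Encrypt certificate chain that is described later in this section.

  # Client properties for Event Streams (SASL PLAIN over TLS).
  {
    echo 'security.protocol=SASL_SSL'
    echo 'sasl.mechanism=PLAIN'
    echo 'sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="token" password="<your-password>";'
  } > es-client.properties

  # Consume from a test topic through one of the broker endpoints (port 9093).
  kafka-console-consumer.sh \
    --bootstrap-server broker-0-<your-event-streams-broker-id>.kafka.svc07.us-south.eventstreams.cloud.ibm.com:9093 \
    --topic test --consumer.config es-client.properties --from-beginning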
Suite configuration parameters for Event Streams
Now you are ready to configure Event Streams in Maximo Application Suite.
  1. Log in to the Suite Administration dashboard of your Maximo Application Suite instance and go to Other configurations > Configurations.
  2. Select Apache Kafka.
  3. Enter the following information to configure Event Streams as a Kafka service for Maximo Application Suite:
    • Hosts/Hostnames - Add a row for each of the six Kafka broker hostnames provided in the Event Streams service credential.
      Note: Make sure you do not copy the port. Copy the Kafka broker hostname.
      For example,
      broker-0-<your-event-streams-broker-id>.kafka.svc07.us-south.eventstreams.cloud.ibm.com
      broker-1...
      ....
      broker-5-<your-event-streams-broker-id>.kafka.svc07.us-south.eventstreams.cloud.ibm.com
    • Port - Enter the port that is associated with the Kafka broker hostnames provided in the Event Streams service credential.

      For example, 9093

    • SASL Mechanism - Select plain. This is the default authentication mechanism for Event Streams.
    • Username - Enter the user provided in the Event Streams service credential.
    • Password - Enter the password provided in the Event Streams service credential.
    • Certificates - Enter the chain of SSL certificates for your Event Streams instance.
    • Click Add to add the intermediate certificate of the certificate chain.
    • Enter an alias.

      For example, kafkacertpart1

    • Enter the Certificate content. Include the Let's Encrypt R3 intermediate certificate, issued to US, Let's Encrypt, R3.
      For example,
      
      -----BEGIN CERTIFICATE-----
      MIIF5jCCBM6gAwIBAgISA0Y...
      -----END CERTIFICATE-----
      
  4. Click Confirm. The first part of this certificate should have valid dates and look like the following example:
    Issued to: US, Let's Encrypt, R3
    Issued by: US, Internet Security Research Group, ISRG Root X1
    Valid from: Thu Sep 01 2022
    Valid to: Mon Sep 15 2025

    This is the intermediate certificate which is required for the SSL connection to Event Streams endpoint.

  5. Click Add to add the root of the certificate chain.
  6. Enter an alias.

    For example, kafkacertpart2

  7. Enter the Certificate content. Include the ISRG Root X1 cross-signed certificate, issued to US, Internet Security Research Group, ISRG Root X1.
    
    -----BEGIN CERTIFICATE-----
    MIIFazCCA1OgAw...
    -----END CERTIFICATE-----
    
  8. Click Confirm. The second part of this certificate should have valid dates and look like the following example:
    Issued to: US, Internet Security Research Group, ISRG Root X1
    Issued by: US, Internet Security Research Group, ISRG Root X1
    Valid from: Thu Jun 04 2015
    Valid to: Mon Jun 04 2035

    This is the root certificate which is required for the SSL connection to Event Streams endpoint.

  9. Save the Apache Kafka configuration.

    Now, wait for the Apache Kafka configuration to reconcile; this process might take up to 10 minutes. The configuration is complete when the configuration status is set to Ready.

    Configuration Ready - Kafka configuration was successfully verified