Installing Apache Kafka for IBM Maximo Manage
Apache Kafka provides a buffer for messages that are sent to and received from external interfaces. Apache Kafka is not required if the IBM® Maximo® Manage software does not interface with external systems.
The Red Hat® AMQ Streams operator, which is based on the Strimzi operator, is the preferred way to install Kafka for on-premises installations. It can also be used to install Kafka in cloud-based Maximo Application Suite installations when a managed Kafka service from the cloud provider is not desirable. For more information, see Red Hat AMQ Streams operator and Strimzi operator.
Tip: This task can also be done by using the following Ansible® role: Kafka. For more information, see IBM Maximo Application Suite installation with Ansible collection and kafka.
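For illustration only, a minimal sketch of invoking that role by using the ibm.mas_devops Ansible collection follows. It assumes that the collection is installed and that you are logged in to the target cluster; check the role documentation for the exact role variables that your environment requires.
# Install the collection (one time)
ansible-galaxy collection install ibm.mas_devops
# Select and run the kafka role
export ROLE_NAME=kafka
ansible-playbook ibm.mas_devops.run_role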
What to do next
Configure Apache Kafka Suite parameters. For more information, see Apache Kafka.
Installing by using the Red Hat OpenShift Container Platform web console
Procedure
- In Red Hat OpenShift® Container Platform, from the side navigation menu, click Home > Projects and then click Create Project. Enter the name kafka, and click Create to provision the new namespace for Kafka.
- In the global navigation bar, click the Import YAML icon. Enter the following YAML.
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: "kafka"
  namespace: "kafka"
spec:
  targetNamespaces:
    - "kafka"
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: "kafka"
spec:
  channel: amq-streams-1.8.x
  installPlanApproval: Automatic
  name: amq-streams
  source: redhat-operators
  sourceNamespace: openshift-marketplace
Tip: For Maximo Application Suite on AWS (BYOL) version 8.7, change amq-streams-1.8.x to amq-streams-1.7.x to match the version of AMQ Streams that is installed in the BAS namespace.
- Click Create to create the operator group and subscription resources in the kafka namespace.
- From the side navigation menu, click Operators > Installed Operators. Search for AMQ Streams and verify that the operator status is set to Succeeded.
- In the global navigation bar, click the Import YAML icon. Enter
the following YAML code.
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: "maskafka"
  namespace: "kafka"
spec:
  # -------------------------------------------------------
  kafka:
    version: 2.7.0
    replicas: 3
    resources:
      requests:
        memory: 4Gi
        cpu: "1"
      limits:
        memory: 4Gi
        cpu: "2"
    jvmOptions:
      -Xms: 3072m
      -Xmx: 3072m
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.7"
      log.retention.hours: 24
      log.retention.bytes: 1073741824
      log.segment.bytes: 268435456
      log.cleaner.enable: true
      log.cleanup.policy: delete
      auto.create.topics.enable: false
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          class: "ocs-storagecluster-ceph-rbd"
          size: 100Gi
          deleteClaim: true
    authorization:
      type: simple
    listeners:
      - name: tls
        port: 9094
        type: route
        tls: true
        authentication:
          type: scram-sha-512
  # -------------------------------------------------------
  zookeeper:
    replicas: 3
    resources:
      requests:
        memory: 1Gi
        cpu: "0.5"
      limits:
        memory: 1Gi
        cpu: "1"
    jvmOptions:
      -Xms: 768m
      -Xmx: 768m
    storage:
      type: persistent-claim
      class: "ocs-storagecluster-ceph-rbd"
      size: 10Gi
      deleteClaim: true
  # -------------------------------------------------------
  entityOperator:
    userOperator: {}
    topicOperator: {}
Modify the specified storage class ocs-storagecluster-ceph-rbd to use a supported storage class for your cluster.
- Click Create to create the Kafka cluster.
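To list the storage classes that are available on your cluster when you choose a replacement for the storage class in the YAML, you can run, for example:
oc get storageclass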
- From the side navigation menu, click Workloads > StatefulSets and switch to the kafka project. Two stateful sets are shown: maskafka-kafka, which contains the Kafka brokers, and maskafka-zookeeper, which contains the ZooKeeper nodes. Select each stateful set and verify that each one has three pods in the Ready state.
- In the global navigation bar, click the Import YAML icon. Enter
the following YAML.
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: "maskafkauser"
  labels:
    strimzi.io/cluster: "maskafka"
  namespace: "kafka"
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      - host: '*'
        operation: All
        resource:
          name: '*'
          patternType: literal
          type: topic
      - host: '*'
        operation: All
        resource:
          name: '*'
          patternType: literal
          type: group
      - host: '*'
        operation: All
        resource:
          name: '*'
          patternType: literal
          type: cluster
      - host: '*'
        operation: All
        resource:
          name: '*'
          patternType: literal
          type: transactionalId
- Click Create to create a Kafka user, which is used by Maximo Application Suite to authenticate connections to Kafka.
- From the side navigation menu, click Workloads > Secrets and switch to the kafka project. Verify that the User Operator created the maskafkauser secret.
- From the side navigation menu, click Networking > Routes and switch to the kafka project. Verify that the maskafka-kafka-tls-bootstrap route was created.
- Get the Kafka information.
- To get the Kafka host and port, enter the following command:
oc get Kafka.kafka.strimzi.io maskafka -n kafka -o jsonpath="{.status.listeners[0].addresses[0]}"
Sample output:
{"host":"maskafka-kafka-tls-bootstrap-kafka.apps.cluster1.example-cluster.com","port":443}
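Optionally, to confirm that the bootstrap route is reachable over TLS before you continue, a quick check such as the following can be used, with the host and port taken from the sample output above:
openssl s_client -connect maskafka-kafka-tls-bootstrap-kafka.apps.cluster1.example-cluster.com:443 </dev/null | head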
- Get the Kafka CA certificate.
oc get Kafka.kafka.strimzi.io maskafka -o jsonpath="{.status.listeners[0].certificates[0]}"
Sample output:
-----BEGIN CERTIFICATE-----
MIIFLTCCAxWgAwIBAgIUTExUl2XrdIPy6vZAtk9toGh2jbEwDQYJKoZIhvcNAQEN
BQAwLTETMBEGA1UECgwKaW8uc3RyaW16aTEWMBQGA1UEAwwNY2x1c3Rlci1jYSB2
MDAeFw0yMjA1MTEyMTAyMzFaFw0yMzA1MTEyMTAyMzFaMC0xEzARBgNVBAoMCmlv
LnN0cmltemkxFjAUBgNVBAMMDWNsdXN0ZXItY2EgdjAwggIiMA0GCSqGSIb3DQEB
AQUAA4ICDwAwggIKAoICAQDh6bYIudhZQ1/rR9IgSb7pzqTvtRiNOvzmnZPdtVtT
q7lNLytPqpR6uuCIrhpuR0CPb++Rvjp2QrWgXr5VWBktT1MLk8WzDfX3+qxd5xC8
B00EKneBZkhohxBdb0co8ipxDpQAFTy+SeXhuROd5vwLEuh3OJeZMEUfTcNfUbvo
J/IHUIGeDmhK//DumQE79z3vfLc2EcQgenMo0VoBy4ooQ2o4B7Y3plXHuStvtn6h
lam30rSA+p3nKskrMDDpNKadHtmCrwI/rZZBFYb7DTdUpi69NeW3TEMRXGG3dMdk
YYTdKN0zkB5BTvRx5FC6GX+cz/Uq3SnxlSmWB1DT+2nlnlwzVAgbNdsW4HiDUIdI
FBJyQDqWTH9e7aUv3RzlrT4c995YBTfh1Jdvq5mzneMf6lab7iZoW1hGYQLRRC5y
v8iTycwHd7EEGf/tjGrJ/s5nWPgGv/DEOg95/UvTRz9dZUWRwHCFANd0LaFW/HdF
qkhuiVZOKNXqfr7zxnCw/F+0408+vcR43HKUTwId7vql+F+EgjT69U5pDF4sh6ep
SgLTHoCGd/bekq5HHkrylCOty+ZU9EEWp4fQD+wN3RzGxJ080AA3RjkqsXmHbd5e
aXlnhDB68mWpoHFuJ6YciNBBXlC/2HhDeR7PiMD9Zj0/7A3UHZj4hHXcSQoCnSW7
mwIDAQABo0UwQzAdBgNVHQ4EFgQU6yQKlZ+FEJyMkjsPxhmHERps1vgwEgYDVR0T
AQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQENBQADggIB
AEfcrS4I2xsbTuULMtHlOGLgv7Mo+aJ8Os+vCE+MvSMVrsSvslVnigzE6aSvi7Ys
TTpstmAhIfOcEEqldRa5GcG6Az6NWlbskZXfftojWtjnZevkuRnn/xICdizX+mj4
A3WL/GOVpTAWVUa5+lUh1AzFWhBw5kDvMxHyQhmpegt98ptxNpj5n9cHSWwJpjXl
boNil+Y5kA4raWGa6gEOE0lwmLyS5pjOWCTCTD2MvldNakYPMqObVPE4DNia4qal
huxOyxdr51KNBc7yVgQ1Fa7ZD+rF1a6aa6GwvwAKYNoxd7VW7fmZBSckpuWer9+R
YCVvgE2a4vLnc5zLFwOfhjqaZSiIx0PMEmkHx1ZTriVg0GVZ8beU+I9BxUQsJyJU
S4z9UaHexmYu/YRAQXKODw1xhqqR6oW2+CXYrtUvzN6kamFh8jN3AKf4PKA+TmjL
maW0M7FVp+0Erne59hBcZhKG0QYx4AkjCwKclRwDBxXcBTcmXduDFeGzLub0napJ
Uczo2zURQ7L6qPew9Guh0O1dnGp+kgi8T8kt/DniMvQBWDK3GvFi0A5mVjLQqMHQ
HvAPzshx7Si1O45hepGK4fxQMcwAHw6c1V3j10R8RHh7bckld5mJ5Nh/BjZhk/LK
N5Klfwoek0QSVAXQfnX1YtJfrHfz5+TYx0NnYTcgX6fE
-----END CERTIFICATE-----
- Get the Kafka username and password.
oc extract secret/maskafkauser -n kafka --keys=sasl.jaas.config --to=-
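The extracted value is a Java Authentication and Authorization Service (JAAS) configuration string; its username and password fields are the credentials that Maximo Application Suite uses to connect. It typically resembles the following line, where the password shown is a placeholder for the generated value:
org.apache.kafka.common.security.scram.ScramLoginModule required username="maskafkauser" password="<generated-password>";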
- Get the Simple Authentication and Security Layer (SASL) mechanism.
oc get Kafka.kafka.strimzi.io maskafka -n kafka -o jsonpath='{.spec.kafka.listeners[0].authentication}' | jq -r .
Sample output:{ "type": "scram-sha-512" }
Note: The SASL mechanism is SCRAM-SHA-512.
Installing by using the Red Hat OpenShift command-line interface (CLI)
Procedure
- From the bastion host, create the YAML file kafka-sub.yaml, which contains the Namespace, OperatorGroup, and Subscription resources that are used to install Kafka:
---
apiVersion: v1
kind: Namespace
metadata:
  name: "kafka"
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: "kafka"
  namespace: "kafka"
spec:
  targetNamespaces:
    - "kafka"
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: amq-streams
  namespace: "kafka"
spec:
  channel: amq-streams-1.8.x
  installPlanApproval: Automatic
  name: amq-streams
  source: redhat-operators
  sourceNamespace: openshift-marketplace
Tip: For Maximo Application Suite on AWS (BYOL) version 8.7, change amq-streams-1.8.x to amq-streams-1.7.x to match the version of AMQ Streams that is installed in the BAS namespace.
- Apply the kafka-sub.yaml file to the Red Hat OpenShift Container Platform cluster:
oc apply -f kafka-sub.yaml
- Verify that the AMQ Streams operator was successfully deployed:
oc get csv -n kafka -l operators.coreos.com/amq-streams.kafka
Sample output:
NAME                DISPLAY                             VERSION   REPLACES            PHASE
amqstreams.v1.8.4   Red Hat Integration - AMQ Streams   1.8.4     amqstreams.v1.8.3   Succeeded
- From the bastion host, create the YAML file kafka-cluster.yaml,
which contains the Kafka resource that describes the configuration of the Kafka cluster:
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: "maskafka"
  namespace: "kafka"
spec:
  # -------------------------------------------------------
  kafka:
    version: 2.7.0
    replicas: 3
    resources:
      requests:
        memory: 4Gi
        cpu: "1"
      limits:
        memory: 4Gi
        cpu: "2"
    jvmOptions:
      -Xms: 3072m
      -Xmx: 3072m
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: "2.7"
      log.retention.hours: 24
      log.retention.bytes: 1073741824
      log.segment.bytes: 268435456
      log.cleaner.enable: true
      log.cleanup.policy: delete
      auto.create.topics.enable: false
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          class: "ocs-storagecluster-ceph-rbd"
          size: 100Gi
          deleteClaim: true
    authorization:
      type: simple
    listeners:
      - name: tls
        port: 9094
        type: route
        tls: true
        authentication:
          type: scram-sha-512
  # -------------------------------------------------------
  zookeeper:
    replicas: 3
    resources:
      requests:
        memory: 1Gi
        cpu: "0.5"
      limits:
        memory: 1Gi
        cpu: "1"
    jvmOptions:
      -Xms: 768m
      -Xmx: 768m
    storage:
      type: persistent-claim
      class: "ocs-storagecluster-ceph-rbd"
      size: 10Gi
      deleteClaim: true
  # -------------------------------------------------------
  entityOperator:
    userOperator: {}
    topicOperator: {}
Ensure that you modify the specified storage class ocs-storagecluster-ceph-rbd to use a supported storage class for your cluster.
- Apply the kafka-cluster.yaml file to the Red Hat OpenShift Container Platform cluster:
oc apply -f kafka-cluster.yaml
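Creation of the cluster can take several minutes. Optionally, to block until the Kafka custom resource reports Ready, a command such as the following can be used; the fully qualified resource name avoids the ambiguity that is noted in the next step:
oc wait kafkas.kafka.strimzi.io/maskafka --for=condition=Ready --timeout=600s -n kafka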
- Verify that the Kafka cluster was successfully deployed and that the Kafka custom resource (CR) is in the Ready state. The Kafka CR in the following command is fully qualified with its API group name, kafkas.kafka.strimzi.io, to avoid ambiguity with the Kafka CR that is provided by kafkas.ibmevents.ibm.com:
oc get kafkas.kafka.strimzi.io -n kafka
Sample output:
NAME       DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS   READY   WARNINGS
maskafka   3                        3                     True
- From the bastion host, create the YAML file kafka-user.yaml. The file
contains the KafkaUser resource that describes the configuration of the Kafka user that is used by
Maximo Application Suite to authenticate connections to Kafka:
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: "maskafkauser"
  labels:
    strimzi.io/cluster: "maskafka"
  namespace: "kafka"
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      - host: '*'
        operation: All
        resource:
          name: '*'
          patternType: literal
          type: topic
      - host: '*'
        operation: All
        resource:
          name: '*'
          patternType: literal
          type: group
      - host: '*'
        operation: All
        resource:
          name: '*'
          patternType: literal
          type: cluster
      - host: '*'
        operation: All
        resource:
          name: '*'
          patternType: literal
          type: transactionalId
- Apply the kafka-user.yaml file to the Red Hat OpenShift Container Platform cluster:
oc apply -f kafka-user.yaml
- Verify that the User Operator created the maskafkauser secret:
oc get secret maskafkauser -n kafka
Sample output:
NAME           TYPE     DATA   AGE
maskafkauser   Opaque   2      2m14s
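If you need the raw password rather than the full JAAS string, the password key of the same secret can also be decoded directly; for example:
oc get secret maskafkauser -n kafka -o jsonpath='{.data.password}' | base64 -d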
- Get the Kafka information.
- Get the Kafka host and port.
oc get Kafka.kafka.strimzi.io maskafka -o jsonpath="{.status.listeners[0].addresses[0]}"
Sample output:{"host":"maskafka-kafka-tls-bootstrap-kafka.apps.cluster1.example-cluster.com","port":443}
- Get the Kafka CA certificate.
oc get Kafka.kafka.strimzi.io maskafka -n kafka -o jsonpath="{.status.listeners[0].certificates[0]}"
Sample output:
-----BEGIN CERTIFICATE-----
MIIFLTCCAxWgAwIBAgIUTExUl2XrdIPy6vZAtk9toGh2jbEwDQYJKoZIhvcNAQEN
BQAwLTETMBEGA1UECgwKaW8uc3RyaW16aTEWMBQGA1UEAwwNY2x1c3Rlci1jYSB2
MDAeFw0yMjA1MTEyMTAyMzFaFw0yMzA1MTEyMTAyMzFaMC0xEzARBgNVBAoMCmlv
LnN0cmltemkxFjAUBgNVBAMMDWNsdXN0ZXItY2EgdjAwggIiMA0GCSqGSIb3DQEB
AQUAA4ICDwAwggIKAoICAQDh6bYIudhZQ1/rR9IgSb7pzqTvtRiNOvzmnZPdtVtT
q7lNLytPqpR6uuCIrhpuR0CPb++Rvjp2QrWgXr5VWBktT1MLk8WzDfX3+qxd5xC8
B00EKneBZkhohxBdb0co8ipxDpQAFTy+SeXhuROd5vwLEuh3OJeZMEUfTcNfUbvo
J/IHUIGeDmhK//DumQE79z3vfLc2EcQgenMo0VoBy4ooQ2o4B7Y3plXHuStvtn6h
lam30rSA+p3nKskrMDDpNKadHtmCrwI/rZZBFYb7DTdUpi69NeW3TEMRXGG3dMdk
YYTdKN0zkB5BTvRx5FC6GX+cz/Uq3SnxlSmWB1DT+2nlnlwzVAgbNdsW4HiDUIdI
FBJyQDqWTH9e7aUv3RzlrT4c995YBTfh1Jdvq5mzneMf6lab7iZoW1hGYQLRRC5y
v8iTycwHd7EEGf/tjGrJ/s5nWPgGv/DEOg95/UvTRz9dZUWRwHCFANd0LaFW/HdF
qkhuiVZOKNXqfr7zxnCw/F+0408+vcR43HKUTwId7vql+F+EgjT69U5pDF4sh6ep
SgLTHoCGd/bekq5HHkrylCOty+ZU9EEWp4fQD+wN3RzGxJ080AA3RjkqsXmHbd5e
aXlnhDB68mWpoHFuJ6YciNBBXlC/2HhDeR7PiMD9Zj0/7A3UHZj4hHXcSQoCnSW7
mwIDAQABo0UwQzAdBgNVHQ4EFgQU6yQKlZ+FEJyMkjsPxhmHERps1vgwEgYDVR0T
AQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAQYwDQYJKoZIhvcNAQENBQADggIB
AEfcrS4I2xsbTuULMtHlOGLgv7Mo+aJ8Os+vCE+MvSMVrsSvslVnigzE6aSvi7Ys
TTpstmAhIfOcEEqldRa5GcG6Az6NWlbskZXfftojWtjnZevkuRnn/xICdizX+mj4
A3WL/GOVpTAWVUa5+lUh1AzFWhBw5kDvMxHyQhmpegt98ptxNpj5n9cHSWwJpjXl
boNil+Y5kA4raWGa6gEOE0lwmLyS5pjOWCTCTD2MvldNakYPMqObVPE4DNia4qal
huxOyxdr51KNBc7yVgQ1Fa7ZD+rF1a6aa6GwvwAKYNoxd7VW7fmZBSckpuWer9+R
YCVvgE2a4vLnc5zLFwOfhjqaZSiIx0PMEmkHx1ZTriVg0GVZ8beU+I9BxUQsJyJU
S4z9UaHexmYu/YRAQXKODw1xhqqR6oW2+CXYrtUvzN6kamFh8jN3AKf4PKA+TmjL
maW0M7FVp+0Erne59hBcZhKG0QYx4AkjCwKclRwDBxXcBTcmXduDFeGzLub0napJ
Uczo2zURQ7L6qPew9Guh0O1dnGp+kgi8T8kt/DniMvQBWDK3GvFi0A5mVjLQqMHQ
HvAPzshx7Si1O45hepGK4fxQMcwAHw6c1V3j10R8RHh7bckld5mJ5Nh/BjZhk/LK
N5Klfwoek0QSVAXQfnX1YtJfrHfz5+TYx0NnYTcgX6fE
-----END CERTIFICATE-----
- Get the Kafka username and password.
oc extract secret/maskafkauser -n kafka --keys=sasl.jaas.config --to=-
- Get the SASL mechanism.
oc get Kafka.kafka.strimzi.io maskafka -n kafka -o jsonpath='{.spec.kafka.listeners[0].authentication}' | jq -r .
Sample output:{ "type": "scram-sha-512" }
Note: The SASL mechanism is SCRAM-SHA-512.