You configure the primary cluster for geo-redundancy as described
here.
Before you begin
Ensure that you have the IP addresses of the worker nodes to be replicated.
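For example, you can list the worker nodes and their IP addresses with a command such as the following. This assumes the default worker node-role label:
oc get nodes -l node-role.kubernetes.io/worker -o wide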
About this task
To configure the primary cluster to securely expose Kafka via load balancers, you complete the
following steps:
- Configure Kafka for secure connections.
- Expose Kafka via load balancer services to match the HA proxy configuration.
- Verify access.
Procedure
- Configure Kafka for secure connections.
- On the primary cluster, import a CA certificate and key. (In the following examples, these are myCertificate.crt and myCertificate.key.)
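If you do not already have a suitable CA, you can generate a self-signed certificate and key pair for test purposes with openssl, for example (the subject name is an arbitrary example):
openssl req -x509 -newkey rsa:4096 -nodes -days 365 -subj "/CN=noi-geo-ca" -keyout myCertificate.key -out myCertificate.crt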
- Create a secret to store the CA certificate and key that are used to sign the service certificates and that must be trusted in both clusters. Any name can be used for the secret, but the name must be given as global.internalCaCertificate.secretName in the values file.
oc create secret tls ca-cert --cert=./myCertificate.crt --key=./myCertificate.key
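You can confirm that the secret was created and contains the expected tls.crt and tls.key entries, for example:
oc describe secret ca-cert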
- Create a secret to store the Kafka client credentials, which are needed in both clusters. Any name can be used for the secret, but the name must be given as global.kafka.clientUserSecret in the values file.
oc create secret generic kafka-client-credentials --from-literal=username=kafkaClient --from-literal=password=clientPassword
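To confirm that the stored credentials are correct, you can decode them from the secret, for example:
oc get secret kafka-client-credentials -o jsonpath='{.data.username}' | base64 --decode
oc get secret kafka-client-credentials -o jsonpath='{.data.password}' | base64 --decode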
- Configure the Kafka broker to use these values. Add the following to the helmValuesNOI section of the NOI chart:
global.internalCaCertificate.secretName: ca-cert
global.kafka.clientUserSecret: kafka-client-credentials
- Expose Kafka via load balancer services. The following values are required in noi-helm to expose secured and unsecured brokers with an OCP load balancer:
ibm-hdm-analytics-dev.kafka.externalAccess.hosts:
- kafka.apps.event-manager.cp.xyz.com
ibm-hdm-analytics-dev.kafka.externalAccess.externalIPs:
- 10.22.41.144
- 10.22.41.173
- 10.22.42.144
- 10.22.42.173
- 10.22.43.144
- 10.22.44.144
ibm-hdm-analytics-dev.kafka.externalAccess.securePorts:
- 19093
- 19095
- 19097
ibm-hdm-analytics-dev.kafka.externalAccess.serviceLabels:
- <kafka-1-ext>: <label1>
- <kafka-2-ext>: <label2>
- <kafka-3-ext>: <label3>
ibm-hdm-analytics-dev.kafka.externalAccess.serviceType: LoadBalancer
- Where externalAccess.hosts is the host to use in advertised listeners. This value is the key flag that determines whether OCP creates new external services. When OCP creates the external services as type LoadBalancer, it also requires the other three values: externalIPs, ports, and/or securePorts.
- Where externalAccess.externalIPs are the node IPs on which to expose Kafka (if required).
- Where externalAccess.securePorts are the secure ports to expose in advertised listeners. (Unsecured ports are given in externalAccess.ports.)
- Where serviceLabels defines the labels for the external services.
- Where externalAccess.serviceType is the type of external service to create. It defaults to ClusterIP.
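Before you verify broker access, you can optionally confirm that the advertised listener host resolves as expected, for example to the HA proxy or load balancer address. The host name below is the example used in this topic:
nslookup kafka.apps.event-manager.cp.xyz.com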
- Verify access.
To verify whether the Kafka brokers are exposed successfully, check whether new services with names of the form release-name-kafka-*-ext exist. These services map the advertised ports, as given in the above configuration, to the ports used by the Kafka brokers.
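For example, you can list the external Kafka services and confirm their type, ports, and external IPs with a command such as the following (the grep pattern assumes the default -ext naming):
oc get services | grep 'kafka.*-ext'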
To validate the secured connection, use a Kafka client tool such as kafkacat, as in the following example.
kafkacat -b kafka.apps.event-manager.cp.xyz.com:19093 -X ssl.ca.location=./myCertificate.crt -X security.protocol=SASL_SSL -X sasl.mechanisms=PLAIN -X sasl.username=kafkaClient -X sasl.password=clientPassword -L
Repeat the command for each exposed port, for example 19093, 19095, and 19097, to confirm complete access to the cluster. The certificate and username/password must match those in the secrets that you created earlier.
If the topic list is returned with no error messages, the broker is exposed successfully.
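Optionally, you can also produce and consume a test message over a secured listener to confirm end-to-end connectivity. The topic name test-topic is an example, and this assumes that the topic already exists or that topic auto-creation is enabled:
echo "test message" | kafkacat -b kafka.apps.event-manager.cp.xyz.com:19093 -X ssl.ca.location=./myCertificate.crt -X security.protocol=SASL_SSL -X sasl.mechanisms=PLAIN -X sasl.username=kafkaClient -X sasl.password=clientPassword -t test-topic -P
kafkacat -b kafka.apps.event-manager.cp.xyz.com:19093 -X ssl.ca.location=./myCertificate.crt -X security.protocol=SASL_SSL -X sasl.mechanisms=PLAIN -X sasl.username=kafkaClient -X sasl.password=clientPassword -t test-topic -C -o beginning -e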
What to do next
Next, set up the backup cluster.