This example deploys two queue managers using the native high availability cross-region
replication (Native HA CRR) feature into Kubernetes, using YAML resources directly.
Before you begin
To complete this example, you must first have completed the following prerequisites:
- Create a Kubernetes namespace in each of the clusters where the Live and Recovery groups will be
deployed.
- On the command line, be able to log in to each Kubernetes cluster and switch to the relevant
namespace (example commands are shown after this list).
- Ensure that you have followed the steps in Preparing to use Kubernetes by creating a pull secret in each of these
namespaces.
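For example, on each cluster in turn, a namespace could be created and selected with commands similar to the following; the namespace name <live-qm-namespace> is a placeholder (use <recovery-qm-namespace> on the Recovery cluster):
kubectl create namespace <live-qm-namespace>
kubectl config set-context --current --namespace=<live-qm-namespace>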
About this task
This example deploys two queue managers using the Native HA CRR feature into Kubernetes using
YAML resources directly. One of the Native HA queue managers acts as the Live group and the other as the
Recovery group. This example builds upon the YAML files created in Example: Configuring Native HA in Kubernetes,
and describes the modifications required on top of the Native HA definitions for the ConfigMap,
Service, StatefulSet, and Route.
Procedure
1. Create a new ConfigMap, or define a new .ini data definition within the existing ConfigMap
(see step 2 in the Native HA example), to define the INI file for a Live or a Recovery queue
manager.
Create a Kubernetes ConfigMap containing the definitions to configure the following groups:
- A Native HA CRR group configured as the Recovery group:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-nativeha-configmap
data:
  ...
  example-crr.ini: |
    NativeHALocalInstance:
      GroupName=beta
      GroupRole=Recovery
      GroupLocalAddress=(9415)
      GroupCipherSpec=ANY_TLS12_OR_HIGHER
When the Recovery group is deployed, the replication address of that group can be determined and
provided in the ConfigMap for the Live group. For example, this can be done by issuing the following
command against the cluster where the Recovery group is deployed:
kubectl get route exampleqm-ibm-mq-crr-route -n <recovery-qm-namespace> -o jsonpath="{.spec.host}"
- A Native HA CRR group configured as the Live group (pointing to the Recovery group):
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-nativeha-configmap
data:
  ...
  example-crr.ini: |
    NativeHALocalInstance:
      GroupName=alpha
      GroupRole=Live
      GroupLocalAddress=(9415)
      GroupCipherSpec=ANY_TLS12_OR_HIGHER
    NativeHARecoveryGroup:
      Enabled=Yes
      GroupName=beta
      ReplicationAddress=<address.to.recovery.cluster>(443)
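As a minimal sketch (the file names are assumptions for illustration), each ConfigMap can then be applied to the cluster that hosts the corresponding group:
kubectl apply -f nativeha-crr-configmap-recovery.yaml -n <recovery-qm-namespace>
kubectl apply -f nativeha-crr-configmap-live.yaml -n <live-qm-namespace>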
2. Expose an additional Service ClusterIP port for the Native HA CRR traffic to flow into
the CRR group. This can be done by creating a new YAML definition, or by extending the Service
definition in step 3 of the Native HA example:
apiVersion: v1
kind: Service
metadata:
  ...
  name: exampleqm-ibm-mq
spec:
  ports:
    ...
    - name: ha-crr
      port: 9415
      protocol: TCP
      targetPort: 9415
  ...
  selector:
    ...
  type: ClusterIP
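A minimal sketch of applying and checking the updated Service on each cluster (the file name is an assumption); the ha-crr port on 9415 should appear in the ports list of the output:
kubectl apply -f exampleqm-service.yaml -n <live-qm-namespace>
kubectl get service exampleqm-ibm-mq -n <live-qm-namespace> -o yaml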
3. Define a new Route into the cluster so that the other Native HA CRR group can reach this
Native HA CRR group, with TLS ‘passthrough’ enabled to the new Service ClusterIP
port defined in step 2:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: "exampleqm-ibm-mq-crr-route"
spec:
  to:
    kind: Service
    name: exampleqm-ibm-mq
  port:
    targetPort: 9415
  tls:
    termination: passthrough
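As a sketch (the file name is an assumption), the Route can be applied on each cluster, and the resulting host name, retrieved as in step 1, is the value to use as the ReplicationAddress for the other group:
kubectl apply -f exampleqm-crr-route.yaml -n <recovery-qm-namespace>
kubectl get route exampleqm-ibm-mq-crr-route -n <recovery-qm-namespace> -o jsonpath="{.spec.host}"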
4. Modify the StatefulSet created in step 6 of the Native HA example, to ensure that the
following requirements are met:
- The new ConfigMap INI definition for Native HA CRR is mounted so that the queue
manager can pick it up at startup:
...
volumeMounts:
  ...
  # Mount the NativeHA CRR INI file to be applied when the queue manager starts
  - mountPath: /etc/mqm/example-crr.ini
    name: cm-example-nativeha-configmap
    readOnly: true
    subPath: example-crr.ini
...
volumes:
  ...
  - configMap:
      defaultMode: 420
      items:
        ...
        - key: example-crr.ini
          path: example-crr.ini
      name: example-nativeha-configmap
    name: cm-example-nativeha-configmap
- Communication between the two Native HA CRR groups must be encrypted. Therefore, the TLS
secret key for the current Native HA CRR group, and the trust certificate for the other Native HA
CRR group, must be defined and mounted under the /etc/mqm/groupha directory (a sketch of creating
these secrets follows the YAML below).
Using the group names alpha (for Live) and beta (for Recovery), the example modifications to the
StatefulSet YAML for the Live group would be:
...
volumeMounts:
  ...
  - mountPath: /etc/mqm/groupha/pki/keys/ha-group
    name: groupha-tls
    readOnly: true
  - name: groupha-tls-trust-0
    mountPath: /etc/mqm/groupha/pki/trust/0
    readOnly: true
...
volumes:
  ...
  - name: groupha-tls
    secret:
      secretName: nha-crr-secret-alpha
      defaultMode: 288
  - name: groupha-tls-trust-0
    secret:
      secretName: nha-crr-secret-beta
      defaultMode: 420
      items:
        - key: tls.crt
          path: native-crr-nha-crr-secret-beta-0-tls.crt
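The secrets referenced above must exist in the namespace before the StatefulSet is applied. A minimal sketch of creating them for the Live group, assuming the certificate and key files alpha.crt, alpha.key, and beta.crt have already been generated (the file names are assumptions):
kubectl create secret tls nha-crr-secret-alpha --cert=alpha.crt --key=alpha.key -n <live-qm-namespace>
kubectl create secret generic nha-crr-secret-beta --from-file=tls.crt=beta.crt -n <live-qm-namespace>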
5. Confirm that the Native HA CRR groups are running, in sync, connected, and in quorum, by
using the following dspmq command on the Ready and Running pods of the Live and
Recovery queue manager instances:
kubectl exec -t <qmgr_pod_name> -- dspmq -o nativeha -g -m EXAMPLEQM
The status should show that the two Native HA CRR groups are connected and acting as the Live and
Recovery groups, for example:
QMNAME(EXAMPLEQM) ROLE(Active) INSTANCE(testmqqm-ibm-mq-0) INSYNC(yes) QUORUM(3/3) GRPLSN(<0:0:10:30665>) GRPNAME(alpha) GRPROLE(Live)
GRPNAME(alpha) GRPROLE(Live) GRPADDR(Unknown) GRPVER(9.4.2.0) GRSTATUS(Normal) RCOVLSN(<0:0:10:30665>) RCOVTIME(2025-02-17T16:25:16.527089Z) INITLSN(<0:0:9:4684>) INITTIME(2025-02-17T16:25:01.226650Z) LIVETIME(2025-02-17T16:25:02.974677Z) ALTDATE(2025-02-17) ALTTIME(16.25.19)
GRPNAME(beta) GRPROLE(Recovery) GRPADDR(<address.to.recovery.cluster>) GRPVER(9.4.2.0) CONNGRP(yes) GRSTATUS(Normal) RCOVLSN(<0:0:10:30665>) RCOVTIME(2025-02-17T16:25:16.527089Z) BACKLOG(0) INSYNC(yes) SYNCTIME(2025-02-17T16:29:38.643916Z) ALTDATE(2025-02-17) ALTTIME(16.29.29)
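The same check can be repeated against a Ready and Running pod of the Recovery queue manager, which should report its local group (beta) in the Recovery role; the pod name and namespace are placeholders:
kubectl exec -t <recovery_qmgr_pod_name> -n <recovery-qm-namespace> -- dspmq -o nativeha -g -m EXAMPLEQM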
Results
You have successfully deployed two queue manager groups with native high availability and TLS
authentication within and between the groups. You have also verified that the Live and Recovery
groups are connected and performing cross-region replication.