Connecting to a queue manager deployed in an OpenShift cluster
A set of configuration examples for connecting to a queue manager deployed in a Red Hat® OpenShift® cluster.
About this task
You need an OpenShift Route to connect an application to an IBM® MQ queue manager from outside a Red Hat OpenShift cluster.
You must enable TLS on your IBM MQ queue manager and client application, because Server Name Indication (SNI) is only available in the TLS protocol. The Red Hat OpenShift Container Platform Router uses SNI for routing requests to the IBM MQ queue manager.
The required configuration of the OpenShift Route depends on the SNI behavior of your client application.
To set the SNI header, a CipherSpec or CipherSuite based on TLS 1.2 or higher must be used for your TLS communication.
The SNI is set to the MQ channel name if the following conditions are met:
- The IBM MQ C Client is V8 or later.
- The Java/JMS Client is V9.1.1 or later, and the Java installation supports the javax.net.ssl.SNIHostName class.
- The .NET Client is in unmanaged mode.
The SNI is set to the host name if any of the following conditions are met:
- The .NET Client is in managed mode.
- The AMQP or XR client is used.
- The Java/JMS Clients are used with AllowOutboundSNI set to NO.
- The IBM MQ C Client is V7.5 or earlier.
- The IBM MQ C Client is used with AllowOutboundSNI set to NO.
- The Java/JMS Clients are used with a Java installation that does not support the javax.net.ssl.SNIHostName class.
Example
Host name based OpenShift Routes: For client applications that set the SNI to the host name
The following Helm charts create a host name based OpenShift Route when deploying a queue manager:
- ibm-mqadvanced-server-dev
- ibm-mqadvanced-server-prod
- ibm-mqadvanced-server-integration-prod (in IBM Cloud Pak® for Integration)
If your client application sets the SNI to the host name, but your deployment did not create a host name based Route, create one by applying the following yaml in your cluster:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: <provide a unique name for the Route>
  namespace: <namespace of your MQ deployment>
spec:
  to:
    kind: Service
    name: <name of the Kubernetes Service for your MQ deployment (for example "<Helm Release>-ibm-mq")>
  port:
    targetPort: 1414
  tls:
    termination: passthrough
```
MQ channel based OpenShift Routes: For client applications that set the SNI to the MQ channel
Client applications that set the SNI to the MQ channel require a new OpenShift Route to be created for each channel you wish to connect to. You also have to use unique channel names across your Red Hat OpenShift cluster, to allow routing to the correct queue manager.
To determine the required host name for each of your new OpenShift Routes, you need to map each channel name to an SNI address as documented here: https://www.ibm.com/support/pages/ibm-websphere-mq-how-does-mq-provide-multiple-certificates-certlabl-capability
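As a convenience, the channel-name-to-SNI mapping can be sketched in code. The sketch below assumes the rule described on the linked support page (uppercase letters are lowercased; every other character is replaced by its two-digit lowercase hexadecimal code followed by a hyphen; the suffix `.chl.mq.ibm.com` is appended) — verify the rule against that page for your MQ version before relying on it. The channel name used is a hypothetical example.

```python
def channel_to_sni_address(channel_name: str) -> str:
    """Map an MQ channel name to an SNI address.

    Assumed rule: uppercase letters become lowercase; any other
    character is replaced by its two-digit lowercase hex code followed
    by a hyphen; the suffix ".chl.mq.ibm.com" is appended.
    """
    parts = []
    for ch in channel_name:
        if "A" <= ch <= "Z":
            parts.append(ch.lower())
        else:
            parts.append(format(ord(ch), "02x") + "-")
    return "".join(parts) + ".chl.mq.ibm.com"

# Hypothetical channel name, for illustration only:
print(channel_to_sni_address("SSL.SVRCONN"))
# Under the rule above, "." (0x2e) becomes "2e-", giving
# "ssl2e-svrconn.chl.mq.ibm.com"
```

The result is the value to place in the `host` field of the corresponding OpenShift Route.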
Create each of these new OpenShift Routes by applying the following yaml in your cluster:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: <provide a unique name for the Route>
  namespace: <the namespace of your MQ deployment>
spec:
  host: <SNI address mapping for the channel>
  to:
    kind: Service
    name: <the name of the Kubernetes Service for your MQ deployment (for example "<Helm Release>-ibm-mq")>
  port:
    targetPort: 1414
  tls:
    termination: passthrough
```
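When you connect several channels, filling in one Route manifest per channel by hand is error prone. The following sketch renders a channel based Route manifest from a template matching the yaml shown above; the Route name, namespace, SNI host, and Service name passed in are hypothetical placeholders for your own values.

```python
# Template matching the channel based Route yaml shown above.
ROUTE_TEMPLATE = """\
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: {name}
  namespace: {namespace}
spec:
  host: {host}
  to:
    kind: Service
    name: {service}
  port:
    targetPort: 1414
  tls:
    termination: passthrough
"""

def route_manifest(name: str, namespace: str, sni_host: str, service: str) -> str:
    """Render a Route manifest for one channel's SNI address."""
    return ROUTE_TEMPLATE.format(
        name=name, namespace=namespace, host=sni_host, service=service
    )

# Hypothetical values, for illustration only:
print(route_manifest("qm1-ssl-svrconn", "mq-namespace",
                     "ssl2e-svrconn.chl.mq.ibm.com", "qm1-ibm-mq"))
```

The printed manifest can be saved to a file and applied with `oc apply -f`.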
Configuring your client application connection details
The host name to use for your client connection is the host name of the OpenShift Route, which you can find with the following command:

```
oc get route <Name of hostname based Route (for example "<Helm Release>-ibm-mq-qm")> -n <namespace of your MQ deployment> -o jsonpath="{.spec.host}"
```
The port for your client connection should be set to the port used by the OpenShift Container Platform (OCP) Router - normally 443.
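For a quick test with a simple client, the Route host name and Router port can be combined into an MQSERVER-style connection string (ChannelName/TransportType/ConnectionName). A minimal sketch, assuming the standard MQSERVER format; the channel and host names used are hypothetical examples.

```python
def mqserver_value(channel: str, route_host: str, port: int = 443) -> str:
    """Build an MQSERVER environment variable value in the form
    ChannelName/TransportType/ConnectionName(port)."""
    return f"{channel}/TCP/{route_host}({port})"

# Hypothetical channel and Route host, for illustration only:
print(mqserver_value("SSL.SVRCONN", "qm1-ibm-mq-qm.apps.example.com"))
# → SSL.SVRCONN/TCP/qm1-ibm-mq-qm.apps.example.com(443)
```

Note that MQSERVER alone does not configure TLS; your client still needs a key repository and a CipherSpec matching the server-connection channel, as described above.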