Each cluster environment is different, and it follows that each cluster has to be secured in a different way. For example, a bank's IBM MQ environment might have one or more queue managers for each of its branch offices. Because the same kinds of transactions occur in those branch offices, all branch office queue managers can be clustered to access the applications running in the central office. In this kind of environment, all the cluster queue managers are designed for the same purpose and belong to the same company, so they can trust each other. Once an application transaction enters the cluster through any of the cluster queue managers, MQ might not need to re-check authorization at each level. It therefore becomes necessary to perform access control before the transaction enters the MQ cluster, but not between the queue managers within the cluster.
Let’s consider another MQ environment, one with clustered queue managers for different departments in the same organization. In such a scenario, the applications connecting to each queue manager in the cluster will not be the same. Some queue managers in the cluster might belong to a department, such as payroll, that demands more security for its data. Such cluster queue managers need to validate their incoming transactions locally before allowing them in, even when that data comes through other queue managers in the same cluster.
One very powerful feature of an MQ cluster is clustered queues, which are queues that are accessible from all queue managers in the cluster. Considering the two cluster environments above, which demand different levels of security, it becomes necessary to secure cluster queues at different levels based on the business need.
This article describes an environment with three queue managers, QM1, QM2, and QM3, in an MQ cluster named CLUS. In this setup (Figure 1), QM1 and QM2 are configured as full repositories and QM3 as a partial repository. CQL1 and CQL2 are cluster queues defined in QM3 and visible from all other queue managers in the cluster. The user user1 connects to QM1 and tries to put messages to the cluster queues CQL1 and CQL2.
Figure 1. MQ cluster environment
In both QM1 and QM2, the queue manager's REPOS attribute is set to the cluster name to make them full repositories, and cluster channels are defined between the two queue managers. QM3 is added to the cluster as a partial repository by creating a cluster sender channel to QM1, one of the full repositories. Listed below are the definitions of the MQ objects created in each queue manager to group them into the cluster CLUS.
Listing 1. QM1 definition
ALTER QMGR REPOS(CLUS)
DEFINE CHL(TO.QM1) CHLTYPE(CLUSRCVR) CONNAME('LOCALHOST(1444)') CLUSTER(CLUS)
DEFINE CHL(TO.QM2) CHLTYPE(CLUSSDR) CONNAME('LOCALHOST(1445)') CLUSTER(CLUS)
Listing 2. QM2 definition
ALTER QMGR REPOS(CLUS)
DEFINE CHL(TO.QM2) CHLTYPE(CLUSRCVR) CONNAME('LOCALHOST(1445)') CLUSTER(CLUS)
DEFINE CHL(TO.QM1) CHLTYPE(CLUSSDR) CONNAME('LOCALHOST(1444)') CLUSTER(CLUS)
Listing 3. QM3 definition
DEFINE CHL(TO.QM3) CHLTYPE(CLUSRCVR) CONNAME('LOCALHOST(1446)') CLUSTER(CLUS)
DEFINE CHL(TO.QM1) CHLTYPE(CLUSSDR) CONNAME('LOCALHOST(1444)') CLUSTER(CLUS)

Two local queues, CQL1 and CQL2, are created as cluster queues in QM3 to make them accessible from anywhere in the cluster:

DEFINE QL(CQL1) CLUSTER(CLUS)
DEFINE QL(CQL2) CLUSTER(CLUS)
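At this point, one way to confirm that the cluster has formed and the queues are visible is to run a couple of DISPLAY commands in runmqsc on QM1 (the object names match the setup above):

DISPLAY CLUSQMGR(*) QMTYPE
DISPLAY QCLUSTER(CQL*) CLUSTER

The first command should list QM1, QM2, and QM3 with their repository types, and the second should show the cluster queues CQL1 and CQL2 hosted on QM3.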
You now have two cluster queues, both defined in the same queue manager. Let’s take a scenario where a user, user1, should be able to put messages to CQL1 but should never be able to put messages to CQL2. The scenarios below describe how you can achieve this through remote and local queue manager authorization in a cluster.
Scenario 1: User authorized by the remote queue manager
Let’s consider a secure MQ cluster environment in which connections from all departments are controlled at the entry point of the cluster, and the queue managers within the cluster authenticate each other through SSL or a channel exit program defined in their cluster channels. This will make the connection between cluster queue managers more secure and trustworthy.
In such setups, a local queue manager (QM3) can trust and rely on the access control of the remote queue manager (QM1) to which the user is actually connected. However, access to any MQ object is controlled by the queue manager on which it resides.
Remote authorization before MQ V7.1
Prior to MQ V7.1, the queue manager makes authority checks on the local object that represents the corresponding remote object. For cluster queues, access checks are performed on SYSTEM.CLUSTER.TRANSMIT.QUEUE. However, granting access to the cluster transmit queue allows the user to access all cluster queues in that cluster. To avoid this, and to grant the user access to only specific cluster queues, you need an alternate local object for each cluster queue. Alias queues serve this purpose.
Creating an alias queue (QA.CQL1) in QM1 that targets CQL1 enables QM1 to control access to the cluster queue CQL1. Granting user1 access to that alias queue then enables user1 to put messages to CQL1.
Listing 4. Putting messages to cluster queue
DEFINE QA(QA.CQL1) TARGET(CQL1)

setmqaut -m QM1 -t qmgr -p user1 +connect +inq
setmqaut -m QM1 -n QA.CQL1 -t queue -p user1 +put
With this setup, user1 will be able to connect to QM1 and put messages to CQL1 in QM3 through the QA.CQL1 queue. Here, user1 is authorized when he opens the alias queue QA.CQL1, to which he has been granted access. The access level for user1 is not checked when messages are put into the local queue CQL1 in QM3, because user1 did not open CQL1 directly. Since no local object was created in QM1 to control access to CQL2, user1 cannot put messages to the CQL2 queue.
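You can verify the grants from the command line with the dspmqaut control command (the object names match the setup above):

dspmqaut -m QM1 -n QA.CQL1 -t queue -p user1

The command reports the authorizations the principal holds on the object; after the setmqaut commands above, it should list put access for user1 on QA.CQL1.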
In the above setup, user1 had no access to QM1's cluster transmit queue. Access control was done using alias queues pointing to cluster queues (Figure 2).
Figure 2. Access control using alias queues
Remote authorization in MQ v7.1 and later
In MQ V7.1 and later, default security checks for cluster queues are still done on the cluster transmit queue. But by adding the security stanza below to qm.ini, you can configure the queue manager to perform authority checks for remote queues and remote queue managers, using locally created security profiles for them.
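The stanza in question sets the ClusterQueueAccessControl attribute, which is named later in this article; in qm.ini it looks like this:

Security:
   ClusterQueueAccessControl=RQMName

The default value is Xmitq, which preserves the pre-V7.1 behavior of checking the cluster transmit queue.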
Update QM1 queue manager's qm.ini file with the above security stanza and restart the queue manager. Now, the security checks will no longer be done on the cluster transmit queue. If user1 tries to open the CQL1 cluster queue by connecting to QM1, the queue manager will look for a security profile for the named queue CQL1. This can be set using either of these commands:
setmqaut -m QM1 -t queue -n CQL1 -p user1 +put
SET AUTHREC OBJTYPE(QUEUE) PROFILE(CQL1) PRINCIPAL('user1') AUTHADD(PUT)
If user1 instead opens a fully qualified cluster queue, passing both the queue name (CQL1) and the queue manager name (QM3) while connected to QM1, the queue manager will look for a security profile for the remote queue manager QM3, which can be set up using either of these commands:
setmqaut -m QM1 -t rqmname -n QM3 -p user1 +put
SET AUTHREC OBJTYPE(RQMNAME) PROFILE(QM3) PRINCIPAL('user1') AUTHADD(PUT)
Depending on how the incoming connection opens the queue, setting a security profile for either CQL1 or QM3 permits user1 to access the cluster queue CQL1 without creating any alias queue in QM1. Note, however, that the security profile for the remote queue manager grants user1 put access to all queues and topics in QM3.
Scenario 2: User authorized by the local queue manager
In the above scenario, user1 was authorized by the QM1 queue manager to participate in the cluster. However, the queue manager QM3, in which CQL1 and CQL2 physically exist, did not get a chance to grant or deny user1 access to its queues CQL1 and CQL2.
Suppose you have a cluster environment in which one of the applications is local to the organization and needs to communicate extensively with all departments in the organization. In such cases, the queue manager to which the application connects will grant the application full access to participate in cluster activities. In such environments, a cluster queue manager cannot rely on other cluster queue managers for application access control, because some queue managers in the same cluster, with business-critical data flowing through them, will want to perform their own security checks before allowing any incoming transactions.
In this example, if the application (user1) is trustworthy and has to access all cluster queues in the cluster, the remote queue manager (QM1) will permit the user ID to put messages to its SYSTEM.CLUSTER.TRANSMIT.QUEUE. In that case, user1 will be granted access to put messages to all cluster queues in that cluster.
This command will grant user1 +put access to QM1's cluster transmit queue (SYSTEM.CLUSTER.TRANSMIT.QUEUE):
setmqaut -m QM1 -n SYSTEM.CLUSTER.TRANSMIT.QUEUE -t queue -p user1 +put
Or, if ClusterQueueAccessControl=RQMName is set in QM1's qm.ini, QM1 can create a security profile for QM3 that permits user1 to put messages to all queues and topics in QM3.
It now becomes the responsibility of the local queue manager (QM3) to locally secure its queues against any such applications putting messages to them.
Now, if user1 tries to put messages to both CQL1 and CQL2, user1 will be able to do so without any difficulty, because QM1 authorized user1 to put messages to all cluster queues. When the connection comes to QM3, QM3 authorizes the user ID of the message channel agent (MCA) that puts messages onto its queues, not the user ID that originally put the messages to the cluster. If the MCAUSER of the cluster channel is empty, the ID that started the MCA is used to authorize the put.
Setting the MCAUSER of the cluster channel to a low-privileged, non-administrative ID, with PUTAUT(DEF), helps to secure message puts to cluster queues. Here, one authorization profile controls all incoming connections through that channel. The cluster relies on the exchange of internal cluster messages, so the user ID specified in MCAUSER must have the rights to connect to the queue manager and to put messages to SYSTEM.CLUSTER.COMMAND.QUEUE.
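As a sketch, assuming a hypothetical low-privileged OS user mcauser1 already exists on the QM3 host, the setup might look like this (the exact set of authorities required on the system queues can vary by MQ release):

ALTER CHL(TO.QM3) CHLTYPE(CLUSRCVR) MCAUSER('mcauser1')

setmqaut -m QM3 -t qmgr -p mcauser1 +connect +inq
setmqaut -m QM3 -n SYSTEM.CLUSTER.COMMAND.QUEUE -t queue -p mcauser1 +put

Restart the channel after changing MCAUSER so the new ID takes effect.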
In cases where different users send messages from remote queue managers, the local queue manager might want to authorize them based on their original user ID, rather than the ID specified in channel MCAUSER. This can be achieved by altering the PUTAUT attribute of CLUSRCVR channel to CTX.
With PUTAUT set to CTX, the local queue manager will authorize the incoming connection based on the UserIdentifier of incoming messages.
In your cluster setup, alter the PUTAUT attribute of QM3's CLUSRCVR channel to CTX:
ALTER CHL(TO.QM3) CHLTYPE(CLUSRCVR) PUTAUT(CTX)
Restart the cluster receiver channel for the above change to take effect. Alter the queue manager QM3 to use a dead letter queue to handle undelivered messages:
ALTER QMGR DEADQ(QM3.DEADQ)
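Note that the dead letter queue named here must exist as a local queue on QM3; if it has not been created yet, define it first:

DEFINE QL(QM3.DEADQ)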
Now, when user1 tries to put messages from QM1 to CQL1 and CQL2, you can see the messages ending up in QM3.DEADQ with the dead letter header reason MQRC_NOT_AUTHORIZED. QM3 takes the user identifier of each message and, based on that ID's access level, routes the message either to the corresponding local queue or to the dead letter queue.
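To drive this test, you can use the amqsput sample program shipped with MQ, passing the target queue name and the queue manager to connect to (here, putting to CQL1 by connecting to QM1 as user1):

amqsput CQL1 QM1

Type one or more message lines, then press Enter on a blank line to end the sample. Repeat with CQL2 as the target to see those messages routed to the dead letter queue.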
Figure 3. Message routing based on identifier
Let’s grant access to user1 to put messages to only CQL1 in QM3:
setmqaut -m QM3 -n CQL1 -t queue -p user1 +put
Now run the above test again by sending messages from QM1 to CQL1 and CQL2. You will see messages targeted to CQL1 end up correctly in the queue, but the messages targeted to CQL2 will still end up in QM3.DEADQ.
Here, all the messages arriving with the user identifier user1 go to QM3. While putting messages to the corresponding queues, QM3 authorizes user1 to put messages only to CQL1 and denies access to CQL2. Those messages then end up in the dead letter queue with an authorization error.
In both scenarios, the ID of the incoming connection must be defined at the operating system level so that it can be authorized.
If PUTAUT(DEF) is used and MCAUSER is left empty in a homogeneous environment, the user ID of the connection is usually the same everywhere, so access rights are not normally a problem. For example, in UNIX® environments, an mqm ID is defined on all queue managers and is used to start the repository manager process. Since it has MQ administration rights by default, cluster messages pass between the queue managers without any access issue. However, in a heterogeneous environment, the queue manager's user ID must be defined on all platforms and have rights to connect to the queue managers and to put messages to SYSTEM.CLUSTER.COMMAND.QUEUE. For example, a cluster with one queue manager on UNIX and one on Windows® NT has queue managers running, by default, under the IDs mqm and MUSR_MQADMIN, respectively. In this case, you need to create the ID MUSR_MQADMIN on the UNIX server and the ID mqm on the Windows server, and grant them access to connect to the queue manager and to put messages to the cluster command queue.
If MCAUSER is set to a non-privileged ID, that ID should be created at the operating system level with the access described above. If PUTAUT is set to CTX, the IDs of all incoming messages should be created at the operating system level for authentication, and be given access to put messages to the corresponding queues on the queue manager.
Hence, a large cluster environment with multiple platforms will require additional administration because all systems in the queue manager cluster must have all required IDs and MQ permissions set.
This article described how you can use remote and local authorization to secure a cluster queue from unauthorized users or applications putting messages onto it. It also explained how to test the setup by using amqsput programs and by defining a dead letter queue.