Peter Broadhurst Tags: wlm wmq ha ccdt cloud connamelist mq hub architecture clients resilience cluster
I've had a number of conversations recently with customers looking at how to use client connections to connect to multiple MQ queue managers.
In the MQ client workload-management (WLM) section of this article for Java EE, I discuss why I chose to provide a code stub sample to do WLM of outbound message sending, rather than describing how to use a CCDT. However, I realize customers looking to increase the availability characteristics of existing applications might be willing to accept the limitations of CCDT-based approaches to minimize code changes. I also don't mention options such as using load-balancing hardware to create a Virtual IP address (VIP) for multiple queue managers.
So here is an attempt at a balanced comparison of the pros and cons of all the various approaches.
Note: these choices only relate to applications sending messages, or initiating synchronous request/reply messaging. The considerations for applications servicing those messages/requests (e.g. the listeners) are completely separate, and discussed in detail in the "Connecting a message listener to a queue" section of this article.
1 - CCDT (multi-QMGR):
2 - Load balancer:
Avoiding disruption during planned maintenance
There is another consideration not yet discussed, which is how to avoid disruption to applications (errors/timeouts visible to the end users) during planned maintenance of a queue manager. The general approach here is to drain all work from a queue manager before it is stopped.
Think about a request/reply scenario. You want all in-flight requests to complete, and the replies to be processed by the application, but you don't want any additional work to be submitted into the system. Simply quiescing the queue manager doesn't fulfill this need, as well-coded applications will receive a 2161/MQRC_Q_MGR_QUIESCING exception before they receive their reply messages for in-flight requests.
MQ has a built-in approach that helps: set PUT(DISABLED) on the request queues used to submit work, while leaving the reply queues both PUT(ENABLED) and GET(ENABLED). Then monitor the depth of the request, transmission and reply queues, and once they all stabilize (in-flight requests complete or time out) you can stop the queue manager.
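As a rough sketch, the drain sequence above could be driven from runmqsc like this. The queue manager name (QM1) and queue names (APP.REQUEST, APP.REPLY) are illustrative placeholders, not names from the article:

```shell
# Stop new requests being submitted (queue names are examples)
echo "ALTER QLOCAL(APP.REQUEST) PUT(DISABLED)" | runmqsc QM1

# The reply path stays fully open so in-flight replies can be consumed
echo "DISPLAY QLOCAL(APP.REPLY) PUT GET" | runmqsc QM1

# Monitor depths until they stabilize (in-flight requests complete or time out)
echo "DISPLAY QLOCAL(APP.REQUEST) CURDEPTH" | runmqsc QM1
echo "DISPLAY QLOCAL(APP.REPLY) CURDEPTH" | runmqsc QM1

# Once depths are stable, end the queue manager, waiting for it to quiesce
endmqm -w QM1
```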
However, this relies on good coding in the requesting applications to handle a PUT(DISABLED) request queue, which will result in 2051/MQRC_PUT_INHIBITED errors when they try to send a message. The exception won't occur when creating the connection to MQ, or when opening the request queue, but only when an attempt is made to actually send (MQPUT) a message.
Building a code stub that includes this error-handling logic for request/reply scenarios, and asking your app teams to use that code stub going forwards, can help you develop applications with consistent behavior.
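To illustrate the shape such a code stub might take, here is a minimal Java sketch. The class and interface names are hypothetical (none come from the MQ API); a real stub would wrap MQ or JMS connections to each queue manager. The point it demonstrates is treating reason code 2051 (MQRC_PUT_INHIBITED) as "this queue manager is being drained", and moving on to the next endpoint instead of surfacing an error to the end user:

```java
import java.util.List;

// Hypothetical sender code stub: treats MQRC_PUT_INHIBITED (2051) as a
// signal that a queue manager is being drained for planned maintenance.
public class RequestSender {

    static final int MQRC_PUT_INHIBITED = 2051;

    /** Simulated MQPUT target; a real stub would wrap an MQ connection. */
    interface QueueManagerPut {
        void put(String message) throws MQPutException;
    }

    /** Carries the MQ reason code, standing in for the real MQ exception. */
    static class MQPutException extends Exception {
        final int reasonCode;
        MQPutException(int reasonCode) { this.reasonCode = reasonCode; }
    }

    private final List<QueueManagerPut> endpoints;

    RequestSender(List<QueueManagerPut> endpoints) {
        this.endpoints = endpoints;
    }

    /**
     * Try each queue manager in turn. A put-inhibited queue means that
     * queue manager is draining, so route to the next one; any other
     * reason code is an unexpected failure and is rethrown.
     */
    void send(String message) throws MQPutException {
        MQPutException lastInhibited = null;
        for (QueueManagerPut qm : endpoints) {
            try {
                qm.put(message);
                return; // sent successfully
            } catch (MQPutException e) {
                if (e.reasonCode == MQRC_PUT_INHIBITED) {
                    lastInhibited = e;
                    continue; // drained for maintenance; try the next QMGR
                }
                throw e; // unexpected failure: surface it
            }
        }
        throw lastInhibited; // every endpoint was put-inhibited
    }
}
```

With a stub like this in place, setting PUT(DISABLED) on one queue manager's request queue causes new requests to flow to the remaining queue managers, with no error visible to the end user.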
As of today I haven't included MQRC_PUT_INHIBITED handling in the 'simple' synchronous request/reply case code samples I provided with the articles. It's covered in the 'advanced' synchronous request/reply sample, but that is more complex than most projects require.
So it's on my todo list to enhance the 'simple' synchronous request/reply sample to demonstrate how you might handle MQRC_PUT_INHIBITED to enable planned queue manager restart without any end-user visible errors/timeouts.