Some questions have come up again and again and again when I’ve talked to MQ customers:
What's the best way to configure an app server (IBM WebSphere Application Server etc.) cluster to talk to an MQ cluster?
How do I make MQ highly available (HA) and horizontally scalable?
If my app (or app server) fails, but my queue manager keeps running, how do I prevent stranded messages?
How do I workload balance app connections to an MQ cluster?
How do I get exactly-once delivery?
Each time, after giving the standard "it depends" answer, we’ve had a detailed discussion (often pulling in other L3/Development colleagues) to try to get the very best answer for the individual situation. Usually the constraints of the scenario determine the outcome – such as “I can’t change the apps”.
Then, when I was on-site with a customer some months back, talking through questions just like this for a new project unconstrained by previous implementations, it struck me how much could be standardized if you brought the app developers and MQ operations teams together to agree on the best way to build the solution – DevOps if you like.
So while I can’t give the perfect answer for every situation, I thought it was time to try and put an answer down in a re-usable form, with coding principles and examples for the developers and config samples for the MQ infrastructure teams.
For example: how would I most likely build an MQ infrastructure for a new project, and code apps to use it, if I were told to make it scalable and highly available?
The result is a series of developerWorks articles, describing a possible architecture that solves common non-functional requirements for a range of scenarios. It is a client-server MQ topology design where the MQ queue managers (and hence the persistent messages and MQ administration) are hosted on a set of virtual or physical servers separate from the applications that use them. I’ve heard these increasingly referred to as ‘hubs’ of MQ, so that’s how I refer to them in the articles.
I note that in my experience MQ hubs are usually still single-tenancy and owned by the application/project/team that connects to them – and usually for good reasons such as managing risk or data/operational isolation. Multiple hubs are then interconnected to build a ‘connectivity bus’ with store-and-forward between the various applications/data centers/geographies. There is of course scope to share hubs between applications that don’t need store-and-forward between them, for example if they share the same network segment within a data center. The articles could even be a starting point for building an MQ cloud, although that wasn’t my reason for writing them.
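To give a flavour of what connecting to a hub means in practice, here is a minimal sketch of my own (not taken from the articles) using the IBM MQ classes for JMS: the application connects as a client through a client channel definition table (CCDT), so connections can be spread across the hub queue managers and re-established against another queue manager if one becomes unavailable. The CCDT path and the ‘*HUBGRP’ queue manager group name are illustrative assumptions.

    import java.net.MalformedURLException;
    import java.net.URL;

    import javax.jms.Connection;
    import javax.jms.JMSException;

    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class HubClientConnection {

        // Connect to any available queue manager in the hub, rather than
        // binding to a queue manager running alongside the application.
        public static Connection connectToHub() throws JMSException, MalformedURLException {
            MQConnectionFactory cf = new MQConnectionFactory();

            // Client transport: the queue managers run on the hub servers,
            // separate from the application servers.
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

            // CCDT generated from the hub queue managers' client channel
            // definitions (the path here is an illustrative assumption).
            cf.setCCDTURL(new URL("file:///var/mqm/ccdt/AMQCLCHL.TAB"));

            // '*' plus a queue manager group name from the CCDT lets the
            // client connect to whichever hub queue manager is available,
            // spreading connections across the hub.
            cf.setQueueManager("*HUBGRP");

            return cf.createConnection();
        }
    }

The same CCDT-based approach applies whether the application is stand-alone Java or hosted in an application server; the articles cover the equivalent configuration for the other platforms.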
The series is called: A flexible and scalable WebSphere MQ topology pattern.
Part 1 covers the overall approach in a technology-agnostic way for any MQ application: from C/C++ code on AIX to C# on Microsoft .NET, from Java EE in IBM WebSphere Application Server to COBOL in CICS. For z/OS it’s worth noting I’ve ignored the massive benefits of a QSG (queue-sharing group), which makes the architecture less applicable to apps running inside a z/OS SYSPLEX, but still applicable to distributed applications that communicate with back-end applications running in the SYSPLEX.
Part 2 gives a worked set of application code and MQSC examples of the approach, using Java EE as the app platform. Why Java EE? Well, it’s the platform where I’ve had the most questions, so it was the first one I got to. I’d like to build similar collateral for Microsoft .NET, and a colleague has worked through how the principles could be applied to IBM Business Process Manager SCA components. I hope to get these out, either as full developerWorks articles or directly via this blog, over the coming months if there’s demand (and time permits).
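As a taste of the style of Java EE code Part 2 works through, here is a minimal message-driven bean sketch of my own (not lifted from the article) that consumes from a queue hosted on the hub. The queue, channel and connection-name values are illustrative assumptions, and in WebSphere Application Server the activation specification would normally be configured by the administrator rather than hard-coded in annotations.

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // Activation properties follow the IBM MQ JMS resource adapter style;
    // queue, channel and host names below are purely illustrative.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "APP.REQUEST.QUEUE"),
        @ActivationConfigProperty(propertyName = "transportType", propertyValue = "CLIENT"),
        @ActivationConfigProperty(propertyName = "connectionNameList", propertyValue = "mqhub1(1414),mqhub2(1414)"),
        @ActivationConfigProperty(propertyName = "channel", propertyValue = "APP.SVRCONN")
    })
    public class RequestListener implements MessageListener {

        @Override
        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    // Process the request; any reply would be sent via a
                    // connection factory pointing at the same hub.
                    System.out.println("Received: " + ((TextMessage) message).getText());
                }
            } catch (JMSException e) {
                // Rolling back causes redelivery; a real application also
                // needs a poison-message strategy.
                throw new RuntimeException(e);
            }
        }
    }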
So take a look, and hopefully you’ll find them interesting and informative – even if they aren’t quite the right fit for you.