Part 1: Building a multi-purpose WebSphere MQ infrastructure with scalability and high availability
This content is part of the series: A flexible and scalable WebSphere MQ topology pattern
This article describes an end-to-end example of how to build a hub topology of IBM® WebSphere® MQ (hereafter called MQ) that supports:
- Continuous availability to send MQ messages, with no single point of failure
- Linear horizontal scale of throughput, for both MQ and the attaching applications
- Exactly once delivery, with high availability of individual persistent messages
- Three messaging styles: Request/response, fire-and-forget, and publish/subscribe
- A hub model, with a centralized MQ infrastructure scaled independently from the application
Part 1 of this article series describes the overall infrastructure topology and summarizes how it meets the above non-functional requirements for a wide range of applications. Subsequent parts show you how to configure the various components, including how to code applications that connect to the infrastructure. The topology contains four logical tiers:
Figure 1. Topology overview
- Sending applications: the applications sending the message.
- Sender gateway: the MQ queue managers that the sending applications connect to.
- Receiver gateway: the MQ queue managers that the receiving applications connect to. The sending and receiving gateway queue managers can be the same queue managers.
- Receiving applications: the applications receiving the message.
The only MQ installations are queue managers acting as the sending and receiving gateways. The sending and receiving applications attach to these queue managers as clients, as described below.
The term gateway in this instance indicates that these queue managers are the way that applications get messages into or out of the MQ network, and that each application is assigned a set of queue managers to use in the sending and receiving gateway roles. A group of queue managers that a set of applications connect to is called an MQ hub.
An individual queue manager in an MQ hub can act as both a sender and receiver gateway. A sender gateway in one MQ hub can communicate with a receiver gateway in another MQ hub. An MQ hub can be the gateway for multiple applications, or dedicated to a single application, depending on the isolation and performance requirements of that application.
The minimum number of queue managers required for the topology is two, in order to avoid a single point of failure. These two queue managers can act as both sending and receiving gateways. If automated recovery of individual persistent messages is required after a hardware failure, then these queue managers should themselves be made recoverable via a high availability (HA) failover technology. Automatic recovery of persistent messages helps prevent stranded messages, and is important in many exactly once delivery scenarios to ensure the timely delivery of messages.
Figure 2 below shows this minimum size topology, or MQ hub, with the MQ multi-instance feature used to provide queue manager HA recovery across two servers. You can use an HA cluster, such as IBM PowerHA, to achieve the same purpose with direct (fiber) attachment to a file system, such as a Storage Area Network (SAN). For more information on choosing a suitable HA failover technology, see Using WebSphere MQ with high availability configurations in the WebSphere MQ information center.
Figure 2. Two-queue-manager MQ hub with HA
Sending and receiving gateways
If the same set of queue managers is being used for the sending and receiving gateway roles within the MQ hub, why distinguish between the two roles in the topology?
Firstly, because messages that are sent by an application through a particular sending gateway queue manager might be workload balanced by the MQ cluster to a different receiving gateway queue manager in the same MQ hub, or in a different MQ hub somewhere else in the enterprise.
And secondly, because the queue managers provide fundamentally different features to the application when acting in these roles, summarized as follows:
- Sending gateway role:
- Provides continuously available store and forward capabilities, so fire-and-forget and publish actions can always be performed
- Contains response queues for applications performing request/reply actions
- Receiving gateway role:
- Contains queues from which applications host a service that needs to be continually available
- Delivers messages to applications with subscriptions to messages published on a topic
In order to access these features, applications connect differently to a queue manager, depending on whether they need it to act in the sending or receiving gateway role.
Extending the messaging hub
You can place additional messaging infrastructure tiers between the sending and receiving gateways, including using WebSphere Message Broker to perform message filtering, routing, and prioritization based on message content. An example is shown in Figure 3. Again, the sending and receiving gateways can be the same queue managers:
Figure 3. Extending topology to include WebSphere Message Broker
For more information on WebSphere Message Broker, see Related topics at the bottom of the article.
The continuous availability and scalability characteristics of the topology are based on some fundamental principles:
- Each application instance connects to exactly two queue managers.
- When sending messages, the messages are workload-balanced across the two. When listening for messages to arrive, the application listens to both queue managers for messages to arrive. The special case of receiving replies in request/reply messaging scenarios will be discussed later.
- Every receiving gateway configured for an application has at least two application instances attached.
- This arrangement prevents messages from becoming stranded if one application instance fails.
- There must be at least as many receiving application instances as receiver gateways configured for that application.
- If you are building a shared MQ infrastructure for many applications, some applications might have fewer instances than others, and hence be able to connect to fewer receiver gateway instances. As a result, some of your receiver gateways may be configured for different subsets of your applications.
For considerations for scenarios requiring non-durable publish/subscribe or message ordering, see Scenarios below.
Figure 4 below shows an example of how these principles are applied. The diagram shows a scenario with five queue managers in an MQ hub, acting as both sending and receiving gateways. A sending application is shown with eight instances, which utilize all five queue managers as sending gateways. A receiving application is shown with only four instances, which can utilize a maximum of four queue managers. One of the queue managers is not configured as a receiving gateway for the application, in order to prevent messages being routed to that queue manager and becoming stranded.
Figure 4. Example MQ hub showing application connections configured to meet the above principles
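As a rough illustration of these principles, the following Python sketch (not MQ API code; all queue manager and application names are hypothetical) assigns each receiving instance to two gateways in a ring and checks that every configured receiver gateway ends up with at least two listening instances:

```python
# Illustrative sketch (not MQ API code): checking gateway assignments
# against the principles above. All names are hypothetical.

def assign_listeners(instances, receiver_gateways):
    """Assign each application instance to exactly two receiver gateways,
    walking a ring so listeners are spread evenly across the gateways."""
    assignments = {}
    n = len(receiver_gateways)
    for i, inst in enumerate(instances):
        # Each instance listens to two adjacent gateways in the ring.
        assignments[inst] = [receiver_gateways[i % n],
                             receiver_gateways[(i + 1) % n]]
    return assignments

def listeners_per_gateway(assignments):
    counts = {}
    for gateways in assignments.values():
        for gw in gateways:
            counts[gw] = counts.get(gw, 0) + 1
    return counts

# Four receiving instances across four configured receiver gateways
# (as in Figure 4, the fifth queue manager is not a receiver gateway).
instances = ["app1", "app2", "app3", "app4"]
gateways = ["QM1", "QM2", "QM3", "QM4"]
assignments = assign_listeners(instances, gateways)
counts = listeners_per_gateway(assignments)

# Every gateway has at least two listening instances, so no queue is
# stranded if a single application instance fails.
assert all(c >= 2 for c in counts.values())
```

With equal numbers of instances and gateways, the ring walk gives every gateway exactly two listeners, which is the minimum required by the principles above.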
Connection types in detail
Applications connecting to the MQ hub are likely to be performing one of the following activities:
- Sending a message to a queue or a topic where no response is expected, such as sending a data update or emitting an event. We shall call this fire and forget.
- Beginning a long-running listener for messages arriving for processing, on a queue or a durable subscription. We shall call this a message listener.
- Sending a response message to a request it has processed via a message listener. We shall treat this identically to fire and forget.
- Sending a request message where the response is required immediately for processing to continue, such as querying some data. We shall call this synchronous request/response.
- Sending a message that might generate one or more responses, and these responses are able to arrive at any time in the future. We shall treat this two-way asynchronous messaging pattern as a fire and forget of a request combined with a message listener for responses.
Each of these activities has different considerations for how an application connects to an MQ hub, which are described below along with the role that the queue managers in the MQ hub play as a sender or receiver gateway for the application. A future article in this series will show you how to achieve these connection patterns in common programming environments such as Java™ Enterprise Edition (Java EE), Java Standard Edition (Java SE), and Microsoft® .NET®.
Connecting for fire and forget
When an application connects for fire and forget messaging, it can connect to any available sender gateway -- any gateway queue manager in its local MQ hub. This queue manager is then responsible for delivering messages to the target queue, which might be on that same queue manager, workload balanced across the other queue managers in the local MQ hub, or workload balanced across a cluster to another MQ hub where the target application connects.
In order to avoid any single point of failure, and to spread the workload across all of the queue managers in the application's local MQ hub, the application should workload balance the connection it uses for its requests across multiple queue managers. WebSphere MQ features such as the Client Connection Definition Table (CCDT) can help, but to fully capitalize on connection caching and pooling, and to be able to use XA transactions for exactly-once delivery, using a small amount of custom code to balance messages between the two connections is often preferred. Figure 5 shows an application workload balancing fire and forget messages across gateways:
Figure 5. An application connecting for fire and forget messaging.
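The "small amount of custom code" mentioned above might balance sends between the two connections along the following lines. This is an illustrative Python sketch, not the MQ client API; the connection class is a hypothetical stand-in for a real MQ client connection:

```python
# Illustrative sketch of client-side workload balancing across two
# sender gateway connections, with failover. The connection objects are
# hypothetical stand-ins for real MQ client connections.

import itertools

class GatewayConnection:
    def __init__(self, name, available=True):
        self.name = name
        self.available = available
        self.sent = []

    def send(self, queue, message):
        if not self.available:
            raise ConnectionError(f"{self.name} is down")
        self.sent.append((queue, message))

class BalancedSender:
    """Round-robin messages across two gateway connections; if one
    gateway fails, fall back to the other."""
    def __init__(self, conn_a, conn_b):
        self._ring = itertools.cycle([conn_a, conn_b])
        self._conns = (conn_a, conn_b)

    def send(self, queue, message):
        first = next(self._ring)
        try:
            first.send(queue, message)
            return first.name
        except ConnectionError:
            # Fail over to the other gateway: no single point of failure.
            other = self._conns[1] if first is self._conns[0] else self._conns[0]
            other.send(queue, message)
            return other.name

qm1, qm2 = GatewayConnection("QM1"), GatewayConnection("QM2")
sender = BalancedSender(qm1, qm2)
for i in range(4):
    sender.send("APP.REQUEST", f"msg{i}")
# Traffic is split evenly while both gateways are up.
assert len(qm1.sent) == 2 and len(qm2.sent) == 2

qm1.available = False          # simulate a gateway outage
assert sender.send("APP.REQUEST", "msg4") == "QM2"
```

A real implementation would additionally re-establish the failed connection in the background, and could wrap each send in an XA transaction for exactly-once delivery.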
Connecting a message listener to a queue
In order to provide continuous availability, there should be more than one clustered target MQ queue, on different queue managers, for each receiving application. Having such multiple queues means that if one queue manager fails, the only requests that are stranded on that queue manager (or lost in the case of non-persistent messages) are those waiting to be processed on that queue manager when it failed. New requests are routed to the queue managers that are still available.
It is also important that messages do not become stranded on a particular queue manager if an instance of the application fails. The approach recommended in this article is to make each instance of the application listen to two receiving gateways, and configure those connections such that every queue manager has two applications listening to its queue. The benefit of this dual-listener approach is that handling the failure of the receiving application instance is instantaneous, as messages are already being processed by another instance connected to the same queue. The MQ feature AMQSCLM can also provide a solution here, by detecting the failure of the application and rerouting messages to other queues in the cluster. For more information, see The Cluster Queue Monitoring sample program (AMQSCLM) in the WebSphere MQ information center.
Figure 6 shows an application listening for messages against two receiving gateway queue managers:
Figure 6. An application listening for messages
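The dual-listener approach can be sketched as follows. This is illustrative Python, not the MQ client API; a real instance would hold two MQ client connections, each with its own asynchronous message listener, rather than polling in-memory queues:

```python
# Illustrative sketch: one application instance draining messages from
# queues on two receiving gateway queue managers. Names are hypothetical.

from collections import deque

class GatewayQueue:
    """A hypothetical stand-in for a queue on one gateway queue manager."""
    def __init__(self, name):
        self.name = name
        self._messages = deque()

    def put(self, msg):
        self._messages.append(msg)

    def get(self):
        return self._messages.popleft() if self._messages else None

class DualListener:
    """Consume from both gateway queues, so that messages routed to
    either queue manager are processed by this instance."""
    def __init__(self, queue_a, queue_b):
        self.queues = [queue_a, queue_b]
        self.processed = []

    def drain(self):
        for q in self.queues:
            while (msg := q.get()) is not None:
                self.processed.append((q.name, msg))

qm1_q, qm2_q = GatewayQueue("QM1"), GatewayQueue("QM2")
listener = DualListener(qm1_q, qm2_q)
qm1_q.put("request-1")
qm2_q.put("request-2")
listener.drain()
# Messages from both gateways are processed by the one instance, so a
# second instance failing does not strand either queue.
assert {m for _, m in listener.processed} == {"request-1", "request-2"}
```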
Connecting a message listener to a durable subscription
Providing the same level of reliability for a durable subscription as described above for a queue is slightly more complex. If an application were to connect to two queue managers and create a durable subscription on each, then it would receive two copies of each message.
Instead, you can get the same level of reliability by administratively creating the subscription on each of the sending gateways to which applications connect to send messages, and pointing that subscription at a clustered queue that is defined on the receiving gateways. To prevent duplication of the messages within the cluster, it is important to set SUBSCOPE(QMGR) on the subscriptions. When using this SUBSCOPE(QMGR) approach to durable subscriptions, you do not have to share the topic objects in the cluster -- in fact, it is preferable not to cluster any topic objects.
The receiving application then attaches its listeners to the clustered queue, using the procedure described under Connecting a message listener to a queue above. Figure 7 shows the subscriptions and queues configured to allow a single logical durable subscription to exist with no single point of failure:
Figure 7. An application listening for messages on subscriptions on the sending gateways
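Under this approach, the administrative definitions might look like the following MQSC sketch. The subscription, queue, topic string, and cluster names are all hypothetical, and your definitions would also carry your own security and persistence attributes:

```
* On each SENDING gateway queue manager (MQSC; names are hypothetical).
* SUBSCOPE(QMGR) limits the subscription to publications made through
* this queue manager, so each published message is captured once here
* and then routed to the clustered target queue.
DEFINE SUB('APP.DURABLE.SUB') +
       TOPICSTR('Price/Updates') +
       DEST('APP.TARGET.QUEUE') +
       SUBSCOPE(QMGR)

* On each RECEIVING gateway queue manager: the clustered queue that the
* application instances listen to, as described in the previous section.
DEFINE QLOCAL('APP.TARGET.QUEUE') CLUSTER('APP.CLUSTER')
```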
Connecting for synchronous request/response
In synchronous request/response scenarios, an application sends a request, and then blocks waiting for a response or a timeout. It is possible for either the request or the response to get delayed (or lost for non-persistent messages), and for the requester to time out waiting for a response. The requester cannot determine whether the request has succeeded. It is good practice to configure requests and responses with an expiry to prevent orphaned response messages building up on queues when the requesting application times out waiting for a response. Another alternative is to configure the application to search for and handle orphaned response messages.
The simplest coding pattern for achieving request/response messaging with an MQ hub is shown in Figure 8 below, where the requests are workload-balanced across the available sending gateways, and the application looks for the response only on the queue manager to which it was connected when it sent the request. Using this approach, the application must use the same connection to the MQ hub for sending the request and receiving the response. If it were to reconnect before receiving the response, it might connect to a different queue manager, and it would not see the response message sent to the first queue manager.
Figure 8. An application performing simple request/response messaging
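The matching of responses to requests in this simple pattern can be sketched as follows. This is illustrative Python, not the MQ client API; the service's reply is simulated in-line, and in real code the responder would copy the request's message ID into the response's correlation ID:

```python
# Illustrative sketch of simple synchronous request/response: the
# request and the response queue share one gateway connection, and the
# response is matched by correlation ID. All names are hypothetical.

import uuid
from collections import deque

class GatewayConnection:
    """A stand-in for one MQ client connection with a response queue."""
    def __init__(self):
        self.response_queue = deque()

    def send_request(self, payload):
        msg_id = str(uuid.uuid4())
        # Simulate the service replying on this connection's response
        # queue, using the request's message ID as the correlation ID.
        self.response_queue.append({"correl_id": msg_id,
                                    "body": payload.upper()})
        return msg_id

    def get_response(self, correl_id):
        # Find the response whose correlation ID matches our request.
        for msg in list(self.response_queue):
            if msg["correl_id"] == correl_id:
                self.response_queue.remove(msg)
                return msg["body"]
        return None   # real code would block with a wait interval/timeout

conn = GatewayConnection()
correl_id = conn.send_request("ping")
# The response MUST be fetched over the same connection that sent the
# request; reconnecting could land on a different queue manager, where
# the response will never arrive.
assert conn.get_response(correl_id) == "PING"
```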
Minimizing timeout failures for synchronous request/response
There is an extension to the synchronous request/response pattern that minimizes the number of failed requests if a queue manager in the environment fails. The extension involves listening to two sending gateways for response messages on a clustered response queue. The clustered queues need to be managed so that a separate clustered response queue (or clustered queue manager alias) exists for each requesting application instance.
The additional complexity of listening to two response queues has the most benefit if the latency of the messaging environment is much smaller than the latency involved in performing the business logic (which is most commonly the case), and if there are a large number of parallel receiving instances or threads processing requests. In this scenario, if a sending gateway queue manager fails, it is likely that most requests will be in the middle of processing within application threads, rather than waiting for delivery within MQ, so the responses can be routed by MQ to the alternative sending gateway queue manager. Figure 9 shows an application performing request/response messaging with a clustered response queue:
Figure 9. An application performing request/response messaging with a clustered queue
Scenarios requiring non-durable publish/subscribe or message ordering
The above patterns of messaging cover a wide variety of uses of MQ. However, there are some scenarios in which the principles described under Connecting applications above are more complicated to apply. Solutions for some of these scenarios are summarized below:
For non-durable publish/subscribe, if an application attaches multiple times, it receives multiple copies of each publication. Unlike with durable subscriptions, you cannot work around this in the topology described in this article by redirecting the subscription to cluster queues. Alternative approaches include:
- Using durable publish/subscribe
- The administrative overhead of using a durable subscription is worthwhile if a principal concern is to avoid loss of messages, or to scale message delivery across multiple queue managers in the MQ hub.
- Attaching to only one receiving gateway.
- Connecting to a single queue manager when receiving messages is a simple approach that is suitable for the majority of non-durable applications. The application does not need to connect to the same queue manager each time it connects, as the MQ cluster can be used to route publications to the application wherever it connects. The limitation of this approach is that the application cannot scale beyond a single receiving instance.
- Partitioning your topics
- If you need to scale across multiple application instances, you can partition your topics, and embed logic in your publishing applications to workload-balance across the partitions of a topic. With this approach, each application instance attaches to a single gateway, but you can have multiple application instances, each consuming one partition of the topic.
- Using Multicast publish/subscribe
- If you investigate partitioning your topics to scale across multiple application instances, then you might also want to investigate using the MQ Multicast publish/subscribe feature. It is particularly suitable if you have a large fan-out between publishers and subscribers, or if the equality and fairness of the latency between subscribers is important.
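The topic-partitioning alternative above can be sketched as a simple hash over a message key. This is illustrative Python (the topic names are hypothetical); the point is that the publisher's choice of partition is deterministic per key, so each subscriber instance can own one partition:

```python
# Illustrative sketch of partitioned publish/subscribe: the publisher
# hashes a message key to pick one topic partition, and each subscriber
# instance attaches to exactly one partition. Topic names hypothetical.

import zlib

PARTITIONS = ["Price/Updates/P0", "Price/Updates/P1", "Price/Updates/P2"]

def partition_for(key):
    """Deterministically map a message key to one topic partition."""
    return PARTITIONS[zlib.crc32(key.encode()) % len(PARTITIONS)]

# All updates for the same key always land on the same partition, so
# the single subscriber owning that partition sees them in publish order.
assert partition_for("customer-42") == partition_for("customer-42")

# Different keys spread across the partitions (the exact spread depends
# on the hash and the key population).
used = {partition_for(f"customer-{i}") for i in range(100)}
assert used <= set(PARTITIONS)
```

The publishing application would publish to `partition_for(key)`, and each consuming instance would subscribe to exactly one partition topic string, so the single-instance limitation applies per partition rather than per topic.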
MQ assures order of delivery only when there is exactly one path between the single sending and receiving application threads within the MQ network. All of the approaches described in this article for providing a continuously available MQ infrastructure create multiple paths that messages might take through the MQ infrastructure.
Alternative approaches include:
- Allocate a single, highly available sending and receiving gateway to each ordered application
- High availability of individual queue managers is still achieved through MQ multi-instance or an HA cluster, as described above in MQ hub.
- Use the logical order feature of MQ
- Well-defined groups of messages with a beginning and an end can be sent through the MQ infrastructure as a logical group, and targeted to an individual destination queue manager.
- Perform reordering within the application
- The most flexible solutions involve the sending application adding sequencing information to the messages, which the receiving application then uses to reorder messages that arrive out of sequence. For example, you could use a database shared between the sending application instances to synchronize updates and generate a sequence number, and then the receiving application instances could maintain a similar sequence in their own database when processing the updates.
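A minimal resequencer along these lines might look like the following sketch (illustrative Python; the sequence-number generation and transport details described above are omitted):

```python
# Illustrative sketch of application-level reordering: the sender stamps
# each message with a sequence number, and the receiver buffers
# out-of-order arrivals until the gap before them is filled.

class Resequencer:
    def __init__(self):
        self.next_seq = 1
        self.pending = {}      # sequence number -> buffered message body
        self.delivered = []    # messages released in sequence order

    def on_message(self, seq, body):
        self.pending[seq] = body
        # Release every message whose predecessors have all arrived.
        while self.next_seq in self.pending:
            self.delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1

rs = Resequencer()
# Messages arrive out of order after taking different paths through the
# MQ infrastructure.
for seq, body in [(2, "b"), (1, "a"), (4, "d"), (3, "c")]:
    rs.on_message(seq, body)
assert rs.delivered == ["a", "b", "c", "d"]
```

In practice the receiving instances would persist `next_seq` and the pending buffer (for example, in the shared database mentioned above) so that reordering survives an instance restart.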
- WebSphere MQ resources
- WebSphere MQ V7 information center
A single Web portal to all WebSphere MQ V7 documentation, with conceptual, task, and reference information on installing, configuring, and using WebSphere MQ V7.
- Using WebSphere MQ with high availability configurations
- The Cluster Queue Monitoring sample program (AMQSCLM)
- WebSphere MQ developer resources page
Technical resources to help you design, develop, and deploy messaging middleware with WebSphere MQ to integrate applications, Web services, and transactions on almost any platform.
- WebSphere MQ product page
Product descriptions, product news, training information, support information, trial download, and more.
- WebSphere MQ product family
A description of the ten or so different editions of WebSphere MQ.
- WebSphere MQ documentation library
WebSphere MQ information centers and product manuals.
- IBM Redbook: WebSphere MQ V7 features and enhancements
Describes the fundamental concepts and benefits of message queuing technology, describes the new features in V7, and provides a business scenario that shows those features in action.
- Download a free trial version of WebSphere MQ V7
A 90-day, full featured, no-charge trial of WebSphere MQ V7
- WebSphere MQ support page
A searchable database of support problems and their solutions, plus downloads, fixes, and problem tracking.
- WebSphere MQ forum
Get answers to your WebSphere MQ technical questions and share your knowledge with other users.
- WebSphere MQ SupportPacs
Downloadable code, documentation, and performance reports for WebSphere MQ products. The majority of SupportPacs are available at no charge, while others can be purchased as fee-based services from IBM.
- WebSphere Message Broker resources
- WebSphere Message Broker V8 information center
A single Web portal to all WebSphere Message Broker V8 documentation, with conceptual, task, and reference information on installing, configuring, and using your WebSphere Message Broker environment.
- WebSphere Message Broker developer resources page
Technical resources to help you use WebSphere Message Broker for connectivity, universal data transformation, and enterprise-level integration of disparate services, applications, and platforms to power your SOA.
- WebSphere Message Broker product page
Product descriptions, product news, training information, support information, and more.
- WebSphere resources
- developerWorks WebSphere
Technical information and resources for developers who use WebSphere products. developerWorks WebSphere provides product downloads, how-to information, support resources, and a free technical library of more than 2000 technical articles, tutorials, best practices, IBM Redbooks, and online product manuals.
- developerWorks WebSphere application integration developer resources
How-to articles, downloads, tutorials, education, product info, and other resources to help you build WebSphere application integration and business integration solutions.
- Most popular WebSphere trial downloads
No-charge trial downloads for key WebSphere products.
- WebSphere forums
Product-specific forums where you can get answers to your technical questions and share your expertise with other WebSphere users.
- WebSphere demos
Download and watch these self-running demos, and learn how WebSphere products can provide business advantage for your company.
- WebSphere-related articles on developerWorks
Over 3000 edited and categorized articles on WebSphere and related technologies by top practitioners and consultants inside and outside IBM. Search for what you need.
- developerWorks WebSphere weekly newsletter
The developerWorks newsletter gives you the latest articles and information only on those topics that interest you. In addition to WebSphere, you can select from Java, Linux, Open source, Rational, SOA, Web services, and other topics. Subscribe now and design your custom mailing.
- WebSphere-related books from IBM Press
Convenient online ordering through Barnes & Noble.
- WebSphere-related events
Conferences, trade shows, Webcasts, and other events around the world of interest to WebSphere developers.
- developerWorks resources
- Trial downloads for IBM software products
No-charge trial downloads for selected IBM® DB2®, Lotus®, Rational®, Tivoli®, and WebSphere® products.
- developerWorks business process management developer resources
BPM how-to articles, downloads, tutorials, education, product info, and other resources to help you model, assemble, deploy, and manage business processes.
- developerWorks blogs
Join a conversation with developerWorks users and authors, and IBM editors and developers.
- developerWorks tech briefings
Free technical sessions by IBM experts to accelerate your learning curve and help you succeed in your most challenging software projects. Sessions range from one-hour virtual briefings to half-day and full-day live sessions in cities worldwide.
- developerWorks podcasts
Listen to interesting and offbeat interviews and discussions with software innovators.
- developerWorks on Twitter
Check out recent Twitter messages and URLs.
- IBM Education Assistant
A collection of multimedia educational modules that will help you better understand IBM software products and use them more effectively to meet your business requirements.