A flexible and scalable WebSphere MQ topology pattern

Part 2: Workload-balanced MQ client connections for Java EE applications



This article provides Java™ Enterprise Edition (Java EE) code examples that show how an application creates workload-balanced connections to an IBM® WebSphere® MQ (hereafter called MQ) queue manager cluster. The code examples cover both inbound and outbound scenarios, and include MQ Script (MQSC) commands to configure an MQ hub that applications connect to as clients, as described in Part 1. The article also provides deployment instructions and configuration scripts for WebSphere Application Server single-server and Network Deployment environments. The Java EE code examples are applicable to any Java EE compliant application server with the MQ resource adapter installed. You can use these examples as the basis of any Java EE application that performs JMS messaging, including the following JMS use cases:

  • Message listener -- Beginning a long-running listener for messages arriving for processing on a queue or a durable subscription.
  • Fire and forget -- Sending a message to a queue, or publishing to a topic, where no response is expected. Examples include sending a data update, emitting an event, or sending a reply to a request.
  • Synchronous request/response -- Sending a request message where the response is immediately required for processing to continue, such as querying essential data.
  • Two-way asynchronous messaging -- Using fire and forget for requests and a message listener for responses, where the responses can be handled at any time.

The code examples are provided as-is, and they show you how to achieve continuous service availability and workload-balanced connections using features in WebSphere MQ V7.0.1, V7.1 and V7.5. In order to support two-phase (XA/JTA) transactions and provide maximum flexibility over use of Java EE connection pooling and other features of application servers, the examples do not exploit any of the built-in workload-balancing or reconnection capabilities provided by the Client Channel Definition Table (CCDT), Connection Namelist, or Automatic WebSphere MQ Client Reconnection features of MQ. Future versions of MQ may provide alternative approaches to solving workload-balanced client attachment.

This article also covers a number of concepts specific to use of MQ in a Java EE environment, such as transactions, connection pooling, and tuning the MQ resource adapter to reconnect after failures. Scripting examples are provided for MQ and for WebSphere Application Server that address all aspects of the configuration, including deployment of the example applications, and creation of a local sandbox environment. While the article does not provide details on how to deploy to other Java EE environments, the code examples and information in this article should be applicable to any Java EE environment with the MQ resource adapter installed.

MQ client workload-management (WLM)

This article series describes one possible MQ topology, where you do not have MQ queue managers running locally on the application server machines. Instead, applications connect over a network to an MQ hub consisting of an MQ cluster containing multiple active queue managers. In many cases, the MQ hub may be dedicated to an individual application, but scaled on a separate set of virtual or physical machines to the Java EE application cluster. Alternatively, the MQ hub may be shared among applications within a department or network segment. Multiple MQ hubs may be interlinked via the MQ cluster to provide a connectivity bus between all applications in the enterprise.

Overview of MQ hub topology

Because the Java EE application servers do not have a queue manager running on their local machine, they cannot use the MQ cluster workload balancing that would be provided by a local (bindings attached) MQ queue manager. Instead, they must be configured to connect to one or more queue managers in the MQ hub as an MQ client over the network. The simplest choice is to connect each application instance (Java EE application server cluster member) to an individual queue manager in the MQ hub. However, this approach means that application instances will fail if their individual queue manager is temporarily unavailable, and it requires you to determine how to spread the application workload across the queue managers in the MQ hub.

Therefore, it is better to provide multiple queue managers for an individual application to connect to, either in a primary/secondary relationship, or preferably, with workload balancing between a set of queue managers in the MQ cluster.

Part 1 laid out principles for configuring multiple workload-balanced connections for each application instance, to meet the non-functional requirements of many environments. Part 2 now follows on from that introduction to provide Java EE code implementation and configuration examples. Since there are multiple options for implementing workload-balanced connections, it also explains why particular options were chosen.

Outbound workload-management (WLM)

For outbound connections sending messages, the MQ Client Channel Definition Table (CCDT) feature is a viable option, since CCDTs let you prioritize connections to multiple queue managers, or perform simple workload management via randomization. If just prioritized primary/secondary connections are required, then using a connection namelist is an option.

However, instead of using these built-in features, this article provides custom code examples that perform workload-management between two or more members of the cluster. Why? Because the following considerations exist in a Java EE context but do not generally exist in a Java SE context, and they make using a CCDT or connection namelist challenging for many applications:

  • Java EE applications are generally designed with the open-use-close approach, relying on the connection pooling of the application server to make the approach efficient. Therefore any workload-balancing approach must consider that connections are long running, and held within JMS connection and JMS session pools scoped to the connection factory. A common problem with using a CCDT in this scenario is that if a queue manager becomes unavailable, the connection pool fills up with connections to the remaining queue managers. After the failed queue manager becomes active again, these connections are not rebalanced, unless an Aged Timeout is specified on the connection and session pools.

    While using an Aged Timeout is a viable approach, it causes regular reconnects during normal operation that have an inevitable performance cost associated with them, particularly when SSL/TLS security is enabled. Also, using an Aged Timeout can result in unexpected transaction rollbacks and exceptions. If connections in the connection pool are closed due to an Aged Timeout, all sessions associated with that connection are closed, even if they are in use within the session pool. If you are considering using an Aged Timeout with MQ, see the technote J2CA0027E errors in WebSphere Application Server for more information.

  • A key reason for using a Java EE application server is the transaction manager provided in the enterprise environment. The JTA transaction coordination specification assumes that each connection factory has a one-to-one mapping with a transactional resource manager, such as an individual MQ queue manager. Application servers rely on this one-to-one mapping to store recovery information that is used to determine commit/rollback decisions after a failure, including WebSphere Application Server when connecting to MQ. Specifying a CCDT or connection namelist in a connection factory that points to multiple queue managers (rather than two hosts for a single multi-instance queue manager), breaks this assumption. As a result, global transaction coordination should not be used with a CCDT or connection namelist pointing at multiple different queue managers.

For the above reasons, this article takes a different approach, and provides a small example code library to perform outbound workload-management between multiple connection factories. Each of these connection factories has its own connection and session pool, and points at a single queue manager, so the above challenges with a CCDT based approach do not exist.
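The core of that approach can be sketched in plain Java. The class below is illustrative only (the names and API are assumptions, not the actual WLMJMSAttachLibrary interface): a thread-safe round-robin selector over the JNDI names of several connection factories, each pointing at one gateway queue manager, which skips gateways currently marked as failed.

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: round-robin selection over multiple connection
// factories, skipping any that are currently marked as failed.
class RoundRobinSelector {
    private final List<String> factoryJndiNames;            // one per gateway queue manager
    private final AtomicInteger next = new AtomicInteger(); // round-robin counter
    private final Set<String> failed = ConcurrentHashMap.newKeySet();

    RoundRobinSelector(List<String> factoryJndiNames) {
        this.factoryJndiNames = List.copyOf(factoryJndiNames);
    }

    /** Returns the next available factory name, or throws if all are marked failed. */
    String nextFactory() {
        for (int i = 0; i < factoryJndiNames.size(); i++) {
            int idx = Math.floorMod(next.getAndIncrement(), factoryJndiNames.size());
            String name = factoryJndiNames.get(idx);
            if (!failed.contains(name)) {
                return name;
            }
        }
        throw new IllegalStateException("No gateway queue manager available");
    }

    /** Called when a connection attempt fails, so other threads avoid this gateway. */
    void markFailed(String name) { failed.add(name); }

    /** Called when a probe succeeds, returning the gateway to the rotation. */
    void markAvailable(String name) { failed.remove(name); }
}
```

In a real application, the returned name would be resolved through a resource reference, and a small number of probe threads would call markAvailable() once a connection attempt to a failed gateway succeeds again, returning it to the rotation.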

Inbound listener workload-management

Long-running inbound connections to MQ present a different challenge, as the MQ cluster already performs workload management between the multiple instances of a cluster queue. Instead, the primary challenge is to ensure that each cluster queue has an application instance attached, even in the case that one application instance fails. Using a CCDT with randomization or prioritization is not a practical option, even in non-transactional scenarios, as startup sequence or randomization could cause messages to become stranded on a cluster queue with no applications attached.

The solution described in this article is for each application instance to listen to multiple queue managers concurrently, as described in Part 1.

An MQ resource adapter activation specification only supports connecting to a single queue manager, and connecting to a single queue, so what are the options for achieving this in a Java EE environment with MDB applications? You could deploy the application multiple times, but that would mean that other endpoints within the application, such as HTTP servlets, would also be deployed multiple times. Also, you would have the administrative overhead of deploying the application multiple times, and keeping the versions in sync.

The alternative is to create two message-driven beans (MDBs) that drive the same business logic, meaning that you will have two endpoints within a single Java EE application. You must do a bit of work, creating two MDB classes that extend from a parent class containing the business logic, and duplicating all the deployment information, such as resource references and transaction qualifiers, in both of those beans. You must also move static fields out into a separate class, to prevent the two MDBs from getting different copies. In most cases, these tasks are not difficult or time-consuming, and they make deployment and management of the twin-endpoint solution much simpler for operations teams. Therefore this article uses this approach.
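The inheritance pattern can be sketched as follows. The class names mirror those used in the example project, but the sketch is simplified so it stands alone: the @MessageDriven annotations, activation specifications, and javax.jms.Message parameter of the real MDBs are omitted, with a String standing in for the message payload.

```java
// Sketch of the twin-endpoint pattern: two MDB classes delegating to one base.
// In a real application each subclass would be annotated @MessageDriven, bound
// to a different activation specification (one per gateway queue manager), and
// onMessage would take a javax.jms.Message rather than a String.

// Shared mutable state lives in a separate class, so that WLMMDB1 and WLMMDB2
// do not end up with two independent copies of what should be one counter.
class SharedState {
    static final java.util.concurrent.atomic.AtomicLong processed =
            new java.util.concurrent.atomic.AtomicLong();
}

abstract class WLMMDBBase {
    // The business logic lives once, in the base class.
    public String onMessage(String requestBody) {
        SharedState.processed.incrementAndGet();
        return "Processed: " + requestBody;
    }
}

// Endpoint bound to the first gateway queue manager.
class WLMMDB1 extends WLMMDBBase { }

// Endpoint bound to the second gateway queue manager.
class WLMMDB2 extends WLMMDBBase { }
```

Both endpoints now run identical logic, while the application server sees two independent MDBs, each with its own activation specification, transaction attributes, and resource references declared in deployment metadata.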

Example Java EE applications

This article provides a set of Java EE applications that demonstrate workload-balanced inbound and outbound MQ client connections. They are designed to work together to give you a sandbox environment, where you can explore various aspects of the behavior. There is a set of sending applications, with HTTP servlet interfaces, that send request messages using the various patterns outlined in Part 1, and a single listener application that services the messages sent by all the servlets.

Overview of downloadable Java EE applications
Overview showing the Servlet and MDB applications communicating through the MQ hub

The examples are provided as Eclipse projects, with a set of precompiled deployable EAR files, as described below:

  • WLMJMSAttachLibrary -- Code library used within all of the applications to establish workload-balanced outbound connections. In the example projects and deployment, this library is bundled individually within each EAR that depends on it. There would be a number of benefits in pulling this out into a shared library, possibly managed via OSGi, to make this a common infrastructure library used by all applications.
  • WLMMDB -- An example EJB 3.0 message-driven bean (MDB) that listens for requests using two endpoints, and sends back a response using workload-balanced outbound connections. Implements the "Connecting a message listener to a queue" pattern from Part 1, and sends a reply.
  • WLMMDBEAR -- An Enterprise Application project to build an EAR for deployment of the WLMMDB project.
  • SendingServletApp -- A Java EE Web project with a set of example Servlet 2.5 applications that perform the following common messaging tasks:
    • FireAndForget -- Sends a message for processing by an application (WLMMDB), without specifying a JMSReplyTo destination. Implements the "Connecting for fire and forget" pattern from Part 1.
    • SimpleRequestResponse -- Sends a message for processing by an application (WLMMDB), and waits for a response to arrive using the same connection and queue manager as was used for the request. Implements the "Connecting for synchronous request/response" pattern from Part 1.
    • AdvancedRequestReply -- Sends a message for processing by an application (WLMMDB), and uses a dual-endpoint MDB (SendingAsyncReplyMDB) to listen for responses on a clustered reply queue and deliver them back to the requesting thread. Implements the "Minimizing timeout failures for synchronous request/response" pattern from Part 1.
  • SendingAsyncReplyMDB -- An example EJB 3.0 MDB and synchronous request/reply API interface that together show you how to manage a synchronous request/reply scenario with messages arriving on two separate clustered reply queues. Used by the AdvancedRequestReply servlet.
  • SendingServletAppEAR -- An Enterprise Application project to build an EAR for deployment of the SendingServletApp and SendingAsyncReplyMDB projects.

The examples produce debug information via Java logging. The logging level is set to INFO in the examples, which means that output is written to SystemOut in WebSphere Application Server environments. Change the logging level to FINE in the WLMJMSLogger class in test, staging, and production environments. With logging level FINE, data is written to WebSphere Application Server trace, and only when trace is enabled on the classes.
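The suppression behavior can be demonstrated with java.util.logging directly; the logger name below is a placeholder, not the actual name used by the WLMJMSLogger class.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Demonstrates why FINE-level debug output is suppressed until trace is
// enabled: records below the logger's effective level are never published.
class LogLevelDemo {
    static List<String> publishedAt(Level loggerLevel) {
        Logger logger = Logger.getLogger("WLMJMSLogger.demo"); // placeholder name
        logger.setUseParentHandlers(false);
        logger.setLevel(loggerLevel);
        List<String> published = new ArrayList<>();
        Handler capture = new Handler() {
            @Override public void publish(LogRecord r) { published.add(r.getMessage()); }
            @Override public void flush() { }
            @Override public void close() { }
        };
        logger.addHandler(capture);
        logger.info("connected to gateway");   // published at INFO and below
        logger.fine("selector built");         // published only at FINE and below
        logger.removeHandler(capture);
        return published;
    }
}
```

At level INFO only the first record is published; lowering the level to FINE (equivalent to enabling trace on the class) makes the debug record visible too.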

Outbound workload-management example

API provided by the WLMJMSAttachLibrary
Application logic obtaining a JMS Connection+Session+Producer via round-robin WLM from the WLMJMSAttachLibrary

The small example code library provided in the WLMJMSAttachLibrary project shows how workload balancing can be built into the application using multiple separate connection factories. The downloadable example code has the following features:

  • Supports transactions coordinated by the application server, including global (XA/JTA) transactions.
  • Performs efficient round-robin WLM between the connection factories.
  • Removes connection retry logic from the applications.
  • Handles the fact that connections in the pool might be returned broken, which is detected only when creating a MessageProducer/QueueSender/TopicPublisher.
  • Lets each application provide its own resource references to the connection factories, and hence its own container-managed security credentials.
  • Puts the Java EE operations team in control of mapping resource references to MQ gateways during deployment, to meet the principles described in Part 1.
  • Lets multiple modules share the same connection factories, and hence connection and session pools.
  • Returns failed gateways to the WLM selection once they become available again, while greatly minimizing the number of threads that attempt to create connections to the failed gateway (which is usually an expensive operation).

Inbound MDB listener example, with multiple endpoints

The MDB application provided in the WLMMDBEAR project shows you how to use Java inheritance to provide two MDB endpoints that drive the same business logic.

The business logic implemented in the example sends a simple message to the JMSReplyTo destination if specified, or to a default queue if no JMSReplyTo is specified. This allows it to service requests from all the fire-and-forget and request/reply example servlets. It generates a reply message with the same delivery mode (persistence), priority, and time-to-live (expiry) as the incoming request. The reply is sent back with the JMSCorrelationID set to the JMSMessageID of the request. Outbound connections to send replies are workload balanced via the WLMJMSAttachLibrary, meaning that replies might be sent via a connection to a different queue manager from the one that delivered the request to the MDB.
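The reply-construction rules just described can be sketched with a plain header holder standing in for a JMS message. MsgHeaders is purely illustrative; a real MDB would read the same fields from the incoming javax.jms.Message and apply them via the reply message and its MessageProducer.

```java
// Illustrative stand-in for JMS message headers; a real implementation reads
// these from javax.jms.Message (getJMSDeliveryMode(), getJMSPriority(), ...).
class MsgHeaders {
    int deliveryMode;      // persistent (2) or non-persistent (1)
    int priority;          // 0-9
    long timeToLive;       // expiry in milliseconds, 0 = never expires
    String messageId;      // assigned by MQ when the request is sent
    String correlationId;  // used to match replies to requests

    // Build a reply that mirrors the request's quality of service, and carries
    // the request's JMSMessageID as the reply's JMSCorrelationID.
    static MsgHeaders replyFor(MsgHeaders request) {
        MsgHeaders reply = new MsgHeaders();
        reply.deliveryMode = request.deliveryMode;
        reply.priority = request.priority;
        reply.timeToLive = request.timeToLive;
        reply.correlationId = request.messageId;
        return reply;
    }
}
```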

If the TransactionAttribute of the onMessage method is changed to NOT_SUPPORTED, then replies can be sent back even if the gateway queue manager that delivered the request becomes unavailable before the response is sent.

WLMMDB with two MDB endpoints sharing common business logic
Class diagram of the WLMMDB1 and WLMMDB2 MDBs that extend the base class WLMMDBBase

Sending servlets

Three servlets are provided in the SendingServletApp project, along with an MDB and request-correlation API example in the SendingAsyncReplyMDB project.

After deployment, you can access the servlets via a simple HTML index page at the root of the context root of the application. For example: http://localhost:9080/SendingServletApp/

Simple HTML index page to access the servlets
Index page, with links to the three servlets

Fire-and-forget example

The FireAndForget servlet, in the SendingServletApp project, obtains a workload managed connection via the WLMJMSAttachLibrary, and sends a message with an empty JMSReplyTo header. It does not attempt to receive a response.

The example uses a Java EE transaction, and sends a persistent message. This means that JDBC operations could be inserted into the business logic, and would be coordinated in an atomic transaction with the send of the message.

Simple Request/Reply example

The SimpleRequestReply servlet, in the SendingServletApp project, obtains a workload managed connection via the WLMJMSAttachLibrary, and sends a request message with a JMSReplyTo header set, and an expiry. It then uses the same connection to wait for a reply with the JMSCorrelationID set to the value returned by MQ in the JMSMessageID of the sent message.

The send of the request must be committed immediately, before waiting for the response. To achieve this you can send the request outside of any transaction context, as performed in the example, which is most efficient for non-persistent messages. For persistent messages it is more efficient to use an MQ transaction, either by committing directly against a JMS session obtained outside of any transaction context, or by using a bean-managed transaction context and committing via the UserTransaction interface. For a summary of how to control the Java EE transaction context, see the section Transaction context.
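The correlation step can be sketched as a one-line helper: the JMSMessageID that MQ assigned during the send becomes the value of a JMSCorrelationID message selector for the receive. The helper name is an assumption; only the selector syntax comes from JMS.

```java
// Builds a JMS message selector that matches the reply to a specific request.
// The messageId parameter is the JMSMessageID returned after the send; a real
// application would pass this selector to Session.createConsumer() and then
// call receive() with a timeout no longer than the request's expiry.
class ReplySelector {
    static String forRequest(String messageId) {
        return "JMSCorrelationID = '" + messageId + "'";
    }
}
```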

A single-connection, fully synchronous approach to request/reply messaging is very common in applications. However, when considering this approach, ask whether it is really necessary to wait for a response, or whether the response could instead be handled asynchronously at any point in the future by a message listener, using the two-way asynchronous messaging pattern.

No example of the two-way asynchronous messaging pattern is provided, because it is simply a combination of fire and forget plus a message listener, and examples of both are provided. Sometimes in a two-way asynchronous messaging scenario, the business logic requires the JMSMessageID of each request to be recorded in application state, such as by updating a database in the same transaction as sending the request.

Two-way asynchronous messaging is different from the special Advanced Request/Reply example explained below.

Advanced Request/Reply example

The AdvancedRequestReply servlet, contained in the SendingServletApp and SendingAsyncReplyMDB projects, represents a slightly more complex approach to synchronous request/reply messaging than most applications use. This example is provided because there are cases where MQ is used as the backbone of highly parallel query interfaces, for example as the workload optimization layer for a RESTful Web API.

In these cases the workload smoothing and parallelization features of MQ are critical, but the exactly-once delivery features are not. If a query times out then it has failed, and the end user of the API will encounter an error. What is important instead is that the number of failures encountered when any individual infrastructure component is stopped, or fails, is minimized. Here MQ has the benefit that the MQ cluster can automatically route replies along a different path from requests, but only if the thread waiting for the reply can listen to multiple queue managers for the response.

The reply correlation logic supplied in the AdvancedRequestReply example is an example of how to listen to multiple reply queues, and correlate replies for multiple requesting application threads, using a Java EE compliant and scalable approach.
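The essence of that reply-correlation logic can be sketched in plain Java. The class and method names below are illustrative, not the actual ReplyCorrelator API: requesting threads register a correlation ID and block, while MDB onMessage() threads hand each arriving reply to whichever thread is waiting for that ID; late replies with no registered waiter are discarded.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of reply correlation between requesting threads and the
// MDB threads that receive replies from multiple clustered reply queues.
class CorrelatorSketch {
    private final ConcurrentMap<String, BlockingQueue<String>> waiters =
            new ConcurrentHashMap<>();

    /** Called by the requesting thread after committing the send. */
    String awaitReply(String correlationId, long timeoutMs)
            throws InterruptedException {
        BlockingQueue<String> slot = new SynchronousQueue<>();
        waiters.put(correlationId, slot);
        try {
            // Blocks until an MDB thread delivers the reply, or times out (null).
            return slot.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            waiters.remove(correlationId); // always clean up the registration
        }
    }

    /** Called by MDB onMessage() threads for every reply that arrives. */
    boolean deliverReply(String correlationId, String body)
            throws InterruptedException {
        BlockingQueue<String> slot = waiters.get(correlationId);
        // Late replies (requester already timed out) are simply discarded.
        return slot != null && slot.offer(body, 1, TimeUnit.SECONDS);
    }
}
```

Because any MDB endpoint can call deliverReply(), the waiting thread effectively listens to every clustered reply queue at once, which is exactly what the single-connection approach cannot do.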

As with the Simple Request/Reply example, the send of the request must be committed before waiting synchronously for the response.

Components of the AdvancedRequestReply example
Servlet logic interacting with the ReplyCorrelator.requestReply() API of the SendingAsyncReplyMDB project

Exploring the Advanced Request/Reply scenario in detail

In order to test the full range of behavior of the AdvancedRequestReply example in failure scenarios, make the following changes to the responding MDB in the WLMMDB project:

  • Change the @TransactionAttribute(TransactionAttributeType.REQUIRED) to @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED) on the onMessage() methods of WLMMDB1 and WLMMDB2. Otherwise, the MDB will fail to send a reply back when the queue manager that delivered the request fails. This is because the send of the reply is joined into a transaction with the consume of the request, and the consume of the request cannot be completed.
  • Add a Thread.sleep() call to the WLMMDB logic, before obtaining the connection to send the reply. This gives you an opportunity to stop queue managers while the MDB is running, and see what happens on the requesting side.

In order to deploy the AdvancedRequestReply servlet and SendingAsyncReplyMDB MDB to a Java EE cluster, you will also need to create a separate MQ cluster queue manager alias for routing replies to each cluster member, by tailoring the JNDI resources at server-scope. For more information, see the section Creating the JMS resources in JNDI.

The sandbox MQ hub environment

In Part 1 we introduced the concept of sender and receiver gateway queue managers. The term gateway in this instance indicates that these queue managers are the way that an application gets messages into or out of the MQ network, and that each application is assigned a set of queue managers to use in the sending and receiving gateway roles. In this series of articles we call a group of queue managers that a set of applications connect to an MQ hub.

If multiple MQ hubs for different domains were to be joined together via MQ clustering, these queue managers might also become MQ cluster gateways. They might be members of multiple MQ clusters if MQ hubs in different domains were to choose to use different MQ clusters. However, there is only a single MQ cluster in the example sandbox environment. A good practice with MQ clusters is that less is more. MQ clusters make it easy to do a lot, so it is a good idea to resist the temptation to overcomplicate them by creating large numbers of overlapping clusters.

The sandbox MQ hub environment contains an MQ cluster of two queue managers. Both queue managers act as both sending gateways and receiving gateways. The queue managers are configured to accept client connections from the Java EE applications, and have all of the queues required to run the applications defined.

MQ sandbox environment created via the supplied MQSC scripts
The two gateway queue managers in the MQ hub with the objects created by the supplied MQSC scripts

To create the sandbox MQ environment on your Linux®, UNIX®, or Windows® machine with WebSphere MQ V7.0.1 or later installed, run the following commands from the directory where you have extracted the MQSC scripts Gateway1.MQSC and Gateway2.MQSC:

crtmqm GATEWAY1
crtmqm GATEWAY2
strmqm GATEWAY1
strmqm GATEWAY2
runmqsc GATEWAY1 < Gateway1.MQSC
runmqsc GATEWAY2 < Gateway2.MQSC

If you configure the sandbox on a Windows laptop or desktop that is regularly put into standby, and have problems with messages not arriving, timeouts, and connection errors, then check to see if any of your channels are in STOPPED status. The behavior of MQ on Windows is to stop channels on suspend and restart them on resume. If the restarts do not work for any reason, such as a power failure in suspend, then your channels may be left in STOPPED status and may need a manual restart. For more information, see Advanced Configuration and Power Interface (ACPI) in the MQ information center.

After the scripts have been used to configure the queue managers, you can use the WebSphere MQ Explorer to view and customize the configuration, or continue to use MQSC depending on your preference.

MQHUB cluster displayed in the MQ Explorer
Queue Manager Clusters section of MQ Explorer, with GATEWAY1 and GATEWAY2 listed as full repositories, and Cluster Queues of GATEWAY1 highlighted

Limitations of the sandbox MQ hub environment

The example configuration is intended to provide a simple sandbox development environment to try out the code examples. At minimum, the following limitations that exist in the sandbox environment should be addressed in test, staging and production environments:

  • Configuring high availability of the individual queue managers

Java EE application server environment

You can choose to deploy the applications to an existing single-server or clustered Java EE environment. Jython scripts are supplied that perform the configuration required in IBM WebSphere Application Server environments. A single-server environment has a single point of failure, so a clustered Java EE environment with at least two cluster members is recommended for test, staging, and production environments. For a single-server environment, such as the one in Figure 9, the AdvancedRequestReply example only consumes from SENDINGAPP.INST1.LISTENER.

A single server Java EE sandbox environment connecting to the MQ hub
The single server workload-balances sends through the MQ cluster and receives messages from both queue managers

For a two-server clustered environment (such as in a Network Deployment cell), it is necessary to customize the JNDI resources at server scope to point to SENDINGAPP.INST1.LISTENER on the first cluster member, and SENDINGAPP.INST2.LISTENER on the second cluster member. Otherwise the AdvancedRequestReply example will fail, as requests made by one Java EE cluster member might be delivered to the other Java EE cluster member. If you add additional cluster members, you will need to add additional queues and queue manager aliases in the MQ hub for each cluster member. For more information, see the section Creating the JMS resources in JNDI.

Two-server clustered Java EE environment connecting to the MQ hub
Two-server cluster with each cluster member using a different reply queue for each AdvancedRequestReply instance

Configuration scripts are provided only for WebSphere Application Server. For other Java EE environments you must perform the configuration manually. It is important to understand what the scripts are doing even for WebSphere Application Server, so let's look at each step performed by the scripts.

Tuning the MQ resource adapter

To meet the non-functional criteria set out in Part 1, you need to ensure that the MQ resource adapter in the Java EE environment is tuned to maximize the reliability of the applications. The following tuning parameters should be considered in any Java EE environment connecting to MQ. In WebSphere Application Server, MQ resource adapter custom properties must be set on the cell scope MQ resource adapter to be applied correctly by MQ. The supplied script shows how to set the same settings on all MQ resource adapters, at all scopes, which is the safest approach to ensure that MQ applies the properties.

  • maxConnections -- MQ resource adapter custom property. The overall maximum number of connections that can be made by the inbound MQ resource adapter within the application server. The example script sets maxConnections to its maximum value (2147483647), so that tuning can be performed on a per-application basis without encountering a JVM-wide limit.
  • connectionConcurrency -- MQ resource adapter custom property. This setting was made obsolete in V7.1 of the WebSphere MQ resource adapter, where it always has the value 1. WebSphere Application Server V7 and V8 ship with V7 of the MQ resource adapter, where this setting should be set to 1 via tuning to maximize the efficiency of MDBs.
  • reconnectionRetryCount and reconnectionRetryInterval -- MQ resource adapter custom properties. When an individual gateway queue manager becomes unavailable, an application with two MDB endpoints will continue to consume messages from the second gateway that is still available, and the MQ cluster stops delivering messages to the unavailable queue manager. However, once the queue manager becomes available again, the MQ cluster starts delivering messages to it almost immediately. Therefore it is very important that the MDB endpoint reconnect quickly to that queue manager, to prevent messages from being stranded. If reconnectionRetryCount and reconnectionRetryInterval are left at their default values, then the endpoint might take a long time to reconnect, or if the queue manager is down for more than 25 minutes it will never reconnect without manual intervention. The example script sets reconnectionRetryCount to its maximum value (2147483647) and reconnectionRetryInterval to 5000ms.
  • startupRetryCount and startupRetryInterval -- MQ V7.0.1 Java system properties, and resource adapter custom properties in MQ V7.1 and later. By default, the MQ resource adapter prevents you from starting an application if one of the queue managers it connects to via an MDB endpoint is unavailable. However, in our scenario, we need to be able to start the application when one or more of our gateways are down, and expect the MQ resource adapter to continue attempting to reconnect to the queue managers until they become available. The example script sets both the V7.0.1 system properties and the V7.1 resource adapter custom properties (if available) to the same values as reconnectionRetryCount and reconnectionRetryInterval, allowing the application to start successfully and enter normal retry logic to connect to any unavailable gateways.
  • -- Java system property. In order for the workload-management logic in the WLMJMSAttachLibrary to detect when a failed queue manager becomes available again, some threads need to attempt connections to the unavailable queue manager. If these connections take too long to raise exceptions, because the TCP/IP stack is slow to report an error, then business logic could fail even though one gateway is still available. The example script sets the system property to 10 seconds, so that the MQ connection logic will set a SO_TIMEOUT when establishing TCP/IP connections to ensure that they fail reasonably quickly when a queue manager is unavailable.
  • HBINT -- Server Connection (SVRCONN) Channel property. When a queue manager becomes unavailable, there will be a number of in-use or pooled connections to that queue manager. It is important that exceptions are thrown quickly to threads attempting to use these connections, and if the negotiated heartbeat on the channel to MQ is too high, then it can take a long time for exceptions to be thrown. The MQSC scripts for setting up the sandbox MQ hub create a SVRCONN channel called WAS.CLIENTS that has HBINT(5) specified, to cause a heartbeat every 5 seconds.
  • SHARECNV -- Server Connection (SVRCONN) Channel property. MQ has a feature to let multiple connections share a single TCP/IP network socket and receiving thread within the queue manager. When queue managers are thread constrained or TCP/IP socket constrained, a higher SHARECNV limit has the benefit of reducing thread and socket usage. Conversely, sharing conversations introduces resource contention in certain situations. The MQSC scripts for setting up the sandbox MQ hub create a SVRCONN channel called WAS.CLIENTS that has SHARECNV(1) specified, meaning that each connection has its own TCP/IP socket, and MQ V7.0 and later features are enabled on the channel. The SHARECNV(1) setting is a sensible starting point for tuning, but might not be optimal in all environments. For more information on SHARECNV, see MQI client: Default behavior of client-connection and server-connection in the MQ information center.

A supplied Jython script configures the MQ resource adapter custom properties and Java system properties described above, on all servers within any WebSphere Application Server cell. Run it as follows from your Deployment Manager or single-server profile:

  • Windows: C:\path\to\AppServer\profiles\PROFILE_NAME\bin\wsadmin -lang jython -f
  • Linux and UNIX: /path/to/AppServer/profiles/PROFILE_NAME/bin/wsadmin.sh -lang jython -f

Client CCSID

In the example configuration, the CCSID setting of connection factories and activation specifications is set to 1208 (UTF-8).

This UNICODE UTF-8 setting ensures all characters supported by Java can be preserved, regardless of the codepage in which they were encoded in the message. Specifying a UNICODE client CCSID in Java also has some efficiency benefits, as Java eventually needs to convert to UNICODE (UTF-16) for its internal string representations. UTF-8 (1208) to UTF-16 (1200) string conversion is optimized, and UTF-8 generally makes more efficient use of network bandwidth than UTF-16, as well as the space in MQ headers.

Consider also setting the codepage of your queue managers to 1208 to ensure all characters can be preserved throughout the message lifetime. UTF-8 is a well supported codepage, and a wide range of conversion tables are available on all platforms for attaching clients and queue managers. If you consider changing an existing queue manager to use 1208, then codepage conversion support should be tested in your environment, for all applications and queue managers that connect to it (and do not themselves use a UNICODE codepage).

UTF-8 does require more than one byte per character for characters outside the ASCII range. As a result, the length of string that can be supplied in fixed-length MQ header fields is reduced when multi-byte characters are used.
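The byte-length trade-offs above are easy to check with standard Java, since the String class can encode to any charset. This small sketch (class and string values are illustrative) shows ASCII text taking one byte per character in UTF-8 and two in UTF-16, and an accented character expanding in UTF-8:

```java
import java.nio.charset.StandardCharsets;

public class CcsidLengths {
    // UTF-8 (CCSID 1208) encodes ASCII in one byte per character, but
    // needs more bytes for characters outside the ASCII range.
    static int utf8Length(String s)  { return s.getBytes(StandardCharsets.UTF_8).length; }

    // UTF-16 (CCSID 1200) uses at least two bytes per character.
    static int utf16Length(String s) { return s.getBytes(StandardCharsets.UTF_16BE).length; }

    public static void main(String[] args) {
        String ascii = "ORDER-12345";   // ASCII only
        String accented = "caf\u00e9";  // 'é' needs two bytes in UTF-8

        System.out.println(utf8Length(ascii));     // 11 bytes: 1 byte per char
        System.out.println(utf16Length(ascii));    // 22 bytes: 2 bytes per char
        System.out.println(utf8Length(accented));  // 5 bytes: 3 ASCII + 2 for 'é'
    }
}
```

The same expansion applies to any fixed-length MQ header field populated with non-ASCII data.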

Creating the JMS resources in JNDI

The JNDI resources created by the example scripts for WebSphere Application Server are as follows:

  • MQ JMS Destinations for the WLMMDB MDB and SimpleRequestReply Servlet:
    • jms/WLMMDB.REQUEST -- JMS Queue pointing to WLMMDB.REQUEST
    • jms/WLMMDB.BACKOUT -- JMS Queue pointing to WLMMDB.BACKOUT
    • jms/SENDINGAPP.REPLY -- JMS Queue pointing to SENDINGAPP.REPLY
  • MQ JMS Destinations for the AdvancedRequestReply Servlet -- See comments in the script if you plan to deploy and test the AdvancedRequestReply Servlet in a clustered Java EE environment.
    • jms/SENDINGAPP.LISTENER -- JMS Queue pointing to SENDINGAPP.INST1.LISTENER. Must be redefined at server-scope in a clustered Java EE environment, pointing at a different queue on each server.
    • jms/SENDINGAPP.APPINST.QMALIAS -- JMS Queue pointing to queue name SENDINGAPP.INST1.LISTENER and cluster queue manager alias name SENDINGAPP.INST1. Must be redefined at server-scope in a clustered Java EE environment, pointing at a different cluster queue manager alias on each server.
  • MQ JMS Activation Specifications -- stopEndpointIfDeliveryFails is set to false on the activation specifications to prevent WebSphere Application Server from stopping the endpoint if delivery of a message fails. maxPoolSize is set to 20 on the activation specifications, to specify a maximum of 20 concurrent onMessage invocations per MDB endpoint. This is a key tuning parameter for your application. ccsid is set to 1208 on the activation specifications.
    • jms/GATEWAY1_AS -- Activation Specification pointing at GATEWAY1
    • jms/GATEWAY2_AS -- Activation Specification pointing at GATEWAY2
  • MQ JMS Connection Factories -- maxConnections on connectionPool and sessionPool are set to 20 on the connection factories, to allow a maximum of 20 threads across each application server to concurrently use connections from that connection factory (the maximum number of MQ connections seen in the queue manager could be much larger, up to approximately 400 per connection factory). These are key tuning parameters for your application. For more information on connection and session pools, see the developerWorks article Using JMS connection pooling with WebSphere Application Server and WebSphere MQ. ccsid is set to 1208 on the connection factories.

    If you are using WebSphere Application Server V6.1, or any Java EE application server with a WebSphere MQ V6.0 JMS client, you must configure a REPLYTOSTYLE=MQMD custom property on your connection factories. This property ensures that the ReplyToQMgr field of the MQMD is used to route replies back to the correct queue manager. For more information, see Changes to the WebSphere MQ JMSReplyTo header field.

    • jms/GATEWAY1_CF -- Connection Factory pointing at GATEWAY1
    • jms/GATEWAY2_CF -- Connection Factory pointing at GATEWAY2

To run the supplied Jython script that creates these JNDI resources, use the following commands:

  • Windows: C:\path\to\AppServer\profiles\PROFILE_NAME\bin\wsadmin -lang jython
  • Linux and UNIX: /path/to/AppServer/profiles/PROFILE_NAME/bin/wsadmin.sh -lang jython

Deploying the applications

The applications use resource references to look up all JNDI objects, as is good practice for all Java EE applications. These are bound to the actual JNDI objects during deployment.

Default bindings are provided in ibm-ejb-jar-bnd.xml and ibm-web-bnd.xml files in EJB and Web projects, to bind the resource references to the JMS resources created by the script.

In Rational Application Developer, or in Eclipse with the WebSphere Application Server Developer Tools for Eclipse installed, you should be able to add the WLMMDBEAR and SendingServletAppEAR projects to a configured WebSphere Application Server development environment via the Servers view.

Servers view in Rational Application Developer with the applications installed
Servers view in Rational Application Developer with a WebSphere Application Server V8 server and pop-up menu open on Add and Remove

For a Network Deployment environment, you can deploy the applications from the EAR files using the administrative console. You can use scripting to deploy the applications, but no script is supplied with this article. You can also change the resource references to point at different destinations to test different behavior, such as changing the FireAndForget servlet to publish on a topic. The bindings defined in the applications are as follows:

  • jms/FireAndForgetTarget (javax.jms.Destination) -- bound to jms/WLMMDB.REQUEST. Destination to send messages to, for the FireAndForget servlet.
  • jms/RequestQueue (javax.jms.Queue) -- bound to jms/WLMMDB.REQUEST. Queue to send messages to, for the SimpleRequestReply servlet.
  • jms/SimpleReplyQueue (javax.jms.Queue) -- bound to jms/SENDINGAPP.REPLY. Queue to set in the JMSReplyTo of requests, and to receive replies from, for the SimpleRequestReply servlet.
  • jms/AdvancedReplyQueue (javax.jms.Queue) -- bound to jms/SENDINGAPP.APPINST.LISTENER. JMSReplyTo to put in requests, for the AdvancedRequestReply servlet.
  • jms/DefaultReplyQ (javax.jms.Queue) -- bound to jms/WLMMDB.BACKOUT. Queue to send replies to when the request does not have a JMSReplyTo set, for the WLMMDB app.
  • jms/GWCF1 (javax.jms.ConnectionFactory) -- bound to jms/GATEWAY1_CF. Connection factory to the first gateway for workload-balanced connections, in all apps.
  • jms/GWCF2 (javax.jms.ConnectionFactory) -- bound to jms/GATEWAY2_CF. Connection factory to the second gateway for workload-balanced connections, in all apps.
  • WLMMDB1 (MDB endpoint, JCA adapter) -- bound to jms/GATEWAY1_AS and jms/WLMMDB.REQUEST. Activation specification and queue for the first receiving gateway, for the WLMMDB app.
  • WLMMDB2 (MDB endpoint, JCA adapter) -- bound to jms/GATEWAY2_AS and jms/WLMMDB.REQUEST. Activation specification and queue for the second receiving gateway, for the WLMMDB app.
  • ReplyCorrelatorMBD1 (MDB endpoint, JCA adapter) -- bound to jms/GATEWAY1_AS and jms/SENDINGAPP.LISTENER. Activation specification and queue for the first clustered reply queue sending gateway, for the SendingAsyncReplyMDB app.
  • ReplyCorrelatorMBD2 (MDB endpoint, JCA adapter) -- bound to jms/GATEWAY2_AS and jms/SENDINGAPP.LISTENER. Activation specification and queue for the second clustered reply queue sending gateway, for the SendingAsyncReplyMDB app.

Using messaging in your Java EE applications

While they are provided AS-IS, the examples have been built with experience from real customer situations. There are TODO comments in the code that should help you see where the business logic should be inserted if you choose to use them as a basis for building applications. However, there are some areas that each application team will need to think carefully about when implementing any MQ JMS code (whether based on these examples or not):

When, why, and how to use messaging

Think about what you are trying to achieve in your application, and try to use JMS messaging in the best way for that scenario. In general, JMS messaging is most flexible when used for fire-and-forget, so that each message flows from producer to consumer without the producer needing to wait for it to be processed. Good examples of this are sending a critical update from one application to another, such as "debit account X with 10 dollars," or publishing an event such as "customer X just reached their overdraft limit."

MQ is designed to be as careful with data as a database, and customers rely on its data integrity just as heavily. So if you need to make something happen exactly once in another application, then consider using a global transaction to coordinate sending a persistent MQ message with updating your local database. This means you know the message will be delivered exactly-once to the target application if, and only if, your local database update is successful.

MQ also provides fan-out of data from one application to many applications, via publish/subscribe messaging. The sender of the data does not need to know which applications need a copy, as that is managed by the MQ infrastructure.

Asynchronous fire-and-forget messaging, whether queue based or publish/subscribe, can happen in both directions between applications. As a result, you can achieve many of the same outcomes as with request/reply scenarios, while leaving the blocks of application code (or separate applications) free to work and scale independently, and gain the benefit of exactly-once delivery without needing complex compensation logic within your code.
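The two-way asynchronous pattern needs some bookkeeping so that a reply arriving at any later time can be matched to its request. A minimal sketch of that correlation logic in plain Java follows; it is independent of the JMS API, and all class and method names are illustrative. In the real application, deliverReply would be called from the reply listener MDB's onMessage with the JMSCorrelationID and body of the reply:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of correlation bookkeeping for two-way asynchronous messaging:
// the sender records each outstanding request by correlation ID, and the
// reply listener completes it whenever the reply arrives.
public class ReplyCorrelator {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called when sending a request: returns a future the caller can
    // attach further processing to, at any later time.
    public CompletableFuture<String> register(String correlationId) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        return future;
    }

    // Called by the reply listener with the correlation ID and body of a
    // reply message. Returns false for an unknown (orphaned) reply, which
    // the listener can then route to clean-up handling.
    public boolean deliverReply(String correlationId, String body) {
        CompletableFuture<String> future = pending.remove(correlationId);
        return future != null && future.complete(body);
    }
}
```

Note that the requester does not block: it registers the future, sends the request in fire-and-forget style, and continues, which is what lets the two sides scale independently.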

However, there are also many synchronous request/reply scenarios where MQ messaging adds significant value over a simpler protocol, such as HTTP, particularly due to the prioritization and workload smoothing features of the protocol, as well as the flexible routing capabilities of MQ topologies and clusters. For these scenarios, make sure you consider what happens when a timeout occurs. As with HTTP, after a timeout the requester does not know whether the responder has received (or will receive) the message. However, with JMS, you have the choice over whether MQ keeps or discards the reply message.

If you choose to keep reply messages (that is, you do not set an expiry on them), then ensure you have some process to clean up orphaned reply messages. Otherwise, they will build up over time on the queues and eventually affect the efficiency of the system.
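One simple clean-up policy is to treat a reply as orphaned once it is older than a retention window chosen to exceed the longest request timeout the application uses. The sketch below expresses that check in plain Java against the message's send time (in JMS, the JMSTimestamp header, in milliseconds since the epoch); the class name and window value are illustrative:

```java
// Sketch of the age test a reply-queue clean-up task might apply when
// replies are kept without an expiry. A reply is orphaned once its send
// timestamp is older than the retention window.
public class OrphanedReplyPolicy {
    private final long retentionMillis;

    public OrphanedReplyPolicy(long retentionMillis) {
        this.retentionMillis = retentionMillis;
    }

    // sendTimestamp and now are both milliseconds since the epoch.
    public boolean isOrphaned(long sendTimestamp, long now) {
        return now - sendTimestamp > retentionMillis;
    }
}
```

A periodic task could browse the reply queue, apply this test to each message, and consume and log those that fail it.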

Security
For an overview of how to implement and troubleshoot security for connections from WebSphere Application Server to MQ, see 2035 MQRC_NOT_AUTHORIZED Connecting to WebSphere MQ from WebSphere Application Server via CLIENT Bindings.

Transaction context

Think carefully about the transaction context that exists whenever you create a JMS session, or send/receive a message. While this is a significant topic in its own right that cannot be fully covered in this article, it can be difficult to find the information you need without first knowing the basics. The various transaction contexts that might exist for any method call in a Java EE environment are summarized as follows:

  • A Java EE global transaction coordinated between multiple MQ queue managers and/or databases. This context could be established by the container, or controlled via your business logic using the UserTransaction interface.
  • An unspecified transaction context, often referred to as Local Transaction Containment (LTC), allowing you to control local MQ transactions directly. Here the transacted boolean parameter on the createSession() method affects the MQ behavior as follows:
    • transacted=false -- Messages are sent immediately on send(). This setting is generally the most efficient for non-persistent messages.
    • transacted=true -- One or more operations on an individual JMS Session can be coordinated using the commit() and rollback() methods. When sending messages within an LTC, using a transacted session is generally more efficient for persistent messages than using an untransacted session. The container might still commit or roll back the transaction at a transaction boundary, if actions are left uncommitted by the bean logic.

    For more information on the behavior of WebSphere Application Server in an unspecified transaction context, see Local transaction containment in the WebSphere Application Server information center.

For EJBs, you control whether the container or your logic demarcates transactions by setting the TransactionManagement attribute of the bean to CONTAINER or BEAN, respectively.

For container managed transactions, you choose whether you want a transaction or an LTC on an individual method by setting TransactionAttribute to REQUIRED or NOT_SUPPORTED on that method.

Use the TransactionAttribute on MDB onMessage methods to control whether the MQ resource adapter endpoint uses a transaction to consume the messages from the MQ queue.

For servlets, the transaction context is unspecified by default, and can be controlled via the UserTransaction interface.

For more information on Java EE transaction semantics, see the appropriate version of the Java EE and EJB specifications for your application.

To help you test various combinations of Java EE parameters to see the transaction context established, the examples log the transaction context in WebSphere Application Server environments. The examples use the documented SPI to obtain the transaction context.

Persistent or non-persistent

Setting your messages to be persistent does not in itself give you exactly-once delivery, especially for client connections, as an application or network can still fail in a way that leaves an application unsure whether it successfully sent or processed a message. Exactly-once delivery is only achieved when the application coordinates the send or receive of a message with another action, such as sending another MQ message or updating a database. Simply changing a message from non-persistent to persistent prevents loss of data only while the message is in the MQ network. It does not protect against an application losing the data before it enters the MQ network, or after it leaves it.

While there are no hard-and-fast rules, a rule of thumb is to use persistent messages, within a transaction, when coordinating multiple actions in a way that gives you exactly-once delivery. In other scenarios, use non-persistent messages, and ideally send them outside of any Java EE transaction scope on a non-transacted JMS Session, since that gives the best performance.

One exception is where you need to coordinate delivery of an MQ message with an action that cannot be included in a Java EE transaction, such as sending an e-mail or writing to a file. In these cases, a common approach is at-least-once processing, where you order the send, receive, or commit of an MQ message with the desired operation so that the operation is repeated in the case of any failure. For example, send the e-mail before committing the delete of the message. You can combine this approach with duplicate checking logic within the application, which is a good fit for persistent messages.
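The duplicate-checking side of at-least-once processing can be sketched as follows. After a failure the same message may be redelivered, so the consumer records the ID of each message it has already acted on (in JMS, the JMSMessageID header is a natural key) and skips repeats. This in-memory form is illustrative only; a production version would persist the IDs, typically in the same database transaction as the business update:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of duplicate checking for at-least-once processing: record each
// message ID as it is processed, and skip any ID already seen.
public class DuplicateChecker {
    private final Set<String> processedIds = ConcurrentHashMap.newKeySet();

    // Returns true if this message ID has not been seen before and was
    // recorded now; false if it is a redelivery that should be skipped.
    public boolean markProcessed(String messageId) {
        return processedIds.add(messageId);
    }
}
```

In the e-mail example, the consumer would call markProcessed before sending the e-mail, send it only when the call returns true, and then commit the message delete.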

For these at-least-once scenarios, it is best to use a transacted JMS session, or Java EE transaction context, since it lets you control when the commit of the send or receive of a message occurs, and MQ is optimized to handle persistent messages within a transaction.
