Using JMS connection pooling with WebSphere Application Server and WebSphere MQ, Part 2

This two-part article series explains how JMS connection pooling works in WebSphere Application Server and WebSphere MQ. Part 2 describes connection pool error handling, configuring the pool to handle concurrent connection requests, and how WebSphere Application Server manages JMS connections to WebSphere MQ.


Paul Titheridge (PAULT@uk.ibm.com), Software Development Engineer, WebSphere MQ Support team, IBM WebSphere and Java Messaging Support

Paul Titheridge joined IBM in September 1995, after graduating from Exeter University. Following spells in the Voice and Business Integration departments, Paul is currently a member of the WebSphere and Java Messaging Support team, resolving problems for customers who use WebSphere MQ and WebSphere Application Server.



26 September 2007


Introduction

Creating connections from IBM® WebSphere® Application Server to a Java™ Message Service (JMS) provider such as WebSphere MQ is costly in terms of both time and processor requirements. To improve performance, WebSphere Application Server maintains a pool of free connections that can be given to applications when they request a connection to the JMS provider. This two-part article series explains how JMS connection pooling works in WebSphere Application Server and WebSphere MQ. Part 1 of this article series described how the free connection pool is used, how the contents of the pool are managed, and how the various properties of the pool work together. Part 2 describes connection pool error handling, configuring the pool to handle concurrent connection requests, and how WebSphere Application Server manages JMS connections to WebSphere MQ.

WebSphere Application Server maintains a pool of connections to a JMS provider in order to improve performance. Connections are taken out of the pool when JMS applications need to communicate with the JMS provider, and are returned to the pool when the application has finished with them.
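
As a reminder of what this looks like in application code, here is a minimal sketch (it assumes the jms/CF1 factory used throughout this series). The important point is that close() does not destroy the connection; the Connection Manager simply returns it to the factory’s free pool:

Listing 1. A JMS application borrowing and returning a pooled connection (illustrative sketch)

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.InitialContext;

public class PooledConnectionExample {
    public void doWork() throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/CF1");

        // Taken from the factory's free pool if a connection is available there
        Connection connection = cf.createConnection();
        try {
            // ... create sessions, send or receive messages ...
        } finally {
            // Returned to the factory's free pool, not physically closed
            connection.close();
        }
    }
}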

Connection pool purge policy

The connection pool Purge policy comes into play if an error is detected when an application is using a JMS connection to a JMS provider. The Connection Manager can either:

  • Close the connection that encountered the problem. Any other connections created from the factory (those in use by other applications, and those that are in the factory’s free pool) are left alone. This is known as the FailingConnectionOnly Purge policy and is the default behaviour.
  • Close the connection that encountered the problem, discard any connections in the factory’s free pool, and mark any in-use connections as stale, so that the next time an application tries to perform a connection-based operation on one of them, it receives a StaleConnectionException. For this behaviour, set the Purge policy to EntirePool.

Below are some examples of how the Purge policy property works.

Purge policy set to FailingConnectionOnly

Using the example from Part 1, two MDBs are deployed into the application server, each one using a different listener port. Both listener ports use the jms/CF1 connection factory. After 10 minutes, MDBListener1 is stopped, and the connection that this listener port was using is returned to the connection pool:

Figure 1. MDBListener1 is stopped, and connection c1 is put into jms/CF1’s free pool

Suppose MDBListener2 encounters a network error while polling the JMS destination. What happens? The listener port shuts down, and because the Purge policy for the jms/CF1 connection factory is set to FailingConnectionOnly, the Connection Manager will throw away only the connection that was used by MDBListener2. The connection in the free pool is left where it is:

Figure 2. MDBListener2 stops due to a connection error, so connection c2 is closed. Connection c1 is left in the free pool.

If the user now restarts MDBListener2, the Connection Manager passes the connection from the free pool to the listener:

Figure 3. MDBListener2 restarts, and gets connection c1 from the free pool.

Purge policy set to EntirePool

This behaviour is more interesting. Imagine that we have three MDBs installed into our application server, each one using its own listener port. The listener ports have created connections from the jms/CF1 factory. After a few minutes, MDBListener1 is stopped, and its connection c1 is put into jms/CF1’s free pool:

Figure 4. MDBListener1 is stopped, and connection c1 is put into jms/CF1’s free pool.

When MDBListener2 detects a network error, it shuts itself down and closes c2. The Connection Manager now closes the connection in the free pool. However, the connection being used by MDBListener3 is left alone:

Figure 5. MDBListener2 stops due to a network error, so connection c2 is closed. Connection c1 is also closed, and c3 is left open.

What should the purge policy be set to?

As mentioned earlier, the default value of the Purge policy for JMS connection pools is FailingConnectionOnly. However, it is recommended that you set the Purge policy to EntirePool, because in most cases, if an application detects a network error on its connection to the JMS provider, it is likely that all open connections created from the same connection factory have the same problem. If the Purge policy is set to FailingConnectionOnly, the Connection Manager leaves all of the connections in the free pool. The next time an application tries to create a connection to the JMS provider, the Connection Manager returns one from the free pool if one is available, but when the application tries to use it, it will encounter the same network problem as the first application did.

Now, consider the same situation with the Purge policy of EntirePool. As soon as the first application encounters the network problem, the Connection Manager discards the failing connection and closes all connections in the factory’s free pool. When a new application starts up and tries to create a connection from the factory, the Connection Manager will try to create a new one, as the free pool is empty. Assuming that the network problem has been resolved, the connection returned to the application will be valid.
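
The difference shows up in application code that retries after a failure. The sketch below is purely illustrative (the destination jms/Q1 is assumed, and error handling is reduced to a single retry): with the Purge policy set to EntirePool, the second attempt is served by a newly created connection rather than by another broken one from the free pool.

Listing 2. Retrying after a connection failure (illustrative sketch)

import javax.jms.*;
import javax.naming.InitialContext;

public class PurgePolicyRetryExample {
    public void sendWithRetry(String text) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/CF1");
        Queue queue = (Queue) ctx.lookup("jms/Q1");   // assumed destination for this example

        for (int attempt = 1; attempt <= 2; attempt++) {
            Connection connection = cf.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage(text));
                return;                    // success, no retry needed
            } catch (JMSException e) {
                // A broken or stale connection typically fails here
                if (attempt == 2) {
                    throw e;               // second failure: give up
                }
                // Otherwise fall through and ask the factory for another connection
            } finally {
                connection.close();        // hand the connection back to the pool
            }
        }
    }
}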

Advanced connection pool properties

WebSphere Application Server V6 introduces a number of advanced connection pooling properties, as described below.

Surge protection

Imagine that you have 50 EJBs that all create JMS connections from the same connection factory as part of their ejbCreate() method. If all of these beans are created at the same time, and there are no connections in the factory’s free connection pool, the application server will try to create 50 JMS connections to the same JMS provider simultaneously, putting quite a load on both WebSphere Application Server and the JMS provider.
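
For illustration, each of those beans might look something like the sketch below (EJB 2.x style, with an assumed resource reference to the jms/CF1 factory); every ejbCreate() call potentially drives a new physical connection to the provider:

Listing 3. An EJB that opens a JMS connection in ejbCreate() (illustrative sketch)

import javax.ejb.CreateException;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.naming.InitialContext;

public class WorkerBean implements SessionBean {
    private Connection connection;

    public void ejbCreate() throws CreateException {
        try {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf =
                (ConnectionFactory) ctx.lookup("java:comp/env/jms/CF1");
            // If the free pool is empty, this opens a new physical connection
            connection = cf.createConnection();
        } catch (Exception e) {
            throw new CreateException(e.getMessage());
        }
    }

    public void ejbRemove() {
        try {
            if (connection != null) {
                connection.close();    // return the connection to the pool
            }
        } catch (JMSException e) {
            // nothing useful to do during removal
        }
    }

    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void setSessionContext(SessionContext ctx) {}
}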

The surge protection properties can prevent this situation by limiting the number of JMS connections that can be created from a connection factory at any one time, and staggering the creation of additional connections. It does this by using two properties: Surge threshold and Surge creation interval.

When EJB applications try to create a JMS connection from a connection factory, the Connection Manager checks to see how many connections are being created. If that number is less than or equal to the value of the Surge threshold property, the Connection Manager continues opening new connections. But if the number of connections being created exceeds the Surge threshold property, then the Connection Manager will wait for the period of time specified by the Surge creation interval property before creating and opening a new connection.
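
The following is not WebSphere Application Server source code; it is just a pseudo-implementation (with invented field names) of the decision described above, to make the roles of the two properties concrete:

Listing 4. Pseudo-implementation of the surge protection check (illustration only)

import java.util.concurrent.atomic.AtomicInteger;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;

public class SurgeProtectionSketch {
    private final ConnectionFactory factory;
    private final int surgeThreshold;             // the "Surge threshold" property
    private final long surgeCreationIntervalMs;   // the "Surge creation interval", in milliseconds
    private final AtomicInteger beingCreated = new AtomicInteger();

    public SurgeProtectionSketch(ConnectionFactory factory,
                                 int surgeThreshold, long surgeCreationIntervalMs) {
        this.factory = factory;
        this.surgeThreshold = surgeThreshold;
        this.surgeCreationIntervalMs = surgeCreationIntervalMs;
    }

    public Connection createPhysicalConnection()
            throws JMSException, InterruptedException {
        // If more connections are already being opened than the surge threshold
        // allows, wait for one surge creation interval before opening this one.
        if (beingCreated.get() > surgeThreshold) {
            Thread.sleep(surgeCreationIntervalMs);
        }
        beingCreated.incrementAndGet();
        try {
            return factory.createConnection();    // open the new physical connection
        } finally {
            beingCreated.decrementAndGet();
        }
    }
}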

Figure 2 in Part 1 of this article series shows what happens when an application calls ConnectionFactory.createConnection(). Below is a more detailed diagram showing how the Surge properties fit into this process:

Figure 6. How JMS connections are retrieved from the free connection pool when surge protection is enabled

Stuck connections

WebSphere Application Server V6 provides a way to detect "stuck" JMS connections. To use this function, you must set three properties: Stuck timer time, Stuck time, and Stuck threshold. But what exactly is a stuck JMS connection? A JMS connection is considered stuck if a JMS application uses that connection to send a request to the JMS provider, and the provider does not respond within a certain amount of time.

Part 1 of this article series discussed the pool maintenance thread, which runs periodically and checks the contents of a Connection Factory’s free pool, looking for connections that have either gone unused for a period of time, or have been in existence for too long. To detect stuck connections, the application server also manages a Stuck Connection thread that checks the state of all active connections created from a Connection Factory to see if any of them are waiting for a reply from the JMS provider. If the thread finds one waiting for a response, it determines how long it has been waiting, and compares this time to the value of the Stuck time property.

If the time taken for the JMS provider to respond exceeds the time specified by the Stuck time property, the application server marks the JMS connection as stuck. How often does the Stuck Connection thread run? Good question! It is determined by the value of the Stuck timer time property, whose default value is 0, which means that stuck connection detection is disabled. For example, suppose our connection factory jms/CF1 has the Stuck timer time property set to 10 and the Stuck time set to 15. The Stuck Connection thread will wake up every 10 seconds and check whether any connection created from jms/CF1 has been waiting longer than 15 seconds for a response from WebSphere MQ.
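
As a rough pseudo-implementation (not the actual application server code; the interface and names below are invented purely for illustration), the Stuck Connection thread behaves something like this:

Listing 5. Pseudo-implementation of the stuck connection check (illustration only)

import java.util.Collection;
import java.util.Timer;
import java.util.TimerTask;

public class StuckConnectionSketch {

    /** Minimal view of a managed connection, invented for this illustration. */
    public interface ManagedConnection {
        boolean isWaitingForProvider();   // a request has been sent and no reply received yet
        long waitingSinceMillis();        // when the request was sent
        void markStuck(boolean stuck);
    }

    public static void startStuckTimer(final Collection<ManagedConnection> activeConnections,
                                       long stuckTimerTimeSeconds,      // "Stuck timer time"
                                       final long stuckTimeSeconds) {   // "Stuck time"
        Timer timer = new Timer("StuckConnectionThread", true);
        timer.schedule(new TimerTask() {
            public void run() {
                long now = System.currentTimeMillis();
                for (ManagedConnection connection : activeConnections) {
                    // A connection is stuck if it has waited longer than Stuck time
                    boolean stuck = connection.isWaitingForProvider()
                        && (now - connection.waitingSinceMillis()) > stuckTimeSeconds * 1000L;
                    connection.markStuck(stuck);
                }
            }
        }, stuckTimerTimeSeconds * 1000L, stuckTimerTimeSeconds * 1000L);
    }
}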

Suppose an EJB creates a JMS connection to WebSphere MQ using jms/CF1, and then tries to create a JMS Session using that connection by calling Connection.createSession(). However, something is preventing the JMS provider from responding to the request – perhaps the machine has frozen, or a process running on the JMS provider is deadlocked, preventing any new work from being processed:

Figure 7. Time 0 seconds. EJB1 calls createSession() using c1

Ten seconds after the EJB called Connection.createSession(), the Stuck Connection thread wakes up and looks at the active connections created from jms/CF1. There is only one active connection – c1. EJB1 has been waiting 10 seconds for a response to a request it sent down c1, which is less than the value of Stuck time, so the Stuck Connection thread ignores this connection and goes back to sleep.

Ten seconds later, the Stuck Connection thread wakes up again and looks at jms/CF1’s active connections. Once again, there is only one connection – c1. It is now 20 seconds since EJB1 called createSession(), and it is still waiting for a response. Twenty seconds is longer than the time specified in the Stuck time property, so the Stuck Connection thread marks c1 as stuck.

Figure 8. Time 20 seconds. As EJB1 has been waiting for more than 15 seconds for a response from WebSphere MQ, c1 is marked as stuck.

Assume that 5 seconds later, WebSphere MQ finally responds and lets EJB1 create a JMS Session. The connection is back in use:

Figure 9. Time 30 seconds. WebSphere MQ finally responded after 25 seconds, so the next time the Stuck Connection Thread runs, c1 is marked as active again.

The application server counts the number of JMS connections created from a Connection Factory that are stuck. When an application uses that Connection Factory to create a new JMS Connection, and there are no free connections in the Factory’s free pool, the Connection Manager compares the number of stuck connections to the value of the Stuck threshold property. If the number of stuck connections is less than the Stuck threshold, then the Connection Manager creates a new connection and gives it to the application. However, if the number of stuck connections is equal to the Stuck threshold, the application gets a resource exception. Figure 10 below shows how the Stuck threshold affects the way JMS connections are created:

Figure 10. How JMS connections are retrieved from the free connection pool when stuck connection detection is enabled
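
The check in Figure 10 can be sketched as pseudo-code (again, illustration only, not product code; the field names and the exception message are invented):

Listing 6. Pseudo-implementation of the Stuck threshold check (illustration only)

import java.util.Queue;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.resource.ResourceException;

public class StuckThresholdSketch {
    private final ConnectionFactory factory;
    private final Queue<Connection> freePool;   // the factory's free connection pool
    private final int stuckThreshold;           // the "Stuck threshold" property
    private int stuckConnectionCount;           // maintained by the stuck connection check

    public StuckThresholdSketch(ConnectionFactory factory,
                                Queue<Connection> freePool, int stuckThreshold) {
        this.factory = factory;
        this.freePool = freePool;
        this.stuckThreshold = stuckThreshold;
    }

    public Connection reserveConnection() throws ResourceException, JMSException {
        Connection free = freePool.poll();      // use a pooled connection if one is free
        if (free != null) {
            return free;
        }
        if (stuckConnectionCount >= stuckThreshold) {
            // As many connections are stuck as the threshold allows: refuse the request
            throw new ResourceException("Stuck connection threshold reached");
        }
        return factory.createConnection();      // otherwise open a new physical connection
    }
}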

Pool partitions

WebSphere Application Server V6 provides two properties that let you partition a Connection Factory’s free connection pool: Number of free pool partitions, and Free pool distribution table size. The first property tells the application server how many partitions you want to divide the free connection pool into, and the second property determines how the partitions are indexed. Leave these properties at their default values of 0 unless you are asked to change them by IBM support.

One additional advanced connection pool property is called Number of shared partitions, and it specifies the number of partitions used to store shared connections. JMS connections are always unshared, which means that they can only be used by one application at a time, and therefore this property doesn’t apply.

JMS connections and WebSphere MQ

This section contains information for readers who use WebSphere MQ as their JMS provider.

JMS connections and WebSphere MQ client channels

A frequent question is “When my WebSphere MQ JMS connection factory is configured to use the CLIENT transport, how do the JMS connections relate to WebSphere MQ client channels?”

A general rule of thumb is that each JMS connection from WebSphere Application Server to WebSphere MQ uses a client channel. So, if you have two applications running, and they have both created a connection from the same connection factory, then two client channels are used. The connection factory property Maximum connections specifies the maximum number of JMS connections that can be created from the factory. As each JMS connection corresponds to a client channel, the maximum number of client channels that this factory can use is equal to Maximum connections.

To determine the maximum number of client channels used by JMS connections at any one time, add up the value of the Maximum connections property for all of the connection factories that point to the same queue manager. For example, suppose you have two connection factories, jms/CF1 and jms/CF2, that both use the same WebSphere MQ queue manager. These factories are using the default connection pool properties, which means that Maximum connections is set to 10. If all of the connections are being used from both jms/CF1 and jms/CF2 at the same time, there will be 20 active client channels to WebSphere MQ.

JMS connections and WebSphere MQ connection pooling

Connections created from WebSphere MQ JMS connection factories are, by default, subject to two levels of pooling: application server pooling and WebSphere MQ pooling. What does this mean?

Connections to a JMS provider are pooled by the application server. When an application has finished with a connection, it is put into the connection factory’s free pool, where normally it will be either reused by another application, or closed if it remains in the free pool for longer than the value of the Unused timeout property. However, if the Unused timeout elapses for a JMS connection created to WebSphere MQ, the connection is not closed but instead is taken out of the application server’s free pool and put into the WebSphere MQ free connection pool.

If an application creates a new connection from a WebSphere MQ connection factory and there are no connections in the factory’s free pool, the Connection Manager checks the WebSphere MQ pool for a free connection to the required queue manager. If one exists, it is taken out of the WebSphere MQ pool and given to the application.

There is a single WebSphere MQ free connection pool per application server, rather than a separate free pool for each connection factory. This pool also has a pool maintenance thread associated with it (in WebSphere MQ terms this thread is called the pool scavenger thread, but to avoid confusion you can think of it as a pool maintenance thread). This thread runs periodically and looks at each connection in the WebSphere MQ free pool to see if it has been there for longer than 30 minutes, in which case it closes that connection.

You're probably thinking “this sounds like the way Unused timeout works on the application server’s free pools.” That's correct, but confusingly, the Unused timeout property has no effect on the length of time connections remain in the WebSphere MQ free pool, which is determined instead by the Message Listener Service custom property mqjms.pooling.timeout. For details on how to set this property, see the WebSphere Application Server information center.

The following diagram shows the two levels of connection pooling that are involved when WebSphere MQ is being used as the JMS provider:

Figure 11. WebSphere Application Server and WebSphere MQ JMS connection pools

Creating JMS connections to WebSphere MQ can be a time consuming operation, sometimes taking more than a second. On mission-critical systems, this delay can make a big difference in message processing time. Having a single WebSphere MQ free connection pool per application server means that free connections in this pool can be shared among connection factories that map to the same WebSphere MQ queue manager. If there are no free connections in a connection factory’s free connection pool, a free connection to the required queue manager may exist in the WebSphere MQ free pool, and it can be taken out and given to the application.

As mentioned earlier, WebSphere MQ connection pooling is enabled by default. The way to disable the pooling differs depending on the version of WebSphere Application Server:

  • For V6, ensure that the WebSphere MQ JMS connection factory property Enable MQ connection pooling is not selected.
  • For V5.1.1, set the Message Listener Service custom property mqjms.pooling.threshold to 0. For details, see the WebSphere Application Server information center.

Remember, there is a single WebSphere MQ free connection pool per application server. If any WebSphere MQ connection factory is configured to use WebSphere MQ connection pooling, then all JMS connections to WebSphere MQ will be pooled, not just the connections from the factory that is configured to use WebSphere MQ pooling. This behaviour is a feature of WebSphere MQ connection pooling. Here is an example showing how it works:

Assume two MDB listeners, MDBListener1 and MDBListener2, using the connection factory jms/CF1. MDBListener1 is running and using the connection c1 to WebSphere MQ, while MDBListener2 is stopped. When MDBListener1 shuts down, the connection is left open and put into jms/CF1’s free pool:

Figure 12. MDBListener1 stops, and connection c1 is put into jms/CF1’s free pool.

Assume that c1 remains in jms/CF1’s free pool for longer than the Unused timeout property specifies. When the pool maintenance thread runs, it finds the connection and attempts to close it. At this point, WebSphere MQ intercepts the call to close the connection and puts it into its own free pool instead:

Figure 13. The Connection Manager puts the connection in the WebSphere MQ free pool

Then MDBListener2 starts up and tries to create a connection from the connection factory jms/CF1. There are no connections in this factory’s free pool, so the Connection Manager looks in the WebSphere MQ free pool and finds c1. Because this connection was created from the factory that MDBListener2 wants to use, it is removed from the WebSphere MQ free pool and given to the listener.

Conclusion

Part 2 of this article series has explained how you can use the purge policy to discard the contents of the free connection pool when a problem with a connection is detected, and how you can use the Advanced Connection Pool properties to detect stuck connections and limit the number of JMS connections that can be created at the same time. Finally, the article described how JMS connections from WebSphere Application Server map to WebSphere MQ client channels, and how the WebSphere MQ connection pooling mechanism affects the pooling functionality provided by WebSphere Application Server.
