MDB load balancing from WebSphere MQ to WebSphere Application Server
ValerieLampkin
Some clients report an uneven distribution of messages from the queue manager to WebSphere Application Server (WAS) activation specifications in an MDB (message-driven bean) cluster. It may appear that most of the application's work is being done on one WAS server in the cluster, which can overload that JVM and cause CPU starvation.
If the version of the WebSphere MQ queue manager is between 7.0.1.0 and 7.0.1.4, you may be able to resolve this by applying a fixpack.
In WebSphere MQ v7.0.1 the default mode of operation is asynchronous message consumption when there are multiple activation specification instances consuming messages from a queue. In this mode, each activation specification tells the queue manager that it is ready to receive messages, and the queue manager sends messages to it as they become available on the queue.
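For illustration, a message-driven bean deployed to each cluster member might look like the sketch below. The class name, the jms/WorkQueue JNDI name, and the process method are hypothetical; in WAS, the WebSphere MQ connection details (queue manager, host, channel) are normally configured on the activation specification itself rather than in code.

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Hypothetical MDB: one instance of this activation configuration runs in
// each cluster member, so several consumers register on the same queue.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "jms/WorkQueue")
})
public class WorkMDB implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                // The queue manager decides which cluster member's
                // activation specification receives each message.
                process(((TextMessage) message).getText());
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    private void process(String body) {
        // application work goes here
    }
}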
Prior to WebSphere MQ 7.0.1.5, the distribution algorithm used by the queue manager favored the first consumer: it attempted to maximize the messages flowing to that consumer, satisfying all of its server sessions before sending messages to the next consumer that had registered an interest in messages.
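As a toy model (not the queue manager's actual implementation), this older behavior can be pictured as always handing the next message to the earliest-registered consumer that still has a free server session:

import java.util.Arrays;

// Toy simulation of the old distribution: 25 messages, 3 consumers,
// each with a pool of 10 server sessions. The earliest consumer with
// spare capacity always wins.
public class FirstConsumerBias {
    public static void main(String[] args) {
        int[] busySessions = new int[3];
        int sessionsPerConsumer = 10;
        for (int msg = 0; msg < 25; msg++) {
            for (int c = 0; c < busySessions.length; c++) {
                if (busySessions[c] < sessionsPerConsumer) {
                    busySessions[c]++;
                    break;
                }
            }
        }
        // Prints [10, 10, 5]: the first consumers saturate before the
        // third sees any work at all.
        System.out.println(Arrays.toString(busySessions));
    }
}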
In the application server environment, this was not the preferred distribution behavior: most users want messages spread evenly across the application server instances to balance the workload.
As a result, APAR IZ97460 was raised to document the behavior and change it to a more round-robin style of distribution. This is described further in the following Technote:
APAR IZ97460 changes for the queue manager were included in WebSphere MQ 7.0.1.5 to allow messages to be distributed more evenly across all of the asynchronous consumers registered on a destination.
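Again as a toy model rather than the real algorithm, round-robin distribution simply rotates through the registered consumers, so the same 25 messages land almost evenly:

import java.util.Arrays;

// Toy simulation of the post-IZ97460 behavior: messages rotate across
// the registered consumers instead of saturating the first one.
public class RoundRobinDistribution {
    public static void main(String[] args) {
        int[] delivered = new int[3];
        int next = 0;
        for (int msg = 0; msg < 25; msg++) {
            delivered[next]++;
            next = (next + 1) % delivered.length;
        }
        // Prints [9, 8, 8]: the work spreads evenly across all consumers.
        System.out.println(Arrays.toString(delivered));
    }
}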
If uneven workload distribution is occurring for your MDB application and the queue manager is at a version between 7.0.1.0 and 7.0.1.4, upgrading to a later fixpack or MQ version should change the behavior and distribute the workload evenly.
Thanks to Chris Andrews (MQ JMS Level 3 in Hursley) for assistance with this blog entry.