When WebSphere Commerce meets WebSphere MQ....
Roy.Leung
My previous post, "Is Your WebSphere Commerce Messaging Lost in Translation?", discussed some debugging techniques and my experience with outbound messages; in this post I will continue and explore some of my experience with inbound messages in WebSphere Commerce Server (WCS).
Most of the problems I see involve WebSphere Commerce (WC) being unable to process messages from WebSphere MQ. It is essential to follow and verify the settings documented in the Information Center or the Additional Software Guide (see Resources section below). Those steps are well tested, and in my experience the cause is a typo or misconfiguration 99% of the time. A quick test is to insert a bogus message into MQ (with MQ Explorer) and see whether the bogus message is requeued to an error queue. This tells you whether WebSphere Application Server (WAS) is configured properly. Why WAS? Most of the steps you follow are actually performed in the WAS admin console.

If your test message is not consumed by WCS, check your log to see whether the MQ listener is being initialized. You can also manually generate a javacore of the WCS/WAS process and inspect it to see whether the MQ listener threads are running; these are the threads that pull messages from the inbound queue. If you don't see any MQ listeners running, check your instance XML file to see whether the enabled flag is set to true and the threads attribute is set greater than 0:
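As a sketch, the fragment of the instance XML to check might look like the following. The element name here is hypothetical and varies by WCS version; the point is to verify the enabled and threads attributes:

```xml
<!-- Illustrative fragment of the WCS instance XML: the element name is
     hypothetical and differs by version, but verify that enabled is
     "true" and that threads is greater than 0 -->
<MQListener enabled="true" threads="2" />
```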
Out-of-memory errors are sometimes associated with inbound messages. Typically, they result from a fragmented heap rather than a memory leak. This happens when the incoming messages are huge, especially if the messages are XML and you did not consider memory consumption in your design process. Upon receiving a message from MQ, WCS must allocate memory to hold it. If messages are big enough, the WCS JVM heap can become fragmented, with large contiguous blocks of heap tied up holding messages. It then becomes harder and harder for the JVM garbage collector to find a large contiguous chunk of memory for the next message; at a certain point it cannot find any more space and reports an out-of-memory error. The best solution is to study the messages and see whether they can be "refactored" to be smaller. On some occasions, I have seen users dedicate a particular WC cluster member solely to receiving and processing messages; however, by isolating web traffic away from that JVM this only prolongs the JVM's lifetime before it reaches an out-of-memory (OOM) condition, so it is a workaround rather than a fix. If your messages are normally small but you suspect an occasional big one, you might want to detect what those messages are. MQ has a setting you can use to limit the size of incoming messages: MAXMSGL. Note that this also limits how big a message can be put to MQ. The error is as follows:
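When a put exceeds the MAXMSGL limit, MQ rejects it with reason code 2030 (MQRC_MSG_TOO_BIG_FOR_Q). As a sketch, the limit can be set per queue with MQSC; the queue name and 4 MB value here are illustrative:

```
* Cap messages on the inbound queue at 4 MB (value is in bytes);
* the queue name is illustrative. Note the queue manager has its
* own MAXMSGL, which also applies.
ALTER QLOCAL(WCS.INBOUND.QUEUE) MAXMSGL(4194304)
```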
Sometimes it is important for WC to process specific messages serially. In a non-clustered environment the answer is simple; in a clustered environment, on the other hand, you might have multiple cluster members running WCS requesting messages from MQ. How do you guarantee serial processing? One option is to set the "input open option" on the MQ queue to "exclusive":
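As a sketch in MQSC (the queue name is illustrative), exclusive input can be configured on the queue itself:

```
* DEFSOPT(EXCL) makes input exclusive when the application opens the
* queue with MQOO_INPUT_AS_Q_DEF; NOSHARE forces exclusive input
* regardless of the open options. The queue name is illustrative.
ALTER QLOCAL(WCS.INBOUND.QUEUE) DEFSOPT(EXCL) NOSHARE
```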
With this setting, MQ allows only one input connection, established by whichever WC cluster member gets there first. The other cluster members will try to connect and be rejected (that is, their attempts will fail). The retry mechanism in WCS will have those cluster members retry at specific intervals. The side effect is that at every retry interval you might see a warning message that WCS failed to connect to MQ, but these are healthy signs: you want those servers to re-establish the connection in a failover scenario. This topic is discussed further in the technote Configuring WebSphere MQ to limit one queue connection for multiple serial WebSphere Commerce listeners for WebSphere MQ running under clones.