A better, stronger and faster WebSphere MQ on z/OS
Mark Womack
WebSphere MQ implements more enhancements and improvements with each new version that is released. While major changes receive a lot of fanfare through topics in the IBM Knowledge Center, there are often many behind-the-scenes updates that aren't as major, but still greatly enhance MQ. Sometimes the code is just tightened up a bit or (for MQ on z/OS) service parameters are implemented to help mitigate undesired behavior.
For a more detailed look at how middleware interacts with the DNS, you can read through my older blog "Dom…".
How the CHIN (channel initiator) uses its own storage is influenced by varying application patterns. For client connections in particular, some environments need this storage to be used more efficiently.
CHIN storage can be exhausted for many reasons. One example is CSQX027E, issued when the CHIN requests storage from the operating system and is told that not enough storage is available.
There are, however, cases where client applications request large message buffers (in and of itself not a big deal). The idea is that, once the client has finished with an allocated buffer, the storage is returned to the free pool so that the next application request can immediately re-use it. This saves the overhead MQ would incur by making a brand-new storage request to z/OS all over again.
But what if a client application initially requests a large 32MB buffer, does its work, and then releases that buffer back into the free pool? If the same application later returns and requests a much smaller 4MB buffer, then, instead of issuing a new storage request tailored to what the application needs, we save the overhead by handing back the same 32MB buffer. This usually works well (caching has always been a good thing).
It was observed, though, in some environments, that holding these large buffers ready to be handed out might eventually limit the amount of contiguous virtual storage still available for new requests (after all, at most about 1.5GB is available in any particular address space, so conservation becomes important).
To that end, PM46114 introduced the ability to return certain types of large buffers to the operating system instead of pooling them. For storage-request overhead, the default algorithm is still best, but if an environment is the sort where caching has an unnecessary impact on virtual storage usage, a service parameter exists to help in that case.
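The trade-off described above can be illustrated with a small sketch. To be clear, this is not the actual MQ source; the class, threshold, and option names are invented for illustration only. The default behavior pools a released buffer and re-uses it for any smaller request, while the PM46114-style option drops large buffers back to the operating system at the cost of a fresh storage request later:

```python
# Hypothetical sketch of the CHIN buffer trade-off (names invented):
# pooling a released buffer avoids a fresh storage request, but pins
# large contiguous regions of the address space.

class BufferPool:
    def __init__(self, return_large_to_os=False,
                 large_threshold=16 * 1024 * 1024):
        self.free = []                  # cached (capacity, buffer) entries
        self.os_requests = 0            # counts "new storage from z/OS" events
        self.return_large_to_os = return_large_to_os
        self.large_threshold = large_threshold

    def acquire(self, size):
        # Re-use any cached buffer big enough, even if oversized
        # (e.g. a 32MB buffer satisfying a 4MB request).
        for i, (cap, buf) in enumerate(self.free):
            if cap >= size:
                del self.free[i]
                return cap, buf
        self.os_requests += 1           # fresh request to the operating system
        return size, bytearray(size)

    def release(self, cap, buf):
        # With the opt-in behavior, large buffers go back to the OS
        # instead of being pooled, conserving contiguous virtual storage.
        if self.return_large_to_os and cap >= self.large_threshold:
            return                      # drop it; the storage is freed
        self.free.append((cap, buf))
```

With the default pooling, a 4MB request after a released 32MB buffer gets the oversized 32MB buffer with no new storage request; with the opt-in behavior, the 32MB buffer is freed and the 4MB request costs one more request to the OS.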
CHANNEL STARTS AND CHANNEL STOPS
For client applications that start, do work quickly, and then disconnect, the CPU cost of logging the messages that indicate channel starts and stops (specifically CSQX500I and CSQX501I) was unreasonably high. In an improvement small enough to be handled in an APAR, PQ93792 provided (by service parm) suppression of these specific messages, which also removed the CPU cost (something that could not be achieved with message suppression exits in z/OS). This sort of improvement proved so useful that, in Version 8, the ZPARM EXCLMSG was added as a permanent product feature to allow exclusion of messages from any log. This queue manager attribute can also be changed dynamically.
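As a rough illustration of the Version 8 attribute, excluding the channel start and stop messages might look like the following MQSC (my understanding is that the values are the message identifiers without the CSQ prefix, so X500 and X501 correspond to CSQX500I and CSQX501I; check the EXCLMSG documentation for your release before relying on this):

```
* Exclude channel started (CSQX500I) and stopped (CSQX501I) messages
ALTER QMGR EXCLMSG(X500,X501)

* Verify the current exclusion list
DISPLAY QMGR EXCLMSG
```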
Outside of service parms, when unexpected behavior has been discovered on other platforms, the coding teams have worked to close unwanted side effects. Recently, I worked a case where WebSphere MQ for Java/JMS attempted to create more shared conversations across a connection than the configured SHARECNV limit allowed. This was the first time I had heard of this symptom; during a cursory search, virtually everyone reporting it had worked with SMEs on distributed platforms. For them, the problem was described by IV47311, which was created last year, and three circumventions were documented in the technote "MQ…".
Does MQ on z/OS detect such an attempt? Well, yes, it does. CSQX504E was updated in Version 7.1 to include a new error type, 00000023, to indicate such an attempt. Additionally, although the code already accounted for many of these unexpected scenarios, the most recent Version 8 refresh of the IBM Knowledge Center now documents in far greater detail (see CSQX504E) what user action should be taken in the vast majority of cases.
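For context, the SHARECNV limit in question is an attribute of the server-connection channel. A sketch of tuning it in MQSC, with an example channel name, might look like this:

```
* Cap the number of shared conversations per channel instance
* (channel name is an example)
ALTER CHANNEL(APP.SVRCONN) CHLTYPE(SVRCONN) SHARECNV(10)

* SHARECNV(1) gives one conversation per instance;
* SHARECNV(0) restores pre-V7 channel behavior
DISPLAY CHANNEL(APP.SVRCONN) SHARECNV
```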
QUEUE ALIAS BEHAVIOR IN CLUSTERS (THE MORE THINGS CHANGE)
Prior to Version 7.1, an application opening a clustered QALIAS defined with DEFBIND(OPEN) would always resolve to MQOO_BIND_NOT_FIXED if the queue was opened with the option MQOO_BIND_AS_Q_DEF. From Version 7.1 onward, in order to maintain that behavior, the DEFBIND option on the QALIAS should be changed from OPEN to NOTFIXED. PI18161 describes this change in greater detail.
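As a sketch of that change (object and cluster names are invented for illustration), the QALIAS definition that preserves the pre-7.1 resolution would look something like:

```
* Preserve pre-V7.1 resolution for MQOO_BIND_AS_Q_DEF openers
DEFINE QALIAS(APP.ALIAS) TARGET(APP.TARGET.QUEUE) CLUSTER(MYCLUS) DEFBIND(NOTFIXED)
```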
See the IBM Knowledge Center topic (Beh…).
You will find a synopsis of version 8 changes in the IBM Knowledge Center, as well.
In the big picture, most MQ users will never need to employ service parameters. Those whose processing benefits a wider audience can become part of the product in general; usually, service parms should only be necessary in rare cases. I encourage all MQ users to use the Request For Enhancement (RFE) process to get the things that would most enhance your use of MQ into the product. RFEs can be voted up and are easily submitted through the developerWorks IBM Community page.