Product Readmes
Abstract
This document describes the fixes shipped in WebSphere MQ V5.3 fix pack 10.
Content
Table of Contents
APARs in Fixpack 10
- SE18835: MQM400-MSGAMQ9543 FDC RM012000 IN RRXACCESSSTATUSENTRY STATUS TABLE CORRUPTION - LOCKING PROBLEM
- SE18479: MQM400 CORRUPTED JMSADMIN.CONFIG FILE IN MQ/ISERIES V530/CSD07
- SE18376: MQM400 WRKMQMMSG FAILS WITH MQRC_MD_ERROR ON MIGRATED QUEUE MANAGERS
- SE18184: MQM CRTMQMCHL CHLNAME(TO.AS400) CHLTYPE(*CLUSRCVR) CONNAME( AMIR ) CLUSTER(CL.BESWS) GETS AMQ9426 WITH *DEF FOR MQMNAME
- SE18138: MQM400-AGENT FAILS WITH MCH3601, ATTEMPTING TO GET FROM A QUEUE
- SE17758: OSP-F/LIBMQML_R FAILS WHILE HANDLING SECOND MESSAGE IN SEGMENTED MESSAGE.
- SE17747: MQM400 STRMQMLSR FAILS WHEN ISSUED BY A USER WHO DOESN'T HAVE *USE PERMISSION TO USE QMQM USER PROFILE.
- IY69761: C++ APPLICATION USING IIH CLASS FAILS IN FIXPACK 09
- IY69760: SHARED MEMORY LEAK FOR EACH CONNECTION WHERE A SYNCPOINTED MQGET/MQPUT IS PERFORMED.
- IY69467: LOG SPACE LEAK DURING XA TRANSACTIONS OBSERVED AT WMQ5.3 CSD08 AND CSD09
- IY68586: FIX PACK 8 AND FIX PACK 9 PACKAGES ON AIX CREATE SYMLINK FILES IN /USR/BIN WITH CURLY BRACES AROUND THE FILENAMES.
- IY67844: MQCONN AFTER FORK WITHOUT EXEC IN UNTHREADED APPLICATION RETURNS 2059 ON HP-UX AND AIX
- IY67371: PUB/SUB APPLICATION FAILS WITH MQJMS1061 AFTER APPLYING FIX PACK 08
- IY67239: THE WEBSPHERE MQ JMS CLIENT SENDS INCORRECT TSH DATA.
- IY67178: A JMS APPLICATION HITTING IY67239 CAUSES THE CHANNEL POOL PROCESS TO TRAP
- IY67165: CONNECTION PROBLEMS WHEN USING AN MQ TOPIC CONNECTION WITH MULTICAST NETWORK ADDRESSES.
- IY67125: POST INSTALLATION SCRIPT DOES NOT SET MQM OWNERSHIP TO FILES INSTALLED UNDER /VAR/MQM/TIVOLI
- IY66826: CLUSSDR DOES NOT START, QUEUE MANAGER CACHE STATUS STARTING.
- IY66824: XC015001, XC015002, XC014031, RM296000 OR OTHERS, TYPICALLY DURING ENDMQM. AIX ONLY.
- IY66583: README FILE FOR CSD08 IS CONFUSING WITH GSKIT/MQSERIES
- IY66581: WEBSPHERE APPLICATION SERVER LISTENER PORTS DO NOT STOP AND RESTART FOLLOWING A JMS EXCEPTION.
- IY66462: MESSAGE ON CLUSTER COMMAND QUEUE READ MULTIPLE TIMES
- IY66331: WHEN MQPMO.RECSPRESENT IS SET IN A CLIENT APPLICATION MQPUT1 FAILS WITH 2154 (MQRC_RECS_PRESENT_ERROR).
- IY66326: WEBSPHERE MQ QUEUE MANAGER ENDS AFTER USER MANUALLY KILLS MQRRMFA.
- IY66278: XC130003 (SIGSEGV) FROM XEHEXCEPTIONHANDLER WITH A LARGE NUMBER OF THREADS (>1024) IN A SINGLE WMQ CLIENT PROCESS CONNECTING TO WMQ.
- IY66048: MQ JMS THROWS EXCEPTION MQJE001: COMPLETION CODE 2, REASON 2090
- IY66042: DYNAMIC QUEUE CORRUPTED WHEN ONE THREAD DELETES QUEUE WHILST ANOTHER THREAD OPENS THE (PARTIALLY) DELETED QUEUE.
- IY65613: QUEUEBROWSER DOES NOT SHOW UNDERLYING JMSEXCEPTION
- IY65612: CHANNEL FAILS WITH AMQ9636 WHEN DN OF A CERT CONTAINS ESCAPED CHARACTERS SUCH AS , AND SSLPEER IS SPECIFIED
- IY65447: MQ JAVA CLIENT ALLOWS THE GET OF A MESSAGE WHOSE SIZE IS GREATER THAN THE MAXMSGL VALUE ON THE SVRCONN CHANNEL
- IY65287: UNBLOCK SIGUSR1,SIGUSR2 AND SIGALRM IN RUNMQTRM
- IY65269: UNDEFINED SYMBOL ERROR DUE TO INCOMPATIBLE NAMEMANGLING SCHEMES
- IY65033: CLIENT MQGET FAILS WITH 2009 MQRC_CONNECTION_BROKEN AFTER THE AMQRMPPA TERMINATES WITH SIGBUS DURING CONVERSION OF AN MQRFH2
- IY64640: MQRC_SYNCPOINT_NOT_AVAILABLE FROM AN MQPUT/MQGET CALL AFTER BEGINNING A NEW TRANSACTION IN TUXEDO USING DYNAMIC RESOURCE MANAGEMENT.
- IY64557: XC076001 ON LOCALE GB18030
- IY64428: CLUSTER MIGRATION PROBLEM - WHEN A 5.3 QUEUE MANAGER IS ADDED TO A 5.2 CLUSTER, FDCS FROM ZXCRESTOREOBJECT ARE CREATED.
- IY64363: SIGILL IN INIT_TRACE IN LIBMQJBND05.SO
- IY64349: CHANNEL HANG AFTER A COMMS FAILURE AS A RESULT OF A FAILED DNS LOOKUP.
- IY63833: C++ SSL CONNECTION FAILS WITH 2381 MQRC_KEY_REPOSITORY_ERROR.
- IY63820: UNBLOCK SIGUSR1 BEFORE CALLING THE USER APPLICATION. RUNMQTRM OF MQ5.3 MASK SIGNAL WHILE MQ 5.2 DOES NOT.
- IY63458: SSLPEER VALUE IS INCORRECTLY CHECKED BY WEBSPHERE MQ WHICH RESULTS IN CHANNEL FAILURE, INCLUDING ERROR AMQ9636
- IY63426: REFRESH CLUSTER REPOSITORY MANAGER REPEATEDLY ENDS ABNORMALLY. MISCALCULATING SSLPEERNAMEPTR, AN INTERNAL POINTER TO SSL PEER
- IY63171: HEAVY LOCK CONTENTION UNDER STRESS, AND REPORT LONG LOCK WAIT FDCS
- IY63056: CRTMQM HANGS ON LINUX SYSTEMS.
- IY59833: CHANNEL GOES TO INITIALIZING WHEN STARTED FROM SCRIPT.
- IC45588: JMS CLIENT APPLICATION FAIL WITH 2009 (MQRC_CONNECTION_BROKEN) AT RECEIVE
- IC45412: MULTI-THREADED APPLICATIONS MIGHT CAUSE AMQ8074 ERROR POST CSD08
- IC44243: WEBSPHERE MAY FORGET MDB MESSAGE SELECTORS AFTER A LONG TIME
- IC43947: HAMVMQM DOES NOT PUT MQM AUTHORITY ON THE DIRECTORIES
- IC43915: MQ .NET (DOTNET) WRITEBYTES TRANSLATES CHARACTERS INCORRECTLY
- IC43892: MQ SAMPLE PROGRAM AMQSBLST GENERATES DUPLICATE MQMD.MSGID VALUES
- IC43762: DEFAULT PERSONAL CERT IS USED TO AUTHORISE CLIENT DURING SSL HANDSHAKE WHEN IBMWEBSPHEREMQUSERNAME CERT DOES NOT MATCH
- IC43533: CLUSTER: ONLY SUSPEND OR RESUME CLUSTER QUEUE MANAGER OBJECTS WHICH ARE NOT DELETED.
- IC43515: COM.IBM.MQJMS.JAR CONTAINS AN INCORRECT CLASS-PATH STATEMENT IN THE FILE MANIFEST.MF.
- IC43190: HANDLING OF AN LU6.2 TIMEOUT IN THE WINDOWS RUNMQLSR PROGRAM
- IC43176: COMMAND SERVER REPORTS RC 2005
- IC43014: FAILURE TO RETRIEVE A LARGE SEGMENTED MESSAGE WITH MQGMO_BROWSE_MSG_UNDER_CURSOR
- IC42753: AMQMSRVN DOES NOT HANDLE SPECIAL CHARACTERS SUCH AS - IN PASSWORD.
- 90564: SUPPORT ADDED FOR USE OF ';' AND '#' CHARACTERS IN XA_OPEN STRING, AS REQUIRED BY INFORMIX
- 90299: QUEUE MANAGER CANNOT RESTART AND/OR DAMAGED OBJECTS WITH POTENTIAL MESSAGE CORRUPTION FOLLOWING RESTART.
- 87597: CLIENT APPLICATIONS SEGV WHEN USING LARGE NUMBER OF THREADS SIMULTANEOUSLY CONNECTED TO WEBSPHERE MQ.
- 82774: PROVIDE PID AND PROGRAM NAME OF ASYNCHRONOUS SIGNAL SENDER
SE18835
|
| Abstract | MQM400-MSGAMQ9543 FDC RM012000 IN RRXACCESSSTATUSENTRY STATUS TABLE CORRUPTION - LOCKING PROBLEM |
| Users Affected | All users starting requester channels. Platforms affected: All Distributed |
| Error Description | When starting the Requester channels, Status table gets corrupted. An FDC with probe ID RM012000 in component rrxAccessStatusEntry reports status table damaged (see example below). +-------------------------------------------------------------- ! WebSphere MQ First Failure Symptom Report ! ========================================= ! Date/Time :- Monday December 27 06:45:56 2004 ! Host Name :- MCP400.XXXXXXXXXXXXX ! PIDS :- 5724B4106 ! LVLS :- 530.8 CSD08 ! Product Long Name :- WebSphere MQ for iSeries ! Vendor :- IBM ! Probe Id :- RM012000 ! Application Name :- MQM ! Component :- rrxAccessStatusEntry ! Build Date :- Aug 31 2004 ! UserID :- 00006442 (@MCOCH) ! Job Name :- 926593/@MCOCH/QPADVV0463 ! Job Description :- QGPL/SYSOPSJD01 |
| Problem Summary | For a requester-sender channel pair, when an attempt is made to start a second instance of the requester channel, locking problems are encountered in rriAddStatusEntry which can lead to status table corruption. |
| Problem Conclusion | When the requester channel is in requesting state, the system waits for the RequesterTimeout duration, by which time the requester should have come down. If it has not come down, the message rrcE_CHANNEL_IN_USE is returned. An additional FDC with Probe RM027003 in the rriAddStatusEntry component will be produced if there has been a failure to lock the status table because the channel is in use. |
SE18479
|
| Abstract | MQM400 CORRUPTED JMSADMIN.CONFIG FILE IN MQ/ISERIES V530/CSD07 |
| Users Affected | JMSAdmin for MQ at CSD04 - CSD08 on iSeries. Platforms affected: iSeries |
| Error Description | CORRUPTED JMSAdmin.config FILE IN CSD07 FOR MQ/iSeries V530 RESULTING IN INABILITY TO RUN IVTSETUP WITH JNDI SUCCESSFULLY AND A MESSAGE STATING THAT A CONFIGURATION PROPERTIES FILE IS MISSING. |
| Problem Summary | A CORRUPTED JMSAdmin.config FILE WAS INTRODUCED AT CSD04 FOR MQ/iSeries. |
| Problem Conclusion | THE CORRUPTED JMSAdmin.config FILE HAS BEEN REPLACED. |
SE18376
|
| Abstract | MQM400 WRKMQMMSG FAILS WITH MQRC_MD_ERROR ON MIGRATED QUEUE MANAGERS |
| Users Affected | For a queue manager at WMQ V5.3 that has been migrated from an earlier version of MQ, WRKMQMMSG fails on some queues with non-persistent records, with MQRC_MD_ERROR on MQGET. Platforms affected: iSeries |
| Error Description | On a migrated Queue Manager, an FDC is fired with probe id AQ123001 on component aqqLoadMsgHdr and MajorErrorCode arcE_OBJECT_DAMAGED. Further analysis shows that MQGET on the queue in question fails with MQRC_MD_ERROR. |
| Problem Summary | aqhImageSize does not handle the first buffer space map segment correctly if more than one buffer space map segment exists. |
| Problem Conclusion | Changes have been made in aqhImageSize to handle the first buffer space map segment correctly when more than one buffer space map segment exists. |
SE18184
|
| Abstract | MQM CRTMQMCHL CHLNAME(TO.AS400) CHLTYPE(*CLUSRCVR) CONNAME( AMIR ) CLUSTER(CL.BESWS) GETS AMQ9426 WITH *DEF FOR MQMNAME |
| Users Affected | All iSeries users with a default queue manager that is part of a cluster and is connected to the full repository (on a different machine) using a cluster sender channel defined with *DFT in MQMNAME. Platforms affected: iSeries |
| Error Description | A queue manager runs on an iSeries machine as the default queue manager and joins a cluster using a cluster sender channel to the full repository on a different machine. If the cluster sender is defined with *DFT in the MQMNAME field of CRTMQMCHL, then the output of the command display clusqmgr(*) taken on the machine which holds the full repository shows *D instead of the actual iSeries queue manager name. |
| Problem Summary | The problem lies in the WMQ code which handles channel creation. The command CRTMQMCHL CHLNAME(Cluster-sender-Chl-name) CHLTYPE(*CLUSSDR) CONNAME(Connection-details) CLUSTER(cluster-name) interprets *DFT, or anything prefixed with '*', in MQMNAME as the default queue manager and uses *D instead of the real queue manager name. |
| Problem Conclusion | The problem is fixed for CRTMQMCHL and also for CHGMQMCHL and CPYMQMCHL. The changes are: 1) the code now checks and returns an error if a value other than *DFT beginning with '*' is used in the MQMNAME field of these commands; 2) the real queue manager name, rather than *D, is passed internally. |
SE18138
|
| Abstract | MQM400-AGENT FAILS WITH MCH3601, ATTEMPTING TO GET FROM A QUEUE |
| Users Affected | Users getting messages from a queue can be exposed to this problem. Platforms affected: All Distributed+Java |
| Error Description | MQ Agent fails unexpectedly with a MSGMCH3601 when attempting to read a message from the queue. The FDC and FDC stack is below: Date/Time :- Friday October 29 05:38:35 2004 QueueManager :- TQMCG201 Max File Handles :- 5830 Process :- 00000124 Thread :- 00009394 Major Errorcode :- STOP Probe Type :- HALT6109 Probe Severity :- 1 Arith1 :- 9394 0x 24b2 Comment1 :- MCH3601 MQM Function Stack zlaMainThread zlaProcessMessage zlaProcessMQIRequest zlaMQGET zsqMQGET kpiMQGET kqiWaitForMessage apiGetMessage aqmGetMessage aqhGetMessage atmAdoptParent |
| Problem Summary | While aborting an attempt to pass a message synchronously from a waiting MQPUT to an MQGET, the adoption of a child transaction is abandoned. Since the pointer to the transaction's TCB (pChildTCB) is not valid, the agent fails with an MCH3601. |
| Problem Conclusion | The pointer to the space handle needs to be initialized (initialise pSPc to SpcIndex) before the adoption of the child transaction is abandoned while aborting the attempt to synchronously pass a message from a waiting put to a get. |
SE17758
|
| Abstract | OSP-F/LIBMQML_R FAILS WHILE HANDLING SECOND MESSAGE IN SEGMENTED MESSAGE. |
| Users Affected | All users putting segmented messages to a triggered queue under syncpoint. Platforms affected: iSeries |
| Error Description | When sending large messages in syncpoint into a triggered queue, a failure is encountered in the second trigger message. Following are the messages received: MQPUT ended with reason code 2009 MQCLOSE ended with reason code 2018 MQDISC ended with reason code 2018 Following is the joblog with MCH error during the problem: Message ID: MCH3601 Req msg status: NA Sending pgm type: ILE Recv pgm type: ILE No. statements for sending pgm: 1 No. statements for receiving pgm: 1 Sending proc name: atmGetTranDetails Sending module name: AMQATMXA_R Sending program name: LIBMQML_R Sending library name: QMQM Statements nos for sending prog: 0000000 Receiving proc name: atmGetTranDetails An FDC also cut with probe-id XY353001 that has an MCH3601 in comment1. |
| Problem Summary | When segmented messages are put onto a triggered queue under syncpoint with logical order, the first atmGetTranDetails call made while creating the trigger messages succeeds, but the second call fails while determining which transaction the work should be associated with. The transaction was being closed after each trigger message was created during the segmented message put, and a new transaction was not being created for each message individually. |
| Problem Conclusion | The code has been modified so that the transaction is not closed after each triggered message creation when a SYNCPOINT is set. The transaction now closes after creating all the triggered messages for each segmented message. |
SE17747
|
| Abstract | MQM400 STRMQMLSR FAILS WHEN ISSUED BY A USER WHO DOESN'T HAVE *USE PERMISSION TO USE QMQM USER PROFILE. |
| Users Affected | All OS/400 users not having *USE permission for the QMQM user profile. Platforms affected: iSeries |
| Error Description | When a user who is granted *USE object authority to the STRMQMLSR command attempts STRMQMLSR, it fails with the following errors because the user lacks *USE authority to the QMQM user profile object :- --------------------------------------------------------------- STRMQMLSR PORT(1414) MQMNAME(QMGRNAME) Not authorized to user profile QMQM. Not authorized to user profile QMQM. Errors occurred on SBMJOB command. Output file P000000184 created in library QMQM. Member P000000184 added to output file P000000184 in library QMQM. Output file S000000184 created in library QMQM. Member S000000184 added to output file S000000184 in library QMQM. Object P000000184 in QMQM type *FILE deleted. --------------------------------------------------------------- |
| Problem Summary | Before this code change, WMQ used to invoke the low level function of STRMQMLSR using the logged-in user ID. When the user did not have *USE authority to use the QMQM profile, the command would fail. |
| Problem Conclusion | Instead of calling the lower level function of STRMQMLSR directly, it is now called through a wrapper function, so that the authority check for use of the QMQM profile object is performed by the QMQM profile's owner (i.e. QSYS). |
IY69761
|
| Abstract | C++ APPLICATION USING IIH CLASS FAILS IN FIXPACK 09 |
| Users Affected | Applications using the IIH C++ class at fix pack 9. Platforms affected: Windows |
| Error Description | Post upgrade to fixpack9, C++ application gets the following error - "The ordinal 904 could not be located in dynamic link library imqb23vn.dll" |
| Problem Summary | This problem was due to a regression by APAR IC42634, which adds setLocalAddress and offsets the ordinals. |
| Problem Conclusion | The fix restores the moved C++ IIH class entrypoints to their original ordinal numbers, and moves the new local address related functions to new ordinals. Application of this APAR will mean any customer using the IIH C++ class who has recompiled their application at a fix pack 9 level will need to recompile it again. Additionally, anyone who has compiled a C++ application using the new local address C++ functions will also need to recompile. This is because APAR IC42634 accidentally reused the ordinals which the IIH class was using, and moved all the IIH entrypoints higher, breaking backward compatibility. This fix reverts to the fix pack 8 ordinals plus the new local address functions at new unique ordinals. |
IY69760
|
| Abstract | SHARED MEMORY LEAK FOR EACH CONNECTION WHERE A SYNCPOINTED MQGET/MQPUT IS PERFORMED. |
| Users Affected | Users performing large numbers of connections, where each connection does some syncpointed work. Platforms affected: All Distributed |
| Error Description | Customer will experience large amounts of shared memory being created over the course of a long-running qmgr where a large number of syncpointed MQGET/MQPUT operations are performed by separate WMQ connections. I.e. if an application thread connects, does some work under syncpoint, and disconnects, then a small amount of memory is leaked. If the leak consumes all available machine memory or addressable process space, then WMQ will dump FDCs to indicate the inability to create any more shared memory segments. At this stage, FDCs with Probe ID: XY132036 and AT061002 with AMQ6207 error could be observed. The leak is a CSD08-introduced regression, present also in CSD09. |
| Problem Summary | Failure to free a particular shared memory structure allocated as part of WMQ's internal transaction mechanism. |
| Problem Conclusion | Ensured that the shared memory structure is deallocated when a thread's transactional work is being wound up. |
IY69467
|
| Abstract | LOG SPACE LEAK DURING XA TRANSACTIONS OBSERVED AT WMQ5.3 CSD08 AND CSD09 |
| Users Affected | All installations of WebSphere MQ V5.3 CSD08 and later on UNIX and Windows platforms using XA transactions. Platforms affected: All Unix,Windows |
| Error Description | During XA transactions the active logs keep growing continuously and the following errors can be seen in the error logs: AMQ6717, AMQ6718, followed by AMQ7469 when MQ asynchronously starts rolling back some transactions to release log space. The subsequent XA calls such as xa_end or xa_prepare fail with xa_end = -4 (XAER_NOTA), xa_prepare = 106 (XA_RBTIMEOUT). |
| Problem Summary | The problem occurred while WMQ5.3 CSD09 was acting as a resource manager and Tuxedo 8.1 was acting as a Transaction manager. While testing customers' application it was observed that the active logs were growing continuously till we received AMQ7469 when MQ started asynchronously rolling back some transactions to release log space. This resulted in the XA calls failing with either xa_end= -4 (XAER_NOTA) or xa_prepare= 106 (XA_RBTIMEOUT). This behaviour was not seen prior to WMQ 5.3 CSD08. The above behaviour was seen because unused reserve space associated with a space handle was not being appropriately released while deallocating a transaction. |
| Problem Conclusion | Changes were made to the WMQ code to release any unused reserved space associated with a space handle during deallocation of transactions. |
IY68586
|
| Abstract | FIX PACK 8 AND FIX PACK 9 PACKAGES ON AIX CREATE SYMLINK FILES IN /USR/BIN WITH CURLY BRACES AROUND THE FILENAMES. |
| Users Affected | All users of AIX will encounter this situation, but no function is affected. The curly brace symlinks have an incorrect name and are therefore never referenced. The files are not removed by removing the affected fix packs, however, and will have to be deleted manually. Platforms affected: AIX |
| Error Description | Fix Pack 8 and Fix Pack 9 packages on AIX create symlink files in /usr/bin with curly braces around the filenames. The symlink files in question are: /usr/bin/{clrmqbrk} -> /usr/mqm/bin/clrmqbrk /usr/bin/{dltmqbrk} -> /usr/mqm/bin/dltmqbrk /usr/bin/{dspmqbrk} -> /usr/mqm/bin/dspmqbrk /usr/bin/{endmqbrk} -> /usr/mqm/bin/endmqbrk /usr/bin/{migmqbrk} -> /usr/mqm/bin/migmqbrk /usr/bin/{strmqbrk} -> /usr/mqm/bin/strmqbrk /usr/bin/{dltmqbrk} -> /usr/mqm/bin/dltmqbrk These symlinks are created during the install of the server filesets: mqm.server.rte.5.3.0.8.U497509 for fix pack 8 mqm.server.rte.5.3.0.9.U498450 for fix pack 9 The curly brace symlinks are in error and can be safely removed. |
| Problem Summary | The error was introduced in Fix Pack 8 following the introduction of new files in the server fileset. |
| Problem Conclusion | The error has been rectified in Fix Pack 10. If a customer has installed the server fileset from Fix Pack 8 or 9, then they can safely delete the erroneous symlinks. These curly brace symlinks are not used by the product, and consequently there is no effect if they are left in place or removed. |
IY67844
|
| Abstract | MQCONN AFTER FORK WITHOUT EXEC IN UNTHREADED APPLICATION RETURNS 2059 ON HP-UX AND AIX |
| Users Affected | Users with single threaded applications performing a fork without exec on HP-UX/AIX. Platforms affected: AIX,HP-UX |
| Error Description | A UNIX application is linked with single threaded MQ libraries (-lmqm, etc). If the application does MQCONN / MQDISC, then calls fork() and attempts to MQCONN in the child process, an FDC with probe id ZC002040 from zcpAttachPipe is produced. |
| Problem Summary | Required initialisation was not being performed for the child process. |
| Problem Conclusion | A code change was made to support fork without exec, from non- threaded applications only, after MQCONN/MQDISC has been performed in the parent. |
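The following is a minimal C sketch of the connection pattern that this change supports, assuming the standard MQI header cmqc.h and linking with the unthreaded server library (-lmqm); the queue manager name "QM1" is a placeholder, not taken from the APAR text.

    #include <stdio.h>
    #include <unistd.h>
    #include <cmqc.h>                        /* WebSphere MQ MQI definitions */

    int main(void)
    {
        MQHCONN hConn;
        MQLONG  compCode, reason;
        MQCHAR  qmName[MQ_Q_MGR_NAME_LENGTH] = "QM1";   /* placeholder name */

        /* Parent: connect, do some work, disconnect - all before forking. */
        MQCONN(qmName, &hConn, &compCode, &reason);
        /* ... MQOPEN / MQPUT / MQGET ... */
        MQDISC(&hConn, &compCode, &reason);

        if (fork() == 0)
        {
            /* Child (fork without exec): with this fix the MQCONN succeeds
               instead of returning MQRC_Q_MGR_NOT_AVAILABLE (2059). */
            MQCONN(qmName, &hConn, &compCode, &reason);
            printf("child MQCONN reason = %d\n", (int)reason);
            MQDISC(&hConn, &compCode, &reason);
            return 0;
        }
        return 0;
    }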
IY67371
|
| Abstract | PUB/SUB APPLICATION FAILS WITH MQJMS1061 AFTER APPLYING FIX PACK 08 |
| Users Affected | All users using JMS to receive durable subscriptions in Client mode at fix pack 08 or fix pack 09, when messages are between 3764 and 4209 bytes. Platforms affected: All Distributed+Java |
| Error Description | After applying fix pack 08, the publish/subscribe application starts failing. Messages are still being published with no errors. The subscriber receives an MQJMS1061: "Unable to deserialize object." This error does not happen with either fix pack 06 or fix pack 07 JMS classes. The problem only happens in the following conditions: * Message size is very close to 4096 bytes (actually may be multiples of 4096, but we could not reproduce that). * Pub-Sub is used * Subscription is durable, and therefore messages are persistent. * MQ JMS fix pack 08 is used. If one of the previous parameters is changed, the testcase works. This will also show as a message missing its last bytes. |
| Problem Summary | The message size for the receive buffer was incorrect. |
| Problem Conclusion | The message size for the buffer is now set correctly. |
IY67239
|
| Abstract | THE WEBSPHERE MQ JMS CLIENT SENDS INCORRECT TSH DATA. |
| Users Affected | All users of WebSphere MQ Classes for JMS. Platforms affected: All Distributed+Java |
| Error Description | The WebSphere MQ JMS Client sends incorrect TSH data when the message is resized from > 32K to a smaller size. (This only happens in client mode). This may lead to the channel closing. |
| Problem Summary | When sending a large message, the buffer used to keep the message is resized. However, the size is not reset when a put command is passed to the server channel. |
| Problem Conclusion | The size is now reset correctly when a put command is issued. |
IY67178
|
| Abstract | A JMS APPLICATION HITTING IY67239 CAUSES THE CHANNEL POOL PROCESS TO TRAP |
| Users Affected | All users of distributed WebSphere MQ who support client connections Platforms affected: All Distributed |
| Error Description | A WMQ for Windows v5.3 queue manager receives invalid data from an MQSeries adapter on an AIX machine due to APAR IY67239. This causes the listener/amqrmppa (the channel pooling process) to trap. |
| Problem Summary | APAR IY67239 fixes a problem where a JMS client application sends in invalidly formatted data (specifically that the lengths were set incorrectly). This problem causes either a heap corruption, traps or hangs of the listener or channel pool process. A client application should not be able to cause the server side to trap or die in this way. |
| Problem Conclusion | This APAR adds server side checking for the protocol violation which IY67239 fixes, and closes the session with a protocol error rather than continuing and trapping. This effectively pushes the problem back to the client which caused it where it is easier to diagnose. Probe RM046002 has been added when this condition occurs. |
IY67165
|
| Abstract | CONNECTION PROBLEMS WHEN USING AN MQ TOPIC CONNECTION WITH MULTICAST NETWORK ADDRESSES. |
| Users Affected | This problem affects customers who use the Java Message Service (JMS) provided with WebSphere MQ Version 5.3. Platforms affected: All Distributed+Java |
| Error Description | When using the MQTopicConnectionFactory.setLocalAddress() method for multicast connections, this method should allow you to specify what NIC will be used for multicast traffic. However, calling this method doesn't actually do anything, which means that the MQTopicConnectionFactory will not use any ports on the NIC. |
| Problem Summary | The problem here was caused by the fact that the value passed in to the setLocalAddress() method was not being propagated down to the NIC. |
| Problem Conclusion | The MQTopicConnectionFactory.setLocalAddress() method has been modified so that it now correctly passes the network address down to the NIC. |
IY67125
|
| Abstract | POST INSTALLATION SCRIPT DOES NOT SET MQM OWNERSHIP TO FILES INSTALLED UNDER /VAR/MQM/TIVOLI |
| Users Affected | Customers who have applied CSD7 / CSD8 and have files under /var/mqm/tivoli which are not owned by mqm. Platforms affected: All Unix |
| Error Description | A file in the directory /var/mqm/tivoli does not have mqm user/group ownership after applying CSD7 or CSD8. An example file on AIX is as follows: /var/mqm/tivoli/WMQBaseX050300.sys |
| Problem Summary | The post-installation script was not explicitly setting these permissions after generation of these files. |
| Problem Conclusion | The scripts were altered to explicitly set the ownership/permissions of these files. A customer can set these permissions manually. The permissions should be: -r--r--r-- (444) with mqm user and mqm group ownership. |
IY66826
|
| Abstract | CLUSSDR DOES NOT START, QUEUE MANAGER CACHE STATUS STARTING. |
| Users Affected | Users with Clusters Platforms affected: All Distributed |
| Error Description | Messages are stuck on the cluster transmission queue. The cluster queue manager status of the channels that should have been started is STARTING, but the channels are actually INACTIVE. |
| Problem Summary | Messages put to cluster queues remain on the cluster transmission queue, instead of being transmitted. The channels to cluster queue managers do not start. This is because the local queue manager has incorrect information about the status of the channel. |
| Problem Conclusion | Remove code which sets the cluster channel status to STARTING. |
IY66824
|
| Abstract | XC015001, XC015002, XC014031, RM296000 OR OTHERS, TYPICALLY DURING ENDMQM. AIX ONLY. |
| Users Affected | Users of WMQ 5.3 on AIX with applications that die whilst connected to WMQ. It is a very rare failure - it has a very small window of opportunity for failure within the code. Platforms affected: AIX |
| Error Description | The root problem is a locking failure of a general locking routine, which can therefore impact a wide variety of WMQ functionality and cause a wide variety of probes. There is a very small opportunity for this locking failure to occur when a mutex is reowned. Once the mistake has occurred a large number of processes are likely to throw FDCs, typically whilst connecting to, or disconnecting from WMQ. |
| Problem Summary | Caused by a complex sequence of events during application connection. If this is coincident with a particular mutex being reowned due to an application dying whilst connected, then one of the mutexes used during connection/disconnection may get corrupted. |
| Problem Conclusion | Eliminated the window of opportunity for this problem to occur. |
IY66583
|
| Abstract | README FILE FOR CSD08 IS CONFUSING WITH GSKIT/MQSERIES |
| Users Affected | Users installing csds who need SSL support. Platforms affected: AIX,HP-UX,Linux,Solaris |
| Error Description | The Readme/memo.ptf is a little confusing since it talks about the GSKit upgrade after the MQ upgrade, yet the proper installation order is to upgrade GSKit before MQ. Also it would be nice to include the README file in the tar file download for the convenience of systems administrators. |
| Problem Summary | The memo.ptf did not mention the need for users to upgrade to the required level of gskit before upgrading the WMQ file sets. This will cause problems when installing WMQ file sets over previous versions of gskit. |
| Problem Conclusion | Added note in memo.ptf instructing to upgrade gskit level before upgrading WMQ file sets. |
IY66581
|
| Abstract | WEBSPHERE APPLICATION SERVER LISTENER PORTS DO NOT STOP AND RESTART FOLLOWING A JMS EXCEPTION. |
| Users Affected | This problem affects customers who use the Java Message Service (JMS) functionality provided with WebSphere Application Server Version 5.x. Platforms affected: All Distributed+Java |
| Error Description | WebSphere Application Server Version 5.x contains logic that enables Listener Ports to stop and restart automatically following a JMS exception. Occasionally, if a Listener Port detects a number of exceptions over a period of time, it would fail to detect any new exceptions and would not shut itself down and restart automatically. |
| Problem Summary | Whenever an exception is detected on a JMS connection, a check is made to see if the exception has been seen before. If it has, then the exception is ignored, but if the exception is a new one, then the connection is shut down. The problem described in this APAR was caused by a programming error. The check to see if the exception had been seen before contained a typo which meant that once an exception had been detected, all subsequent exceptions were assumed to be the same as the original one! In a WebSphere Application Server environment, this had the following effect: - The first time an exception was detected on a connection, the Listener Port shut itself down and restarted correctly. - Once the Listener Port had restarted, all subsequent exceptions on that connection were ignored. This meant that the Listener Port would not automatically attempt to restart following a broken connection. |
| Problem Conclusion | The offending typo has been removed, which means that the code now correctly detects if an exception has been seen before. |
IY66462
|
| Abstract | MESSAGE ON CLUSTER COMMAND QUEUE READ MULTIPLE TIMES |
| Users Affected | Any cluster user. The problem may occur if the queues in the cluster are deleted such that messages have no possible destination, and there is no dead letter queue defined on the queue manager. Platforms affected: All Distributed |
| Error Description | A message on the cluster command queue can be read multiple times. The message is written to the command queue when a channel stops, and is used by the repository manager to drive the logic to reallocate messages to other queue managers in the cluster. If the reallocate fails such that the message read from the cluster transmission queue cannot be put to any queue in the cluster, for example because the queue no longer exists, or there is a 2189 (MQRC_CLUSTER_RESOLUTION_ERROR), the message is backed out to the cluster transmission queue again. This backout also backs out the message on the cluster command queue, and so the message is read again. The solution is to commit the message read from the command queue before reading any messages from the transmission queue. |
| Problem Summary | If there is a problem when reallocating messages to other queue managers in the cluster after a cluster channel has ended such that there is no possible destination in the cluster, or the destination cannot be resolved, the command message is backed out to the command queue, and so is read again (and again...) |
| Problem Conclusion | Commit the get of the command message before processing messages on the cluster transmission queue. |
IY66331
|
| Abstract | WHEN MQPMO.RECSPRESENT IS SET IN A CLIENT APPLICATION MQPUT1 FAILS WITH 2154 (MQRC_RECS_PRESENT_ERROR). |
| Users Affected | Users with apps that use MQPUT1 to put msgs to a distribution list. Platforms affected: All Distributed |
| Error Description | When MQPMO.RecsPresent is set in a client app (i.e. linked with libmqic.so instead of libmqm.so) calling MQPUT1 then 2154 (MQRC_RECS_PRESENT_ERROR) is returned; if the same code is run as a server app, the correct responses are returned. |
| Problem Summary | There was a check in the client code which does not exist in the server code. |
| Problem Conclusion | Remove the unnecessary check in the client code. |
IY66326
|
| Abstract | WEBSPHERE MQ QUEUE MANAGER ENDS AFTER USER MANUALLY KILLS MQRRMFA. |
| Users Affected | Users in locale "ja" where the repository manager process dies unexpectedly, or is killed. Platforms affected: All Distributed |
| Error Description | As a test, the WebSphere MQ repository manager process, amqrrmfa was killed. This caused the queue manager to end, but only when the locale was set to ja (Japanese). When the locale was C, the queue manager did not end. ***PLEASE NOTE: It is strongly advised that the repository process is not killed unless directed to do so by IBM. When a FFST with probe id ZX005025 was generated to report that the repository manager was killed, a buffer overrun occurred while writing the FFST. This caused a memory overwrite of a pointer which was later dereferenced, causing the next FFST with probe id ZI096002 and the stop of the queue manager. |
| Problem Summary | Memory overwrite when retrieving user messages from the Japanese message catalogue for an FFST. |
| Problem Conclusion | Reserve enough memory for message catalogue messages. |
IY66278
|
| Abstract | XC130003 (SIGSEGV) FROM XEHEXCEPTIONHANDLER WITH A LARGE NUMBER OF THREADS (>1024) IN A SINGLE WMQ CLIENT PROCESS CONNECTING TO WMQ. |
| Users Affected | Customers with a large number of WMQ connections within a single process, using the client bindings for WMQ. Platforms affected: AIX,HP-UX,Linux,Linux/390,Solaris |
| Error Description | On UNIX platforms, when a very large number of client connections (>1024) are being made by a single WMQ client application, the application terminates during an MQCONN/MQCONNX call on one of the threads and an XC130003 FDC is produced with Comment1 :- SIGSEGV and the following stack: MQM Function Stack DoConnect rriInitSess ccxReceive cciTcpReceive xcsFFST |
| Problem Summary | WMQ utilised the operating system call 'select', which requires that file descriptors are specified using an 'fd_set' structure. This imposed a hard limit of 1024 on the number of file descriptors which could be used within a single WMQ client process on some UNIX platforms. When the number of fds used by a process rose above this number, either because the application opened a large number of files, or because it had a large number of WMQ client connections, each of which requires an fd for its socket, WMQ overran the fd_set buffer causing the SIGSEGV. |
| Problem Conclusion | The WMQ code was changed to use the 'poll' operating system call on those platforms which support it. The 'poll' operating system call does not use an 'fd_set' structure, and hence does not impose the same hard limit on the number of fds in a process experienced on some UNIX platforms. The WMQ code was also changed to protect against overrunning of the 'fd_set' structure in cases where 'select' is still used. WMQ will revert to use of 'select' when the following environment variable is exported: AMQ_USE_SELECT=TRUE |
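As a generic POSIX illustration (not WMQ source) of why 'poll' avoids the limit described above, the sketch below waits for a descriptor to become readable: FD_SET would write past the end of an fd_set once the descriptor value reaches FD_SETSIZE (1024 on many UNIX systems), whereas poll takes an array of pollfd structures and has no such ceiling.

    #include <poll.h>
    #include <sys/select.h>

    /* Wait up to timeout_ms for fd to become readable. Illustration only. */
    int wait_readable(int fd, int timeout_ms)
    {
        if (fd >= FD_SETSIZE)
        {
            /* select() cannot safely handle this descriptor: FD_SET(fd, &set)
               would overrun the fd_set.  poll() has no such limit. */
            struct pollfd pfd;
            pfd.fd = fd;
            pfd.events = POLLIN;
            pfd.revents = 0;
            return poll(&pfd, 1, timeout_ms);
        }
        else
        {
            fd_set set;
            struct timeval tv;
            tv.tv_sec  = timeout_ms / 1000;
            tv.tv_usec = (timeout_ms % 1000) * 1000;
            FD_ZERO(&set);
            FD_SET(fd, &set);
            return select(fd + 1, &set, NULL, NULL, &tv);
        }
    }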
IY66048
|
| Abstract | MQ JMS THROWS EXCEPTION MQJE001: COMPLETION CODE 2, REASON 2090 |
| Users Affected | Users of MQ JMS Listener applications. Platforms affected: All Distributed+Java |
| Error Description | JMS subscriber applications using the MQ JMS classes may see an unexpected exception javax.jms.JMSException: MQJMS2002: failed to get message from MQ queue with an embedded exception com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2090. The stack at the point of failure should also show the method com.ibm.mq.jms.MQMessageConsumer.getMessageT on it. The 2090 reason code in MQ is MQRC_WAIT_INTERVAL_ERROR, which typically means MQ saw a negative wait interval (but not MQWI_UNLIMITED). This problem may occur on a heavily stressed system where the application uses asynchronous receives (message listeners). |
| Problem Summary | The problem occurs when an asynchronous receive expires and is followed by a receive without a timeout value. Asynchronous receives are executed as a series of get operations each of 5000 ms. The total asynchronous receive time is governed by the polling interval. If the polling interval expires this can leave the timeout field as a negative value. Any subsequent receive without a timeout was using the negative value in the timeout field, causing the RC=2090. |
| Problem Conclusion | The code has been modified to ensure that receive without a timeout always sets the timeout field to zero. |
IY66042
|
| Abstract | DYNAMIC QUEUE CORRUPTED WHEN ONE THREAD DELETES QUEUE WHILST ANOTHER THREAD OPENS THE (PARTIALLY) DELETED QUEUE. |
| Users Affected | Users of dynamic queues where the same queue may be being deleted by one thread whilst another is accessing it. Platforms affected: All Distributed |
| Error Description | A dynamic MQ queue is being deleted by a thread. Whilst this deletion is in progress a different thread attempts to open the queue. An FDC is cut with Probe ID AO088020 in aocDeleteZombie. |
| Problem Summary | A small timing window in the interlocking between deletion and access of dynamic queues allows an access to succeed even though a queue is in the process of being deleted. |
| Problem Conclusion | Eliminated the timing window. |
IY65613
|
| Abstract | QUEUEBROWSER DOES NOT SHOW UNDERLYING JMSEXCEPTION |
| Users Affected | Any user of a QueueBrowser Platforms affected: All Distributed+Java |
| Error Description | When using the QueueBrowser to browse a queue, if there is any WMQ problem at MQGET time, it does not get sent back to the application. hasMoreElements() simply returns false. The JMS Specification defines the QueueBrowser as returning an Enumeration, which we do under the guise of an MQQueueEnumeration. This is perfectly valid but means neither hasMoreElements() nor nextElement() can throw a JMSException. |
| Problem Summary | It is not possible to throw a JMSException from an MQQueueEnumeration. |
| Problem Conclusion | The JMSException is now made available to an ExceptionListener on the Connection. The ExceptionListener must be implemented by the user. |
IY65612
|
| Abstract | CHANNEL FAILS WITH AMQ9636 WHEN DN OF A CERT CONTAINS ESCAPED CHARACTERS SUCH AS , AND SSLPEER IS SPECIFIED |
| Users Affected | Customers using SSL with certificates which have escape characters such as \, in the fields of the distinguished name. Platforms affected: All Unix |
| Error Description | WMQ is incorrectly comparing the fields of the DN containing the escaped characters with the corresponding fields in the SSLPEER channel attribute. e.g. if the distinguished name in the certificate was as follows: "CN=Before Comma\,After Comma,O=CompanyX,UK" and the SSLPEER attribute on the channel was specified identically, then the channel would be expected to successfully connect. However, due to the escaped \, the channel fails with AMQ9636. |
| Problem Summary | The failure is due to the comparison algorithm checking that the lengths of the two OU strings are equal, after the '\,' in the OU from the SSLPEER MQSC attribute had already been replaced with ','. Hence, the lengths do not match and the comparison fails. |
| Problem Conclusion | The code was fixed to correctly compare the fields. |
IY65447
|
| Abstract | MQ JAVA CLIENT ALLOWS THE GET OF A MESSAGE WHOSE SIZE IS GREATER THAN THE MAXMSGL VALUE ON THE SVRCONN CHANNEL |
| Users Affected | This affects users using any JAVA client to get a message off a queue. Platforms affected: All Distributed+Java |
| Error Description | There is no error message involved in this scenario. The MAXMSGL parameter is not honored when set on a SVRCONN channel and the MQ Java client is used to get a message. |
| Problem Summary | When a Java client attempts to get a message off a queue, the maximum message length set on the channel is ignored and the client can get a message with a size larger than the maximum message length set. This problem does not happen when we put a message to a queue. Only during a get. |
| Problem Conclusion | The Java client should duplicate the C client behaviour, which throws an MQRC 2010 when we attempt to get a message larger than the MaxMsgLength defined on the channel. There is no change to the externals of the product. We have added internal capability to negotiate the values, check the maximum length and throw an MQRC 2010 exception if necessary when we get messages. |
IY65287
|
| Abstract | UNBLOCK SIGUSR1,SIGUSR2 AND SIGALRM IN RUNMQTRM |
| Users Affected | If a triggered user application wants to make use of the signals SIGUSR1/SIGUSR2/SIGALRM the signals are not delivered to the application. Platforms affected: All Unix |
| Error Description | When the triggered application is started by runmqtrm of WMQ5.3, it cannot handle SIGUSR1, SIGUSR2 and SIGALRM. runmqtrm of WMQ5.3 masks these signals while runmqtrm of MQ5.2 does not. WMQ5.3 should unblock these signals before calling the user application. If the application code relies on signals and the application uses any other signals, then the customer has to ensure that they are unblocked when the application starts. |
| Problem Summary | runmqtrm of WMQ5.3 masks these signals. |
| Problem Conclusion | Unblock SIGUSR1, SIGUSR2 and SIGALRM before calling the application. |
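For application signals other than the three now unblocked by runmqtrm, the text above says the triggered application itself must ensure they are unblocked at startup. A minimal C sketch of doing that with sigprocmask, assuming a single-threaded triggered program (the function name is illustrative, not from WMQ):

    #include <signal.h>

    /* Unblock the signals this triggered application relies on, in case the
       process that started it left them blocked. */
    static void unblock_trigger_signals(void)
    {
        sigset_t set;

        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        sigaddset(&set, SIGUSR2);
        sigaddset(&set, SIGALRM);

        /* SIG_UNBLOCK removes only these signals from the blocked mask,
           leaving the rest of the mask untouched. */
        sigprocmask(SIG_UNBLOCK, &set, NULL);
    }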
IY65269
|
| Abstract | UNDEFINED SYMBOL ERROR DUE TO INCOMPATIBLE NAMEMANGLING SCHEMES |
| Users Affected | When the VisualAge C/C++ v6.0 compiler for AIX is used to compile C++ applications that use the WMQ C++ libraries at the WMQ5.3 level. Platforms affected: AIX |
| Error Description | Is it possible to use dlopen on a library that is built with the v5 name mangling even though the module that calls the dlopen was built with v6 name mangling. FURTHER INFORMATION: WMQ5.3 readme says: Chapter 20. Building your application on AIX -------------------------------------------- Section "Preparing C programs" add a note "When using the VisualAge C/C++ v6.0 compiler for C++ programs you must include the option "-q namemangling=v5" to get all the WMQ symbols resolved when linking the libraries." ------------------------------------------------------------ If -qnamemangling=v5 option is not used, following linker errors are reported: ld: 0711-317 ERROR: Undefined symbol: .ImqChl::setHeartBeatInterval(long) ld: 0711-317 ERROR: Undefined symbol: .ImqStr::cutOut (ImqStr&,char) ld: 0711-317 ERROR: Undefined symbol: .ImqChl::setTransportType (long) ld: 0711-317 ERROR: Undefined symbol: .ImqObj::setOpenOptions (long) ld: 0711-317 ERROR: Undefined symbol: .ImqCac::useEmptyBuffer (const char*,unsigned long) ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information. |
| Problem Summary | Namemangling scheme has undergone changes between VisualAge c/C++ v5.0 and v6.0 compilers. Therefore, any C++ application linking with WMQ5.3 C++ libraries will have to specify -qnamemangling=v5 compiler option to instruct the compiler about the namemangling scheme used in WMQ libraries for C++ programs. Not doing so results in linker errors. |
| Problem Conclusion | Guarded the WMQ C++ header files with #pragma namemangling directives. With this, the -qnamemangling compiler option need not be provided when VAC v6.0 or above is used. |
IY65033
|
| Abstract | CLIENT MQGET FAILS WITH 2009 MQRC_CONNECTION_BROKEN AFTER THE AMQRMPPA TERMINATES WITH SIGBUS DURING CONVERSION OF AN MQRFH2 |
| Users Affected | This issue only affects users with messages containing MQRFH2 headers with NameValueLength/NameValueData pairs which are not multiples of 4 Bytes in size. Platforms affected: HP-UX |
| Error Description | When performing a client MQGET call, with MQGMO_CONVERT specified, of a message containing an MQRFH2 header (with data elements which are not a multiple of 4 Bytes in size), the client received 2009 MQRC_CONNECTION_BROKEN. An FDC with probeid XC130003 is dumped by an amqrmppa process on the serving queue manager with 'Comment1 :- SIGBUS'. Note that this issue can also be observed by server bound WMQ applications performing an MQGET call with MQGMO_CONVERT specified on the same category of message. However, the SIGBUS will be received by the user application in this case, which will terminate. |
| Problem Summary | If an MQRFH2 message contains NameValueLength/NameValueData pairs where the length of the data is not a multiple of 4, this can cause the NameValueLength numerical field to be in an address in memory which is not 4 Byte aligned. WMQ was attempting to access a NameValueLength field which was not 4 Byte aligned during message conversion as part of an MQGET call from a client. This caused a SIGBUS error which subsequently terminated the amqrmppa channel pooling process which contained the SVRCONN channel performing the MQGET. This termination caused the 2009 MQRC_CONNECTION_BROKEN return code from the client MQGET call. |
| Problem Conclusion | As described in the Application Programming Reference book users should not specify values for NameValueLength in MQRFH2 headers which are not a multiple of 4. However, the SIGBUS behaviour of WMQ was not correct as it caused termination of the channel process. Due to this the WMQ code was changed to return MQRC_NOT_CONVERTED from the MQGET call to signify that conversion had failed, rather than terminating. However, WMQ remains unable to convert MQRFH2 headers where NameValueLength fields are not multiples of 4. |
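A small C sketch of keeping NameValueLength a multiple of 4 when building an MQRFH2 folder by hand, as the Application Programming Reference requires; the helper name and blank padding are illustrative assumptions, not WMQ code:

    #include <string.h>

    /* Copy an MQRFH2 folder string into 'dest' (which must be big enough),
       pad it with blanks to a 4-byte multiple, and return the padded length
       to be used as the NameValueLength field. */
    static size_t rfh2_folder_length(char *dest, const char *folder)
    {
        size_t len    = strlen(folder);
        size_t padded = (len + 3) & ~(size_t)3;   /* round up to multiple of 4 */

        memcpy(dest, folder, len);
        memset(dest + len, ' ', padded - len);    /* blank padding keeps the
                                                     folder well formed */
        return padded;
    }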
IY64640
|
| Abstract | MQRC_SYNCPOINT_NOT_AVAILABLE FROM AN MQPUT/MQGET CALL AFTER BEGINNING A NEW TRANSACTION IN TUXEDO USING DYNAMIC RESOURCE MANAGEMENT. |
| Users Affected | This issue only affects customers using BEA Tuxedo software to externally co-ordinate WebSphere MQ units of work. It only applies when the dynamic resource management interface into WMQ is being used by Tuxedo (the MQRMIXASwitchDynamic XA switch structure). Platforms affected: All Distributed |
| Error Description | The external symptoms are a 2072 MQRC_SYNCPOINT_NOT_AVAILABLE return code from an MQPUT/MQGET call after beginning a new transaction in Tuxedo, following a transaction which failed in the following specific way: An xa_rollback call was issued against WMQ for a transaction by Tuxedo while a thread of control (TOC) was still associated with the transaction in WMQ (this is outside of the XA specification, as discussed in IY50795 which provided a solution in the static registration case). |
| Problem Summary | WMQ was not removing the internal association between the TOC and the transaction when an xa_end call was subsequently made by Tuxedo (after the xa_rollback), on the TOC which was associated with the transaction in WMQ when the xa_rollback was called (the XA specification states that this must be made prior to the xa_rollback call). In the dynamic registration case, Tuxedo does not inform WMQ that a new unit of work has subsequently begun (using an xa_start call), instead WMQ uses an ax_reg call to request the XID of any current transaction from Tuxedo - this call is only made when the TOC in WMQ is not currently associated with a transaction. As the TOC remained associated with a transaction in WMQ, this ax_reg call was not being made by WMQ. |
| Problem Conclusion | The WebSphere MQ code was altered to bring the behaviour in the dynamic registration case in-line with the behaviour in the static registration case. |
IY64557
|
| Abstract | XC076001 ON LOCALE GB18030 |
| Users Affected | Users with Solaris or Linux machines with locale zh_CN.gb18030. Platforms affected: Linux,Solaris |
| Error Description | This applies to Linux and Solaris only. If the machine locale is zh_CN.gb18030, then XC076001 FFSTs are produced complaining that there is no conversion from 1383 -> 5488. This is true, because the message catalogue is supplied in codepage 1383, the EUC codepage for simplified Chinese, and indeed the conversion does not exist! The fix is to make a special case out of this in xcsOpenConversion - where fromCCSID is 1383 and toCCSID is 5488, make toCCSID 1386. The reason this works is that 1386 is the PC equivalent of 1383, and is a valid subset of GB18030. This means that the conversion from 1383 -> 1386 will produce all valid characters in GB18030. |
| Problem Summary | Message catalog supplied in a codepage for which there is no conversion to the codepage used in the machine locale. |
| Problem Conclusion | Amend the conversion routine to use a converter which does exist. |
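A sketch in C of the special case described above, i.e. substituting CCSID 1386 (the PC equivalent of 1383 and a valid subset of GB18030) as the conversion target; this restates the documented fix rather than reproducing the actual xcsOpenConversion source:

    /* Map the requested target CCSID for the message catalogue conversion. */
    static int map_catalogue_target_ccsid(int fromCCSID, int toCCSID)
    {
        /* No converter exists from 1383 (EUC simplified Chinese) to 5488
           (GB18030), so convert to 1386 instead; every character produced
           is also valid GB18030. */
        if (fromCCSID == 1383 && toCCSID == 5488)
            return 1386;
        return toCCSID;
    }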
IY64428
|
| Abstract | CLUSTER MIGRATION PROBLEM - WHEN A 5.3 QUEUE MANAGER IS ADDED TO A 5.2 CLUSTER, FDCS FROM ZXCRESTOREOBJECT ARE CREATED. |
| Users Affected | If user migrates from MQ v5.2 to MQ v5.3 and adds this MQ v5.3 queue manager to an existing MQ v5.2 cluster then an FDC from component zxcRestoreObject is created. Adding a 5.3 queue manager to a 5.2 cluster causes the MQCD structure length to be 1648 instead of 1748. Platforms affected: All Unix |
| Error Description | You migrate to MQ 5.3 and try to add the newly migrated machine to an existing MQ 5.2 cluster. When you do, you get an FDC with component zxcRestoreObject. Additional keywords: amqzxma0_nd Probe Id ZX054040 MQCD V7 length=1648 instead of 1748 AMQ9498 The MQCD structure supplied was not valid |
| Problem Summary | Migration Problem. |
| Problem Conclusion | The code no longer cuts an FDC and performs a MOD_EXIT here. Processing continues and ensures that the correct structure length is calculated. |
IY64363
|
| Abstract | SIGILL IN INIT_TRACE IN LIBMQJBND05.SO |
| Users Affected | Any user invoking WMQ classes for Java/JMS through JNI, in BINDINGS mode from an application statically linked with a library containing a function named init_trace() Platforms affected: All Unix |
| Error Description | Unexpected Signal : 4 (SIGILL) occurred in Function init_trace or cpic_time_to_wait located in the SAP library in librfc.a. This occurs when using WebSphere MQ v5.3 Java or JMS through JNI in an application statically linked with librfc.a. One of the functions in librfc.a is called init_trace. WMQ libmqjbnd05.so also has an init_trace function and since the application has been statically linked with librfc.a the init_trace from SAP gets picked up first, in place of the WMQ one, leading to the conflict and causing the SIGILL. ADDITIONAL KEYWORDS: MQLINK SAPLINK MQ LINK R/3 RFC SIG4 SIG 4 SeeBeyond An unexpected exception has been detected in native code outside the VM. |
| Problem Summary | Function name conflict between libmqjbnd05.so and another library using an init_trace function statically linked with an application accessing WMQ Java or JMS through JNI in BINDINGS mode. |
| Problem Conclusion | Function name has been changed. |
IY64349
|
| Abstract | CHANNEL HANG AFTER A COMMS FAILURE AS A RESULT OF A FAILED DNS LOOKUP. |
| Users Affected | Output of message to WMQ error logs contains the hostname and IP address. When looking up these fields the DNS lookup hangs, and so the SDR channel also appears to hang. Platforms affected: HP-UX |
| Error Description | After a comms failure on a channel the DNS reverse lookup fails, causing the channel to hang. The API call was made under lock, therefore later channels were forced to wait on the lock. The lock is never released and the channel appears hung. |
| Problem Summary | The DNS reverse lookup function gethostbyaddr() hangs or is very slow to return, causing the channel to appear to hang. |
| Problem Conclusion | Add a parameter DNSLookupOnError to the TCP stanza of qm.ini. The default is YES, which means that the current behaviour is not changed. If the parameter is set to NO, the DNS lookup is not performed, and so the output to the error message will contain only the IP address. |
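A sketch of how the new parameter sits in the queue manager's qm.ini, assuming the usual stanza keyword=value layout; setting NO suppresses the reverse lookup so error log messages show only the IP address:

    TCP:
       DNSLookupOnError=NO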
IY63833
|
| Abstract | C++ SSL CONNECTION FAILS WITH 2381 MQRC_KEY_REPOSITORY_ERROR. |
| Users Affected | Customers using the C++ bindings for WMQ to create an SSL secured client connection. Platforms affected: All Distributed |
| Error Description | When using the C++ bindings for WMQ with the ImqChannel::setSslCipherSpecification and ImqQueueManager::setKeyRepository calls to specify the cipher spec and key repository locations for an SSL secured client connection, the connection fails with 2381 MQRC_KEY_REPOSITORY_ERROR. |
| Problem Summary | The C++ bindings provide the call ImqQueueManager::setKeyRepository to specify the key repository to use for a client connection. This causes an MQSCO structure to be created during ImqQueueManager::connect and to be referenced from the MQCNO structure passed to the MQCONNX call. However, the C++ bindings are not incrementing the Version of the MQCNO structure to MQCNO_VERSION_4, so the MQSCO structure is ignored. |
| Problem Conclusion | A code change was made to increment to the version of the MQCNO structure passed to the MQCONNX call to MQCNO_VERSION_4 when a key repository is specified. |
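For reference, a hedged C sketch of the equivalent MQI calls for an SSL client connection, illustrating why the MQCNO version matters: the MQSCO carrying the key repository is only honoured when the MQCNO version is at least MQCNO_VERSION_4. The queue manager name, channel, connection name, CipherSpec and key repository path are placeholders, not values from the APAR:

    #include <string.h>
    #include <cmqc.h>       /* MQCNO, MQSCO, MQCONNX */
    #include <cmqxc.h>      /* MQCD client channel definition */

    void connect_with_ssl(void)
    {
        MQCNO   cno = {MQCNO_DEFAULT};
        MQSCO   sco = {MQSCO_DEFAULT};
        MQCD    cd  = {MQCD_CLIENT_CONN_DEFAULT};
        MQHCONN hConn;
        MQLONG  compCode, reason;
        MQCHAR  qmName[MQ_Q_MGR_NAME_LENGTH] = "QM1";   /* placeholder */

        strncpy(cd.ChannelName, "SSL.SVRCONN", MQ_CHANNEL_NAME_LENGTH);
        strncpy(cd.ConnectionName, "host(1414)", MQ_CONN_NAME_LENGTH);
        strncpy(cd.SSLCipherSpec, "TRIPLE_DES_SHA_US", MQ_SSL_CIPHER_SPEC_LENGTH);
        cd.TransportType = MQXPT_TCP;
        cd.Version       = MQCD_VERSION_7;      /* SSLCipherSpec needs MQCD V7 */

        strncpy(sco.KeyRepository, "/var/mqm/ssl/key",
                MQ_SSL_KEY_REPOSITORY_LENGTH);

        cno.ClientConnPtr = &cd;
        cno.SSLConfigPtr  = &sco;
        cno.Version       = MQCNO_VERSION_4;    /* without this the MQSCO is
                                                   ignored and 2381 can follow */

        MQCONNX(qmName, &cno, &hConn, &compCode, &reason);
    }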
IY63820
|
| Abstract | UNBLOCK SIGUSR1 BEFORE CALLING THE USER APPLICATION. RUNMQTRM OF MQ5.3 MASK SIGNAL WHILE MQ 5.2 DOES NOT. |
| Users Affected | If a triggered user app wants to make use of the signal SIGUSR1 the signal is not delivered to the app. Platforms affected: All Distributed |
| Error Description | When the triggered application is started by runmqtrm of WMQ5.3 CSD04, it cannot handle SIGUSR1. runmqtrm of WMQ5.3 masks the signal while runmqtrm of MQ5.2 does not. MQ 5.3 should unblock SIGUSR1 before calling the user application. |
| Problem Summary | Triggered apps inherit the signal disposition from the trigger monitor. Since this is a threaded WMQ app, all signals are blocked on the main thread, and so signals are not delivered to triggered apps. |
| Problem Conclusion | Unblock SIGUSR1 before calling the app. |
IY63458
|
| Abstract | SSLPEER VALUE IS INCORRECTLY CHECKED BY WEBSPHERE MQ WHICH RESULTS IN CHANNEL FAILURE, INCLUDING ERROR AMQ9636 |
| Users Affected | You may see this problem if you use SSL channels and specify an SSLPEER value. Note that this problem will only affect some SSLPEER values (see 'problem summary') so if you have set up SSL and SSLPEER values and it is working OK then you will not see this problem 'develop' at a later stage. Platforms affected: All Distributed |
| Error Description | The customer has correctly specified an SSLPEER value (which matched the corresponding certificate) but the channel failed to start and gave error message AMQ9636. |
| Problem Summary | The problem was caused by how WebSphere MQ pattern matched within the SSLPEER value. The problem could cause WebSphere MQ to incorrectly interpret some DN attribute values as other attribute values. |
| Problem Conclusion | The algorithm has been corrected so that WebSphere MQ now ensures that it is comparing the correct attributes with one another. |
IY63426
|
| Abstract | REFRESH CLUSTER REPOSITORY MANAGER REPEATEDLY ENDS ABNORMALLY. MISCALCULATING SSLPEERNAMEPTR, AN INTERNAL POINTER TO SSL PEER |
| Users Affected | Users utilising clustering and SSL. Platforms affected: All Unix |
| Error Description | After a REFRESH CLUSTER customer repository manager repeatedly ended abnormally. We are miscalculating the SSLPeerNamePtr, an internal pointer to the SSL peer name string. The error will typically manifest as a SIGSEGV FDC from within the rrmPCFFromSSLPeerName function. |
| Problem Summary | Mishandling of the SSL peer name in certain circumstances, dependent on the utilisation of clustering as a whole. Because these circumstances are difficult or impossible to delineate, customers utilising clustering with SSL are recommended to apply this fix. |
| Problem Conclusion | Corrected the handling of the SSL peer name in a clustering context. |
IY63171
|
| Abstract | HEAVY LOCK CONTENTION UNDER STRESS, AND REPORT LONG LOCK WAIT FDCS |
| Users Affected | Users running large numbers of clients on AIX, (and likely also Solaris), where the system is under stress. This problem is only encountered if runmqlsr is run at a lower priority (higher Unix "nice" value) than the WMQ queue manager processes. Platforms affected: AIX,Solaris |
| Error Description | When attempting to ramp up client connections using WMQ 5.3 CSD07 the system slowed to a crawl and WMQ stopped processing. This occurred at 1000+ client connections; eventually the client received 2059 errors (qmgr not available). The WMQ diagnostic script "stackit" hung with dbx trying to attach to an amqrmppa process. |
| Problem Summary | This is an operating system scheduling problem that can be encountered if there are a number of ready-to-run threads (such as WMQ agent threads), and a lower-priority thread (such as an amqrmppa thread where the runmqlsr was issued in the background). In this situation, the operating system may focus all the CPU time available on the higher-priority threads, and not give any time to the lower priority thread. If the lower priority thread holds a lock that the higher priority threads are spinning on, then the system can max-out on CPU and hang. Note that attempts to attach (e.g. via dbx or truss) to a process which isn't being given any CPU time, result in the attacher hanging on AIX. |
| Problem Conclusion | Firstly, it is important that all processes utilising WMQ are run with the same Unix "nice" value. Be aware that running a process in the background can result in the invoking shell giving it a higher nice value (and hence a lower priority). This can have general WMQ performance implications and, under highly stressed conditions, may cause a hang. If you want to run a process utilising WMQ in the background, check that the shell does not give it a higher nice value; verify this by looking at the NI column of a "ps -efl" listing. In ksh, use "set +o bgnice" to stop ksh from raising the nice value of backgrounded processes. Secondly, the WMQ spinlocking algorithm has been revised to reduce the load when a spinlock becomes heavily contended, which in itself should ensure that the operating system gives some CPU time to lower-priority processes should there be any. A "long lock wait" FDC has also been added to the spinlocking to aid diagnosis of any future issues, and an explicit check has been coded within this FDC to try to detect and report the disparate "nice" values situation. This problem has been seen on AIX; tests indicate that the same kind of situation could also arise on Solaris. A system which hangs because of this problem can be persuaded out of the hang by renicing the lower-priority amqrmppa processes (log on as root and use the renice command). As a general recommendation, on all Unix platforms processes utilising WMQ should be run at the same, standard nice value. A generic illustration of the spin-then-yield approach follows. |
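The sketch below is a generic, hedged illustration of the spin-then-yield idea described above; it is not WMQ's internal locking code, and the spin and yield limits are arbitrary. A waiter stops burning CPU after a bounded number of spins, which gives the operating system a chance to schedule a (possibly lower-priority) lock holder, and a very long wait is reported for diagnosis, analogous to the new "long lock wait" FDC.

    #include <sched.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define SPIN_LIMIT       1000     /* spins before yielding the CPU        */
    #define LONG_WAIT_YIELDS 100000   /* yields before reporting a long wait  */

    static void spin_lock_acquire(atomic_flag *lock)
    {
        unsigned long yields = 0;

        for (;;)
        {
            for (int i = 0; i < SPIN_LIMIT; i++)
                if (!atomic_flag_test_and_set(lock))
                    return;                  /* lock acquired                      */

            sched_yield();                   /* let a lower-priority holder run    */

            if (++yields == LONG_WAIT_YIELDS)
                fprintf(stderr, "long lock wait detected\n");  /* diagnostic report */
        }
    }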
IY63056
|
| Abstract | CRTMQM HANGS ON LINUX SYSTEMS. |
| Users Affected | Users for whom creating a queue manager appears to hang. Platforms affected: Linux |
| Error Description | The authorisation check of groups never completes, which causes a WMQ program to appear to hang. This can affect crtmqm or an application. |
| Problem Summary | The test for the end of the group list was not correct in all cases. |
| Problem Conclusion | Correct the test as directed by a Linux support group. |
IY59833
|
| Abstract | CHANNEL GOES TO INITIALIZING WHEN STARTED FROM SCRIPT. |
| Users Affected | Users on distributed platforms. Platforms affected: All Distributed |
| Error Description | If more than 10 channels are started through a script one after another, one of the channels goes into INITIALIZING state and takes 5 minutes to reach RUNNING state. |
| Problem Summary | The problem is due to a race condition created when channels are started through a script. |
| Problem Conclusion | The problem was caused by multiple processes contending to update channel status; the fix eliminates this race condition. |
IC45588
|
| Abstract | JMS CLIENT APPLICATION FAIL WITH 2009 (MQRC_CONNECTION_BROKEN) AT RECEIVE |
| Users Affected | All users of JMS Client subscribers Platforms affected: All Distributed+Java |
| Error Description | JMS Client subscriber application runs normally then for no apparent reason receives 2009 (MQRC_CONNECTION_BROKEN). The WMQ qmgr server log shows AMQ9209 (Connection to host 'hostname (ip address)' closed) indicating that it is the client which has shut down the connection. Sometimes, this is accompanied by CO000044 FDC with Major Errorcode rrcE_BAD_PARAMETER and a whole Transmission segment as Excess received bytes. |
| Problem Summary | The problem is caused by an error in the recalculation of the size of the buffer used for the receive and the order of the underlying MQGET. This leads to the Client closing the connection. |
| Problem Conclusion | The order of the underlying MQGET has been modified. |
IC45412
|
| Abstract | MULTI-THREADED APPLICATIONS MIGHT CAUSE AMQ8074 ERROR POST CSD08 |
| Users Affected | Users running multi-threaded applications with CSD08 or CSD09 Platforms affected: Windows |
| Error Description | After installing CSD08 or CSD09, running multi-threaded applications might cause an AMQ8074 error if the threads run under more than one user id. This causes some of the threads to fail with a 2035 (MQRC_NOT_AUTHORIZED) error. |
| Problem Summary | Caused by application threads running with different user ids and thread details not being updated properly. Users experiencing this problem will get an AMQ8074 error and the thread will terminate with a 2035 (MQRC_NOT_AUTHORIZED) error. |
| Problem Conclusion | Code changes have been implemented which allow multi-threaded applications to run with more than one user id. |
IC44243
|
| Abstract | WEBSPHERE MAY FORGET MDB MESSAGE SELECTORS AFTER A LONG TIME |
| Users Affected | Platforms affected: All Distributed+Java |
| Error Description | An application installed under WebSphere which uses MDBs may define message selectors in its ejb-jar.xml file. Selectors control which messages are passed to the application's MDBs. After running correctly for some time, WebSphere may pass messages to MDBs which do not match the configured selection criteria. |
| Problem Summary | The MQ Queue Agent keeps a count of how many MDBs using selectors are currently listening on the queue. On some occasions, when the browsers were stopped, this value was decremented twice for the same MDB. If the value went below 0, then when new browsers were started and the selector count was increased, the Queue Agent would not recognise that selectors were in use. Secondly, there was a thread race condition when messages were put to a queue at a certain point as a browser was starting: the queue agent would see no selectors in use, then find a message, then the new browser with a selector would start and immediately be given the message. |
| Problem Conclusion | The fix prevents the count from decrementing below 0, and browsers are now started correctly by synchronising on the queue agent thread. |
IC43947
|
| Abstract | HAMVMQM DOES NOT PUT MQM AUTHORITY ON THE DIRECTORIES |
| Users Affected | All users of Microsoft Clustering (MSCS) Platforms affected: Windows |
| Error Description | On a node swap we reapply security to all files, but not to the directories associated with the queue manager. If the systems are not domain controllers, the mqm group has a different SID on each node. A symptom of this is damaged temporary dynamic queues. |
| Problem Summary | A queue manager under MSCS control may get access denied messages following a node change when trying to delete dynamic queues created by the other node. These may manifest as errors including probe AD034001 (adiDeleteDir) with Rc=5 (ACCESS DENIED) from RemoveDirectory. |
| Problem Conclusion | The MSCS resource DLL has been modified so that, following a node swap, as well as putting local mqm authority on the files it now also puts local mqm authority on the directories, enabling them to be deleted on a node other than the one where they were created. |
IC43915
|
| Abstract | MQ .NET (DOTNET) WRITEBYTES TRANSLATES CHARACTERS INCORRECTLY |
| Users Affected | All users of the .net (dotnet) interface and the writeBytes method. Platforms affected: Windows |
| Error Description | The writeBytes() method provided with the MQ .NET interface behaves inconsistently with the writeBytes() method in the MQ Java interface. Some international characters may be converted to 0x3F (unknown) as a result. |
| Problem Summary | The writeBytes method is documented as adding the string and stripping out the high-order bytes; however, it was implemented by converting to an ANSI code page, which lost some of the national language characters. |
| Problem Conclusion | The code has been modified to behave as per the Java implementation of MQMessage writeString, and now strips off the high-order bytes when inserting a Unicode string. |
IC43892
|
| Abstract | MQ SAMPLE PROGRAM AMQSBLST GENERATES DUPLICATE MQMD.MSGID VALUES |
| Users Affected | All users of the amqsblst sample application. Platforms affected: All Distributed |
| Error Description | The amqsblst sample program and its source code amqsblst.c may generate messages with the same MQMD.MsgId value, which may in turn cause problems for other applications including JMS applications. |
| Problem Summary | When messages are put using the amqsblst application, the MQMD.MsgId is not reset for each message. If those messages are picked up by an MDB as poison messages, the listener port may be stopped because all the messages have the same message id. |
| Problem Conclusion | The MQMD.MsgId is now set to MQMI_NONE before each message is put to the queue, as in the sketch below. |
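The following is a minimal, hedged C sketch of the technique described above: MQMD.MsgId (and, for good measure, CorrelId) is reset before every MQPUT so that the queue manager generates a unique MsgId for each message. hConn and hObj are assumed to be an existing connection handle and a queue already opened for output; this illustrates the change, not the actual amqsblst source.

    #include <cmqc.h>
    #include <string.h>

    static void put_messages(MQHCONN hConn, MQHOBJ hObj, int count)
    {
        MQMD   md  = {MQMD_DEFAULT};
        MQPMO  pmo = {MQPMO_DEFAULT};
        MQLONG compCode, reason;
        char   buffer[] = "test message";

        for (int i = 0; i < count; i++)
        {
            memcpy(md.MsgId,    MQMI_NONE, sizeof(md.MsgId));    /* new MsgId per put  */
            memcpy(md.CorrelId, MQCI_NONE, sizeof(md.CorrelId)); /* clear any CorrelId */

            MQPUT(hConn, hObj, &md, &pmo, (MQLONG)strlen(buffer), buffer,
                  &compCode, &reason);
        }
    }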
IC43762
|
| Abstract | DEFAULT PERSONAL CERT IS USED TO AUTHORISE CLIENT DURING SSL HANDSHAKE WHEN IBMWEBSPHEREMQUSERNAME CERT DOES NOT MATCH |
| Users Affected | Customers using the gsk6ikm GUI to manage a client certificate key store, and using the labels of personal certificates in the client key store to ensure that only certain userids can connect to a queue manager. Platforms affected: All Unix |
| Error Description | Customer has no personal certificate for the current userid in a GSKit client key store (i.e. a personal cert with friendly name ibmwebspheremq). However, the client connection to the server queue manager succeeds, even though SSLCAUTH(REQUIRED) is specified on the SVRCONN channel definition. |
| Problem Summary | GSKit key stores have the ability to mark a particular personal certificate as the default certificate for the key store. When importing a certificate using the command line (gsk6cmd) interface a certificate is not marked as the default (unless the gsk6cmd -cert -setdefault command is used). However, when importing a certificate using the GUI (gsk6ikm) interface, the first certificate imported automatically becomes the default certificate (marked with an asterisk in the GUI). Once any certificate has been marked as the default in the key store, it is only possible to change which certificate is the default - it is not possible to make no certificate the default (unless all personal keys are exported and removed from the key store and then imported back in using the command line). WMQ was instructing GSKit to use the default certificate when no matching ibmwebspheremq certificate was found, and as such a client certificate was being flowed for userids where no matching ibmwebspheremq certificate was found. |
| Problem Conclusion | The WMQ code was changed to ensure that if no matching ibmwebspheremq certificate is found, then no client certificate will be flowed (even if a certificate is marked as default in the GSKit key store). The 'Testing for failure of SSL client authentication' section in Chapter 15 of the Security book was updated to remove the following bullet point: "The default certificate (which might be the ibmwebspheremq certificate)." |
IC43533
|
| Abstract | CLUSTER: ONLY SUSPEND OR RESUME CLUSTER QUEUE MANAGER OBJECTS WHICH ARE NOT DELETED. |
| Users Affected | All users of WebSphere MQ and clustering Platforms affected: All Distributed |
| Error Description | With two cluster receiver channels defined, the queue manager is suspended from the cluster, then one of the channels is stopped and deleted. This gets propagated around the network correctly. However, when the queue manager is resumed in the cluster, the deleted channel now reappears as a definition on the full repository. |
| Problem Summary | When a queue manager is resumed in a cluster, it may restore previously deleted cluster queue manager entries on the full repository if the entries were suspended before they were deleted. |
| Problem Conclusion | WebSphere MQ has been modified to ensure a resume command only impacts entries which are suspended and not deleted. |
IC43515
|
| Abstract | COM.IBM.MQJMS.JAR CONTAINS AN INCORRECT CLASS-PATH STATEMENT IN THE FILE MANIFEST.MF. |
| Users Affected | This problem affects all customers who use the Java Message Service (JMS) functionality provided with WebSphere MQ 5.3. Platforms affected: All Distributed+Java |
| Error Description | The file com.ibm.mqjms.jar provided with WebSphere MQ 5.3 contains a file called MANIFEST.MF. This file provides important information about the JAR file. One of the entries in the file is: Class-Path: com.ibm.mq.jms.jar According to the documentation on the Sun website, the Class-Path statement in a JAR's MANIFEST file: "specifies the relative URLs of the extensions or libraries that this application or extension needs. URLs are separated by one or more spaces. The application or extension class loader uses the value of this attribute to construct its internal search path." Based on this information, it would appear that the com.ibm.mqjms.jar file has a dependency on com.ibm.mq.jms.jar, which doesn't exist! This doesn't cause any real problems, but if the com.ibm.mqjms.jar file is loaded into a Java development environment that performs JAR validation (such as WebSphere Studio Application Developer - WSAD), an error might be reported stating that the JAR is invalid. |
| Problem Summary | This problem was caused by an incorrect Class-Path entry in the MANIFEST.MF file of the JAR file com.ibm.mqjms.jar. The JAR file actually has a dependency on com.ibm.mq.jar, not com.ibm.mq.jms.jar. |
| Problem Conclusion | The Class-Path statement in the MANIFEST.MF file has been changed to read: Class-Path: com.ibm.mq.jar |
IC43190
|
| Abstract | HANDLING OF AN LU6.2 TIMEOUT IN THE WINDOWS RUNMQLSR PROGRAM |
| Users Affected | Users on MQ5.3 Windows - who use LU6.2 listener Platforms affected: Windows |
| Error Description | MQ problem with the handling of an LU6.2 timeout in the Windows runmqlsr program. The docs say that the cmaccp call, which runmqlsr uses to accept an incoming communication, will block until the time limit given by receive_allocate_timeout is reached. By default, this means that the cmaccp call will timeout in 3600s (1 hour) on Windows. When it times out, cmaccp (or cmwait, if we are not blocking) will return the code CM_PROGRAM_STATE_CHECK. Right now the MQ listener treats this as a fatal error and ends with an AMQ9210 message. However, it appears we need only retry the cmaccp call again to continue normally. |
| Problem Summary | 'AMQ9210: Remote attachment failed.' error occurs. This error gives a CPIC return code of 25, which is CM_PROGRAM_STATE_CHECK. The call was not issued in an allowed conversation state. |
| Problem Conclusion | The 'cmaccp' call, which runmqlsr uses to accept an incoming communication, blocks until the time limit given by 'receive_allocate_timeout' is reached; by default this means the cmaccp call times out after 3600s (1 hour) on Windows and returns CM_PROGRAM_STATE_CHECK. The listener no longer treats this as a fatal error ending with an AMQ9210 message; it now retries the cmaccp call and continues normally, as sketched below. |
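The following is a hedged C sketch of the retry behaviour described above. It assumes the standard CPI-C definitions (for example from a header such as wincpic.h) and the two-parameter cmaccp(conversation_ID, return_code) call; it is an illustration of the retry loop only, not the actual runmqlsr source, and the helper function name is hypothetical.

    #include <wincpic.h>   /* assumed CPI-C header providing cmaccp and CM_* codes */

    static int accept_conversation(unsigned char conv_id[8])
    {
        CM_INT32 rc;

        for (;;)
        {
            cmaccp(conv_id, &rc);         /* blocks until a partner allocates, or   */
                                          /* receive_allocate_timeout expires       */
            if (rc == CM_OK)
                return 0;                 /* a conversation arrived: hand it off    */

            if (rc == CM_PROGRAM_STATE_CHECK)
                continue;                 /* timeout: simply re-issue the accept    */

            return (int)rc;               /* any other return code is a real error  */
        }
    }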
IC43176
|
| Abstract | COMMAND SERVER REPORTS RC 2005 |
| Users Affected | Customers using remote administration using runmqsc. Platforms affected: All Distributed |
| Error Description | From the command server trace we clearly see the error message on the MQPUT1: msgid:20008507 a1:000007D5 a2:000007F3 c1:SYSTEM.ADMIN.COMMAND, where msgid 20008507 is pcmPUT_DEAD_LETTER_FAILURE: an attempt by the command server to put a message to the dead-letter queue, using MQPUT1, failed with reason code 2005. The MQDLH reason code was 2035. The user in question was mqm, which is not a valid Windows user; however, we should not have failed to send the reply back. The reason for the failure was the buffer length on the MQPUT1, which seemed to be set to -1. |
| Problem Summary | The MQPUT1 fails with 2005 (MQRC_BUFFER_LENGTH_ERROR): msgid:20008507 a1:000007D5 a2:000007F3 c1:SYSTEM.ADMIN.COMMAND, where msgid 20008507 is pcmPUT_DEAD_LETTER_FAILURE: an attempt by the command server to put a message to the dead-letter queue, using MQPUT1, failed with reason code 2005. The MQDLH reason code was 2035. The user in question was mqm, which is not a valid Windows user; however, we should not have failed to send the reply back. |
| Problem Conclusion | The failure was caused by the buffer length passed on the MQPUT1, which was set to -1. The command server inquires the maximum message length of the dead-letter queue, but in this case the inquiry failed with MQCC_WARNING, and the buffer length calculated from the result of that inquiry is where the error lay. An illustrative MQPUT1 call follows. |
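The following is a hedged C sketch of the kind of MQPUT1 call discussed above, with an explicit, non-negative BufferLength. The queue name, message descriptor and buffer are illustrative placeholders; the point is simply that a negative BufferLength such as -1 is rejected with 2005 (MQRC_BUFFER_LENGTH_ERROR). This is not the command server's internal code.

    #include <cmqc.h>
    #include <string.h>

    static void put_to_dlq(MQHCONN hConn, MQMD *md, void *msgBuf, MQLONG msgLen)
    {
        MQOD   od  = {MQOD_DEFAULT};
        MQPMO  pmo = {MQPMO_DEFAULT};
        MQLONG compCode, reason;

        /* The default dead-letter queue name is used here for illustration. */
        strncpy(od.ObjectName, "SYSTEM.DEAD.LETTER.QUEUE", MQ_Q_NAME_LENGTH);

        /* BufferLength must be the actual data length (zero or greater);
           passing -1 fails with 2005 (MQRC_BUFFER_LENGTH_ERROR).            */
        MQPUT1(hConn, &od, md, &pmo, msgLen, msgBuf, &compCode, &reason);
    }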
IC43014
|
| Abstract | FAILURE TO RETRIEVE A LARGE SEGMENTED MESSAGE WITH MQGMO_BROWSE_MSG_UNDER_CURSOR |
| Users Affected | All users who have messages comprising more than 512 segments. Platforms affected: All Distributed |
| Error Description | The customer has a message on a queue which comprises nearly 2000 segments. They issue a browse + lock + complete message with a zero-length buffer to work out the size of the message to get, and then a browse message under cursor + lock + complete message to retrieve the whole message. This works on 5.2 but fails on 5.3. |
| Problem Summary | When processing an MQGET with MQGMO_COMPLETE_MSG and MQGMO_BROWSE_MSG_UNDER_CURSOR, and the message comprises more than 512 segments, the get may fail to locate the message and return 2033 MQRC_NO_MSG_AVAILABLE. |
| Problem Conclusion | WebSphere MQ has been modified to correctly handle an MQGET of a complete message which is segmented beyond 512 segments. |
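The following is a hedged C sketch of the two-step retrieval pattern described above: a zero-length browse with MQGMO_COMPLETE_MSG to learn the reassembled message length, followed by a browse of the message under the cursor with a full-size buffer. The customer's scenario also used MQGMO_LOCK, which is omitted here for brevity; hConn and hObj are assumed to be an existing connection and a queue opened with MQOO_BROWSE, and error handling is minimal. This illustrates the application pattern, not the fixed product code.

    #include <cmqc.h>
    #include <stdlib.h>
    #include <string.h>

    static char *browse_complete_message(MQHCONN hConn, MQHOBJ hObj, MQLONG *msgLen)
    {
        MQMD   md  = {MQMD_DEFAULT};
        MQGMO  gmo = {MQGMO_DEFAULT};
        MQLONG compCode, reason;
        char  *buffer;

        gmo.Version      = MQGMO_VERSION_2;
        gmo.MatchOptions = MQMO_NONE;

        /* Step 1: browse with a zero-length buffer to discover the full size
           of the reassembled (logical) message.                              */
        gmo.Options = MQGMO_BROWSE_FIRST | MQGMO_COMPLETE_MSG
                    | MQGMO_ACCEPT_TRUNCATED_MSG;
        MQGET(hConn, hObj, &md, &gmo, 0, NULL, msgLen, &compCode, &reason);
        /* Expect MQRC_TRUNCATED_MSG_ACCEPTED; *msgLen is the full length.    */

        buffer = malloc(*msgLen);

        /* Step 2: browse the complete message under the browse cursor.       */
        memcpy(md.MsgId,    MQMI_NONE, sizeof(md.MsgId));
        memcpy(md.CorrelId, MQCI_NONE, sizeof(md.CorrelId));
        gmo.Options = MQGMO_BROWSE_MSG_UNDER_CURSOR | MQGMO_COMPLETE_MSG;
        MQGET(hConn, hObj, &md, &gmo, *msgLen, buffer, msgLen, &compCode, &reason);

        return buffer;
    }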
IC42753
|
| Abstract | AMQMSRVN DOES NOT HANDLE SPECIAL CHARACTERS SUCH AS - IN PASSWORD. |
| Users Affected | Customers who use the amqmsrvn -user -password command on Windows to set the username and password for the IBMMQSeries service. If the password contains special characters (\ or -), the password is not accepted in full: it is silently truncated at the first such character, and only the characters to the left of it are accepted, without any external error. Platforms affected: Windows |
| Error Description | When using AMQMSRVN to change the user name associated with WebSphere MQ Services from MUSR_MQADMIN to a W2K domain account, if the domain account has a password containing a special character such as a dash (-), AMQMSRVN does not handle it properly: it only takes the password up to the character before the dash. In addition, it tries to start the MQ service three times using this truncated password, which results in the domain account being suspended. It appears that the -, which precedes the AMQMSRVN user and password parameters, is treated as a delimiter. |
| Problem Summary | The problem was caused by the parsing logic used for the command amqmsrvn -user -password. Although a Windows password may contain any special characters, the amqmsrvn parsing logic used - or / as delimiters, which caused the limitation. |
| Problem Conclusion | There are no external symptoms because the amqmsrvn command silently ignored the - and / special characters; the problem is only noticed when trying to start the IBMMQSeries service. |
90564
| Abstract | SUPPORT ADDED FOR USE OF ';' AND '#' CHARACTERS IN XA_OPEN STRING, AS REQUIRED BY INFORMIX |
| Users Affected | This affects customers on the UNIX platforms where WMQ is co-ordinating global units of work which include updates to an Informix database. Platforms affected: AIX,HP-UX,Solaris |
| Error Description | Users configuring WMQ to act as a transaction manager for an Informix database may require that ';' or '#' characters are passed to Informix in the xa_open string (the XAOpenString parameter in an XAResourceManager stanza). However, when a ';' or '#' character was specified in the xa_open string, WMQ failed to include the Informix database in global units of work. |
| Problem Summary | On UNIX queue managers the ';' and '#' characters are treated by WMQ as comment characters in qm.ini, and as such they could not be included in any parameter value. |
| Problem Conclusion | An escape character '\' has been introduced to allow ';' and '#' characters to be specified in the XAOpenString parameter of a XAResourceManager stanza. This allows the strings '\;' or '\#' to be used to specify these characters. If an existing parameter includes a '\' character, but this is not followed by either a ';' or '#' character, then the interpretation of this parameter will not change following application of this fix. |
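The following qm.ini excerpt is a hedged illustration of the escape character described above; the resource manager name, switch file path and open-string contents are hypothetical placeholders. The '\;' and '\#' sequences cause literal ';' and '#' characters to be passed through to the resource manager's xa_open call instead of being treated as qm.ini comment characters.

    XAResourceManager:
       Name=INFORMIX
       SwitchFile=/path/to/informix/switch/file
       XAOpenString=first-part\;second-part\#third-part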
90299
| Abstract | QUEUE MANAGER CANNOT RESTART AND/OR DAMAGED OBJECTS WITH POTENTIAL MESSAGE CORRUPTION FOLLOWING RESTART. |
| Users Affected | The restartability problem and damaged objects may be encountered on queue manager restart where the queue manager needs to recover transactions due to prior problems. This may affect a general class of users. Platforms affected: All Distributed |
| Error Description | Inability to restart a queue manager. The queue manager attempts to restart, but finds that logged records are inconsistent and cannot be made consistent. The queue manager is unable to restart, dumps one or more FDCs, e.g. reporting "damaged object", and ends. This situation may not be recoverable. It may therefore be necessary to "cold start" the queue manager (i.e. replace its log files with empty ones), or recreate it. |
| Problem Summary | Problems with the handling of recovery of transactions. |
| Problem Conclusion | Corrected the handling of transaction recovery, and corrected the logging of transaction-related log records. |
87597
| Abstract | CLIENT APPLICATIONS SEGV WHEN USING LARGE NUMBER OF THREADS SIMULTANEOUSLY CONNECTED TO WEBSPHERE MQ. |
| Users Affected | Users with applications using an MQ client which make large numbers of connections to MQ at the same time. Platforms affected: All Unix,Windows |
| Error Description | Client applications SEGV when using a large number of threads simultaneously connected to WebSphere MQ. |
| Problem Summary | A function was overwriting memory with zeros. |
| Problem Conclusion | Correct the overwrite. |
82774
| Abstract | PROVIDE PID AND PROGRAM NAME OF ASYNCHRONOUS SIGNAL SENDER |
| Users Affected | Anyone needing to determine who sent an asynchronous signal to a WMQ process. Platforms affected: All Unix |
| Error Description | This is an enhancement to the standard WMQ FDC reporting receipt of an asynchronous signal. The FDC header information will now report the pid of the process that sent the signal, in the "Comment2" part of the FDC header. In addition, if the process has permission it will obtain the process name and report that in a "Comment3" part. In a further change, the formatting of the FDC comment parts has been improved to avoid unnecessary wrap-around. |
| Problem Summary | No problem, but it can sometimes be useful to know who sent an asynchronous signal. |
| Problem Conclusion | Utilised the available Unix signal information facilities. |
[{"Product":{"code":"SSFKSJ","label":"WebSphere MQ"},"Business Unit":{"code":"BU053","label":"Cloud & Data Platform"},"Component":"APAR \/ Maintenance","Platform":[{"code":"PF002","label":"AIX"},{"code":"PF010","label":"HP-UX"},{"code":"PF016","label":"Linux"},{"code":"PF012","label":"IBM i"},{"code":"PF027","label":"Solaris"},{"code":"PF033","label":"Windows"}],"Version":"5.3","Edition":"","Line of Business":{"code":"LOB45","label":"Automation"}}]
Product Synonym
WMQ MQ
Document Information
Modified date:
17 June 2018
UID
swg27007222