Fixes are available
APAR status
Closed as program error.
Error description
An application issues MQGET calls using WebSphere MQ (WMQ) V7. The number of messages retrieved is fewer than the number of messages put to the queue. The final MQGET returns 2033 (MQRC_NO_MSG_AVAILABLE) and the queue is empty. After the application is changed to inquire on CURDEPTH before each MQGET, the trace shows that on some calls two MQGETs occur between successive MQINQ calls, and the CURDEPTH value decreases by 2. Here is an example of the call sequence:
- MQINQ returns CURDEPTH 10 for queue TEST1
- MQGET retrieves message 10 from queue TEST1
- MQINQ returns CURDEPTH 9 for queue TEST1
- MQGET retrieves message 9 from queue TEST1
- A second MQGET retrieves message 8 from queue TEST1
- MQINQ returns CURDEPTH 7 for queue TEST1
After checking the messages received by the application, message 9 is missing.
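The inquire-before-get diagnostic described above can be sketched as follows (a minimal sketch that uses a plain in-memory Python queue as a stand-in for the real queue; `q.qsize()` plays the role of MQINQ on MQIA_CURRENT_Q_DEPTH and `q.get_nowait()` plays the role of MQGET — none of these names are MQI calls):

```python
import queue

def drain_with_depth_check(q):
    """Inquire the depth before every get and record both, mirroring
    the diagnostic loop in the error description."""
    trace = []
    while not q.empty():
        depth = q.qsize()         # stand-in for "MQINQ returns CURDEPTH ..."
        msg = q.get_nowait()      # stand-in for "MQGET retrieves message ..."
        trace.append((depth, msg))
    return trace
```

With a healthy client, each retrieved message lowers the observed depth by exactly 1 between inquiries; in the trace described above, the depth occasionally dropped by 2.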
Local fix
Workaround: Disable multiplexing by setting SHARECONV to 0 on the server-connection channel.
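Assuming a server-connection channel named APP.SVRCONN on a queue manager named QM1 (both names are illustrative), the workaround can be applied from runmqsc:

```shell
echo "ALTER CHANNEL(APP.SVRCONN) CHLTYPE(SVRCONN) SHARECONV(0)" | runmqsc QM1
```

Clients must reconnect for the change to take effect on their conversations.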
Problem summary
****************************************************************
USERS AFFECTED: WebSphere MQ V7 clients connecting to queue managers in a multiplexing environment.
Platforms affected: All Distributed (iSeries, all Unix and Windows)
****************************************************************
PROBLEM SUMMARY:
Messages are lost during MQGET operations when multiplexing is enabled at WebSphere MQ V7.

In multiplexing environments WebSphere MQ creates two threads: thread1 sends the request to the server, while thread2 performs an asynchronous receive. Thread1 sends the MQGET request to the server and then waits until the MQGMO_WAIT interval expires; it is woken by thread2 when the message is received.

The problem occurs when thread2 receives a message and posts an event to thread1, but thread1 times out before it sees the posted event. Thread1 then assumes that no message was received and sends a second request for the message. A second message is sent to the client and returned to the application, while the first message received is discarded.

This problem only occurs if thread1 times out at exactly the same time that thread2 receives the message.
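The race described above can be modelled with two Python threads (a deliberately simplified sketch, not WMQ internals; the defect window — the timer expiring at the exact moment the message arrives — is forced deterministically with a flag rather than left to timing):

```python
import threading

def faulty_get(server_messages, force_timeout_race=False):
    """Simulate one MQGET on the pre-fix multiplexed client path.

    server_messages : list standing in for the queue on the server.
    force_timeout_race : reproduce the defect window, i.e. thread1's
        MQGMO_WAIT timer expiring exactly as thread2 delivers the message.
    """
    slot = {}                      # hand-over slot written by "thread2"
    delivered = threading.Event()

    def thread2():                 # asynchronous receive
        slot['msg'] = server_messages.pop(0)
        delivered.set()

    t = threading.Thread(target=thread2)
    t.start()
    t.join()                       # the message HAS arrived by this point
    # In the defect window the wakeup is missed although the event was posted.
    woken = delivered.wait(timeout=0) and not force_timeout_race
    if not woken:
        # Defect: thread1 assumes nothing arrived and re-requests.
        slot.pop('msg', None)      # first message silently discarded
        delivered.clear()
        t = threading.Thread(target=thread2)
        t.start()
        t.join()
    return slot['msg']             # message handed to the application
```

Draining messages 10, 9, 8 this way, a single forced race on the second call returns message 8 to the application while message 9 is discarded — the loss pattern shown in the error description.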
Problem conclusion
The code has been modified so that thread1 rechecks whether a message (or part of the message) has been received after the client has timed out waiting for a message.
---------------------------------------------------------------
The fix is targeted for delivery in the following PTFs:

v7.0
Platform          Fix Pack 7.0.1.4
--------          ----------------
Windows           U200323
AIX               U835793
HP-UX (PA-RISC)   U836458
HP-UX (Itanium)   U836463
Solaris (SPARC)   U836459
Solaris (x86-64)  U836465
iSeries           tbc_p700_0_1_4
Linux (x86)       U836460
Linux (x86-64)    U836464
Linux (zSeries)   U836461
Linux (Power)     U836462

The latest available maintenance can be obtained from 'WebSphere MQ Recommended Fixes':
http://www-1.ibm.com/support/docview.wss?rs=171&uid=swg27006037

If the maintenance level is not yet available, information on its planned availability can be found in 'WebSphere MQ Planned Maintenance Release Dates':
http://www-1.ibm.com/support/docview.wss?rs=171&uid=swg27006309
---------------------------------------------------------------
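In terms of the same simplified simulation used above (not WMQ internals), the fix amounts to rechecking the hand-over slot after the timeout before assuming nothing arrived:

```python
import threading

def fixed_get(server_messages, force_timeout_race=False):
    """Same toy simulation as before, with the APAR fix applied: after
    timing out, thread1 rechecks whether a message has already been
    received before sending a second request."""
    slot = {}
    delivered = threading.Event()

    def thread2():                 # asynchronous receive
        slot['msg'] = server_messages.pop(0)
        delivered.set()

    t = threading.Thread(target=thread2)
    t.start()
    t.join()
    woken = delivered.wait(timeout=0) and not force_timeout_race
    if not woken:
        # Fix: recheck for an already-received message instead of
        # unconditionally discarding and re-requesting.
        if 'msg' not in slot:
            delivered.clear()
            t = threading.Thread(target=thread2)
            t.start()
            t.join()
    return slot['msg']
```

With the recheck in place, the forced race now returns the message that had already arrived, and no message is discarded.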
Temporary fix
Comments
APAR Information
APAR number
IC69092
Reported component name
WMQ WINDOWS V7
Reported component ID
5724H7220
Reported release
701
Status
CLOSED PER
PE
NoPE
HIPER
NoHIPER
Special Attention
NoSpecatt
Submitted date
2010-06-08
Closed date
2010-06-29
Last modified date
2010-06-29
APAR is sysrouted FROM one or more of the following:
APAR is sysrouted TO one or more of the following:
Fix information
Fixed component name
WMQ WINDOWS V7
Fixed component ID
5724H7220
Applicable component levels
R701 PSY
UP
Document Information
Modified date:
31 March 2023