Fixes are available
Java SDK 1.5 SR8 Cumulative Fix for WebSphere Application Server
Java SDK 1.5 SR10 Cumulative Fix for WebSphere Application Server
6.1.0.29: Java SDK 1.5 SR11 Cumulative Fix for WebSphere Application Server
6.1.0.31: Java SDK 1.5 SR11 FP1 Cumulative Fix for WebSphere Application Server
6.1.0.33: Java SDK 1.5 SR12 FP1 Cumulative Fix for WebSphere Application Server
6.1.0.35: Java SDK 1.5 SR12 FP2 Cumulative Fix for WebSphere Application Server
6.1.0.37: Java SDK 1.5 SR12 FP3 Cumulative Fix for WebSphere Application Server
6.1.0.39: Java SDK 1.5 SR12 FP4 Cumulative Fix for WebSphere Application Server
6.1.0.41: Java SDK 1.5 SR12 FP5 Cumulative Fix for WebSphere Application Server
6.1.0.43: Java SDK 1.5 SR13 Cumulative Fix for WebSphere Application Server
6.1.0.45: Java SDK 1.5 SR14 Cumulative Fix for WebSphere Application Server
6.1.0.47: WebSphere Application Server V6.1 Fix Pack 47
6.1.0.47: Java SDK 1.5 SR16 Cumulative Fix for WebSphere Application Server
APAR status
Closed as program error.
Error description
When using the Java Message Service (JMS) API to connect to WebSphere MQ (WMQ) from within a Servlet running inside of WebSphere Application Server (WAS), the number of physical connections open between the application server and WMQ is higher than expected. KEYWORDS: QCF TCF Unshareable res-sharing-scope
Local fix
N/A
Problem summary
****************************************************************
* USERS AFFECTED: WebSphere Application Server v6 users of     *
*                 the Java Message Service (JMS).              *
****************************************************************
* PROBLEM DESCRIPTION: When using the Java Message Service     *
*                      (JMS) API to connect to WebSphere MQ    *
*                      (WMQ) from within a long running        *
*                      Servlet or Asynchronous Bean running    *
*                      inside of WebSphere Application         *
*                      Server, one of the following            *
*                      issues can occur:                       *
*                                                              *
*                      - The number of physical connections    *
*                        between the application server and    *
*                        WMQ is greater than expected.         *
*                      - A java.lang.OutOfMemoryError will     *
*                        occur, and the Java heap will contain *
*                        a large number of entries that look   *
*                        like this:                            *
*                                                              *
*                        "<Thread name>" <Thread information>  *
*                        waiting on condition [<number>]       *
*                          at java.lang.Thread.                *
*                            sleep(Native Method)              *
*                          at com.ibm.ejs.j2c.poolmanager.     *
*                            TaskTimer.run(TaskTimer.java)     *
****************************************************************
* RECOMMENDATION:                                              *
****************************************************************
Both of the issues described in this APAR are the result of JMS Sessions being enlisted in long running transactions.

When a JMS application running inside WebSphere Application Server calls Connection.createSession(), the JMS Session it gets back is taken from the Session Pool associated with the JMS Connection that the method was called on. The Session is automatically enlisted in any active transaction. After an application has finished with the Session, it calls Session.close(). At this point, the Session remains enlisted in the transaction, and will only be returned to the Session Pool when the transaction completes.

Whenever a Servlet or an Async Bean is invoked, a Local Transaction is started by the application server. This transaction completes when the Servlet or the Async Bean finishes.
This means that any JMS Session created within either a Servlet or an Async Bean will be enlisted in the Local Transaction and, when closed, will only be returned to the Session Pool when the Servlet or Async Bean completes.

As mentioned above, JMS Session Pools are associated with a JMS Connection - every JMS Connection maintains its own pool of Sessions. Most applications create a JMS Connection, followed by a JMS Session. The Session is then used before being closed. If the application needs to do more work, it creates a new JMS Session, uses it and then closes it. When the application has finished, it closes the JMS Connection.

When this pattern is used within a Servlet or an Async Bean, the first JMS Session created from the Connection is associated with the Local Transaction that the Servlet or Async Bean is running under. When this Session is closed, it remains associated with the Transaction. If the application now creates a second JMS Session, it actually gets back the one that was created originally. This is because the Connection Manager has determined that the request to create a new JMS Session is being made within the same Transaction that the previous Session was enlisted with. This behaviour occurs because JMS Sessions are defined as being SHAREABLE within the scope of a Transaction.

One of the most common JMS programming models is "Get-Use-Close": applications get a JMS Connection and Session, use them to do some work, and then close them off. However, this approach can lead to problems if it is used within either a Servlet or an Async Bean. Suppose that our Servlet or Async Bean does the following:

- Create a JMS Connection.
- Create a JMS Session.
- Do some work.
- Close the JMS Session.
- Close the JMS Connection.
- Create a JMS Connection.
- Create a JMS Session.
- Do some work.
- Close the JMS Session.
- Close the JMS Connection.
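The SHAREABLE session behaviour described above can be sketched with a small self-contained model. To be clear, the classes below are illustrative stand-ins, not WebSphere internals: they model a per-Connection Session Pool whose sessions stay enlisted in a local transaction after close(), so a second createSession() in the same transaction hands back the original session.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative model of SHAREABLE JMS session pooling. NOT WebSphere code. */
public class ShareableSessionModel {
    static int physicalConnections = 0;           // open links to "WMQ"

    static class Session { }

    /** Models the servlet's local transaction; sessions stay enlisted
     *  until complete() is called when the servlet exits. */
    static class LocalTransaction {
        final List<Connection> touched = new ArrayList<>();
        void complete() {
            for (Connection c : touched) c.returnEnlistedSession();
            touched.clear();
        }
    }

    static class Connection {
        private final List<Session> pool = new ArrayList<>();
        private Session enlisted;                 // session held by the tx

        Session createSession(LocalTransaction tx) {
            if (enlisted != null) return enlisted; // SHAREABLE: reuse in same tx
            Session s;
            if (pool.isEmpty()) {
                s = new Session();
                physicalConnections++;            // new physical link opened
            } else {
                s = pool.remove(pool.size() - 1);
            }
            enlisted = s;
            if (!tx.touched.contains(this)) tx.touched.add(this);
            return s;
        }

        // Session.close() under SHAREABLE scope: nothing is released here.
        void closeSession(Session s) { /* stays enlisted until tx completes */ }

        void returnEnlistedSession() {
            if (enlisted != null) { pool.add(enlisted); enlisted = null; }
        }
    }

    public static void main(String[] args) {
        LocalTransaction tx = new LocalTransaction(); // started by the server
        Connection con = new Connection();

        Session s1 = con.createSession(tx);
        con.closeSession(s1);                     // still enlisted, not pooled
        Session s2 = con.createSession(tx);       // same session comes back
        System.out.println("same session reused: " + (s1 == s2));
        System.out.println("physical connections: " + physicalConnections);
        tx.complete();                            // only now is it pooled again
    }
}
```

Running the model prints `same session reused: true` and `physical connections: 1`, matching the single-connection case described above.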
When the first JMS Connection is created, it is not enlisted in the Local Transaction that is associated with the application. The application then calls Connection.createSession(), which causes a Session to be removed from the Connection's Session Pool. The Session is then enlisted with the Transaction. When the application closes the Session, it remains associated with the Local Transaction and is not returned to the Session Pool. The Connection is then closed, which causes it to be returned to the Connection Pool.

When the application creates the second JMS Connection, it may or may not get back the first Connection. If it does get the original Connection back again, then when the second call to Connection.createSession() is made, the original JMS Session is returned to it.

However, what happens if the application gets back a different JMS Connection? In this situation, the application will get a new JMS Session when it calls Connection.createSession(). This Session will then be enlisted in the Local Transaction, which means there are now two Sessions associated with the transaction that the application is running in. These Sessions will only be closed when the Local Transaction completes.

Every JMS Session equates to a physical connection to WMQ, and these physical connections are only closed when the Session is removed from the Session Pool. Because the JMS Sessions in the above scenario are only returned to the Session Pool when the Servlet or Async Bean exits, rather than when the Session.close() method is called, more physical connections to WMQ can be open than expected.

Now let's look at how this behaviour can lead to a build-up of TaskTimer objects in the Java heap. TaskTimer objects are used to clean up the contents of JMS Connection and Session Pools. They wake up periodically, check the contents of the appropriate Pool, close any Connections or Sessions if necessary, and then go back to sleep.
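The two-connection case can be shown with the same kind of illustrative model (again, these classes are stand-ins, not WebSphere internals): when the pool hands back a different Connection the second time, a second Session is created and enlisted, so two physical connections stay open until the transaction completes.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative model: two different pooled connections inside ONE local
 *  transaction each contribute a separately enlisted session. NOT WebSphere code. */
public class TwoConnectionScenario {
    static int physicalConnections = 0;

    static class Session { }

    static class Connection {
        Session enlisted;                          // held until the tx completes

        Session createSession(List<Connection> tx) {
            if (enlisted != null) return enlisted; // SHAREABLE: reuse within tx
            enlisted = new Session();
            physicalConnections++;                 // one physical link per session
            if (!tx.contains(this)) tx.add(this);
            return enlisted;
        }

        void closeSession() { /* no-op under SHAREABLE scope */ }
    }

    public static void main(String[] args) {
        List<Connection> tx = new ArrayList<>();   // the servlet's local tx
        Connection first = new Connection();
        first.createSession(tx);
        first.closeSession();                      // session stays enlisted

        // The connection pool hands back a DIFFERENT connection this time:
        Connection second = new Connection();
        second.createSession(tx);                  // a second session is enlisted
        second.closeSession();

        System.out.println("connections holding enlisted sessions: " + tx.size());
        System.out.println("physical connections: " + physicalConnections);
        // Both sessions are released only when the servlet exits and the
        // local transaction completes.
    }
}
```

Running this prints a count of 2 for both values, illustrating how the connection count grows even though the application closed every Session it created.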
There is one TaskTimer object for each Connection Pool and Session Pool on the system. When a JMS Connection has either been in existence for longer than the value of the Connection Pool's Aged Timeout property, or has been in the Connection Pool for longer than the value of the Pool's Unused Timeout, it will be destroyed. As part of this cleanup processing, the application server will attempt to clean up the Session Pool associated with the Connection. In most situations, the Session Pool is cleaned up successfully. However, if the Session Pool still contains a JMS Session that is enlisted with a Transaction that has not completed, the Session Pool and its associated TaskTimer will be left in place. The Pool and TaskTimer will only be cleaned up when the Transaction finishes.
Problem conclusion
APAR PK59605 provides a mechanism to make JMS Sessions UNSHAREABLE. This means that whenever an application calls Session.close(), the JMS Session is automatically released from any Transaction it is associated with and returned to the Session Pool. Sessions can therefore be cleaned up and removed from the Session Pool even if the Servlet or Async Bean that created them is still running.

To enable UNSHAREABLE sessions, the following steps need to be carried out:

WebSphere Application Server Version 6.0.2.x
--------------------------------------------
- Bring up the WebSphere Administrative Console, and log in.
- Expand the Resources->JMS Providers entry in the left hand tree view, and click on WebSphere MQ.
- You will now see the WebSphere MQ messaging provider panel. Navigate to the Connection Factory being used by your application.
- On the Connection Factory Configuration pane, click on Custom Properties.
- Click on New, and define a custom property with the name session.sharing.scope and the value UNSHAREABLE.
- Click on OK.
- Save the changes, and log off the Administrative Console.
- Stop and restart the application server.

WebSphere Application Server Version 6.1.0.x
--------------------------------------------
- Bring up the WebSphere Administrative Console, and log in.
- Expand the Resources->JMS entry in the left hand tree view, and click on either the Connection Factory, Queue Connection Factory or Topic Connection Factory link.
- In the next pane, select the Connection Factory being used by your application.
- On the Connection Factory Configuration pane, click on Custom Properties.
- Click on New, and define a custom property with the name session.sharing.scope and the value UNSHAREABLE.
- Click on OK.
- Save the changes, and log off the Administrative Console.
- Stop and restart the application server.

The fix for this APAR is currently targeted for inclusion in fix packs 6.0.2.29 and 6.1.0.17.
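The effect of the UNSHAREABLE setting can be sketched with a counterpart to the earlier model (illustrative classes only, not WebSphere internals): close() now returns the Session to the pool immediately, so repeated get-use-close cycles inside one long-running servlet reuse a single physical connection.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Illustrative model of UNSHAREABLE session scope. NOT WebSphere code. */
public class UnshareableSessionModel {
    static int physicalConnections = 0;

    static class Session { }

    static class Connection {
        private final Deque<Session> pool = new ArrayDeque<>();

        Session createSession() {
            if (pool.isEmpty()) {
                physicalConnections++;      // open a new physical link
                return new Session();
            }
            return pool.pop();              // reuse a pooled session
        }

        // UNSHAREABLE: close() releases the session from any transaction
        // and returns it to the pool straight away.
        void closeSession(Session s) { pool.push(s); }
    }

    public static void main(String[] args) {
        Connection con = new Connection();
        // Two get-use-close cycles inside one long-running servlet:
        Session s1 = con.createSession();
        con.closeSession(s1);
        Session s2 = con.createSession();   // the pooled session is reused
        con.closeSession(s2);
        System.out.println("physical connections: " + physicalConnections);
    }
}
```

Running this prints `physical connections: 1`: with UNSHAREABLE scope, the physical connection count no longer grows with the number of sessions created during the transaction.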
Please refer to the Recommended Updates page for delivery information: http://www-1.ibm.com/support/docview.wss?rs=180&context=SSEQTP&uid=swg27004980
Temporary fix
Comments
APAR Information
APAR number
PK59605
Reported component name
WEBS APP SERV N
Reported component ID
5724H8800
Reported release
60I
Status
CLOSED PER
PE
NoPE
HIPER
NoHIPER
Special Attention
NoSpecatt
Submitted date
2008-01-21
Closed date
2008-02-08
Last modified date
2010-09-22
APAR is sysrouted FROM one or more of the following:
APAR is sysrouted TO one or more of the following:
Modules/Macros
MSGING
Fix information
Fixed component name
WEBS APP SERV N
Fixed component ID
5724H8800
Applicable component levels
R60A PSY
UP
R60H PSY
UP
R60I PSY
UP
R60P PSY
UP
R60S PSY
UP
R60W PSY
UP
R60Z PSY
UP
R61A PSY
UP
R61H PSY
UP
R61I PSY
UP
R61P PSY
UP
R61S PSY
UP
R61W PSY
UP
R61Z PSY
UP
Document Information
Modified date:
29 December 2021