APAR status
Closed as program error.
Error description
High mboset counts and high memory consumption have been observed on the JVMs running escalations. Analysis of one prominent case (mbo OSLCTRANSACTION) linked it to an out-of-the-box escalation, OSLCTXNCLEANUP, which was probably installed with Maximo Anywhere. IBM's guidance is that MBO counts over 20,000 should be treated as a memory leak.

Steps to Reproduce
========================
Other escalations are susceptible to the same problem. To reproduce:

1. Create an escalation based on an object which has some records (FINCNTRL was used, which had close to 8000 records in a dev system). Set the condition to 1=1, or anything else that will retrieve a significant number of objects.
2. Add an unconditional reference point and an action which does not actually change most of the records (such as a setValue on an attribute using its default value).
3. Activate the escalation and wait for it to run.
4. Make sure mbo count logging is turned on.
5. Observe the SystemOut log file and note the number of mbosets for the escalation object. Once the escalation has executed, check the mboset count again. The count grows with each escalation run and stays high until the escalation is deactivated.

E:\Program Files\IBM\WebSphere\AppServer\profiles\ctgAppSrv01\logs\MXServer\SystemOut.log (15 hits)

Before the first run of the escalation (which happened at 16:23 and repeated every 3 minutes), the counts were low:

Line 5948: [15/01/20 16:17:37:838 GMT] 000000e3 SystemOut O 15 Jan 2020 16:17:37:838 [INFO] [MXServer] [] FINCNTRL: mbosets (6), mbos (30)
Line 15892: [15/01/20 16:23:37:933 GMT] 000000e3 SystemOut O 15 Jan 2020 16:23:37:933 [INFO] [MXServer] [] FINCNTRL: mbosets (7), mbos (19)
After the escalation began running, the counts stayed high:

Line 16536: [15/01/20 16:24:37:950 GMT] 000000e3 SystemOut O 15 Jan 2020 16:24:37:950 [INFO] [MXServer] [] FINCNTRL: mbosets (7776), mbos (17040)
Line 17244: [15/01/20 16:25:37:964 GMT] 000000e3 SystemOut O 15 Jan 2020 16:25:37:964 [INFO] [MXServer] [] FINCNTRL: mbosets (7776), mbos (17040)
Line 17826: [15/01/20 16:26:37:978 GMT] 000000e3 SystemOut O 15 Jan 2020 16:26:37:978 [INFO] [MXServer] [] FINCNTRL: mbosets (204), mbos (430)
Line 21171: [15/01/20 16:29:38:040 GMT] 000000e3 SystemOut O 15 Jan 2020 16:29:38:040 [INFO] [MXServer] [] FINCNTRL: mbosets (499), mbos (1494)
Line 21960: [15/01/20 16:30:38:059 GMT] 000000e3 SystemOut O 15 Jan 2020 16:30:38:059 [INFO] [MXServer] [] FINCNTRL: mbosets (7265), mbos (15361)
Line 22795: [15/01/20 16:31:38:075 GMT] 000000e3 SystemOut O 15 Jan 2020 16:31:38:075 [INFO] [MXServer] [] FINCNTRL: mbosets (7265), mbos (15361)
Line 24434: [15/01/20 16:32:38:088 GMT] 000000e3 SystemOut O 15 Jan 2020 16:32:38:088 [INFO] [MXServer] [] FINCNTRL: mbosets (506), mbos (1516)
Line 25291: [15/01/20 16:33:38:106 GMT] 000000e3 SystemOut O 15 Jan 2020 16:33:38:106 [INFO] [MXServer] [] FINCNTRL: mbosets (6439), mbos (12967)
Line 25896: [15/01/20 16:34:38:119 GMT] 000000e3 SystemOut O 15 Jan 2020 16:34:38:119 [INFO] [MXServer] [] FINCNTRL: mbosets (6439), mbos (12967)
Line 26568: [15/01/20 16:35:38:132 GMT] 000000e3 SystemOut O 15 Jan 2020 16:35:38:132 [INFO] [MXServer] [] FINCNTRL: mbosets (6981), mbos (14591)
Line 27158: [15/01/20 16:36:38:149 GMT] 000000e3 SystemOut O 15 Jan 2020 16:36:38:149 [INFO] [MXServer] [] FINCNTRL: mbosets (6923), mbos (14320)
Line 29397: [15/01/20 16:38:38:347 GMT] 000000e3 SystemOut O 15 Jan 2020 16:38:38:347 [INFO] [MXServer] [] FINCNTRL: mbosets (578), mbos (1705)
Line 30453: [15/01/20 16:39:38:377 GMT] 000000e3 SystemOut O 15 Jan 2020 16:39:38:377 [INFO] [MXServer] [] FINCNTRL: mbosets (8391), mbos (18841)

Having looked at the code and run some experiments, it is suspected that the leak is related to the mboset created in the attached class EscalationTask.java on line 812.

Results
========
The mbo counts are higher than before the escalation started, and the memory is still held long after the escalation has finished.

Expected Results
================
Once the escalation run completes, the mbo counts drop and the memory is released.

Environment
====================
Tivoli's process automation engine 7.6.1.0
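The failure mode above can be sketched in plain Java. This is a minimal model, not the actual Maximo API: the `MboSet` class, its static `openCount` field, and the two runner methods are simplified stand-ins invented for illustration. It shows why a result set fetched per escalation run but never released makes the "mbosets" counter climb across runs, and how a close-on-completion pattern keeps it flat.

```java
// Minimal sketch (hypothetical names, not the real psdi.mbo API) of the
// suspected leak: an mboset is created for each escalation run but never
// closed, so the open-set count grows with every run.
public class EscalationLeakSketch {

    // Stand-in for a Maximo mboset: a resource that must be closed.
    static final class MboSet implements AutoCloseable {
        static int openCount = 0;              // mirrors the "mbosets (N)" log counter
        MboSet() { openCount++; }
        @Override public void close() { openCount--; }
    }

    // Leaky pattern: the set is fetched, the actions run, but the set is
    // never released, so it stays referenced until the task is deactivated.
    static void runEscalationLeaky() {
        MboSet set = new MboSet();
        // ... evaluate condition, fire actions ...
        // missing: set.close()
    }

    // Fixed pattern: try-with-resources guarantees the set is released
    // when the run completes, even if an action throws.
    static void runEscalationFixed() {
        try (MboSet set = new MboSet()) {
            // ... evaluate condition, fire actions ...
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) runEscalationLeaky();
        System.out.println("open sets after leaky runs: " + MboSet.openCount); // 3

        MboSet.openCount = 0;
        for (int i = 0; i < 3; i++) runEscalationFixed();
        System.out.println("open sets after fixed runs: " + MboSet.openCount); // 0
    }
}
```

Under this model, deactivating the escalation corresponds to the task finally dropping its references, which matches the observed behaviour that the counts only fall once the escalation is deactivated.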
Local fix
The problem can be minimized by rewriting the escalation conditions to retrieve only the records that will actually be updated. However, this is not possible for all escalations and does not prevent the problem from recurring.
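As an illustration of the workaround, a condition can be narrowed from the catch-all used in the reproduction steps to a where clause that matches only the records the action will change. The attribute and value below are hypothetical; the actual clause depends on the escalation's object and action.

```
-- Broad condition from the reproduction steps: fetches every record
1=1

-- Narrowed condition (hypothetical example): fetches only records the
-- setValue action would actually modify
status = 'PENDING' and someattribute is null
```

This reduces the number of mbos held per run but, as noted above, is only a mitigation, not a fix.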
Problem summary
****************************************************************
* USERS AFFECTED:                                              *
* Maximo users                                                 *
****************************************************************
Problem conclusion
The fix for this APAR is contained in the following maintenance package: Release 7.6.1.3 of Base Services
Temporary fix
Comments
APAR Information
APAR number
IJ27846
Reported component name
SYSTEM PERFORMA
Reported component ID
5724R46PF
Reported release
761
Status
CLOSED PER
PE
NoPE
HIPER
NoHIPER
Special Attention
NoSpecatt / Xsystem
Submitted date
2020-09-11
Closed date
2021-07-21
Last modified date
2021-07-21
APAR is sysrouted FROM one or more of the following:
APAR is sysrouted TO one or more of the following:
Fix information
Fixed component name
SYSTEM PERFORMA
Fixed component ID
5724R46PF
Applicable component levels
Line of Business: Sustainability Software
Business Unit: IBM Software w/o TPS
Product: Maximo Asset Management (SSLKT6)
Platform: Platform Independent
Version: 761
Document Information
Modified date:
22 July 2021