IBM Integrated Analytics System 1.0.27.3 release notes (July 2022)

1.0.27.3 replaces 1.0.27.0, 1.0.27.1, and 1.0.27.2. This release fixes the issue with the Db2 Warehouse HA component hanging. It also updates the Db2 engine level to 11.5.7.0-CN5, which fixes the compression memory leak issue and adds a fix that avoids a hang in a service monitoring process.

The release also includes the following updates:
  • From 1.0.27.0: a patch for Log4j vulnerabilities, the certificate patch, the Call Home trust file, and some enhancements in backup and restore.
  • From 1.0.27.1: an upgrade to the web console container and the Db2 Warehouse container to address security issues (Samba, Polkit, Log4j 2.17.1).
  • From 1.0.27.2: a Db2 security update, a Wolverine fix, and a Container OS security update.
Important:
  1. Upgrading through version 1.0.25 or 1.0.26.x is required to get onto Podman. You can upgrade directly to 1.0.26.3 if you are on 1.0.19.5 or above. You must be on Podman to upgrade to 1.0.27.x.
  2. Partial upgrade is only available from version 1.0.26.3. For other versions, a full upgrade is required to get all the security updates, including Log4j.
  3. Systems with security patch 7.9.22.01.SP7 installed must be on 1.0.26.3 or higher to upgrade to 1.0.27.x.
  4. If you want to preserve SSL certificate files during upgrade, you must specify the old certificates by using SSL environment variables in the dashdb.env file, as sketched below. For more information, see Preserving old certificate files during upgrade.
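A minimal sketch of what such entries can look like follows; the variable names and paths are illustrative placeholders only, and the actual SSL environment variables are listed in Preserving old certificate files during upgrade.
    # dashdb.env (illustrative placeholders; confirm the variable names in the linked topic)
    SSL_CERT=/path/to/existing/server_certificate.crt
    SSL_KEY=/path/to/existing/server_private.key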

What's new

Containers upgrade
This upgrade addresses security issues with Samba, Polkit and Log4j 2.17.1 in the web console container and the Db2 Warehouse container.
Note: You must also install security patch 7.9.21.12.SP6 to fix these vulnerabilities in the platform.
Log4j vulnerability patch
The patch that addresses the Log4j vulnerabilities is included. A full upgrade is required to apply it.
Platform Manager certificate patch
The certificate patch and the Call Home trust file are applied with this upgrade. This fixes the issue with certificates expiring in January 2022. For more information, see Platform Manager certificate patch release notes.
Support for Python 3
Version 3 of Python is now supported.
Backup and restore enhancements
  • Schema backup and restore now support IBM Spectrum Protect (TSM) as the target media for the backup image by using the -tsm option.
  • Database backup and restore now support the -comprlib option, which specifies the library to use to compress or encrypt a backup image. For more information, see Backing up data using the db_backup command.
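The following hedged examples show how these options might be invoked; only the -tsm and -comprlib options come from this release note, and the library path shown is an illustrative placeholder (see Backing up data using the db_backup command for the full syntax).
    # Send the backup image to IBM Spectrum Protect (TSM) as the target media
    db_backup -tsm
    # Compress or encrypt the backup image with the specified library (path is a placeholder)
    db_backup -comprlib /home/db2inst1/sqllib/lib64/libdb2compr.so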

Components

Db2 Warehouse 11.5.7.0-CN5
See What's New in Db2 Warehouse.
Db2 Engine 11.5.7
To learn more about the changes that are introduced in Db2 11.5.7.0, read What's new in the Db2 Documentation.

Known issues

Secondary system in AFM-DR fails to come up after upgrading to 1.0.27.1
Upgrade to 1.0.27.1 fails on the secondary system while the database ENCRLIB configuration is updated. The following error is returned in Db2wh_local.log:
ERROR: Failed to update database configuration ENCRLIB for BLUDB after the update
WORKAROUND:
  1. After the upgrade is complete on both systems, stop the appliance on both the primary and secondary systems by using the apstop -v command:
    [apuser@sail82-t07-n1 ~]$ apstop -v
    Successfully deactivated system
  2. Resume replication from the primary system by using the apdr resume command:
    [apuser@sail81-t07-n1 ~]$ apdr resume
    Getting APDR status
    Successfully resumed DR replication between Primary and Secondary
  3. To check the replication queue status, use the apdr status --replqueue command:
    [apuser@sail81-t07-n1 ~]$ apdr status --replqueue
    Directory                                                     Queue Status   Sync Status
    /opt/ibm/appliance/storage/head/home/db2inst1/db2/keystore   Ready          In-Sync
    /opt/ibm/appliance/storage/data/db2inst1                      Ready          In-Sync
    /opt/ibm/appliance/storage/local/db2inst1                     Starting       Behind
    /opt/ibm/appliance/storage/local/db2archive                   Ready          In-Sync
    /opt/ibm/appliance/storage/head/db2_config                    Ready          In-Sync
  4. Start the appliance by using the apstart -w command. You can then start using the primary system.
    Note: Once all the filesets are in the Ready state, the primary system is in sync with the secondary system.
Snapshot creation fails after the limit of 256 snapshots is reached
Once AFM-DR reaches the limit of 256 snapshots, snapshot creation fails.
In that situation, the logs located at /var/log/appliance/platform/afmdr/apafmdr_service.log report error messages similar to:
2021-08-24 23:00:54,443 __main__ [DEBUG]: STDERR: [
Cannot create new snapshot until an existing one is deleted.
Fileset "db2archive" has a limit of 256 snapshots.
Snapshot error: 34, snapName db2archive:psnap-rpo-0900E01859CBCFC5-2-21-08-24-23-00-53, id -1.
mmpsnap: [E] Peer snapshot creation failed. Error code 34.
Unable to start tscr/link/unlink/chfileset on 'local' because conflicting program tscr/link/unlink/chfileset is running. Waiting until it completes or moves to the next phase, which may allow the current command to start.
tscr/link/unlink/chfileset on 'local' is finished waiting.  Processing continues ...
mmpsnap: Command failed. Examine previous error messages to determine cause.

]

WORKAROUND:

Use the apdr clean --snapshot command to clean older snapshots. After older snapshots are cleaned, snapshot creation resumes as per the scheduler.
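A minimal invocation example; the prompt is illustrative and the command output is not reproduced here:

    [apuser@t07-n1 ~]$ apdr clean --snapshot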

FIPS/SELinux must be disabled before upgrading
If FIPS is enabled on your system or SELinux is set to enforcing, you must disable FIPS and set SELinux to permissive before you can upgrade to 1.0.27.1. The upgrade does not preserve this configuration and fails if these settings are not disabled. The apupgrade command verifies this before the upgrade procedure starts. Re-enable the settings after the upgrade. For more information, see Enabling FIPS modes on IAS and Setting SELinux to Enforcing on IAS.
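As a hedged sketch, you can confirm the current state with standard RHEL commands before you run apupgrade; the IAS-specific procedures for changing and re-enabling these settings are in the topics linked above.

    # Report the current SELinux mode; Enforcing must be changed to permissive before the upgrade
    getenforce
    # Report whether FIPS mode is enabled; 1 means FIPS is on and must be disabled first
    cat /proc/sys/crypto/fips_enabled
    # Switch the running system to permissive mode (persistent changes belong in /etc/selinux/config)
    setenforce 0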
After AFM-DR setup or AFM-DR changeRole, the snapshot creation is successful but it does not sync with the secondary system
In the apdr status --snapshot output, the snapshot is reported as unavailable:
[apuser@t07-n1 ~]$ apdr status --snapshot
Getting APDR status
==============================Snapshot Job Status===============================
Currently Running Job Details:
Job Interval(in Minutes):                                 360
Status:                                               SUCCESS

Previous Job Details:
Status:                                               SUCCESS
Started at (UTC):                         2021-09-23 20:00:00
Last Good Job started at (UTC)            2021-09-24 02:00:00
--------------------------------------------------------------------------------
========================Snapshot Recovery Point Details=========================
Availability Status:                              UNAVAILABLE
No Recovery Point Available
Latest job started at (UTC)               2021-09-24 02:00:00
--------------------------------------------------------------------------------

WORKAROUND:

On the Active Primary system, complete the following steps:

  1. Determine the gateway node by using the following command:
    [root@t07-n1 ~]# /usr/lpp/mmfs/bin/mmlscluster  -Y |grep quorumManager
    mmlscluster:clusterNode:0:1:::1:node0101-fab.apdomain.ibm.com:9.0.226.16:node0101-fab.apdomain.ibm.com:quorumManager:G::gateway:
    mmlscluster:clusterNode:0:1:::2:node0102-fab.apdomain.ibm.com:9.0.226.17:node0102-fab.apdomain.ibm.com:quorumManager::::
    mmlscluster:clusterNode:0:1:::3:node0103-fab.apdomain.ibm.com:9.0.226.18:node0103-fab.apdomain.ibm.com:quorumManager::::
    Typically, node0101-fab is set as the gateway node.
  2. Run apstop -v.
  3. Shut down the gateway node by using the mmshutdown -N command:
    [root@t07-n1 ~]# mmshutdown -N node0101-fab
    Thu Dec  2 01:28:31 EST 2021: mmshutdown: Starting force unmount of GPFS file systems
    Thu Dec  2 01:28:41 EST 2021: mmshutdown: Shutting down GPFS daemons
    Thu Dec  2 01:28:50 EST 2021: mmshutdown: Finished
  4. Start the gateway node by using the mmstartup -N node0101-fab command:
    [root@t07-n1 ~]# mmstartup -N  node0101-fab
    Thu Dec  2 01:28:52 EST 2021: mmstartup: Starting GPFS ...
    
  5. Run apstart -w.
  6. Once the system is up, take a snapshot and verify that the snapshot is in sync with the secondary system. When you check the status, it shows AVAILABLE:
    [apuser@t07-n1 ~]$ apdr status --snapshot
    Getting APDR status
    ==============================Snapshot Job Status===============================
    Currently Running Job Details:
    Job Interval(in Minutes):                                 180
    Status:                                               SUCCESS
    
    Previous Job Details:
    Status:                                               SUCCESS
    Started at (UTC):                         2021-12-02 07:29:32
    Last Good Job started at (UTC)            2021-12-02 10:00:00
    
    
    --------------------------------------------------------------------------------
    ========================Snapshot Recovery Point Details=========================
    Availability Status:                                AVAILABLE
    Recovery Point (UTC):                     2021-12-02 10:00:00
    Latest job started at (UTC)               2021-12-02 10:00:00
    --------------------------------------------------------------------------------