
What's New in IBM Db2 Warehouse

News


Abstract

The current releases of IBM® Db2® Warehouse offer the following new features and functions. Any changed or deprecated functions that require changes on the client side are also noted here.

Content

11.5.9.0-cn1 (Db2 Warehouse on OpenShift/K8s only) 20 February 2024
11.5.9.0 (Db2 Warehouse on OpenShift/K8s only) 6 December 2023
11.5.8.0-CN3 (Db2 Warehouse on OpenShift/K8s only) 13 October 2023
11.5.8.0 (Common container) 24 May 2023
11.5.8.0-CN2 (Db2 Warehouse on OpenShift/K8s only) 28 April 2023
11.5.8.0-CN1 (Db2 Warehouse on OpenShift/K8s only) 15 March 2023
11.5.7.0-CN5 26 July 2022
11.5.7.0-CN3 20 April 2022
11.5.7.0-CN1 23 February 2022
11.5.7.0 19 January 2022
11.5.6.0-CN3 14 January 2022
11.5.6.0-CN2 17 December 2021
11.5.6.0 13 July 2021
11.5.5.1 15 April 2021
11.5.5.0 5 November 2020
11.5.4.0-CN2 23 September 2020
11.5.4.0-CN1 27 July 2020
11.5.4.0 30 June 2020
11.5.3.0 22 April 2020
11.5.2.0 18 February 2020
20 February 2024 (Db2 Warehouse on OpenShift/K8s only)
Version s11.5.9.0-cn1 is now available
In addition to performance enhancements, improved security, and defect corrections, the s11.5.9.0-cn1 release contains the following feature enhancements:

Support for Kubernetes version 1.25 with the Db2® operator. You can add a database instance by using the Db2 operator.

The IBM® Support page has been updated and contains a list of known issues for all Db2 11.5.9 Cumulative Special Build (CSB) downloads.

6 December 2023 (Db2 Warehouse on OpenShift/K8s only)
Version s11.5.9.0 is now available
Db2 for Red Hat OpenShift and Kubernetes is enhanced for the s11.5.9.0 release. In addition to performance enhancements, improved security, and defect corrections, s11.5.9.0 contains the following new features:
  • The Db2 Warehouse 11.5.9 on OpenShift release supports the use of DATALAKE tables, giving users access to data that is stored in open data formats such as PARQUET, ORC, and AVRO (see the sketch after this list). For more information, see Using DATALAKE Tables.
  • The Db2 Warehouse 11.5.9 on OpenShift release also supports Native Cloud Object Storage, allowing users to store traditional Db2 column-organized tables in object storage in Db2's native format, while maintaining the existing SQL support and performance through a tiered storage architecture. For more information, see Native Cloud Object Storage support.
  • HADR automatically restarts on the designated auxiliary standby database after a pod restart. As of s11.5.9.0, HADR automatically restarts as standby only if the database was in an HADR standby role. HADR does not automatically restart as primary if the database was in a primary role, because failover must be initiated manually. This prevents a split-brain scenario, as the Governor is not supported on the designated auxiliary.
  • Shared log archiving configuration for Db2 high availability disaster recovery (HADR) for existing and new deployments. In a multiple standby HADR configuration with all databases in the same cluster and namespace, you can now configure log archiving to use the same archive log PersistentVolumeClaim (PVC) such that all files are stored in a single location. This prevents the need to manually copy archived log files from an old primary database to the new primary database when requested by auxiliary standbys in a takeover scenario.
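As an illustration of the DATALAKE table support described above, the following is a minimal sketch of a table definition. The table name, columns, storage alias, and bucket path are hypothetical, and the exact LOCATION format for your deployment is described in Using DATALAKE Tables:

-- A minimal sketch; the table, storage alias, and path are hypothetical.
CREATE DATALAKE TABLE sales_dl (
  sale_id INT,
  amount  DECIMAL(10,2)
)
STORED AS PARQUET
LOCATION 'DB2REMOTE://myalias//sales-bucket/sales/';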
13 October 2023 (Db2 Warehouse on OpenShift/K8s only)
Version s11.5.8.0-CN3 is now available
The s11.5.8.0-CN3 release contains performance enhancements, improved security, and defect corrections.
24 May 2023
Version 11.5.8.0 (Common container) is now available
In addition to performance enhancements, improved security, and defect corrections, the 11.5.8.0 release contains the following feature enhancements:
  • The TSM (IBM Spectrum Protect) client and storage agents have been updated to version 8.1.17.
28 April 2023 (Db2 Warehouse on OpenShift/K8s only)
Version s11.5.8.0-CN2 is now available

The following release contains enhancements to the container layer of the Db2® Warehouse for OpenShift offering.

Db2 s11.5.8.0-cn2

In addition to performance enhancements, improved security, and defect corrections, the s11.5.8.0-CN2 release contains the following feature enhancements:
  • HADR support for Db2uinstance
  • Audit logging for Db2 Warehouse
  • Limited privilege deployments with NTO
15 March 2023 (Db2 Warehouse on OpenShift/K8s only)
Version s11.5.8.0-CN1 is now available

The following release contains enhancements to the container layer of the Db2® Warehouse for OpenShift offering.

Db2 s11.5.8.0-cn1

In addition to performance enhancements, improved security, and defect corrections, the s11.5.8.0-CN1 release contains the following feature enhancements:
  • New HADR role-aware (floating) service.
  • LDAP-Active Directory enablement.
  • The Q Replication add-on.
  • Db2 audit logging support, through the Db2uCluster custom resource.
  • Operator-driven Db2 native backup and restore (technical preview).
  • MPP horizontal scaling.
26 July 2022
Version 11.5.7.0-CN5 is now available
Fixes:
  • Fix to avoid hang in a service monitoring process.
The following Common Vulnerabilities and Exposures (CVEs) have been addressed in this release:
CVE-2022-22390
CVE-2022-22389
20 April 2022
Version 11.5.7.0-CN3 is now available
Fixes and Updates
  • Updated platform layer to address Red Hat Enterprise Linux security updates
  • Updated Db2 engine to address a compression memory leak
Resolved Issues
  • Fixed an issue in which the Nodes helper fails to initialize when nodes are passed as an environment variable in the docker run command.
23 February 2022
Version 11.5.7.0-CN1 is now available
Fixes
The following Common Vulnerabilities and Exposures (CVEs) have been addressed in this release:
CVE-2021-44832
CVE-2022-21704
CVE-2022-23305
CVE-2022-23307
CVE-2022-23302
CVE-2021-4304
CVE-2021-44142
19 January 2022
 
Version 11.5.7.0 is now available

Improvements

Updated IBM Db2 11.5.7.0 engine.

Python 3 integration

  • Version 3 of Python is now supported.

Backup and restore enhancements

  • Added schema backup and restore support for IBM Spectrum Protect (TSM) as the target media for the backup image, using the -tsm option.
  • Added the db_restore options -list-backup and -delete-backup to list and delete schema-level backups on the IBM Spectrum Protect server.
  • Database backup and restore now has a -comprlib option to specify the library to use to compress or encrypt a backup image. For more information, see Backing up data using the db_backup command.
 
Resolved Issues
  • Using -replace-rcac no longer fails when restoring a schema that contains an RCAC mask or permission that depends on another table or object.
  • Failed database backups with the -partitioned-backup option are now removed at the end of the backup.
  • Backing up with IBM Spectrum Protect (TSM) is now allowed in the web console.
  • Restoring a schema with an INDEX or CONSTRAINT is now supported with the -target-schema option.
Note: This release includes all fixes from previous releases.
14 January 2022
Version 11.5.6.0-cn3 is now available
Fixes
The Log4j component has been upgraded to version 2.17 to address the following Common Vulnerabilities and Exposures (CVEs):
CVE-2021-45105 (Security Bulletin)
CVE-2021-45046
17 December 2021
Version 11.5.6.0-cn2 is now available
Fixes
CVE-2021-44228 (Security Bulletin)

13 July 2021

Version 11.5.6.0 is now available

Improvements

Updated IBM Db2 11.5.6.0 engine.
Improved trickle feed insert performance for column-organized tables
  • Inserts of data into column-organized tables via a "trickle feed" (SQL statements that insert a small number of new rows) have improved processing time and reduced memory, storage, and log space consumption. The optimized trickle-feed insert processing is better able to apply page-based string compression, which can yield better compression results than in prior releases. In addition, the amount of storage that is required is reduced when column-organized tables are populated with a small number of rows by using INSERT statements. Data inserted via trickle feed is typically stored uncompressed in the new, more efficient insert-group data page format, in which multiple columns are grouped onto data pages for column-organized tables. In contrast, bulk data insertion statements (SQL statements that insert a large number of new rows) typically use the traditional, fully compressed column-based data page format. When enough data has been inserted in the insert-group format to reach an internal threshold, that data is automatically converted into the standard column-based format, and the latest compression techniques, including page-based string compression, are applied.

Enhanced language support for the CREATE FUNCTION statement

  • Db2 11.5.6 provides an R language option for the CREATE FUNCTION statement when building user-defined extensions (UDXs). For more information, see Creating UDXs in R.
Support for system-period temporal tables
Schema-type backup and restore can capture and restore system-period temporal tables, including the associated history table. Both tables must be part of the same schema, as in the sketch below.
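For illustration, here is a minimal sketch of a system-period temporal table whose base and history tables share one schema; the schema, table, and column names are hypothetical:

-- A minimal sketch; schema, table, and column names are hypothetical.
CREATE TABLE appdata.policy (
  id        INT NOT NULL PRIMARY KEY,
  coverage  INT,
  sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
  sys_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
  ts_id     TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS TRANSACTION START ID,
  PERIOD SYSTEM_TIME (sys_start, sys_end)
);
-- The history table is created in the same schema, so a schema-level
-- backup captures both tables together.
CREATE TABLE appdata.policy_history LIKE appdata.policy;
ALTER TABLE appdata.policy
  ADD VERSIONING USE HISTORY TABLE appdata.policy_history;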
Crontab feature for backup and restore
An override option is added so that scheduled backups can be added to the crontab. Follow these steps:
  1. Use -e DB_BACKUP_SCHEDULE in the docker run command to override the database backup schedule. The default is every day at 2 AM.
  2. Find the db_backup script at $CRON_SCRIPTS_DIR/db_backup.sh.
  3. Add the database backup logic in this script.
Limitations:
  • Only the root user can update $DB_BACKUP_SCHEDULE.
  • Only a single cron script for scheduling is allowed.
  • The crontab configurations are not preserved after upgrading.
Known Issues
  • RCAC (Row and Column Access Control) limitations

Restoring an RCAC backup without the -replace-rcac option fails. When a schema contains a mask or permission that depends on another table or object, the -replace-rcac option is recommended. Currently, the -target-schema option is necessary to facilitate the restore in this scenario.

  • Partitioned backup failure due to insufficient space does not clean up after failure
When a partitioned backup is stored on a filesystem without enough disk space, the following error is generated and cleanup fails:

ERROR: [Errno 39] Directory not empty: '/scratch/1026part/backup_onl_1'

The trace log provides more accurate information:

ERROR: There is not enough space available in the following backup path(s): /scratch/1026part
  • Restoring partitioned backup fails in the web console
When a partitioned backup is listed in the console, there is no indication that it is partitioned, and it is not possible to run the restore with the partitioned option in the console. A standard restore will fail. Use the db_restore CLI command with partitioned backups.
  • Concurrent schema restore results in: ERROR: An incompatible backup or restore is in progress. Please try again once it is complete.
In version 11.5.6, concurrent schema backups are supported, but concurrent schema restores are not. Concurrent backup and restore of different schemas is also not supported.
  • db_logprune script fails to execute on a Db2wh system with custom hostnames
There is currently no workaround; the development team is investigating this issue.

15 April 2021

Version 11.5.5.1 is now available

Improvements
Pod Manager (Podman) is now supported in addition to Docker to maintain and administer your container. See Podman support.
This update also fixes the following problem:
  • When Db2 Warehouse MPP nodes are being updated concurrently, there might be a condition whereby the /opt/ibm/dsserver/Config/dswebserver.properties file becomes corrupted as multiple nodes try to modify it at the same time. In this release, the dswebserver.properties update code has been enhanced to avoid corruption.
Changes to Configurations Options
The PRUNE_LOGS_SCHEDULE option has been added. See Configuration Options.
Known problems and workarounds
  • After you upgrade the Db2 common container from v11.5.4-CN2 to v11.5.5.1, the replication web console does not open on the upgraded source or target system because of a problem with the Db2 SSL certificate exchange. For more information, see the Known Issues section in the IBM® Data Replication for Availability documentation.
  • Backing up with IBM Spectrum Protect (TSM) is not supported from the web console. You can use the command line (db_backup) instead.
  • db_backup/db_restore issue
Schema backup images taken on IAS 1.0.24 or Db2 Warehouse 11.5.5 may contain inaccurate data for tables that contain the BINARY or VARBINARY data type.
A backup operation that is taken on IAS 1.0.24 or Db2 Warehouse 11.5.5 captures data for tables that contain the BINARY or VARBINARY data type as of the timestamp when the individual table is processed by the backup operation. All other tables are captured as of the timestamp when the db_backup command is issued.
If no concurrent workload was running while db_backup was in progress, the backup image is consistent.
If there was concurrent write activity on a table of that type during the backup, the backup image might not be consistent. The backup image might contain rows that were inserted or updated after the db_backup command was issued if such rows were committed before that table was processed by the backup. The backup image does not contain rows that were deleted or truncated after the db_backup command was issued if those changes were committed before that table was processed by the backup.
All schema backup types (-type ONL, -type INC, and -type DEL) are affected.
Only tables that contain BINARY or VARBINARY column types are affected. All other tables are backed up normally, even if they are part of the same backup image.
To find out if any tables have BINARY or VARBINARY columns, run:
select distinct C.TABSCHEMA, C.TABNAME
from syscat.columns C, syscat.schemata S
where S.ROWMODIFICATIONTRACKING = 'Y'
  and C.TABSCHEMA = S.SCHEMANAME
  and (C.TYPENAME = 'BINARY' or C.TYPENAME = 'VARBINARY')
If any table is affected, run the db_backup command again and obtain a backup image by using IAS 1.0.25 or Db2 Warehouse 11.5.5.1.
  • Slapd service fails to restart
Stop-start of Db2 Warehouse container services can fail on ppc64le Podman systems because the slapd.service fails to restart due to a lack of /tmp space in the container.
To work around this issue, redeploy the container by adding --tmpfs /run:rw,size=787448k,mode=1777 to the podman run command.
  • Podman
Currently, all Podman-related output messages still use the word "docker".
In Podman-based containers, data cannot be copied from the host system into the container's /tmp partition. Use other partitions to copy any data from the host into the containers. Although the copy command (docker cp) does not throw any errors when copying the data into /tmp from the host system, no data is actually copied.
  • Load Option in Console
Beginning with version 11.5.5.0, the LOAD OPTION is not available. The root cause is being investigated, and the issue will be addressed in version 11.5.6.0.
Additional Notes:

db_restore with the -target-schema option is not supported for the following:

  • Restoring indexes to a new schema
  • ALTER TABLE ... ADD FOREIGN KEY constraints

Revised archive log pruning mechanism:

  • Retains twice the number of log files set for LOGPRIMARY and LOGSECOND in the database configuration.

5 November 2020

Version 11.5.5.0 is now available

Improvements

Upgraded Db2 Warehouse engine.

23 September 2020

Version 11.5.4.0-CN2 is now available

Improvements

This update fixes the following problems:
  • AD setup failures. A "desc': u'Size limit exceeded" error appeared during the setup.
    As a solution, a new console container is introduced.
  • Problems when connecting to the database from the console. The problems were caused by an expired SSL certificate.
    The following error message was shown: Warning: The console is unable to connect to the database. Common reasons are that the target database is offline or the console is unable to connect to the SSL port. To diagnose the problem, use the console to check the system status. If necessary, consult the Db2 logs.
    For SSL database connections to work, you need to trust the certificate from all your clients. For detailed instructions, see Secure Socket Layer (SSL) support.
  • Ability to preserve old certificate files during upgrade. See Preserving old certificate files during upgrade.

Known problems and workarounds

  • While you're running the db_restore command from the web console, a "Database restore failed on the web console" error appears. The restore nevertheless succeeds in the backend.
  • The following problems regarding the SSL certificate are not yet solved:
    • BLUDR does not work.
    • The db_migrate_iias command does not work with SSL.
    • The dbload and dbsql commands do not work with SSL from clients.
        As a workaround for these SSL connections to work, you must do the steps that apply to Db2 Warehouse as described in the
        KNOWN ISSUES section in the Integrated Analytics System 1.0.2.3 release notes.

27 July 2020

Version 11.5.4.0-CN1 is now available

Improvements
This update fixes the paging size limit issue in the setup of Active Directory (AD).


30 June 2020

Version 11.5.4.0 is now available

New Db2 engine
Db2 Warehouse 11.5.4.0 uses the Db2 11.5.4.0 engine, which has been updated with the features described in What's New in Db2 11.5.4 container release.

New features for backup and restore

  • Incremental backup is now allowed after CREATE INDEX, DROP INDEX, or ALTER INDEX statements. Previously, if any of these statements were issued, incremental backup was disallowed and automatically converted to a full online backup.
  • A row modification tracking schema that is created as part of a db_restore operation is created in locked mode. This means that access to any table in that schema is blocked, as are CREATE TABLE, DROP TABLE, and operations on related table objects. On successful completion, the schema is automatically unlocked. On failure (error), the schema is left locked, and one of the following actions must be taken:
    • Rerun db_restore with the -drop-existing option.
    • Run db_restore -unlockschema to unlock the schema and allow access to all objects in it. Note that a previously failed restore is not completed, and any tables that were not created or populated with data are left as is.
    • Run db_restore -cleanup-failed-restore to drop and unlock the schema. Failures to drop objects in the schema (dropped by ADMIN_DROP_SCHEMA) are ignored. The schema is unlocked even if failures are encountered.
  • Support for status checks through db_backup -status and db_restore -status has been added. Either command prints whether a backup or restore is currently in progress. The output reflects the operation that is in progress, not the command that was run (so db_backup -status can report that a restore is in progress).
  • The default compression setting has changed from NO (in previous releases) to LZ4. The setting can still be changed through the -compress option.
  • Improved performance of incremental db_restore when the backup image contains a large number of records to be deleted.
  • On error, db_backup removes the failed backup image. If the failure is due to running out of disk space, the top five disk consumers are printed, as well as the backup image size (prior to deletion).
  • GRANT statements are captured and replayed by table-level db_restore.
Changes for the Db2 support tool
  • The db_migrate command has been enhanced:
    • Loading Netezza INCREMENTAL backups in db_migrate is now supported.
    • A db_checksum enhancement to calculate partial checksums of columns was added. This makes it easier to narrow down data corruption.
  • The dbload command has been enhanced:
    • You can now load data into tables with hidden columns.
    • You can now also load data from the program's standard input.
    • The -delim option now supports hexadecimal values.
New password policy
Administrators can now manage internal LDAP users and password policies on the Users and Privileges console page. You can create new password policies or update and delete existing password policies on the Password policies tab, and assign a specific password policy to a user on the Users tab.
Improved compression for string data types
The page-based string compression that was added to Db2 Warehouse Version 11.5.1.0 has been improved. String data that has high cardinality, 16 or fewer distinct bytes, and only short repeating patterns will now get better compression. Such data typically includes the following information when stored as strings:
  • Numbers (including decimal, hex, serial numbers, and phone numbers)
  • Dates
  • Times
  • Timestamps
Reduced synopsis table storage for small tables
Synopsis tables for small column-organized tables might have excessive overhead due to Db2's partitioning and storage allocation. The unused allocated storage for synopsis tables might be excessive in comparison to the base tables. The excessive storage consumption by synopsis tables can be avoided without performance penalty by deferring the creation of synopsis tuples until storage consumption overhead can be minimized.

SQL compatibility enhancements
The following enhancements provide compatibility with PureData System for Analytics (Netezza):

  • New or enhanced built-in functions:
    • ASCII_STR
    • NCHR
    • TO_SINGLE_BYTE (previously restricted to Unicode and code page 943 databases; now extended to all code pages)
    • TO_MULTI_BYTE
    • UNICODE_STR
  • Lock avoidance for catalog tables for external user queries only
    Currently, enabling CC (Currently Committed), DB2_SKIPINSERTED, DB2_EVALUNCOMMITTED, and DB2_SKIPDELETED is not supported for user-initiated catalog table scans. This restriction is lifted by an option to change the locking behavior, improving concurrency for user-initiated catalog scans. The isolation levels that are supported depend on the optimization. You can set the DB2COMPOPT registry variable with the LOCKAVOID_EXT_CATSCANS option to enable lock avoidance for catalog scans on external queries. This registry variable setting does not affect the behavior of internal queries on the Db2 catalog tables.
  • WITH clause in nested-table-reference and derived table usage
    The query body of a common-table expression (WITH clause) can now contain additional nested-table-reference and derived table usage, except for subqueries in predicates. For these subqueries, a WITH clause in nested-table-references and derived tables is not possible.
  • NULL ordering
    In Db2, NULL values are considered higher than any other values. When NULL ordering is enabled, NULLs are treated as the smallest values in sorting. You can enable this new option by setting the DB2_REVERSE_NULL_ORDER registry variable to TRUE. By default, DB2_REVERSE_NULL_ORDER is set to FALSE.

  • External Table with COMPRESS GZIP option does not need data object with .gz extension
    When you use the COMPRESS GZIP option, you can now choose to specify the value with or without the .gz extension for the DATAOBJECT or FILE_NAME option.

  • Changed behavior of the DECIMAL scalar function with an empty string in NPS mode
    In NPS mode, casting an empty string to DECIMAL now returns 0 (see the sketch after this list).

  • DAYS_BETWEEN, WEEK_BETWEEN, MONTHS_BETWEEN, HOURS_BETWEEN, MINUTES_BETWEEN, SECONDS_BETWEEN scalar functions behavior change in NPS mode
    In NPS compatibility mode, the DAYS_BETWEEN, WEEK_BETWEEN, MONTHS_BETWEEN, HOURS_BETWEEN, MINUTES_BETWEEN, SECONDS_BETWEEN scalar functions always return a positive number.
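The following minimal sketch illustrates these two NPS-mode behavior changes, assuming a database created with SQL_COMPAT='NPS'; the dates are arbitrary:

-- Assumes SQL_COMPAT='NPS'; the dates are arbitrary examples.
VALUES DECIMAL('');
-- now returns 0

VALUES MONTHS_BETWEEN(DATE('2020-01-01'), DATE('2020-04-01'));
-- returns 3 (a positive number), even though the first date is earlier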

Other SQL enhancements
New SKIP LOCKED DATA clause for row-organized tables. The SKIP LOCKED DATA clause specifies that rows are skipped when incompatible locks that would block the progress of the statement are held on the rows by other transactions.
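For example, here is a minimal sketch of a query that uses the clause, assuming a row-organized table named ORDERS and the CS isolation level:

-- A minimal sketch; ORDERS is a hypothetical row-organized table.
-- Rows that are locked incompatibly by other transactions are
-- skipped instead of blocking this query.
SELECT order_id, status
FROM orders
WITH CS SKIP LOCKED DATA;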
Changes in the web console
  • In the Settings > CALL HOME > SNMP dialog of the web console, you can now specify the SNMP version and the SNMP community.
  • In the Settings > CALL HOME > Contacts dialog of the web console, you can now add an e-mail address that contains trailing blanks or non-alphanumeric characters.
 
 

15 June 2020
For users of Db2 Warehouse v11.5.1 and v11.5.2 only
Interim v11.5.1.0-SB40166 is released
Db2 Warehouse v11.5.1.x and v11.5.2.x are exposed to a potential page-based compression corruption and need to be upgraded to the following Db2 Warehouse releases.

Releases v11.5.1, v11.5.1.0-CN1, v11.5.1.0-CN2, and v11.5.2 are impacted.
If you are running any of the impacted releases, you should upgrade immediately as follows:
  • v3.1.0 must upgrade to v11.5.1.0-SB40166 (note that this is a required upgrade path).
  • v11.5.1.x releases must be upgraded to either v11.5.1.0-SB40166 or v11.5.3.
  • v11.5.2 releases must upgrade to v11.5.3.
 
 

22 April 2020

Version 11.5.3.0 is now available

New Db2 engine
Db2 Warehouse 11.5.3.0 uses the Db2 11.5.3.0 engine, which has been updated with the features described in What's New in Db2 11.5.3 container release.

Columnar incremental schema backup and restore
Db2 Warehouse now lets you back up a schema incrementally and then restore either the entire schema or individual tables within it. The following additional enhancements and features have also been added to Db2 Warehouse:

  • The ADMIN_COPY_SCHEMA procedure now copies the ROWMODIFICATIONTRACKING attribute of the schema (see the sketch after this list).
  • During a backup, you can now specify whether row modification tracking is to be enabled by using the new option -enable-row-modification-tracking in the db_backup utility.
  • RENAME TABLE statements that are issued between full and incremental schema backups are now captured and properly replayed.
  • The estimate calculations for incremental backup image size have been improved.
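For illustration, here is a minimal sketch of a schema copy; the schema and error-table names are hypothetical:

-- A minimal sketch; schema and error-table names are hypothetical.
-- The copied schema now inherits the ROWMODIFICATIONTRACKING attribute.
CALL SYSPROC.ADMIN_COPY_SCHEMA(
  'SRCSCHEMA', 'TGTSCHEMA', 'COPY',
  NULL, NULL, NULL,
  'ERRSCHEMA', 'ERRTAB');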
Changes for the Db2 support tool
  • Db2 Warehouse now includes SSL support for support tools and secure connections between systems.
  • The db_migrate_iias command has been enhanced. This command now displays the checksum calculation to help you ensure that data is not corrupted during migration between Db2 Warehouse systems.
Bug fixes
  • A bug that caused a 30-second delay when killing a db2sysc process was fixed.
  • A bug that prevented the IDAA Server from starting on the correct node when the head node changed has been fixed.
  • A bug that could prevent ODBC connections following an internal LDAP server failover has been fixed.
Changes in the web console
  • In the Administer > Privileges > Grant dialog of the web console, the Group option and the User option have been deprecated and therefore removed.
  • On the CONNECT > Connection Information page of the web console, the URLs for the driver downloads have been updated.
  • An issue that sometimes prevented a custom admin user from accessing the web console from an external LDAP server or Microsoft Active Directory server has been fixed.
As of 31 March 2020, IBM is no longer maintaining Db2 Warehouse on the Docker Store. The Db2 Warehouse container images are moving to a new home that is under IBM control, the IBM Cloud Container Registry.
SQL compatibility enhancements
The following enhancement provides compatibility with PureData System for Analytics (Netezza):
  • New Netezza TIMESTAMP string support
    With this enhancement, the Netezza timestamp format (MM-DD-YYYY HH24:MI:SS) is recognized in Db2.
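A minimal sketch, assuming the format is accepted in an explicit conversion; the value itself is arbitrary:

-- A minimal sketch; the value is arbitrary. The Netezza-style
-- timestamp string is now recognized by Db2.
VALUES TIMESTAMP('12-24-2019 13:45:30');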
 

18 February 2020

Version 11.5.2.0 is now available

New Db2 engine
Db2 Warehouse 11.5.2.0 uses the Db2 11.5.2.0 engine, which has been updated with the features described in What's New in 11.5.2 container release.

Columnar incremental schema backup and restore
Db2 Warehouse provides support for incremental backup of a schema followed by a full restore of the schema, or of tables within the schema. The feature allows not only concurrent read access to the schema, but also concurrent insert, update, and delete access. A full description of the feature and its limitations is documented here.

Workload management enhancements
Db2 Warehouse now provides new workload management capability, including the following enhancements:

REST endpoints
You can set up your Db2 Warehouse system so that application programmers can create Representational State Transfer (REST) endpoints, each of which is associated with a single SQL statement. Authenticated users of web, mobile, or cloud applications can then use these REST endpoints to interact with Db2 Warehouse from any REST HTTP client without having to install any drivers.

New monitor element to identify a failover event
A new monitor element, HADR_LAST_TAKEOVER_TIME, has been added to MON_GET_HADR to help users identify the occurrence of a failover event.

Script to purge scratch LOAD copy files
A new clear_loadcopy_path command can be used to clear the contents of the directory specified by the $LOAD_COPY_PATH variable. This reduces the amount of storage taken up by unneeded files.

New column for the MON_GET_HADR table function 
The MON_GET_HADR table function returns high availability disaster recovery (HADR) monitoring information. A new column, HADR_LAST_TAKEOVER_TIME, of type TIMESTAMP, has been added to report, for a particular database, when the HADR_ROLE last changed from STANDBY to PRIMARY.
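A minimal sketch of querying the new column; the argument -2 requests all database members:

-- Report the HADR role and the time of the last
-- STANDBY-to-PRIMARY takeover for the connected database.
SELECT HADR_ROLE, HADR_LAST_TAKEOVER_TIME
FROM TABLE(MON_GET_HADR(-2));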

Manageability enhancements
The SSL_SVR_LABEL database manager configuration parameter can now be updated dynamically. There is no longer a need to restart the instance to change this parameter. Db2 is now able to change the SSL server certificate used for incoming connections while the instance is running.

Performance enhancements
Hash joins of columnar tables or streams can now apply additional non-equality join predicates for improved performance. Also, new strategies have been introduced for column-organized query planning. These strategies improve query performance by aggregating and removing duplicates earlier, and are counterparts to existing strategies used in row-organized query planning.

Spatial analytics
Functions provided by the Db2 Spatial Analytics component can be used to analyze data stored in either column-organized or row-organized tables. Geospatial data can be stored in special data types, each of which can hold up to 4 MB.
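For illustration, a minimal sketch using two of these functions; the coordinates are arbitrary, and it is assumed that spatial reference system 1 exists and the spatial component is enabled:

-- A minimal sketch; coordinates are arbitrary and spatial
-- reference system 1 is assumed. Computes the distance
-- between two points.
VALUES DB2GSE.ST_Distance(
  DB2GSE.ST_Point(-73.99, 40.73, 1),
  DB2GSE.ST_Point(-73.98, 40.75, 1));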

SQL compatibility enhancements
The following enhancements provide compatibility with PureData System for Analytics (Netezza):

  • The query body of a common-table expression (that is, a WITH clause) can now contain additional common-table expressions.
  • For column-organized tables, the IMMEDIATE clause is now optional for a TRUNCATE statement. When the IMMEDIATE clause is not specified, the TRUNCATE operation can be stopped at any point in the transaction's scope before it completes, and the truncated table is immediately available for use within the same unit of work. For a TRUNCATE statement that is issued without the IMMEDIATE clause, you can issue a ROLLBACK statement to undo the TRUNCATE operation, even if another data-changing operation was issued after the original TRUNCATE statement; this undoes everything, including the TRUNCATE operation (see the sketch after this list). After that, you can reclaim storage manually by running a REORG RECLAIM operation, or wait for the health monitor trigger to reclaim storage automatically.
  • When you create an external table that uses a text file format, a new option called LFINSTRING lets you specify how unescaped line-feed (sometimes called LF or newline) characters in string data are to be interpreted for that table:
    • If set to TRUE, an unescaped LF character is interpreted as a record delimiter only if it is in the last field of a record; otherwise, it is treated as data.
    • If set to FALSE, an unescaped LF character is interpreted as a record delimiter regardless of its position. This is the default.
    This option is not supported for unload operations, and applies only to line-feed characters, not to carriage-return line-feed (CRLF) characters.
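Here is a minimal sketch of rolling back a TRUNCATE statement that was issued without the IMMEDIATE clause; SALES is a hypothetical column-organized table with an ID column:

-- A minimal sketch; SALES is a hypothetical column-organized table.
TRUNCATE TABLE sales;                -- IMMEDIATE clause omitted
INSERT INTO sales (id) VALUES (42);  -- table usable in the same unit of work
ROLLBACK;                            -- undoes the INSERT and the TRUNCATE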

28 January 2020

Version 11.5.1.0-CN2 is now available

Improved upgrade process
This update improves the reliability of the upgrade process. If you already upgraded to version 11.5.1.0-CN1, there is no need to upgrade to version 11.5.1.0-CN2, because your upgrade was already successful and there is no other benefit.


9 January 2020

Version 11.5.1.0-CN1 is now available

Container only changes
The number of this version is 11.5.1.0-CN1. The "CN" portion of the suffix "-CN1" indicates that this update affects only the container contents, not the database engine.

New system metrics
The web console now provides "page in" and "page out" metrics to help users diagnose potential memory problems.

New workload management capability
Db2 Warehouse now makes it easier for you to manage your query workloads.

overlay2 replaces devicemapper as default storage driver
It is now recommended that you use the OverlayFS storage driver overlay2. This driver and how to configure Docker to use it are described in Use the OverlayFS storage driver. If you are unable to use overlay2, you can continue to use the devicemapper storage driver; however, its use is deprecated. If you use devicemapper in a production environment, configure direct-lvm mode as described in Configure direct-lvm mode for production.

High availability (HA) enhancements
This release introduces the following improvements to Db2 Warehouse's out-of-the-box high availability (HA) solution:
  • Improved security due to the use of dynamically generated HA REST server SSL certificates instead of static certificates.
  • Improved security due to an upgraded version of etcd and the use of stronger cipher suites instead of the default suites.
  • Improved recovery performance by restarting only those multiple logical nodes (MLNs) that are on physical nodes that have at least one failed MLN instead of restarting all MLNs on all nodes.
  • Reduced restart time by killing processes in parallel across all system nodes.
  • Avoidance of false database activation failures when reactivating Db2 during recovery, achieved by ignoring SQL warning exceptions.
Node recovery
In an MPP environment, the database manager is now able to recover a failed node without having to restart the entire cluster. This reduces the downtime and disruption caused by a node failure.

6 November 2019

Version 11.5.1.0 is now available

Version change
Db2 Warehouse changed its versioning to reflect the level of the Db2 engine that it uses. Db2 Warehouse 11.5.1.0 uses the Db2 11.5 engine, which was released in July 2019 and has since been updated with additional features and fixes.


New upgrade process
Due to major changes resulting from the move to a new Db2 engine, the procedure to upgrade to this new version differs from the usual update procedure. There are additional pre-upgrade and post-upgrade steps that are necessary to ensure a smooth transition. The upgrade procedure is documented here.

Enhanced db_migrate command
The db_migrate command now offers the new parameters -hiddencols and -orderby, which enable the migration of data in hidden columns and of ordered data.

Improved string compression
For string compression, Db2 Warehouse can now use a page-based (as opposed to dictionary-based) algorithm when appropriate. Especially during bulk insert or update operations involving high-cardinality columns that contain many unique values, this greatly improves compression for the following data types:

  • CHAR  
  • CHAR FOR BIT DATA  
  • VARCHAR  
  • VARCHAR FOR BIT DATA  
  • GRAPHIC  
  • VARGRAPHIC  
  • BINARY  
  • VARBINARY
Improved availability
In an MPP environment, the database manager is now able to recover a failed node without having to restart the entire cluster. This capability reduces the downtime and disruption caused by a node failure.

6 September 2019

Version 3.10.0 is now available

This version contains fixes that are important for the proper functioning of the product, so you should update your deployment as soon as possible. For instructions, see Updating Db2 Warehouse.


31 July 2019

Version 3.9.0 is now available

Db2 engine features


APAR fixes

  • IT29081 - FALSE POSITIVE AGAINST BACKUP WITH REGISTRY DB2_BCKP_PAGE_VERIFICATION
  • IT23771 - DB2 MAY CRASH DURING RF
  • IT29021 - DB2 TRAPS IN SQLRL_GET_FULL_NAME()
  • IT28506 - IN DB2 V11.1.4.4, A SEGMENTATION FAULT (CORE DUMPED) CAN OCCUR WHEN RUNNING DB2HAICU WITH AN XML INPUT FILE.
  • IT27943 - Deadlatch between an online backup, prefetchers and an agent doing a write to the same table space
  • IT29280 - ABNORMAL SHUTDOWN OF DATABASE WHEN LOAD ENCOUNTERS SQLO_NOTAVAIL ERROR IN SQLPUPDATEGLOBALTIMEINGLFH
  • IT27963 - DB2CLUSTER SEGFAULT CAUSED BY WRONG ARGUMENTS TO PD_HEXDUMP WHEN RUNNING INSTALLFIXPACK ON PURESCALE
  • IT29225 - HADR CANNOT BE STARTED DUE TO INSUFFICIENT HADR DATABASE RESOURCE VALIDATION
  • IT29380 - ROLLFORWARD MAY FAIL WITH SQL1271W AND SQLB_TBSPACE_TOO_SMALL WHEN REPLAYING AN ALTER TABLESPACE OPERATION
  • IT29468 - DB2LOGGP CAN TRAP WITH SIGNAL #11 IN A FEDERATED ENVIRONMENT

3 July 2019

Version 3.8.0 is now available

Upgraded Python runtime for Spark Analytics on Db2 Warehouse and Integrated Analytics System
The version of Python used by Spark Analytics has been upgraded from Python 2.7 to Python 3.7. Support for Python 2 will end this year. To begin using the newer Python version, reinstall all Python packages for Spark Analytics as described in Installing Python packages on Db2 Warehouse. To see which Python packages (and their versions) are installed, log in to the Db2 Warehouse container and issue the following command: /usr/bin/pip list

S-TAP configuration data
Db2 Warehouse uses S-TAP to monitor its database traffic and to forward information about that traffic to a Guardium system. Administrators are now able to save S-TAP configuration data in the initialization file ${SYSCFGDIR}/$(hostname -s)/guard_tap.ini. Consequently, they no longer need to re-configure S-TAP manually each time Db2 Warehouse is redeployed.

Db2 engine features

  • New modules DBMS_LOB and UTL_RAW have been added for Oracle compatibility.
  • Autogroom has been enabled to enhance BLU compression.
  • You are now able to add LOB columns to CDE tables with an ALTER TABLE statement.
  • The maximum number of primary and secondary log files has been increased to 4096.


APAR fixes

  • IT26072 - UPGRADE DATABASE COMMAND RETURNS SQL1224, SQL0954 WHEN APPLHEAPSZ IS SET TO SMALL FIXED VALUE
  • IT28424 - CAN NOT CREATE STORED PROCEDURES WITH SAME NAME BUT DIFFERENT PARAMETERS WHEN SQL_COMPAT ='NPS'
  • IT29048 - HSJN RESID PREDICATE ISSUE, LEADING TO -901 FOR THE CDE QUERY AT PARSERWRAPPER/JOINREWRITER
  • IT29231 - UNSUSTAINABLE TRAP CAN OCCUR AFTER A SUSTAINED TRAP IF TEMP TABLES ARE USED BY TRANSACTIONS
  • IT28414 - DB2 MIGHT CRASH IN SQLOFMBLKEX->SQLO_MEM_POOL::MEMTREEPUT WITH "CORRUPT POOL FREE TREE NODE" ERROR MESSAGE IN DB2DIAG.LOG
  • IT28784 - RESTORING A LOADCOPY WHEN THERE IS A MODIFICATION STATE INDEX ON A COLUMN ORGANIZED TABLE MAY CORRUPT THE INDEX.
  • IT28248 - CLEANING UP OF EDU CAUSES PRIVATE MEMORY TO LEAK  (WINDOWS PLATFORM ONLY)
  • IT28622 - NUMBLOCKPAGES OF A BUFFERPOOL MAY BE ADJUSTED TO AN INVALID VALUE
  • IT28320 - RESTORE OF SMS AUTOMATIC STORAGE TABLE SPACE IN DB2 DEVELOPER-C EDITION FAILS WITH SQL1139N
  • IT28880 - EXTENT MOVEMENT RETURNS SQL1523N REASON CODE "15" WHEN EXTENT MOVEMENT NOT RUNNING
  • IT28881 - TABLE SPACE RESTORE UNABLE TO USE ALL CONTAINER PATHS AND FAILS WITH DISK FULL
  • IT27455 - RELAX DB2DIAG.LOG MESSAGE IN SQLSFETCHNEXTTEMPRECORD PROBE 504
  • IJ15638 - SECOND EXECUTION OF A CACHED CDE QUERY IN DPF ENVIRONMENT MIGHT LEAD TO SEGV (FODC TRAP) OR FODC MEMORY FOR ILLEGAL REQUEST SIZE
  • IT28103 - STMM REPORTS SEVERE ERRORS ABOUT STMMSIMULATESORTMERGE IN DB2DIAG.LOG
  • IT21772 - INCORRECT LENGTH FOR SQL_LOCAL_LEN AND SQL_CODESET_LEN ON AIX
  • IT28801 - BACKUP WITH AIX COMPRESSION NX842 RETURN SQL2079 GETMAXCOMPRESSEDSIZE PROMISED THAT THIS BUFFER WOULD BE BIG ENOUGH
  • IT25379 - DB2 INSPECT MIGHT INCORRECTLY REPORT OUT OF RANGE ERRORS FOR BLU TABLES
  • IT22497 - EM_LIST_LATCH IS GRABBED AND RELEASED FREQUENTLY EVEN THERE IS NO TRANSACTION EVENT MONITOR CREATED/ACTIVE
  • IT28983 - DB2CLUSTER -CM -LIST -TIEBREAKER RETURNS INCORRECT OUTPUT WHEN THE PEER DOMAIN IS STOPPED OR IN MAINTENANCE
  • IT27809 - DB2 LOCK EVENT MONITOR REPORT: LOCK MODE HELD IS NONE IN PURESCALE
  • IT29232 - EXTRANEOUS CODE IN THE MOUNTV111_STOP.KSH SCRIPT
  • IT29127 - DB2DART DETECTS ERRORS AFTER UPGRADING TO V11.1FP4
  • IT28615 - DB2PD -TABLESPACE DOES NOT INDICATE LOCAL OFFLINE STATE
     

3 June 2019

Version 3.7.0 is now available

Db2 engine performance improvements
The time required to create the encoding dictionaries for columnar tables has been significantly reduced. This improves the performance of SQL-based insert and update statements, and allows large data sets to be processed more efficiently.


30 April 2019

Version 3.6.0 is now available

New system metrics
The web console now provides "page in" and "page out" metrics to help users diagnose potential memory problems.


2 April 2019

Version 3.5.0 is now available

Customization of Db2 registry and configuration
Db2 Warehouse administrators can now tune the Db2 registry settings and configuration parameters to suit their needs.

Data replication target configuration
You can now use the Db2 Warehouse web console to configure BLUDR replication on a target system.

Common container operating system upgrade
CentOS and ClefOS have been upgraded to version 7.6.


1 March 2019

Version 3.4.0 is now available

IBM Data Replication for Db2 Continuous Availability
A 90-day "Try it Now" version of IBM Data Replication for Db2 Continuous Availability has been integrated into Db2 Warehouse. All Db2 Warehouse customers can immediately begin their trial after updating to version 3.4.0 of Db2 Warehouse.

Support for HADR deprecated
Use of the high availability disaster recovery (HADR) feature for SMP deployments has been deprecated. Customers are directed to use IBM Data Replication for Db2 Continuous Availability as an alternative.

Backup and restore with compression
Database, schema, and table-level backup and restore capability with file compression is now available.

Performance improvements
The internal processes used to monitor and troubleshoot Db2 Warehouse during container start-up now run in parallel for improved performance.

Documentation enhancements

  • A new topic describes the best practices for selecting Azure instance and storage types for Db2 Warehouse deployments.
  • NFS mount options are now documented.

31 January 2019

Version 3.3.0 is now available

IBM Data Replication for Db2 Continuous Availability
Db2 Warehouse can now serve as a target for data replication from IBM Integrated Analytics System (IIAS).

Installing Python packages
The instructions for installing additional Python packages have changed. If you install packages for all users, you no longer need to reinstall all packages after an update. You also no longer need to repeat the re-installation procedure on all nodes of a cluster. The instructions for installing packages for a single user are unchanged.

Backup and restore
Backup and restore tasks can now be managed from the web console.

AWS best practices
A new topic provides guidance on selecting appropriate AWS instance and storage types for Db2 Warehouse deployments.

Content hub
The Db2 Warehouse content hub offers a new way to access product information.

New videos


21 December 2018

Version 3.2.0 is now available

Customizable administrator, administrator group, and user group names
If you use an external LDAP server or Microsoft Active Directory server for user authentication and authorization, you can now customize the names of the administrator, administrator group, and user group to comply with your organization's naming conventions and security guidelines.

Documentation enhancements
There are new step-by-step instructions for How to use Kubernetes to deploy Db2 Warehouse SMP.

Faster deployment
The time needed to deploy Db2 Warehouse has been reduced by up to 20%.


30 November 2018

Version 3.1.0 is now available

Backup and restore

  • Online incremental backups can now be carried out from the command line.
  • [Technical Preview] More granular backups are now supported. You can now back up a single database schema to a local file system, SAN, or NAS, and then restore one table or all tables from that schema.


Documentation enhancements


31 October 2018

Version 3.0.1 is now available

Cloud deployments

  • Db2 Warehouse has integrated its high-availability (HA) capabilities with the cloud native HA capabilities of Kubernetes and the public cloud IaaS providers AWS, Azure, GCP, and Softlayer. Now, when the infrastructure recovers from a node failure, Db2 Warehouse automatically recovers with it.
  • Db2 Warehouse now allows cloud-native local storage of table data for AWS (EBS and EFS), Azure Disk Storage, Google Cloud Filestore, Elastifile, and Quobyte:

        Deploying Db2 Warehouse on Azure
        Deploying Db2 Warehouse on AWS
        Deploying Db2 Warehouse on GCP

Backup and restore
Db2 Warehouse now offers full online and offline backup and restore capability, using the db_backup command.

Documentation

Spark
Spark is now disabled by default, which frees up to 20% more resources for installations that do not use Spark.

In-database analytics
Members of the bluusers group can now create global temporary tables for their in-database analytics models.
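For illustration, a minimal sketch of one common form, a declared temporary table; the names and columns are hypothetical, and a USER TEMPORARY table space must already exist:

-- A minimal sketch; assumes a USER TEMPORARY table space exists.
-- The table and its rows are private to the session.
DECLARE GLOBAL TEMPORARY TABLE session.model_scratch (
  id    INT,
  score DOUBLE
) ON COMMIT PRESERVE ROWS NOT LOGGED;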

Db2 engine performance improvements
The performance of large insert and update operations for column-organized tables has been improved significantly (by a factor of 2 to 3).


28 September 2018

Version 2.12.0 is now available

Support for locally attached disks

IBM Db2 Warehouse MPP deployments now have the capability to use locally attached disks. A technical preview that demonstrates this new capability for Azure Premium Storage and AWS Elastic Block Store (EBS) is now available. For more information, see the following videos:

Client container enhancement

When a user is added to the client container, that user now automatically inherits the environment needed to use Db2 and related tools.


20 September 2018

Version 2.11.2 is now available

This version contains fixes that are important for the proper functioning of the product, so you should update your deployment as soon as possible. For instructions, see Updating Db2 Warehouse.


31 August 2018

Version 2.11.0 is now available

Support for Transport Layer Security (TLS) 1.2

Db2 Warehouse now uses Transport Layer Security (TLS) 1.2 to provide secure communication of data in transit. TLS 1.0, which has known vulnerabilities, has been disabled.

User account bluadmin can now reside in a subdirectory of the user base DN location

The bluadmin user account no longer needs to reside in the directory specified by the user base distinguished name (DN), but can now instead reside in a sub-directory. This lets you specify a higher level directory for the user base DN, and use one or more of its subdirectories to organize your user accounts, including bluadmin.


27 July 2018

Version 2.10.0 is now available

This release features a new Db2 SQL engine and updated software components to address defects and security vulnerabilities.


29 June 2018

Version 2.9.0 is now available

New home page for database administrators

For users with the BLUADMIN role, the home page now displays information about database availability, responsiveness, throughput, resource usage, contention, and time spent. This lets administrators quickly assess database health.

Display options and their values

This release introduces a new command (show_options) that displays the values of all Db2 Warehouse options for which a value has been specified. This is especially helpful when duplicating a Db2 Warehouse environment.

Improved logging

The Db2 Warehouse deployment log now records source file names and line numbers. This additional information helps troubleshooters analyze problems.

Python module now available

The Python database module (ibm_db) is now included in the Db2 Warehouse client container. This lets Python application developers write code that uses native database APIs instead of other means, such as Db2 command-line interface commands.


31 May 2018

Version 2.8.0 is now available

Replacement of Kitematic for older operating systems

Db2 Warehouse has discontinued support for Kitematic. If your operating system supports it, install the Docker for Windows or Docker for Mac app and use that to deploy the Linux x86 container. However, if you are using an older operating system that does not support these apps (for example, Windows 7), you can now deploy, update, or redeploy the Linux x86 container by using the Docker Toolbox without Kitematic.

Console support for Db2 for z/OS remote servers

Db2 for z/OS is now displayed in the list of remote server types when you click ADMINISTER > Remote Tables in the web console.

Monitoring enhancement

When you click MONITOR > Systems > Software in the web console, you are now alerted about the number of non-temporary table spaces that are approaching the space limit. You can drill down to get the table space names and percentage of space that is used.


30 April 2018

Version 2.7.0 is now available

Additional metrics

The following new metrics are displayed if you click Monitor > Dashboard in the Db2 Warehouse web console: CPU, memory, swap space, network transmit, and network receive.

New Db2 Warehouse Orchestrator tool for MPP deployments

The new Db2 Warehouse Orchestrator tool replaces Swarm as one of the ways to deploy, update, and scale in or out in an MPP environment. The Db2 Warehouse Orchestrator method is simpler and more secure than the Swarm method. You can obtain the tool from an IBM GitHub repository.

Renamed product edition

The Db2 Warehouse Developer-C for Non-Production edition has been renamed to Db2 Warehouse Developer Edition. This edition comes with a non-expiring free license but is not warranted and is not intended for use in production environments. It supports SMP deployments only.

Renamed dashlicm command

The dashlicm command has been renamed to the manage_license command.

Additional documentation for deployments on Amazon Web Services (AWS)

As part of the documentation updates for this product version, instructions for SMP deployments on AWS are now available.


18 April 2018

IBM Cloud Private now supports IBM Db2 Warehouse MPP deployments.


30 March 2018

Version 2.6.0 is now available

This product version contains the following enhancements:

New way to obtain IBM Db2 Warehouse Developer-C for Non-Production

IBM Db2 Warehouse Developer-C for Non-Production is now available from the IBM Trials and Downloads: Db2 Hybrid Data Management website.

POWER9 support

Db2 Warehouse support on POWER LE hardware now includes support for the POWER9 processor. You must use the POWER9 processor in POWER8 mode, with RHEL 7.4 and version 3.10 of the kernel.

LDAP-only support for Microsoft Active Directory

If you are using a Microsoft Active Directory server, you can now configure Db2 Warehouse so that each node acts solely as an LDAP client, rather than having each node join the Active Directory domain. For information, see Setting up a Microsoft Active Directory server for IBM Db2 Warehouse.

Sample data container enhancements

Db2 Warehouse now provides new docker run command parameters for use with the sample data container. You can use these new parameters to improve how you load and manage the sample data, including dropping the data. In addition, a sample data container is now available for IBM z Systems hardware. For information, see IBM Db2 Warehouse sample data.

Important:


26 February 2018

Version 2.5.0 is now available

IBM Db2 Warehouse Developer-C for Non-Production for Linux Intel x86 platform

You can now use Db2 Warehouse Developer-C for Non-Production on the Linux Intel x86 platform. This Db2 Warehouse product has a free, non-expiring license; you do not need to obtain a free trial license and then convert it to a paid production license after 90 days. You can use this product to try out features in your development and test environments; it is not intended for production use. It is not warranted and does not come with official IBM support, but you can post questions to the Db2 Warehouse forum.

Deployment on Windows and Macintosh by using native Docker applications

Instead of using Kitematic to deploy and update IBM Db2 Warehouse Developer-C for Non-Production on Windows and Macintosh, you can now use Docker for Windows or Docker for Mac. When you deploy or update by using Docker for Windows or Docker for Mac, you use the new Linux Intel x86 container of Db2 Warehouse Developer-C for Non-Production. MPP deployments are not supported.

On Windows and Macintosh, you can now also deploy and update the client and sample data containers.

Important: For all new deployments, you should use Docker for Windows or Docker for Mac with an Intel x86 container, instead of using a Kitematic container. IBM has deprecated the use of Kitematic for deploying and updating Db2 Warehouse Developer-C for Non-Production. Also, Docker refers to Kitematic as a legacy solution and recommends using Docker for Windows or Docker for Mac if possible.

Expanded support for Db2 Warehouse on IBM Cloud Private

You can now deploy Db2 Warehouse image containers for POWER LE and z Systems hardware on IBM Cloud Private, an application platform for developing and managing on-premises, containerized applications. IBM Cloud Private is an integrated environment that includes Kubernetes, a private image repository, a management console, and monitoring frameworks.


29 January 2018

Version 2.4.0 is now available

This product version contains the following enhancements:

Secure invocation of Docker commands on remote nodes

If you configure the Docker engine to listen over a TCP/IP port, you can submit Docker commands over the network to run on remote Db2 Warehouse nodes. To support secure remote invocation of Docker commands, Db2 Warehouse now provides the setup_docker_remote.sh script. This script sets up the certificates that are required for secure remote communication. The script also installs the docker_remote command, which makes it easier to invoke the Docker commands remotely. For more information, see Remote invocation of Docker commands in IBM Db2 Warehouse.

Simpler configuration of Microsoft Active Directory

You can now configure Db2 Warehouse to act as a client to a Microsoft Active Directory server by using the web console. In the console, click Settings > External User Management, and then click External AD.

Important: Db2 Warehouse support for Docker Hub has been deprecated and will be removed in the near future. You should start using Docker Store instead, which provides a better user experience. For information about accessing and using the containers in Docker Store, see IBM Db2 Warehouse prerequisites, IBM Db2 Warehouse containers, and the instructions in task topics.


22 December 2017

Version 2.3.0 is now available

This product version contains the following enhancements:

Microsoft Active Directory support

You can now use a Microsoft Active Directory server for authentication and authorization, as an alternative to an LDAP server. Db2 Warehouse uses a self-contained LDAP server by default. For information about setting up Active Directory, see Setting up a Microsoft Active Directory server for IBM Db2 Warehouse.

New user management command

You can use the new docker exec -it Db2wh configure_user_management command to configure either a Microsoft Active Directory server or external LDAP server. This command replaces the docker exec -it Db2wh configure_ldap command. For information about the new command, see configure_user_management command for IBM Db2 Warehouse.

IBM Security Guardium support

IBM Security Guardium software helps provide comprehensive data protection. To specify a Guardium collector for Db2 Warehouse, you can specify the new -e GUARDIUM_INFO parameter for the docker run command. For more information, see Configuration options for the IBM Db2 Warehouse image.

Containers for POWER LE in Docker Store

The Db2 Warehouse image container and client container for POWER LE hardware are now available in Docker Store. As in previous releases, these containers are also available in Docker Hub and the IBM Box location. For a summary of the containers for Db2 Warehouse, including the naming conventions and locations, see IBM Db2 Warehouse containers.

Geospatial Toolkit

The new Geospatial Toolkit provides SQL functions that you can use to efficiently process and index spatial data. For example, you can use the functions with Global Positioning System (GPS) data from cell phones or vehicles to track the movement of entities in or around an area of interest or to calculate spatial relationships among various geospatial or geographical features. For more information, see Geospatial Toolkit.
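
For example, a point-to-point distance calculation might look like the following sketch, run through the Db2 command line processor; the function signatures follow common ST_* conventions and should be confirmed against the Geospatial Toolkit reference:

   # Approximate distance in kilometers between two GPS points (WGS 84, SRID 4326)
   db2 "SELECT ST_Distance(ST_Point(-73.98, 40.75, 4326),
                           ST_Point(-0.12, 51.50, 4326),
                           'KILOMETER')
        FROM SYSIBM.SYSDUMMY1"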

Fast Data Movement

Fast Data Movement allows rapid transfer of data between Hadoop distributions and Db2 Warehouse databases. For more information, see Fast Data Movement.

Changes to client container deployments and updates

The following changes apply when you deploy or update the client container:

  • The docker run command no longer requires the --privileged=true parameter.
  • Specifying a user name and password for the -e REMOTE_DB parameter for the docker run command now results in an error.
  • There are multiple changes to the syntax of the db_catalog command. For example, you can now change the database alias in the catalog, and specifying a user ID and password for the --add parameter now results in an error.

22 November 2017

Version 2.2 is now available

New Federation data sources in the console

MySQL and PostgreSQL are now available in the Db2 Warehouse web console as data sources for Federation.


27 October 2017

Version 2.1 is now available

This product version contains the following enhancements:

z Systems support

You can now deploy Db2 Warehouse on Linux on IBM z Systems hardware. Together, Linux and z Systems hardware provide outstanding data security, high availability, and superior performance. For deployment prerequisites, see IBM Db2 Warehouse prerequisites (Linux on IBM z Systems hardware).

Simplified registration

Gaining access to Db2 Warehouse containers for many platforms will become easier because, in early November, the containers will be available in Docker Store. To obtain access:
1. Obtain a Docker ID.
2. Log in to Docker Store.
3. Search for the relevant container: IBM Db2 Warehouse (image container for Linux), IBM Db2 Warehouse Developer-C for Non-Production (image container for Windows and Mac), IBM Db2 Warehouse client container, or IBM Db2 Warehouse sample data container.
4. In the search results, click the box for the relevant container.
5. Click Proceed to Checkout.
6. Complete your contact information, agree to the terms and conditions, and click Get Content.

Improved web console usability

The Db2 Warehouse web console has been redesigned for better usability:

  • The home page now provides a summary of hardware issues, software issues, database alerts, and storage usage.
  • To get key metrics about database activity, you can use the new Monitor > Dashboard option.
  • To get quick access to key console options for your role, you can use the new menu in the upper right corner of the console (click the person icon). This menu includes an About option that provides Docker image information.
  • To walk through the options in the console navigation menu and learn what the options do, click the new Discover button in the upper right corner of the console.
  • To download scripts to move files to the cloud, for later loading into Db2 Warehouse, use the Connect > Download Tools option.
     

Livy server job scheduler for Spark

When you deploy the Db2 Warehouse image container, a Livy server is automatically installed and configured for you. You can submit Spark applications from a client system to a Spark cluster running alongside Db2 Warehouse, inside the same container. You can use the new docker exec -it Db2wh livy-server command to start or stop the Livy server or obtain the status of the Livy server. For more information, see Submitting Spark applications through a Livy server and livy-server command for IBM Db2 Warehouse.
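
The subcommand names in the following sketch (start, status, and stop) are inferred from that description rather than taken from the command reference:

   docker exec -it Db2wh livy-server start    # start the Livy server
   docker exec -it Db2wh livy-server status   # check whether the Livy server is running
   docker exec -it Db2wh livy-server stop     # stop the Livy server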

Additional operating system support for POWER LE hardware

The Red Hat Enterprise Linux operating system is now supported for Db2 Warehouse on POWER LE hardware.

Enhanced diagnostic information

You can use the new dbdiag command to collect diagnostic data for components of your Db2 Warehouse implementation. You can selectively collect data according to the component type, problem symptom, and node. For more information, see dbdiag command for IBM Db2 Warehouse.

Production-level support for recently introduced configuration options

The TABLE_ORG, DB_PAGE_SIZE, DB_COLLATION, DB_TERRITORY, and DB_CODESET configuration options, which were introduced as a technical preview in Db2 Warehouse 2.0, are now supported for production environments, through the Db2 Warehouse image container. For information about these options, see Configuration options for the IBM Db2 Warehouse image.
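
For example, the options are supplied as -e parameters on the docker run command. In the following sketch, the option values, volume path, and image name are illustrative assumptions only:

   docker run -d -it --net=host --name=Db2wh \
     -v /mnt/clusterfs:/mnt/bludata0 \
     -e TABLE_ORG=ROW \
     -e DB_PAGE_SIZE=16384 \
     -e DB_CODESET=UTF-8 \
     -e DB_TERRITORY=US \
     -e DB_COLLATION=IDENTITY \
     <Db2_Warehouse_image>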

Compatibility changes

For information about new compatibility changes, see the November 9, 2017 entry for Db2 Warehouse on Cloud.

In addition, the container names have changed. For details, see IBM Db2 Warehouse containers. Also, Db2wh is now used as the container name in the commands in the documentation.


24 October 2017


Deployment through IBM Cloud Private

You can now deploy version 2.0 of Db2 Warehouse and Db2 Warehouse Developer-C for Non-Production on IBM Cloud Private, an application platform for developing and managing on-premises, containerized applications. IBM Cloud Private is an integrated environment that includes Kubernetes, a private image repository, a management console, and monitoring frameworks. IBM Cloud Private does not support Db2 Warehouse MPP deployments. For more information, see IBM Db2 Warehouse.


29 September 2017

Version 2.0 is now available

New ways to customize your database for your workloads

The TABLE_ORG and DB_PAGE_SIZE options are now available for the -e parameter, which you can specify for the docker run command. The TABLE_ORG option specifies whether tables use column-organized storage (which is best for analytic workloads) or row-organized storage (which is best for OLTP workloads). These options are available as a technical preview in the image container. For more information, see Configuration options for the IBM Db2 Warehouse image.

Administration improvements

You can now select a role and grant privileges to it or revoke privileges from it by clicking Administer > Privileges. In addition, the pages that are displayed when you click Settings > Users and Privileges and click Settings > My Profile have been redesigned.

Enhanced health information

You can now use the docker exec -it dashDB dashdbhealth command to check the health of various aspects of your Db2 Warehouse implementation, on one or multiple nodes. This command can be very useful for diagnosing problems. For more information, see dashdbhealth command for IBM Db2 Warehouse.

Additional SSL support

The db_catalog command, which catalogs a remote Db2 Warehouse database for use with the tools in the Db2 Warehouse client container, now supports the --ssl parameter. You can use this parameter with the --add parameter to catalog the remote database with SSL support. You can then run Db2 CLP commands and scripts over SSL. For more information, see db_catalog command for IBM Db2 Warehouse.
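
For example, a minimal sketch of cataloging over SSL; only the --add and --ssl parameters are confirmed by this entry, and the host placeholder is an assumption to be checked against the command reference:

   # Catalog a remote Db2 Warehouse database with SSL support
   db_catalog --add <remote_host_or_IP> --ssl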

Changes to constraint enforcement

By default, the NOT ENFORCED parameter applies to constraints for tables that you create in Db2 Warehouse 2.0 or later. Because the database manager does not enforce uniqueness by default for new tables, incorrect or unexpected results can occur if the table data violates the not-enforced constraint. If you want to enforce uniqueness, specify the ENFORCED parameter when you create or alter unique or referential constraints (such as primary key and foreign key). The change in default behavior aligns with best practices for warehouse workloads and improves ingest performance.
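
For example, through the Db2 command line processor (the table and column names are illustrative):

   # In 2.0 and later, this primary key defaults to NOT ENFORCED:
   db2 "CREATE TABLE sales (order_id INTEGER NOT NULL, PRIMARY KEY (order_id))"

   # Specify ENFORCED explicitly if the database manager must guarantee uniqueness:
   db2 "CREATE TABLE orders (order_id INTEGER NOT NULL, PRIMARY KEY (order_id) ENFORCED)"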

Also, you can tailor your database for your location by specifying the DB_CODESET, DB_COLLATION_SEQUENCE, and DB_TERRITORY options for the -e parameter of the docker run command. These options are available as a technical preview in the 1.11.2 experimental container. For more information, see Configuration options for the IBM Db2 Warehouse image.


25 August 2017

Db2 Warehouse 1.11.1 contains fixes that are important for the proper functioning of the product, so you should update your deployment as soon as possible. For instructions, see Updating Db2 Warehouse.

Also, see the webcasts.


8 August 2017

External tables are now supported. For information about how to create them, see the CREATE EXTERNAL TABLE statement topic or the demo, which also shows how to load data and check row counts.
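
As a sketch of the workflow that the demo walks through (the column definitions, file path, and option values here are assumptions, not part of this entry):

   # Define an external table over a delimited file
   db2 "CREATE EXTERNAL TABLE ext_sales (order_id INTEGER, amount DECIMAL(10,2))
        USING (DATAOBJECT '/scratch/sales.csv' DELIMITER ',')"

   # Load the data into a regular table and check the row count
   db2 "INSERT INTO sales SELECT * FROM ext_sales"
   db2 "SELECT COUNT(*) FROM sales"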


2 August 2017

A "what's new" webcast for the June release is now available.


28 July 2017

Version 1.11.0 is now available

This product version contains fixes and the following enhancements and changes.

Enhancements and changes to deployment and related tasks

Deploying Db2 Warehouse and performing related tasks, such as updating and scaling Db2 Warehouse, are now simpler because you don't have to issue the docker exec -it dashDB start command. Some other minor changes have also been made.

Enhancement to HA

After a head node failover, if the original head node becomes reachable again, restarting the system causes the original head node to become the current head node again.

Additional schema privileges

You can now grant or revoke the following new schema privileges by using the Db2 Warehouse web console: ACCESSCRTL, DATAACCESS, DELETEIN, EXECUTEIN, INSERTIN, LOAD, SCHEMAADM, SELECTIN, and UPDATEIN. To grant or revoke schema privileges, click Administer > Schemas, select a schema, and click Privileges.

Reporting of CPU information

To help you monitor usage, the get_system_info command and the Settings page in the Db2 Warehouse web console now report the numbers of CPU cores, in addition to the numbers of physical CPUs.

Docker engine and storage driver support

Db2 Warehouse now supports Docker engine version 1.12.6 or higher, rather than just 1.12.6. This support applies to the CE and EE Docker engines that are supported by Docker and by Ubuntu (docker.io). Also, the devicemapper storage driver is now required for only CentOS and RHEL, not for all operating systems.

Performance improvements

Parallel inserts are now enabled in Db2 Warehouse, except when you are using HADR. Vectorized inserts and reduced logging are also now enabled. Parallel inserts, vectorized inserts, and reduced logging can help improve performance.

Change to the criteria for determining the maximum number of nodes

If you deploy Db2 Warehouse 1.11.0 with 7.68 TB or less of cluster total RAM, 24 data partitions are allocated. The maximum number of nodes when you deploy or scale out is therefore 24. If you deploy Db2 Warehouse 1.11.0 with more than 7.68 TB of cluster total RAM, 60 data partitions are allocated. The maximum number of nodes when you deploy or scale out is therefore 60.

db_migrate command

The db_migrate command that is available in the Db2 Warehouse client container and image container now contains the functionality of the db_migrate_preview command, such as the loader load|exttab parameter.


18 July 2017

New offering names

The following list summarizes the new offering names:

  • IBM dashDB for Analytics is now IBM Db2 Warehouse on Cloud (effective July 18, 2017).
  • IBM dashDB Local is now IBM Db2 Warehouse (effective July 18, 2017).
  • IBM dashDB for Transactions is now IBM Db2 on Cloud (effective June 20, 2017).


Also, the dashDB Local product for Windows and Macintosh platforms is now called IBM Db2 Warehouse Developer-C for Non-Production. It is available at no charge, with a non-expiring license.


30 May 2017

Version 1.9.0 is now available

This product version contains fixes and the following enhancements. For more information, see the "what's new" June webcast.

External LDAP

In previous releases, dashDB Local always used a self-contained LDAP server for authentication and authorization. You now have the option of configuring dashDB Local to act as a client to an external LDAP server by using either the new configure_ldap command or the new Settings > External LDAP option in the dashDB Local web console. You can also monitor the health of your external LDAP server by using the web console.

get_webconsole_url command

You can use the new get_webconsole_url command to display the IP address and port number of the host where you deployed the dashDB Local image.
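
Assuming that the command is invoked through docker exec like the other dashDB Local commands, the call would look like this:

   docker exec -it dashDB get_webconsole_url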

License monitoring

If your dashDB trial license is within 7 days of expiring or has expired, a message is now displayed in the dashDB Local web console. In addition, the status, version, and start commands now display a license information banner that shows the license type, the license state, the expiry date, and the number of days before a trial license expires.

dashDB Local web console

The style and color of the dashDB Local console have changed to be more consistent with the look and feel of tools from other IBM Analytics products.

Experimental container

The new dashDB experimental container provides preliminary versions of new and enhanced advanced features that are planned for a future official dashDB product. The features are external tables, reduced logging, vectorized inserts, workload management (WLM), and parallel insert, update, and delete (parallel IUD). These features are available for your preview and evaluation; they have not been fully tested, so do not use them in a production environment.

To deploy the container, follow the instructions in Deploying dashDB Local (Linux), but use one of the following repository and tag combinations:

  • For the Ubuntu operating system on POWER LE hardware: ibmdashdb/preview:v1.9.0-experimental-ppcle
  • For Linux operating systems on other hardware: ibmdashdb/preview:v1.9.0-experimental-linux
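
For example, a Linux deployment might look like the following sketch; everything other than the repository and tag (the flags, container name, and mount path) reflects a typical dashDB Local deployment rather than details from this entry:

   docker run -d -it --privileged=true --net=host --name=dashDB \
     -v /mnt/clusterfs:/mnt/bludata0 \
     ibmdashdb/preview:v1.9.0-experimental-linux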


For information about how to use the features in the experimental container, contact your IBM Support representative.


28 April 2017

Version 1.8.0 is now available

This product version contains fixes and the following enhancements. For more information, see the "what's new" May webcast.

Parallel inserts

You can now enable parallel inserts into the dashDB Local database by using the docker exec -it dashDB parallel-enable.sh command. By default, parallel inserts are disabled. If you enable them, you can disable them again by using the docker exec -it dashDB parallel-disable.sh command. Both commands are currently available for technical preview only. Performing parallel inserts can increase the log space requirement.

For more information about the commands, see parallel-enable command for dashDB Local and parallel-disable command for dashDB Local.

Fluid Query

Fluid Query now supports Oracle as a data source on both POWER LE and Intel hardware. In addition, the Microsoft SQL Server, Apache Hive, Cloudera Impala, and Netezza data sources, which were previously supported on only Intel hardware, are now also supported on POWER LE hardware.


31 March 2017

Version 1.7.0 is now available

This product version contains fixes and the following enhancements. For more information, see the "what's new" April webcast.

Monitoring

In the dashDB Local web console, you can use the Monitor > Systems option, followed by the Software tab, to obtain the following information about database health:

  • The overall database status
  • Whether the database is in write pending state
  • The number of tables in reorg pending state
  • The number of tables in load pending state
  • The number of unavailable tables


Numbers of data partitions and nodes

If you have at least 960 GB of cluster total RAM when you deploy dashDB Local 1.7, 60 data partitions are allocated. (In previous releases, 24 partitions were allocated, regardless of how much cluster total RAM you had.) You can therefore now deploy or scale out to 60 nodes, and the higher number of data partitions can help improve performance. If you deploy dashDB Local 1.7 and have less than 960 GB of cluster total RAM when you deploy, 24 data partitions are allocated, and the maximum number of nodes when you deploy or scale out is therefore 24. Even if you increase the cluster total RAM to at least 960 GB after deployment, you cannot scale out to more than 24 nodes.

High availability disaster recovery (HADR) for SMP deployments

In a dashDB Local SMP deployment, you can set up HADR with one primary node and one standby (disaster recovery) node. HADR in SMP deployments does not use automatic failover and failback. If an unplanned outage occurs, you must instruct the standby node to take over as the new primary node, resolve the issue on the new standby node (the old primary node), and then instruct the new standby node to become the primary node again. To help you to set up and manage HADR in an SMP deployment, the setup_hadr and manage_hadr commands and the -e HADR_ENABLED='YES' parameter for the docker run command are now available. For more information, see High availability and disaster recovery for dashDB Local.
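
For example, HADR is enabled at deployment time on each node. In the following sketch, the flags, mount path, and image name are typical values rather than specifics from this entry; the setup_hadr and manage_hadr syntax is in the linked topic:

   # Deploy a node with HADR enabled, then configure roles with setup_hadr
   docker run -d -it --privileged=true --net=host --name=dashDB \
     -v /mnt/clusterfs:/mnt/bludata0 \
     -e HADR_ENABLED='YES' \
     <dashDB_Local_image>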

Azure

For documentation and a video on deploying dashDB Local on the Microsoft Azure cloud computing platform, see Deploying dashDB Local on Microsoft Azure and Tutorial: Deploy dashDB Local on Microsoft Azure.

Sample data

A sample data container for dashDB Local is now available. You can use this container, which is separate from the one that contains the dashDB Local product image, to load sample data into your BLUDB database. For instructions, see Loading sample data for dashDB Local.

Integrated ODBC driver for fluid queries

The dashDB Local product now contains an integrated, pre-configured ODBC driver for use with fluid queries. This lets you directly access remote data sources such as Hive, Impala, Spark, Netezza, and SQL Server without having to download, install, and configure the driver yourself.


27 February 2017

Version 1.6.0 is now available

This product version contains fixes and enhancements.


For instructions on how to update your system, see Updating dashDB Local.


30 January 2017

Version 1.5.0 is now available

This product version contains fixes and the following enhancements:

  • The dashDB Local web console has been improved:
    • When you perform an action for a database object such as a table by using the Administer option, clicking Run causes the action to be performed immediately, without opening the SQL editor.
    • The Administer option now provides an easier way to manage object privileges.
    • The Remote Tables (Fluid Query) option now supports the Microsoft SQL Server (MSSQL) data source.
  • Two new stored procedures based on Apache Spark are now available:
    • A generalized linear model (GLM) stored procedure. GLM handles heavy-tailed distributions and nominal (discrete-valued) distributions.
    • A TwoStep clustering stored procedure. TwoStep clustering is a data mining algorithm for large data sets. It is faster than traditional methods because it typically scans a data set only once before it saves the data to a clustering feature tree.
  • Changing the default configuration, such as for the Oracle compatibility mode, is now simpler. Instead of using the /mnt/clusterfs/options file, you now specify the -e <option>=<value> parameter for the docker run command, as shown in the sketch after this list.
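
For example, a setting that previously went into the /mnt/clusterfs/options file is now supplied when the container is created. The placeholders follow the entry above; the other flags are typical deployment values, not specifics from this entry:

   docker run -d -it --privileged=true --net=host --name=dashDB \
     -v /mnt/clusterfs:/mnt/bludata0 \
     -e <option>=<value> \
     <dashDB_Local_image>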


For instructions on how to update your system, see Updating dashDB Local.


30 December 2016

Version 1.4.1 is now available

This product version contains fixes to enhance the stability of dashDB Local. For instructions on how to update your system, see Updating dashDB Local.


25 November 2016

Version 1.4.0 is now available

Update to the latest version of dashDB Local to take advantage of the following enhancements. For instructions, see Updating dashDB Local.

Driver package for POWER LE hardware

The PowerLinux (ppc64le) dashDB driver package is now available for POWER LE hardware. You can download the PowerLinux driver package from the dashDB Local console by clicking Connect > Download Tools. For instructions on installing the driver package, see dashDB driver package.

Security

You can now use the dashDB REST API to create LDAP users for the dashDB Local database.

You can use the rotate_db_master_key command to change the master key, which is used to encrypt the data encryption key.

dashDB Local console 

"Quick tours" have been added to the home page and to the Load, Administer, and Run SQL pages. Quick tours walk you through the features of the console.

You can use a new button on the Spark Analytics page to open the Spark application monitoring page.

If you click Monitor > Workloads, the Overview page shows the database time breakdown by database or by workload. You can drill down to see the top resource consumers.

Monitoring of the history of database utility execution is now supported.

On the Run SQL page, you can insert and replace multiple scripts at the same time.

You can use the Privilege button on the Nickname Explorer page and the Manage Servers page to grant access to nicknames and remote servers.

If you attempt to log in to the dashDB Local console six times with an incorrect password, the account will be locked. You can try again in 30 minutes, or you can ask your administrator to unlock your account.
  
Swarm

The dashDB_local_Swarm_install.sh script now supports Docker 1.12 and the new -p (--port) option, which specifies the port number.
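
For example (the port number is illustrative, and any other parameters that the script accepts are omitted here):

   ./dashDB_local_Swarm_install.sh -p 8443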

Apache Spark

Apache Spark, which is integrated into dashDB Local, has been upgraded from 1.6 to 2.0.

You can now use Spark with R. You can launch and run a SparkR batch application by using the spark-submit.sh script, the IDAX_SPARK_SUBMIT stored procedure, or the REST API.

You can now use socket communication for a more efficient local data transfer when reading data from dashDB tables into Spark. This option is especially helpful for large tables. You can specify this option when reading the data frame.

A new data source parameter is available. You can now use an append mode to write small amounts of data, repeatedly if necessary, into an existing table.

A new compound self-service demo notebook for dashDB with Spark is available at https://github.com/ibmdbanalytics/dashdb_analytic_tools/blob/master/dashdblocal_notebooks/Tornado%20Clustering.ipynb.


28 October 2016

Version 1.3.0 is now available

  • Update to the latest version of dashDB Local to take advantage of the following enhancements. For instructions on how to update, see here.


Container packaging and delivery

  • As well as using the docker run and pull commands with the dashDB Local image from Docker Hub, you can now download a stand-alone dashDB Local image and load it directly by using the docker load command. For information about how to download the stand-alone image, contact your IBM Support representative.
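
For example, assuming that the downloaded image is delivered as an archive file (the file name here is hypothetical):

   # Load the stand-alone image into the local Docker image store
   docker load -i dashdb_local_image.tar.gz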


Workload management (WLM)

  • In dashDB Local 1.3.0, for newly created databases, WLM adaptive admission control is used to admit work to the system based on estimated and observed resource usage. However, fixed concurrency limits (for example, admit up to 10 queries), which were used in previous releases, remain in place for existing databases. Key benefits of adaptive admission control can include improved throughput and reduced errors due to concurrency.


Monitoring

  • The Activity event monitor with statement details is used to capture information about individual query executions. In the dashDB Local 1.3.0 console, if you click Monitor > Workloads and then click History and the Individual Executions tab, you can view the statements by group, such as by WLM workload or service class. Also, if you click Package Cache while in history mode, you can view the metrics for the statements that were captured by the Activity event monitor and information about the statements that used the most resources.


POWER LE support

  • Support for dashDB Local on POWER LE hardware is provided as a technical preview. For POWER LE hardware, the only supported operating system for dashDB Local is Ubuntu 16.04 or later.


Fluid Query

  • You can use the Administer > Remote Tables option in the dashDB Local console to define remote tables to be referenced by Fluid Query.

30 September 2016

Version 1.2.0 is now available

  • Update to the latest version of dashDB Local to take advantage of the following enhancements. For instructions on how to update, see here.


Integrated Apache Spark support

  • Apache Spark, previously available as a technical preview, is now enabled by default.
    Apache Spark offers numerous advantages to users of dashDB Local, such as the ability to interactively transform, visualize, and analyze data, and to run highly scalable analytic applications. You can run Apache Spark applications that analyze data in a dashDB database and write their results to that database. You can also use Apache Spark to subscribe to streaming engines to process and land streaming data directly into dashDB tables.


Develop Spark applications using Jupyter notebooks container

  • You can use Jupyter notebooks to develop Spark applications interactively and then either deploy them to dashDB or export their source code for further development. Use the Docker image provided for dashDB Local to quickly and easily set up a Jupyter environment that is ready to interact with dashDB's integrated Apache Spark.


Deploy, run, and monitor Spark applications using the CLI, SQL, the IBM dashDB Analytics API, or web console

  • A unique one-click deployment function allows you to transform your interactive Jupyter notebooks into deployed Spark applications inside dashDB. You can also develop your own Spark applications by using other development tools and then deploy them into dashDB. You can run and manage deployed Spark applications with the spark-submit.sh command-line tool, a documented REST API, or the SPARK_SUBMIT stored procedure, which you can call from a database SQL connection. You can also monitor Spark applications by using the dashDB web console.
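
For instance, assuming that spark-submit.sh follows the conventions of Apache Spark's own spark-submit tool (the class and file names here are hypothetical):

   # Run a deployed Spark application from the command line
   spark-submit.sh --class com.example.SalesAnalysis salesanalysis.jar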


Run Spark-based machine learning routines

  • For a defined set of popular machine learning problems, you can use integrated stored procedures for training models, making predictions, and managing stored models. These procedures internally leverage Apache Spark machine learning libraries.
    For more information, see Analyzing with Spark on dashDB Local.


Enhanced SQL editor

  • You can now select from a list of predefined SQL statements, which you can use as templates for creating your own queries. Available statements include SELECT, INSERT, DELETE, and UPDATE. You can add your queries to a list of saved scripts and view script execution history that includes details of the success or failure of a script execution.


Generate SELECT and INSERT statements directly from database object pages

  • You can now generate SELECT and INSERT statements for objects such as tables, views, aliases, MQTs, and nicknames directly from within the Administer window. Now, instead of jumping back and forth between the object page and the SQL Editor, you can simply edit out any unwanted properties from your generated SQL statement and run the query.


Fluid Query now available in technical preview

  • The Fluid Query feature lets you access data that is located at a data source that is different from the one to which you submitted a query.

30 August 2016

Version 1.1.0 is now available

  • Update to the latest version of dashDB Local to take advantage of the following enhancements. For instructions on how to update, see here.


New tagging convention

  • To ensure that our Kitematic users are getting the right image, we are now using the following convention for tagging our images:
    • Windows/Mac (using Kitematic): latest (but v1.0.0-kitematic also works)
    • Linux: latest-linux
    As a result, Windows and Mac users who are deploying or updating can click the CREATE button, and the most recent ("latest") image is automatically selected. Linux users must specify the "latest-linux" tag in their docker run or docker pull commands. The commands provided (these can be cut and pasted) reflect this change.


Oracle compatibility

  • You can specify that your dashDB Local database is to be created in Oracle compatibility mode, allowing you to run existing Oracle applications. For more information, see here.


Monitoring enhancements

  • MPP tables now show their data distribution statistics.
  • You can now easily switch between real-time and historical monitoring.
  • A new time range slider makes it easier to zero in on periods of interest.
  • Console response when switching between pages is greatly improved.


Object management enhancements

  • We’ve made it easier for you to create application objects, such as stored procedures, user-defined types, and user-defined functions. You can now create them within the Administer objects window, and we’ll provide you with a template and instructions to help you along.
  • We’ve made it easier for you to grant and revoke privileges. You can now specify privileges for multiple users and multiple objects at the same time.
  • We’ve made some usability improvements around table altering operations, making it easier for you to add, update, or delete columns. For example, as you add columns, we perform instant validation of the fields you enter.


SQL editor enhancements

  • You can now save your existing SQL scripts as favorites for easy access later.
  • We’ve added support for find/replace for regular expressions.
  • We’ve added templates to help you build your SELECT, INSERT, UPDATE, and DELETE statements.


Container portability

  • You can now move your dashDB Local data from one cluster to a new cluster in just a few simple steps. This is supported in both SMP and MPP deployments.

22 July 2016

Version 1.0.0 is now available

dashDB Local is next-generation data warehousing and analytics technology for use in private clouds, virtual private clouds and other container-supported infrastructures. It is ideal when you must maintain control over data and applications, yet want cloud-like simplicity. For more information, see this page.

Actionable items for preview participants

Preview license expiration
Per the terms of the dashDB Local preview program, your preview license will expire on 2016-08-01, at which time the preview version of the product will no longer work. You will have to re-register for the generally available (GA) product, after which you receive an additional 90-day trial license. Before those 90 days are up, you must purchase and convert to a production license.

No migration from preview version
You cannot perform a version update from the preview product to the 1.0.0 (GA) version of dashDB Local.

What's new and changed from the preview version

Integrated Apache Spark support
Use the integrated Apache Spark framework to run Spark applications, both interactively and in batch, for data exploration, data transformation, and machine learning.

Scale up/down support
You can now add or remove resources (CPU or memory) to the host servers in your dashDB Local deployment. When the services restart, any resource changes are detected and the database configuration is updated automatically.

No data distributed to catalog partition
To improve performance, dashDB Local now excludes the catalog partition (partition 0) and instead distributes the data among the other 23 partitions.

Kitematic is now a separate image
If you are deploying dashDB Local via Kitematic, you need to specify the v1.0.0-kitematic tag. Previously, you could use the default "latest", but that image will not work in the current release.

Enhanced port checking
We've added steps to our prerequisite check scripts that detect whether all the ports required by the dashDB stack are open on the hosts' Linux firewall.

Enhanced usage metrics
The dashDB Settings panel in the web console now shows monthly aggregated vCPU usage metrics.

Miscellaneous changes and improvements
  • Container O/S is now CentOS 7.2.
  • Console look and feel is improved.

[{"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Product":{"code":"SSCJDQ","label":"IBM Db2 Warehouse"},"Component":"","Platform":[{"code":"PF016","label":"Linux"},{"code":"PF017","label":"Mac OS"},{"code":"PF033","label":"Windows"}],"Version":"All Versions","Edition":"","Line of Business":{"code":"LOB10","label":"Data and AI"}}]

Document Information

Modified date:
20 February 2024

UID

ibm10739539