What's new in 22.0.1 - June 2022

Learn what's new in version 22.0.1.

Important: Use of inclusive terminology

While IBM values the use of inclusive language, terms that are outside of IBM's direct influence are sometimes required for the sake of maintaining user understanding. As other industry leaders join IBM in embracing the use of inclusive language, IBM will continue to update the documentation to reflect those changes.

For information on features that are deprecated or removed in 22.0.1, see Deprecated and removed features.

The following sections help you to see where the changes are made and where the new features are added.

Changes in interim fix IF002 - August 2022

Foundational services upgrade

Starting from Cloud Pak for Business Automation 22.0.1 interim fix 2 (IF002), IBM Cloud Pak foundational services 3.20.x (v3.20 OLM channel) is supported. Therefore, if you plan to upgrade to Cloud Pak for Business Automation 22.0.1-IF002 (and hence foundational services 3.20), you must run a script to change the channel of ibm-common-services-operator from 3.19 to 3.20. Upgrading the channel of ibm-common-services-operator upgrades the channel for all the other foundational operators.

Note: If you already applied the 3.20.x catalog in your cluster and updated the OLM channel to v3.20, you can still run the shell script. However, you must wait for the Zen custom resource status to be in the "Ready" state. To get the status of the Zen service, run the following command:
oc get zenservice -o yaml
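For example, to print just the status for each Zen service instance instead of the full YAML (a sketch, assuming that the status is reported in the .status.zenStatus field):
oc get zenservice --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name}: {.status.zenStatus}{"\n"}{end}'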

Online upgrade

  1. Log in to the cluster as an administrator by using the oc login command.
  2. Download the upgrade_common_services.sh script to your local machine.
  3. Read the script help carefully by running the following command:
    ./upgrade_common_services.sh -h  
  4. Run the script with the all-namespaces flag (-a) and the channel version:
    ./upgrade_common_services.sh -a -c v3.20
    Note: If you installed foundational services in specific namespaces (an unusual use case), add the namespaces and channel version to the command. For example, if you want to upgrade foundational services in a Cloud Pak namespace cp4ba and foundational services namespace cp4ba-cs to version v3.20, run the following command:
    ./upgrade_common_services.sh -cloudpaksNS cp4ba -csNS cp4ba-cs -c v3.20
  5. You can then apply the CP4BA 22.0.1-IF002 catalog sources by following the instructions in the interim fix readme. A quick verification sketch follows this procedure.
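You can confirm that the channel switch took effect by inspecting the subscription. This quick check assumes that foundational services are installed in the default ibm-common-services namespace:
oc get subscription ibm-common-services-operator -n ibm-common-services -o jsonpath='{.spec.channel}{"\n"}'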
Air gap (offline) upgrade
  1. Log in to your OpenShift Container Platform console.
  2. Remove the catalog source for your current foundational services installer version.
    oc delete CatalogSource opencloud-operators -n openshift-marketplace
  3. Log in to the cluster as an administrator by using the oc login command.
  4. Download the upgrade_common_services_airgap.sh script to your local machine.
  5. Read the script help carefully by running the following command:
    ./upgrade_common_services_airgap.sh -h  
  6. Run the script with the all-namespaces flag (-a) and the channel version:
    ./upgrade_common_services_airgap.sh -a -c v3.20
  7. You can then mirror the new images and apply the CP4BA 22.0.1-IF002 catalog sources by following the instructions in the interim fix readme.
Script enhancements to prepare capabilities for installation

You can now run the cert-kubernetes script (cp4a-prerequisites.sh) to generate database SQL statements and YAML templates for Kubernetes secrets that you need for each of your selected CP4BA capabilities.

Note: To extract the cert-kubernetes repository for IF002, complete the following steps.
  1. Download the Container Application Software for Enterprises (CASE) package 4.0.2.
  2. Extract the package.
  3. Extract the contents from the .tar file in the ibm-cp-automation/inventory/cp4aOperatorSdk/files/deploy/crs folder. Use the tar command to extract the archives.
    tar -xvzf ibm-cp-automation-4.0.2.tgz
    cd ibm-cp-automation/inventory/cp4aOperatorSdk/files/deploy/crs
    tar -xvf cert-k8s-22.0.1.tar
  1. First, run the cp4a-prerequisites.sh script with the [property] option to create a property file for your databases and LDAP.
  2. You can then run the script again with the [generate] option to output the scripts and YAML templates that are based on the values in the property file.
  3. The final step is for you to create the databases and the secrets that are needed by running the generated SQL, YAML, and scripts, as shown in the sketch after this list.
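The following sketch shows the sequence. It assumes that the options are passed with a -m flag, as in recent cert-kubernetes releases; check the script help (-h) for the exact syntax:
./cp4a-prerequisites.sh -m property    # create the property files for your databases and LDAP
# Edit the generated property files with your environment values, then:
./cp4a-prerequisites.sh -m generate    # output the SQL statements, YAML templates, and scripts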

For more information about the cp4a-prerequisites.sh script and a description of how to use it, see the corresponding technote for this interim fix under New features in Cloud Pak for Business Automation interim fixes.

Configuration of foundational services in multiple namespaces

Cloud Pak for Business Automation 22.0.1-IF002 supports a new installation with a dedicated instance of foundational services. It does not support an upgrade of CP4BA with a shared instance of foundational services to CP4BA with a dedicated instance. It is also not possible to install an instance of CP4BA with a shared foundational services instance and CP4BA with a dedicated instance in the same cluster.

You can use the OCP console or the scripts to configure your CP4BA deployment with foundational services.

Note: By default, a dedicated instance of foundational services is installed with CP4BA, which is the recommended use case. If you do not want to use dedicated foundational services for your deployment and want to use a shared instance instead, you must use the OCP console. A deployment of CP4BA with a shared foundational services instance can be done by applying the CP4BA catalog sources and then selecting the CP4BA catalog tile to create a subscription.
  • Using the OCP console

    To create dedicated foundational services for your CP4BA deployment in the OCP console, you must add a configMap (common-service-maps) to your cluster that describes the namespace mapping of the foundational services. The following example shows a configMap with two mapped instances:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: common-service-maps
      namespace: kube-public
    data: 
      common-service-maps.yaml: |
        controlNamespace: cs-control
        namespaceMapping:
        - requested-from-namespace:
          - cp4ba1
          map-to-common-service-namespace: cp4ba1
        - requested-from-namespace:
          - cp4ba2
          map-to-common-service-namespace: cp4ba2 

    Where the CP4BA deployment in the cp4ba1 namespace uses dedicated foundational services that are also installed in cp4ba1, and the CP4BA deployment in cp4ba2 uses dedicated foundational services in cp4ba2.

    After you create the configMap, you can then follow the steps to install a CP4BA starter or production deployment. For more information, see Installing on Red Hat OpenShift.

    When the deployment is complete, the CP4BA deployment uses the foundational services that are defined in the configMap namespace mapping.

  • Using the deployment scripts

    If you use the cluster admin script (cp4a-clusteradmin-setup.sh), the script checks for an existing common-service-maps configMap to see whether the instances of foundational services already in the cluster are dedicated to a Cloud Pak. If the common-service-maps configMap does not exist, the script creates a configMap so that the operator can deploy a dedicated instance. When the script is complete, the CP4BA deployment uses the foundational services that are defined in the configMap namespace mapping.

    Tip: You can check whether a shared instance is installed by running the following command:
    oc get cm ibm-common-services-status -n kube-public -o yaml

    The output of the command shows the status of the installed foundational services:

    apiVersion: v1
    data:
      2201s-iamstatus: Ready
      cp4ba-iamstatus: Ready
      iamstatus: Ready
      ibm-common-services-iamstatus: Ready
    kind: ConfigMap

For more information about installing multiple IBM Cloud Pak foundational services instances in your cluster, see Installing IBM Cloud Pak foundational services in multiple namespaces.
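For example, to create the mapping from the command line and confirm that it is in place, you can apply the earlier configMap example (assuming that you saved it to a file named common-service-maps.yaml):
oc apply -f common-service-maps.yaml
oc get configmap common-service-maps -n kube-public -o yaml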

Upgrade guide builder

You can use the new interactive guide builder to create the steps that you need to upgrade Cloud Pak for Business Automation from 21.0.3 to 22.0.1. Open the interactive guide builder to get started.

Supported operating environments

Software Product Compatibility Report (SPCR)

The system requirements information is available through the SPCR website. You can dynamically generate operating system, prerequisite, server virtualization environment, translation, end of service, and detailed system requirements reports for each release. Learn more...

Note: Support for Red Hat OpenShift Container Platform (OCP) 4.6 and 4.7 is removed. You can now use only OCP 4.8, OCP 4.10, or later.

For more information about the system requirements of the IBM Cloud Pak foundational services, see hardware requirements.

Support for IBM Cloud Satellite

Cloud Pak for Business Automation capabilities can be deployed on IBM Cloud Satellite customer data centers. Deploy and use the Cloud Pak as you would on any other OpenShift cluster. Get started by selecting the On-premises & edge location in Satellite Locations. Learn more...

Support for Red Hat® OpenShift® Service on AWS (ROSA)

Cloud Pak for Business Automation can now be installed on ROSA.

ROSA is hosted on Amazon Web Services (AWS) public cloud and jointly managed by Red Hat and AWS. Before you choose an AWS Storage solution, AWS recommends that you first assess what storage characteristics are appropriate for your applications and business. After you familiarize yourself with AWS Storage, you can then compare your requirements to the AWS Storage services and select the solution that meets your needs. Learn more...

As of June 2022, OpenShift Data Foundation (ODF), formerly known as OpenShift Container Storage (OCS), is not supported on ROSA. You can manually configure ODF, but Red Hat cannot provide support. Support for ODF on ROSA will be available only through managed ODF, which is planned for the second half of 2022.

Warning: If your LDAP and your ROSA cluster are not in the same VPC (virtual private cloud), you might see intermittent connection issues. If your LDAP and your ROSA cluster are in the same VPC but your Cloud Pak for Business Automation custom resource specifies an external LDAP IP address, you might still see intermittent connection issues.
Software dependencies

The IBM Cloud Pak foundational services are upgraded with each Cloud Pak version and interim fix. For more information about what's new in these services, see What's new.

Support for running Linux on IBM Systems

IBM Systems servers take advantage of hardware performance, availability, and reliability features when they run Linux applications.

Linux® on Power®

This version introduces support for Linux on Power (ppc64le). The support works the same as for Linux on Z (s390x). The Cloud Pak operator chooses the CPU architecture on which to deploy an ICP4ACluster based on the type of nodes in the Red Hat OpenShift cluster and the available container images. Document Processing, Content Collector for SAP, and Workflow machine learning features do not provide container images with the ppc64le identifier for the Power architecture.

Linux on Z

Capabilities that did not support Linux on Z in previous versions now do. Automation Document Processing and Workflow machine learning features still do not support Linux on Z.

Database support added across the Cloud Pak
Microsoft SQL Server

Microsoft SQL Server (MSSQL) is now a supported external database in all capabilities of Cloud Pak for Business Automation, except Document Processing.

Support for Microsoft SQL JDBC Driver 10.2 (or higher)

Version 10.2 of the Microsoft JDBC Driver for SQL Server brings several added features, changes, and fixed issues over the previous production release, including several breaking changes that require modification of the data source definitions. If you plan to update your Microsoft SQL JDBC driver to version 10.2 (or higher), you must update the deployment by supplying the appropriate JAR file. When the operator is used to update the deployment, the operator modifies the data source definition to avoid the breaking changes.

For more information about how to provide the JDBC driver files to the operator for deployment, see Preparing customized versions of JDBC drivers and CCSAP libraries.

For more information about the breaking changes from Microsoft, see JDBC Driver 10.2 for SQL Server Released.

Oracle

Oracle is now a supported external database in the Operational Decision Manager capability of Cloud Pak for Business Automation.

MongoDB

Automation Decision Services now supports MongoDB 4.4 in addition to version 4.2.

Open source updates

For security reasons, Log4j is either removed or upgraded to version 2.17.1 or later in all containers that previously included version 1.x.

Install

Pattern installation guides in PDF

If you want to install a production deployment of a single Cloud Pak for Business Automation pattern, you can use a preconfigured guide that includes all of the options for that pattern. Due to the numerous dependencies and shared components of the patterns, it is not always obvious which components a pattern installs. Each preconfigured guide includes all the steps for all of the components that are needed to install a single pattern by using the provided scripts. If you do not want to install an option, ignore the corresponding steps for that option. Learn more...

Starter deployments now use PostgreSQL to improve performance

The database that is used by the Cloud Pak in starter deployments is now PostgreSQL instead of Db2®. OpenLDAP is still set up in a starter deployment for you, and a cp4admin user is created for running LDAP commands. You can add users and update the existing default users in the Red Hat OpenShift console or directly from the LDAP command-line interface (CLI). Learn more...
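The following sketch queries the starter LDAP from the CLI. The deployment name, bind DN, and base DN are placeholders that vary by deployment, so check your cluster for the actual values:
# Hypothetical names: adjust the deployment name, bind DN, and base DN.
oc exec -it deploy/openldap -- ldapsearch -x \
  -D "cn=admin,dc=example,dc=org" -w "$LDAP_ADMIN_PASSWORD" \
  -b "dc=example,dc=org" "(uid=cp4admin)"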

Support global image pull secret to improve consistency across Cloud Paks

Cloud Pak for Business Automation supports the use of a global OpenShift pull secret. You can add the authentication information for the IBM® Entitled Registry (cp.icr.io) to the global secret in your OpenShift cluster and use it for all online deployments (deployments that have a connection to the internet). The admin.registrykey secret from previous versions is no longer needed, so the ibm-entitlement-key secret is the only secret that you require if you do not want to use the global secret. Learn more...
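A minimal sketch of updating the global pull secret with oc (the local file name is illustrative):
# Download the current global pull secret.
oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > dockerconfig.json
# Edit dockerconfig.json to add an auth entry for cp.icr.io with your entitlement key, then upload the merged file.
oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=dockerconfig.json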

Ephemeral storage to optimize your cluster

Ephemeral storage size is now specified for each capability. You can update the custom resource to declare ephemeral storage limits and requests for a container, in the same way that you specify how much CPU and memory a container needs. The emptyDir volume that is created for ephemeral storage must have a size limit (emptyDir.sizeLimit).

If a container exceeds its ephemeral storage limits, the pod for the container is evicted from the node. You can monitor container-specific ephemeral storage usage by finding the node that the pod runs on, and then running a curl command or a jq query to view the metrics data.

jq '.pods[] | select(.podRef.name | contains("<deployment name>")) | .podRef.name, ."ephemeral-storage"'
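For example, the following sketch finds the node that runs the pod and queries the kubelet stats summary for that node; the pod and deployment names are placeholders:
NODE=$(oc get pod <pod name> -o jsonpath='{.spec.nodeName}')
oc get --raw "/api/v1/nodes/${NODE}/proxy/stats/summary" | \
  jq '.pods[] | select(.podRef.name | contains("<deployment name>")) | .podRef.name, ."ephemeral-storage"'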
Default JDBC drivers are provided

If you need specific versions of the JDBC drivers, you now download them from a web server instead of from the operator PVC. The sc_drivers_url parameter specifies the URL of the anonymous web server. Learn more...
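A sketch of referencing a driver web server in the custom resource. The directory layout and patch path are assumptions, so check the documentation topic for the layout that the operator expects:
# Illustrative only: serve the drivers from an anonymous web server, for example
#   http://<server>/jdbc/db2/db2jcc4.jar
# and then reference the server URL in the custom resource:
oc patch icp4acluster <cr-name> --type merge -p '{"spec":{"shared_configuration":{"sc_drivers_url":"http://<server>"}}}'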

Cloud Pak operator PVC requirement removed to reduce set up time

The operator PVCs are no longer needed before you install the Cloud Pak. Due to the removal of the operator PVCs, capabilities no longer store the log files to the default PVC name cp4a-shared-log-pvc. If you need to store the log data from the pods, you must specify a value for the datavolume.existing_pvc_for_logstore parameter for each capability. Learn more...
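A minimal sketch, assuming that the parameter sits under a capability-specific datavolume section of the custom resource; the capability section and PVC names are placeholders:
# Illustrative only: direct a capability's log store to an existing PVC.
oc patch icp4acluster <cr-name> --type merge -p '{"spec":{"<capability>_configuration":{"datavolume":{"existing_pvc_for_logstore":"<your-log-pvc>"}}}}'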

Stand-alone deployment of Process Federation Server and Workplace

You can now perform a dedicated deployment of Process Federation Server and Workplace (as an optional component) on OpenShift without having to deploy a Business Automation Workflow server in the same namespace. A Process Federation Server deployment can be set up to federate traditional (on-premises) Business Automation Workflow servers, Business Automation Workflow on containers servers, and Workflow Process Service servers, as long as they are all in the same namespace. The deployment gives you access to the federated REST API so that you can create your own federated user interface for task workers, and it can also include Workplace to provide an out-of-the-box user interface for task workers. Learn more...

IBM Process Mining

Process Mining is installed independently of the Cloud Pak operator and follows its own release lifecycle. Learn more...

IBM Robotic Process Automation

Robotic Process Automation is installed independently of the Cloud Pak operator and follows its own release lifecycle. Learn more...

New CP4BA FileNet Content Manager operator to ease the operational complexity of running multiple patterns

You can now install a CP4BA FileNet Content Manager deployment in one of two ways: either by using the CP4BA FileNet operator, or by using the Cloud Pak for Business Automation (CP4BA) multi-pattern operator. If you do not want to combine FileNet with other patterns, choose the CP4BA FileNet operator. If you install the content pattern by using the CP4BA multi-pattern operator, the FileNet Content Manager custom resource is controlled by an instance of the CP4BA FileNet operator. In other words, the content pattern is now controlled by the new operator in all instances where it is installed.

Build

Business applications
Configure externally accessible apps
Design publicly accessible business apps that any user can access without authenticating to the system. Learn more...
Securely exchange data between apps and hosted pages
Exchange input and output data between your apps and the page where they are hosted. For example, a contract ID or customer ID that is available from your portal can now be passed to the embedded apps when they start. Learn more...
Decision automations
Manage decision services
  • Create decision services in Brazilian Portuguese.
  • Duplicate any existing decision service to work on a new copy of it. Learn more...
  • Migrate decision services created in an earlier version of Automation Decision Services to make them compatible with the latest version of the product. Learn more...
  • Edit the groupID of decision services and decision artifacts directly in Decision Designer. Learn more...
  • Configure Automation Decision Services to automatically create a remote Git repository and connect any new project to it. Learn more...
Use extended authoring capabilities to build larger decisions
  • Use string interpolation to concatenate string elements in a more readable and convenient syntax. Learn more...
  • Protect your decision tables against editing by others by using the decision table locking facilities. Learn more...
  • An automatic check is performed when you import external libraries into Decision Designer to make sure that value types are properly defined. Learn more...
Infuse decisions with machine learning
  • Use machine learning samples and proofs of concept out of the box by configuring global machine learning providers in the Automation Decision Services admin platform. Learn more...
  • Create predictive models by importing ruleset models directly into Decision Designer. Learn more...
  • Create predictive models by uploading serialized machine learning models from IBM Watson® Machine Learning. Learn more...
Validate decisions
  • Verify that task models produce the results you expect by running them with test data directly in Decision Designer. Learn more...
  • In the redesigned Run tab, a new JSON rich text editor replaces the old text area editor to edit JSON schemas.
Build and execute decision services
  • Check the status of unit tests directly in the Deploy tab in Decision Designer.
  • Configure Maven to connect to Decision Designer and use Decision Designer as a Maven repository. Learn more...
  • Execute decision services in a Java application running in a Kubernetes container by using the execution Java API for Automation Decision Services. Learn more...
  • Select the latest semantic version of a decision service for execution. Learn more...
Document processing
Extract data with better results
  • Field and table data results are improved through better text and checkbox location identification, improved results ranking, and better recognition of table cells without drawn lines. A table column can be populated from data that is contained in a separate column, for example, filling in the PO column from a value in a Description column. Learn more...
  • A document field can be populated from a value that is contained in a table summary. Learn more...
  • Data extraction from identity documents (such as ID cards) is introduced as a feature in 22.0.1; it was previously a technology preview in 21.0.3. Learn more...
Configure and train data extraction more easily in Document Processing Designer
  • You can now define display names for document types or fields in any language. You can also edit aliases, the list of other possible field names, when teaching a data extraction model.
  • The system automatically finds and lists possible field values so that you can quickly teach the field location within a sample document. You can also indicate when a field value is not present in a sample to improve understanding when the sample is ready to train. Learn more...
  • You can fill in ground truth data by drawing a box around the data in the document viewer. Learn more...
  • The test results view is improved to show document-level field and table results. Learn more...
  • You can export an existing project ontology (document and field types) to a local file and import the ontology into a new project. Learn more...
Process text in additional languages

You can now extract and classify data from documents in Dutch, German, and Brazilian Portuguese. Learn more...

Improved usability of applications for human-in-the-loop data verification
  • When you upload documents, you can now choose the document types rather than have the system auto-classify documents. Learn more about choosing the types in single applications, batch applications, and how to configure that option.
  • After documents are finalized, you can view all of the data extraction results and export the extracted data to a local file in JSON and CSV formats. Learn more...
  • Application templates now support Microsoft Word and image file formats, in addition to PDF. Learn more...
  • When a table spans multiple pages, the data is now merged to make viewing and entering table data easier and faster.
General performance is improved with a better throughput of documents

Documents that have a fixed format, such as tax forms, are processed faster and with higher throughput when you identify a document type as a fixed-format document type.

Workflow automations
Develop workflow automations more efficiently
  • Debug service flows and client-side human services more easily by setting breakpoints. Learn more...
  • Work more easily with larger scripts by using the expanded script editing area.
  • Easily change the context of variables between input, output, and private.
Automation services
  • Expose asynchronous workflow automation services that are implemented by processes. Learn more...
  • Develop workflow automations that call automation services that are implemented by processes. Such automation services are invoked asynchronously. Learn more...
  • To build external clients, you can use strongly typed REST APIs to call long-running processes and receive their responses by providing a callback URL. Learn more...
Improve process instance migration
Improve process instance migration by using a migration policy in Business Automation Studio to identify orphaned tokens so that you can delete or move them. You can define this policy either in the IBM Business Automation Studio user interface (Learn more...) or by using a REST API (Learn more...).

For more information, see What's new in IBM Business Automation Workflow 22.0.1.

Use and manage

Case Client supports the platform plug-in

You can access Case Client on desktops that have the case feature and platform plug-in enabled.

Decisions management
Deploying on different environments simultaneously

Tests and simulations can be performed from a Decision Center that is deployed in a different cluster or namespace than the Decision Runner that executes these tests and simulations. Learn more...

Customizing the Decision Center behavior

You can adjust the behavior of Decision Center by defining context parameters in a properties file that is referenced by the configMap. Learn more...

Administer

Operational Decision Manager full support of Zen

Operational Decision Manager now manages authentication through the IBM Cloud Pak® Platform UI front door with Zen roles and permissions. Learn more...