IBM Support

IBM Cloud Pak for Automation 20.0.x Known Limitations

General Page

This web page provides a list of known limitations in Cloud Pak for Automation 20.0.x. Workarounds are provided where possible. This page is subject to updates to keep you current on new or resolved issues.
The heading that follows shows the release number to which these issues apply. You can also find issues in the IBM Knowledge Center at IBM Cloud Pak for Automation: Known Limitations.
IBM Cloud Pak for Automation 20.0.3, interim fix 2 (ICP4A 20.0.3-IF2)
NVIDIA GPU required for Deep Learning in the Automation Document Processing pattern
The table of infrastructure requirements for Automation Document Processing (see Identifying the infrastructure requirements) states that for Deep Learning, "it is highly recommended to use worker nodes with NVIDIA GPU for better and faster results." This note should state that NVIDIA GPU is required for Deep Learning in the Automation Document Processing pattern.
Wrong Kafka topic name in documented procedure
In the documentation for IBM Business Automation Insights, the procedure in Configuring event output to Kafka uses the topic ibm-bai-ingress. If you use this topic, your setup will fail. Instead, use the topic bai-ingress, which is the default value that is shown in Environment variables for Apache Kafka.
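For illustration, the difference is only the topic value. The following sketch uses a placeholder key name rather than the documented variable; see Environment variables for Apache Kafka for the exact name:

# Sketch only: the key name is a placeholder, not the documented variable.
kafka:
  topic: bai-ingress        # correct default topic
  # topic: ibm-bai-ingress  # value shown in error in the procedure; this topic causes the setup to fail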
Limited lengths for custom resource and instance names
Be careful with the lengths of your custom resource (CR) and instance names. For example, if your deployment contains Workflow Authoring, the total length of the CR name must not exceed 13 characters. If you enable the Workflow Authoring Lombardi custom XML (workflow_authoring_configuration.lombardi_custom_xml_secret_name), your CR name must not exceed 7 characters. Otherwise, the Workflow Authoring deployment fails.
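For illustration, the following is a minimal sketch of a CR header with names inside these limits. The ICP4ACluster kind and apiVersion are assumptions based on the 20.0.x operator, so verify them against your own deployment:

apiVersion: icp4a.ibm.com/v1
kind: ICP4ACluster
metadata:
  name: wfauth-demo   # 11 characters, within the 13-character limit for Workflow Authoring
  # name: wfa-cr      # 6 characters, needed when lombardi_custom_xml_secret_name is enabled (limit is 7 characters)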
Special characters in oidcClientPassword or Resource Registry reader password can cause Workflow issues
Business Automation Workflow might not work after an enterprise deployment on Red Hat OpenShift on IBM Cloud if there is a special character in the oidcClientPassword defined in the Process Federation Server admin secret, or there is a special character in the Resource Registry reader password.
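As a precaution, use only alphanumeric characters in those passwords. The following minimal sketch of a Process Federation Server admin secret assumes the secret name and layout; only the oidcClientPassword key is taken from this note, so check which secret your CR actually references:

apiVersion: v1
kind: Secret
metadata:
  name: ibm-pfs-admin-secret          # assumed name; use the secret that your CR references
type: Opaque
stringData:
  oidcClientPassword: MyPassw0rd123   # alphanumeric only; avoid special characters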
Workflow Authoring documentation correction
In the IBM Business Automation Workflow Authoring parameters topic, the default value for the workflow_authoring_configuration.image.repository parameter is shown as workflow-server. However, it should be <path>/workflow-authoring.
 
 For container images, the default value follows the pattern <path>/<container name>, where the following conditions apply:
  • By default, <path> is the location of the container image on the IBM Entitled Registry.
  • If sc_image_repository is set, <path> is the value that is provided.
For example, if sc_image_repository is myimagerepos.com/cp4ba/, the repository value that is used for any given container image is myimagerepos.com/cp4ba/<container image name>.
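For illustration, a minimal sketch of the relevant CR fragment follows. The tag value is illustrative, and the repository line simply spells out what the default pattern resolves to when sc_image_repository is set as in the example above:

shared_configuration:
  sc_image_repository: myimagerepos.com/cp4ba/
workflow_authoring_configuration:
  image:
    # The default <path>/workflow-authoring pattern resolves, with the
    # sc_image_repository value above, to myimagerepos.com/cp4ba/workflow-authoring
    repository: myimagerepos.com/cp4ba/workflow-authoring
    tag: "20.0.3"   # illustrative tag; use the tag that matches your installed version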
Business Automation Studio documentation corrections
1) Steps 1, 2, and 4 in Configuring a remote server are incorrect.
For step 1, the secret name was revised to distinguish the secret for IBM Workflow Server or IBM Workflow Center.
For step 2, the username and password were revised to be the Workflow Authoring username and password, not deadmin.
For step 4, the Workflow Server root CA certificate was added to the Workflow Authoring trust list.
Also, the topic was retitled to "Optional: Customizing Workflow Server to connect to Workflow Authoring".
2) In the Upgrading IBM Business Automation Studio topic, step 1 a) is incorrect. To delete a JMS stateful set, run the following command:
oc delete sts <jms statefulset name>
Business Performance Center documentation correction
Business Performance Center does not support IBM Business Automation Content Analyzer data kinds. In the topic Business Performance Center in the documentation for IBM Cloud Pak for Automation V20.0.x, “Table 1. Supported data kinds in Business Performance Center” states in the “Content” column that IBM Business Automation Content Analyzer data kinds are supported. This entry is incorrect. However, the center does support data kinds for IBM FileNet Content Manager.
Automation Decision Services (ADS) capability
If your deployment uses the OpenShift image registry (repository: image-registry.openshift-image-registry.svc:5000) or a private image registry, you must update your custom resource by adding the following entries under shared_configuration and ads_configuration, respectively:
 
shared_configuration:
  images:
    keytool_job_container:
      repository: image-registry.openshift-image-registry.svc:5000/<your project>/dba-keytool-jobcontainer
      tag: "20.0.3-IF002"
    keytool_init_container:
      repository: image-registry.openshift-image-registry.svc:5000/<your project>/dba-keytool-initcontainer
      tag: "20.0.3-IF002"
    umsregistration_initjob:
      repository: image-registry.openshift-image-registry.svc:5000/<your project>/dba-umsregistration-initjob
      tag: "20.0.3-IF002"
ads_configuration:
    rr_integration:
      image:
        repository:  image-registry.openshift-image-registry.svc:5000/<your project>/ads-rrintegration
        tag: 20.0.3-IF002
    front:
      image:
        repository:  image-registry.openshift-image-registry.svc:5000/<your project>/ads-front
        tag: 20.0.3-IF002
    download_service:
      image:
        repository:  image-registry.openshift-image-registry.svc:5000/<your project>/ads-download
        tag: 20.0.3-IF002
    rest_api:
      image:
        repository:  image-registry.openshift-image-registry.svc:5000/<your project>/ads-restapi
        tag: 20.0.3-IF002
    credentials_service:
      image:
        repository:  image-registry.openshift-image-registry.svc:5000/<your project>/ads-credentials
        tag: 20.0.3-IF002
    git_service:
      image:
        repository:  image-registry.openshift-image-registry.svc:5000/<your project>/ads-gitservice
        tag: 20.0.3-IF002
    parsing_service:
      image:
        repository:  image-registry.openshift-image-registry.svc:5000/<your project>/ads-parsing
        tag: 20.0.3-IF002
    run_service:
      image:
        repository:  image-registry.openshift-image-registry.svc:5000/<your project>/ads-run
        tag: 20.0.3-IF002
    embedded_build_service:
      image:
        repository:  image-registry.openshift-image-registry.svc:5000/<your project>/ads-build
        tag: 20.0.3-IF002
    embedded_runtime_service:
      image:
        repository:  image-registry.openshift-image-registry.svc:5000/<your project>/ads-runtime
        tag: 20.0.3-IF002
    decision_runtime_service:
      image:
        repository:  image-registry.openshift-image-registry.svc:5000/<your project>/ads-runtime
        tag: 20.0.3-IF002
Automation Decision Services capability
When you install the Automation Decision Services (ADS) pattern with Decision Designer enabled and Decision Runtime disabled, the installation by the operator fails. The operator logs show the following error:
FAILED! => {"msg": "The task includes an option with an undefined variable. The error was:
'ads_runtime_service_tag_explicitly_set' is undefined
The error appears to be in '/opt/ansible/roles/ADS/tasks/install/main.yml': line 46, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
# set tags or digests for runtime
- set_fact:
  ^ here"}
The workaround is to also request installation of the Decision Runtime by activating the ads_runtime component of the ADS pattern:
spec:
  shared_configuration:
    sc_deployment_patterns: decisions_ads
    sc_optional_components: ads_designer, ads_runtime
Business Automation Studio, Business Automation Workflow, or Automation Workstreams Services capability

If the Business Automation Studio (BAS) bootstrap job is slow to start, the BAS deployment and runtime pod might not be deployed on the first reconciliation loop. You might experience issues such as a constant error pop-up when you access the BAS, Business Automation Workflow, or Automation Workstreams Services capability. You can wait for the BAS bootstrap job to finish on a later reconciliation run, after which the BAS deployment and pod are installed and started.

If the Workflow Authoring pods start before the Workflow Authoring database job finishes, you might need to restart the Workflow Authoring pods manually.

You can also try the following workarounds:

  • If you are using self-signed certificates, make sure that you followed the post-deployment steps by visiting all the links and accepting the self-signed certificates. For example, see Verifying Business Automation Workflow Runtime.
  • If the cr_name-workflow-authoring-baw-db-init-job-hash_value pod isn't in the Completed state for longer than 30 minutes, you can restart all the pods that are named cr_name-workflow-authoring-baw-server-number by deleting them with the command oc delete pod -l app.kubernetes.io/component=server,app.kubernetes.io/name=workflow-server. Wait until the pods restart and are back in the Ready state before you try again.
Recovering from failed upgrade of the Cloud Pak operator from 20.0.3.x to 21.0.1

If you deployed CP4BA 20.0.3.x by using OperatorHub and the CP4BA Operator failed to automatically upgrade to 21.0.1, you can use the following steps to revert the installed operator back to 20.0.3:

  1. In the OpenShift Container Platform console, click Operators, and then select Installed Operators. Uninstall both IBM Cloud Pak for Automation and IBM Cloud Pak for Business Automation.
  2. Create imagePullSecrets (admin.registrykey) in the openshift-marketplace namespace, and then run the following command:
    kubectl patch sa default -n openshift-marketplace -p '{"imagePullSecrets": [{"name": "admin.registrykey"}]}'
    
    
  3. Apply the CatalogSource depending on whether your installation is 20.0.3 or 20.0.3.2:

    CatalogSource YAML for version 20.0.3:

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: ibm-cp4a-operator-catalog
      namespace: openshift-marketplace
    spec:
      displayName: ibm-cp4a-operator
      publisher: IBM
      sourceType: grpc
      image: cp.icr.io/cp/cp4a/ibm-cp-automation-catalog@sha256:92771e51a9390b0c70278673512ca8e89f9a0c600d5cdf7f9d39de0902d1be4d
      updateStrategy:
        registryPoll:
          interval: 45m
    
    

    CatalogSource YAML for version 20.0.3.2:

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: ibm-cp4a-operator-catalog
      namespace: openshift-marketplace
    spec:
      displayName: ibm-cp4a-operator
      publisher: IBM
      sourceType: grpc
      image: cp.icr.io/cp/cp4a/ibm-cp-automation-catalog@sha256:e2358c099efa88ba0e71c5e4b503f3fda86413b4f6e0a97838bd9d6e13c84421
      updateStrategy:
        registryPoll:
          interval: 45m
    
    
  4. In the OpenShift Container Platform console, click Operators to open the OperatorHub, and then select Provider Type > ibm-cp4a-operator to display the IBM Cloud Pak for Automation catalog item.
  5. Click the IBM Cloud Pak for Automation catalog item, and then click Install.
    1. In the Create Operator Subscription wizard, select the namespace that you created and prepared for the operator, and make sure to use the Manual approval strategy.
    2. Click Install, and then select Approve when manual approval is required.
  6. Verify the deployment by checking that all the pods are running. The operator deployment now has version 20.0.3.x.

Later, if you want to upgrade to the IBM Cloud Pak for Business Automation Operator 21.0.1, use the following steps:

  1. In the OpenShift Container Platform console, click Operators to open Installed Operators, and then select IBM Cloud Pak for Business Automation.
  2. Select the Subscription tab, and then change Approval from Manual to Automatic.
  3. Download the appropriate repository and go to the cert-kubernetes directory. The following command clones the latest version of the repository, which provides the 21.0.1 content:
    git clone https://github.com/icp4a/cert-kubernetes.git
    
    
  4. Change directory to the scripts folder where you downloaded the repository:
    cd cert-kubernetes/scripts
    
    
  5. Run the upgrade Operator script in the command window:
    ./upgradeOperator.sh -i <registry_url>/icp4a-operator:21.0.1 -p '<my_imagepullsecret_name>' -a accept -n <namespace> 
  6. When the upgrade operator script completes, go to the OpenShift Container Platform console and click Operators to open Installed Operators. You see the following statuses:

    - IBM Cloud Pak for Automation's status is Pending

    - IBM Cloud Pak for Business Automation's status is Failed

  7. Uninstall IBM Cloud Pak for Automation. IBM Cloud Pak for Business Automation then installs successfully.

[{"Line of Business":{"code":"LOB45","label":"Automation"},"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Product":{"code":"SS2JQC","label":"IBM Cloud Pak for Automation"},"ARM Category":[{"code":"a8m0z0000001gWWAAY","label":"CloudPak4Automation Platform"}],"Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"20.0.0;20.0.1;20.0.2;20.0.3"}]

Document Information

Modified date:
13 April 2022

UID

ibm16380350