Known limitations and issues

Before you use IBM Cloud Pak® for Business Automation, make sure that you are aware of the known limitations.

For the most up-to-date information, see the support page Cloud Pak for Business Automation Known Limitations, which is regularly updated. For known limitations of the Cloud Pak foundational services, see Known issues in foundational services.

The following sections provide the known limitations by Cloud Pak capability.

Multi-zone region (MZR) storage support on Red Hat OpenShift Kubernetes Service (ROKS)

Table 1. Known issues and limitations of Portworx on ROKS
Limitation Description
When one zone is unavailable, it might take up to a minute to recover. If all worker nodes in a single zone are shut down or unavailable, it can take up to a minute before the Cloud Pak applications and services are accessible again. For example, accessing ACCE from CPE can take a minute to respond.

LDAP failover

LDAP failover is not supported. You cannot configure multiple LDAP servers or LDAP failover in the custom resource (CR) file template.

IBM Automation Document Processing

Table 2. Known issues and limitations of Document Processing
Limitation Description

If you want to delete and recreate a Document Processing project to start over, you might encounter errors after recreating the project. This occurs because the recreated project is out of sync with the Git repository.

Workaround

To avoid errors, follow these steps to delete and recreate a project:

  1. In Business Automation Studio, delete your project.
  2. Go to the remote Git server that is connected to your Document Processing Designer and delete the project repository for the project that you deleted in step 1.
  3. In Business Automation Studio, create your project again with the same name as the previous project.

For more information, see Saving an ADP Project fails with status code 500 service error.

Accessibility of the Verify client graphical user interface.

A user who relies on keyboard navigation (for example, a visually impaired user) and uses the Firefox browser to reach the Verify client user interface, and then tabs into it to access the various zones, cannot tab out again.

You reach the Verify client user interface in different ways, depending on your application. For example, for the single document application, you tab into the content list on the start page, select a document from the list by pressing the Tab or Arrow keys, tab to the context menu icon (the three dots), select the icon by pressing the space bar or the Enter key, and finally press the Arrow keys to select Review document.

In a single or batch document processing application, the Fit to height option does not fit to height properly.

When you view a document in a single or batch document processing application, if you rotate the document clockwise or counterclockwise and select the "Fit to height" option, the size does not change. The limitation applies when you fix classification or data extraction issues, and in portrait view in the modal viewer dialog.
Members of the Classification Workers-<projectName> team might see a processing error message when the document is finalized. In single document processing applications, members of the Classification Workers-<projectName> team might see a message that says "Processing error" in the document processing progress indicator. This is because these users do not have permission on finalized documents in the Content Platform Engine. In this case, close the progress indicator and refresh the content list table.
The ViewONE service is not load-balanced.

Because of a limitation when you edit a document to fix classification issues in batch document processing applications, the icp4adeploy-viewone-svc service session affinity is configured as ClientIP, and the session must be sticky.
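You can check the current setting with the oc CLI; this is a minimal sketch that assumes the default deployment name prefix icp4adeploy, so adjust the service name and namespace to your environment:

# Print the session affinity of the ViewONE service; the expected value is ClientIP
oc get svc icp4adeploy-viewone-svc -n <cp4ba-namespace> -o jsonpath='{.spec.sessionAffinity}{"\n"}'

Do not change the session affinity, because the viewer relies on sticky sessions.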

For field types Postal mail address and Address information: Extraction of multiple addresses within a single block is not supported. You cannot define your own composite address field types because you cannot define multiple postal mail address sub-fields for a single address block.
You cannot upgrade previous address field types (such as address block or US mail address) to the new Postal mail address and Address information field types. Address field types that you defined in earlier versions remain the same. If you want to use the new address functionality, you must deploy a new key class based on the new address field types (Postal mail address and Address information).
Cannot upload or process Microsoft Word documents. In a FIPS-enabled deployment, you cannot upload or process Microsoft Word documents that have the .doc or .docx format.
To use a FIPS-compliant TensorFlow version, NVIDIA CUDA drivers 11.2 are required. However, IBM Cloud Public (ROKS) GPU does not support CUDA 11.2 because it uses Red Hat Enterprise Linux (RHEL) 7. The current version of the NVIDIA Operator on RHEL 7 is 1.5.2, and it cannot be upgraded to the latest version (1.6.x), which provides the CUDA 11.2 drivers, because that version does not support RHEL 7. The NVIDIA Operator that is installed from the Operator Hub does not run on GPU RHEL 7 bare metal servers, because only the 1.6.x version is available to deploy.
If you are using applications based on a previous version of the Batch Document Processing Template or Document Processing Template, upgrading those applications to use the latest document processing toolkit might cause breaking changes. Your key value pair fields might not display correctly.

Workaround

To display the key value pair (KVP) fields correctly in the Data extraction issues page, update the settings for your Batch Document Processing Template or Document Processing Template application in Application Designer.

  1. Log in to Business Automation Studio, and from the menu, click Design > Business applications.
  2. Create a version for your application:
    1. Find your application and click Open to open your application.
    2. Click the Versions icon at the top and select Create a version.
    3. Go back to Business Automation Studio. In Applications, double-click the tile for your application and select a version to export:
      • If you want to export the most recent version, click the three dot menu at the top and click Export.
      • If you want to export a specific version, click the three dot menu in the row for the version, and click Export.
    4. Click Export this version to be shared or backed up (.twx) and save the file to your machine. If you have more than one application to export, repeat the steps for each application in your authoring environment.
  3. Log in to the destination Business Automation Studio and from the menu, click Design > Business applications. Click Import and import your .twx application file. After importing, find the tile for your application and click the icon to accept any available updates.
  4. Open your application and go to VerifyPage for a document processing application, or the Verify data page for a batch document processing application.
    1. Click the Properties Settings icon. Click Switch to advanced properties and open Configuration > General.
    2. Update the environment variable settings by using the Select... button, as follows:
      • Content project ID: DbaProjectId
      • Content Analyzer project ID: ACAProjectId
    3. Click Done, then Finish editing.
  5. For a batch document application, go to the Classify page and repeat steps 4.a, 4.b and 4.c.
  6. Go to Application project settings > Environment Variables.
    1. Update evDbaProjectId to the current project ID.
    2. Update evObjectStoreName to the current object store name.
    3. Create the variable idTokenSupport and give it a value of GRAPHQL_APP_RESOURCE,VIEW_APP_RESOURCE,CA_APPRESOURCE.
  7. Click Finish editing, go back to the application page, and click Preview. Ignore any alert about folder access for a document processing application, as the following steps correct the problem.
  8. Close the preview and exit your application.
  9. Create a version for your application and export it: repeat steps 2.a to 2.c of this procedure and when you select a version to export, click the three dot menu next to the most recent version, select Export this version to be published (.zip), and click Export. Save the file to your machine.

    If you have more than one application to export, repeat the steps for each application in your authoring environment.

  10. Import the file to your application service connections in IBM Business Automation Navigator. For more information, see Moving your application to runtime.
Content Analyzer supports Horizontal Pod Autoscaler (HPA). However, a known issue exists with HPA that causes the flapping effect (frequent scaling up and down). This issue is fixed in Kubernetes version 1.21, which is supported in OCP 4.8. You must use OCP 4.8 to avoid flapping with HPA.
Data extraction from complex tables is not fully supported. While data extraction from simple tables is fully supported, some limitations exist for complex tables, for example when you extract data from the summary section of a table, or when watermarks, text, or other interfering elements exist in your documents. For the full list of supported tables, and examples of the types of tables that have limitations, see Supported tables for extraction and tables with limited support.
Some checkboxes are not detected. In 21.0.3, checkbox detection is improved by leveraging the output of Object Detection and adding an additional detection layer. However, limitations remain for some types of checkboxes, for example if they are too small, overlapping, or improperly shaped. For more information and examples of checkboxes that are not detected, see Limitations for checkbox detection in 21.0.3.
Problems accessing the application configurator after the initial configuration

When you create an application in Business Automation Studio, an application configurator displays where you enter configuration parameters for the application.

However, after the application is created you cannot access the same configurator. As a result, you cannot change the configuration settings the same way.

Workaround

If you want to update application settings, you can use the view configurator from inside Application Designer.

In the Batch Document Processing template, use the view configurator for the Verify Data page.

In the Document Processing template, use the view configurator for the VerifyPage.

One project database is provided in a starter deployment. By design, one project database is configured as part of the starter pattern for Document Processing. As a result, only one Document Processing project can be created in a starter deployment.

IBM Automation Decision Services

For more information, see Known limitations.

IBM Business Automation Studio and IBM Business Automation Application Engine

Table 3. Limitations of Business Automation Studio and Application Engine
Limitation Description
Process applications from Business Automation Workflow do not appear in Application Designer. Sometimes the app resources of the Workflow server do not appear in Studio when you deploy Workflow server instances, Studio, and Resource Registry in the same custom resource YAML file.

If you deployed Business Automation Studio with the Business Automation Workflow server in the same custom resource YAML file, and you do not see process applications from Business Automation Workflow server in Business Automation Studio, restart the Business Automation Workflow server pod.
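For example, with the oc CLI you can restart the pod by deleting it so that its controller re-creates it; this sketch assumes that the Workflow server pod names contain "baw", which can differ in your deployment:

# Find the Business Automation Workflow server pod
oc get pods -n <cp4ba-namespace> | grep baw

# Delete the pod; it is re-created automatically
oc delete pod <baw-server-pod-name> -n <cp4ba-namespace>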

The Business Automation Workflow toolkit and configurators might not get imported properly. When you install both Business Automation Workflow on containers and Business Automation Studio together, the Business Automation Workflow toolkit and configurators might not get imported properly. If you don't see the Workflow Services toolkit, the Start Process Configurator, or the Call Service Configurator, manually import the .twx files by downloading them from the Contributions table inside the Resource Registry section of the Administration page of Business Automation Studio.
Kubernetes known issue #68211 (modified subpath configmap mount fails when the container restarts). Business Automation Studio related pods go into a CrashLoopBackOff state during the restart of the docker service on a worker node.

If you use the kubectl get pods command to check the pods when a pod is in the CrashLoopBackOff state, you get the following error message:

Warning Failed 3m kubelet, <IP_ADDRESS>
Error: failed to start container: 
Error response from daemon: 
OCI runtime create failed: container_linux.go:348: 
starting container process caused "process_linux.go:402: container init caused \"
rootfs_linux.go:58: 
   mounting \\\"/var/lib/kubelet/pods/XXXXX/volume-subpaths/key-trust-store/ibm-dba-ums/2\\\" 
   to rootfs \\\"/var/lib/docker/overlay2/XXXXXXX/merged\\\" 
   at \\\"/var/lib/docker/overlay2/XXXXXXX/merged/opt/ibm/wlp/usr/shared/resources/security/keystore/jks/server.jks\\\" 
   caused \\\"no such file or directory\\\"\"": unknown

To recover a pod, delete it in the OpenShift console and create a new pod.
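You can also recover the pod from the command line; this sketch assumes you know the namespace of your deployment:

# List pods that are stuck in the CrashLoopBackOff state
oc get pods -n <cp4ba-namespace> | grep CrashLoopBackOff

# Delete the affected pod; its controller creates a new pod
oc delete pod <pod-name> -n <cp4ba-namespace>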

To use IBM Business Automation Application Engine (Application Engine) with Db2® for High Availability and Disaster Recovery (HADR), you must have an alternative server available when Application Engine starts.

Application Engine depends on the automatic client reroute (ACR) of the Db2 HADR server to fail over to a standby database server. You must have a successful initial connection to that server when Application Engine starts.

IBM Resource Registry can get out of sync.

If you have more than one etcd server and the data gets out of sync between the servers, you must scale to one node and then scale back to multiple nodes to synchronize Resource Registry.
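A minimal sketch of that scale-down and scale-up, assuming that Resource Registry is backed by a StatefulSet named <instance_name>-dba-rr and that the operator does not immediately reconcile the replica count (if it does, change the replica size in the custom resource instead):

# Scale Resource Registry down to a single etcd member
oc scale statefulset <instance_name>-dba-rr --replicas=1 -n <cp4ba-namespace>

# After the remaining pod is Ready, scale back to the original odd number of replicas
oc scale statefulset <instance_name>-dba-rr --replicas=3 -n <cp4ba-namespace>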

After you create the Resource Registry, you must keep the replica size.

Because of the design of etcd, changing the replica size can cause data loss. If you must set the replica size, set it to an odd number. If you reduce the pod size, the pods are deleted one by one to prevent data loss and the possibility that the cluster gets out of sync.
  • If you update the Resource Registry admin secret to change the username or password, first delete the instance_name-dba-rr-random_value pods so that Resource Registry picks up the updates (see the example after this list). Alternatively, you can apply the update manually with etcd commands.
  • If you update the Resource Registry configurations in the icp4acluster custom resource instance, the update might not affect the Resource Registry pod directly. It affects the newly created pods when you increase the number of replicas.
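As an illustration of the first point, deleting the Resource Registry pods might look like the following sketch, assuming the pods follow the <instance_name>-dba-rr-<random_value> naming that is shown above:

# List the Resource Registry pods
oc get pods -n <cp4ba-namespace> | grep dba-rr

# Delete each pod so that it restarts and picks up the updated admin secret
oc delete pod <instance_name>-dba-rr-<random_value> -n <cp4ba-namespace>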

After you deploy Business Automation Studio or Application Engine, you cannot change the Business Automation Studio or Application Engine admin user.

 

Because of a Node.js server limitation, Application Engine trusts only root CAs.

If an external service is used and its certificate is signed by another root CA, you must add that root CA as trusted instead of the service certificate. A quick way to check a certificate's issuer is shown after the following list.
  • The certificate can be self-signed, or signed by a well-known root CA.
  • If you are using a depth zero self-signed certificate, it must be listed as a trusted certificate.
  • If you are using a certificate that is signed by a self-signed root CA, the self-signed CA must be in the trusted list. Using a leaf certificate in the trusted list is not supported.
  • If you are adding the root CA of two or more external services to the Application Engine trust list, you can't use the same common name for those root CAs.
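To check where a certificate comes from, you can compare its issuer and subject with openssl; the following sketch is only a local inspection and uses placeholder file and host names:

# Print the issuer and subject; identical values usually indicate a self-signed certificate
openssl x509 -in <service-certificate>.pem -noout -issuer -subject

# Show the certificate chain that a running endpoint presents, to identify the root CA to trust
openssl s_client -connect <external-service-host>:443 -showcerts </dev/null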

The Java™ Message Service (JMS) statefulset doesn't support scaling.

You must keep the replica size of the JMS statefulset at 1.

IBM FileNet® Content Manager

Table 4. Limitations of IBM FileNet Content Manager
Limitation or issue Description
A smaller number of indexing batches than configured leads to a noticeable degradation in the overall indexing throughput rate. Because obsolete Virtual Servers in the Global Configuration Database (GCD) are not automatically cleaned up, the Virtual Server count can be higher than the actual number of CE instances with dispatching enabled. That inflated number results in a smaller number of concurrent batches per CSS server, which negatively affects indexing performance.

For more information and to resolve the issue, see Content Platform Engine uneven CBR indexing workload and indexing degradation.

Limitation of Google ID in Edit Service on an External Share desktop. In external share deployments that use Google ID for identity provider authentication, issues can occur when you use the Edit Service to edit or add Microsoft documents. The issue causes the login to fail.
Downloading just Jace.jar from the Client API Download area of ACCE fails; the result is the text "3.5.0.0 (20211028_0742) x86_64". When an application such as ACCE is accessed through Zen, the application and certain operator-controlled elements in Kubernetes need additional logic to support embedded or "self-referential" URLs. The single file download in the Client API Download area of ACCE uses self-referential URLs, and the additional logic is missing. To avoid the self-referential URLs, download the whole Client API package that contains the required file instead of an individual file, and then extract the individual file from the package.
Queries that retrieve group hierarchies by using the SCIM Directory Provider might fail if one of the groups in the hierarchy contains a space or another character that is not valid in an HTTP URL. The problem can occur when you search for users or groups and the search tries to retrieve the groups that a group belongs to. If one of the groups in this chain contains a space or another character that is not valid in an HTTP URL, the search might fail.
Multiple LDAPs configured in IAM might result in incorrect LDAP to SCIM attribute mappings. If multiple LDAPs are configured in IAM, create a custom SCIM attribute map for the LDAP servers that are used in FNCM. Otherwise, an incorrect mapping might result. To learn more about how to review and change this mapping, see Updating SCIM LDAP attributes mapping.
LDAP to SCIM attribute mapping might not be correct. The default LDAP to SCIM attribute mapping that is used by IAM might not be correct. In particular, TDS/SDS LDAP might have incorrect mappings for the group attributes objectClass and members. To learn more about how to review and change this mapping, see Updating SCIM LDAP attributes mapping.
External Share does not work when using Zen and IAM for authentication. Because IAM currently does not allow authentication with external identity providers, the External Share feature does not work in a Zen-enabled environment. If you want to use External Share, deploy the content pattern with Zen disabled.
When you use the SCIM Directory Provider to perform queries for a user or group with no search attribute, all users and groups are returned rather than none. Queries without a search pattern are treated as a wildcard rather than as a restriction to return nothing.
Poor performance or timeouts might result if the CPE System user belongs to a large group hierarchy that contains many other users or groups. In general, large group hierarchies might result in slow SCIM queries or even timeouts when Content Platform Engine authorizes access to documents or objects. This is especially true for the CPE System user. Therefore, when you determine the user ID to use as the CPE System user, choose one that belongs to few groups and whose groups have few members.
Users might be unable to log in using an email address as a username if the LDAP server configured in IAM does not have a mail attribute that is populated with the same value.

This problem can arise if the IAM LDAP to SCIM mapping contains a value for the SCIM userName attribute that is an email address, but no value is mapped to the SCIM email attribute. For example, with an MSAD LDAP server, the UserPrincipalName attribute can be populated with the user's email address while no value is populated for the mail attribute. With IAM's default LDAP to SCIM map, the LDAP UserPrincipalName value is mapped to the SCIM userName field and the LDAP mail value is mapped to the SCIM email field.

If the user logs in with a userName that looks like an email address, the CPE server attempts to retrieve the user's information from IAM through a SCIM request. Because the userName looks like an email address, the SCIM query uses the SCIM email field. Because the SCIM email field is mapped from the underlying LDAP mail field, if there is no value in the mail field, the SCIM query fails and the user gets an authentication failure.

The workaround for this scenario is to configure IAM's SCIM attribute mapping to also map the LDAP UserPrincipalName attribute to the SCIM email attribute. CPE SCIM queries then succeed because a value is present for the SCIM email attribute, and the user can log in. You can configure the IAM SCIM attribute map as described in Updating SCIM LDAP attributes mapping.

IBM Business Automation Navigator

Table 5. Limitations of IBM Business Automation Navigator
Limitation or issue Description
Resiliency issues can cause lapses in the availability of Workplace after a few weeks. This behavior might be caused by problems with the Content Platform Engine (cpe) pod. Use the following mitigation steps (a sample check follows the list):
  • Ensure that cpe is deployed in a highly available setup, with at least two replicas.
  • Monitor the cpe pod and restart if issues occur.
  • On the cpe pod, consider setting the disable_fips production parameter to "true" for any environments where FIPS is not required.
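For example, assuming the CPE deployment is named icp4adeploy-cpe-deploy (adjust the name and namespace to your environment), the replica check and restart might look like the following sketch:

# Check how many CPE replicas are configured (at least two are recommended)
oc get deployment icp4adeploy-cpe-deploy -n <cp4ba-namespace> -o jsonpath='{.spec.replicas}{"\n"}'

# Restart the CPE pods if issues occur
oc rollout restart deployment/icp4adeploy-cpe-deploy -n <cp4ba-namespace>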

IBM Business Automation Insights

  • Table 6 lists the limitations that apply whether IBM Business Automation Insights is deployed to a Kubernetes cluster or to a server outside of a Kubernetes cluster.
  • Table 7 lists the limitations that apply only to IBM Business Automation Insights for a server.
  • Table 8 lists the limitations that apply only to Kubernetes deployments of IBM Business Automation Insights.
Table 6. Limitations common to Kubernetes and non-Kubernetes deployment
Limitation or issue Description
Alerts
Business Performance Center: You cannot create an alert for a period KPI if it contains a group. If you want to create an alert for a period KPI, go to the Monitoring tab and remove the Group by keyword. Then, go to the Thresholds tab to create one or more alerts.
Kibana: In the Kibana graphical user interface, the Alerting menu item for monitoring your data and automatically sending alert notifications is present, but the underlying feature is not enabled.
Business Performance Center

Because Business Performance Center uses Elasticsearch as its database, approximation problems can occur with aggregations of numbers greater than 2^53 (that is, about 9 * 10^15). See the Limits for long values section of the Aggregations page of the Elasticsearch documentation.

When you create a metric, aggregations (minimum, maximum) are not supported on data of type double. Use a float instead.

No Business Automation Insights support for IBM Automation Document Processing. The integration between IBM Automation Document Processing (ADP) and Business Automation Insights is not supported. When you deploy or configure the IBM Cloud Pak for Business Automation platform, select the Business Automation Insights component together with patterns that are supported by Business Automation Insights, such as workflow (Business Automation Workflow) or decisions (Operational Decision Manager), not only with document processing (IBM Automation Document Processing).
Flink jobs might fail to resume after a crash. After a Flink job failure or a machine restart, the Flink cluster might not be able to restart the Flink job automatically. For a successful recovery, restart Business Automation Insights. For instructions, see Troubleshooting Flink jobs.
Case event emitter (ICM). You can configure a connection to only one target object store. The Case event emitter does not support multiple target object stores.
Elasticsearch indices

Defining a high number of fields in an Elasticsearch index can lead to a so-called mappings explosion, which can cause out-of-memory errors and situations that are difficult to recover from. The maximum number of fields in Elasticsearch indices that are created by IBM Business Automation Insights is set to 1000. Field and object mappings, and field aliases, count toward this limit. Ensure that the documents that are stored in Elasticsearch indices do not cause this limit to be reached.

Event formats are documented in Reference for event emission.

For Operational Decision Manager, you can configure event processing to avoid the risk of mappings explosion. See Operational Decision Manager event processing walkthrough.
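To check how close an index is to the 1000-field limit that is described above, you can count the mapped fields; this is a rough sketch that assumes you can reach the Elasticsearch endpoint with curl and have jq available:

# Count mapping entries that declare a type, as an approximate field count for the index
curl -sk -u <elasticsearch-user>:<password> "https://<elasticsearch-host>:9200/<index-name>/_mapping" | jq '[.. | objects | select(has("type"))] | length'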

In the BPEL Tasks dashboard, the User tasks currently not completed widget does not display any results. The search that is used by the widget does not return any results because it uses an incorrect filter for the task state.

To avoid this issue, edit the filter in the User tasks currently waiting to be processed search. Set the state filter to accept one of the following values: TASK_CREATED, TASK_STARTED, TASK_CLAIM_CANCELLED, TASK_CLAIMED.

Historical Data Playback REST API The API plays back data only from closed processes (completed or terminated). Active processes are not handled.

When you upgrade Business Automation Insights, the event forwarder job fails.

Restart the event forwarder job using the following command:

oc get job iaf-insights-engine-event-forwarder -o json | jq 'del(.spec.selector)' | jq 'del(.spec.template.metadata.labels)' | oc replace --force -f -

For further instructions, see Troubleshooting Apache Flink jobs.

Table 7. Limitations to a single server deployment of IBM Business Automation Insights
Limitation or issue Description
Release scope IBM Business Automation Insights for a server is delivered with no new features compared with Cloud Pak for Business Automation 20.0.3. For more information, see IBM Business Automation Insights for a server.

The same applies to Business Performance Center.

Docker compose IBM Business Automation Insights does not deploy to a single server with docker-compose 1.28.0 and later, but a workaround is documented in Patching for docker-compose 1.28 and later.
Elasticsearch and Kafka You can use the embedded Confluent Kafka distribution or an external Kafka installation but you can use only embedded Elasticsearch.
Security of communications to Kafka When support for the embedded Kafka server is disabled, the connection to the external Kafka server is authenticated only with the SASL_SSL protocol and custom events are not supported.
In Case dashboards, elapsed time calculations do not include late events: Average elapsed Time of completed activities and Average elapsed time of completed cases widgets. Events that are emitted after a case or activity completes are ignored.
Apache ZooKeeper The version of ZooKeeper that is bundled with Confluent Kafka does not support SSL. For more information, see the ZooKeeper page of the Confluent Kafka documentation.
FLINK_PARALLELISM environment variable The default value of 1 means no parallelism. Do not change it.
Table 8. Limitations to Kubernetes deployments only
Limitation or issue Description
Upgrade and rollback You cannot upgrade or roll back an IBM Business Automation Insights deployment by changing the appVersion parameter in the custom resource. For more information, see Upgrading Business Automation Insights and Rolling back an upgrade.
In Case dashboards, elapsed time calculations do not include late events: Average elapsed Time of completed activities and Average elapsed time of completed cases widgets. Events that are emitted after a case or activity completes are ignored. However, by setting the bai_configuration.icm.process_events_after_completion parameter to true, you can configure the Case Flink job to process events that are generated on a case after the case is closed. The start and end times remain unchanged; therefore, the duration is the same, but the properties are updated based on the events that were generated after completion.
Processing of Automation Decision Services events and of events from custom sources by the event forwarder, possible duplication. The Elasticsearch document identifier, which is used to index an event that is processed by the event forwarder, is automatically assigned to that event. As a result, if the Flink job restarts on failure, events might be duplicated when they are reprocessed by the event forwarder.
Business Performance Center
Fixed date time range: If, in a fixed date time range, you modify only the date or time, your changes are not saved.
Data tables: When any chart is displayed as a data table, only the first 1000 rows are shown.
Data permissions: You can set up data permissions by monitoring source or by team. If you encounter an error when you set a permission by source, try setting the same permission by team.
When you install Business Automation Insights on OCP 4.14, iaf-insights-engine-management crashes and restarts. In IBM Business Automation Insights Management, set the management.resources.limits.memory parameter to 140Mi in the corresponding ICP4ACluster CR. See Management service parameters.
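One way to apply the change is to edit the custom resource directly; the following sketch assumes that the management block sits under the Business Automation Insights section of your ICP4ACluster custom resource, so locate the exact nesting in your own CR before you change it:

# Open the ICP4ACluster custom resource for editing
oc edit icp4acluster <instance-name> -n <cp4ba-namespace>

# Then set the limit in the management resources block, for example:
#   management:
#     resources:
#       limits:
#         memory: 140Mi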

IBM Workflow Process Service

Table 9. Limitations of IBM Workflow Process Service
Limitation or issue Description
REST services invocation Invoking REST services that use parameters of type file or string with the format binary is not supported.
Enterprise Content Management Enterprise Content Management (ECM) functionality that is related to creating local and global documents, such as by using an ECM integration step, an IBM Business Process Manager (BPM) Document List, or the Responsive Document Explorer, is not supported.
Globally and locally managed documents Associating documents with a process instance is not supported.
Online deployment IBM Workflow Process Service 21.0.3 on Docker does not support online deployment for Workflow Process Service Authoring because of a Liberty issue. You must deploy the authoring environment in offline mode.

IBM Operational Decision Manager

Table 10. Limitations of IBM Operational Decision Manager
Limitation or issue Description
The ODM services are not accessible. The following error is returned:
{\"error_description\":\"OpenID Connect client returned with status: SEND_401\",\"error\":401}

The log indicates that the IAM certificate cannot be found in the truststore.jks file.

The IAM certificate has probably been renewed but the ODM pods have not been refreshed with the new certificate.

Restart the ODM pods manually to apply the new certificate.
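For example, assuming that the ODM deployment names contain "odm" (the exact names depend on your custom resource), the restart might look like the following sketch:

# List the ODM deployments in your Cloud Pak namespace
oc get deployments -n <cp4ba-namespace> | grep odm

# Restart each ODM deployment so that the pods pick up the renewed IAM certificate
oc rollout restart deployment/<odm-deployment-name> -n <cp4ba-namespace>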