Known limitations
Before you use IBM Cloud Pak® for Business Automation, make sure that you are aware of the known limitations.
For the most up-to-date information, see the support page Cloud Pak for Business Automation Known Limitations, which is regularly updated.
The following sections provide the known limitations by Cloud Pak capability.
- Image tags cannot start with a zero
- Okta and Azure AD integration
- Multi-zone region (MZR) storage support on Red Hat OpenShift Kubernetes Service (ROKS)
- Connection issues of Identity Management (IM) to LDAP(s)
- LDAP failover
- IBM Automation Document Processing
- IBM Automation Decision Services
- IBM Automation Workstream Services
- IBM Business Automation Insights
- IBM Business Automation Navigator
- IBM Business Automation Application and IBM Business Automation Studio
- IBM FileNet Content Manager
- IBM Business Automation Workflow and IBM Workflow Process Service
Image tags cannot start with a zero
If you use image tags in the custom resources to identify different versions of Cloud Pak for Business Automation container images, do not start the tag with a "0" (zero). The "0" is removed from the tag by the operators and the image cannot be pulled as a result. Image tags can include lowercase and uppercase letters, digits, underscores ( _ ), periods ( . ), and dashes ( - ). For more information, see Digests versus image tags.
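As an illustration of the tag rules described above, the following Python sketch (a hypothetical helper, not part of the product) validates a candidate image tag and rejects tags that start with a zero:

```python
import re

# Allowed characters per the documentation: letters, digits,
# underscores ( _ ), periods ( . ), and dashes ( - ).
# The first character must not be "0", because the operators strip
# a leading zero and the image can then no longer be pulled.
# (Illustrative helper only; the product does not ship this code.)
TAG_PATTERN = re.compile(r"^[A-Za-z1-9_][A-Za-z0-9_.-]*$")

def is_safe_image_tag(tag: str) -> bool:
    """Return True if the tag uses only allowed characters and does not start with a zero."""
    return bool(TAG_PATTERN.match(tag))
```

For example, `is_safe_image_tag("23.0.1-IF001")` passes, while `is_safe_image_tag("0.1.2")` fails because of the leading zero.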
Okta and Azure AD integration
Limitation | Description |
---|---|
Automation Document Processing | Not supported. |
Business Automation Workflow Case management | Case management solutions and applications are not supported. |
Business Automation Workflow Content | Processes cannot be triggered if documents are added from the Administrative Console for Content Engine (ACCE). Note that processes can be launched when documents are added from the Navigator BAW desktop. |
Business Automation Workflow External Services | |
Multi-zone region (MZR) storage support on Red Hat OpenShift Kubernetes Service (ROKS)
Limitation | Description |
---|---|
When one zone is unavailable, it might take up to a minute to recover. | If all worker nodes in a single zone are shut down or unavailable, it can take up to a minute to access the Cloud Pak applications and services. For example, accessing ACCE from Content Platform Engine (CPE) can take up to a minute to respond. |
Connection issues of Identity Management (IM) to LDAP(s)
Limitation | Description |
---|---|
IM does not update LDAP certificates automatically | When the CP4BA operator configures an LDAP connection to IM, the certificates are added to the platform-auth-ldaps-ca-cert secret. If the LDAP certificates change later, the secret is not updated automatically and must be updated manually. |
LDAP failover
LDAP failover is not supported. You cannot configure multiple LDAP servers for failover in the custom resource (CR) file template.
IBM Automation Document Processing
Limitation | Description |
---|---|
Issues with versioning when importing a project in Document Processing Designer with the merge or overwrite options. | Importing a project with the overwrite option in Document Processing Designer supports importing projects only from the previous version. For example, in 23.0.1, you must import only projects that were exported from version 22.0.2. If you have exported projects from older releases, you must periodically update the archive files to the latest release. Merging projects from different releases is not recommended. When you import a project with the merge option across releases, you must first migrate the older release archive file into the latest release. When you import a project with the merge option, the current project is merged with the project from the exported archive file. Only nonconflicting document types are imported into the project; conflicting document types are skipped during import. If you need to import a document type that is in conflict, you must first delete that document type from the project before importing the archive file. After importing a project, it is recommended to retrain both classification and extraction models to avoid issues with the change of document classes in the merged project, and to make sure that the models are trained and built with the latest features. |
Need an egress to use webhooks with external custom applications. | If you have an external custom application that uses the webhook feature, you must set up a custom egress for Document Processing engine so that notifications can be sent outside of the Red Hat OpenShift Container Platform cluster where Document Processing engine is deployed. For more information, see Creating an external egress for Document Processing engine when an external application uses webhooks. |
In a starter deployment, simultaneous uploading of multiple large batches might fail with an Uploading Error status. | |
Accessibility of the Verify client graphical user interface. | A user who uses the Firefox browser to reach the Verify client user interface and then tabs into it to access the various zones cannot tab out again. You reach the Verify client user interface in different ways, depending on your application. For example, for the single document application, you tab into the content list on the start page, select a document from the list by pressing the Tab or Arrow keys, tab to the context icon (the three dots), select the icon by pressing the space bar or the Enter key, and finally press the Arrow keys to select Review document. |
Data standardization: uniqueness | You cannot reuse an existing data definition for composite fields, nor create a data definition with a name that is already used for another data definition. When standardizing your data, you can associate a data definition with a field or a document type. These data definitions are used when the project is deployed to a runtime environment. For simple fields, you can either create a data definition or reuse an existing one. For composite fields, you cannot reuse an existing data definition; you can only create one. If you attempt to create a data definition with the same name as an existing one, you get a uniqueness error. |
Deleting and re-creating a project | If you want to delete and re-create a Document Processing project to start over, you might encounter errors after re-creating the project. This occurs because the re-created project is out of sync with the Git repository. For more information, see Saving an ADP Project fails with status code 500 service error. |
Fit to height option does not fit to height properly. | When you view a document in a single or batch document processing application, if you rotate the document clockwise or counterclockwise and select the Fit to height option, the size is not changed. The limitation applies when fixing classification or data extraction issues, and in portrait view in the modal viewer dialog. |
The ViewONE service is not load-balanced. | Due to a limitation when editing the document to fix classification issues in batch document processing applications, the current icp4adeploy-viewone-svc service session affinity is configured as ClientIP and the session has to be sticky. |
Postal mail address and Address information field types | |
Microsoft Word documents | |
Support of NVIDIA CUDA drivers 11.2 | To use a FIPS-compliant TensorFlow version, the NVIDIA CUDA drivers 11.2 are required. However, IBM Cloud® Public (ROKS) GPU does not support CUDA 11.2 because it uses Red Hat® Enterprise Linux® (RHEL) 7. The current version of NVIDIA Operator on RHEL 7 is 1.5.2. It cannot be upgraded to the latest version (1.6.x) to use the latest NVIDIA CUDA drivers 11.2 because that version does not support RHEL 7. The current NVIDIA Operator that is installed from the Operator Hub does not run on GPU RHEL 7 Bare Metal Servers, as you only have the option to deploy to 1.6.x. |
Data extraction from tables is not fully supported. | |
Some checkboxes are not detected. | Some types of checkboxes are not detected, for example if they are too small or improperly shaped. For more information and examples of non-detected checkboxes, see Limitations for checkbox detection. |
Problems accessing the application configurator after the initial configuration | |
SystemT Extractor accuracy | The SystemT extractors that are included with IBM Automation Document Processing are intended as samples to demonstrate the capabilities of the feature. They are not tuned to any specific document format and might not provide high recognition rates for some document types. If you plan to use SystemT extractors in a production environment, you need to build your own SystemT extractors, which can be better trained for the documents that you are processing. |
IBM Automation Decision Services
For more information, see Known limitations.
IBM Automation Workstream Services
For more information, see Workstream limitations.
IBM Business Automation Insights
Limitation | Description |
---|---|
Alerts |
|
Business Performance Center |
|
No Business Automation Insights support for IBM Automation Document Processing | The integration between IBM Automation Document Processing (ADP) and Business Automation Insights is not supported. When you deploy or configure the IBM Cloud Pak for Business Automation platform, select the Business Automation Insights component together with patterns that are supported by Business Automation Insights, such as workflow (Business Automation Workflow) or decisions (Operational Decision Manager), not just with document-processing (IBM Automation Document Processing). |
Flink jobs might fail to resume after a crash. | After a Flink job failure or a machine restart, the Flink cluster might not be able to restart the Flink job automatically. For a successful recovery, restart Business Automation Insights. For instructions, see Troubleshooting Flink jobs. |
Case event emitter (ICM) | You can configure a connection to only one target object store. The Case event emitter does not support multiple target object stores. |
Elasticsearch indices | Defining a high number of fields in an Elasticsearch index might lead to a mappings explosion, which can cause out-of-memory errors and situations that are difficult to recover from. The maximum number of fields in Elasticsearch indices that are created by IBM Business Automation Insights is set to 1000. Field and object mappings, and field aliases, count toward this limit. Ensure that the various documents that are stored in Elasticsearch indices do not lead to reaching this limit. Event formats are documented in Reference for event emission. For Operational Decision Manager, you can configure event processing to avoid the risk of mappings explosion. See Operational Decision Manager event processing walkthrough. |
In the BPEL Tasks dashboard, the User tasks currently not completed widget does not display any results. | The search that is used by the widget does not return any results because it uses an incorrect filter for the task state. To avoid this issue, edit the filter in the User tasks currently waiting to be processed search. Set the state filter to accept one of the following values: TASK_CREATED, TASK_STARTED, TASK_CLAIM_CANCELLED, TASK_CLAIMED. |
Historical Data Playback REST API | The API plays back data only from closed processes (completed or terminated). Active processes are not handled. |
In Case dashboards, elapsed time calculations do not include late events: Average elapsed Time of completed activities and Average elapsed time of completed cases widgets. | Events that are emitted after a case or activity completes are ignored. But by setting the bai_configuration.icm.process_events_after_completion parameter to true, you can set the Case Flink job to process the events that are generated on a case after the case is closed. The start and end times remain unchanged. Therefore, the duration is the same but the properties are updated based on the events that were generated after completion. |
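To illustrate the 1000-field cap described in the Elasticsearch indices limitation above, the following Python sketch (an illustrative helper, not product code, and simplified because it ignores field aliases and multi-fields) counts the fields in a mapping's properties tree, the way nested fields count toward the limit:

```python
def count_mapped_fields(properties: dict) -> int:
    """Recursively count fields in an Elasticsearch mapping "properties" tree.

    Each field counts once, and object fields contribute their nested
    children as well, which is why deeply nested documents can approach
    the limit faster than expected. (Simplified: aliases and multi-fields,
    which also count toward the real limit, are not modeled here.)
    """
    total = 0
    for field_def in properties.values():
        total += 1  # the field itself
        total += count_mapped_fields(field_def.get("properties", {}))
    return total

# Default cap in Business Automation Insights indices
# (index.mapping.total_fields.limit in Elasticsearch terms).
FIELD_LIMIT = 1000

# Hypothetical event mapping, for illustration only.
mapping = {
    "caseId": {"type": "keyword"},
    "timestamp": {"type": "date"},
    "payload": {
        "type": "object",
        "properties": {
            "amount": {"type": "double"},
            "currency": {"type": "keyword"},
        },
    },
}
```

Here the sample mapping contributes five fields toward the limit: three top-level fields plus the two children of `payload`.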
IBM Business Automation Navigator
Limitation | Description |
---|---|
Resiliency issues can cause lapses in the availability of Workplace after a few weeks. | This issue might be attributed to issues with the Content Platform Engine (cpe) pod. |
Task Manager is not supported when configuring with System for Cross-domain Identity Management (SCIM). | Task Manager requires an LDAP registry for user authorization. It is not supported in a deployment that is configured with SCIM. |
After you update the schema name in the CR YAML, the schema name is not updated in the system.properties file and the Business Automation Navigator pod uses the old schema name. | You need to manually delete the system.properties file and restart the Business Automation Navigator pod so that it uses the new schema name. |
IBM Business Automation Application and IBM Business Automation Studio
Limitation | Description |
---|---|
Process applications from Business Automation Workflow do not appear in Application Designer. | Sometimes the app resources of the Workflow server do not appear in Studio when you deploy Workflow server instances, Studio, and Resource Registry in the same custom resource YAML file. If you deployed Business Automation Studio with the Business Automation Workflow server in the same custom resource YAML file and you do not see process applications from the Business Automation Workflow server in Business Automation Studio, restart the Business Automation Workflow server pod. |
The Business Automation Workflow toolkit and configurators might not get imported properly. | When you install both Business Automation Workflow on containers and Business Automation Studio together, the Business Automation Workflow toolkit and configurators might not get imported properly. If you don't see the Workflow Services toolkit, the Start Process Configurator, or the Call Service Configurator, manually import the .twx files by downloading them from the Contributions table inside the Resource Registry section of the Administration page of Business Automation Studio. |
Kubernetes kubectl known issue: modified subpath configmap mount fails when container restarts (#68211). | Business Automation Studio related pods go into a CrashLoopBackOff state during the restart of the docker service on a worker node, with an error similar to: Warning Failed 3m kubelet, <IP_ADDRESS> Error: failed to start container: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting ...". To recover a pod, delete it in the OpenShift® console and create a new pod. |
To use Application Engine with Db2® for High Availability and Disaster Recovery (HADR), you must have an alternative server available when Application Engine starts. | Application Engine depends on the automatic client reroute (ACR) of the Db2 HADR server to fail over to a standby database server. You must have a successful initial connection to that server when Application Engine starts. |
IBM Resource Registry can get out of sync. | If you have more than one etcd server and the data gets out of sync between the servers, you must scale to one node and then scale back to multiple nodes to synchronize Resource Registry. |
After you create the Resource Registry, you must keep the replica size. | Because of the design of etcd, changing the replica size can cause data loss. If you must set the replica size, set it to an odd number. If you reduce the pod size, the pods are deleted one by one to prevent data loss and the possibility that the cluster gets out of sync. |
After you deploy Business Automation Studio or Application Engine, you cannot change the Business Automation Studio or Application Engine admin user. | Make sure that you set the admin user to a sustainable username at installation time. |
Because of a Node.js server limitation, Application Engine trusts only root CA. | If an external service is used and signed with another root CA, you must add that root CA as trusted instead of the service certificate. |
IBM FileNet® Content Manager
Limitation | Description |
---|---|
A smaller number of indexing batches than configured leads to a noticeable degradation in the overall indexing throughput rate. | Because obsolete Virtual Servers in the Global Configuration Database (GCD) are not automatically cleaned up, the Virtual Server count can be higher than the actual number of CE instances with dispatching enabled. That inflated number results in a smaller number of concurrent batches per CSS server, negatively affecting indexing performance. For more information and to resolve the issue, see Content Platform Engine uneven CBR indexing workload and indexing degradation. |
Downloading just Jace.jar from the Client API Download area of ACCE fails, returning only the text "3.5.0.0 (20211028_0742) x86_64". | When an application like ACCE is accessed through Zen, the application and certain operator-controlled elements in Kubernetes need additional logic to support embedded or "self-referential" URLs. The single file download in the Client API Download area of ACCE uses self-referential URLs, and the additional logic is missing. To avoid the self-referential URLs, download the whole Client API package that contains the desired file instead of an individual file, and then extract the individual file from the package. |
Queries to retrieve group hierarchies by using the SCIM Directory Provider might fail if one of the groups in the hierarchy contains a space or another character that is not valid in an HTTP URL. | The problem can occur when a search of users or groups tries to retrieve the groups that a group belongs to. If one of the groups in this chain contains a space or another character that is not valid in an HTTP URL, the search might fail. |
LDAP to SCIM attribute mapping might not be correct. | The default LDAP to SCIM attribute mapping that is used by IM might not be correct. In particular, TDS/SDS LDAP might have incorrect mappings for the group attributes for objectClass and members. To learn how to review and change this mapping, see Updating SCIM LDAP attributes mapping. |
When you use the SCIM Directory Provider to perform queries for a user or group with no search attribute, all users or groups are returned rather than none. | Queries without a search pattern are treated as a wildcard rather than as a restriction to return nothing. |
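The SCIM group-hierarchy limitation above comes down to characters such as spaces not being valid in a raw HTTP URL. The following Python sketch (the base URL and endpoint path are hypothetical, for illustration only) shows how such a group name must be percent-encoded to form a legal URL:

```python
from urllib.parse import quote

def scim_group_url(base: str, group_name: str) -> str:
    """Percent-encode a group name so it is legal inside an HTTP URL path.

    The base URL and /scim/v2/Groups path here are hypothetical,
    used only to illustrate the encoding; they are not the product's
    actual endpoints.
    """
    return f"{base}/scim/v2/Groups/{quote(group_name, safe='')}"

# A group name with a space: the raw name cannot appear in a URL,
# but its percent-encoded form can.
url = scim_group_url("https://example.com", "Finance Admins")
```

The resulting URL contains `Finance%20Admins` rather than the raw name, which is the kind of encoding the failing searches would need.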
IBM Business Automation Workflow and IBM Workflow Process Service
For IBM Business Automation Workflow, see IBM Business Automation Workflow known limitations.
For Workplace, see Workplace limitations.