Known limitations
Before you use IBM Cloud Pak® for Business Automation, make sure that you are aware of the known limitations.
For the most up-to-date information, see the support page Cloud Pak for Business Automation Known Limitations, which is regularly updated.
The following sections provide the known limitations by Cloud Pak capability.
- Multi-zone region (MZR) storage support on Red Hat OpenShift Kubernetes Service (ROKS)
- LDAP failover
- IBM Automation Document Processing
- IBM Automation Decision Services
- IBM Business Automation Studio and IBM Business Automation Application
- IBM FileNet Content Manager
- IBM Business Automation Navigator
- IBM Business Automation Insights
- IBM Workflow Process Service
Multi-zone region (MZR) storage support on Red Hat OpenShift Kubernetes Service (ROKS)
Limitation | Description |
---|---|
When one zone is unavailable, it might take up to a minute to recover. | If all worker nodes in a single zone are shut down or unavailable, it can take up to a minute to access the Cloud Pak applications and services. For example, accessing ACCE from CPE can take a minute to respond. |
LDAP failover
LDAP failover is not supported. You cannot configure multiple LDAP servers or a failover LDAP server in the custom resource (CR) file template.
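As an illustration, the CR template accepts only a single `ldap_configuration` block. The field names below are representative of the CP4BA CR template, and the host value is a placeholder; verify the exact parameter names against your CR template:

```yaml
spec:
  ldap_configuration:
    # Only one LDAP server can be defined here; a second block or a
    # failover host list is not accepted by the CR template.
    lc_selected_ldap_type: "IBM Security Directory Server"
    lc_ldap_server: "ldap.example.com"   # placeholder host
    lc_ldap_port: "389"
```

Any resilience at the directory level would need to be provided outside the CR.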
IBM Automation Document Processing
Limitation | Description |
---|---|
Members of the Doc Processing Analysts team must have reader access in Team Server in order to deploy a project. | If you are a member of the Doc Processing Analysts team and you want to deploy a Document Processing project, you must update the reader access of the Doc Processing Analysts team in the Business Team Server user interface. |
The Doc Processing Project Admins team must be added to the Creators team in Team Server in order to deploy a project in a design environment. | If you are a project administrator for a Document Processing project and you want to deploy this project in a design environment, you must first add the Doc Processing Project Admins team to the Creators team in the Business Team Server user interface. |
In a starter deployment, simultaneous uploading of multiple large batches might fail with an Uploading Error status. | |
Accessibility of the Verify client graphical user interface. | A visually impaired user who uses the Firefox browser to reach the Verify client user interface and then tabs into it to access the various zones cannot tab out again. You reach the Verify client user interface in different ways, depending on your application. For example, for the single document application, you tab into the content list on the start page, select a document from the list by pressing the Tab or Arrow keys, tab to the context icon (the three dots), select the icon by pressing the space bar or the Enter key, and finally press the Arrow keys to select Review document. |
Identity cards are not processed. | An error message in the onec-extraction pod log indicates that identity cards could not be processed. As a workaround, restart the onec-extraction pod so that it processes ID cards successfully. |
Data standardization: uniqueness | You cannot reuse an existing data definition for composite fields, nor create a data definition with a name that is already used for another data definition. When you standardize your data, you can associate a data definition with a field or a document type. These data definitions are used when the project is deployed to a runtime environment. For simple fields, you can either create a data definition or reuse an existing one. For composite fields, you cannot reuse an existing data definition; you can only create one. If you attempt to create a data definition with the same name as an existing one, you get a uniqueness error. |
The language code for Brazilian Portuguese is returned as pt and not pt-br. | If you export your project data in JSON format and you set Brazilian Portuguese as an extraction language, the language code that is returned is pt and not the expected pt-br. |
Deleting and re-creating a project | If you want to delete and re-create a Document Processing project to start over, you might encounter errors after re-creating the project. This occurs because the re-created project is out of sync with the Git repository. For more information, see Saving an ADP Project fails with status code 500 service error. |
Fit to height option does not fit to height properly. | In a single or batch document processing application, the Fit to height option does not fit to height properly. When you view a document in a single or batch document processing application, if you rotate this document clockwise or counterclockwise and select the Fit to height option, the size is not changed. The limitation applies when fixing classification or data extraction issues, and in portrait view in the modal viewer dialog. |
The ViewONE service is not load-balanced. | Because of a limitation when editing a document to fix classification issues in batch document processing applications, the icp4adeploy-viewone-svc service session affinity is configured as ClientIP, and sessions must be sticky. |
Postal mail address and Address information field types | |
Microsoft Word documents | |
Support of NVIDIA CUDA drivers 11.2 | To use a FIPS-compliant TensorFlow version, the NVIDIA CUDA drivers 11.2 are required. However, IBM Cloud® Public (ROKS) GPU does not support CUDA 11.2 because it uses Red Hat® Enterprise Linux® (RHEL) 7. The current version of NVIDIA Operator on RHEL 7 is 1.5.2. It cannot be upgraded to the latest version (1.6.x) to use the latest NVIDIA CUDA drivers 11.2, because that version does not support RHEL 7. The current NVIDIA Operator that is installed from the Operator Hub does not run on GPU RHEL 7 Bare Metal Servers, as you only have the option to deploy to 1.6.x. |
Upgrading applications that are based on a version of the Batch Document Processing Template or Document Processing Template older than 21.0.3 might cause breaking changes. | If you upgrade those applications to use the latest document processing toolkit, your key-value pair (KVP) fields might not display correctly. Workaround: To display the KVP fields correctly in the Data extraction issues page, update the settings for your Batch Document Processing Template or Document Processing Template application in Application Designer. |
HPA flapping effect | Content Analyzer supports Horizontal Pod Autoscaler (HPA). However, a known issue exists with HPA that causes a flapping effect (frequent scaling up and down). This issue is fixed in Kubernetes version 1.21, which is supported in OCP 4.8. |
Data extraction from tables is not fully supported. | |
Some checkboxes are not detected. | Some types of checkboxes are not detected, for example if they are too small or improperly shaped. For more information and examples of non-detected checkboxes, see Limitations for checkbox detection. |
Problems accessing the application configurator after the initial configuration | |
One project database is provided in a starter deployment. | By design, one project database is configured as part of the starter pattern for Document Processing. As a result, only one Document Processing project can be created in a starter deployment. |
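The session affinity noted for the ViewONE service above corresponds to a standard Kubernetes Service setting. A minimal sketch of what such a Service looks like (the timeout value is illustrative, matching the Kubernetes default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: icp4adeploy-viewone-svc
spec:
  # Requests from the same client IP are pinned to one pod, so
  # ViewONE sessions stay sticky rather than load-balanced.
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # illustrative; the Kubernetes default
```

Because stickiness is per client IP, a single busy client does not spread its load across ViewONE pods.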
IBM Automation Decision Services
For more information, see Known limitations.
IBM Business Automation Studio and IBM Business Automation Application
Limitation | Description |
---|---|
Process applications from Business Automation Workflow do not appear in Application Designer. | Sometimes the app resources of the Workflow server do not appear in Studio when you deploy Workflow server instances, Studio, and Resource Registry in the same custom resource YAML file. If you deployed Business Automation Studio with the Business Automation Workflow server in the same custom resource YAML file, and you do not see process applications from the Business Automation Workflow server in Business Automation Studio, restart the Business Automation Workflow server pod. |
The Business Automation Workflow toolkit and configurators might not get imported properly. | When you install both Business Automation Workflow on containers and Business Automation Studio together, the Business Automation Workflow toolkit and configurators might not get imported properly. If you don't see the Workflow Services toolkit, the Start Process Configurator, or the Call Service Configurator, manually import the .twx files by downloading them from the Contributions table inside the Resource Registry section of the Administration page of Business Automation Studio. |
Kubernetes kubectl known issue: modified subpath configmap mount fails when container restarts (#68211). | Business Automation Studio related pods go into a CrashLoopBackOff state during the restart of the docker service on a worker node, with an error such as: Warning Failed 3m kubelet, <IP_ADDRESS> Error: failed to start container: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting ...". To recover a pod, delete it in the OpenShift® console and create a new pod. |
To use IBM Business Automation Application (Application Engine) with Db2® for High Availability and Disaster Recovery (HADR), you must have an alternative server available when Application Engine starts. | Application Engine depends on the automatic client reroute (ACR) of the Db2 HADR server to fail over to a standby database server. You must have a successful initial connection to that server when Application Engine starts. |
IBM Resource Registry can get out of sync. | If you have more than one etcd server and the data gets out of sync between the servers, you must scale to one node and then scale back to multiple nodes to synchronize Resource Registry. |
After you create the Resource Registry, you must keep the replica size. | Because of the design of etcd, changing the replica size can cause data loss. If you must set the replica size, set it to an odd number. If you reduce the pod size, the pods are deleted one by one to prevent data loss and the possibility that the cluster gets out of sync. |
After you deploy Business Automation Studio or Application Engine, you cannot change the Business Automation Studio or Application Engine admin user. | |
Because of a Node.js server limitation, Application Engine trusts only root CA. | If an external service is used and signed with another root CA, you must add the root CA as trusted instead of the service certificate. |
The Java™ Message Service (JMS) statefulset doesn't support scale. | You must keep the replica size of the JMS statefulset at 1. |
IBM FileNet® Content Manager
Limitation | Description |
---|---|
A smaller number of indexing batches than configured leads to a noticeable degradation in the overall indexing throughput rate. | Because obsolete Virtual Servers in the Global Configuration Database (GCD) are not automatically cleaned up, the Virtual Server count can be higher than the actual number of CE instances with dispatching enabled. That inflated number results in a smaller number of concurrent batches per CSS server, which negatively affects indexing performance. For more information and to resolve the issue, see Content Platform Engine uneven CBR indexing workload and indexing degradation. |
Limitation of Google ID in Edit Service on an External Share desktop | In external share deployments that use Google ID for the identity provider authentication, issues can occur when you use the Edit Service to edit or add Microsoft documents. The issue causes login to be unsuccessful. |
Downloading just Jace.jar from the Client API Download area of ACCE fails, returning only the text "3.5.0.0 (20211028_0742) x86_64". | When an application like ACCE is accessed through Zen, the application and certain operator-controlled elements in Kubernetes need additional logic to support embedded or "self-referential" URLs. The single-file download in the Client API Download area of ACCE uses self-referential URLs, and this additional logic is missing. To avoid the self-referential URLs, download the whole Client API package that contains the desired file instead of an individual file, and then extract the individual file from the package. |
Queries to retrieve group hierarchies by using the SCIM Directory Provider might fail if one of the groups in the hierarchy contains a space or another character that is not valid in an HTTP URL. | The problem can occur when you search for users or groups and the search tries to retrieve the groups that a group belongs to. If one of the groups in this chain contains a space or another character that is not valid in an HTTP URL, the search might fail. |
Multiple LDAPs configured in IAM may result in incorrect LDAP to SCIM attribute mappings. | If there are multiple LDAPs configured in IAM, you should create a custom SCIM attribute map for the LDAP servers used in FNCM. Otherwise an incorrect mapping may result. To learn more about how to review this mapping and change it, see Updating SCIM LDAP attributes mapping. |
LDAP to SCIM attribute mapping may not be correct. | The default LDAP to SCIM attribute mapping used by IAM may not be correct. In particular, TDS/SDS LDAP may have incorrect mappings for the group attributes for objectClass and members. To learn more about how to review this mapping and change it, see Updating SCIM LDAP attributes mapping. |
External Share does not work when using Zen and IAM for authentication. | Since IAM currently does not allow authentication with external identity providers, the External Share feature cannot work in a Zen enabled environment. If you wish to use External Share, then deploy the content pattern with Zen disabled. |
When using the SCIM Directory Provider to perform queries for a user or group with no search attribute, all users/groups are returned rather than no users or groups. | Queries without a search pattern are being treated as a wildcard rather than a restriction to return nothing. |
IBM Business Automation Navigator
Limitation | Description |
---|---|
Resiliency issues can cause lapses in the availability of Workplace after a few weeks. | This issue might be attributed to issues with the Content Platform Engine (cpe) pod. Use the following mitigation steps: |
IBM Business Automation Insights
Limitation | Description |
---|---|
Upgrade and rollback | You cannot upgrade or roll back an IBM Business Automation Insights deployment by changing the appVersion parameter in the custom resource. For more information, see Upgrading Business Automation Insights and Rolling back an upgrade. |
Alerts | |
Business Performance Center | |
No Business Automation Insights support for IBM Automation Document Processing | The integration between IBM Automation Document Processing (ADP) and Business Automation Insights is not supported. When you deploy or configure the IBM Cloud Pak for Business Automation platform, select the Business Automation Insights component together with patterns that are supported by Business Automation Insights, such as workflow (Business Automation Workflow) or decisions (Operational Decision Manager), not just with document-processing (IBM Automation Document Processing). |
Flink jobs might fail to resume after a crash. | After a Flink job failure or a machine restart, the Flink cluster might not be able to restart the Flink job automatically. For a successful recovery, restart Business Automation Insights. For instructions, see Troubleshooting Flink jobs. |
Case event emitter (ICM) | You can configure a connection to only one target object store. The Case event Emitter does not support multiple target object stores. |
Elasticsearch indices | Defining a high number of fields in an Elasticsearch index might lead to a so-called mappings explosion, which can cause out-of-memory errors and situations that are difficult to recover from. The maximum number of fields in Elasticsearch indices that are created by IBM Business Automation Insights is set to 1000. Field and object mappings, and field aliases, count toward this limit. Ensure that the various documents that are stored in Elasticsearch indices do not lead to reaching this limit. Event formats are documented in Reference for event emission. For Operational Decision Manager, you can configure event processing to avoid the risk of mappings explosion. See Operational Decision Manager event processing walkthrough. |
In the BPEL Tasks dashboard, the User tasks currently not completed widget does not display any results. | The search that is used by the widget does not return any results because it uses an incorrect filter for the task state. To avoid this issue, edit the filter in the User tasks currently waiting to be processed search. Set the state filter to accept one of the following values: TASK_CREATED, TASK_STARTED, TASK_CLAIM_CANCELLED, TASK_CLAIMED. |
Historical Data Playback REST API | The API plays back data only from closed processes (completed or terminated). Active processes are not handled. |
In Case dashboards, elapsed time calculations do not include late events: Average elapsed time of completed activities and Average elapsed time of completed cases widgets. | Events that are emitted after a case or activity completes are ignored. However, by setting the bai_configuration.icm.process_events_after_completion parameter to true, you can configure the Case Flink job to process the events that are generated on a case after the case is closed. The start and end times remain unchanged; therefore, the duration is the same, but the properties are updated based on the events that were generated after completion. |
Processing of Automation Decision Services events and of events from custom sources by the event forwarder, possible duplication | The Elasticsearch document identifier, which is used to index an event that is processed by the event forwarder, is automatically assigned to that event. As a result, if the Flink job restarts on failure, events might be duplicated when they are reprocessed by the event forwarder. |
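The 1000-field cap described above corresponds to the standard Elasticsearch index setting `index.mapping.total_fields.limit`. A sketch for inspecting or re-applying it on a hypothetical index named `my-index` (raising the limit only postpones a mappings explosion, so prefer reducing the number of distinct fields):

```json
GET /my-index/_settings/index.mapping.total_fields.limit

PUT /my-index/_settings
{
  "index.mapping.total_fields.limit": 1000
}
```

Remember that field aliases and object mappings count toward the limit, not just leaf fields.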
IBM Workflow Process Service
Limitation | Description |
---|---|
REST services invocation | Invoking REST services that use parameters of type file or string with the format binary is not supported. |
Enterprise Content Management | Enterprise Content Management (ECM) related to creating local and global documents, such as by using an ECM integration step, IBM Business Process Manager (BPM) Document List, or Responsive Document Explorer, is not supported. |
Globally and locally managed documents | Associating documents with a process instance is not supported. |