Known limitations and issues

Before you use IBM Cloud Pak® for Automation, make sure that you are aware of the known limitations.

For the most up-to-date information, see the support page Cloud Pak for Automation Known Limitations, which is regularly updated.

The following sections provide the known limitations by Cloud Pak capability.

IBM Automation Document Processing

 New in 20.0.3 
Table 1. Known issues and limitations of Document Processing
Limitation Description
Problems accessing the application configurator after the initial configuration

When you create an application in Business Automation Studio, an application configurator is displayed, where you enter configuration parameters for the application.

However, after the application is created, you cannot access the same configurator. As a result, you cannot change the configuration settings in the same way.

Workaround

If you want to update application settings, you can use the view configurator from inside Application Designer.

In the Batch Document Processing template, use the view configurator for the Verify Data page.

In the Document Processing template, use the view configurator for the VerifyPage.

In the Document Processing application, users might encounter issues during finalization of field values.

At processing time, the Content Analyzer can return multiple KeyClass entries with the same name in the KVP Table of the extracted JSON. For example:

  • A defined field appears more than once on a page or across multiple pages.
  • A field has multiple aliases that match multiple values on a page.

Mitigation

When multiple field values are encountered during finalization, the value from the last field is set and any additional entries are ignored.
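The last-value-wins behavior can be sketched as follows; the function name and the entry structure are hypothetical illustrations, not the actual extracted JSON schema.

```python
# Hypothetical sketch of the finalization behavior: when several KVP
# entries share the same KeyClass name, the value from the last entry
# is kept and earlier entries are ignored.
def finalize_field_values(kvp_entries):
    """kvp_entries: list of (key_class_name, value) tuples in document order."""
    finalized = {}
    for name, value in kvp_entries:
        finalized[name] = value  # later entries overwrite earlier ones
    return finalized

entries = [
    ("InvoiceNumber", "INV-001"),  # first occurrence, overwritten later
    ("InvoiceDate", "2020-11-01"),
    ("InvoiceNumber", "INV-002"),  # last occurrence, this value is kept
]
print(finalize_field_values(entries))
```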

Behavior for selecting elements when you fix document processing issues. In the Document Processing application, you cannot use the Tab key to select elements when you fix issues in the Document type and page order and Data extraction issues tiles.

Mitigation

Use the mouse to select elements.

Changing a KeyClass name might affect existing annotations. The KVPTable lists the KVPs that are identified in the document. KVP entries do not store the associated KeyClass ID; they are identified by the KeyClass name. When a document is annotated, this information is stored internally as a KVPTable, and the annotation is identified by the KeyClass name. If the KeyClass name is changed after the annotation is done, the change might not be synchronized to the stored annotation and might affect the extraction of the KVP.

Mitigation

For training documents only (not for production runtime documents): update the key class name, delete the training documents, upload and annotate them again, and then retrain the model.

The Windows-based batch scripts (with file extension .bat) in the DB2 folder are not currently supported and must not be used. The DB2 folder with the database preparation scripts for the Document Processing Db2 databases contains scripts for both UNIX and Windows. Do not use the Windows scripts in 20.0.3.
Document Processing components do not support Horizontal Pod Autoscaler (HPA). If you want to scale out the Document Processing pods, update the appropriate replica_count parameter in the CR YAML file.
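The CR edit can be sketched as follows; the section and component names are illustrative assumptions, because the exact replica_count path differs per Document Processing component (check the custom resource reference for your release).

```yaml
# Illustrative CR YAML fragment (hypothetical key names): scale out a
# Document Processing component by raising its replica count, because
# HPA is not supported for these pods.
spec:
  ca_configuration:          # assumed section name for the component settings
    ocrextraction:           # assumed component name
      replica_count: 2       # increase to scale out; reapply the CR afterward
```

After you edit the CR, reapply it so that the operator reconciles the deployment to the new replica count.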
The number of missing fields might be inaccurate if you add a required field after you teach the model.

If you define a new required field after you have used samples to teach the model, and then run the samples again to train the model, the model does not account for the missing value of the required field. You must add a new sample to refresh the data.

One project database is provided in the evaluation or demo installation. By design, one project database is configured as part of the evaluation or demo deployment pattern for Document Processing. As a result, only one Document Processing project can be created in the demo deployment.
Two validators from the pre-trained model exhibit incorrect severity levels.

The Datatype Mismatch validator and the Required Value validator from the pre-trained extraction model are intended to return an error if the conditions are not met. However, the two validators return an informational severity instead of an error.

You can fix this issue by updating the severity settings for any field that uses the validator. When you edit the validator, you can override the severity setting and specify an error or warning instead of informational. You must make this update for every field that uses the validator. You do not need to retrain the model, but you must redeploy the project after the update.

If the samples that are used for training for checkbox fields have composite type or table fields defined, the data extraction model might not extract the checkbox fields correctly.

In each sample document, make sure that the Undefined data tab does not contain any fields with an empty captured field.

Data from a composite field type is mapped to a Content Platform Engine string property after the document is finalized.

The composite data is persisted in JSON format in a string property. Currently, the KeyClass field names are stored without any values in Content Platform Engine. For example:

{"CustName":"","InvoiceNumber":"","InvoiceDate":""}

To find the value of the composite data, check the content.json file that is persisted as an annotation object along with the document.
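As a minimal sketch, parsing the string property shows only the field names with empty values; the actual values must be read from the content.json annotation that is described above.

```python
import json

# The composite field is persisted as a JSON string with empty values
# (as in the example above); parsing it reveals only the field names.
string_property = '{"CustName":"","InvoiceNumber":"","InvoiceDate":""}'
fields = json.loads(string_property)

# All values are empty: the real values must be read from the
# content.json annotation that is stored with the document.
empty_fields = [name for name, value in fields.items() if value == ""]
print(empty_fields)
```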

IBM Automation Decision Services

 New in 20.0.2 

For more information, see Known limitations.

IBM Automation Digital Worker

 Removed in 20.0.3 
Table 2. (Deprecated) Limitations of IBM Automation Digital Worker
Limitation or issue Description
 New in 20.0.2 
Skills that connect to a service that is hosted on the ICP4A platform do not support topologies where the service is using a different UMS instance than Digital Worker. The following skills are impacted:
  • Decision rules
  • Case management
  • Content analyzer
  • Process management

To correct the issue, you must configure the service so that it uses the UMS instance of Digital Worker and uses the REST API skill.

Previous skill versions might be incompatible with 20.0.2 and cause tasks to fail. In 20.0.2, input schema validation is enforced for skills when the tasks are run. If you have tasks from a previous version, you must update your tasks to use the most recent skill versions:
  1. Undeploy the task.
  2. In your skills list, select Edit skill and take note of the skills configuration and name.
  3. Remove the skills from the task.
  4. Add the skills again with the latest version, and use the same configuration and name as before.
Arabic-indic, Devanagari, and Thai digits are not supported. You cannot enter Arabic-indic, Devanagari, or Thai digits in text areas, for example, when you add a guardrail and select Single/Multiple value selection from a set with Number data type.
 New in 20.0.1 
You cannot schedule a task with Decision Rules skill to use Hosted Transparent Decision Service. You can schedule a task, but the execution of the skill fails with an authorization error.
When you publish a skill, it cannot be more than 300 KB in size. For example, the Watson™ visual recognition skill is only 6 KB. Skills that are slightly under 300 KB might still produce a size error.
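A pre-publish size check can be sketched like this; the function name is hypothetical, and the 20 KB safety margin is an arbitrary assumption to account for skills slightly under 300 KB still failing.

```python
SIZE_LIMIT = 300 * 1024      # 300 KB publish limit from the documentation
SAFETY_MARGIN = 20 * 1024    # hypothetical headroom: skills slightly under 300 KB can still fail

def can_publish(bundle_size_bytes):
    """Return True if a packaged skill of this size is comfortably under the limit."""
    return bundle_size_bytes <= SIZE_LIMIT - SAFETY_MARGIN

print(can_publish(6 * 1024))  # a 6 KB skill, like the visual recognition example
```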
 New in 19.0.3 
Tasks that run on a runtime Pod cannot complete their run if:
  • You scale down the StatefulSet runtime
  • This particular runtime Pod crashes

Before you scale down, make sure that no tasks are running. Be prepared to restart a task after a runtime Pod crash.

Task runs might cause the runtime pod to restart. Task runs share runtime pod resources with no limits. A run might consume all available memory and eventually cause the pod to restart.
Local discrepancies can occur if several UMS users are connected to the same Digital Worker instance at the same time. If several UMS users are connected to the same Digital Worker instance, the persistence is ruled according to the last modification that was done. This can lead to local discrepancies, especially if different users are editing the same artifact. You can resolve those discrepancies by refreshing your current page.

For example, if you are deploying a task and get an error, this can be because another user added some incomplete instructions in the meantime, which prevents any deployment at this moment.

No automatic validation is done on instructions or schemas when tasks are auto-saved. When you are working on tasks, your work is saved automatically, but the instructions and the schemas are not validated. Validation of these artifacts is done when you deploy the task.
Skills in a task must have a unique name. You must not have multiple skills with the same name in the same task. To be able to call each skill, they must have unique names.
Elasticsearch index limitation when you send tracking data to IBM Business Automation Insights. By default, an Elasticsearch index can contain only 1000 different fields. If you do not change your Business Automation Insights Elasticsearch settings, make sure that you do not send more than 950 distinct tracking data fields.
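A guard before emitting tracking data can be sketched as follows; the function name and payload shape are hypothetical, and the 950-field budget comes from the text above.

```python
MAX_TRACKING_FIELDS = 950  # headroom under the default 1000-field Elasticsearch index limit

def check_tracking_fields(tracking_data):
    """tracking_data: dict of tracking field name -> value.

    Raise if the payload would push the index past the safe budget.
    """
    if len(tracking_data) > MAX_TRACKING_FIELDS:
        raise ValueError(
            f"{len(tracking_data)} tracking fields exceed the budget of {MAX_TRACKING_FIELDS}"
        )
    return tracking_data
```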
Sending multiple emails with the Send email skill might not be possible because of mail provider restrictions. Every time the Send email skill runs, a connection is opened, a mail is sent, and the connection is closed. This approach is not suitable for sending multiple emails because of mail provider limitations (for example, the limit for a Gmail account is 100 emails per day).
External services can have limitations that prevent you from running skills in parallel. If you run several skills in parallel, you must make sure that those skills can support it. For example, IBM Watson® Visual Recognition returns the error "Too Many requests" if you do not respect the product limitations.
If you create an Array object in the task instructions and pass it into a skill, array instanceof Array returns false in the code of the skill. However, you can still access the array elements as expected.
You must scale the number of Pods (replicas) if you reach 1 CPU per Pod, because adding more CPU does not help scaling.
You must log in again to access the Kibana performance dashboard. When you open your performance dashboard from the Monitor view in Digital Worker, you might need to log in again to access the Kibana dashboard. This is because Business Automation Insights does not use UMS.
Only Coordinated Universal Time is supported when you schedule tasks to run. You must use Coordinated Universal Time when you set the schedule for a task, and not your local time.
OpenID subject name is displayed as the owner of a task. In Digital Worker, the name that is displayed as the owner of a task is the user's OpenID subject, and not the user's full name.
Schedules for tasks are removed when you undeploy. When you undeploy a task, if there is a schedule set for this task, it is removed. You must set it again when you redeploy the task.
When you deploy a skill that you developed with the skill toolkit, you must bundle any private packages not available on public NPM with the skill. If you use private packages that are not available on public NPM when you deploy skills that you developed from scratch, you must bundle those private packages with your skill. Otherwise, the deployment of your skill fails.
Public internet connection is required. The cluster on which the skills are installed must be connected to the public internet.
Run results are retained for a limited time. When you run a task, you can access the results for at least one hour after the start of the run. Results are removed 24 hours after the run starts.
A client_ID shared across several applications is not supported. If the client_ID exists but the redirect_URL changes, Digital Worker installs without error. However, the designer displays an error that states that the redirect URL is not correct. To resolve this error, reinstall Digital Worker with a new client_ID, or delete the current client_ID first and then reinstall.

To avoid this issue, do not use the same client_ID in several applications.

IBM Business Automation Studio and IBM Business Automation Application Engine (Application Engine)

Table 3. Limitations of Business Automation Studio and Application Engine
Limitation Description
 New in 20.0.2   
Process applications from Business Automation Workflow on container are not appearing in Application Designer. Sometimes the app resources of the Business Automation Workflow server on container instances don't appear in Business Automation Studio when you deploy Workflow server instances, Business Automation Studio, or Resource Registry in the same custom resource YAML file by using the operator.

If you deployed Business Automation Studio with the Business Automation Workflow server on container instances in the same custom resource YAML file, but you don't see process applications from Business Automation Workflow server on container instances in Business Automation Studio, restart the Business Automation Workflow server on the container instance pod.

The Business Automation Workflow toolkit and configurators might not get imported properly. When you install both Business Automation Workflow on containers and Business Automation Studio together, the Business Automation Workflow toolkit and configurators might not get imported properly. If you don't see the Workflow Services toolkit, the Start Process Configurator, or the Call Service Configurator, manually import the .twx files by downloading them from the Contributions table inside the Resource Registry section of the Administration page of Business Automation Studio.
 New in 19.0.2   
Kubernetes kubectl known issue https://github.com/kubernetes/kubernetes/issues/68211. Business Automation Studio related pods go into a CrashLoopBackOff state during the restart of the docker service on a worker node.

If you use the kubectl get pods command to check the pods when a pod is in the CrashLoopBackOff state, you get the following error message:

Warning Failed 3m kubelet, 172.16.191.220 Error: failed to start container: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/69e99228-b890-11e9-81c0-00163e01b43c/volume-subpaths/key-trust-store/ibm-dba-ums/2\\\" to rootfs \\\"/var/lib/docker/overlay2/0f50004756b6e39e6da16f57d7fdb9f72898bc8a5748681218a7b1b20eb0612b/merged\\\" at \\\"/var/lib/docker/overlay2/0f50004756b6e39e6da16f57d7fdb9f72898bc8a5748681218a7b1b20eb0612b/merged/opt/ibm/wlp/usr/shared/resources/security/keystore/jks/server.jks\\\" caused \\\"no such file or directory\\\"\"": unknown

To recover a pod, delete it in the OpenShift console and create a new pod.

To use IBM Business Automation Application Engine (Application Engine) with Db2® for High Availability and Disaster Recovery (HADR), you must have an alternative server available when Application Engine starts.

Application Engine depends on the automatic client reroute (ACR) of the Db2 HADR server to fail over to a standby database server. There must be a successful initial connection to that server when Application Engine starts.

IBM Resource Registry can get out of sync.

If you have more than one etcd server and the data gets out of sync between the servers, you must scale to one node and then scale back to multiple nodes to synchronize Resource Registry.

After you create the Resource Registry, you must keep the replica size.

Because of the design of etcd, changing the replica size can cause data loss. If you must set the replica size, set it to an odd number. If you reduce the pod size, the pods are destroyed one by one slowly to prevent data loss or the cluster getting out of sync.
  • If you update the Resource Registry admin secret to change the username or password, first delete the instance_name-dba-rr-random_value pods so that Resource Registry picks up the updates. Alternatively, you can apply the updates manually with etcd commands.
  • If you update the Resource Registry configurations in the icp4acluster custom resource instance, the update might not affect the Resource Registry pods directly. It affects only the pods that are newly created when you increase the number of replicas.

After you deploy Business Automation Studio or Application Engine, you can't change the Business Automation Studio or Application Engine admin user.

 

Because of a Node.js server limitation, Application Engine trusts only root CA.

If an external service is used and signed with another root CA, you must add the root CA as trusted instead of the service certificate.
  • The certificate can be self-signed, or signed by a well-known root CA.
  • If you are using a depth zero self-signed certificate, it must be listed as a trusted certificate.
  • If you are using a certificate that is signed by a self-signed root CA, the self-signed CA must be in the trusted list. Using a leaf certificate in the trusted list is not supported.
  • If you are adding the root CA of two or more external services to the Application Engine trust list, you can't use the same common name for those root CAs.

Business Automation Studio and Application Engine support only the IBM Db2 database.

 

The Java™ Message Service (JMS) statefulset doesn't support scale.

You must keep the replica size of the JMS statefulset at 1.

User Management Service

 New in 19.0.2 
Table 4. Limitations of User Management Service
Limitation or issue Description
Error message CWWKS1424E When you start UMS for the first time, initialization causes error message CWWKS1424E to be logged twice. You can safely ignore the first two occurrences of this error.

IBM FileNet Content Manager

 New in 19.0.3 
Table 5. Limitations of IBM FileNet Content Manager
Limitation or issue Description
Zero-byte file causes error message (Fixed in 20.0.2) When you deploy the external share container with the operator, but do not use the external LDAP settings, a zero-byte file called ibm_ext_ldap_AD.xml is created inside the container. This file triggers an invalid file error message. To solve the issue, you must manually delete the file from the disk.
Process Engine functions are not supported by UMS integration (Fixed in 20.0.2) If you plan to use Process Engine functions, such as validating workflow, do not configure UMS integration with Content Platform Engine.
Deployment with an operator cannot support creating more than one data source with Oracle database. (Fixed in 20.0.2) Although the custom resource supports more than one data source, the generic secret cannot support more than one, because each data source might have distinct user names and passwords.

For information about manually creating additional data source definitions to include in the configDropins/overrides for the FileNet® Content Manager and Business Automation Navigator containers, see Tuning IBM WebSphere® Liberty for FileNet Content Manager components.

(Fixed in 20.0.2.1) In the 20.0.2.1 version, a new parameter called dc_os_label is added that lets you specify a label per object store definition in the CR YAML. You use that same label in the ibm-fncm-secret that you create to specify separate data source login credentials for each object store data source. For more information, see Creating secrets to protect sensitive configuration data.

A smaller number of indexing batches than configured leads to a noticeable degradation in the overall indexing throughput rate. Because obsolete Virtual Servers in the Global Configuration Database (GCD) are not automatically cleaned up, that Virtual Server count can be higher than the actual number of CE instances with dispatching enabled. That inflated number results in a smaller number of concurrent batches per CSS server, negatively affecting indexing performance.

For more information and to resolve the issue, see Content Platform Engine uneven CBR indexing workload and indexing degradation.

Multiple test folders are created periodically when the verification container is enabled in the custom resource YAML. (Fixed in 20.0.2) Test files and folders can continue to be created by the verification container. To resolve this, you can remove the verification section of the custom resource YAML after your environment is up and running and reapply the custom resource YAML.
Limitation of Google ID in Edit Service on an External Share desktop In external share deployments that use Google ID for the identity provider authentication, issues can occur when you use the Edit Service to edit or add Microsoft documents. The issue causes login to be unsuccessful.
With Task Manager, files that are part of a deleted Teamspace are unfiled but still searchable In an environment that includes Task Manager, if you delete a Teamspace, files that were part of the Teamspace are still searchable.
(20.0.2) The Task Manager container is not supported by IBM Security Directory Server. (Fixed in 20.0.3) If you want to include Task Manager in your environment, use Microsoft Active Directory for your LDAP provider. In 20.0.3, IBM Security Directory Server is supported.
With Task Manager, database schema customization for Navigator is not supported for 20.0.2 and earlier. For 20.0.3 and later, this is not needed in Task Manager. If you are using Task Manager, the Navigator schema name must be ICNDB.
UMS with Task Manager is not supported. You cannot use UMS with Task Manager.

IBM Content Navigator

 New in 19.0.3 
Table 6. Limitations of IBM Content Navigator
Limitation or issue Description
Cannot upload plug in when Navigator mode is set to 0. (Fixed in 20.0.2) When the Navigator mode is Platform (0), the Upload File Path on Server option is unavailable for uploading a plug-in. To work around this limitation:
  1. Log in to the Navigator admin desktop.
  2. Click Settings, change Navigator Mode to Platform and Content, and click Save.
  3. Refresh the browser to make the change effective.
  4. Reopen the Settings tab to confirm that the Upload File Path on Server setting is available.
  5. (Optional) Change the Navigator Mode setting back to Platform, if needed. The upload path is still valid even if the setting cannot be shown.
Unable to download a TIFF image as a PDF. Downloading a TIFF as a PDF from Navigator is unavailable.
Error that service is unavailable when you configure the Share plug-in in the Navigator admin desktop. (Fixed in 20.0.3) When you try to configure the Share plug-in in the admin desktop, you might encounter an error like the following: "The service is unavailable".

Start the configuration again. This error usually displays only one time, and you can complete the configuration if you start it again.

IBM Business Automation Insights

  • Table 7 lists the limitations that apply whether IBM Business Automation Insights is deployed to a Kubernetes cluster or to a server outside of a Kubernetes cluster (formerly called IBM Business Automation Insights on a single node).
  • Table 8 lists the limitations that apply only to IBM Business Automation Insights for a server.
Table 7. Limitations common to Kubernetes and non-Kubernetes deployment
Limitation or issue Description
Business Automation Workflow Process Emitter and Case Emitter cannot be enabled together with Event Streams configured.  New in 20.0.2  When both emitters are enabled, the Case Emitter cannot send events to Event Streams. Multiple exceptions are reported in the liberty-messages.log file.

When you configure emitters with Event Streams in Business Automation Workflow on containers, enable either the Business Automation Workflow Process Emitter or Case Emitter, not both.

Security of communications to Kafka
  •  For 20.0.1  Salted Challenge Response Authentication Mechanism (SCRAM) is not supported. Plain SSL, SSL with username and password, SSL with Kerberos authentication, and Kerberos authentication are supported.
  •  New in 20.0.2  Salted Challenge Response Authentication Mechanism (SCRAM) is supported only in its SCRAM-SHA-512 variant. Plain SSL, SSL with username and password, SSL with Kerberos authentication, and Kerberos authentication are supported.
  •  New in 20.0.3  Kafka Kerberos configuration is not supported.
Elasticsearch Docker images Elasticsearch and Kibana Docker images with X-Pack installed are not supported. IBM Business Automation Insights 19.0.2 and later supports taking snapshots of and restoring the internal user database, action groups, roles, and role mappings. However, alternative authentication methods through SAML or OpenID Connect are provided as a technology preview, that is, without any support from IBM.
Elasticsearch indexes

Defining a high number of fields in an Elasticsearch index can lead to a mappings explosion, which can cause out-of-memory errors and situations that are difficult to recover from. The maximum number of fields in Elasticsearch indexes that are created by IBM Business Automation Insights is set to 1000. Field and object mappings, and field aliases, count toward this limit. Ensure that the documents that are stored in Elasticsearch indexes do not reach this limit.

For more information, see BPMN summary event formats, Case and activity summary event formats, and the Settings to prevent mappings explosion page of the Elasticsearch documentation.
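As a sketch of how the limit is reached, the following counts fields in an Elasticsearch-style mapping, where nested object fields also count toward the total; the sample mapping is illustrative only.

```python
def count_mapping_fields(properties):
    """Recursively count fields in an Elasticsearch mapping's 'properties'
    dict; object and nested fields count toward the 1000-field limit too
    (as do field aliases, not shown here)."""
    total = 0
    for field_def in properties.values():
        total += 1
        if "properties" in field_def:  # object or nested field
            total += count_mapping_fields(field_def["properties"])
    return total

mapping = {
    "caseId": {"type": "keyword"},
    "activity": {
        "properties": {
            "name": {"type": "text"},
            "elapsed": {"type": "long"},
        }
    },
}
print(count_mapping_fields(mapping))  # prints 4
```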

Case dashboard elapsed time calculations do not include late events: Average elapsed Time of completed activities and Average elapsed time of completed cases widgets.  New in 18.0.2 : Events that are emitted after a case or activity completes are ignored.
 New in 18.0.2  Case activity charts If you installed IBM Business Automation Insights with the Case event emitter from IBM Business Automation Workflow 18.0.0.2 or earlier, or from IBM Case Manager 5.3.3 interim fix IF003 or earlier, case activity charts might not reflect correct data. To avoid the issue, use the case event emitter from IBM Business Automation Workflow 18.0.0.2 interim fix 8.6.10018002-WS-BPM-IFPJ45625 or IBM Business Automation Workflow 19.0.0.1 or higher.

If some older Elasticsearch or HDFS data is pushed by the older Case event emitter, follow the replay procedure to clear the old data. Use the new Case event emitter that is released with IBM Business Automation Workflow 18.0.0.2 interim fix 8.6.10018002-WS-BPM-IFPJ45625, or IBM Business Automation Workflow 19.0.0.1 or higher fix pack to push the events.

For more information about the replay procedure, see Replaying Case events.

The User tasks currently not completed widget in the BPEL Tasks dashboard doesn't display any results. The search that is used by the widget does not return any results because it uses an incorrect filter for the task state.

To avoid this issue, edit the filter in the User tasks currently waiting to be processed search. Set the state filter to accept one of the following values: TASK_CREATED, TASK_STARTED, TASK_CLAIM_CANCELLED, TASK_CLAIMED.

Historical Data Playback REST API The API plays back data only from closed processes (completed or terminated). Active processes are not handled.
 New in 19.0.1  Alerting feature on Kibana graphical user interface not usable. In the Kibana interface, the Alerting menu item for monitoring your data and automatically sending alert notifications is present but the underlying feature is not enabled.
 New in 19.0.3 
Table 8. Limitations to single node/server deployment
Limitation or issue Description
Elasticsearch and Kafka You can use the embedded Confluent Kafka distribution or an external Kafka installation but you can use only embedded Elasticsearch.
Security of communications to Kafka  New in 20.0.2  When support for the embedded Kafka server is disabled, the connection to the external Kafka server is authenticated only with the SASL_SSL protocol and custom events are not supported.
Apache ZooKeeper The version of ZooKeeper that is bundled with Confluent Kafka does not support SSL. For more information, see the ZooKeeper page of the Confluent Kafka documentation.