System requirements

All the Cloud Pak containers are based on Red Hat Universal Base Images (UBI), and are Red Hat and IBM certified. To use the Cloud Pak images, the administrator must make sure that the target cluster on Red Hat OpenShift Container Platform has the capacity for all of the capabilities that you plan to install.

For each stage in your operations (a minimum of three stages is expected: development, preproduction, and production), you must allocate a cluster of nodes before you install the Cloud Pak. Development, preproduction, and production stages are best run on different compute nodes. To achieve resource isolation, each namespace acts as a virtual cluster within the physical cluster, and a Cloud Pak deployment is scoped to a single namespace. High-level resource objects are scoped to namespaces. Low-level resources, such as nodes and persistent volumes, are not.

Note: Use the shared_configuration.sc_deployment_license parameter to define the purpose of the "custom" deployment type (shared_configuration.sc_deployment_type). Valid values are production and non-production.
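
For example, a minimal sketch of how these parameters appear in the custom resource; the apiVersion, kind, and metadata values shown here are illustrative, so take them from your own custom resource template:

apiVersion: icp4a.ibm.com/v1
kind: ICP4ACluster
metadata:
  name: icp4adeploy
spec:
  shared_configuration:
    sc_deployment_type: "custom"
    sc_deployment_license: "non-production"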

The Detailed system requirements page provides a cluster requirements guideline for IBM Cloud Pak® for Business Automation.

The minimum cluster configuration and physical resources that are needed to run the Cloud Pak include the following elements:
  • Hardware architecture: Intel (amd64 or x86_64, the 64-bit edition for Linux® x86) on all platforms, or Linux on IBM Z.
  • Node counts: Dual compute nodes for nonproduction and production clusters. A minimum of three nodes is needed for medium and large production environments and for large test environments. Any cluster configuration needs to be adapted to the size of the projects and the expected workload.
A cluster where you want to install all of the capabilities needs, at a minimum:
  • Master (3 nodes): 4 vCPU and 8 Gi memory on each node.
  • Worker (8 nodes): 16 vCPU and 32 Gi memory on each node.

Based on your cluster requirement, you can pick a deployment profile (sc_deployment_profile_size) and enable it during installation. Cloud Pak for Business Automation provides small, medium, and large deployment profiles. You can set the profile during installation, in an update, and during an upgrade.

The default profile is small. Before you install the Cloud Pak, you can change the profile to medium or large. You can scale a profile up or down anytime after installation. However, if you install with a medium profile and another Cloud Pak specifies a medium or large profile, then scaling down to small does not change the profile of the foundational services; it remains as it is. You can scale down the foundational services to small only if no other Cloud Pak specifies a medium or large size. For more information, see Setting the common services profile.
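
For example, a hedged sketch of setting the profile in the custom resource, and of changing it after installation with an oc patch; the resource name icp4adeploy is a placeholder, so use the values from your own deployment:

spec:
  shared_configuration:
    sc_deployment_profile_size: "medium"

oc patch ICP4ACluster icp4adeploy --type=merge -p '{"spec":{"shared_configuration":{"sc_deployment_profile_size":"medium"}}}'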

Note: If you use a shared foundational services instance for multiple CP4BA deployments, the amount of resources that is needed is less than with dedicated instances (one per CP4BA deployment). If you plan to use dedicated instances, be aware that additional resources are needed.

It is recommended that you set the IBM Cloud Platform UI (Zen) service to the same size as Cloud Pak for Business Automation. The possible values are small, medium, and large. For example, the following command sets the Zen service to large:

oc patch AutomationUIConfig iaf-system --type=merge -p '{"spec":{"zenService":{"scaleConfig":"large"}}}'
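
To confirm the setting, you can read the value back, assuming the same resource name:

oc get AutomationUIConfig iaf-system -o jsonpath='{.spec.zenService.scaleConfig}'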

The following table describes each deployment profile.

Table 1. Deployment profiles and estimated workloads
Small (no HA)
  Description: For environments that are used by 10 developers and 25 users, such as a single department with a few users; useful for application development.
  Scaling (per 8-hour day):
  • Processes 10,000 documents
  • Processes 5,000 workflows
  • Processes 5,000 transactions
  • Processes 500,000 decisions
  • Supports failover
  • StatefulSet provides high availability (HA)
  Minimum number of worker nodes: 8

Medium
  Description: For environments that are used by 20 developers and 125 users, such as a single department with limited users.
  Scaling (per 8-hour day):
  • Processes 100,000 documents
  • Processes 25,000 workflows
  • Processes 25,000 transactions
  • Processes 2,000,000 decisions
  • Supports HA and failover
  • Provides at least two replicas of most services, if failover is configured
  Minimum number of worker nodes: 16

Large
  Description: For environments that are used by 50 developers and 625 users, such as environments that are shared by multiple departments and users.
  Scaling (per 8-hour day):
  • Processes 1,000,000 documents
  • Processes 125,000 workflows
  • Processes 125,000 transactions
  • Processes 5,000,000 decisions
  • Supports HA and failover
  • Provides at least two replicas of most services, if failover is configured
  Minimum number of worker nodes: 32

You can use custom resource templates to update the hardware requirements of the services that you want to install.
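
The exact parameter names differ by capability, so check the custom resource template of the component that you deploy. As an illustration only, the following sketch overrides the FileNet Content Manager CPE defaults from Table 11 by using the *_production_setting style of keys; the key names are assumptions to verify against your template:

spec:
  ecm_configuration:
    cpe:
      cpe_production_setting:
        cpu_request: "1000m"
        cpu_limit: "1000m"
        memory_request: "3072Mi"
        memory_limit: "3072Mi"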

The following sections provide the default resources for each capability. For more information about the minimum requirements of foundational services, see Hardware requirements and recommendations for foundational services.

Attention: The values in the hardware requirements tables were derived under specific operating and environment conditions. The information is accurate under the given conditions, but results that are obtained in your operating environments might vary significantly. Therefore, IBM cannot provide any representations, assurances, guarantees, or warranties regarding the performance of the profiles in your environment.

Small profile hardware requirements

  • Table 2 Cloud Pak for Business Automation operator default requirements for a small profile
  • Table 3 Automation Decision Services default requirements for a small profile
  • Table 4 Automation Document Processing default requirements for a small profile
  • Table 5 Automation Workstream Services default requirements for a small profile
  • Table 6 Business Automation Application default requirements for a small profile
  • Table 7 Business Automation Insights default requirements for a small profile
  • Table 8 Business Automation Navigator default requirements for a small profile
  • Table 9 Business Automation Studio default requirements for a small profile
  • Table 10 Business Automation Workflow default requirements with or without Automation Workstream Services for a small profile
  • Table 11 FileNet® Content Manager default requirements for a small profile
  • Table 12 Operational Decision Manager default requirements for a small profile
Table 2. Cloud Pak for Business Automation operator default requirements for a small profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
ibm-cp4a-operator 500 1000 256 1024 1 No
Note: If you plan to install the operator in all namespaces for more than one instance, add more resources. You can use the oc patch csv command to do so:
oc patch csv ibm-cp4a-operator.v22.1.0 --type=json -p '[
  {"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/cpu", "value": "4"},
  {"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory", "value": "8Gi"},
  {"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/cpu", "value": "1500m"},
  {"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/memory", "value": "1600Mi"}
]'
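
To verify the patched values, you can read the resources block back from the cluster service version; the jsonpath mirrors the path that is used in the patch:

oc get csv ibm-cp4a-operator.v22.1.0 -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[0].resources}'
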
Table 3. Automation Decision Services default requirements for a small profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction Ephemeral storage Request Ephemeral storage Limit
ads-runtime 500 1000 2048 3072 1 Yes 100Mi 500Mi
ads-credentials 250 1000 800 1536 1 No 100Mi 300Mi
ads-embedded-build 500 2000 1024 2048 1 No 1Gi 1.5Gi
ads-download 100 300 200 200 1 No 100Mi 300Mi
ads-front 100 300 256 256 1 No 100Mi 300Mi
ads-gitservice 500 1000 800 1536 1 No 400Mi 500Mi
ads-parsing 250 1000 800 1536 1 No 100Mi 300Mi
ads-restapi 500 1000 800 1536 1 No 100Mi 400Mi
ads-run 500 1000 800 1536 1 No 100Mi 400Mi
Note: Automation Decision Services also creates some jobs that request 200m CPU and 256Mi Memory. The following jobs are created at the beginning of the installation and do not last long:
  • ads-ltpa-creation
  • ads-runtime-bai-registration
  • ads-ads-runtime-zen-translation-job
  • ads-designer-zen-translation-job

The ads-rr-integration and ads-ads-rr-as-runtime-synchro jobs are started every 15 minutes, and are also short-lived.

Table 4. Automation Document Processing default requirements for a small profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
OCR Extraction 200 1000 2048 3072 (4608 from 22.0.1-IF005) 6 Yes
Classify Process 200 500 400 2048 1 Yes
Processing Extraction 500 1000 1024 3584 2 Yes
Natural Language Extractor 200 500 600 1440 2 Yes
Callerapi 200 600 600 1024 1 No
PostProcessing 200 600 400 800 1 No
Setup 200 600 600 1024 2 No
Deep Learning 1000 2000 3072 10240 (15360 from 22.0.1-IF005) 2 No
UpdateFileDetail 200 600 400 600 1 No
Backend 200 600 400 1024 2 No
Redis 100 250 100 640 1 No
RabbitMQ 100 1000 100 1024 2 No
Common Git Gateway Service (git-service) 500 1000 512 1536 1 No
Content Designer Repo API (CDRA) 500 1000 1024 3072 1 No
Content Designer UI and REST (CDS) 500 1000 512 3072 1 No
Content Project Deployment Service (CPDS) 500 1000 512 3072 1 No
Mongo database (mongodb) 500 1000 512 1024 1 No
Viewer service (viewone) 500 1000 1024 3072 1 No
One Conversion 200 1000 100 4096 1 Yes
Important:
  • Document Processing - The optional Deep Learning container can use an NVIDIA GPU if one is available. NVIDIA is the only supported GPU for Deep Learning in the Document Processing pattern. The GPU worker nodes must have a unique label, for example ibm-cloud.kubernetes.io/gpu-enabled:true. You add this label value to the deployment script or to the YAML file of your custom resource when you configure the YAML for deployment (see the node label example after this list). To install the NVIDIA GPU operator, follow these installation instructions. For high availability, you need a minimum of two GPUs so that two replicas of the Deep Learning pod can be started. You can change the replica count to 1 if you have only one GPU on the node.
  • For Document Processing, the CPU of the worker nodes must meet TensorFlow AVX requirements. For more information, see Hardware requirements for TensorFlow with pip.
  • The One Conversion (optional) container uses Scene Text Recognition (STR) to recognize identity cards.
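
For example, a node can be labeled from the command line before you deploy; the node name is a placeholder, and the label key and value must match what you reference in your custom resource:

oc label node <gpu-worker-node> ibm-cloud.kubernetes.io/gpu-enabled=true
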
Note:
  • Each Processing Extraction pod uses an extra 50Mi of RAM for the tmpfs volume mount with the type of Memory.
  • Document Processing requires databases for project configuration and processing. These databases must be Db2. The hardware and storage requirements for the databases depend on the system load for each document processing project.
  • (Deprecated) When the global.ocrextraction.id_card_detection.enabled parameter is set to true, the default RAM resource for the OCR extraction pod is set to 1Gi/7Gi.
  • When the deep_learning_object_detection.enabled parameter is set to true, the default RAM resource for the OCR extraction pod is set to 1Gi/3Gi (From 22.0.1-IF005 it is set to 1Gi/4Gi).
  • If you process only fixed-format documents, you might improve performance by disabling deep learning object detection. For more information about the system requirements for Content Analyzer components in this scenario, see IBM Automation Document Processing system requirements when disabling deep learning object detection for fixed-format documents.
  • If you deploy with only the document_processing pattern, you can reduce the sizing for some of the required components. For more information, see IBM Automation Document Processing system requirements for a light production deployment (document_processing pattern only).
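
A hedged sketch of how these Document Processing toggles might look in the custom resource; the enclosing ca_configuration section and the nesting shown here are assumptions, so verify the exact location in your Document Processing custom resource template:

spec:
  ca_configuration:
    global:
      ocrextraction:
        id_card_detection:
          enabled: false
    deep_learning_object_detection:
      enabled: false
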
Table 5. Automation Workstream Services default requirements for a small profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
Workflow Server 500 2000 2048 3060 1 Yes
Java™ Message Service 100 1000 512 1024 1 No
Process Federation Server operator 100 500 20 1024 1 No
Process Federation Service 200 1000 512 1024 1 No
Process Federation Service-dbareg 50 100 512 512 1 No
Elasticsearch Service 500 800 820 2048 1 No
Notes:
Automation Workstream Services also creates some jobs that request 200m CPU and 128Mi Memory:
  • basimport-job is created only with Business Automation Studio.
  • content-init-job
  • db-init-job-pfs
  • ltpa-job
  • oidc-registry-job
  • workplace-init-job
The db-init-job requests 500m CPU and 512Mi Memory.
Table 6. Business Automation Application default requirements for a small profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
App Engine 300 500 256 1024 1 Yes/No
Resource Registry 100 500 256 512 1 No
Table 7. Business Automation Insights default requirements for a small profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
Business Performance Center 100 4000 512 2000 1 Yes/No
Flink task managers 1000 1000 1728 1728 Default parallelism (2) Yes/No
Flink job manager 1000 1000 1728 1728 1 No
Administration REST API (Optional) 100 500 50 120 2 No
Management REST API 100 1000 50 120 2 No
Management back end (second container of the same management pod as the previous one) 100 500 350 512 2 No
Note: Business Automation Insights relies on Kafka, Apicurio, and Elasticsearch from IBM Automation foundation. For information about their system requirements, see the System requirements page of the IBM Automation foundation documentation. Business Automation Insights also creates the bai-setup and iaf-insights-engine-application-setup Kubernetes jobs and requests 200m for CPU and 350Mi for memory. The CPU and memory limits are set equal to the requests. The pods of these Kubernetes jobs run for a short time at the beginning of the installation, then complete, thus freeing the resources.
Table 8. Business Automation Navigator default requirements for a small profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
Navigator 1000 1000 3072 3072 1 No
Table 9. Business Automation Studio default requirements for a small profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
App Engine playback 300 500 256 1024 1 No
BAStudio 1100 2000 1752 3072 1 No
Resource Registry 100 500 256 512 1 No
Table 10. Business Automation Workflow default requirements with or without Automation Workstream Services for a small profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
Workflow Server 500 2000 2048 3060 1 Yes
Workflow Authoring 500 2000 2048 3072 1 No
Java Message Service 100 1000 512 1024 1 No
Process Federation Server operator 100 500 20 1024 1 No
Process Federation Service 200 1000 512 1024 1 No
Process Federation Service-dbareg 50 100 512 512 1 No
Elasticsearch Service 500 800 820 2048 1 No
Intelligent Task Prioritization 500 2000 1024 2560 1 No
Workforce Insights 500 2000 1024 2560 1 No
Notes:

Intelligent Task Prioritization and Workforce Insights are optional and are not supported on Linux on IBM Z.

Business Automation Workflow also creates some jobs that request 200m CPU and 128Mi Memory:
  • basimport-job is created only with Business Automation Studio.
  • case-init-job
  • content-init-job
  • db-init-job-pfs
  • ltpa-job
  • oidc-registry-job
  • oidc-registry-job-for-webpd is created only with workflow center.
  • workplace-init-job
The db-init-job requests 500m CPU and 512Mi Memory.
Table 11. FileNet Content Manager default requirements for a small profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
CPE 1000 1000 3072 3072 1 Yes
CSS 1000 1000 4096 4096 1 Yes
Enterprise Records (ER) 500 1000 1536 1536 1 Yes
Content Collector for SAP (CC4SAP) 500 1000 1536 1536 1 Yes
CMIS 500 1000 1536 1536 1 No
GraphQL 500 1000 1536 1536 1 No
External Share 500 1000 1536 1536 1 No
Task Manager 500 1000 1536 1536 1 No
Note: Not all containers are used in every workload. If a feature like external sharing of documents or the Content Services GraphQL API is not used, that container requires fewer resources or is optionally not deployed.

In high-volume indexing scenarios, where ingested documents are full-text indexed, the CSS utilization can exceed the CPE utilization. In some cases, it might be 3 - 5 times larger.

For optional processing such as thumbnail generation or text filtering, the CPE requires at least 1 GB of native memory for each. If both types of processing are expected, add at least 2 GB to the memory requests and limits for the CPE.

As content is processed, resource requirements increase with the complexity and size of the content. Increase both memory and CPU for the CPE and CSS services to reflect the type and size of documents in your system. Resource requirements might also increase over time as the amount of data in the system grows.

Table 12. Operational Decision Manager default requirements for a small profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction Ephemeral storage Request (Mi) Ephemeral storage Limit (Mi)
Decision Center 1000 1000 4096 4096 1 Yes 50 500
Decision Runner 500 500 1024 2048 1 Yes 50 500
Decision Server Runtime 500 1000 2048 2048 1 Yes 50 500
Decision Server Console 500 500 512 1024 1 No 50 500
Note: Operational Decision Manager also creates an odm-oidc-job-registration job that requests 200m CPU and 256Mi Memory. The pod is created at the beginning of the installation and does not last long.

Medium profile hardware requirements

  • Table 13 Cloud Pak for Business Automation operator default requirements for a medium profile
  • Table 14 Automation Decision Services default requirements for a medium profile
  • Table 15 Automation Document Processing default requirements for a medium profile
  • Table 16 Automation Workstream Services default requirements for a medium profile
  • Table 17 Business Automation Application default requirements for a medium profile
  • Table 18 Business Automation Insights default requirements for a medium profile
  • Table 19 Business Automation Navigator default requirements for a medium profile
  • Table 20 Business Automation Studio default requirements for a medium profile
  • Table 21 Business Automation Workflow default requirements with or without Automation Workstream Services for a medium profile
  • Table 22 FileNet Content Manager default requirements for a medium profile
  • Table 23 Operational Decision Manager default requirements for a medium profile
Table 13. Cloud Pak for Business Automation operator default requirements for a medium profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
ibm-cp4a-operator 500 1000 256 1024 1 No
Note: If you plan to install the operator in all namespaces for more than one instance, add more resources. You can use the oc patch csv command to do so:
oc patch csv ibm-cp4a-operator.v22.1.0 --type=json -p '[
  {"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/cpu", "value": "4"},
  {"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory", "value": "8Gi"},
  {"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/cpu", "value": "1500m"},
  {"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/memory", "value": "1600Mi"}
]'
Table 14. Automation Decision Services default requirements for a medium profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction Ephemeral storage Request Ephemeral storage Limit
ads-runtime 1000 2000 2048 3072 1 Yes 100Mi 1.5Gi
ads-credentials 250 1000 800 1536 1 No 100Mi 400Mi
ads-embedded-build 500 2000 1024 2048 1 No 1Gi 1.5Gi
ads-download 100 300 200 200 1 No 100Mi 400Mi
ads-front 100 300 256 256 1 No 100Mi 400Mi
ads-gitservice 500 1000 800 1536 1 No 400Mi 600Mi
ads-parsing 250 1000 800 1536 1 No 100Mi 400Mi
ads-restapi 500 1000 800 1536 1 No 100Mi 400Mi
ads-run 500 1000 800 1536 1 No 100Mi 400Mi
Note: Automation Decision Services also creates some jobs that request 200m CPU and 256Mi Memory. The following jobs are created at the beginning of the installation and do not last long:
  • ads-ltpa-creation
  • ads-runtime-bai-registration
  • ads-ads-runtime-zen-translation-job
  • ads-designer-zen-translation-job

The ads-rr-integration and ads-ads-rr-as-runtime-synchro jobs are started every 15 minutes, and are also short-lived.

Table 15. Automation Document Processing default requirements for a medium profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of Replicas Pods are licensed for production/nonproduction
OCR Extraction 200 1000 2048 3072 (4608 from 22.0.1-IF005) 9 Yes
Classify Process 200 500 400 2048 2 Yes
Processing Extraction 500 1000 1024 3584 2 Yes
Natural Language Extractor 200 500 600 1440 2 Yes
Callerapi 200 600 600 1024 2 No
PostProcessing 200 600 400 800 2 No
Setup 200 600 600 1024 4 No
Deep Learning 1000 2000 3072 10240 (15360 from 22.0.1-IF005) 2 No
UpdateFileDetail 200 600 400 600 2 No
Backend 200 600 400 1024 4 No
Redis 100 250 100 640 1 No
RabbitMQ 100 1000 100 1024 3 No
Common Git Gateway Service (git-service) 500 1000 512 1536 1 No
Content Designer Repo API (CDRA) 500 1000 1024 3072 2 No
Content Designer UI and REST (CDS) 500 1000 512 3072 2 No
Content Project Deployment Service (CPDS) 500 1000 512 3072 2 No
Mongo database (mongodb) 500 1000 512 1024 1 No
Viewer service (viewone) 500 2000 1024 4096 2 No
One Conversion 200 1000 100 4096 2 Yes
Important:
  • Document Processing - The optional Deep Learning container can use an NVIDIA GPU if one is available. NVIDIA is the only supported GPU for Deep Learning in the Document Processing pattern. The GPU worker nodes must have a unique label, for example ibm-cloud.kubernetes.io/gpu-enabled:true. You add this label value to the deployment script or to the YAML file of your custom resource when you configure the YAML for deployment. To install the NVIDIA GPU operator, follow these installation instructions. For high availability, you need a minimum of two GPUs so that two replicas of the Deep Learning pod can be started. You can change the replica count to 1 if you have only one GPU on the node.
  • For Document Processing, the CPU of the worker nodes must meet TensorFlow AVX requirements. For more information, see Hardware requirements for TensorFlow with pip.
  • The One Conversion (optional) container uses Scene Text Recognition (STR) to recognize identity cards.
Note:
  • Each Processing Extraction pod uses an extra 50Mi of RAM for the tmpfs volume mount with the type of Memory.
  • Document Processing requires databases for project configuration and processing. These databases must be Db2. The hardware and storage requirements for the databases depend on the system load for each document processing project.
  • (Deprecated) When the global.ocrextraction.id_card_detection.enabled parameter is set to true, the default RAM resource for the OCR extraction pod is set to 1Gi/7Gi.
  • When the deep_learning_object_detection.enabled parameter is set to true, the default RAM resource for the OCR extraction pod is set to 1Gi/3Gi (From 22.0.1-IF005 it is set to 1Gi/4Gi).
  • If you process only fixed-format documents, you might improve performance by disabling deep learning object detection. For more information about the system requirements for Content Analyzer components in this scenario, see IBM Automation Document Processing system requirements when disabling deep learning object detection for fixed-format documents.
  • If you deploy with only the document_processing pattern, you can reduce the sizing for some of the required components. For more information, see IBM Automation Document Processing system requirements for a light production deployment (document_processing pattern only).
Table 16. Automation Workstream Services default requirements for a medium profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
Workflow Server 500 2000 2560 3512 2 Yes
Java Message Service 200 1000 512 2048 1 No
Process Federation Server operator 100 500 20 1024 1 No
Process Federation Service 200 1000 512 1024 2 No
Process Federation Service-dbareg 50 100 512 512 1 No
Elasticsearch Service 500 1000 3512 5120 3 No
Notes:
Automation Workstream Services also creates some jobs that request 200m CPU and 128Mi Memory:
  • basimport-job is created only with Business Automation Studio.
  • content-init-job
  • db-init-job-pfs
  • ltpa-job
  • oidc-registry-job
  • workplace-init-job
The db-init-job requests 500m CPU and 512Mi Memory.
Table 17. Business Automation Application default requirements for a medium profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
App Engine 300 500 256 1024 3 Yes/No
Resource Registry 100 500 256 512 3 No
Table 18. Business Automation Insights default requirements for a medium profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
Business Performance Center 100 4000 512 2000 1 Yes/No
Flink task managers 1000 1000 1728 1728 Default parallelism (2) Yes/No
Flink job manager 1000 1000 1728 1728 1 No
Administration REST API (Optional) 100 500 50 120 2 No
Management REST API 100 1000 50 120 2 No
Management back end (second container of the same management pod as the previous one) 100 500 350 512 2 No
Note: Business Automation Insights relies on Kafka, Apicurio, and Elasticsearch from IBM Automation foundation. For information about their system requirements, see the System requirements page of the IBM Automation foundation documentation. Business Automation Insights also creates the bai-setup and iaf-insights-engine-application-setup Kubernetes jobs and requests 200m for CPU and 350Mi for memory. The CPU and memory limits are set equal to the requests. The pods of these Kubernetes jobs run for a short time at the beginning of the installation, then complete, thus freeing the resources.
Table 19. Business Automation Navigator default requirements for a medium profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
Navigator 2000 3000 4096 4096 2 No
Table 20. Business Automation Studio default requirements for a medium profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
App Engine playback 300 500 256 1024 2 No
BAStudio 1000 2000 1752 3072 2 No
Resource Registry 100 500 256 512 3 No
Table 21. Business Automation Workflow default requirements with or without Automation Workstream Services for a medium profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
Workflow Server 500 2000 2560 3512 2 Yes
Workflow Authoring 500 4000 1024 3072 1 No
Java Message Service 100 1000 512 1024 1 No
Process Federation Server operator 100 500 20 1024 1 No
Process Federation Service 200 1000 512 1024 2 No
Process Federation Service-dbareg 50 100 512 512 1 No
Elasticsearch Service 500 1000 3512 5120 3 No
Intelligent Task Prioritization 500 2000 1024 2560 2 No
Workforce Insights 500 2000 1024 2560 2 No
Notes:

Intelligent Task Prioritization and Workforce Insights are optional and are not supported on Linux on IBM Z.

Business Automation Workflow also creates some jobs that request 200m CPU and 128Mi Memory:
  • basimport-job is created only with Business Automation Studio.
  • case-init-job
  • content-init-job
  • db-init-job-pfs
  • ltpa-job
  • oidc-registry-job
  • oidc-registry-job-for-webpd is created only with workflow center.
  • workplace-init-job
The db-init-job requests 500m CPU and 512Mi Memory.
Table 22. FileNet Content Manager default requirements for a medium profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
CPE 1500 2000 3072 3072 2 Yes
CSS 1000 2000 8192 8192 2 Yes
Enterprise Records (ER) 500 1000 1536 1536 2 Yes
Content Collector for SAP (CC4SAP) 500 1000 1536 1536 2 Yes
CMIS 500 1000 1536 1536 2 No
GraphQL 500 2000 3072 3072 3 No
External Share 500 1000 1536 1536 2 No
Task Manager 500 1000 1536 1536 2 No
Note: Not all containers are used in every workload. If a feature like external sharing of documents or the Content Services GraphQL API is not used, that container requires fewer resources or is optionally not deployed.

In high-volume indexing scenarios, where ingested documents are full-text indexed, the CSS utilization can exceed the CPE utilization. In some cases, it might be 3 - 5 times larger.

For optional processing such as thumbnail generation or text filtering, the Content Platform Engine (CPE) requires at least 1 GB of native memory for each. If both types of processing are expected, add at least 2 GB to the memory requests and limits for the CPE.

As content is processed, resource requirements increase with the complexity and size of the content. Increase both memory and CPU for the CPE and CSS services to reflect the type and size of documents in your system. Resource requirements might also increase over time as the amount of data in the system grows.

Table 23. Operational Decision Manager default requirements for a medium profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction Ephemeral storage Request (Mi) Ephemeral storage Limit (Gi)
Decision Center 1000 1000 4096 8192 2 Yes 50 2
Decision Runner 500 2000 2048 2048 2 Yes 50 2
Decision Server Runtime 2000 2000 2048 2048 3 Yes 50 2
Decision Server Console 500 2000 512 2048 1 No 50 2
Note: Operational Decision Manager also creates an odm-oidc-job-registration job that requests 200m CPU and 256Mi Memory. The pod is created at the beginning of the installation and does not last long.

To achieve high availability, you must adapt the cluster configuration and physical resources. You can set up a Db2® High Availability Disaster Recovery (HADR) database. For more information, see Preparing your environment for disaster recovery. For high availability and fault tolerance to be effective, set the number of replicas that you need in the respective configuration parameters in your custom resource file (see the sketch after this paragraph). The operator then manages the scaling.
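
As an illustration only, a replica override in the custom resource might look like the following sketch; the replica_count parameter name and its position are assumptions, so use the parameter that your capability documents for replicas:

spec:
  ecm_configuration:
    cpe:
      replica_count: 2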

Large profile hardware requirements

  • Table 24 Cloud Pak for Business Automation operator default requirements for a large profile
  • Table 25 Automation Decision Services default requirements for a large profile
  • Table 26 Automation Document Processing default requirements for a large profile
  • Table 27 Automation Workstream Services default requirements for a large profile
  • Table 28 Business Automation Application default requirements for a large profile
  • Table 29 Business Automation Insights default requirements for a large profile
  • Table 30 Business Automation Navigator default requirements for a large profile
  • Table 31 Business Automation Studio default requirements for a large profile
  • Table 32 Business Automation Workflow default requirements with or without Automation Workstream Services for a large profile
  • Table 33 FileNet Content Manager default requirements for a large profile
  • Table 34 Operational Decision Manager default requirements for a large profile
Table 24. Cloud Pak for Business Automation operator default requirements for a large profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
ibm-cp4a-operator 500 1000 256 1024 1 No
Note: If you plan to install the operator in all namespaces for more than one instance, add more resources. You can use the oc patch csv command to do so:
oc patch csv ibm-cp4a-operator.v22.1.0 --type=json -p '[
  {"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/cpu", "value": "4"},
  {"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory", "value": "8Gi"},
  {"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/cpu", "value": "1500m"},
  {"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/memory", "value": "1600Mi"}
]'
Table 25. Automation Decision Services default requirements for a large profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction Ephemeral storage Request Ephemeral storage Limit
ads-runtime 1000 2000 2048 3072 2 Yes 100Mi 1.6Gi
ads-credentials 250 1000 800 1536 2 No 100Mi 500Mi
ads-embedded-build 500 2000 1024 2048 1 No 1Gi 2Gi
ads-download 100 300 200 200 2 No 100Mi 500Mi
ads-front 100 300 256 256 2 No 100Mi 500Mi
ads-gitservice 500 1000 800 1536 2 No 400Mi 800Mi
ads-parsing 250 1000 800 1536 2 No 100Mi 500Mi
ads-restapi 500 1000 800 1536 2 No 100Mi 500Mi
ads-run 500 1000 800 1536 2 No 100Mi 500Mi
Note: Automation Decision Services also creates some jobs that request 200m CPU and 256Mi Memory. The following jobs are created at the beginning of the installation and do not last long:
  • ads-ltpa-creation
  • ads-runtime-bai-registration
  • ads-ads-runtime-zen-translation-job
  • ads-designer-zen-translation-job

The ads-rr-integration and ads-ads-rr-as-runtime-synchro jobs are started every 15 minutes, and are also short-lived.

Table 26. Automation Document Processing default requirements for a large profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of Replicas Pods are licensed for production/nonproduction
OCR Extraction 200 1000 2048 3072 (4608 from 22.0.1-IF005) 17 Yes
Classify Process 200 500 400 2048 2 Yes
Processing Extraction 500 1000 1024 3584 3 Yes
Natural Language Extractor 200 500 600 1440 2 Yes
Callerapi 200 600 600 1024 2 No
PostProcessing 200 600 400 800 2 No
Setup 200 600 600 1024 6 No
Deep Learning 1000 2000 3072 10240 (15360 from 22.0.1-IF005) 2 No
UpdateFileDetail 200 600 400 600 2 No
Backend 200 600 400 1024 6 No
Redis 100 250 100 640 1 No
RabbitMQ 100 1000 100 1024 3 No
Common Git Gateway Service (git-service) 500 1000 512 1536 2 No
Content Designer Repo API (CDRA) 500 1000 1024 3072 3 No
Content Designer UI and REST (CDS) 500 1000 512 3072 3 No
Content Project Deployment Service (CPDS) 500 1000 512 3072 3 No
Mongo database (mongodb) 500 1000 512 1024 1 No
Viewer service (viewone) 1000 3000 3072 6144 2 No
One Conversion 200 1000 100 4096 2 Yes
Important:
  • Document Processing - The optional Deep Learning container can use an NVIDIA GPU if one is available. NVIDIA is the only supported GPU for Deep Learning in the Document Processing pattern. The GPU worker nodes must have a unique label, for example ibm-cloud.kubernetes.io/gpu-enabled:true. You add this label value to the deployment script or to the YAML file of your custom resource when you configure the YAML for deployment. To install the NVIDIA GPU operator, follow these installation instructions. For high availability, you need a minimum of two GPUs so that two replicas of the Deep Learning pod can be started. You can change the replica count to 1 if you have only one GPU on the node.
  • For Document Processing, the CPU of the worker nodes must meet TensorFlow AVX requirements. For more information, see Hardware requirements for TensorFlow with pip.
  • The One Conversion (optional) container uses Scene Text Recognition (STR) to recognize identity cards.
Note:
  • Each Processing Extraction pod uses an extra 50Mi of RAM for the tmpfs volume mount with the type of Memory.
  • Document Processing requires databases for project configuration and processing. These databases must be Db2. The hardware and storage requirements for the databases depend on the system load for each document processing project.
  • (Deprecated) When the global.ocrextraction.id_card_detection.enabled parameter is set to true, the default RAM resource for the OCR extraction pod is set to 1Gi/7Gi.
  • When the deep_learning_object_detection.enabled parameter is set to true, the default RAM resource for the OCR extraction pod is set to 1Gi/3Gi (From 22.0.1-IF005 it is set to 1Gi/4Gi).
  • If you process only fixed-format documents, you might improve performance by disabling deep learning object detection. For more information about the system requirements for Content Analyzer components in this scenario, see IBM Automation Document Processing system requirements when disabling deep learning object detection for fixed-format documents.
  • If you deploy with only the document_processing pattern, you can reduce the sizing for some of the required components. For more information, see IBM Automation Document Processing system requirements for a light production deployment (document_processing pattern only).
Table 27. Automation Workstream Services default requirements for a large profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
Workflow Server 1000 2000 3060 4000 4 Yes
Java Message Service 500 1000 512 1024 1 No
Process Federation Server operator 100 500 20 1024 1 No
Process Federation Service 200 1000 512 1024 2 No
Process Federation Service-dbareg 50 100 512 512 1 No
Elasticsearch Service 1000 2000 3512 5128 3 No
Notes:
Automation Workstream Services also creates some jobs that request 200m CPU and 128Mi Memory:
  • basimport-job is created only with Business Automation Studio.
  • content-init-job
  • db-init-job-pfs
  • ltpa-job
  • oidc-registry-job
  • workplace-init-job
The db-init-job requests 500m CPU and 512Mi Memory.
Table 28. Business Automation Application default requirements for a large profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
App Engine 300 500 256 1024 6 Yes/No
Resource Registry 100 500 256 512 1 No
Table 29. Business Automation Insights default requirements for a large profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
Business Performance Center 100 4000 512 2000 1 Yes/No
Flink task managers 1000 1000 1728 1728 Default parallelism (2) Yes/No
Flink job manager 1000 1000 1728 1728 1 No
Administration REST API (Optional) 100 500 50 120 2 No
Management REST API 100 1000 50 120 2 No
Management back end (second container of the same management pod as the previous one) 100 500 350 512 2 No
Note: Business Automation Insights relies on Kafka, Apicurio, and Elasticsearch from IBM Automation foundation. For information about their system requirements, see the System requirements page of the IBM Automation foundation documentation. Business Automation Insights also creates the bai-setup and iaf-insights-engine-application-setup Kubernetes jobs and requests 200m for CPU and 350Mi for memory. The CPU and memory limits are set equal to the requests. The pods of these Kubernetes jobs run for a short time at the beginning of the installation, then complete, thus freeing the resources.
Table 30. Business Automation Navigator default requirements for a large profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
Navigator 2000 4000 6144 6144 6 No
Table 31. Business Automation Studio default requirements for a large profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
App Engine playback 300 500 256 1024 4 No
BAStudio 2000 4000 1752 3072 2 No
Resource Registry 100 500 256 512 3 No
Table 32. Business Automation Workflow default requirements with or without Automation Workstream Services for a large profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
Workflow Server 1000 2000 3060 4000 4 Yes
Workflow Authoring 1000 2000 2000 3000 2 No
Java Message Service 500 1000 512 1024 1 No
Process Federation Server operator 100 500 20 1024 1 No
Process Federation Service 200 1000 512 1024 2 No
Process Federation Service-dbareg 50 100 512 512 1 No
Elasticsearch Service 1000 2000 3512 5128 3 No
Intelligent Task Prioritization 500 2000 1024 2560 2 No
Workforce Insights 500 2000 1024 2560 2 No
Notes:

Intelligent Task Prioritization and Workforce Insights are optional and are not supported on Linux on IBM Z.

Business Automation Workflow also creates some jobs that request 200m CPU and 128Mi Memory:
  • basimport-job is created only with Business Automation Studio.
  • case-init-job
  • content-init-job
  • db-init-job-pfs
  • ltpa-job
  • oidc-registry-job
  • oidc-registry-job-for-webpd is created only with workflow center.
  • workplace-init-job
The db-init-job requests 500m CPU and 512Mi Memory.
Table 33. FileNet Content Manager default requirements for a large profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction
CPE 3000 4000 8192 8192 2 Yes
CSS 2000 4000 8192 8192 2 Yes
Enterprise Records (ER) 500 1000 1536 1536 2 Yes
Content Collector for SAP (CC4SAP) 500 1000 1536 1536 2 Yes
CMIS 500 1000 1536 1536 2 No
GraphQL 1000 2000 3072 3072 6 No
External Share 500 1000 1536 1536 2 No
Task Manager 500 1000 1536 1536 2 No
Note: Not all containers are used in every workload. If a feature like external sharing of documents or the Content Services GraphQL API is not used, that container requires fewer resources or is optionally not deployed.

In high-volume indexing scenarios, where ingested documents are full-text indexed, the CSS utilization can exceed the CPE utilization. In some cases, it might be 3 - 5 times larger.

For optional processing such as thumbnail generation or text filtering, the CPE requires at least 1 GB of native memory for each. If both types of processing are expected, add at least 2 GB to the memory requests and limits for the CPE.

As content is processed, resource requirements increase with the complexity and size of the content. Increase both memory and CPU for the CPE and CSS services to reflect the type and size of documents in your system. Resource requirements might also increase over time as the amount of data in the system grows.

Table 34. Operational Decision Manager default requirements for a large profile
Component CPU Request (m) CPU Limit (m) Memory Request (Mi) Memory Limit (Mi) Number of replicas Pods are licensed for production/nonproduction Ephemeral storage Request (Mi) Ephemeral storage Limit (Gi)
Decision Center 2000 2000 4096 16384 2 Yes 50 10
Decision Runner 500 4000 2048 2048 2 Yes 50 10
Decision Server Runtime 2000 2000 4096 4096 6 Yes 50 10
Decision Server Console 500 2000 512 4096 1 No 50 10
Note: Operational Decision Manager also creates an odm-oidc-job-registration job that requests 200m CPU and 256Mi Memory. The pod is created at the beginning of the installation and does not last long.

To achieve high availability, you must adapt the cluster configuration and physical resources. You can set up a Db2 High Availability Disaster Recovery (HADR) database. For more information, see Preparing your environment for disaster recovery. For high availability and fault tolerance to be effective, set the number of replicas that you need in the respective configuration parameters in your custom resource file. The operator then manages the scaling.