System requirements
All the Cloud Pak containers are based on Red Hat Universal Base Images (UBI), and are Red Hat and IBM certified. To use the Cloud Pak images, the administrator must make sure that the target cluster on OpenShift Container Platform has the capacity for all of the capabilities that you plan to install.
For each stage in your operations (a minimum of three stages is expected: development, preproduction, and production), you must allocate a cluster of nodes before you install the Cloud Pak. Development, preproduction, and production are stages that are best run on different compute nodes. To achieve resource isolation, each namespace acts as a virtual cluster within the physical cluster, and a Cloud Pak deployment is scoped to a single namespace. High-level resource objects are scoped within namespaces. Low-level resources, such as nodes and persistent volumes, are not in namespaces.
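Because a deployment is scoped to a single namespace, one common approach is a separate namespace (OpenShift project) per stage. A minimal sketch, with illustrative project names:

```
# Create one project (namespace) per stage; the names are illustrative.
oc new-project cp4ba-dev
oc new-project cp4ba-preprod
oc new-project cp4ba-prod

# Namespaced (high-level) resources are listed per project ...
oc get deployments -n cp4ba-dev

# ... while low-level resources such as nodes and persistent volumes
# are cluster-scoped and have no namespace.
oc get nodes
oc get pv
```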
The Detailed system requirements page provides a cluster requirements guideline for IBM Cloud Pak® for Business Automation.
The minimum cluster configuration and physical resources that are needed to run the Cloud Pak include:
- Hardware architecture: Intel (amd64 or x86_64, the 64-bit edition for Linux® x86) on all platforms, or Linux on IBM Z.
- Node counts: Dual compute nodes for non-production and production clusters. A minimum of three nodes is needed for medium and large production environments and large test environments. Adapt the cluster configuration to the size of your projects and the expected workload.
- Master (3 nodes): 4 vCPU and 8 Gi memory on each node.
- Worker (8 nodes): 16 vCPU and 32 Gi memory on each node.
Based on your cluster requirements, you can pick a deployment profile (sc_deployment_profile_size) and enable it during installation. Cloud Pak for Business Automation provides small, medium, and large deployment profiles. You can set the profile during installation, in an update, and during an upgrade. The default profile is small. Before you install the Cloud Pak, you can change the profile to medium or large. You can scale a profile up or down anytime after installation. However, if you install with a medium profile and another Cloud Pak specifies a medium or large profile, the profile for the foundational services remains as it is when you scale down to small. You can scale down the foundational services to small only if no other Cloud Pak specifies a medium or large size. For more information, see Setting the common services profile.
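Changing the profile can be sketched as a merge patch on the Cloud Pak custom resource. This sketch assumes an ICP4ACluster custom resource with the profile under spec.shared_configuration, and uses placeholder values for the CR name and namespace:

```
# Illustrative: set the deployment profile to medium in the CR.
# Replace <cr-name> and <namespace> with your own values.
oc patch icp4acluster <cr-name> -n <namespace> --type=merge \
  -p '{"spec":{"shared_configuration":{"sc_deployment_profile_size":"medium"}}}'
```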
It is recommended that you set the IBM Cloud Platform UI (Zen) service to the same size as Cloud Pak for Business Automation. The possible values are small, medium, and large. For example, to set the Zen service to large:

```
oc patch AutomationUIConfig iaf-system --type=merge -p '{"spec":{"zenService":{"scaleConfig":"large"}}}'
```
The following table describes each deployment profile.
Profile | Description | Scaling (per 8-hour day) | Minimum number of worker nodes |
---|---|---|---|
Small (no HA) | For environments that are used by 10 developers and 25 users, such as a single department with a few users; useful for application development. | | 8 |
Medium | For environments that are used by 20 developers and 125 users, such as a single department with limited users. | | 16 |
Large | For environments that are used by 50 developers and 625 users, such as environments that are shared by multiple departments and users. | | 32 |
You can use custom resource templates to update the hardware requirements of the services that you want to install.
The following sections provide the default resources for each capability. For more information about the minimum requirements of foundational services, see Hardware requirements and recommendations for foundational services.
- Small profile hardware requirements
- Medium profile hardware requirements
- Large profile hardware requirements
The small, medium, and large profiles are derived under specific operating and environment conditions. Due to differences in hardware, networking, and storage, the resources vary for each workload in different environments. It is important that you run a performance test with peak workload in a Test or UAT environment. Monitor the resource usage (for example, CPU and memory) to determine the usage of each component. Based on that usage, adjust the CPU request, CPU limit, memory request, and memory limit in the custom resource for each component.

Small profile hardware requirements
- Table 2 Cloud Pak for Business Automation operator default requirements for a small profile
- Table 3 Automation Decision Services default requirements for a small profile
- Table 4 Automation Document Processing default requirements for a small profile
- Table 5 Automation Workstream Services default requirements for a small profile
- Table 6 Business Automation Application default requirements for a small profile
- Table 7 Business Automation Insights default requirements for a small profile
- Table 8 Business Automation Navigator default requirements for a small profile
- Table 9 Business Automation Studio default requirements for a small profile
- Table 10 Business Automation Workflow default requirements with or without Automation Workstream Services for a small profile
- Table 11 FileNet® Content Manager default requirements for a small profile
- Table 12 Operational Decision Manager default requirements for a small profile
- Table 13 Workflow Process Service default requirements for a small profile
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
ibm-cp4a-operator | 500 | 1000 | 256 | 1024 | 1 | No |
You can use the oc patch csv command to add more resources:

```
oc patch csv ibm-cp4a-operator.v21.0.3 --type=json -p '[
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/cpu",
    "value": "4"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory",
    "value": "8Gi"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/cpu",
    "value": "1500m"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/memory",
    "value": "1600Mi"
  }
]'
```
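To confirm that the patch was applied, you can read the container resources back from the CSV; a sketch:

```
# Print the operator container's resources after patching.
oc get csv ibm-cp4a-operator.v21.0.3 \
  -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[0].resources}'
```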
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
ads-runtime | 500 | 1000 | 2048 | 3072 | 1 | Yes |
ads-credentials | 250 | 1000 | 800 | 1536 | 1 | No |
ads-embedded-build | 500 | 2000 | 1024 | 2048 | 1 | No |
ads-download | 100 | 300 | 200 | 200 | 1 | No |
ads-front | 100 | 300 | 256 | 256 | 1 | No |
ads-gitservice | 500 | 1000 | 800 | 1536 | 1 | No |
ads-parsing | 250 | 1000 | 800 | 1536 | 1 | No |
ads-restapi | 500 | 1000 | 800 | 1536 | 1 | No |
ads-run | 500 | 1000 | 800 | 1536 | 1 | No |
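To size a namespace quota for these pods, you can total the requests from the table. A quick sketch that sums the ADS small-profile CPU requests (one replica each, values taken from the rows above):

```shell
# CPU requests, in millicores, from the ADS small-profile table.
total=0
for m in 500 250 500 100 100 500 250 500 500; do
  total=$((total + m))
done
echo "${total}m"   # total CPU requests: 3200m (3.2 vCPU)
```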
The following jobs run for a short time at the beginning of the installation and then complete:
- ads-ltpa-creation
- ads-runtime-bai-registration
- ads-ads-runtime-zen-translation-job

The ads-rr-integration and ads-ads-rr-as-runtime-synchro jobs are started every 15 minutes, and are also short-lived.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of Replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
OCR Extraction | 200 | 1000 | 1024 | 3072 (from 21.0.3-IF014: 4608; from 21.0.3-IF026: 5120) | 5 | Yes |
Classify Process | 200 | 500 | 400 | 2048 | 2 | Yes |
Processing Extraction | 500 | 1000 | 1024 | 3584 | 2 | Yes |
Natural Language Extractor | 200 | 500 | 600 | 1440 | 2 | Yes |
Callerapi | 200 | 600 | 600 | 1024 | 2 | No |
PostProcessing | 200 | 600 | 400 | 800 | 2 | No |
Setup | 200 | 600 | 600 | 1024 | 2 | No |
Deep Learning | 1000 | 2000 | 3072 | 10240 (from 21.0.3-IF016: 15360) | 2 | No |
UpdateFileDetail | 200 | 600 | 400 | 600 | 2 | No |
Backend | 200 | 600 | 400 | 1024 | 2 | No |
Redis | 100 | 250 | 100 | 640 | 2 | No |
RabbitMQ | 100 | 1000 | 100 | 1024 | 2 | No |
Common Git Gateway Service (git-service) | 500 | 1000 | 512 | 1536 | 1 | No |
Content Designer Repo API (CDRA) | 500 | 1000 | 1024 | 3072 | 1 | No |
Content Designer UI and REST (CDS) | 500 | 1000 | 512 | 3072 | 1 | No |
Content Project Deployment Service (CPDS) | 500 | 1000 | 512 | 3072 | 1 | No |
Mongo database (mongodb) | 500 | 1000 | 512 | 1024 | 1 | No |
Viewer service (viewone) | 500 | 1000 | 1024 | 3072 | 1 | No |
- Document Processing: For Deep Learning, it is highly recommended to use worker nodes with NVIDIA GPUs for better and faster results. NVIDIA is the only supported GPU for Deep Learning in the Document Processing pattern. Use these installation instructions to install the NVIDIA GPU Operator.
- The GPU worker nodes must have a unique label, like ibm-cloud.kubernetes.io/gpu-enabled:true. You add this label value to the deployment script or your CR YAML when you configure the YAML for deployment. To achieve HA, you need a minimum of 2 GPUs so that 2 replicas of the Deep Learning pods can be started. You can change the replica count to 1 if you have only 1 GPU on the node.
- For Document Processing, the CPU of the worker nodes must meet TensorFlow AVX requirements. For more information, see Hardware requirements for TensorFlow with pip.
- Each Processing Extraction pod uses an extra 50Mi of RAM for the tmpfs volume mount with the type Memory.
- Document Processing requires databases for project configuration and processing. These databases must be Db2. The hardware and storage requirements for the databases depend on the system load for each document processing project.
- When the global.ocrextraction.id_card_detection.enabled parameter is set to true, the default RAM resource is set to 1Gi/7Gi (from 21.0.3-IF014 it is set to 1Gi/8.5Gi).
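Labeling the GPU worker nodes can be sketched as follows; the node name is a placeholder, and the label key is the example used above:

```
# Label a GPU worker node so that Deep Learning pods can be scheduled on it.
oc label node <gpu-node-name> ibm-cloud.kubernetes.io/gpu-enabled=true

# Verify which nodes carry the label.
oc get nodes -l ibm-cloud.kubernetes.io/gpu-enabled=true
```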
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Workflow Server | 500 | 2000 | 2048 | 3072 | 1 | Yes |
Java™ Message Service | 100 | 1000 | 512 | 1024 | 1 | No |
Process Federation Service | 200 | 1000 | 512 | 1024 | 1 | No |
Process Federation Service-dbareg | 50 | 100 | 512 | 512 | 1 | No |
Elasticsearch Service | 500 | 800 | 820 | 2048 | 1 | No |
The following jobs are created during the installation and run only for a short time:
- basimport-job is created only with Business Automation Studio.
- content-init-job
- db-init-job-pfs
- ltpa-job
- oidc-registry-job
- workplace-init-job
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
App Engine | 300 | 500 | 256 | 1024 | 1 | Yes/No |
Resource Registry | 100 | 500 | 256 | 512 | 1 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Business Performance Center | 100 | 4000 | 512 | 2000 | 1 | Yes/No |
Flink task managers | 1000 | 1000 | 1728 | 1728 | Default parallelism: 2 | Yes/No |
Flink job manager | 1000 | 1000 | 1728 | 1728 | 1 | No |
Management REST API | 100 | 1000 | 50 | 120 | 1 | No |
Management back end (second container of the same management pod as the previous one) | 100 | 500 | 350 | 512 | 1 | No |
Business Automation Insights also creates the bai-setup and iaf-insights-engine-application-setup Kubernetes jobs, each of which requests 200m of CPU and 350Mi of memory. The CPU and memory limits are set equal to the requests. The pods of these Kubernetes jobs run for a short time at the beginning of the installation, then complete, thus freeing the resources.

Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Navigator | 1000 | 1000 | 3072 | 3072 | 1 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
App Engine playback | 300 | 500 | 256 | 1024 | 1 | No |
BAStudio | 1100 | 2000 | 1752 | 3072 | 1 | No |
Resource Registry | 100 | 500 | 256 | 512 | 1 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Workflow Server | 500 | 2000 | 2048 | 3072 | 1 | Yes |
Workflow Authoring | 500 | 2000 | 2048 | 3072 | 1 | No |
Java Message Service | 100 | 1000 | 512 | 1024 | 1 | No |
Process Federation Service | 200 | 1000 | 512 | 1024 | 1 | No |
Process Federation Service-dbareg | 50 | 100 | 512 | 512 | 1 | No |
Elasticsearch Service | 500 | 800 | 820 | 2048 | 1 | No |
Intelligent Task Prioritization | 500 | 2000 | 1024 | 2560 | 1 | No |
Workforce Insights | 500 | 2000 | 1024 | 2560 | 1 | No |
Intelligent Task Prioritization and Workforce Insights are optional and are not supported on Linux on IBM Z.
The following jobs are created during the installation and run only for a short time:
- basimport-job is created only with Business Automation Studio.
- case-init-job
- content-init-job
- db-init-job-pfs
- ltpa-job
- oidc-registry-job
- oidc-registry-job-for-webpd is created only with Workflow Center.
- workplace-init-job
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
CPE | 1000 | 1000 | 3072 | 3072 | 1 | Yes |
CSS | 1000 | 1000 | 4096 | 4096 | 1 | Yes |
Enterprise Records (ER) | 500 | 1000 | 1536 | 1536 | 1 | Yes |
Content Collector for SAP (CC4SAP) | 500 | 1000 | 1536 | 1536 | 1 | Yes |
CMIS | 500 | 1000 | 1536 | 1536 | 1 | No |
GraphQL | 500 | 1000 | 1536 | 1536 | 1 | No |
External Share | 500 | 1000 | 1536 | 1536 | 1 | No |
Task Manager | 500 | 1000 | 1536 | 1536 | 1 | No |
In high-volume indexing scenarios, where ingested documents are full-text indexed, the CSS utilization can exceed the CPE utilization, in some cases by 3 to 5 times.
For optional processing such as thumbnail generation or text filtering, at least 1 GB of native memory is required by the CPE for each. If both types of processing are expected, add at least 2 GB to the memory requests/limits for the CPE.
With the processing of content, resources required increase with the complexity and size of the content. Increase both memory and CPU for the CPE and CSS services to reflect the type and size of documents in your system. Resource requirements might also increase over time as the amount of data in the system grows.
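To decide when to raise the CPE and CSS requests and limits, monitor actual usage over time. A sketch using the cluster metrics API (assumes metrics are enabled; the namespace and label are placeholders):

```
# Show current CPU and memory consumption of the content services pods.
oc adm top pods -n <cp4ba-namespace>

# Narrow to a single component by label (the label selector is illustrative).
oc adm top pods -n <cp4ba-namespace> -l app=cpe
```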
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Decision Center | 1000 | 1000 | 4096 | 4096 | 1 | Yes |
Decision Runner | 500 | 500 | 1024 | 2048 | 1 | Yes |
Decision Server Runtime | 500 | 1000 | 2048 | 2048 | 1 | Yes |
Decision Server Console | 500 | 500 | 512 | 1024 | 1 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas |
---|---|---|---|---|---|
PostgreSQL | 1000 | 1000 | 2048 | 2048 | 1 |
Workflow Process Service | 500 | 2000 | 2048 | 2048 | 1 |
Resource Registry | 100 | 500 | 256 | 512 | 1 |
Workflow Process Service operator | 100 | 500 | 20 | 500 |
Medium profile hardware requirements
- Table 14 Cloud Pak for Business Automation operator default requirements for a medium profile
- Table 15 Automation Decision Services default requirements for a medium profile
- Table 16 Automation Document Processing default requirements for a medium profile
- Table 17 Automation Workstream Services default requirements for a medium profile
- Table 18 Business Automation Application default requirements for a medium profile
- Table 19 Business Automation Insights default requirements for a medium profile
- Table 20 Business Automation Navigator default requirements for a medium profile
- Table 21 Business Automation Studio default requirements for a medium profile
- Table 22 Business Automation Workflow default requirements with or without Automation Workstream Services for a medium profile
- Table 23 FileNet Content Manager default requirements for a medium profile
- Table 24 Operational Decision Manager default requirements for a medium profile
- Table 25 Workflow Process Service default requirements for a medium profile
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
ibm-cp4a-operator | 500 | 1000 | 256 | 1024 | 1 | No |
You can use the oc patch csv command to add more resources:

```
oc patch csv ibm-cp4a-operator.v21.0.3 --type=json -p '[
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/cpu",
    "value": "4"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory",
    "value": "8Gi"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/cpu",
    "value": "1500m"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/memory",
    "value": "1600Mi"
  }
]'
```
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
ads-runtime | 1000 | 2000 | 2048 | 3072 | 1 | Yes |
ads-credentials | 250 | 1000 | 800 | 1536 | 1 | No |
ads-embedded-build | 500 | 2000 | 1024 | 2048 | 1 | No |
ads-download | 100 | 300 | 200 | 200 | 1 | No |
ads-front | 100 | 300 | 256 | 256 | 1 | No |
ads-gitservice | 500 | 1000 | 800 | 1536 | 1 | No |
ads-parsing | 250 | 1000 | 800 | 1536 | 1 | No |
ads-restapi | 500 | 1000 | 800 | 1536 | 1 | No |
ads-run | 500 | 1000 | 800 | 1536 | 1 | No |
The following jobs run for a short time at the beginning of the installation and then complete:
- ads-ltpa-creation
- ads-runtime-bai-registration
- ads-ads-runtime-zen-translation-job

The ads-rr-integration and ads-ads-rr-as-runtime-synchro jobs are started every 15 minutes, and are also short-lived.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of Replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
OCR Extraction | 200 | 1000 | 1024 | 3072 (from 21.0.3-IF014: 4608; from 21.0.3-IF026: 5120) | 8 | Yes |
Classify Process | 200 | 500 | 400 | 2048 | 2 | Yes |
Processing Extraction | 500 | 1000 | 1024 | 3584 | 3 | Yes |
Natural Language Extractor | 200 | 500 | 600 | 1440 | 2 | Yes |
Callerapi | 200 | 600 | 600 | 1024 | 2 | No |
PostProcessing | 200 | 600 | 400 | 800 | 2 | No |
Setup | 200 | 600 | 600 | 1024 | 3 | No |
Deep Learning | 1000 | 2000 | 3072 | 10240 (from 21.0.3-IF016: 15360) | 2 | No |
UpdateFileDetail | 200 | 600 | 400 | 600 | 2 | No |
Backend | 200 | 600 | 400 | 1024 | 3 | No |
Redis | 100 | 250 | 100 | 640 | 3 | No |
RabbitMQ | 100 | 1000 | 100 | 1024 | 3 | No |
Common Git Gateway Service (git-service) | 500 | 1000 | 512 | 1536 | 1 | No |
Content Designer Repo API (CDRA) | 500 | 1000 | 1024 | 3072 | 2 | No |
Content Designer UI and REST (CDS) | 500 | 1000 | 512 | 3072 | 2 | No |
Content Project Deployment Service (CPDS) | 500 | 1000 | 512 | 3072 | 2 | No |
Mongo database (mongodb) | 500 | 1000 | 512 | 1024 | 1 | No |
Viewer service (viewone) | 500 | 1000 | 1024 | 3072 | 2 | No |
- Document Processing: For Deep Learning, it is highly recommended to use worker nodes with NVIDIA GPUs for better and faster results. NVIDIA is the only supported GPU for Deep Learning in the Document Processing pattern. Use these installation instructions to install the NVIDIA GPU Operator.
- The GPU worker nodes must have a unique label, like ibm-cloud.kubernetes.io/gpu-enabled:true. You add this label value to the deployment script or your CR YAML when you configure the YAML for deployment. To achieve HA, you need a minimum of 2 GPUs so that 2 replicas of the Deep Learning pods can be started. You can change the replica count to 1 if you have only 1 GPU on the node.
- For Document Processing, the CPU of the worker nodes must meet TensorFlow AVX requirements. For more information, see Hardware requirements for TensorFlow with pip.
- Each Processing Extraction pod uses an extra 50Mi of RAM for the tmpfs volume mount with the type Memory.
- Document Processing requires databases for project configuration and processing. These databases must be Db2. The hardware and storage requirements for the databases depend on the system load for each document processing project.
- When the global.ocrextraction.id_card_detection.enabled parameter is set to true, the default RAM resource is set to 1Gi/7Gi (from 21.0.3-IF014 it is set to 1Gi/8.5Gi).
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Workflow Server | 500 | 2000 | 2560 | 3512 | 2 | Yes |
Java Message Service | 200 | 1000 | 512 | 2048 | 1 | No |
Process Federation Service | 200 | 1000 | 512 | 1024 | 2 | No |
Process Federation Service-dbareg | 50 | 100 | 512 | 512 | 1 | No |
Elasticsearch Service | 500 | 1000 | 3512 | 5120 | 3 | No |
The following jobs are created during the installation and run only for a short time:
- basimport-job is created only with Business Automation Studio.
- content-init-job
- db-init-job-pfs
- ltpa-job
- oidc-registry-job
- workplace-init-job
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
App Engine | 300 | 500 | 256 | 1024 | 3 | Yes/No |
Resource Registry | 100 | 500 | 256 | 512 | 3 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Business Performance Center | 100 | 4000 | 512 | 2000 | 2 | Yes/No |
Flink task managers | 1000 | 1000 | 1728 | 1728 | Default parallelism: 2 | Yes/No |
Flink job manager | 1000 | 1000 | 1728 | 1728 | 1 | No |
Management REST API | 100 | 1000 | 50 | 120 | 2 | No |
Management back end (second container of the same management pod as the previous one) | 100 | 500 | 350 | 512 | 2 | No |
Business Automation Insights also creates the bai-setup and iaf-insights-engine-application-setup Kubernetes jobs, each of which requests 200m of CPU and 350Mi of memory. The CPU and memory limits are set equal to the requests. The pods of these Kubernetes jobs run for a short time at the beginning of the installation, then complete, thus freeing the resources.

Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Navigator | 2000 | 3000 | 4096 | 4096 | 2 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
App Engine playback | 300 | 500 | 256 | 1024 | 2 | No |
BAStudio | 1000 | 2000 | 1752 | 3072 | 2 | No |
Resource Registry | 100 | 500 | 256 | 512 | 3 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Workflow Server | 500 | 2000 | 2560 | 3512 | 2 | Yes |
Workflow Authoring | 500 | 4000 | 1024 | 3072 | 1 | No |
Java Message Service | 100 | 1000 | 512 | 1024 | 1 | No |
Process Federation Service | 200 | 1000 | 512 | 1024 | 2 | No |
Process Federation Service-dbareg | 50 | 100 | 512 | 512 | 1 | No |
Elasticsearch Service | 500 | 1000 | 3512 | 5120 | 3 | No |
Intelligent Task Prioritization | 500 | 2000 | 1024 | 2560 | 2 | No |
Workforce Insights | 500 | 2000 | 1024 | 2560 | 2 | No |
Intelligent Task Prioritization and Workforce Insights are optional and are not supported on Linux on IBM Z.
The following jobs are created during the installation and run only for a short time:
- basimport-job is created only with Business Automation Studio.
- case-init-job
- content-init-job
- db-init-job-pfs
- ltpa-job
- oidc-registry-job
- oidc-registry-job-for-webpd is created only with Workflow Center.
- workplace-init-job
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
CPE | 1500 | 2000 | 3072 | 3072 | 2 | Yes |
CSS | 1000 | 2000 | 8192 | 8192 | 2 | Yes |
Enterprise Records (ER) | 500 | 1000 | 1536 | 1536 | 2 | Yes |
Content Collector for SAP (CC4SAP) | 500 | 1000 | 1536 | 1536 | 2 | Yes |
CMIS | 500 | 1000 | 1536 | 1536 | 2 | No |
GraphQL | 500 | 2000 | 3072 | 3072 | 3 | No |
External Share | 500 | 1000 | 1536 | 1536 | 2 | No |
Task Manager | 500 | 1000 | 1536 | 1536 | 2 | No |
In high-volume indexing scenarios, where ingested documents are full-text indexed, the CSS utilization can exceed the CPE utilization, in some cases by 3 to 5 times.
For optional processing such as thumbnail generation or text filtering, at least 1 GB of native memory is required by the CPE for each. If both types of processing are expected, add at least 2 GB to the memory requests/limits for the CPE.
With the processing of content, resources required increase with the complexity and size of the content. Increase both memory and CPU for the CPE and CSS services to reflect the type and size of documents in your system. Resource requirements might also increase over time as the amount of data in the system grows.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Decision Center | 1000 | 1000 | 4096 | 8192 | 2 | Yes |
Decision Runner | 500 | 2000 | 2048 | 2048 | 2 | Yes |
Decision Server Runtime | 2000 | 2000 | 2048 | 2048 | 3 | Yes |
Decision Server Console | 500 | 2000 | 512 | 2048 | 1 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas |
---|---|---|---|---|---|
PostgreSQL | 1000 | 1000 | 2048 | 2048 | 2 |
Workflow Process Service | 1500 | 3000 | 3072 | 3072 | 2 |
Resource Registry | 100 | 500 | 256 | 512 | 2 |
Workflow Process Service operator | 100 | 500 | 20 | 500 |
To achieve high availability, you must adapt the cluster configuration and physical resources. You can set up a Db2® High Availability Disaster Recovery (HADR) database. For more information, see Preparing your environment for disaster recovery. For high availability and fault tolerance to be effective, set the number of replicas that you need for the respective configuration parameters in your custom resource file. The operator then manages the scaling.
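Setting replicas in the custom resource can be sketched as follows. The component section and field shown here are illustrative; each component documents its own replica parameter, so substitute the path that applies to your configuration:

```
# Illustrative: raise a component's replica count in the CR and let the
# operator manage the scaling. Replace <cr-name>, <namespace>, and the
# field path with the parameter that your component documents.
oc patch icp4acluster <cr-name> -n <namespace> --type=merge \
  -p '{"spec":{"navigator_configuration":{"replica_count":2}}}'
```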
Large profile hardware requirements
- Table 26 Cloud Pak for Business Automation operator default requirements for a large profile
- Table 27 Automation Decision Services default requirements for a large profile
- Table 28 Automation Document Processing default requirements for a large profile
- Table 29 Automation Workstream Services default requirements for a large profile
- Table 30 Business Automation Application default requirements for a large profile
- Table 31 Business Automation Insights default requirements for a large profile
- Table 32 Business Automation Navigator default requirements for a large profile
- Table 33 Business Automation Studio default requirements for a large profile
- Table 34 Business Automation Workflow default requirements with or without Automation Workstream Services for a large profile
- Table 35 FileNet Content Manager default requirements for a large profile
- Table 36 Operational Decision Manager default requirements for a large profile
- Table 37 Workflow Process Service default requirements for a large profile
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
ibm-cp4a-operator | 500 | 1000 | 256 | 1024 | 1 | No |
You can use the oc patch csv command to add more resources:

```
oc patch csv ibm-cp4a-operator.v21.0.3 --type=json -p '[
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/cpu",
    "value": "4"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory",
    "value": "8Gi"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/cpu",
    "value": "1500m"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/memory",
    "value": "1600Mi"
  }
]'
```
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
ads-runtime | 1000 | 2000 | 2048 | 3072 | 2 | Yes |
ads-credentials | 250 | 1000 | 800 | 1536 | 2 | No |
ads-embedded-build | 500 | 2000 | 1024 | 2048 | 1 | No |
ads-download | 100 | 300 | 200 | 200 | 2 | No |
ads-front | 100 | 300 | 256 | 256 | 2 | No |
ads-gitservice | 500 | 1000 | 800 | 1536 | 2 | No |
ads-parsing | 250 | 1000 | 800 | 1536 | 2 | No |
ads-restapi | 500 | 1000 | 800 | 1536 | 2 | No |
ads-run | 500 | 1000 | 800 | 1536 | 2 | No |
The following jobs run for a short time at the beginning of the installation and then complete:
- ads-ltpa-creation
- ads-runtime-bai-registration
- ads-ads-runtime-zen-translation-job

The ads-rr-integration and ads-ads-rr-as-runtime-synchro jobs are started every 15 minutes, and are also short-lived.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of Replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
OCR Extraction | 200 | 1000 | 1024 | 3072 (from 21.0.3-IF014: 4608; from 21.0.3-IF026: 5120) | 14 | Yes |
Classify Process | 200 | 500 | 400 | 2048 | 2 | Yes |
Processing Extraction | 500 | 1000 | 1024 | 3584 | 6 | Yes |
Natural Language Extractor | 200 | 500 | 600 | 1440 | 2 | Yes |
Callerapi | 200 | 600 | 600 | 1024 | 2 | No |
PostProcessing | 200 | 600 | 400 | 800 | 2 | No |
Setup | 200 | 600 | 600 | 1024 | 4 | No |
Deep Learning | 1000 | 2000 | 3072 | 10240 (from 21.0.3-IF016: 15360) | 2 | No |
UpdateFileDetail | 200 | 600 | 400 | 600 | 2 | No |
Backend | 200 | 600 | 400 | 1024 | 4 | No |
Redis | 100 | 250 | 100 | 640 | 3 | No |
RabbitMQ | 100 | 1000 | 100 | 1024 | 3 | No |
Common Git Gateway Service (git-service) | 500 | 1000 | 512 | 1536 | 2 | No |
Content Designer Repo API (CDRA) | 500 | 1000 | 1024 | 3072 | 3 | No |
Content Designer UI and REST (CDS) | 500 | 1000 | 512 | 3072 | 3 | No |
Content Project Deployment Service (CPDS) | 500 | 1000 | 512 | 3072 | 3 | No |
Mongo database (mongodb) | 500 | 1000 | 512 | 1024 | 1 | No |
Viewer service (viewone) | 500 | 1000 | 1024 | 3072 | 4 | No |
- Document Processing - For Deep Learning, it is highly recommended to use worker nodes with NVIDIA GPU for better and faster results. NVIDIA is the only supported GPU for Deep Learning in the Document Processing pattern. Use these installation instructions to install the NVIDIA GPU Operator.
- The GPU worker nodes must have a unique label, like
ibm-cloud.kubernetes.io/gpu-enabled:true
. You add this label value to the deployment script or your CR YAML when you configure the YAML for deployment. To achieve HA, you need a minimum of 2 GPU so that 2 replicas of Deep Learning pods can be started. You can change the replica to 1 if you have 1 GPU on the node. - For Document Processing, the CPU of the worker nodes must meet TensorFlow AVX requirements. For more information, see Hardware requirements for TensorFlow with pip.
- Each Processing Extraction pod uses an extra 50Mi of RAM for the
tmpfs
volume mount with the type ofMemory
. - Document Processing requires databases for project configuration and processing. These databases must be Db2. The hardware and storage requirements for the databases depend on the system load for each document processing project.
- When the global.ocrextraction.id_card_detection.enabled parameter is set to true, the default RAM resource is set to 1Gi/7Gi (from 21.0.3-IF014 it is set to 1Gi/8.5Gi).
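The replica guidance for the Deep Learning pods can be expressed as a small check. This is an illustrative sketch only; the function name is ours and is not part of the Cloud Pak tooling:

```python
def deep_learning_replicas(gpu_count: int) -> int:
    """Return a Deep Learning replica count for a given number of GPUs.

    Per the guidance above: 2 replicas for HA when at least 2 GPUs are
    available, and 1 replica when only a single GPU exists.
    """
    if gpu_count < 1:
        raise ValueError("Deep Learning requires at least one NVIDIA GPU")
    return 2 if gpu_count >= 2 else 1

print(deep_learning_replicas(1))  # 1 GPU -> single replica, no HA
print(deep_learning_replicas(4))  # 2 or more GPUs -> 2 replicas for HA
```

Whatever value you choose must match the number of GPU-labeled worker nodes that are actually schedulable, or the extra replicas stay pending.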
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Workflow Server | 1000 | 2000 | 3060 | 4000 | 4 | Yes |
Java Message Service | 500 | 1000 | 512 | 1024 | 1 | No |
Process Federation Service | 300 | 1000 | 750 | 1512 | 2 | No |
Process Federation Service-dbareg | 50 | 100 | 512 | 512 | 1 | No |
Elasticsearch Service | 1000 | 2000 | 3512 | 5128 | 3 | No |
- basimport-job is created only with Business Automation Studio.
- content-init-job
- db-init-job-pfs
- ltpa-job
- oidc-registry-job
- workplace-init-job
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
App Engine | 300 | 500 | 256 | 1024 | 6 | Yes/No |
Resource Registry | 100 | 500 | 256 | 512 | 1 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Business Performance Center | 100 | 4000 | 512 | 2000 | 2 | Yes/No |
Flink task managers | 1000 | 1000 | 1728 | 1728 | Default parallelism 2 | Yes/No |
Flink job manager | 1000 | 1000 | 1728 | 1728 | 1 | No |
Management REST API | 100 | 1000 | 50 | 120 | 1 | No |
Management back end (second container of the same management pod as the previous one) | 100 | 500 | 350 | 512 | 1 | No |
The installation creates the bai-setup and iaf-insights-engine-application-setup Kubernetes jobs, each of which requests 200m for CPU and 350Mi for memory. The CPU and memory limits are set equal to the requests. The pods of these Kubernetes jobs run for a short time at the beginning of the installation, then complete, thus freeing the resources.

Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Navigator | 2000 | 4000 | 6144 | 6144 | 6 | No |
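As a rough namespace-sizing aid, the per-pod requests in these tables can be multiplied by the replica counts and summed. A minimal sketch (the helper is ours, not part of the product; the sample row is the Navigator entry from the table above):

```python
def total_requests(components):
    """Sum CPU (millicores) and memory (Mi) requests across all replicas."""
    cpu = sum(c["cpu_request_m"] * c["replicas"] for c in components)
    mem = sum(c["memory_request_mi"] * c["replicas"] for c in components)
    return cpu, mem

# Navigator: 2000 m CPU and 6144 Mi memory per pod, 6 replicas.
navigator = {"cpu_request_m": 2000, "memory_request_mi": 6144, "replicas": 6}
print(total_requests([navigator]))  # (12000, 36864): 12 vCPU and 36 Gi requested
```

Repeating this over every component that you plan to install gives a lower bound for the capacity that the target cluster must provide; limits, not requests, bound the peak usage.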
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
App Engine playback | 300 | 500 | 256 | 1024 | 4 | No |
BAStudio | 2000 | 4000 | 1752 | 3072 | 2 | No |
Resource Registry | 100 | 500 | 256 | 512 | 3 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Workflow Server | 1000 | 2000 | 3060 | 4000 | 4 | Yes |
Workflow Authoring | 1000 | 2000 | 2000 | 3000 | 2 | No |
Java Message Service | 500 | 1000 | 512 | 1024 | 1 | No |
Process Federation Service | 300 | 1000 | 750 | 1512 | 2 | No |
Process Federation Service-dbareg | 50 | 100 | 512 | 512 | 1 | No |
Elasticsearch Service | 1000 | 2000 | 3512 | 5128 | 3 | No |
Intelligent Task Prioritization | 500 | 2000 | 1024 | 2560 | 2 | No |
Workforce Insights | 500 | 2000 | 1024 | 2560 | 2 | No |
Intelligent Task Prioritization and Workforce Insights are optional and are not supported on Linux on IBM Z.
- basimport-job is created only with Business Automation Studio.
- case-init-job
- content-init-job
- db-init-job-pfs
- ltpa-job
- oidc-registry-job
- oidc-registry-job-for-webpd is created only with workflow center.
- workplace-init-job
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
CPE | 3000 | 4000 | 8192 | 8192 | 2 | Yes |
CSS | 2000 | 4000 | 8192 | 8192 | 2 | Yes |
Enterprise Records (ER) | 500 | 1000 | 1536 | 1536 | 2 | Yes |
Content Collector for SAP (CC4SAP) | 500 | 1000 | 1536 | 1536 | 2 | Yes |
CMIS | 500 | 1000 | 1536 | 1536 | 2 | No |
GraphQL | 1000 | 2000 | 3072 | 3072 | 6 | No |
External Share | 500 | 1000 | 1536 | 1536 | 2 | No |
Task Manager | 500 | 1000 | 1536 | 1536 | 2 | No |
In high-volume indexing scenarios, where ingested documents are full-text indexed, the CSS utilization can exceed the CPE utilization, in some cases by 3 - 5 times.
For optional processing such as thumbnail generation or text filtering, at least 1 GB of native memory is required by the CPE for each. If both types of processing are expected, add at least 2 GB to the memory requests/limits for the CPE.
The resources that are required to process content increase with the complexity and size of the content. Increase both memory and CPU for the CPE and CSS services to reflect the type and size of documents in your system. Resource requirements might also increase over time as the amount of data in the system grows.
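The native-memory advice for optional CPE processing can be captured in a small helper. This is an illustrative sketch, not part of the product: the base value is the CPE memory request from the table, and the 1 GB per processing type is treated as 1024 Mi.

```python
def cpe_memory_request_mi(thumbnails=False, text_filtering=False, base_mi=8192):
    """Add at least 1 GB (1024 Mi) of native memory per optional processing type."""
    extra = 1024 * (int(thumbnails) + int(text_filtering))
    return base_mi + extra

print(cpe_memory_request_mi())                                      # 8192
print(cpe_memory_request_mi(thumbnails=True, text_filtering=True))  # 10240
```

Apply the same adjustment to both the memory request and the memory limit of the CPE pods, since the advice above covers both.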
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Decision Center | 2000 | 2000 | 4096 | 16384 | 2 | Yes |
Decision Runner | 500 | 4000 | 2048 | 2048 | 2 | Yes |
Decision Server Runtime | 2000 | 2000 | 4096 | 4096 | 6 | Yes |
Decision Server Console | 500 | 2000 | 512 | 4096 | 1 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas |
---|---|---|---|---|---|
PostgreSQL | 2000 | 2000 | 3072 | 3072 | 3 |
Workflow Process Service | 2000 | 3000 | 3072 | 3072 | 3 |
Resource Registry | 100 | 500 | 256 | 512 | 3 |
Workflow Process Service operator | 100 | 500 | 20 | 500 | |
To achieve high availability, you must adapt the cluster configuration and physical resources. You can set up a Db2 High Availability Disaster Recovery (HADR) database. For more information, see Preparing your environment for disaster recovery. For high availability and fault tolerance to be effective, set the number of replicas that you need for the respective configuration parameters in your custom resource file. The operator then manages the scaling.