All the Cloud Pak containers are based on Red Hat Universal Base Images (UBI), and
are Red Hat and IBM certified. To use the Cloud Pak images, the administrator must make sure that
the target cluster on Red Hat OpenShift Container Platform has the capacity for all the capabilities
that you plan to install.
For each stage in your operations (a minimum of three stages is expected: development,
preproduction, and production), you must allocate a cluster of nodes before you install the Cloud
Pak. Development, preproduction, and production are stages that are best run on different compute
nodes. To achieve resource isolation, each namespace is a virtual cluster within the physical
cluster, and a Cloud Pak deployment is scoped to a single namespace. High-level resource objects are
scoped within namespaces. Low-level resources, such as nodes and persistent volumes, are not in
namespaces.
Note: Use the shared_configuration.sc_deployment_license parameter to define
the purpose of the "custom" deployment type
(shared_configuration.sc_deployment_type). Valid values are
production and non-production.
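As a sketch, both parameters sit together under shared_configuration in the custom resource. The spec scaffolding below is an assumption for illustration; only the two sc_* parameter names and their values come from this page:

```yaml
# Illustrative custom resource fragment; verify the surrounding
# structure against your custom resource template.
spec:
  shared_configuration:
    sc_deployment_type: "custom"
    sc_deployment_license: "non-production"  # valid values: production, non-production
```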
The Detailed system requirements page provides cluster requirements guidelines for IBM Cloud Pak® for Business Automation. To find information
about the supported versions of OpenShift Container Platform, for example, open the rendered report for
23.0.2 and go to
the Containers tab.
The minimum cluster configuration and physical resources that are
needed to run the Cloud Pak include the following elements:
- Hardware architecture: Intel (amd64 or x86_64, the 64-bit
edition for Linux® x86) on all platforms, Linux on IBM Z, or
Linux on Power.
- Node counts: Dual compute nodes for nonproduction and production clusters. A minimum of three
nodes is needed for medium and large production environments and for large test environments. Any
cluster configuration needs to adapt to the size of the projects and to the expected workload.
Based on your cluster requirements, you can pick a deployment profile
(sc_deployment_profile_size) and enable it during installation. Cloud Pak for Business Automation provides
small, medium, and large deployment profiles. You
can set the profile during installation, in an update, and during an upgrade.
The default profile is small. Before you install the
Cloud Pak, you can change the profile to medium or large. You can
scale a profile up or down anytime after installation. However, if you install with a
medium profile and another Cloud Pak specifies a medium or large
profile, and you then scale down to small, the profile
for the foundational services remains as it is. You can scale down the foundational services to
small only if no other Cloud Pak specifies a medium or large size.
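The profile is selected with a single parameter in the custom resource. This fragment is a sketch; the spec scaffolding is an assumption, the parameter name comes from this page:

```yaml
spec:
  shared_configuration:
    sc_deployment_profile_size: "medium"  # small (default), medium, or large
```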
Attention: The values in the hardware requirements tables
were derived under specific operating and environment conditions. The information is accurate under
specific conditions, but results that are obtained in your operating environments might vary
significantly. Therefore, IBM cannot provide any representations, assurances, guarantees, or
warranties regarding the performance of the profiles in your environment.
It is recommended that you set the IBM Cloud Platform UI (Zen) service to the same size as
Cloud Pak for Business Automation. The
possible values are small, medium, and large.
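A minimal sketch of sizing the Zen service, assuming the ZenService custom resource exposes a scaleConfig field and using an example instance name; verify both against your foundational services documentation:

```yaml
# Assumption: the ZenService CR accepts spec.scaleConfig and the
# instance name matches your installation; check before applying.
apiVersion: zen.cpd.ibm.com/v1
kind: ZenService
metadata:
  name: iaf-zen-cpdservice
spec:
  scaleConfig: medium  # match the Cloud Pak profile: small, medium, or large
```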
The following table describes each deployment profile.
Table 1. Deployment profiles and estimated workloads

| Profile | Description | Scaling (per 8-hour day) | Minimum number of worker nodes |
| --- | --- | --- | --- |
| Small (no HA) | For environments that are used by 10 developers and 25 users; for example, a single department with a few users. Useful for application development. | Processes 10,000 documents, 5,000 human workflows, 500,000 Straight Thru processes, 1.25 million Straight Thru Service Flows, 5,000 transactions, and 500,000 decisions. Supports failover. | 8 |
| Medium | For environments that are used by 20 developers and 125 users; for example, a single department with limited users. | Processes 100,000 documents, 25,000 human workflows, 1 million Straight Thru processes, 3.5 million Straight Thru Service Flows, 25,000 transactions, and 2,000,000 decisions. Supports HA and failover; provides at least two replicas of most services, if configuring failover. | 16 |
| Large | For environments that are used by 50 developers and 625 users; for example, environments shared by multiple departments and users. | Processes 1,000,000 documents, 125,000 human workflows, 2 million Straight Thru processes, 7 million Straight Thru Service Flows, 125,000 transactions, and 5,000,000 decisions. Supports HA and failover; provides at least two replicas of most services, if configuring failover. | 32 |
You can use custom resource templates to update the hardware requirements of the services that
you want to install.
The following sections provide the default resources for each capability. For more information
about the minimum requirements of foundational services, see Hardware requirements and recommendations for foundational
services.
Note: The system requirements for small, medium, and
large profiles are derived under specific operating and environment conditions. Due
to differences in hardware, networking, and storage, the resources vary for each workload in
different environments. It is important that you run a performance test with peak workload in a
Test or UAT environment. Monitor the resource usage (for example, CPU and memory) to determine the
resource usage for each component. Based on the resource usage, make changes to the CPU request, CPU
limit, memory request, and memory limit in the custom resource for each component.
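Such an override is a standard Kubernetes resources block placed in the component's section of the custom resource. The component path below is a hypothetical placeholder; each capability's custom resource template documents its own parameter names:

```yaml
# Hypothetical override: replace <component_configuration> with the
# section documented in the custom resource template of the component.
spec:
  <component_configuration>:
    resources:
      requests:
        cpu: "500m"      # raise after observing sustained usage above the default
        memory: "2048Mi"
      limits:
        cpu: "2000m"
        memory: "3072Mi"
```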
Small profile hardware requirements
- Table 2: Cloud Pak for Business Automation operator default requirements for a small profile
- Table 3: Automation Decision Services default requirements for a small profile
- Table 4: Automation Document Processing default requirements for a small profile
- Table 5: Automation Workstream Services default requirements for a small profile
- Table 6: Business Automation Application default requirements for a small profile
- Table 7: Business Automation Insights default requirements for a small profile
- Table 8: Business Automation Navigator default requirements for a small profile
- Table 9: Business Automation Studio default requirements for a small profile
- Table 10: Business Automation Workflow default requirements with or without Automation Workstream Services for a small profile
- Table 11: FileNet® Content Manager default requirements for a small profile
- Table 12: Operational Decision Manager default requirements for a small profile
- Table 13: Workflow Process Service Authoring default requirements for a small profile
Table 2. Cloud Pak for Business Automation operator default requirements for a small profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction | Ephemeral storage Limit |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ibm-cp4a-operator | 500 | 1000 | 256 | 2048 | 1 | No | NA |
| ibm-content-operator | 500 | 1000 | 256 | 2048 | 1 | No | NA |
| ibm-ads-operator | 10 | 500 | 64 | 512 | 1 | No | NA |
| ibm-odm-operator | 10 | 500 | 256 | 768 | 1 | No | NA |
| ibm-dpe-operator | 10 | 1000 | 256 | 768 | 1 | No | 750 |
| ibm-pfs-operator | 100 | 500 | 20 | 1024 | 1 | No | NA |
CPU Request and CPU Limit values are measured in units of millicore (m).
Memory Request and Memory Limit values are measured in units of mebibyte (Mi).
Note: If you plan to install the cp4a operator for more than one
instance (for example, in all namespaces), add more resources. You can use the
oc patch csv command to add more resources:
oc patch csv ibm-cp4a-operator.v23.2.0 --type=json -p '[
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/cpu",
    "value": "4"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory",
    "value": "8Gi"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/cpu",
    "value": "1500m"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/memory",
    "value": "1600Mi"
  }
]'
Table 3. Automation Decision Services default requirements for a small profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction | Ephemeral storage Request | Ephemeral storage Limit |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ads-runtime | 500 | 1000 | 2048 | 3072 | 1 | Yes | 300Mi | 500Mi |
| ads-credentials | 250 | 1000 | 800 | 1536 | 1 | No | 300Mi | 600Mi |
| ads-embedded-build | 500 | 2000 | 1024 | 2048 | 1 | No | 1.1Gi | 1.4Gi |
| ads-download | 100 | 300 | 200 | 200 | 1 | No | 300Mi | 500Mi |
| ads-front | 100 | 300 | 256 | 256 | 1 | No | 300Mi | 500Mi |
| ads-gitservice | 500 | 1000 | 800 | 1536 | 1 | No | 400Mi | 600Mi |
| ads-parsing | 250 | 1000 | 800 | 1536 | 1 | No | 300Mi | 500Mi |
| ads-restapi | 500 | 1000 | 800 | 1536 | 1 | No | 300Mi | 1.2Gi |
| ads-run | 500 | 1000 | 800 | 1536 | 1 | No | 300Mi | 700Mi |
Note: Automation Decision Services also creates some jobs that request 200m CPU and 256Mi Memory.
The following jobs are created at the beginning of the installation and do not last long:
- ads-ltpa-creation
- ads-runtime-bai-registration
- ads-ads-runtime-zen-translation-job
- ads-designer-zen-translation-job
The ads-rr-integration and ads-ads-rr-as-runtime-synchro jobs are started every 15 minutes,
and are also short-lived.
Table 4. Automation Document Processing default requirements for a small profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction | Ephemeral storage Limit (Mi) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OCR Extraction | 200 | 1000 | 1024 | 2560 | 2 | Yes | 3072 |
| Classify Process | 200 | 500 | 400 | 2048 | 1 | Yes | 3072 |
| Processing Extraction | 500 | 1000 | 1024 | 6656 | 6 | Yes | 5120 |
| Natural Language Extractor | 200 | 500 | 600 | 1440 | 2 | Yes | 3072 |
| PostProcessing | 200 | 1000 | 400 | 1229 | 1 | No | 3072 |
| Setup | 200 | 1000 | 600 | 2048 | 2 | No | 3072 |
| Deep Learning | 1000 | 2000 | 3072 | 15360 | 2 | No | 7680 |
| Backend | 200 | 1000 | 400 | 2048 | 2 | No | 4608 |
| Webhook | 200 | 300 | 400 | 500 | 1 | No | 1024 |
| RabbitMQ | 100 | 1000 | 100 | 1024 | 2 | No | 3072 |
| OCR engine 2 Runtime (wdu-runtime) (technology preview) | 200 | 4000 | 1024 | 11264 | 1 | No | 4096 |
| OCR engine 2 Extraction (wdu-extraction) (technology preview) | 300 | 1000 | 500 | 1024 | 1 | No | 3072 |
| Common Git Gateway Service (git-service) | 500 | 1000 | 512 | 1536 | 1 | No | Not applicable |
| Content Designer Repo API (CDRA) | 500 | 1000 | 1024 | 3072 | 1 | No | Not applicable |
| Content Designer UI and REST (CDS) | 500 | 1000 | 512 | 3072 | 1 | No | 2048 |
| Content Project Deployment Service (CPDS) | 500 | 1000 | 512 | 3072 | 1 | No | Not applicable |
| Mongo database (MongoDB) | 500 | 1000 | 512 | 1024 | 1 | No | Not applicable |
| Viewer service (viewone) | 500 | 1000 | 1024 | 3072 | 1 | No | Not applicable |
Important:
- Document Processing: The optional Deep Learning container can use an NVIDIA GPU if one is available. NVIDIA is the only supported GPU for Deep Learning in the Document Processing pattern. The GPU worker nodes must have a unique label, for example ibm-cloud.kubernetes.io/gpu-enabled:true. You add this label value to the deployment script or to the YAML file of your custom resource when you configure the YAML for deployment. To install the NVIDIA GPU operator, follow these installation instructions. For high availability, you need a minimum of 2 GPUs so that 2 replicas of the Deep Learning pods can be started. You can change the replica count to 1 if you have 1 GPU on the node.
- For Document Processing, the CPU of the worker nodes must meet TensorFlow AVX requirements. For more information, see Hardware requirements for TensorFlow with pip.
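The GPU node label is then referenced from the custom resource. The following sketch is illustrative only; the parameter names under ca_configuration.deeplearning are assumptions and must be checked against your Document Processing custom resource template (only the label itself comes from this page):

```yaml
# Hypothetical fragment of the Document Processing custom resource.
# Parameter names under deeplearning are assumptions; verify them
# against the custom resource template before use.
spec:
  ca_configuration:
    deeplearning:
      gpu_enabled: true                                   # schedule Deep Learning pods on GPU nodes
      nodelabel_key: ibm-cloud.kubernetes.io/gpu-enabled  # the unique GPU node label
      nodelabel_value: "true"
      replica_count: 2  # needs a minimum of 2 GPUs; set to 1 if only 1 GPU is available
```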
Note:
- Document Processing requires databases for project configuration and processing. These databases
must be Db2 or PostgreSQL. The hardware and storage requirements for the databases depend on the
system load for each document processing project.
- The previous table shows the requirements if deep learning object detection is enabled. If you
process fixed-format documents only, you might improve performance by disabling deep learning object
detection. For more information about the system requirements for Document Processing engine components in this scenario,
see IBM Automation Document Processing system requirements when disabling deep learning object detection for
fixed-format documents.
- If you deploy with only the document_processing pattern, you can reduce the
sizing for some of the required components. For more information, see IBM Automation
Document Processing system requirements for a light production deployment (document_processing
pattern only).
- It is recommended to use 70% of the available space for projects. For example, if you have 5000
MB, use 3500 MB for your projects. Because the model size is 153 MB, you can create a
maximum of 22 projects. If you want to set up more than 22 projects, you must increase the ephemeral
storage for both the Deep Learning and Processing Extraction containers.
- The OCR engine 2 Runtime container enables support for low-quality documents and handwriting
recognition when the ca_configuration.ocrextraction.use_iocr parameter is set
to auto or all.
- OCR engine 2 Extraction is an optional container that is used to make gRPC requests to the OCR
engine 2 Runtime service. You deploy it by setting the
ca_configuration.ocrextraction.use_iocr parameter to auto or all.
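For example, both OCR engine 2 containers are enabled through the parameter quoted above. The YAML nesting is inferred from the parameter's dotted path and should be confirmed against your custom resource template:

```yaml
spec:
  ca_configuration:
    ocrextraction:
      use_iocr: auto  # set to auto or all to deploy the OCR engine 2 Runtime and Extraction containers
```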
Table 5. Automation Workstream Services default requirements for a small profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| Workflow Server | 500 | 2000 | 2048 | 3060 | 1 | Yes |
Notes:
Automation Workstream Services also creates some jobs that request 200m CPU and 128Mi Memory:
- basimport-job is created only with Business Automation
Studio
- db-init-job
- content-init-job
- ltpa-job
- oidc-registry-job
- oidc-registry-job-for-webpd is created only with Workflow Center
Table 6. Business Automation Application default requirements for a small profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| App Engine | 300 | 500 | 256 | 1024 | 1 | Yes/No |
| Resource Registry | 100 | 500 | 256 | 512 | 1 | No |
Table 7. Business Automation Insights default requirements for a small profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| Business Performance Center | 100 | 4000 | 512 | 2000 | 1 | Yes/No |
| Flink task managers | 1000 | 1000 | 1728 | 1728 | Default parallelism: 2 | Yes/No |
| Flink job manager | 1000 | 1000 | 1728 | 1728 | 1 | No |
| Management REST API | 100 | 1000 | 50 | 160 | 1 | No |
| Management back end (second container of the same management pod as the previous one) | 100 | 500 | 350 | 512 | 1 | No |
Note: Business Automation Insights relies on Kafka and
Elasticsearch from the foundational services. Business Automation Insights also creates the
bai-setup and bai-core-application-setup Kubernetes jobs, each requesting
200m of CPU and 350Mi of memory. The CPU and memory limits are set equal to the requests.
The pods of these Kubernetes jobs run for a short time at the beginning of the installation, then
complete, which frees the resources.
Table 8. Business Automation Navigator default requirements for a small profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| Navigator | 1000 | 1000 | 3072 | 3072 | 1 | No |
| Navigator Watcher | 250 | 500 | 256 | 512 | 1 | No |
Table 9. Business Automation Studio default requirements for a small profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| App Engine playback | 300 | 500 | 256 | 1024 | 1 | No |
| BAStudio | 1100 | 2000 | 1752 | 3072 | 1 | No |
| Resource Registry | 100 | 500 | 256 | 512 | 1 | No |
Table 10. Business Automation Workflow default requirements with or without Automation Workstream Services for a small profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| Workflow Server | 500 | 2000 | 2048 | 3060 | 1 | Yes |
| Workflow Authoring | 500 | 2000 | 2048 | 3072 | 1 | No |
| Intelligent Task Prioritization | 500 | 2000 | 1024 | 2560 | 1 | No |
| Workforce Insights | 500 | 2000 | 1024 | 2560 | 1 | No |
Notes:
Intelligent Task Prioritization and Workforce Insights are optional and are
not supported on all platforms. For more information, see Detailed system
requirements.
Business Automation Workflow also creates
some jobs that request 200m CPU and 128Mi Memory:
- basimport-job is created only with Business Automation
Studio.
- case-init-job
- db-init-job
- content-init-job
- ltpa-job
Table 11. FileNet Content Manager default requirements for a small profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| CPE | 1000 | 1000 | 3072 | 3072 | 1 | Yes |
| CSS | 1000 | 1000 | 4096 | 4096 | 1 | Yes |
| Enterprise Records (ER) | 500 | 1000 | 1536 | 1536 | 1 | Yes |
| Content Collector for SAP (CC4SAP) | 500 | 1000 | 1536 | 1536 | 1 | Yes |
| CMIS | 500 | 1000 | 1536 | 1536 | 1 | No |
| GraphQL | 500 | 1000 | 1536 | 1536 | 1 | No |
| Task Manager | 500 | 1000 | 1536 | 1536 | 1 | No |
Note: Not all containers are used in every workload. If a feature like the Content Services GraphQL API
is not used, that container requires fewer resources or is optionally not deployed.
In high-volume indexing scenarios, where ingested documents are full-text indexed, the CSS usage can exceed the CPE
usage, sometimes by a factor of 3 - 5.
For optional processing such as thumbnail
generation or text filtering, at least 1 GB of native memory is required by the CPE for each. If
both types of processing are expected, add at least 2 GB to the memory requests and limits for the CPE.
As content is processed, the required resources increase with the complexity and size of
the content. Increase both memory and CPU for the CPE and CSS services to reflect the type and size
of documents in your system. Resource requirements might also increase over time as the amount of
data in the system grows.
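When indexing or content processing outgrows the defaults in Table 11, the CPE values can be raised in the custom resource. A sketch only: the parameter names below (cpe_production_setting and the resource keys) are assumptions modeled on FileNet operator conventions, so confirm them in your custom resource template:

```yaml
# Assumption: resource keys under ecm_configuration follow the
# cpe_production_setting convention; confirm in your CR template.
spec:
  ecm_configuration:
    cpe:
      cpe_production_setting:
        cpu_request: "1500m"      # small profile default: 1000m
        cpu_limit: "2000m"
        memory_request: "4096Mi"  # add at least 2 GB when thumbnail generation and text filtering are both enabled
        memory_limit: "4096Mi"
```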
Table 12. Operational Decision Manager default requirements for a small profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction | Ephemeral storage Request | Ephemeral storage Limit |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Decision Center | 1000 | 1000 | 4096 | 4096 | 1 | Yes | 1G | 2G |
| Decision Runner | 500 | 500 | 1024 | 2048 | 1 | Yes | 200Mi | 1G |
| Decision Server Runtime | 500 | 1000 | 2048 | 2048 | 1 | Yes | 200Mi | 1G |
| Decision Server Console | 500 | 500 | 512 | 1024 | 1 | No | 200Mi | 1G |
Note: Operational Decision Manager also creates an
odm-oidc-job-registration job that requests 200m CPU and 256Mi Memory. The pod is created at the
beginning of the installation and does not last long.
Table 13. IBM Workflow Process Service Authoring default requirements for a small profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| IBM Workflow Process Service Authoring | 1100 | 2000 | 1752 | 3072 | 1 | No |
Medium profile hardware requirements
- Table 14: Cloud Pak for Business Automation operator default requirements for a medium profile
- Table 15: Automation Decision Services default requirements for a medium profile
- Table 16: Automation Document Processing default requirements for a medium profile
- Table 17: Automation Workstream Services default requirements for a medium profile
- Table 18: Business Automation Application default requirements for a medium profile
- Table 19: Business Automation Insights default requirements for a medium profile
- Table 20: Business Automation Navigator default requirements for a medium profile
- Table 21: Business Automation Studio default requirements for a medium profile
- Table 22: Business Automation Workflow default requirements with or without Automation Workstream Services for a medium profile
- Table 23: FileNet Content Manager default requirements for a medium profile
- Table 24: Operational Decision Manager default requirements for a medium profile
- Table 25: Workflow Process Service Authoring default requirements for a medium profile
Table 14. Cloud Pak for Business Automation operator default requirements for a medium profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction | Ephemeral storage Limit |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ibm-cp4a-operator | 500 | 1000 | 256 | 2048 | 1 | No | NA |
| ibm-content-operator | 500 | 1000 | 256 | 2048 | 1 | No | NA |
| ibm-ads-operator | 10 | 500 | 64 | 512 | 1 | No | NA |
| ibm-odm-operator | 10 | 500 | 256 | 768 | 1 | No | NA |
| ibm-dpe-operator | 10 | 1000 | 256 | 768 | 1 | No | 500 |
| ibm-pfs-operator | 100 | 500 | 20 | 1024 | 1 | No | NA |
Note: If you plan to install the cp4a operator for more than one
instance (for example, in all namespaces), add more resources. You can use the
oc patch csv command to add more resources:
oc patch csv ibm-cp4a-operator.v23.2.0 --type=json -p '[
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/cpu",
    "value": "4"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory",
    "value": "8Gi"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/cpu",
    "value": "1500m"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/memory",
    "value": "1600Mi"
  }
]'
Table 15. Automation Decision Services default requirements for a medium profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction | Ephemeral storage Request | Ephemeral storage Limit |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ads-runtime | 500 | 1000 | 2048 | 3072 | 2 | Yes | 2.1Gi | 3Gi |
| ads-credentials | 250 | 1000 | 800 | 1536 | 2 | No | 300Mi | 600Mi |
| ads-embedded-build | 500 | 2000 | 1024 | 2048 | 2 | No | 1.1Gi | 1.4Gi |
| ads-download | 100 | 300 | 200 | 200 | 2 | No | 300Mi | 600Mi |
| ads-front | 100 | 300 | 256 | 256 | 2 | No | 300Mi | 600Mi |
| ads-gitservice | 500 | 1000 | 800 | 1536 | 2 | No | 400Mi | 700Mi |
| ads-parsing | 250 | 1000 | 800 | 1536 | 2 | No | 300Mi | 600Mi |
| ads-restapi | 500 | 1000 | 800 | 1536 | 2 | No | 300Mi | 1.2Gi |
| ads-run | 500 | 1000 | 800 | 1536 | 2 | No | 300Mi | 700Mi |
Note: Automation Decision Services also creates some jobs that request 200m CPU and 256Mi Memory.
The following jobs are created at the beginning of the installation and do not last long:
- ads-ltpa-creation
- ads-runtime-bai-registration
- ads-ads-runtime-zen-translation-job
- ads-designer-zen-translation-job
The ads-rr-integration and ads-ads-rr-as-runtime-synchro jobs are started every 15 minutes,
and are also short-lived.
Table 16. Automation Document Processing default requirements for a medium profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction | Ephemeral storage Limit (Mi) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OCR Extraction | 200 | 1000 | 1024 | 2560 | 3 | Yes | 3072 |
| Classify Process | 200 | 500 | 400 | 2048 | 2 | Yes | 3072 |
| Processing Extraction | 500 | 1000 | 1024 | 6656 | 8 | Yes | 5120 |
| Natural Language Extractor | 200 | 500 | 600 | 1440 | 2 | Yes | 3072 |
| PostProcessing | 200 | 1000 | 400 | 1229 | 2 | No | 3072 |
| Setup | 200 | 1000 | 600 | 2048 | 4 | No | 3072 |
| Deep Learning | 1000 | 2000 | 3072 | 15360 | 2 | No | 7680 |
| Backend | 200 | 1000 | 400 | 2048 | 4 | No | 4608 |
| Webhook | 200 | 300 | 400 | 500 | 2 | No | 1024 |
| RabbitMQ | 100 | 1000 | 100 | 1024 | 3 | No | 3072 |
| OCR engine 2 Runtime (wdu-runtime) (technology preview) | 200 | 4000 | 1024 | 11264 | 1 | No | 4096 |
| OCR engine 2 Extraction (wdu-extraction) (technology preview) | 300 | 1000 | 500 | 1024 | 1 | No | 3072 |
| Common Git Gateway Service (git-service) | 500 | 1000 | 512 | 1536 | 1 | No | Not applicable |
| Content Designer Repo API (CDRA) | 500 | 1000 | 1024 | 3072 | 2 | No | Not applicable |
| Content Designer UI and REST (CDS) | 500 | 1000 | 512 | 3072 | 2 | No | 2048 |
| Content Project Deployment Service (CPDS) | 500 | 1000 | 512 | 1024 | 2 | No | Not applicable |
| Mongo database (MongoDB) | 500 | 1000 | 512 | 1024 | 1 | No | Not applicable |
| Viewer service (viewone) | 500 | 2000 | 1024 | 4096 | 2 | No | Not applicable |
Important:
- Document Processing: The optional Deep Learning container can use an NVIDIA GPU if one is available. NVIDIA is the only supported GPU for Deep Learning in the Document Processing pattern. The GPU worker nodes must have a unique label, for example ibm-cloud.kubernetes.io/gpu-enabled:true. You add this label value to the deployment script or to the YAML file of your custom resource when you configure the YAML for deployment. To install the NVIDIA GPU operator, follow these installation instructions. For high availability, you need a minimum of 2 GPUs so that 2 replicas of the Deep Learning pods can be started. You can change the replica count to 1 if you have 1 GPU on the node.
- For Document Processing, the CPU of the worker nodes must meet TensorFlow AVX requirements. For more information, see Hardware requirements for TensorFlow with pip.
Note:
- Document Processing requires databases for project configuration and processing. These databases
must be Db2 or PostgreSQL. The hardware and storage requirements for the databases depend on the
system load for each document processing project.
- The previous table shows the requirements if deep learning object detection is enabled. If you
process fixed-format documents only, you might improve performance by disabling deep learning object
detection. For more information about the system requirements for Document Processing engine components in this scenario,
see IBM Automation Document Processing system requirements when disabling deep learning object detection for
fixed-format documents.
- If you deploy with only the document_processing pattern, you can reduce the
sizing for some of the required components. For more information, see IBM Automation
Document Processing system requirements for a light production deployment (document_processing
pattern only).
- It is recommended to use 70% of the available space for projects. For example, if you have 5000
MB, use 3500 MB for your projects. Because the model size is 153 MB, you can create a
maximum of 22 projects. If you want to set up more than 22 projects, you must increase the ephemeral
storage for both the Deep Learning and Processing Extraction containers.
- The OCR engine 2 Runtime container enables support for low-quality documents and handwriting
recognition when the ca_configuration.ocrextraction.use_iocr parameter is set
to auto or all.
- OCR engine 2 Extraction is an optional container that is used to make gRPC requests to the OCR
engine 2 Runtime service. You deploy it by setting the
ca_configuration.ocrextraction.use_iocr parameter to auto or all.
Table 17. Automation Workstream Services default requirements for a medium profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| Workflow Server | 500 | 2000 | 2560 | 3512 | 2 | Yes |
Notes:
Automation Workstream Services also creates some jobs that request 200m CPU and 128Mi Memory:
- basimport-job is created only with Business Automation
Studio
- db-init-job
- content-init-job
- ltpa-job
- oidc-registry-job
- oidc-registry-job-for-webpd is created only with Workflow Center
Table 18. Business Automation Application default requirements for a medium profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| App Engine | 300 | 500 | 256 | 1024 | 3 | Yes/No |
| Resource Registry | 100 | 500 | 256 | 512 | 3 | No |
Table 19. Business Automation Insights default requirements for a medium profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| Business Performance Center | 100 | 4000 | 512 | 2000 | 2 | Yes/No |
| Flink task managers | 1000 | 1000 | 1728 | 1728 | Default parallelism: 2 | Yes/No |
| Flink job manager | 1000 | 1000 | 1728 | 1728 | 1 | No |
| Management REST API | 100 | 1000 | 50 | 160 | 2 | No |
| Management back end (second container of the same management pod as the previous one) | 100 | 500 | 350 | 512 | 2 | No |
Note: Business Automation Insights relies on Kafka and
Elasticsearch from the foundational services. Business Automation Insights also creates the
bai-setup and bai-core-application-setup Kubernetes jobs, each requesting
200m of CPU and 350Mi of memory. The CPU and memory limits are set equal to the requests.
The pods of these Kubernetes jobs run for a short time at the beginning of the installation, then
complete, which frees the resources.
Table 20. Business Automation Navigator default requirements for a medium profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| Navigator | 2000 | 3000 | 4096 | 4096 | 2 | No |
| Navigator Watcher | 250 | 500 | 256 | 512 | 1 | No |
Table 21. Business Automation Studio default requirements for a medium profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| App Engine playback | 300 | 500 | 256 | 1024 | 2 | No |
| BAStudio | 1000 | 2000 | 1752 | 3072 | 2 | No |
| Resource Registry | 100 | 500 | 256 | 512 | 3 | No |
Table 22. Business Automation Workflow default requirements with or without Automation Workstream Services for a medium profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| Workflow Server | 500 | 2000 | 2560 | 3512 | 2 | Yes |
| Workflow Authoring | 500 | 4000 | 1024 | 3072 | 1 | No |
| Intelligent Task Prioritization | 500 | 2000 | 1024 | 2560 | 2 | No |
| Workforce Insights | 500 | 2000 | 1024 | 2560 | 2 | No |
Notes:
Intelligent Task Prioritization and Workforce Insights are optional and are
not supported on all platforms. For more information, see Detailed system
requirements.
Business Automation Workflow also creates
some jobs that request 200m CPU and 128Mi Memory:
- basimport-job is created only with Business Automation
Studio.
- case-init-job
- db-init-job
- content-init-job
- ltpa-job
Table 23. FileNet Content Manager default requirements for a medium profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| CPE | 1500 | 2000 | 3072 | 3072 | 2 | Yes |
| CSS | 1000 | 2000 | 8192 | 8192 | 2 | Yes |
| Enterprise Records (ER) | 500 | 1000 | 1536 | 1536 | 2 | Yes |
| Content Collector for SAP (CC4SAP) | 500 | 1000 | 1536 | 1536 | 2 | Yes |
| CMIS | 500 | 1000 | 1536 | 1536 | 2 | No |
| GraphQL | 500 | 2000 | 3072 | 3072 | 3 | No |
| Task Manager | 500 | 1000 | 1536 | 1536 | 2 | No |
Note: Not all containers are used in every workload. If a feature such as the Content Services GraphQL API is not used, that container requires fewer resources or can be left out of the deployment.
In high-volume indexing scenarios, where ingested documents are full-text indexed, the CSS usage can exceed the CPE usage, sometimes by a factor of 3 to 5.
For optional processing such as thumbnail generation or text filtering, the CPE requires at least 1 GB of native memory for each type of processing. If both types of processing are expected, add at least 2 GB to the memory requests and limits for the Content Platform Engine (CPE).
Resource requirements increase with the complexity and size of the content that is processed. Increase both memory and CPU for the CPE and CSS services to reflect the type and size of documents in your system. Resource requirements might also increase over time as the amount of data in the system grows.
Table 24. Operational Decision Manager default requirements for a medium profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction | Ephemeral storage Request | Ephemeral storage Limit |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Decision Center | 1000 | 1000 | 4096 | 8192 | 2 | Yes | 1G | 2G |
| Decision Runner | 500 | 2000 | 2048 | 2048 | 2 | Yes | 200Mi | 1G |
| Decision Server Runtime | 2000 | 2000 | 2048 | 2048 | 3 | Yes | 200Mi | 1G |
| Decision Server Console | 500 | 2000 | 512 | 2048 | 1 | No | 200Mi | 1G |
Note: Operational Decision Manager also creates an
odm-oidc-job-registration job that requests 200m CPU and 256Mi Memory. The pod is created at the
beginning of the installation and does not last long.
To achieve high availability, you must adapt the cluster configuration and physical resources.
You can set up a Db2® High Availability Disaster Recovery
(HADR) database. For more information, see Preparing your environment for disaster recovery. For high availability and fault tolerance
to be effective, set the number of replicas that you need for the respective configuration
parameters in your custom resource file. The operator then manages the scaling.
Table 25. Workflow Process Service Authoring default requirements for a medium profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| Workflow Process Service Authoring | 1000 | 2000 | 1752 | 3072 | 2 | No |
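When you allocate cluster capacity for a profile, it helps to total the per-component requests multiplied by the replica counts. A minimal sketch of that arithmetic in Python, using three example rows copied from the medium-profile tables earlier in this section (Tables 20 and 21); extend the dictionary with the components you actually deploy:

```python
# (cpu_request_m, mem_request_mi, replicas) per component, copied from the
# medium-profile default requirement tables above.
components = {
    "Navigator":         (2000, 4096, 2),  # Table 20
    "BAStudio":          (1000, 1752, 2),  # Table 21
    "Resource Registry": (100,  256,  3),  # Table 21
}

# Total requests = per-pod request multiplied by the number of replicas.
total_cpu_m = sum(cpu * n for cpu, _, n in components.values())
total_mem_mi = sum(mem * n for _, mem, n in components.values())

print(f"CPU requests: {total_cpu_m} m, memory requests: {total_mem_mi} Mi")
# → CPU requests: 6300 m, memory requests: 12464 Mi
```

The totals cover only scheduling requests; size nodes against the limits as well if you expect the pods to burst.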
Large profile hardware requirements
- Table 26 Cloud Pak for Business Automation operator default requirements for a large profile
- Table 27 Automation Decision Services default requirements for a large profile
- Table 28 Automation Document Processing default requirements for a large profile
- Table 29 Automation Workstream Services default requirements for a large profile
- Table 30 Business Automation Application default requirements for a large profile
- Table 31 Business Automation Insights default requirements for a large profile
- Table 32 Business Automation Navigator default requirements for a large profile
- Table 33 Business Automation Studio default requirements for a large profile
- Table 34 Business Automation Workflow default requirements with or without Automation Workstream Services for a large profile
- Table 35 FileNet Content Manager default requirements for a large profile
- Table 36 Operational Decision Manager default requirements for a large profile
- Table 37 Workflow Process Service Authoring default requirements for a large profile
Table 26. Cloud Pak for Business Automation operator default requirements for a large profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction | Ephemeral storage Limit |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ibm-cp4a-operator | 500 | 1000 | 256 | 2048 | 1 | No | NA |
| ibm-content-operator | 500 | 1000 | 256 | 2048 | 1 | No | NA |
| ibm-ads-operator | 10 | 500 | 64 | 512 | 1 | No | NA |
| ibm-odm-operator | 10 | 500 | 256 | 768 | 1 | No | NA |
| ibm-dpe-operator | 10 | 1000 | 256 | 768 | 1 | No | 500 |
| ibm-pfs-operator | 100 | 500 | 20 | 1024 | 1 | No | NA |
Note: If you plan to install the Cloud Pak for Business Automation operator in all namespaces, for example to serve more than one instance, add more resources. You can use the oc patch csv command to increase them:
oc patch csv ibm-cp4a-operator.v23.2.0 --type=json -p '[
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/cpu",
    "value": "4"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory",
    "value": "8Gi"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/cpu",
    "value": "1500m"
  },
  {
    "op": "replace",
    "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/memory",
    "value": "1600Mi"
  }
]'
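The payload that follows -p must be valid JSON: a trailing comma after the last operation causes oc patch to reject the whole command. A quick, hedged way to sanity-check a payload before running the command is to parse it with Python (the two operations below are a shortened example of the patch above, not the full list):

```python
import json

# JSON Patch payload; paths mirror the CSV resource layout used by oc patch.
payload = '''[
  {"op": "replace",
   "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/cpu",
   "value": "4"},
  {"op": "replace",
   "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory",
   "value": "8Gi"}
]'''

# json.loads raises ValueError on malformed JSON, such as trailing commas.
ops = json.loads(payload)
for op in ops:
    assert op["op"] == "replace" and op["path"].startswith("/spec/")
print(f"{len(ops)} valid patch operations")
```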
Table 27. Automation Decision Services default requirements for a large profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction | Ephemeral storage Request | Ephemeral storage Limit |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ads-runtime | 1000 | 2000 | 2048 | 3072 | 2 | Yes | 2.1Gi | 2.6Gi |
| ads-credentials | 250 | 2000 | 800 | 1536 | 2 | No | 300Mi | 700Mi |
| ads-embedded-build | 500 | 2000 | 1024 | 2048 | 2 | No | 1.1Gi | 1.5Gi |
| ads-download | 100 | 300 | 200 | 200 | 2 | No | 300Mi | 700Mi |
| ads-front | 100 | 300 | 256 | 256 | 2 | No | 300Mi | 700Mi |
| ads-gitservice | 500 | 2000 | 800 | 1536 | 2 | No | 400Mi | 800Mi |
| ads-parsing | 250 | 2000 | 800 | 1536 | 2 | No | 300Mi | 700Mi |
| ads-restapi | 500 | 2000 | 800 | 1536 | 2 | No | 300Mi | 1.2Gi |
| ads-run | 500 | 2000 | 800 | 1536 | 2 | No | 300Mi | 1Gi |
Note: Automation Decision Services also creates some jobs that request 200m CPU and 256Mi Memory.
The following jobs are created at the beginning of the installation and do not last long:
- ads-ltpa-creation
- ads-runtime-bai-registration
- ads-ads-runtime-zen-translation-job
- ads-designer-zen-translation-job
The ads-rr-integration and ads-ads-rr-as-runtime-synchro jobs are started every 15 minutes,
and are also short-lived.
Table 28. Automation Document Processing default requirements for a large profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction | Ephemeral storage Limit (Mi) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| OCR Extraction | 200 | 1000 | 1024 | 2560 | 5 | Yes | 3072 |
| Classify Process | 200 | 500 | 400 | 2048 | 2 | Yes | 3072 |
| Processing Extraction | 500 | 1000 | 1024 | 6656 | 15 | Yes | 5120 |
| Natural Language Extractor | 200 | 500 | 600 | 1440 | 2 | Yes | 3072 |
| PostProcessing | 200 | 1000 | 400 | 1229 | 2 | No | 3072 |
| Setup | 200 | 1000 | 600 | 2048 | 6 | No | 3072 |
| Deep Learning | 1000 | 2000 | 3072 | 15360 | 2 | No | 7680 |
| Backend | 200 | 1000 | 400 | 2048 | 6 | No | 4608 |
| Webhook | 200 | 300 | 400 | 500 | 3 | No | 1024 |
| RabbitMQ | 100 | 1000 | 100 | 1024 | 3 | No | 3072 |
| OCR engine 2 Runtime (wdu-runtime) (technology preview) | 200 | 4000 | 1024 | 11264 | 1 | No | 4096 |
| OCR engine 2 Extraction (wdu-extraction) (technology preview) | 300 | 1000 | 500 | 1024 | 1 | No | 3072 |
| Common Git Gateway Service (git-service) | 500 | 1000 | 512 | 1536 | 2 | No | Not applicable |
| Content Designer Repo API (CDRA) | 500 | 2000 | 1024 | 3072 | 2 | No | Not applicable |
| Content Designer UI and REST (CDS) | 500 | 2000 | 512 | 3072 | 2 | No | 2048 |
| Content Project Deployment Service (CPDS) | 500 | 2000 | 512 | 2048 | 2 | No | Not applicable |
| Mongo database (MongoDB) | 500 | 1000 | 512 | 1024 | 1 | No | Not applicable |
| Viewer service (viewone) | 1000 | 3000 | 3072 | 6144 | 2 | No | Not applicable |
Important:
- Document Processing: The optional Deep Learning container can use an NVIDIA GPU if one is available. NVIDIA is the only supported GPU for Deep Learning in the Document Processing pattern. The GPU worker nodes must have a unique label, for example ibm-cloud.kubernetes.io/gpu-enabled:true. You add this label value to the deployment script or to the YAML file of your custom resource when you configure the YAML for deployment. To install the NVIDIA GPU operator, follow these installation instructions. For high availability, you need a minimum of 2 GPUs so that 2 replicas of the Deep Learning pods can be started. You can change the replica count to 1 if you have only 1 GPU on the node.
- For Document Processing, the CPU of the worker nodes must meet TensorFlow AVX requirements. For more information, see Hardware requirements for TensorFlow with pip.
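On Linux worker nodes, you can verify AVX support by reading the CPU feature flags that the kernel reports. A minimal sketch, assuming a Linux /proc filesystem is available on the node:

```python
def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported by the Linux kernel."""
    flags = set()
    try:
        with open(path) as f:
            for line in f:
                # x86 kernels report a "flags" line per logical CPU.
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
    except OSError:
        pass  # not running on a Linux /proc filesystem
    return flags

# TensorFlow binary wheels require AVX on x86; list the AVX variants present.
avx = sorted(f for f in cpu_flags() if f.startswith("avx"))
print("AVX support:", avx or "none detected")
```

Run this on each candidate worker node (or in a debug pod with the host /proc mounted); an empty result means the node cannot run the standard TensorFlow builds.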
Note:
- Document Processing requires databases for project configuration and processing. These databases must be Db2 or PostgreSQL. The hardware and storage requirements for the databases depend on the system load for each document processing project.
- The previous table shows the requirements if deep learning object detection is enabled. If you process only fixed-format documents, you might improve performance by disabling deep learning object detection. For more information about the system requirements for Document Processing engine components in this scenario, see IBM Automation Document Processing system requirements when disabling deep learning object detection for fixed-format documents.
- If you deploy with only the document_processing pattern, you can reduce the sizing for some of the required components. For more information, see IBM Automation Document Processing system requirements for a light production deployment (document_processing pattern only).
- It is recommended to use 70% of the available space for projects. For example, if you have 5000 MB available, use 3500 MB for your projects. Because each model is 153 MB, you can create a maximum of 22 projects. If you want to set up more than 22 projects, increase the ephemeral storage for both the Deep Learning and Processing Extraction containers.
- The OCR engine 2 Runtime container enables support for low-quality documents and handwriting recognition when the ca_configuration.ocrextraction.use_iocr parameter is set to auto or all.
- OCR engine 2 Extraction is an optional container that makes gRPC requests to the OCR engine 2 Runtime service. You deploy it by setting the ca_configuration.ocrextraction.use_iocr parameter to auto or all.
Table 29. Automation Workstream Services default requirements for a large profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| Workflow Server | 1000 | 2000 | 3060 | 4000 | 4 | Yes |
Notes:
Automation Workstream Services also creates some jobs that request 200m CPU and 128Mi memory:
- basimport-job is created only with Business Automation Studio
- db-init-job
- content-init-job
- ltpa-job
- oidc-registry-job
- oidc-registry-job-for-webpd is created only with Workflow Center
Table 30. Business Automation Application default requirements for a large profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| App Engine | 300 | 500 | 256 | 1024 | 6 | Yes/No |
| Resource Registry | 100 | 500 | 256 | 512 | 1 | No |
Table 31. Business Automation Insights default requirements for a large profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| Business Performance Center | 100 | 4000 | 512 | 2000 | 2 | Yes/No |
| Flink task managers | 1000 | 1000 | 1728 | 1728 | Default parallelism 2 | Yes/No |
| Flink job manager | 1000 | 1000 | 1728 | 1728 | 1 | No |
| Management REST API | 100 | 1000 | 50 | 160 | 2 | No |
| Management back end (second container of the same management pod as the previous one) | 100 | 500 | 350 | 512 | 2 | No |
Note: Business Automation Insights relies on Kafka and Elasticsearch from Foundational Services. Business Automation Insights also creates the bai-setup and bai-core-application-setup Kubernetes jobs, each of which requests 200m for CPU and 350Mi for memory. The CPU and memory limits are set equal to the requests. The pods of these Kubernetes jobs run for a short time at the beginning of the installation, then complete, freeing the resources.
Table 32. Business Automation Navigator default requirements for a large profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| Navigator | 2000 | 4000 | 6144 | 6144 | 6 | No |
| Navigator Watcher | 250 | 500 | 256 | 512 | 1 | No |
Table 33. Business Automation Studio default requirements for a large profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| App Engine playback | 300 | 500 | 256 | 1024 | 4 | No |
| BAStudio | 2000 | 4000 | 1752 | 3072 | 2 | No |
| Resource Registry | 100 | 500 | 256 | 512 | 3 | No |
Table 34. Business Automation Workflow default requirements with or without Automation Workstream Services for a large profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| Workflow Server | 1000 | 2000 | 3060 | 4000 | 4 | Yes |
| Workflow Authoring | 1000 | 2000 | 2000 | 3000 | 2 | No |
| Intelligent Task Prioritization | 500 | 2000 | 1024 | 2560 | 2 | No |
| Workforce Insights | 500 | 2000 | 1024 | 2560 | 2 | No |
Notes:
Intelligent Task Prioritization and Workforce Insights are optional and are not supported on all platforms. For more information, see Detailed system requirements.
Business Automation Workflow also creates some jobs that request 200m CPU and 128Mi memory:
- basimport-job is created only with Business Automation Studio.
- case-init-job
- db-init-job
- content-init-job
- ltpa-job
Table 35. FileNet Content Manager default requirements for a large profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| CPE | 3000 | 4000 | 8192 | 8192 | 2 | Yes |
| CSS | 2000 | 4000 | 8192 | 8192 | 2 | Yes |
| Enterprise Records (ER) | 500 | 1000 | 1536 | 1536 | 2 | Yes |
| Content Collector for SAP (CC4SAP) | 500 | 1000 | 1536 | 1536 | 2 | Yes |
| CMIS | 500 | 1000 | 1536 | 1536 | 2 | No |
| GraphQL | 1000 | 2000 | 3072 | 3072 | 6 | No |
| Task Manager | 500 | 1000 | 1536 | 1536 | 2 | No |
Note: Not all containers are used in every workload. If a feature such as the Content Services GraphQL API is not used, that container requires fewer resources or can be left out of the deployment.
In high-volume indexing scenarios, where ingested documents are full-text indexed, the CSS usage can exceed the CPE usage, sometimes by a factor of 3 to 5.
For optional processing such as thumbnail generation or text filtering, the CPE requires at least 1 GB of native memory for each type of processing. If both types of processing are expected, add at least 2 GB to the memory requests and limits for the CPE.
Resource requirements increase with the complexity and size of the content that is processed. Increase both memory and CPU for the CPE and CSS services to reflect the type and size of documents in your system. Resource requirements might also increase over time as the amount of data in the system grows.
Table 36. Operational Decision Manager default requirements for a large profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction | Ephemeral storage Request | Ephemeral storage Limit |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Decision Center | 2000 | 2000 | 4096 | 16384 | 2 | Yes | 1G | 2G |
| Decision Runner | 500 | 4000 | 2048 | 2048 | 2 | Yes | 200Mi | 1G |
| Decision Server Runtime | 2000 | 2000 | 4096 | 4096 | 6 | Yes | 200Mi | 1G |
| Decision Server Console | 500 | 2000 | 512 | 4096 | 1 | No | 200Mi | 1G |
Note: Operational Decision Manager also creates an
odm-oidc-job-registration job that requests 200m CPU and 256Mi Memory. The pod is created at the
beginning of the installation and does not last long.
To achieve high availability, you must adapt the cluster configuration and physical resources.
You can set up a Db2 High Availability Disaster Recovery
(HADR) database. For more information, see Preparing your environment for disaster recovery. For high availability and fault tolerance
to be effective, set the number of replicas that you need for the respective configuration
parameters in your custom resource file. The operator then manages the scaling.
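As a sketch of what setting replica counts looks like, the fragment below shows the general shape of a custom resource with a per-component replica setting. The section and parameter names other than shared_configuration are hypothetical placeholders, not the exact schema; check the configuration reference for the component that you are scaling:

```yaml
# Illustrative custom resource fragment; parameter paths below
# shared_configuration are placeholders, not the documented schema.
spec:
  shared_configuration:
    sc_deployment_type: "production"
  some_component_configuration:   # hypothetical component section
    replica_count: 2              # desired number of pod replicas
```

After you apply the updated custom resource, the operator reconciles the deployment to the requested replica count; you do not scale the pods directly.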
Table 37. Workflow Process Service Authoring default requirements for a large profile

| Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/nonproduction |
| --- | --- | --- | --- | --- | --- | --- |
| Workflow Process Service Authoring | 2000 | 4000 | 1752 | 3072 | 2 | No |