System requirements
For each stage in your operations (a minimum of three stages is expected: development, preproduction, and production), you must allocate a cluster of nodes before you install the Cloud Pak. Development, preproduction, and production are best run on different compute nodes. To achieve resource isolation, each namespace acts as a virtual cluster within the physical cluster, and a Cloud Pak deployment is scoped to a single namespace. High-level resource objects are scoped within namespaces. Low-level resources, such as nodes and persistent volumes, are not in namespaces.
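For example, on OpenShift you might create a dedicated project (namespace) for each Cloud Pak deployment before you install it. This is a minimal sketch; the project name is an example, not a required value.

```
# Create a dedicated namespace (OpenShift project) for one Cloud Pak deployment.
# The project name cp4ba-prod is an example only.
oc new-project cp4ba-prod
```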
The Detailed system requirements page provides cluster requirements guidelines for IBM Cloud Pak® for Business Automation. For example, to find information about the supported versions of OpenShift Container Platform, open the rendered report for 24.0.0 and go to the Containers tab.
- Hardware architecture: Intel (amd64 or x86_64, the 64-bit edition for Linux® x86) on all platforms, Linux on IBM Z, or Linux on Power.
- Node counts: Dual compute nodes for non-production and production clusters. A minimum of three nodes is needed for medium and large production environments and large test environments. Any cluster configuration needs to adapt to the size of the project and the expected workload.
Licensed pods
The following table lists all the components that are licensed in production deployments. Each Cloud Pak for Business Automation capability has at least one licensed component, as well as components that are not licensed. For more information, see Licensing.
Capability component | Licensed for production/non-production | Pod name |
---|---|---|
Application Engine | Yes/No. No for playback functions. No for automation services only. No for external workplace use. | <metadata.name>-<ae instance name>-aae-ae-deployment |
Automation Decision Services: Runtime | Yes | <metadata.name>-ads-runtime |
Automation Document Processing: OCR Extraction | Yes | <metadata.name>-ocr-extraction |
Automation Document Processing: Classify Process | Yes | <metadata.name>-classify-process |
Automation Document Processing: Processing Extraction | Yes | <metadata.name>-processing-extraction |
Automation Document Processing: Natural Language Extractor | Yes | <metadata.name>-natural-language-extractor |
Business Automation Insights: BPC | Yes/No | <metadata.name>-bai-bpmn |
Business Automation Insights: Cockpit | Yes/No | <metadata.name>-insights-engine-cockpit |
Business Automation Insights: Engine | Yes/No | <metadata.name>-insights-engine |
Business Automation Insights: Flink task managers | Yes/No | <metadata.name>-insights-engine-flink |
Business Automation Workflow Server | Yes | <metadata.name>-<baw_instance>-baw-server-n |
Operational Decision Manager: Decision Center | Yes | <metadata.name>-odm-decisioncenter |
Operational Decision Manager: Decision Runner | Yes (always licensed as non-production) | <metadata.name>-odm-decisionrunner |
Operational Decision Manager: Decision Server Runtime | Yes | <metadata.name>-odm-decisionserverruntime |
FileNet Content Manager: CPE | Yes. No for Automation Document Processing. No for Business Automation Workflow. No for Automation Workstream Services. No for Business Automation Application. | <metadata.name>-cpe-deploy |
FileNet Content Manager: CSS | Yes | <metadata.name>-css-deploy |
FileNet Content Manager: Enterprise Records | Yes | <metadata.name>-ier-deploy |
FileNet Content Manager: Content Collector for SAP | Yes | <metadata.name>-iccsap-deploy |
By default, <metadata.name> is icp4adeploy.
Deployment profiles
Based on your cluster requirements, you can pick a deployment profile (sc_deployment_profile_size) and enable it during installation. Cloud Pak for Business Automation provides small, medium, and large deployment profiles. You can set the profile during installation, in an update, and during an upgrade. The default profile is small. Before you install the Cloud Pak, you can change the profile to medium or large. You can scale a profile up or down anytime after installation. However, if you install with a medium profile and another Cloud Pak specifies a medium or large profile, the foundational services profile remains as it is when you scale down to small. You can scale down the foundational services to small only if no other Cloud Pak specifies a medium or large size.
It is recommended that you set the IBM Cloud Platform UI (Zen) service to the same size as Cloud Pak for Business Automation. For a small-sized deployment, the size of the Cloud Pak foundational services instance is set to starterset. The possible values include small, medium, and large. To determine the real size that is needed for Cloud Pak foundational services, do proper performance testing with your intended workload and modify the CRs to the correct size. For more information, see Hardware requirements and recommendations for foundational services.
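For example, you might set the profile by patching the custom resource. This is a sketch only: the CR kind and name (icp4acluster, icp4adeploy) and the placement of sc_deployment_profile_size under shared_configuration are assumptions, so verify them against your own custom resource.

```
# Sketch: change the deployment profile on an existing custom resource.
# The CR kind/name and the shared_configuration location are assumptions; verify against your CR.
oc patch icp4acluster icp4adeploy --type=merge \
  -p '{"spec":{"shared_configuration":{"sc_deployment_profile_size":"medium"}}}'
```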
The following table describes each deployment profile.
Profile | Description | Scaling (per 8-hour day) | Minimum number of worker nodes |
---|---|---|---|
Small (no HA) | For environments that are used by 10 developers and 25 users. For environments that are used by a single department with a few users; useful for application development. | | 8 |
Medium | For environments that are used by 20 developers and 125 users. For environments that are used by a single department and by limited users. | | 16 |
Large | For environments that are used by 50 developers and 625 users. For environments that are shared by multiple departments and users. | | 32 |
You can use custom resource templates to update the hardware requirements of the services that you want to install.
The following sections provide the default resources for each capability. For more information about the minimum requirements of foundational services, see Hardware requirements and recommendations for foundational services.
- Small profile hardware requirements
- Medium profile hardware requirements
- Large profile hardware requirements
The default resources for the small, medium, and large profiles are derived under specific operating and environment conditions. Due to differences in hardware, networking, and storage, the resources vary for each workload in different environments. It is important that you run a performance test with a peak workload in a test or UAT environment. Monitor the resource usage (for example, CPU and memory) to determine the actual usage for each component. Based on that usage, adjust the CPU request, CPU limit, memory request, and memory limit in the custom resource for each component.
Ephemeral storage is storage that is tied to the lifecycle of a pod: when a pod finishes or is restarted, that storage is deleted. It is used whenever your workloads need or generate transient local data, such as logging. Use the /bin/df tool to monitor ephemeral storage usage on the volume where ephemeral container data is located. You can manage local ephemeral storage by setting quotas that define the limit ranges and the number of requests.
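As an illustration of that last point, the following sketch creates a ResourceQuota that caps ephemeral storage in the Cloud Pak namespace. The namespace name and the values are examples only; size them from your own monitoring.

```
# Sketch: cap ephemeral-storage requests and limits in the Cloud Pak namespace.
# The namespace and values are examples; derive real values from monitoring (for example, /bin/df).
oc apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ephemeral-storage-quota
  namespace: cp4ba-prod
spec:
  hard:
    requests.ephemeral-storage: 50Gi
    limits.ephemeral-storage: 100Gi
EOF
```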
Small profile hardware requirements
- Table 3 Cloud Pak for Business Automation operator default requirements for a small profile
- Table 4 EDB Postgres default requirements for a small profile
- Table 5 Foundation default requirements for a small profile
- Table 6 Automation Decision Services default requirements for a small profile
- Table 7 Automation Document Processing default requirements for a small profile
- Table 8 Automation Workstream Services default requirements for a small profile
- Table 9 Business Automation Application default requirements for a small profile
- Table 10 Business Automation Workflow default requirements with or without Automation Workstream Services for a small profile
- Table 11 FileNet® Content Manager default requirements for a small profile
- Table 12 Operational Decision Manager default requirements for a small profile
- Table 13 Workflow Process Service Authoring default requirements for a small profile
- For IBM Workflow Process Service Runtime, see Planning for a Workflow Process Service Runtime deployment.
- For IBM FileNet Content Manager, see Planning for a CP4BA FileNet Content Manager production deployment.
- For IBM Process Federation Server, see Planning for a CP4BA Process Federation Server production deployment.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Ephemeral Storage Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|---|
ibm-cp4a-operator | 500 | 1000 | 256 | 2048 | NA | 1 | No |
ibm-content-operator | 500 | 1000 | 256 | 2048 | NA | 1 | No |
ibm-ads-operator | 10 | 500 | 64 | 512 | NA | 1 | No |
ibm-odm-operator | 10 | 500 | 256 | 768 | NA | 1 | No |
ibm-dpe-operator | 10 | 1000 | 256 | 768 | 750 | 1 | No |
ibm-pfs-operator | 100 | 500 | 20 | 1024 | NA | 1 | No |
ibm-workflow-operator | 100 | 500 | 20 | 1024 | NA | 1 | No |
ibm-cp4a-wfps-operator | 100 | 500 | 20 | 500 | NA | 1 | No |
ibm-insights-engine-operator | 500 | 1000 | 256 | 2048 | 800 | 1 | No |
CPU Request and CPU Limit values are measured in units of millicore (m).
Memory Request and Memory Limit values are measured in units of mebibyte (Mi).
You can use the oc patch csv command to add more resources:
oc patch csv ibm-cp4a-operator.v24.0.0 --type=json -p '[
{
"op":"replace",
"path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/cpu",
"value": "4"
},
{
"op":"replace",
"path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory",
"value": "8Gi"
},
{
"op":"replace",
"path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/cpu",
"value": "1500m"
},
{
"op":"replace",
"path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/memory",
"value": "1600Mi"
}
]'
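To confirm that the patch was applied, you can read back the container resources from the cluster service version (CSV); the CSV name must match the version that is installed in your cluster.

```
# Verify the patched operator resources; the CSV name must match your installed version.
oc get csv ibm-cp4a-operator.v24.0.0 \
  -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[0].resources}'
```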
CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production | Ephemeral storage Limit (Mi) | Ephemeral storage Request (Mi) |
---|---|---|---|---|---|---|---|
1000 | 2000 | 4096 | 8192 | 1 | No | 1024 | 500 |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production | Ephemeral Storage Request (Mi) | Ephemeral Storage Limit (Mi) |
---|---|---|---|---|---|---|---|---|
Business Automation Insights: Business Performance Center | 100 | 4000 | 512 | 2000 | 1 | Yes/No | 1050 | 1150 |
Business Automation Insights: Flink task managers | 1000 | 1000 | 1728 | 1728 | Default parallelism: 8 | Yes/No | 500 | 2048 |
Business Automation Insights: Flink job manager | 1000 | 1000 | 1728 | 1728 | 1 | No | 500 | 2048 |
Business Automation Insights: Management REST API | 100 | 1000 | 50 | 160 | 1 | No | 371 | 395 |
Business Automation Insights: Management back end | 100 | 500 | 350 | 512 | 1 | No | 381 | 410 |
Navigator | 1000 | 1000 | 3072 | 3072 | 1 | No | ||
Navigator Watcher | 250 | 500 | 256 | 512 | 1 | No | ||
App Engine playback | 300 | 500 | 256 | 1024 | 1 | No | 512 | 2048 |
BAStudio | 1100 | 2000 | 1752 | 3072 | 1 | No | 1024 | 2048 |
Resource Registry | 100 | 500 | 256 | 512 | 1 | No | 128 | 2048 |
Business Automation Insights also creates the bai-setup and bai-core-application-setup jobs, which request 200 m of CPU and 350 Mi of memory. The CPU and memory limits are the same as the requests. The pods for these jobs run for a short time at the beginning of the installation and then stop, and the resources are then released.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Ephemeral Storage Request (Mi) | Ephemeral Storage Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|---|---|
ads-runtime | 500 | 1000 | 2048 | 3072 | 300 | 500 | 1 | Yes |
ads-credentials | 250 | 1000 | 800 | 1536 | 300 | 600 | 1 | No |
ads-gitservice | 500 | 1000 | 800 | 1536 | 400 | 600 | 1 | No |
ads-parsing | 250 | 1000 | 800 | 1536 | 300 | 500 | 1 | No |
ads-restapi | 500 | 1000 | 800 | 1536 | 300 | 1228.8 | 1 | No |
ads-run | 500 | 1000 | 800 | 1536 | 300 | 700 | 1 | No |
Automation Decision Services also creates the following short-lived jobs:
- ads-ltpa-creation
- ads-runtime-bai-registration
- ads-ads-runtime-zen-translation-job
- ads-designer-zen-translation-job
The ads-rr-integration and ads-ads-rr-as-runtime-synchro jobs are started every 15 minutes, and are also short-lived.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Ephemeral Storage Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|---|
OCR Extraction | 200 | 1000 | 1024 | 2560 | 3072 | 3 | Yes |
Classify Process | 200 | 500 | 400 | 2048 | 3072 | 1 | Yes |
Processing Extraction | 500 | 1000 | 1024 | 6656 | 5120 | 5 | Yes |
Natural Language Extractor | 200 | 500 | 600 | 1440 | 3072 | 2 | Yes |
PostProcessing | 200 | 1000 | 400 | 1229 | 3072 | 1 | No |
Setup | 200 | 1000 | 600 | 2048 | 3072 | 2 | No |
Deep Learning | 1000 | 2000 | 3072 | 15360 | 7680 | 2 | No |
Backend | 200 | 1000 | 400 | 2048 | 4608 | 2 | No |
Webhook | 200 | 300 | 400 | 500 | 1024 | 1 | No |
RabbitMQ | 100 | 1000 | 100 | 1024 | 3072 | 2 | No |
OCR engine 2 Runtime (wdu-runtime) | 200 | 4000 | 1024 | 7629 | 4096 | 1 | No |
OCR engine 2 Extraction (wdu-extraction) | 300 | 1000 | 500 | 1024 | 3072 | 1 | No |
Common Git Gateway Service (git-service) | 500 | 1000 | 512 | 1536 | Not applicable | 1 | No |
Content Designer Repo API (CDRA) | 500 | 1000 | 1024 | 3072 | Not applicable | 1 | No |
Content Designer UI and REST (CDS) | 500 | 1000 | 512 | 3072 | 2048 | 1 | No |
Content Project Deployment Service (CPDS) | 500 | 1000 | 512 | 3072 | Not applicable | 1 | No |
Mongo database (mongodb) | 500 | 1000 | 512 | 1024 | Not applicable | 1 | No |
Viewer service (viewone) | 500 | 1000 | 1024 | 3072 | Not applicable | 1 | No |
- Document Processing: The optional Deep Learning container can use an NVIDIA GPU if one is available. NVIDIA is the only supported GPU for Deep Learning in the Document Processing pattern. The GPU worker nodes must have a unique label, for example ibm-cloud.kubernetes.io/gpu-enabled:true. You add this label value to the deployment script or to the YAML file of your custom resource when you configure the YAML for deployment (see the example commands after this list). To install the NVIDIA GPU operator, follow these installation instructions. For high availability, you need a minimum of 2 GPUs so that 2 replicas of the Deep Learning pods can be started. You can change the replica count to 1 if you have only 1 GPU on the node.
- For Document Processing, the CPU of the worker nodes must meet TensorFlow AVX requirements. For more information, see Hardware requirements for TensorFlow with pip.
- Document Processing requires databases for project configuration and processing. These databases must be Db2 or PostgreSQL. The hardware and storage requirements for the databases depend on the system load for each document processing project.
- The previous table shows the requirements if deep learning object detection is enabled. If you process fixed-format documents, you might want to improve performance by disabling deep learning object detection. For more information about the system requirements for Document Processing engine components in this scenario, see IBM Automation Document Processing system requirements when disabling deep learning object detection for fixed-format documents.
- If you deploy with only the document_processing pattern, you can reduce the sizing for some of the required components. For more information, see IBM Automation Document Processing system requirements for a light production deployment (document_processing pattern only).
- Use a maximum of 70% of the available space for projects. For example, if you have 5000 MB, use 3500 MB for your projects. Because the model size is 153 MB, you can create a maximum of 22 projects. If you want to set up more than 22 projects, increase the ephemeral storage for both the Deep Learning and Processing Extraction containers.
- The OCR engine 2 Runtime container enables support for low-quality documents and handwriting recognition when the ca_configuration.ocrextraction.use_iocr parameter is set to auto or all.
- OCR engine 2 Extraction is an optional container that is used to make gRPC requests to the OCR engine 2 Runtime service. You deploy it by setting the ca_configuration.ocrextraction.use_iocr parameter to auto or all (see the example after this list).
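The following sketch illustrates the two settings from the notes above: labeling a GPU worker node and enabling OCR engine 2 with the use_iocr parameter. The node name, the CR kind and name, and the exact placement of ca_configuration in the custom resource are assumptions; verify them against your deployment.

```
# Label a GPU worker node so that Deep Learning pods can be scheduled on it.
# Replace <node-name> with the name of your GPU worker node.
oc label node <node-name> ibm-cloud.kubernetes.io/gpu-enabled=true

# Sketch: enable OCR engine 2 by setting use_iocr to "auto".
# The CR kind/name and the ca_configuration structure are assumptions; verify against your CR.
oc patch icp4acluster icp4adeploy --type=merge \
  -p '{"spec":{"ca_configuration":{"ocrextraction":{"use_iocr":"auto"}}}}'
```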
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Workflow Server | 500 | 2000 | 2048 | 3060 | 1 | Yes |
- For components that are included in your Automation Workstream Services instance from FileNet Content Manager, see Table 11.
- Automation Workstream Services also creates some jobs that request 200m CPU and 128Mi Memory:
- basimport-job is created only with Business Automation Studio
- db-init-job
- content-init-job
- ltpa-job
- oidc-registry-job
- oidc-registry-job-for-webpd is created only with Workflow Center
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Ephemeral Storage Request (Mi) | Ephemeral Storage Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|---|---|
App Engine | 300 | 500 | 256 | 1024 | 512 | 2048 | 1 | Yes/No |
Resource Registry | 100 | 500 | 256 | 512 | 128 | 2048 | 1 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Workflow Server | 500 | 2000 | 2048 | 3060 | 1 | Yes |
Workflow Authoring | 500 | 2000 | 2048 | 3072 | 1 | No |
Intelligent Task Prioritization | 500 | 2000 | 1024 | 2560 | 1 | No |
Workforce Insights | 500 | 2000 | 1024 | 2560 | 1 | No |
- For components that are included in your Business Automation Workflow instance from FileNet Content Manager, see Table 11.
- Intelligent Task Prioritization and Workforce Insights are optional and are not supported on all platforms. For more information, see Detailed system requirements.
- Business Automation Workflow also creates
some jobs that request 200m CPU and 128Mi Memory:
- basimport-job is created only with Business Automation Studio.
- case-init-job
- db-init-job
- content-init-job
- ltpa-job
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
CPE | 1000 | 1000 | 3072 | 3072 | 1 | Yes |
CSS | 1000 | 1000 | 4096 | 4096 | 1 | Yes |
Enterprise Records (ER) | 500 | 1000 | 1536 | 1536 | 1 | Yes |
Content Collector for SAP (CC4SAP) | 500 | 1000 | 1536 | 1536 | 1 | Yes |
CMIS | 500 | 1000 | 1536 | 1536 | 1 | No |
GraphQL | 500 | 1000 | 1536 | 1536 | 1 | No |
Task Manager | 500 | 1000 | 1536 | 1536 | 1 | No |
In high-volume indexing scenarios, where ingested documents are full-text indexed, the CSS utilization can exceed the CPE utilization. In some cases, it might be 3 - 5 times larger.
For optional processing such as thumbnail generation or text filtering, at least 1 GB of native memory is required by the CPE for each. If both types of processing are expected, add at least 2 GB to the memory requests/limits for the CPE.
As content is processed, resource requirements increase with the complexity and size of the content. Increase both memory and CPU for the CPE and CSS services to reflect the type and size of documents in your system. Resource requirements might also increase over time as the amount of data in the system grows.
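To decide whether the CPE and CSS requests and limits need to be raised, you can watch the live usage of their pods. The namespace and the pod name filters below are assumptions based on the default pod names.

```
# Observe current CPU and memory usage of the CPE and CSS pods.
# The namespace and pod name filters are examples based on the default pod names.
oc adm top pods -n cp4ba-prod | grep -E 'cpe-deploy|css-deploy'
```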
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Ephemeral Storage Request (Mi) | Ephemeral Storage Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|---|---|
Decision Center | 1000 | 1000 | 4096 | 4096 | 1024 | 2048 | 1 | Yes |
Decision Runner | 500 | 500 | 2048 | 2048 | 200 | 1024 | 1 | Yes |
Decision Server Runtime | 500 | 1000 | 2048 | 2048 | 200 | 1024 | 1 | Yes |
Decision Server Console | 500 | 500 | 1024 | 1024 | 200 | 1024 | 1 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
IBM Workflow Process Service Authoring | 1100 | 2000 | 1752 | 3072 | 1 | No |
Medium profile hardware requirements
- Table 14 Cloud Pak for Business Automation operator default requirements for a medium profile
- Table 15 EDB Postgres default requirements for a medium profile
- Table 16 Foundation default requirements for a medium profile
- Table 17 Automation Decision Services default requirements for a medium profile
- Table 18 Automation Document Processing default requirements for a medium profile
- Table 19 Automation Workstream Services default requirements for a medium profile
- Table 20 Business Automation Application default requirements for a medium profile
- Table 21 Business Automation Workflow default requirements with or without Automation Workstream Services for a medium profile
- Table 22 FileNet Content Manager default requirements for a medium profile
- Table 23 Operational Decision Manager default requirements for a medium profile
- Table 24 Workflow Process Service Authoring default requirements for a medium profile
- For IBM Workflow Process Service Runtime, see Planning for a Workflow Process Service Runtime deployment.
- For IBM FileNet Content Manager, see Planning for a CP4BA FileNet Content Manager production deployment.
- For IBM Process Federation Server, see Planning for a CP4BA Process Federation Server production deployment.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Ephemeral Storage Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|---|
ibm-cp4a-operator | 500 | 1000 | 256 | 2048 | NA | 1 | No |
ibm-content-operator | 500 | 1000 | 256 | 2048 | NA | 1 | No |
ibm-ads-operator | 10 | 500 | 64 | 512 | NA | 1 | No |
ibm-odm-operator | 10 | 500 | 256 | 768 | NA | 1 | No |
ibm-dpe-operator | 10 | 1000 | 256 | 768 | 500 | 1 | No |
ibm-pfs-operator | 100 | 500 | 20 | 1024 | NA | 1 | No |
ibm-workflow-operator | 100 | 500 | 20 | 1024 | NA | 1 | No |
ibm-cp4a-wfps-operator | 100 | 500 | 20 | 500 | NA | 1 | No |
ibm-insights-engine-operator | 500 | 1000 | 256 | 2048 | 800 | 1 | No |
You can use the oc patch csv command to add more resources:
oc patch csv ibm-cp4a-operator.v24.0.0 --type=json -p '[
{
"op":"replace",
"path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/cpu",
"value": "4"
},
{
"op":"replace",
"path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory",
"value": "8Gi"
},
{
"op":"replace",
"path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/cpu",
"value": "1500m"
},
{
"op":"replace",
"path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/memory",
"value": "1600Mi"
}
]'
CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production | Ephemeral storage Limit (Mi) | Ephemeral storage Request (Mi) |
---|---|---|---|---|---|---|---|
1000 | 4000 | 4096 | 8192 | 1 | No | 1024 | 500 |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production | Ephemeral Storage Request (Mi) | Ephemeral Storage Limit (Mi) |
---|---|---|---|---|---|---|---|---|
Business Automation Insights: Business Performance Center | 100 | 4000 | 512 | 2000 | 2 | Yes/No | 1050 | 1150 |
Business Automation Insights: Flink task managers | 1000 | 1000 | 1728 | 1728 | Default parallelism: 8 | Yes/No | 500 | 2048 |
Business Automation Insights: Flink job manager | 1000 | 1000 | 1728 | 1728 | 1 | No | 500 | 2048 |
Business Automation Insights: Management REST API | 100 | 1000 | 50 | 160 | 2 | No | 371 | 395 |
Business Automation Insights: Management back end | 100 | 500 | 350 | 512 | 2 | No | 381 | 410 |
Navigator | 2000 | 3000 | 4096 | 4096 | 2 | No | ||
Navigator Watcher | 250 | 500 | 256 | 512 | 1 | No | ||
App Engine playback | 300 | 500 | 256 | 1024 | 2 | No | 512 | 2048 |
BAStudio | 1000 | 2000 | 1752 | 3072 | 2 | No | 1024 | 2048 |
Resource Registry | 100 | 500 | 256 | 512 | 3 | No | 128 | 2048 |
Business Automation Insights also creates the bai-setup and bai-core-application-setup jobs, which request 200 m of CPU and 350 Mi of memory. The CPU and memory limits are the same as the requests. The pods for these jobs run for a short time at the beginning of the installation and then stop, and the resources are then released.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Ephemeral Storage Request (Mi) | Ephemeral Storage Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|---|---|
ads-runtime | 500 | 1000 | 2048 | 3072 | 2150.4 | 3072 | 2 | Yes |
ads-credentials | 250 | 1000 | 800 | 1536 | 300 | 600 | 2 | No |
ads-gitservice | 500 | 1000 | 800 | 1536 | 400 | 700 | 2 | No |
ads-parsing | 250 | 1000 | 800 | 1536 | 300 | 600 | 2 | No |
ads-restapi | 500 | 1000 | 800 | 1536 | 300 | 1228.8 | 2 | No |
ads-run | 500 | 1000 | 800 | 1536 | 300 | 700 | 2 | No |
Automation Decision Services also creates the following short-lived jobs:
- ads-ltpa-creation
- ads-runtime-bai-registration
- ads-ads-runtime-zen-translation-job
- ads-designer-zen-translation-job
The ads-rr-integration and ads-ads-rr-as-runtime-synchro jobs are started every 15 minutes, and are also short-lived.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Ephemeral Storage Limit (Mi) | Number of Replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|---|
OCR Extraction | 200 | 1000 | 1024 | 2560 | 3072 | 4 | Yes |
Classify Process | 200 | 500 | 400 | 2048 | 3072 | 2 | Yes |
Processing Extraction | 500 | 1000 | 1024 | 6656 | 5120 | 7 | Yes |
Natural Language Extractor | 200 | 500 | 600 | 1440 | 3072 | 2 | Yes |
PostProcessing | 200 | 1000 | 400 | 1229 | 3072 | 2 | No |
Setup | 200 | 1000 | 600 | 2048 | 3072 | 4 | No |
Deep Learning | 1000 | 2000 | 3072 | 15360 | 7680 | 2 | No |
Backend | 200 | 1000 | 400 | 2048 | 4608 | 4 | No |
Webhook | 200 | 300 | 400 | 500 | 1024 | 2 | No |
RabbitMQ | 100 | 1000 | 100 | 1024 | 3072 | 3 | No |
OCR engine 2 Runtime (wdu-runtime) | 200 | 4000 | 1024 | 7629 | 4096 | 1 | No |
OCR engine 2 Extraction (wdu-extraction) | 300 | 1000 | 500 | 1024 | 3072 | 1 | No |
Common Git Gateway Service (git-service) | 500 | 1000 | 512 | 1536 | Not applicable | 1 | No |
Content Designer Repo API (CDRA) | 500 | 1000 | 1024 | 3072 | Not applicable | 2 | No |
Content Designer UI and REST (CDS) | 500 | 1000 | 512 | 3072 | 2048 | 2 | No |
Content Project Deployment Service (CPDS) | 500 | 1000 | 512 | 1024 | Not applicable | 2 | No |
Mongo database (mongodb) | 500 | 1000 | 512 | 1024 | Not applicable | 1 | No |
Viewer service (viewone) | 500 | 2000 | 1024 | 4096 | Not applicable | 2 | No |
- Document Processing: The optional Deep Learning container can use an NVIDIA GPU if one is available. NVIDIA is the only supported GPU for Deep Learning in the Document Processing pattern. The GPU worker nodes must have a unique label, for example ibm-cloud.kubernetes.io/gpu-enabled:true. You add this label value to the deployment script or to the YAML file of your custom resource when you configure the YAML for deployment. To install the NVIDIA GPU operator, follow these installation instructions. For high availability, you need a minimum of 2 GPUs so that 2 replicas of the Deep Learning pods can be started. You can change the replica count to 1 if you have only 1 GPU on the node.
- For Document Processing, the CPU of the worker nodes must meet TensorFlow AVX requirements. For more information, see Hardware requirements for TensorFlow with pip.
- Document Processing requires databases for project configuration and processing. These databases must be Db2 or PostgreSQL. The hardware and storage requirements for the databases depend on the system load for each document processing project.
- The previous table shows the requirements if deep learning object detection is enabled. If you process fixed-format documents, you might want to improve performance by disabling deep learning object detection. For more information about the system requirements for Document Processing engine components in this scenario, see IBM Automation Document Processing system requirements when disabling deep learning object detection for fixed-format documents.
- If you deploy with only the document_processing pattern, you can reduce the sizing for some of the required components. For more information, see IBM Automation Document Processing system requirements for a light production deployment (document_processing pattern only).
- Use a maximum of 70% of the available space for projects. For example, if you have 5000 MB, use 3500 MB for your projects. Because the model size is 153 MB, you can create a maximum of 22 projects. If you want to set up more than 22 projects, increase the ephemeral storage for both the Deep Learning and Processing Extraction containers.
- The OCR engine 2 Runtime container enables support for low-quality documents and handwriting recognition when the ca_configuration.ocrextraction.use_iocr parameter is set to auto or all.
- OCR engine 2 Extraction is an optional container that is used to make gRPC requests to the OCR engine 2 Runtime service. You deploy it by setting the ca_configuration.ocrextraction.use_iocr parameter to auto or all.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Workflow Server | 500 | 2000 | 2560 | 3512 | 2 | Yes |
- For components that are included in your Automation Workstream Services instance from FileNet Content Manager, see Table 22.
- Automation Workstream Services also creates some jobs that request 200m CPU and 128Mi Memory:
- basimport-job is created only with Business Automation Studio
- db-init-job
- content-init-job
- ltpa-job
- oidc-registry-job
- oidc-registry-job-for-webpd is created only with Workflow Center
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Ephemeral Storage Request (Mi) | Ephemeral Storage Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|---|---|
App Engine | 300 | 500 | 256 | 1024 | 512 | 2048 | 3 | Yes/No |
Resource Registry | 100 | 500 | 256 | 512 | 128 | 2048 | 3 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Workflow Server | 500 | 2000 | 2560 | 3512 | 2 | Yes |
Workflow Authoring | 500 | 4000 | 1024 | 3072 | 1 | No |
Intelligent Task Prioritization | 500 | 2000 | 1024 | 2560 | 2 | No |
Workforce Insights | 500 | 2000 | 1024 | 2560 | 2 | No |
- For components that are included in your Business Automation Workflow instance from FileNet Content Manager, see Table 22.
- Intelligent Task Prioritization and Workforce Insights are optional and are not supported on all platforms. For more information, see Detailed system requirements.
- Business Automation Workflow also creates
some jobs that request 200m CPU and 128Mi Memory:
- basimport-job is created only with Business Automation Studio.
- case-init-job
- db-init-job
- content-init-job
- ltpa-job
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
CPE | 1500 | 2000 | 3072 | 3072 | 2 | Yes |
CSS | 1000 | 2000 | 8192 | 8192 | 2 | Yes |
Enterprise Records (ER) | 500 | 1000 | 1536 | 1536 | 2 | Yes |
Content Collector for SAP (CC4SAP) | 500 | 1000 | 1536 | 1536 | 2 | Yes |
CMIS | 500 | 1000 | 1536 | 1536 | 2 | No |
GraphQL | 500 | 2000 | 3072 | 3072 | 3 | No |
Task Manager | 500 | 1000 | 1536 | 1536 | 2 | No |
In high-volume indexing scenarios, where ingested documents are full-text indexed, the CSS utilization can exceed the CPE utilization. In some cases, it might be 3 - 5 times larger.
For optional processing such as thumbnail generation or text filtering, at least 1 GB of native memory is required by the CPE for each. If both types of processing are expected, add at least 2 GB to the memory requests/limits for the Content Platform Engine (CPE).
As content is processed, resource requirements increase with the complexity and size of the content. Increase both memory and CPU for the CPE and CSS services to reflect the type and size of documents in your system. Resource requirements might also increase over time as the amount of data in the system grows.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Ephemeral Storage Request (Mi) | Ephemeral Storage Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|---|---|
Decision Center | 1000 | 1000 | 4096 | 8192 | 1024 | 2048 | 2 | Yes |
Decision Runner | 500 | 2000 | 2048 | 2048 | 200 | 1024 | 2 | Yes |
Decision Server Runtime | 2000 | 2000 | 2048 | 2048 | 200 | 1024 | 3 | Yes |
Decision Server Console | 500 | 2000 | 512 | 2048 | 200 | 1024 | 1 | No |
To achieve high availability, you must adapt the cluster configuration and physical resources. You can set up a Db2® High Availability Disaster Recovery (HADR) database. For more information, see Preparing your environment for disaster recovery. For high availability and fault tolerance to be effective, set the number of replicas that you need for the respective configuration parameters in your custom resource file. The operator then manages the scaling.
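As a sketch of how replica settings might be applied, the following command raises the replica count for the Decision Server Runtime in the custom resource. The parameter path odm_configuration.decisionServerRuntime.replicaCount and the CR kind and name are assumptions; check the Operational Decision Manager parameter reference for the exact names.

```
# Sketch: scale Decision Server Runtime by raising its replica count in the custom resource.
# The parameter path and the CR kind/name are assumptions; verify them in the parameter reference.
oc patch icp4acluster icp4adeploy --type=merge \
  -p '{"spec":{"odm_configuration":{"decisionServerRuntime":{"replicaCount":3}}}}'
```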
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Workflow Process Service Authoring | 1000 | 2000 | 1752 | 3072 | 2 | No |
Large profile hardware requirements
- Table 25 Cloud Pak for Business Automation operator default requirements for a large profile
- Table 26 EDB Postgres default requirements for a large profile
- Table 27 Foundation default requirements for a large profile
- Table 28 Automation Decision Services default requirements for a large profile
- Table 29 Automation Document Processing default requirements for a large profile
- Table 30 Automation Workstream Services default requirements for a large profile
- Table 31 Business Automation Application default requirements for a large profile
- Table 32 Business Automation Workflow default requirements with or without Automation Workstream Services for a large profile
- Table 33 FileNet Content Manager default requirements for a large profile
- Table 34 Operational Decision Manager default requirements for a large profile
- Table 35 Workflow Process Service Authoring default requirements for a large profile
- For IBM Workflow Process Service Runtime, see Planning for a Workflow Process Service Runtime deployment.
- For IBM FileNet Content Manager, see Planning for a CP4BA FileNet Content Manager production deployment.
- For IBM Process Federation Server, see Planning for a CP4BA Process Federation Server production deployment.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production | Ephemeral storage Limit (Mi) |
---|---|---|---|---|---|---|---|
ibm-cp4a-operator | 500 | 1000 | 256 | 2048 | 1 | No | NA |
ibm-content-operator | 500 | 1000 | 256 | 2048 | 1 | No | NA |
ibm-ads-operator | 10 | 500 | 64 | 512 | 1 | No | NA |
ibm-odm-operator | 10 | 500 | 256 | 768 | 1 | No | NA |
ibm-dpe-operator | 10 | 1000 | 256 | 768 | 1 | No | 500 |
ibm-pfs-operator | 100 | 500 | 20 | 1024 | 1 | No | NA |
ibm-workflow-operator | 100 | 500 | 20 | 1024 | 1 | No | NA |
ibm-cp4a-wfps-operator | 100 | 500 | 20 | 500 | 1 | No | NA |
ibm-insights-engine-operator | 500 | 1000 | 256 | 2048 | 1 | No | 800 |
You can use the oc patch csv command to add more resources:
oc patch csv ibm-cp4a-operator.v24.0.0 --type=json -p '[
{
"op":"replace",
"path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/cpu",
"value": "4"
},
{
"op":"replace",
"path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/limits/memory",
"value": "8Gi"
},
{
"op":"replace",
"path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/cpu",
"value": "1500m"
},
{
"op":"replace",
"path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/resources/requests/memory",
"value": "1600Mi"
}
]'
CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production | Ephemeral storage Limit (Mi) | Ephemeral storage Request (Mi) |
---|---|---|---|---|---|---|---|
1000 | 8000 | 4096 | 16384 | 1 | No | 1024 | 500 |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production | Ephemeral Storage Request (Mi) | Ephemeral Storage Limit (Mi) |
---|---|---|---|---|---|---|---|---|
Business Automation Insights: Business Performance Center | 100 | 4000 | 512 | 2000 | 2 | Yes/No | 1050 | 1150 |
Business Automation Insights: Flink task managers | 1000 | 1000 | 1728 | 1728 | Default parallelism: 8 | Yes/No | 500 | 2048 |
Business Automation Insights: Flink job manager | 1000 | 1000 | 1728 | 1728 | 1 | No | 500 | 2048 |
Business Automation Insights: Management REST API | 100 | 1000 | 50 | 160 | 2 | No | 371 | 395 |
Business Automation Insights: Management back end | 100 | 500 | 350 | 512 | 2 | No | 381 | 410 |
Navigator | 2000 | 4000 | 6144 | 6144 | 4 | No | ||
Navigator Watcher | 250 | 500 | 256 | 512 | 1 | No | ||
App Engine playback | 300 | 500 | 256 | 1024 | 4 | No | 512 | 2048 |
BAStudio | 2000 | 4000 | 1752 | 3072 | 2 | No | 1024 | 2048 |
Resource Registry | 100 | 500 | 256 | 512 | 3 | No | 128 | 2048 |
Business Automation Insights also creates the bai-setup and bai-core-application-setup jobs, which request 200 m of CPU and 350 Mi of memory. The CPU and memory limits are the same as the requests. The pods for these jobs run for a short time at the beginning of the installation and then stop, and the resources are then released.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Ephemeral Storage Request (Mi) | Ephemeral Storage Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|---|---|
ads-runtime | 1000 | 2000 | 2048 | 3072 | 2150.4 | 2662.4 | 2 | Yes |
ads-credentials | 250 | 2000 | 800 | 1536 | 300 | 700 | 2 | No |
ads-gitservice | 500 | 2000 | 800 | 1536 | 400 | 800 | 2 | No |
ads-parsing | 250 | 2000 | 800 | 1536 | 300 | 700 | 2 | No |
ads-restapi | 500 | 2000 | 800 | 1536 | 300 | 1228.8 | 2 | No |
ads-run | 500 | 2000 | 800 | 1536 | 300 | 1024 | 2 | No |
Automation Decision Services also creates the following short-lived jobs:
- ads-ltpa-creation
- ads-runtime-bai-registration
- ads-ads-runtime-zen-translation-job
- ads-designer-zen-translation-job
The ads-rr-integration and ads-ads-rr-as-runtime-synchro jobs are started every 15 minutes, and are also short-lived.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Ephemeral Storage Limit (Mi) | Number of Replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|---|
OCR Extraction | 200 | 1000 | 1024 | 2560 | 3072 | 6 | Yes |
Classify Process | 200 | 500 | 400 | 2048 | 3072 | 2 | Yes |
Processing Extraction | 500 | 1000 | 1024 | 6656 | 5120 | 14 | Yes |
Natural Language Extractor | 200 | 500 | 600 | 1440 | 3072 | 2 | Yes |
PostProcessing | 200 | 1000 | 400 | 1229 | 3072 | 2 | No |
Setup | 200 | 1000 | 600 | 2048 | 3072 | 6 | No |
Deep Learning | 1000 | 2000 | 3072 | 15360 | 7680 | 2 | No |
Backend | 200 | 1000 | 400 | 2048 | 4608 | 6 | No |
Webhook | 200 | 300 | 400 | 500 | 1024 | 3 | No |
RabbitMQ | 100 | 1000 | 100 | 1024 | 3072 | 3 | No |
OCR engine 2 Runtime (wdu-runtime) | 200 | 4000 | 1024 | 7629 | 4096 | 1 | No |
OCR engine 2 Extraction (wdu-extraction) | 300 | 1000 | 500 | 1024 | 3072 | 1 | No |
Common Git Gateway Service (git-service) | 500 | 1000 | 512 | 1536 | Not applicable | 2 | No |
Content Designer Repo API (CDRA) | 500 | 2000 | 1024 | 3072 | Not applicable | 2 | No |
Content Designer UI and REST (CDS) | 500 | 2000 | 512 | 3072 | 2048 | 2 | No |
Content Project Deployment Service (CPDS) | 500 | 2000 | 512 | 2048 | Not applicable | 2 | No |
Mongo database (mongodb) | 500 | 1000 | 512 | 1024 | Not applicable | 1 | No |
Viewer service (viewone) | 1000 | 3000 | 3072 | 6144 | Not applicable | 2 | No |
- Document Processing: The optional Deep Learning container can use an NVIDIA GPU if one is available. NVIDIA is the only supported GPU for Deep Learning in the Document Processing pattern. The GPU worker nodes must have a unique label, for example ibm-cloud.kubernetes.io/gpu-enabled:true. You add this label value to the deployment script or to the YAML file of your custom resource when you configure the YAML for deployment. To install the NVIDIA GPU operator, follow these installation instructions. For high availability, you need a minimum of 2 GPUs so that 2 replicas of the Deep Learning pods can be started. You can change the replica count to 1 if you have only 1 GPU on the node.
- For Document Processing, the CPU of the worker nodes must meet TensorFlow AVX requirements. For more information, see Hardware requirements for TensorFlow with pip.
- Document Processing requires databases for project configuration and processing. These databases must be Db2 or PostgreSQL. The hardware and storage requirements for the databases depend on the system load for each document processing project.
- The previous table shows the requirements if deep learning object detection is enabled. If you process only fixed-format documents, you might improve performance by disabling deep learning object detection. For more information about the system requirements for Document Processing engine components in this scenario, see IBM Automation Document Processing system requirements when disabling deep learning object detection for fixed-format documents.
- If you deploy with only the document_processing pattern, you can reduce the sizing for some of the required components. For more information, see IBM Automation Document Processing system requirements for a light production deployment (document_processing pattern only).
- Use a maximum of 70% of the available space for projects. For example, if you have 5000 MB, use 3500 MB for your projects. Because the model size is 153 MB, you can create a maximum of 22 projects. If you want to set up more than 22 projects, increase the ephemeral storage for both the Deep Learning and Processing Extraction containers.
- The OCR engine 2 Runtime container enables support for low-quality documents and handwriting recognition when the ca_configuration.ocrextraction.use_iocr parameter is set to auto or all.
- OCR engine 2 Extraction is an optional container that is used to make gRPC requests to the OCR engine 2 Runtime service. You deploy it by setting the ca_configuration.ocrextraction.use_iocr parameter to auto or all.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Workflow Server | 1000 | 2000 | 3060 | 4000 | 4 | Yes |
- For components that are included in your Automation Workstream Services instance from FileNet Content Manager, see Table 33.
- Automation Workstream Services also creates some jobs that request 200m CPU and 128Mi Memory:
- basimport-job is created only with Business Automation Studio
- db-init-job
- content-init-job
- ltpa-job
- oidc-registry-job
- oidc-registry-job-for-webpd is created only with Workflow Center
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Ephemeral Storage Request (Mi) | Ephemeral Storage Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|---|---|
App Engine | 300 | 500 | 256 | 1024 | 512 | 2048 | 6 | Yes/No |
Resource Registry | 100 | 500 | 256 | 512 | 128 | 2048 | 1 | No |
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Workflow Server | 1000 | 2000 | 3060 | 4000 | 4 | Yes |
Workflow Authoring | 1000 | 2000 | 2000 | 3000 | 2 | No |
Intelligent Task Prioritization | 500 | 2000 | 1024 | 2560 | 2 | No |
Workforce Insights | 500 | 2000 | 1024 | 2560 | 2 | No |
- For components that are included in your Business Automation Workflow instance from FileNet Content Manager, see Table 33.
- Intelligent Task Prioritization and Workforce Insights are optional and are not supported on all platforms. For more information, see Detailed system requirements.
- Business Automation Workflow also creates
some jobs that request 200m CPU and 128Mi Memory:
- basimport-job is created only with Business Automation Studio.
- case-init-job
- db-init-job
- content-init-job
- ltpa-job
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
CPE | 3000 | 4000 | 8192 | 8192 | 2 | Yes |
CSS | 2000 | 4000 | 8192 | 8192 | 2 | Yes |
Enterprise Records (ER) | 500 | 1000 | 1536 | 1536 | 2 | Yes |
Content Collector for SAP (CC4SAP) | 500 | 1000 | 1536 | 1536 | 2 | Yes |
CMIS | 500 | 1000 | 1536 | 1536 | 2 | No |
GraphQL | 1000 | 2000 | 3072 | 3072 | 4 | No |
Task Manager | 500 | 1000 | 1536 | 1536 | 2 | No |
In high-volume indexing scenarios, where ingested documents are full-text indexed, the CSS utilization can exceed the CPE utilization. In some cases, it might be 3 - 5 times larger.
For optional processing such as thumbnail generation or text filtering, at least 1 GB of native memory is required by the CPE for each. If both types of processing are expected, add at least 2 GB to the memory requests/limits for the CPE.
As content is processed, resource requirements increase with the complexity and size of the content. Increase both memory and CPU for the CPE and CSS services to reflect the type and size of documents in your system. Resource requirements might also increase over time as the amount of data in the system grows.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Ephemeral Storage Request (Mi) | Ephemeral Storage Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|---|---|
Decision Center | 2000 | 2000 | 4096 | 16384 | 1024 | 2048 | 2 | Yes |
Decision Runner | 500 | 4000 | 2048 | 2048 | 200 | 1024 | 2 | Yes |
Decision Server Runtime | 2000 | 2000 | 4096 | 4096 | 200 | 1024 | 6 | Yes |
Decision Server Console | 500 | 2000 | 512 | 4096 | 200 | 1024 | 1 | No |
To achieve high availability, you must adapt the cluster configuration and physical resources. You can set up a Db2 High Availability Disaster Recovery (HADR) database. For more information, see Preparing your environment for disaster recovery. For high availability and fault tolerance to be effective, set the number of replicas that you need for the respective configuration parameters in your custom resource file. The operator then manages the scaling.
Component | CPU Request (m) | CPU Limit (m) | Memory Request (Mi) | Memory Limit (Mi) | Number of replicas | Pods are licensed for production/non-production |
---|---|---|---|---|---|---|
Workflow Process Service Authoring | 2000 | 4000 | 1752 | 3072 | 2 | No |