Hardware requirements
Before you install IBM Cloud Pak® for Data, review the hardware requirements for the control plane, the shared cluster components, and the services that you plan to install.
| Components | Related links |
| --- | --- |
| Cloud Pak for Data platform hardware requirements | Review the minimum requirements for a stable installation based on the license that you purchased. Work with your IBM Sales representative to determine whether you need additional resources based on the shared cluster components that you need to install, the services that you plan to install, and the types of workloads that you plan to run. In addition, ensure that you review the following resources. |
| Shared cluster-wide components | Review the hardware requirements for the shared cluster components that you need to install. |
| Instance-level prerequisites | Review the hardware requirements for the instance-level prerequisites. |
| Services | Review the hardware requirements for the services that you plan to install. Not all services are supported on all hardware. |
| Automatically installed dependencies | Review the hardware requirements for the dependencies that are automatically installed by some services. Not all dependencies are supported on all hardware. |
Cloud Pak for Data platform hardware requirements
You must install Cloud Pak for Data on a Red Hat® OpenShift® Container Platform cluster. For information about the supported versions of Red Hat OpenShift Container Platform, see Software requirements.
It is strongly recommended that you deploy Cloud Pak for Data on a highly available cluster.
The hardware requirements depend on the Cloud Pak for Data license that you purchased.
- Use this information if you purchased one of the following licenses.
- Cloud Pak for Data Enterprise Edition
- Cloud Pak for Data Standard Edition
- Sizing your cluster
-
The following requirements are the minimum recommendations for a small, stable deployment of Cloud Pak for Data. Use the minimum recommended configuration as a starting point for your cluster configuration. If you use fewer resources, you are likely to encounter stability problems.
Important: Work with your IBM® Sales representative to size your cluster. The size of your cluster depends on multiple factors.
- The shared components that you need to install.
- The services that you plan to install.
The sizing requirements for services are available in Services. If you install only a few services with small vCPU and memory requirements, you might not need additional resources. However, if you plan to install multiple services or services with large footprints, add the appropriate amount of vCPU and memory to the minimum recommendations.
- The types of workloads that you plan to run.
For example, if you plan to run complex analytics workloads in addition to other resource-intensive workloads, such as ETL jobs, you can expect reduced concurrency levels if you don't add additional computing power to your cluster.
Because workloads vary based on several factors, use measurements from running real workloads with realistic data to size your cluster.
For additional information on sizing your cluster, download the component scaling guidance PDF.
- Choosing a configuration
-
The following configuration has been tested and validated by IBM. However, Red Hat OpenShift Container Platform supports other configurations. If the configuration in the following table does not work in your environment, you can adapt the configuration based on the guidance in the Red Hat OpenShift documentation.
| Node role | Hardware | Number of servers | Minimum available vCPU | Minimum memory | Minimum storage |
| --- | --- | --- | --- | --- | --- |
| Control plane | x86-64, ppc64le, s390x (z14 or later) | 3 (for high availability) | 4 vCPU per node (This configuration supports up to 24 worker nodes.) | 16 GB RAM per node (This configuration supports up to 24 worker nodes.) | No additional storage is needed for Cloud Pak for Data. See the Red Hat OpenShift Container Platform documentation for sizing guidance. |
| Infra | x86-64, ppc64le, s390x (z14 or later) | 3 (recommended) | 4 vCPU per node (This configuration supports up to 27 worker nodes.) | 24 GB RAM per node (This configuration supports up to 27 worker nodes.) | See the Red Hat OpenShift Container Platform documentation for sizing guidance. |
| Worker/compute | x86-64, ppc64le, s390x (z14 or later) | 3 or more worker/compute nodes | 16 vCPU per node | 64 GB RAM per node (minimum); 128 GB RAM per node (recommended) | 300 GB of storage space per node for storing container images locally. See Cloud Pak for Data platform storage requirements for details. |
| Load balancer | x86-64, ppc64le, s390x (z14 or later) | 2 load balancer nodes (For development, test, or proof-of-concept clusters, you can use 1 load balancer node.) | x86-64: 2 vCPU per node; ppc64le: 4 vCPU per node; s390x: 2 vCPU per node | 4 GB RAM per node. Add another 4 GB of RAM for access restrictions and security control. | Add 100 GB of root storage for access restrictions and security control. |
A load balancer is required when using three control plane nodes. The load balancer can either be in the cluster or external to the cluster. However, in a production-level cluster, an enterprise-grade external load balancer is recommended.
- Additional guidance for Power® hardware
-
The Cloud Pak for Data control plane supports POWER9 and POWER10, but does not take advantage of POWER10 optimizations.
Not all services support Power. For more information, see Services.
On Power hardware, the maximum supported configuration for each worker node is as follows:
- 160 vCPU
- 512 GB RAM
- Additional guidance for s390x hardware
-
Not all services support s390x. For more information, see Services.
- Use the information in this section if you purchased one of the following licenses.
- Data Governance Express
- Data Science and ML Ops Express
- ELT Pushdown Express
- Sizing your cluster
-
The following requirements are the minimum recommendations for a small, stable deployment of Cloud Pak for Data. Use the minimum recommended configuration as a starting point for your cluster configuration. If you use fewer resources, you are likely to encounter stability problems.
Important: Work with your IBM Sales representative to size your cluster. The size of your cluster depends on multiple factors.
- The shared components that you need to install.
- The services that you plan to install.
The services that you can install depend on the license that you purchased. The sizing requirements for services are available in Services.
- The types of workloads that you plan to run.
For example, if you plan to run complex analytics workloads in addition to other resource-intensive workloads, you can expect reduced concurrency levels if you don't add additional computing power to your cluster.
Because workloads vary based on several factors, use measurements from running real workloads with realistic data to size your cluster.
For more information on sizing your cluster, download the component scaling guidance PDF.
- Choosing a configuration
-
The following configuration has been tested and validated by IBM. However, Red Hat OpenShift Container Platform supports other configurations. If the configuration in the following table does not work in your environment, you can adapt the configuration based on the guidance in the Red Hat OpenShift documentation.
| Node role | Hardware | Number of servers | Minimum available vCPU | Minimum memory | Minimum storage |
| --- | --- | --- | --- | --- | --- |
| Control plane | x86-64, ppc64le, s390x (z14 or later) | 3 (for high availability) | 4 vCPU per node (This configuration supports up to 24 worker nodes.) | 16 GB RAM per node (This configuration supports up to 24 worker nodes.) | No additional storage is needed for Cloud Pak for Data. See the Red Hat OpenShift Container Platform documentation for sizing guidance. |
| Infra | x86-64, ppc64le, s390x (z14 or later) | 3 (recommended) | 4 vCPU per node (This configuration supports up to 27 worker nodes.) | 24 GB RAM per node (This configuration supports up to 27 worker nodes.) | See the Red Hat OpenShift Container Platform documentation for sizing guidance. |
| Worker/compute | x86-64, ppc64le, s390x (z14 or later) | 2 or more worker/compute nodes | 16 vCPU per node | 64 GB RAM per node (minimum); 128 GB RAM per node (recommended) | 300 GB of storage space per node for storing container images locally. See Cloud Pak for Data platform storage requirements for details. |
| Load balancer | x86-64, ppc64le, s390x (z14 or later) | 2 load balancer nodes (For development, test, or proof-of-concept clusters, you can use 1 load balancer node.) | x86-64: 2 vCPU per node; ppc64le: 4 vCPU per node; s390x: 2 vCPU per node | 4 GB RAM per node. Add another 4 GB of RAM for access restrictions and security control. | Add 100 GB of root storage for access restrictions and security control. |
A load balancer is required when using three control plane nodes. The load balancer can either be in the cluster or external to the cluster. However, in a production-level cluster, an enterprise-grade external load balancer is recommended.
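As a rough sanity check before you engage IBM Sales, you can tally the published minimums for the components that you plan to install against the worker-node capacity in the preceding tables. The following Python sketch uses a few illustrative operand figures copied from the tables in this topic; the service selection, node counts, and node sizes are assumptions, and the sketch ignores operator pods, catalog pods, and Red Hat OpenShift overhead, so treat it only as a starting point for a sizing conversation.

```python
# Rough capacity tally for a planned Cloud Pak for Data installation.
# Replace the entries below with the operand figures for the services that you
# actually plan to install (from the tables in this topic) and with your real
# worker-node counts and sizes.

planned_operands = {
    # service or component: (vCPU, memory in GB) -- operand requests only
    "IBM Cloud Pak foundational services": (4, 5),
    "Common core services": (11, 18.3),
    "Watson Studio": (2, 8.8),
    "Watson Machine Learning": (6, 27),
}

worker_nodes = 3          # minimum recommended number of worker/compute nodes
vcpu_per_node = 16        # minimum available vCPU per worker node
memory_per_node_gb = 64   # minimum RAM per worker node

requested_vcpu = sum(v for v, _ in planned_operands.values())
requested_mem = sum(m for _, m in planned_operands.values())
capacity_vcpu = worker_nodes * vcpu_per_node
capacity_mem = worker_nodes * memory_per_node_gb

print(f"Requested by operands: {requested_vcpu} vCPU, {requested_mem:.1f} GB RAM")
print(f"Worker-node capacity:  {capacity_vcpu} vCPU, {capacity_mem} GB RAM")
if requested_vcpu > capacity_vcpu or requested_mem > capacity_mem:
    print("Plan for additional or larger worker nodes before you install.")
else:
    print("Within the minimum worker capacity; confirm the sizing with IBM Sales.")
```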
Cluster node settings
The time on all of the nodes must be synchronized within 500 ms.
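One way to spot-check this requirement is to compare each node's clock against your NTP source. The following Python sketch sends a basic SNTP query and reports the local clock offset; run it on each node. The server name and the 500 ms threshold check are illustrative, and your chrony or ntpd configuration remains the authoritative mechanism for keeping the nodes synchronized.

```python
import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"    # assumption: replace with your site's NTP server
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def ntp_offset(server: str = NTP_SERVER, timeout: float = 5.0) -> float:
    """Return the approximate offset in seconds between the local clock and NTP."""
    packet = b"\x1b" + 47 * b"\0"  # SNTP version 3 client request
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        t_send = time.time()
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(512)
        t_recv = time.time()
    seconds = struct.unpack("!I", data[40:44])[0] - NTP_EPOCH_OFFSET
    fraction = struct.unpack("!I", data[44:48])[0] / 2**32
    # Ignore network delay; this is only a coarse check against the 500 ms limit.
    return (seconds + fraction) - (t_send + t_recv) / 2

if __name__ == "__main__":
    offset_ms = abs(ntp_offset()) * 1000
    print(f"Clock offset: {offset_ms:.1f} ms")
    if offset_ms > 500:
        print("Offset exceeds 500 ms; fix time synchronization before you install.")
```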
Some services require additional node settings to run correctly. For information about the node settings and the services that require them, see Changing required node settings. You must change the node settings before you install Cloud Pak for Data.
Disk requirements
To prepare your storage disks, ensure that you have good I/O performance, and prepare the disks for encryption.
- I/O performance
- When I/O performance is not sufficient, services can experience poor performance or cluster instability, such as functional failures with timeouts. This is especially true when you are running a heavy workload. To assess your I/O performance:
- Run the Cloud Pak for Data storage performance validation playbook on the cluster where you plan to install Cloud Pak for Data and compare your results with the recommendations.
The I/O performance requirements for Cloud Pak for Data are based on extensive testing in various cloud environments. The tests validate the I/O performance in these environments. The requirements are based on the performance of writing data to representative storage classes using the following block size and thread count combinations.
- To evaluate disk latency, the I/O tests use a small block (4 KB) with 8 threads.
- To evaluate disk throughput, the I/O tests use a large block (1 GB) with 2 threads.
Ensure that the results of the storage performance validation playbook are comparable to the following recommended minimum values.
- Disk latency (4 KB block with 8 threads)
- For disk latency tests, 11 MB/s has been found to provide sufficient performance.
- Disk throughput (1 GB block with 2 threads)
- For disk throughput tests, 128 MB/s has been found to provide sufficient performance.
To ensure sufficient performance, both requirements should be satisfied; however, this might not be feasible in all environments.
Some storage types might have more stringent I/O requirements. For details, see Storage considerations.
Important: It is recommended that you run the validation playbook several times to account for variations in workloads, access patterns, and network traffic. In addition, if your storage volumes are remote, network speed can be a key factor in your I/O performance. For good I/O performance, ensure that you have sufficient network speed, as described in Storage considerations.
- Complete a proof of concept with representative workloads. If your proof of concept encounters functional issues or performance issues, determine the root cause of the problem to confirm whether the issue is related to I/O performance. You can use the following best practices to help identify potential problems.
As you assess your I/O performance, apply the following information.
- Workloads can vary dramatically in terms of complexity and concurrency.
- As your workload increases, you are more likely to require better I/O performance to avoid performance problems.
- If you are unable to satisfy the recommended I/O performance requirements, you run an increased risk of encountering performance problems.
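If you want a coarse, self-contained approximation of the two write profiles described above before you run the playbook, the following Python sketch writes test files with small blocks across 8 threads and with large blocks across 2 threads, then reports an aggregate write rate. The target directory, file sizes, and chunked buffered writes are assumptions; because the sketch writes through the page cache rather than using direct I/O, treat the storage performance validation playbook as the authoritative measurement.

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

TARGET_DIR = "/mnt/cpd-io-test"  # assumption: a path backed by the storage under test

def write_worker(path: str, block_size: int, total_bytes: int) -> float:
    """Write total_bytes in block_size chunks, fsync, and return elapsed seconds."""
    block = os.urandom(block_size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(block)
            written += block_size
        f.flush()
        os.fsync(f.fileno())
    return time.perf_counter() - start

def run_profile(name: str, block_size: int, threads: int, bytes_per_thread: int) -> None:
    paths = [os.path.join(TARGET_DIR, f"{name}-{i}.bin") for i in range(threads)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        elapsed = list(pool.map(lambda p: write_worker(p, block_size, bytes_per_thread), paths))
    rate = (bytes_per_thread * threads) / max(elapsed) / (1024 * 1024)
    print(f"{name}: ~{rate:.0f} MB/s aggregate across {threads} threads")
    for p in paths:
        os.remove(p)

if __name__ == "__main__":
    os.makedirs(TARGET_DIR, exist_ok=True)
    # Latency-style profile: 4 KB blocks, 8 threads (compare with the ~11 MB/s guidance).
    run_profile("latency", 4 * 1024, 8, 256 * 1024 * 1024)
    # Throughput-style profile: large writes, 2 threads (compare with the ~128 MB/s guidance).
    run_profile("throughput", 64 * 1024 * 1024, 2, 1024 * 1024 * 1024)
```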
- Encryption with Linux® Unified Key Setup
- To ensure that your data within Cloud Pak for Data is stored securely, you can encrypt your disks. If you use Linux Unified Key Setup-on-disk-format (LUKS), you must enable LUKS when you install Red Hat OpenShift Container Platform. For more information, see Encrypting disks during installation in the Red Hat OpenShift Container Platform documentation.
Instance-level prerequisites
Instance-level prerequisites provide underlying functionality for services. Use the following sections to understand the hardware requirements for:
- IBM Cloud Pak for Data control plane
- IBM Cloud Pak foundational services
For more information, see Instance-level components.
Use the following information to determine whether you have the minimum required resources to install each component on your cluster.
| Software | vCPU | Memory | Storage | Notes |
| --- | --- | --- | --- | --- |
| IBM Cloud Pak for Data control plane | The requirements for the control plane are included in the Cloud Pak for Data platform hardware requirements | The requirements for the control plane are included in the Cloud Pak for Data platform hardware requirements | The requirements for the control plane are included in the Cloud Pak for Data platform hardware requirements | Required. The control plane is installed once for each instance of Cloud Pak for Data on the cluster. |
| IBM Cloud Pak foundational services | 4 vCPU | 5 GB RAM | See the Hardware requirements and recommendations for foundational services in the IBM Cloud Pak foundational services documentation. | Required. The IBM Cloud Pak foundational services are installed once for each instance of Cloud Pak for Data on the cluster. |
| Software | vCPU | Memory | Storage | Notes |
| --- | --- | --- | --- | --- |
| IBM Cloud Pak for Data control plane | The requirements for the control plane are included in the Cloud Pak for Data platform hardware requirements | The requirements for the control plane are included in the Cloud Pak for Data platform hardware requirements | The requirements for the control plane are included in the Cloud Pak for Data platform hardware requirements | Required. The control plane is installed once for each instance of Cloud Pak for Data on the cluster. |
| IBM Cloud Pak foundational services | 3 vCPU | 5 GB RAM | See the Hardware requirements and recommendations for foundational services in the IBM Cloud Pak foundational services documentation. | Required. The IBM Cloud Pak foundational services are installed once for each instance of Cloud Pak for Data on the cluster. |
Services
Use the following information to determine whether you have the minimum required resources to install each service that you want to use.
Software vCPU Memory Storage Notes
AI Factsheets Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
0.3 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
1.2 GB RAMPersistent storage:
Uses the persistent storage provisioned by the common core services.
Ephemeral storage:
300 GB
Image storage:
Up to 2.51 GBMinimum resources for an installation with a single replica per service.
Anaconda Repository for IBM Cloud Pak for Data 4 vCPU
8 GB RAM 1 TB This service cannot be installed on your Red Hat OpenShift cluster. For details, see the Anaconda installation requirements.
Analytics Engine powered by Apache Spark Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
2.3 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
9 GB RAMPersistent storage:This information is not currently available.
Ephemeral storage:
50 GB per vCPU request
(SSDs are recommended)
Image storage:
Up to 32.86 GB
Spark jobs use emptyDir volumes for temporary storage and shuffling. If your Spark jobs use a lot of disk space for temporary storage or shuffling, make sure that you have sufficient space on the local disk where emptyDir volumes are created. The recommended location is a partition in /var/lib. For details, see Understanding ephemeral storage in the Red Hat OpenShift documentation. If you don't have sufficient space on the local disk, Spark jobs might run slowly and some of the executors might evict jobs. A minimum of 50 GB of temporary storage for each vCPU request is recommended.
Minimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
All of the Analytics Engine powered by Apache Spark service instances associated with an instance of Cloud Pak for Data use the same pool of resources.
Cognos® Analytics Operator pods:
0.5 vCPU
Catalog pods:
0.01 vCPU
Operand:
10 vCPUOperator pods:
1 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
40 GB RAM
Persistent storage:
- 500 MB for the service
- 2 GB per instance (smallest instance)
Ephemeral storage:
- 1 GB for the service
- 23.6 GB per instance (smallest instance)
Image storage:
Up to 25.81 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
When you provision the Cognos Analytics service, you specify the size of the instance.
The information here is for the smallest instance. For other sizes, see Provisioning the Cognos Analytics service.
Cognos Dashboards Operator pods:
0.1 vCPU
Catalog pods:
0.5 vCPU
Operand:
11 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.256 GB RAM
Operand:
36 GB RAMPersistent storage:
30 GB
Ephemeral storage:
37.4 GB
Image storage:
Up to 14.48 GBMinimum resources for an installation with a single replica per service.
Data Privacy Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
1 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
3.77 GB RAMPersistent storage:
Uses the persistent storage provisioned by Watson Knowledge
Catalog.
Ephemeral storage:
4.5 GB
Image storage:
Up to 4.67 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
Data Refinery Operator pods:
0.1 vCPU
Catalog pods:
0.5 vCPU
Operand:
1 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
1 GB RAM
Operand:
4 GB RAMPersistent storage:
Uses the persistent storage provisioned by the common core services.
Ephemeral storage:
2 GB
Image storage:
Up to 4.51 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
This service is installed when you install Watson Knowledge Catalog or Watson Studio.
Data Replication Operator pods:
0.5 vCPU
Catalog pods:
0.01 vCPU
Operand:
13 vCPUOperator pods:
0.512 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
14 GB RAMPersistent storage:
10 GB - 512 GB
Ephemeral storage:
22 GB
Image storage:
Up to 4.26 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
DataStage® Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
8 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
31 GB RAMPersistent storage:
300 GB
Ephemeral storage:This information is not currently available.
Image storage:
Up to 17.93 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
- Local storage in /var/lib/containers
- Adjust the amount of local storage per node based on the volume of data you are analyzing. Local storage should be approximately 2 times larger than the amount of data you expect the system to process concurrently.
Db2® Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
8 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
24 GB RAMPersistent storage:
540 GB (assuming defaults)
Ephemeral storage:
2.2 - 5.4 GB
Image storage:
Up to 1.16 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
A dedicated node is recommended for production deployments of Db2. For details, see Setting up dedicated nodes.
Db2 Big SQL Operator pods:
0.2 vCPU
Catalog pods:
0.1 vCPU
Operand:
10.2 vCPUOperator pods:
0.3 GB RAM
Catalog pods:
0.2 GB RAM
Operand:
66.7 GB RAMPersistent storage:
470 GB total (assuming defaults)
- Head pod:
200 GB (default)
- One worker pod:
200 GB (default)
- Scheduling pod:
10 GB
- Log storage:
30 GB per pod
Ephemeral storage:
1.4 - 12.2 GB
Image storage:
Up to 0.92 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
When you provision the service, you can specify:
- The resources (vCPU and RAM) for the head and worker pods
- The number of worker pods
- The size of the persistent volume for the head pod and worker pods
Db2 Data Gate Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
2 vCPU per instanceOperator pods:
0.1 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
13 GB RAM per instancePersistent storage:
50 GB per instance
Ephemeral storage:
0.6 - 3.25 GB
Image storage:
Up to 10.70 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
Db2 Data Management Console Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
5 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
19.31 GB RAMPersistent storage:
10 GB
Ephemeral storage:
7.5 GB
Image storage:
Up to 5.86 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
For information on sizing the provisioned instance, see Provisioning the service.
Db2 Warehouse Operator pods:
0.5 vCPU
Catalog pods:
0.01 vCPU
Operand:
SMP: 7 vCPU
MPP: 39 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
SMP: 98 GB RAM
MPP: 610 GB RAMPersistent storage:
540 GB (assuming defaults)
Ephemeral storage:
2.2 - 10.8 GB
Image storage:
Up to 2.65 GBMinimum resources for an installation with a single replica per service.
Use dedicated nodes for:
- Production SMP deployments (recommended)
- MPP deployments (required)
For details, see Setting up dedicated nodes.
- Development deployment
-
- 1 node for SMP
- 2 nodes for MPP
- Production deployment
-
- 1 node for SMP
- 2-999 nodes for MPP
- Recommended configuration
-
Work with IBM Sales to get a more accurate sizing based on your expected workload.
Decision Optimization Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
0.9 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
1.5 GB RAMPersistent storage:
12 GB
Ephemeral storage:
300 - 6500 MB
Image storage:
Up to 3.60 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
EDB Postgres Operator pods:
IBM: 0.1 vCPU
Third-party: 0.5 vCPU
Catalog pods:
0.01 vCPU
Operand:
User-definedOperator pods:
IBM: 0.256 GB RAM
Third-party: 0.2 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
User-definedPersistent storage:
100 GB
Ephemeral storage:This information is not currently available.
Image storage:
Up to 3.36 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
Execution Engine for Apache Hadoop Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
For each deployment:
0.5 vCPU + (0.5 vCPU * number of Hadoop registrations) + (0.6 vCPU * number of Hadoop jobs run)Operator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
For each deployment:
0.5 GB + (0.5 GB * number of Hadoop registrations) + (0.5 GB * number of Hadoop jobs run)Persistent storage:
2 GB per image pushed
Ephemeral storage:
218 - 436 MB
Image storage:
Up to 2.67 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
Each image that is pushed to the remote Hadoop cluster requires disk space where the image tgz file can be stored. (A worked example of the per-deployment operand sizing formula follows this table.)
Execution Engine for Apache Hadoop requires an Execution Engine for Hadoop RPM installation on the Apache Hadoop cluster. For details, see Installing the service on Apache Hadoop clusters.
IBM Match 360 with Watson Operator pods:
2 vCPU
Catalog pods:
1 vCPU
Operand:
29 vCPUOperator pods:
2 GB RAM
Catalog pods:
2 GB RAM
Operand:
101 GB RAMPersistent storage:
190 GB
Ephemeral storage:
15 GB
Image storage:
Up to 17.79 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
Informix® Operator pods:
0.1 vCPU
Catalog pods:
0.1 vCPU
Operand:
2 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
2 GB RAMPersistent storage:
20 GB
Ephemeral storage:
900 MB (default)
Image storage:
Up to 6.57 GBMinimum resources for an installation with a single replica per service.
MANTA Automated Data Lineage Operator pods:
0.5 vCPU
Catalog pods:
0.01 vCPU
Operand:
11 vCPUOperator pods:
0.5 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
26 GB RAMPersistent storage:
37 GB
Ephemeral storage:
5 GB
Image storage:
Up to 4.9 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
OpenPages® Operator pods:
0.5 vCPU
Catalog pods:
0.01 vCPU
Operand:
6 vCPUOperator pods:
2 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
20 GB RAMPersistent storage:
252 GB
Ephemeral storage:
10.9 GB
Image storage:
Up to 5.23 GBWhen you provision the OpenPages service, you specify the size of the instance and the storage class to use. You also specify whether to use the database that is provided with the OpenPages service or a database that is on an external server. These values represent the minimum resources for OpenPages with a Db2 database on Cloud Pak for Data.
- Using a Db2 database on Cloud Pak for Data
-
OpenPages uses Db2 as a service, which is different from the Db2 service in the services catalog.
You can optionally provision the Db2 database on dedicated nodes. For details, see Provisioning an instance of OpenPages.
- Using a Db2 database outside of Cloud Pak for Data
- If you use a database outside of Cloud Pak for Data, the minimum requirements for vCPUs and memory are lower.
Planning Analytics Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
16 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
28 GB RAMPersistent storage:
20 GB
Ephemeral storage: 50 GB (maximum)
Image storage:
Up to 27.33 GBWork with IBM Sales to get a more accurate sizing based on your expected workload.
Select the size of your instance when you provision Planning Analytics. For details, see Provisioning the Planning Analytics service.
Product Master Operator pods:
0.3 vCPU
Catalog pods:
0.2 vCPU
Operand:
16 vCPUOperator pods:
0.5 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
34 GB RAMPersistent storage:
200 GB
Ephemeral storage:
22 GB
Image storage:
Up to 22.42 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
RStudio® Server Runtimes Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
1 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
8.8 GB RAMPersistent storage:
Uses the persistent storage provisioned by the common core services.
Ephemeral storage:This information is not currently available.
Image storage:
Up to 40.13 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
SPSS® Modeler Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
0.25 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
1 GB RAMPersistent storage:
Uses the persistent storage provisioned by the common core services.
Ephemeral storage:
3 GB (maximum)
Image storage:
Up to 9.71 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
Voice Gateway Operator pods:
0.2 vCPU
Catalog pods:
0.01 vCPU
Operand:
2 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
8 GB RAMPersistent storage:
Not applicable
Ephemeral storage:
4 GB
Image storage:
4 GBMinimum resources for a system that can provide voice-only support for up to 11 concurrent calls.
Dedicated nodes are recommended for production environments.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
Watson Assistant Operator pods:
0.25 vCPU
Catalog pods:
0.01 vCPU
Operand:
10 vCPUOperator pods:
6 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
110 GB RAMPersistent storage:
- 325 GB - 1340 GB
Only OpenShift Data Foundation and IBM Storage Fusion Data Foundation require 1340 GB of storage. All other storage types require 325 GB.
- 100 GB Multicloud Object Gateway storage
Ephemeral storage:
30 - 135 GB
Image storage:
Up to 39.48 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
All of the Watson Assistant service instances associated with an instance of Cloud Pak for Data use the same pool of resources.
Your hardware must meet the following additional requirements:
- CPUs must have a clock speed of 2.4 GHz or higher
- CPUs must support Linux SSE 4.2
- CPUs must support the AVX2 instruction set
Watson Discovery Operator pods:
0.05 vCPU
Catalog pods:
0.01 vCPU
Operand:
15 vCPUOperator pods:
0.1 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
88 GB RAM
Persistent storage:
- 237 GB file storage
- 363 GB block storage
- 215 GB Multicloud Object Gateway storage
Ephemeral storage:
119 GB
Image storage:
Up to 97.10 GB
Starter deployments are sized for demonstration purposes only. Production deployments are sized for robust use. Be sure to choose the right size for your needs. You cannot change the deployment type after you install the service. If you need to change it later, you must reinstall. These values represent the minimum requirements for a Starter deployment. CPUs must support the AVX2 instruction set.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
All of the Watson Discovery service instances associated with an instance of Cloud Pak for Data use the same pool of resources.
Watson Discovery supports only single-zone OpenShift deployments. You cannot install Watson Discovery on a multi-zone deployment.
Watson Knowledge Catalog
- Base
-
Operator pods:
0.75 vCPU
Catalog pods:
0.05 vCPU
Operand:
26 vCPU
- Data quality
-
Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
11 vCPU
- Semantic search and lineage
-
Operator pods:
1.5 vCPU
Catalog pods:
0.05 vCPU
Operand:
5 vCPU
- Advanced metadata import
-
Operator pods:
0.3 vCPU
Catalog pods:
0.05 vCPU
Operand:
6 vCPU
- Base
-
Operator pods:
4 GB RAM
Catalog pods:
0.2 GB RAM
Operand:
128 GB RAM
- Data quality
-
Operator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
20 GB RAM
- Semantic search and lineage
-
Operator pods:
0.7 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
24 GB RAM
- Advanced metadata import
-
Operator pods:
0.6 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
24 GB RAM
Persistent storage:
900 GB
Ephemeral storage:
- Base
- 840 MB - 19.4 GB
- Data quality
- 105 MB - 2.15 GB
- Semantic search and lineage
- 210 MB - 2.53 GB
- Advanced metadata import
- This information is not currently available.
Image storage:
Up to 20.08 GB with all optional components
The minimum required resources depend on the features that you install.
If Data Refinery is not installed, add the vCPU and memory required for Data Refinery to the information listed for Watson Knowledge Catalog.
- Local storage in /var/lib/containers
- Adjust the amount of local storage per node based on the volume of data you are analyzing. Local storage should be approximately 2 times larger than the amount of data you expect the system to process concurrently.
- Persistent storage
- The raw size of shared storage depends on the storage class you use. For example, if you use portworx-shared-gp3, which has 3 replicas, multiply the storage by the number of replicas.
Watson Knowledge Studio Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
7 vCPUOperator pods:
0.1 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
31 GB RAMPersistent storage:
360 GB
Ephemeral storage:
Not applicable
Image storage:
Up to 17.04 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
All of the Watson Knowledge Studio service instances associated with an instance of Cloud Pak for Data use the same pool of resources.
Watson Machine Learning Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
6 vCPUOperator pods:
0.5 GB RAM
Catalog pods:
0.5 GB RAM
Operand:
27 GB RAMPersistent storage:
150 GB
Ephemeral storage:This information is not currently available.
Image storage:
Up to 173.73 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
AVX2 is recommended but not required for AutoAI experiments.
Watson Machine Learning Accelerator Operator pods:
0.5 vCPU
Catalog pods:
0.01 vCPU
Operand:
5 vCPUOperator pods:
1 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
13 GB RAMPersistent storage:
73 GB
Ephemeral storage:This information is not currently available.
Image storage:
Up to 32.91 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
GPU support is limited to NVIDIA V100, A100 and T4 GPUs.
Watson OpenScale Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
12.75 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
68 GB RAMPersistent storage:
100 GB
Ephemeral storage:
13.5 GB
Image storage:
Up to 22.27 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
Watson Pipelines Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
1.4 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
2.625 GB RAMPersistent storage:
Uses the persistent storage provisioned by the common core services.
Ephemeral storage:
1.3 GB
Image storage:
Up to 5.56 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
Watson Query Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
12 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
38 GB RAMPersistent storage:
280 GB total (assuming defaults)
- Head pod:
50 GB (default)
- One worker pod:
50 GB (default)
- Caching storage:
100 GB (default)
- Caching metadata:
10 GB
- Scheduling pod:
10 GB
- Log storage:
30 GB per pod
Ephemeral storage:
2.428 - 13 GB
Image storage:
Up to 3.62 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
When you provision the service, you can specify:
- The size of the persistent volume for the head pod
- The size of the persistent volume for the cache
- The number of worker pods
- The size of the persistent volume for the worker pods
Watson Speech services Operator pods:
0.3 vCPU
Catalog pods:
0.01 vCPU
Operand:
Speech to Text: 6 vCPU
Text to Speech: 6 vCPUOperator pods:
0.3 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
Speech to Text: 17 GB RAM
Text to Speech: 11 GB RAM
Persistent storage:
- 900 GB
- 20 GB Multicloud Object Gateway storage
Ephemeral storage:
27 GB
Image storage:
Up to 79.18 GBMinimum resources for an instance with a single replica per service using the default models and voices (US-English). The amount of vCPU, memory, and ephemeral storage that is required increases when you install additional models.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
CPUs must support the AVX2 instruction set.
All of the Watson Speech services service instances associated with an instance of Cloud Pak for Data use the same pool of resources.
Watson Studio Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
2 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
8.8 GB RAMPersistent storage:
Uses the persistent storage provisioned by the common core services. Additional storage is required if you enable Visual Studio Code support.
Ephemeral storage:
5 - 10 GB
Image storage:
Up to 6.08 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
If Data Refinery is not installed, add the vCPU and memory required for Data Refinery to the information listed for Watson Studio.
If you enable the Visual Studio Code extension for Watson Studio, you must allocate a minimum of 500-600 MB of storage per user for installed extensions. For details, see To enable Visual Studio Code in Post-installation tasks for the Watson Studio service.
Watson Studio Runtimes Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
Dictated by the runtimesOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
Dictated by the runtimesPersistent storage:
Uses the persistent storage provisioned by the common core services.
Ephemeral storage:
Dictated by the runtimes.
Image storage:
Up to 148.77 GB
Runtimes use on-demand vCPU and memory. Watson Studio Runtimes includes the following runtimes:
- Runtime 23.1 on Python 3.10
- Runtime 22.2 on Python 3.10
- Runtime 23.1 on Python 3.10 for GPU
- Runtime 22.2 on Python 3.10 for GPU
- Runtime 22.1 on Python 3.9
- Runtime 22.1 on Python 3.9 for GPU
- Runtime 23.1 on R 4.2
- Runtime 22.1 on R 3.6
- Runtime 22.2 on R 4.2
The following runtimes have additional hardware requirements:
- Runtime 23.1 on Python 3.10 for GPU
- At least 1 GPU core is required to use this runtime.
- Runtime 22.2 on Python 3.10 for GPU
- At least 1 GPU core is required to use this runtime.
- Runtime 22.1 on Python 3.9 for GPU
- At least 1 GPU core is required to use this runtime.
watsonx.data Operator pods:
0.5 vCPU
Catalog pods:
0.01 vCPU
Operand:
19 vCPUOperator pods:
0.75 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
40 GB RAMPersistent storage:
542 GB
Ephemeral storage:
24 GB
Image storage:
Up to 15.61 GBMinimum resources for an installation with a single replica of the Presto engine.
If you increase the number of Presto replicas, you need additional vCPU, memory, and ephemeral storage.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
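The Execution Engine for Apache Hadoop entry in the preceding table sizes its operand with a per-deployment formula rather than a fixed value. The following Python snippet simply evaluates that formula; the registration and job counts are arbitrary example values.

```python
def hadoop_operand_requirements(registrations: int, concurrent_jobs: int) -> tuple[float, float]:
    """Evaluate the per-deployment formula from the Execution Engine for Apache Hadoop entry."""
    vcpu = 0.5 + 0.5 * registrations + 0.6 * concurrent_jobs
    memory_gb = 0.5 + 0.5 * registrations + 0.5 * concurrent_jobs
    return vcpu, memory_gb

# Example: 3 Hadoop registrations and 2 concurrently running Hadoop jobs.
vcpu, memory_gb = hadoop_operand_requirements(3, 2)
print(f"Operand: {vcpu:.1f} vCPU, {memory_gb:.1f} GB RAM")  # 3.2 vCPU, 3.0 GB RAM
```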
The following services support POWER9 and POWER10. However, the services do not take advantage of POWER10 optimizations.
Software vCPU Memory Storage Notes
Db2 Data Management Console Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
5 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
19.31 GB RAMPersistent storage:
10 GB
Ephemeral storage:
7.5 GB
Image storage:
Up to 5.86 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
For information on sizing the provisioned instance, see Provisioning the service.
The following services support POWER9 and POWER10.
It is recommended that you configure POWER9 logical partitions to run POWER9 compatibility mode, and that you configure POWER10 logical partitions to run POWER10 compatibility mode.
Software vCPU Memory Storage Notes
Db2 Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
8 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
24 GB RAMPersistent storage:
540 GB (assuming defaults)
Ephemeral storage:
2.2 - 5.4 GB
Image storage:
Up to 1.16 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
A dedicated node is recommended for production deployments of Db2. For details, see Setting up dedicated nodes.
Db2 Warehouse Operator pods:
0.5 vCPU
Catalog pods:
0.01 vCPU
Operand:
SMP: 7 vCPU
MPP: 39 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
SMP: 98 GB RAM
MPP: 610 GB RAMPersistent storage:
540 GB (assuming defaults)
Ephemeral storage:
2.2 - 10.8 GB
Image storage:
Up to 2.65 GBMinimum resources for an installation with a single replica per service.
Use dedicated nodes for:
- Production SMP deployments (recommended)
- MPP deployments (required)
For details, see Setting up dedicated nodes.
- Development deployment
-
- 1 node for SMP
- 2 nodes for MPP
- Production deployment
-
- 1 node for SMP
- 2-999 nodes for MPP
- Recommended configuration
-
Work with IBM Sales to get a more accurate sizing based on your expected workload.
- Restriction: The following services have a limited set of features on s390x hardware:
- Watson Machine Learning
- Watson Studio
- Watson Studio Runtimes
For a list of the features that are available on s390x hardware, see Capabilities on IBM Z®.
Software vCPU Memory Storage Notes
Analytics Engine powered by Apache Spark Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
2.3 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
9 GB RAMPersistent storage:This information is not currently available.
Ephemeral storage:
50 GB per vCPU request
(SSDs are recommended)
Image storage:
Up to 32.86 GB
Spark jobs use emptyDir volumes for temporary storage and shuffling. If your Spark jobs use a lot of disk space for temporary storage or shuffling, make sure that you have sufficient space on the local disk where emptyDir volumes are created. The recommended location is a partition in /var/lib. For details, see Understanding ephemeral storage in the Red Hat OpenShift documentation. If you don't have sufficient space on the local disk, Spark jobs might run slowly and some of the executors might evict jobs. A minimum of 50 GB of temporary storage for each vCPU request is recommended.
Minimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
All of the Analytics Engine powered by Apache Spark service instances associated with an instance of Cloud Pak for Data use the same pool of resources.
Data Refinery Operator pods:
0.1 vCPU
Catalog pods:
0.5 vCPU
Operand:
1 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
1 GB RAM
Operand:
4 GB RAMPersistent storage:
Uses the persistent storage provisioned by the common core services.
Ephemeral storage:
2 GB
Image storage:
Up to 4.51 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
This service is installed when you install Watson Knowledge Catalog or Watson Studio.
Db2 Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
8 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
24 GB RAMPersistent storage:
540 GB (assuming defaults)
Ephemeral storage:
2.2 - 5.4 GB
Image storage:
Up to 1.16 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
A dedicated node is recommended for production deployments of Db2. For details, see Setting up dedicated nodes.
Db2 Data Gate Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
2 vCPU per instanceOperator pods:
0.1 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
13 GB RAM per instancePersistent storage:
50 GB per instance
Ephemeral storage:
0.6 - 3.25 GB
Image storage:
Up to 10.70 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
Db2 Data Management Console Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
5 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
19.31 GB RAMPersistent storage:
10 GB
Ephemeral storage:
7.5 GB
Image storage:
Up to 5.86 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
For information on sizing the provisioned instance, see Provisioning the service.
Db2 Warehouse Operator pods:
0.5 vCPU
Catalog pods:
0.01 vCPU
Operand:
SMP: 7 vCPU
MPP: 39 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
SMP: 98 GB RAM
MPP: 610 GB RAMPersistent storage:
540 GB (assuming defaults)
Ephemeral storage:
2.2 - 10.8 GB
Image storage:
Up to 2.65 GBMinimum resources for an installation with a single replica per service.
Use dedicated nodes for:
- Production SMP deployments (recommended)
- MPP deployments (required)
For details, see Setting up dedicated nodes.
- Development deployment
-
- 1 node for SMP
- 2 nodes for MPP
- Production deployment
-
- 1 node for SMP
- 2-999 nodes for MPP
- Recommended configuration
-
Work with IBM Sales to get a more accurate sizing based on your expected workload.
Execution Engine for Apache Hadoop Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
For each deployment:
0.5 vCPU + (0.5 vCPU * number of Hadoop registrations) + (0.6 vCPU * number of Hadoop jobs run)Operator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
For each deployment:
0.5 GB + (0.5 GB * number of Hadoop registrations) + (0.5 GB * number of Hadoop jobs run)Persistent storage:
2 GB per image pushed
Ephemeral storage:
218 - 436 MB
Image storage:
Up to 2.67 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
Each image that is pushed to the remote Hadoop cluster requires disk space where the image tgz file can be stored.
Execution Engine for Apache Hadoop requires an Execution Engine for Hadoop RPM installation on the Apache Hadoop cluster. For details, see Installing the service on Apache Hadoop clusters.
Watson Machine Learning Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
6 vCPUOperator pods:
0.5 GB RAM
Catalog pods:
0.5 GB RAM
Operand:
27 GB RAMPersistent storage:
150 GB
Ephemeral storage:This information is not currently available.
Image storage:
Up to 173.73 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
AVX2 is recommended but not required for AutoAI experiments.
Watson OpenScale Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
12.75 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
68 GB RAMPersistent storage:
100 GB
Ephemeral storage:
13.5 GB
Image storage:
Up to 22.27 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
Watson Studio Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
2 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
8.8 GB RAMPersistent storage:
Uses the persistent storage provisioned by the common core services. Additional storage is required if you enable Visual Studio Code support.
Ephemeral storage:
5 - 10 GB
Image storage:
Up to 6.08 GBMinimum resources for an installation with a single replica per service.
Work with IBM Sales to get a more accurate sizing based on your expected workload.
If Data Refinery is not installed, add the vCPU and memory required for Data Refinery to the information listed for Watson Studio.
If you enable the Visual Studio Code extension for Watson Studio, you must allocate a minimum of 500-600 MB of storage per user for installed extensions. For details, see To enable Visual Studio Code in Post-installation tasks for the Watson Studio service.
Watson Studio Runtimes Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
Dictated by the runtimesOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
Dictated by the runtimesPersistent storage:
Uses the persistent storage provisioned by the common core services.
Ephemeral storage:
Dictated by the runtimes.
Image storage:
Up to 148.77 GB
Runtimes use on-demand vCPU and memory. Watson Studio Runtimes includes the following runtimes:
- Runtime 22.1 on Python 3.9
- Runtime 22.2 on Python 3.10
Automatically installed dependencies
Automatically installed dependencies provide underlying functionality for services. Use the following sections to understand the hardware requirements for:
- Common core services
- Db2 as a service
- Db2U
Use the following information to determine whether you have the minimum required resources to install each component on your cluster.
Software vCPU Memory Storage Notes
Common core services Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
11 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
18.3 GB RAMPersistent storage:
500 GB
Ephemeral storage:
100 GB
Image storage:
Up to 23.03 GBAutomatically installed by services that require it. Depending on the services that you install, this software is installed once in each Red Hat OpenShift project where Cloud Pak for Data is installed. For details, see Service software requirements.
Db2 as a service Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
0.3 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
0.8 GB RAMPersistent storage:
Not applicable
Ephemeral storage:
0.5 GB
Image storage:
Up to 1.14 GBAutomatically installed by services that require it. Depending on the services that you install, this software is installed once in each Red Hat OpenShift project where Cloud Pak for Data is installed. For details, see Service software requirements.
Db2U Operator pods:
0.6 vCPU
Catalog pods:
0.01 vCPU
Operand:
Not applicable.Operator pods:
0.7 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
Not applicable.Persistent storage:
Not applicable
Ephemeral storage:
0.5 GB
Image storage:
Up to 35.09 GBAutomatically installed by services that require it. Depending on the services that you install, the operands for this software might be installed multiple times in each Red Hat OpenShift project where Cloud Pak for Data is installed.
The operator is installed once per instance of Cloud Pak for Data.
The operands are generated by the services that have a dependency on Db2U.
For a list of services that have a dependency on Db2U, see Service software requirements.
Software vCPU Memory Storage Notes
Db2 as a service Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
0.3 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
0.8 GB RAMPersistent storage:
Not applicable
Ephemeral storage:
0.5 GB
Image storage:
Up to 1.14 GBAutomatically installed by services that require it. Depending on the services that you install, this software is installed once in each Red Hat OpenShift project where Cloud Pak for Data is installed. For details, see Service software requirements.
Db2U Operator pods:
0.6 vCPU
Catalog pods:
0.01 vCPU
Operand:
Not applicable.Operator pods:
0.7 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
Not applicable.Persistent storage:
Not applicable
Ephemeral storage:
0.5 GB
Image storage:
Up to 35.09 GBAutomatically installed by services that require it. Depending on the services that you install, the operands for this software might be installed multiple times in each Red Hat OpenShift project where Cloud Pak for Data is installed.
The operator is installed once per instance of Cloud Pak for Data.
The operands are generated by the services that have a dependency on Db2U.
For a list of services that have a dependency on Db2U, see Service software requirements.
Software vCPU Memory Storage Notes
Common core services Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
11 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
18.3 GB RAMPersistent storage:
500 GB
Ephemeral storage:
100 GB
Image storage:
Up to 23.03 GBAutomatically installed by services that require it. Depending on the services that you install, this software is installed once in each Red Hat OpenShift project where Cloud Pak for Data is installed. For details, see Service software requirements.
Db2 as a service Operator pods:
0.1 vCPU
Catalog pods:
0.01 vCPU
Operand:
0.3 vCPUOperator pods:
0.256 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
0.8 GB RAMPersistent storage:
Not applicable
Ephemeral storage:
0.5 GB
Image storage:
Up to 1.14 GBAutomatically installed by services that require it. Depending on the services that you install, this software is installed once in each Red Hat OpenShift project where Cloud Pak for Data is installed. For details, see Service software requirements.
Db2U Operator pods:
0.6 vCPU
Catalog pods:
0.01 vCPU
Operand:
Not applicable.Operator pods:
0.7 GB RAM
Catalog pods:
0.05 GB RAM
Operand:
Not applicable.Persistent storage:
Not applicable
Ephemeral storage:
0.5 GB
Image storage:
Up to 35.09 GBAutomatically installed by services that require it. Depending on the services that you install, the operands for this software might be installed multiple times in each Red Hat OpenShift project where Cloud Pak for Data is installed.
The operator is installed once per instance of Cloud Pak for Data.
The operands are generated by the services that have a dependency on Db2U.
For a list of services that have a dependency on Db2U, see Service software requirements.