Hardware requirements for ppc64le clusters
Before you install IBM Software Hub on a ppc64le cluster, review the hardware requirements for IBM Software Hub, the shared cluster components, and the services that you plan to install.
| Components | Related links |
|---|---|
| IBM Software Hub platform hardware requirements | Review the minimum requirements for a stable installation. Work with your IBM Sales representative to determine whether you need more resources based on the services that you plan to install and the workloads that you plan to run. |
| Shared cluster components | Review the hardware requirements for the shared cluster components that you need to install. |
| Instance-level prerequisites | Review the hardware requirements for the instance-level prerequisites. |
| Services | Review the hardware requirements for the services that you plan to install. Not all services are supported on all hardware. |
| Automatically installed dependencies | Review the hardware requirements for the automatically installed dependencies, such as the common core services. (These components are installed only if you install a service with a dependency on the component.) |
IBM Software Hub platform hardware requirements
You must install IBM Software Hub on a Red Hat® OpenShift® Container Platform cluster. For information about the supported versions of Red Hat OpenShift Container Platform, see Software requirements.
- It is strongly recommended that you deploy IBM Software Hub on a highly available cluster. If you plan to install IBM Software Hub on a highly available cluster, review Highly available deployments.
- If high availability is not a requirement for your deployment, you can deploy IBM Software Hub on a single node OpenShift (SNO) cluster. If you plan to install IBM Software Hub on SNO, review Single node OpenShift deployments.
IBM Software Hub supports PowerVM-capable Power9 and Power10 systems.
To improve performance, it is recommended that you configure Power9 logical partitions to run in Power9 compatibility mode, and Power10 logical partitions to run in Power10 compatibility mode.
Not all services support Power. For more information, see Services.
Highly available deployments
The following requirements are the minimum recommendations for a small, stable deployment of IBM Software Hub.
- Sizing your cluster
- Use the minimum recommended configuration as a starting point for your cluster configuration. If you use fewer resources, you are likely to encounter stability problems.
Important: Work with your IBM Sales representative to size your cluster. The size of your cluster depends on multiple factors:
- The shared components that you need to install.
- The services that you plan to install.
The sizing requirements for services are available in Services. If you install only a few services with small vCPU and memory requirements, you might not need additional resources. However, if you plan to install multiple services or services with large footprints, add the appropriate amount of vCPU and memory to the minimum recommendations.
- The types of workloads that you plan to run.
For example, if you plan to run complex analytics workloads in addition to other resource-intensive workloads, such as ETL jobs, you can expect reduced concurrency levels if you don't add additional computing power to your cluster.
Because workloads vary based on several factors, use measurements from running real workloads with realistic data to size your cluster.
For additional information on sizing your cluster, download the component scaling guidance PDF.
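The factors above reduce to simple arithmetic: start from the platform minimums and add the operand vCPU and memory of each service that you plan to install. The following unofficial Python sketch illustrates the idea; the service selection is hypothetical, the operand figures are the single-replica minimums from the service tables later on this page, and a 64 GB RAM worker node is assumed.

```python
# Rough cluster-sizing sketch; NOT an official IBM sizing tool.
# Operand minimums (vCPU, GB RAM) are taken from the service tables
# on this page and assume a single replica per service.
services = {
    "Watson Studio": (2, 8.8),
    "Data Refinery": (1, 4),
    "Common core services": (11, 18.3),
}

# Assumed minimum worker pool: 3 nodes x 16 vCPU / 64 GB RAM.
worker_vcpu = 3 * 16
worker_mem_gb = 3 * 64

need_vcpu = sum(v for v, _ in services.values())
need_mem_gb = sum(m for _, m in services.values())

print(f"Services need {need_vcpu} vCPU / {need_mem_gb:.1f} GB RAM")
print(f"Worker pool provides {worker_vcpu} vCPU / {worker_mem_gb} GB RAM")
```

Treat the result as a floor only; real workloads need substantially more, which is why measurements from representative workloads are recommended.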
- Choosing a configuration
- The following configuration has been tested and validated by IBM. However, Red Hat OpenShift Container Platform supports other configurations. If the configuration in the following table does not work in your environment, you can adapt the configuration based on the guidance in the Red Hat OpenShift documentation.
| Node role | Number of servers | Minimum available vCPU | Minimum memory | Minimum storage |
|---|---|---|---|---|
| Control plane | 3 (for high availability) | 4 vCPU per node (this configuration supports up to 24 worker nodes) | 16 GB RAM per node (this configuration supports up to 24 worker nodes) | No additional storage is needed for IBM Software Hub. See the Red Hat OpenShift Container Platform documentation for sizing guidance. |
| Infra | 3 (recommended) | 4 vCPU per node (this configuration supports up to 27 worker nodes) | 24 GB RAM per node (this configuration supports up to 27 worker nodes) | See the Red Hat OpenShift Container Platform documentation for sizing guidance. |
| Worker (compute) | 3 or more | 16 vCPU per node. The maximum vCPU per node is 220 vCPU. | 64 GB RAM per node. The maximum memory per node is 824 GB RAM. | 300 GB of storage space per node for storing container images locally. If you plan to install watsonx.ai™, increase the storage to 500 GB per node. See IBM Software Hub platform storage requirements for details. |
| Load balancer | 2 (for development, test, or proof-of-concept clusters, you can use 1) | 2 vCPU per node | 4 GB RAM per node. Add another 4 GB of RAM for access restrictions and security control. | Add 100 GB of root storage for access restrictions and security control. A load balancer is required when using three control plane nodes. The load balancer can be in the cluster or external to the cluster; however, in a production-level cluster, an enterprise-grade external load balancer is recommended. |
Single node OpenShift deployments
You can use single node OpenShift (SNO) if redundancy and scalability are not a concern. For example, consider an SNO deployment in the following situations:
- You are planning a small deployment with a limited number of services
- You want to set up satellite or disconnected deployments that are connected to a larger deployment of IBM Software Hub
- You plan to run workloads on remote physical locations
- You want to set up ephemeral instances of IBM Software Hub for test pipelines
- You need a cluster for demonstrations, training, or proof-of-concept installations
- Considerations
- Before you choose an SNO deployment, ensure that you understand the following limitations of a single node deployment:
- Limited scalability
- SNO is not as scalable as a multi-node deployment. You might experience performance issues if you try to run too many applications on a single node.
- Limited capacity
- SNO has a limited capacity for storage and memory. You might need to upgrade your hardware to accommodate larger workloads.
- Single point of failure
- Because SNO runs on a single node, it is more susceptible to downtime in the event of a hardware failure.
- Limited storage support
- If you want to use SNO, you must use one of the following types of persistent storage:
- NFS
- Amazon Elastic storage, specifically Amazon Elastic File System and Amazon Elastic Block Store
- Dedicated nodes are not supported
- You cannot use dedicated nodes on an SNO cluster.
- Best practices
- If you decide to proceed with an SNO deployment, ensure that you adhere to the following best practices:
- Workload planning
- If you plan to run multiple applications on the cluster, ensure that you have sufficient capacity for the workloads that you plan to run. If you run a resource intensive workload, it might prevent other workloads from running on the cluster.
- Backup and recovery
- Implement a robust data protection strategy to help ensure that the system can be recovered quickly in the event of a disaster or system crash. See Backing up and restoring IBM Software Hub for information about supported backup methods.
- Sizing your cluster
-
Use the minimum recommended configuration as a starting point for your cluster configuration. If you use fewer resources, you are likely to encounter stability problems.
Important: Work with your IBM Sales representative to size your cluster. The size of your cluster depends on multiple factors:
- The services that you plan to install.
The sizing requirements for services are available in Services. If you install only a few services with small vCPU and memory requirements, you might not need additional resources. However, if you plan to install multiple services or services with large footprints, add the appropriate amount of vCPU and memory to the minimum recommendations.
Best practice: Size your SNO cluster appropriately before deploying any software on the cluster. You will encounter performance issues if you have insufficient hardware.
- The types of workloads that you plan to run.
For example, if you plan to run complex analytics workloads in addition to other resource-intensive workloads, such as ETL jobs, you can expect reduced concurrency levels if you don't add additional computing power to your cluster.
Because workloads vary based on several factors, use measurements from running real workloads with realistic data to size your cluster.
Best practice: If you plan to run multiple applications on the cluster, ensure that you have sufficient capacity for the workloads that you plan to run. If you run a resource intensive workload, it might prevent other workloads from running on the cluster.
For additional information on sizing your cluster, download the component scaling guidance PDF.
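As a quick sanity check before committing to SNO, you can compare the operand minimums of the services you plan to install against a single worker node's capacity (16 vCPU / 64 GB RAM in the configuration table below). This is a hypothetical sketch, not an official sizing method; the service selection is illustrative.

```python
# Hypothetical SNO headroom check; NOT an official IBM sizing tool.
# Node capacity is the minimum worker configuration for SNO on this page.
node_vcpu, node_mem_gb = 16, 64

# Operand minimums (vCPU, GB RAM) from the service tables on this page.
planned = {
    "Common core services": (11, 18.3),
    "Decision Optimization": (0.9, 1.5),
}

used_vcpu = sum(v for v, _ in planned.values())
used_mem_gb = sum(m for _, m in planned.values())

if used_vcpu > node_vcpu or used_mem_gb > node_mem_gb:
    print("Node is too small for the planned services.")
else:
    print(f"Headroom: {node_vcpu - used_vcpu:.1f} vCPU, "
          f"{node_mem_gb - used_mem_gb:.1f} GB RAM")
```

Remember that operand minimums exclude runtime workloads, so leave generous headroom on a single node.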
- Choosing a configuration
-
The following configuration has been tested and validated by IBM. However, Red Hat OpenShift Container Platform supports other configurations. If the configuration in the following table does not work in your environment, you can adapt the configuration based on the guidance in the Red Hat OpenShift documentation.
The recommended configuration uses two virtual machines (VMs) that act as:
- A bastion node with NFS storage configured
- A worker (compute) node
| VM role | Minimum available vCPU | Minimum memory | Minimum storage |
|---|---|---|---|
| Bastion node | 4 vCPU | 8 GB RAM | Allocate a minimum of 500 GB of disk space. |
| Worker (compute) | 16 vCPU | 64 GB RAM | Allocate a minimum of 300 GB of disk space on the node for image storage. See IBM Software Hub platform storage requirements for details. |
Shared cluster-wide components
Shared cluster components provide underlying functionality for the IBM Software Hub control plane and services. Use the following sections to understand the hardware requirements for each component:
- IBM Certificate manager
- License Service
- Scheduling service
For more information, see Cluster-wide components.
Use the following information to determine whether you have the minimum required resources to install each component on your cluster.
IBM Certificate manager
The IBM Certificate manager is supported for upgrades only.
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| 1.6 vCPU | 1.94 GB RAM | This information is not currently available. | A certificate manager is required. This software is installed once on the cluster. For more information, see Cluster-wide components. Review the Hardware requirements and recommendations for foundational services in the IBM Cloud Pak foundational services documentation for updates to the hardware requirements. |
License Service
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| 3.1 vCPU | 0.735 GB RAM | This information is not currently available. | Required. This software is installed once on the cluster. For more information, see Cluster-wide components. Review the Hardware requirements and recommendations for foundational services in the IBM Cloud Pak foundational services documentation for updates to the hardware requirements. |
Scheduling service
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| Operator pods: 0.2 vCPU<br>Catalog pods: 0.01 vCPU<br>Operand: Up to 0.9 vCPU | Operator pods: 0.54 GB RAM<br>Catalog pods: 0.05 GB RAM<br>Operand: Up to 2.7 GB RAM | Persistent storage: Not applicable<br>Ephemeral storage: 0.9 GB<br>Image storage: Up to 2.69 GB | Required in some situations, but generally recommended. This software is installed once on the cluster. For more information, see Cluster-wide components. Minimum resources for an installation with a single replica per service. |
Instance-level prerequisites
Instance-level prerequisites provide underlying functionality for services. Use the following sections to understand the hardware requirements for:
- IBM Software Hub
- IBM Cloud Pak foundational services
For more information, see Instance-level components.
Use the following information to determine whether you have the minimum required resources to install each component on your cluster.
IBM Software Hub
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| The requirements for the control plane are included in the IBM Software Hub platform hardware requirements. | The requirements for the control plane are included in the IBM Software Hub platform hardware requirements. | The requirements for the control plane are included in the IBM Software Hub platform hardware requirements. | Required. The control plane is installed once for each instance of IBM Software Hub on the cluster. |
IBM Cloud Pak foundational services
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| 4 vCPU | 5 GB RAM | See the Hardware requirements and recommendations for foundational services in the IBM Cloud Pak foundational services documentation. | Required. The IBM Cloud Pak foundational services are installed once for each instance of IBM Software Hub on the cluster. |
Services
Use the following information to determine whether you have the minimum required resources to install each service that you want to use.
The following services support PowerVM-capable Power9 and Power10 systems.
| Service | Limitations |
|---|---|
| Analytics Engine powered by Apache Spark | You cannot use the Spark Labs development environment. |
| Watson Machine Learning | Some features cannot be used, run, or deployed on Power hardware. Not all software specifications are supported on Power hardware. For more information, see the service documentation. |
| Watson Studio | Some features cannot be used, run, or deployed on Power hardware. |
| Watson Studio Runtimes | |
Analytics Engine powered by Apache Spark
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| Operator pods: 0.1 vCPU<br>Catalog pods: 0.01 vCPU<br>Operand: 2.3 vCPU | Operator pods: 0.256 GB RAM<br>Catalog pods: 0.05 GB RAM<br>Operand: 9 GB RAM | Persistent storage: This information is not currently available.<br>Ephemeral storage: 50 GB per vCPU request (SSDs are recommended)<br>Image storage: Up to 65.53 GB | Spark jobs use emptyDir volumes for temporary storage and shuffling. If your Spark jobs use a lot of disk space for temporary storage or shuffling, make sure that you have sufficient space on the local disk where emptyDir volumes are created. The recommended location is a partition in /var/lib. For more information, see Understanding ephemeral storage in the Red Hat OpenShift documentation. If you don't have sufficient space on the local disk, Spark jobs might run slowly and some of the executors might evict jobs. A minimum of 50 GB of temporary storage for each vCPU request is recommended. Minimum resources for an installation with a single replica per service. Work with IBM Sales to get a more accurate sizing based on your expected workload. All of the Analytics Engine powered by Apache Spark service instances associated with an instance of IBM Software Hub use the same pool of resources. |
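The 50 GB per vCPU guideline translates directly into a minimum scratch-disk reservation for emptyDir volumes. A small illustrative sketch (the job list is hypothetical):

```python
# Sketch of the rule of thumb above: reserve at least 50 GB of local
# disk (for emptyDir volumes) per vCPU requested by concurrent Spark jobs.
GB_PER_VCPU = 50

def spark_scratch_gb(vcpu_requests):
    """Minimum local disk (GB) to reserve for emptyDir volumes."""
    return sum(vcpu_requests) * GB_PER_VCPU

# Three concurrent jobs requesting 2, 4, and 8 vCPU:
print(spark_scratch_gb([2, 4, 8]))  # 700
```

Compare the result against the free space on the partition that backs emptyDir volumes (typically under /var/lib) before scheduling the jobs.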
Data Refinery
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| Operator pods: 0.1 vCPU<br>Catalog pods: 0.5 vCPU<br>Operand: 1 vCPU | Operator pods: 0.256 GB RAM<br>Catalog pods: 1 GB RAM<br>Operand: 4 GB RAM | Persistent storage: Uses the persistent storage provisioned by the common core services.<br>Ephemeral storage: 2 GB<br>Image storage: Up to 4.39 GB | Minimum resources for an installation with a single replica per service. Work with IBM Sales to get a more accurate sizing based on your expected workload. This service is installed when you install IBM Knowledge Catalog or Watson Studio. |
Db2
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| Operator pods: 0.1 vCPU<br>Catalog pods: 0.01 vCPU<br>Operand: 8 vCPU | Operator pods: 0.256 GB RAM<br>Catalog pods: 0.05 GB RAM<br>Operand: 24 GB RAM | Persistent storage: 540 GB (assuming defaults)<br>Ephemeral storage: 2.2 - 9.7 GB<br>Image storage: Up to 0.73 GB | Minimum resources for an installation with a single replica per service. Work with IBM Sales to get a more accurate sizing based on your expected workload. A dedicated node is recommended for production deployments of Db2. For details, see Setting up dedicated nodes. |
Db2 Data Management Console
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| Operator pods: 0.1 vCPU<br>Catalog pods: 0.01 vCPU<br>Operand: 5 vCPU | Operator pods: 0.256 GB RAM<br>Catalog pods: 0.05 GB RAM<br>Operand: 19.31 GB RAM | Persistent storage: 10 GB<br>Ephemeral storage: 7.5 GB<br>Image storage: Up to 6.09 GB | Minimum resources for an installation with a single replica per service. Work with IBM Sales to get a more accurate sizing based on your expected workload. For information on sizing the provisioned instance, see Creating a service instance for Db2 Data Management Console from the web client. |
Db2 Warehouse
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| Operator pods: 0.5 vCPU<br>Catalog pods: 0.01 vCPU<br>Operand: SMP: 7 vCPU; MPP: 39 vCPU | Operator pods: 0.256 GB RAM<br>Catalog pods: 0.05 GB RAM<br>Operand: SMP: 98 GB RAM; MPP: 610 GB RAM | Persistent storage: 540 GB (assuming defaults)<br>Ephemeral storage: 2.2 - 10.8 GB<br>Image storage: Up to 0.73 GB | Minimum resources for an installation with a single replica per service. Dedicated nodes are recommended for production deployments. For details, see Setting up dedicated nodes. |
Decision Optimization
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| Operator pods: 0.1 vCPU<br>Catalog pods: 0.01 vCPU<br>Operand: 0.9 vCPU | Operator pods: 0.256 GB RAM<br>Catalog pods: 0.05 GB RAM<br>Operand: 1.5 GB RAM | Persistent storage: 12 GB<br>Ephemeral storage: 300 - 6500 MB<br>Image storage: Up to 3.76 GB | Minimum resources for an installation with a single replica per service. Work with IBM Sales to get a more accurate sizing based on your expected workload. |
RStudio® Server Runtimes
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| Operator pods: 0.1 vCPU<br>Catalog pods: 0.01 vCPU<br>Operand: 1 vCPU | Operator pods: 0.256 GB RAM<br>Catalog pods: 0.05 GB RAM<br>Operand: 8.8 GB RAM | Persistent storage: Uses the persistent storage provisioned by the common core services.<br>Ephemeral storage: Dictated by the runtimes. Approximately 1 GB for each runtime.<br>Image storage: Up to 18.29 GB | Minimum resources for an installation with a single replica per service. Work with IBM Sales to get a more accurate sizing based on your expected workload. |
Watson Machine Learning
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| Operator pods: 0.1 vCPU<br>Catalog pods: 0.01 vCPU<br>Operand: 6 vCPU | Operator pods: 0.5 GB RAM<br>Catalog pods: 0.5 GB RAM<br>Operand: 27 GB RAM | Persistent storage: 150 GB<br>Ephemeral storage: This information is not currently available.<br>Image storage: Up to 93.10 GB | Minimum resources for an installation with a single replica per service. Work with IBM Sales to get a more accurate sizing based on your expected workload. AVX2 is recommended but not required for AutoAI experiments. If you plan to use deep learning or models that require GPU, the service requires at least one GPU, and GPU support is limited to specific GPU models. All GPU nodes on the cluster must be the same type of GPU. If you want to partition GPU nodes using NVIDIA Multi-Instance GPU (MIG), all of the partitions must be the same configuration and partition size, and MIG support is limited to specific configurations. |
Watson Studio
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| Operator pods: 0.1 vCPU<br>Catalog pods: 0.01 vCPU<br>Operand: 2 vCPU | Operator pods: 0.256 GB RAM<br>Catalog pods: 0.05 GB RAM<br>Operand: 8.8 GB RAM | Persistent storage: Uses the persistent storage provisioned by the common core services. Additional storage is required if you enable Visual Studio Code support.<br>Ephemeral storage: 5 - 10 GB<br>Image storage: Up to 6.55 GB | Minimum resources for an installation with a single replica per service. Work with IBM Sales to get a more accurate sizing based on your expected workload. If Data Refinery is not installed, add the vCPU and memory required for Data Refinery to the information listed for Watson Studio. If you enable the Visual Studio Code extension for Watson Studio, you must allocate a minimum of 500-600 MB of storage per user for installed extensions. For details, see To enable Visual Studio Code in Post-installation tasks for the Watson Studio service. |
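The per-user extension storage for Visual Studio Code support adds up quickly on multi-user clusters. A hypothetical estimate using the upper bound of 600 MB per user (the user count is an assumption for illustration):

```python
# Illustrative estimate of extra storage for Visual Studio Code
# extensions: 500-600 MB per user, per the Watson Studio notes.
# The user count and the choice of the 600 MB upper bound are assumptions.
def vscode_extension_storage_gb(users, mb_per_user=600):
    """Storage (GB) to allocate for installed extensions."""
    return users * mb_per_user / 1024

print(f"{vscode_extension_storage_gb(50):.1f} GB for 50 users")
```

Add the result to the persistent storage provisioned by the common core services when planning capacity.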
Watson Studio Runtimes
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| Operator pods: 0.1 vCPU<br>Catalog pods: 0.01 vCPU<br>Operand: Dictated by the runtimes | Operator pods: 0.256 GB RAM<br>Catalog pods: 0.05 GB RAM<br>Operand: Dictated by the runtimes | Persistent storage: Uses the persistent storage provisioned by the common core services.<br>Ephemeral storage: Dictated by the runtimes.<br>Image storage: Up to 53.71 GB | Runtimes use on-demand vCPU and memory. Watson Studio Runtimes includes multiple runtime environments. |
Automatically installed dependencies
Automatically installed dependencies provide underlying functionality for services. Use the following sections to understand the hardware requirements for:
- Common core services
- Db2 as a service
- Db2U
Use the following information to determine whether you have the minimum required resources to install each component on your cluster.
Common core services
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| Operator pods: 0.1 vCPU<br>Catalog pods: 0.01 vCPU<br>Operand: 11 vCPU | Operator pods: 0.256 GB RAM<br>Catalog pods: 0.05 GB RAM<br>Operand: 18.3 GB RAM | Persistent storage: 500 GB<br>Ephemeral storage: 100 GB<br>Image storage: Up to 23.83 GB | Automatically installed by services that require it. Depending on the services that you install, this software is installed once in each Red Hat OpenShift project where IBM Software Hub is installed. For details, see Service software requirements. |
Db2 as a service
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| Operator pods: 0.1 vCPU<br>Catalog pods: 0.01 vCPU<br>Operand: 0.3 vCPU | Operator pods: 0.256 GB RAM<br>Catalog pods: 0.05 GB RAM<br>Operand: 0.8 GB RAM | Persistent storage: Not applicable<br>Ephemeral storage: 0.5 GB<br>Image storage: Up to 2.02 GB | Automatically installed by services that require it. Depending on the services that you install, this software is installed once in each Red Hat OpenShift project where IBM Software Hub is installed. For details, see Service software requirements. |
Db2U
| vCPU | Memory | Storage | Notes |
|---|---|---|---|
| Operator pods: 0.6 vCPU<br>Catalog pods: 0.01 vCPU<br>Operand: Not applicable | Operator pods: 0.7 GB RAM<br>Catalog pods: 0.05 GB RAM<br>Operand: Not applicable | Persistent storage: Not applicable<br>Ephemeral storage: 0.5 GB<br>Image storage: Up to 40.27 GB | Automatically installed by services that require it. Depending on the services that you install, the operands for this software might be installed multiple times in each Red Hat OpenShift project where IBM Software Hub is installed. The operator is installed once per instance of IBM Software Hub. The operands are generated by the services that have a dependency on Db2U. For a list of services that have a dependency on Db2U, see Service software requirements. |