System requirements

The system requirements for installing IBM Fusion software.

The IBM Fusion operators and your workloads run on OpenShift® Container Platform, which in turn runs on 64-bit Intel or AMD x86, Linux on IBM Z, or IBM Power hardware architectures. For more information about Fusion data services on the different deployment platforms of IBM Fusion, see IBM Fusion Services platform support matrix.

The following list shows the system requirements for IBM Fusion software. Each component entry lists its vCPU, memory, storage, and GPU requirements.
Note: Though there is no minimum node count requirement, the following figures are based on a cluster with three control nodes and three compute nodes. Sizing changes with the number of nodes for components that list "Per node" values.

Fusion Data Foundation can be deployed on infra or worker nodes, whereas all other IBM Fusion services, including IBM Fusion Base, must be deployed on worker nodes.

IBM Fusion Base
vCPUs:
  • Request CPU: 4
  • Limit CPU: 10
Memory:
  • Request memory: 4 GiB
  • Limit memory: 13 GiB
Storage: Overall storage is a minimum of 500 MB.
GPU: For CAS, you need a GPU-based solution.
Fusion Data Foundation
vCPUs:
  • Internal mode (local and dynamic):
    • Request CPU: 22 + 2 × (total count of Object Storage Daemons)
    • Limit CPU: 26 + 2 × (total count of Object Storage Daemons)
  • External mode:
    • Request CPU: 6
    • Limit CPU: 5
  • For IBM Power Systems:
    • Internal mode: 48 CPU (logical)
    • External mode: 24 CPU (logical)
Memory:
  • Internal mode (local and dynamic):
    • Request memory: 56 + 5 × (total count of Object Storage Daemons) GiB
    • Limit memory: 56 + 5 × (total count of Object Storage Daemons) GiB
  • External mode:
    • Request memory: 16 GiB
    • Limit memory: 16 GiB
  • For IBM Power Systems:
    • Internal mode: 192 GiB memory
    • External mode: 48 GiB memory
Storage: Minimum three storage nodes. For IBM Power Systems, internal mode requires 3 storage devices, each with an additional 500 GB of disk. For more information about Fusion Data Foundation, see Important considerations when deploying Red Hat OpenShift Data Foundation.
GPU: NA
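As a worked example of the internal-mode formulas, a deployment with three Object Storage Daemons requests 22 + 2 × 3 = 28 vCPUs and 56 + 5 × 3 = 71 GiB of memory.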
Global Data Platform
vCPUs:
  • Per node: Request CPU: 10% of the total vCPUs of the node
  • Cluster:
    • Request CPU: 3
    • Limit CPU: 12
Memory:
  • Per node: Request memory: 10% of the total memory of the node
  • Cluster:
    • Request memory: 13 GiB
    • Limit memory: 30 GiB
Storage: For more information about IBM Storage Scale Container Native Storage Access, see Hardware requirements for IBM Storage Scale Container Native Storage Access.
GPU: NA
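For example, a compute node with 16 vCPUs and 64 GiB of memory contributes a per-node request of 1.6 vCPUs and 6.4 GiB of memory, in addition to the cluster-wide requests and limits.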
Backup & Restore        
Backup & Restore Base Hub install requirements (assuming 3 node cluster)
  • Request CPU: 4.46
  • Limit CPU: 35.9
  • Request Memory: 18,210 MiB
  • Limit Memory: 46,502 MiB
  • Request Ephemeral Storage: 3,435 MiB
  • Limit Ephemeral Storage: 8,270 MiB
 
Backup & Restore Base Spoke install requirements (3 node cluster). For more information scaling the hub, see Backup & restore hub performance and scaling.
  • Request CPU: 1.39
  • Limit CPU: 14
  • Request Memory: 2,104 MiB
  • Limit Memory: 13,872 MiB
  • Request Ephemeral Storage: 875 MiB
  • Limit Ephemeral Storage: 3,150 MiB
 
Each additional node for Backup & Restore (Beyond the first three) Applicable for both Hub and Spoke)
  • Request CPU: 0.05
  • Limit CPU: 4
  • Request Memory: 250 MiB
  • Limit Memory: 2048 MiB
  • Request Ephemeral Storage: 25 MiB
  • Limit Ephemeral Storage: 500 MiB
 
Per Job Additional Requirements: Data Mover (1 per PVC, 5 per job per node)

Request CPU: 2

Limit CPU: 4

Request Memory: 4,096 MiB

Limit Memory: 16,384 MiB

Request Ephemeral Storage: 5,000 MiB

Limit Ephemeral Storage: 5,000 MiB
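If the per-job, per-node figure is a concurrency cap (an interpretation of "max 5 per job per node", not stated explicitly here), a backup job that protects eight PVCs on a single node runs at most five data movers at a time, requesting up to 5 × 2 = 10 vCPUs and 5 × 4,096 MiB = 20,480 MiB of memory on that node at peak.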

 
Per-job additional requirements: Legacy Restic Data Mover (1 per PVC, max 3 per job):
  • Request CPU: 0.1
  • Limit CPU: 2
  • Request memory: 4,000 MiB
  • Limit memory: 15,000 MiB
  • Request ephemeral storage: 400 MiB
  • Limit ephemeral storage: 2,000 MiB

Per-job additional requirements: Maintenance (runs after every job and once per day per application/policy combination):
  • Request CPU: 0.5
  • Limit CPU: 4
  • Request memory: 500 MiB
  • Limit memory: 4,096 MiB
  • Request ephemeral storage: 5,000 MiB
  • Limit ephemeral storage: 5,000 MiB
IBM Data Cataloging
vCPUs:
  • Request CPU: 20
  • Limit CPU: 100
Memory:
  • Request memory: 40 GiB
  • Limit memory: 180 GiB
Storage: 120 GB minimum. For more information about IBM Data Cataloging, see Installing IBM Data Cataloging.
GPU: NA
Content-Aware Storage (CAS)
vCPUs:
  • Starter configuration (<12 TB ingested data): 160 vCPU (SMT=2)
  • >12 TB ingested data with high availability: 320 vCPU
Memory:
  • Starter configuration: 768 GiB
  • >12 TB ingested data with high availability: 2560 GiB
Storage: The minimum required file system size is 200 GB. For more information about IBM Storage Scale Container Native Storage Access, see Hardware requirements for IBM Storage Scale Container Native Storage Access.
GPU:
  • Starter configuration (<12 TB ingested data):
    • 1 GPU worker node
    • 2 non-GPU worker nodes
    • Production with Multi-Instance GPU (MIG) enabled: 2 MIG-capable NVIDIA GPUs (A100 80GB, H100, RTX PRO 6000)
    • Production without Multi-Instance GPU (MIG) enabled: 6 NVIDIA GPUs (L40S, A100 40GB, A100 80GB, H100, H200, RTX PRO 6000)
    • Non-production with NVIDIA time slicing enabled: 2 NVIDIA GPUs (L40S, A10G)
  • >12 TB ingested data with high availability:
    • 2 GPU worker nodes
    • 1 non-GPU worker node
    • Production with Multi-Instance GPU (MIG) enabled: 4 MIG-capable NVIDIA GPUs (A100 80GB, H100, RTX PRO 6000)

For more information about MIG, see Configuring a Multi-Instance GPU (MIG) with CAS.

The following list shows the component versions:
  • On-premises VMware: Version 7.0
  • On-premises IBM z/OS Container Extensions (IBM zCX): All the zCX hypervisor versions on which OpenShift Container Platform is supported.
  • Red Hat® OpenShift Container Platform: 4.14, 4.15, 4.16, 4.17, 4.18
    Note: Support for the console plugin of the Fusion Backup & Restore service depends on the OpenShift Container Platform version. For more information, see Guidance for Fusion Native Console plugin OCP version.
  • Red Hat OpenShift Kubernetes Engine (OKE): You can use OKE as an alternative to OpenShift Container Platform. The supported versions of OKE are the same as those of OpenShift Container Platform.
  • Red Hat OpenShift Virtualization Engine (OVE): IBM Fusion supports OVE as an alternative to OpenShift Container Platform. The supported versions of OVE are the same as those of OpenShift Container Platform. For more information about the limitations and considerations of using OVE, see OVE considerations.
  • Storage: Fusion Data Foundation 4.14, 4.15, 4.16, 4.17, 4.18

Note: For deployment platforms and supported services, see Support matrix for IBM Fusion versions.
  • Storage: IBM Storage Scale
    Note: IBM Fusion supports IBM Storage Scale 5.2.3.x.
    Remote mount of IBM Storage Scale storage:
      • IBM Storage Scale remote storage cluster release 5.1.9.0 or higher.
      • The IBM Fusion cluster accesses the storage owned by an IBM Storage Scale storage cluster by using a remote mount. The IBM Storage Scale file system version of the owning storage cluster cannot be newer than version 37.00.
    To determine the version of your IBM Storage Scale cluster, run the mmdiag --version command on the IBM Storage Scale cluster. To determine the version of your IBM Storage Scale file system, run the mmlsfs all -V command; see the example after this list.
    For further information, including installation and upgrade instructions for the remote IBM Storage Scale cluster, see the IBM Storage Scale documentation.
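A minimal version-check sketch, assuming SSH access to a node of the remote IBM Storage Scale storage cluster (storage-node is a hypothetical host name; the two commands are the ones named above):

    # Report the IBM Storage Scale software version of the storage cluster
    ssh storage-node mmdiag --version

    # Report the version of every IBM Storage Scale file system; for remote
    # mount, the owning cluster's file system version must not be newer than 37.00
    ssh storage-node mmlsfs all -V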

Backup & Restore
IBM Fusion supports backup and recovery operations on Red Hat OpenShift Container Platform environments with any storage provider that implements the Container Storage Interface (CSI) driver and meets the following prerequisites:
  • The Container Storage Interface driver must be v1 or higher (no support for alpha or beta versions).
  • The Container Storage Interface driver must support Volume Snapshot and Restore.
  • StorageClass must be created with volumeBindingMode: Immediate or volumeBindingMode: WaitForFirstConsumer, and allowVolumeExpansion: true.
  • PersistentVolume (PV) must be created with volumeMode: Filesystem or volumeMode: Block.
Note: It is also recommended that the VolumeSnapshotClass deletion policy is configured to delete snapshots after the corresponding VolumeSnapshotContent is deleted (deletionPolicy: Delete); see the example after these notes.
Note: For backups of PVCs provisioned on IBM Storage Scale, snapshots are created only from independent fileset-based persistent volume claims (PVCs). PVCs that are based on lightweight directories and dependent filesets are not supported.
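A minimal, hypothetical sketch of a StorageClass and VolumeSnapshotClass that satisfy these prerequisites follows. The names backup-ready-sc and backup-ready-vsc and the provisioner example.csi.vendor.com are placeholders; replace them with the provisioner and driver name of your CSI storage provider:

    # Create a StorageClass and VolumeSnapshotClass that meet the
    # Backup & Restore prerequisites listed above
    cat <<EOF | oc apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: backup-ready-sc                    # placeholder name
    provisioner: example.csi.vendor.com        # replace with your CSI provisioner
    volumeBindingMode: WaitForFirstConsumer    # Immediate is also supported
    allowVolumeExpansion: true                 # required by the prerequisites
    ---
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: backup-ready-vsc                   # placeholder name
    driver: example.csi.vendor.com             # must match the CSI driver name
    deletionPolicy: Delete                     # recommended: snapshots are removed
                                               # with their VolumeSnapshotContent
    EOF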