Storage considerations

To install IBM Cloud Pak® for Data, you must have a supported persistent storage solution that is accessible to your Red Hat® OpenShift® cluster.

What storage options are supported for the platform?

Cloud Pak for Data supports and is optimized for several types of persistent storage.

Cloud Pak for Data uses dynamic storage provisioning. A Red Hat OpenShift cluster administrator must properly configure storage before Cloud Pak for Data is installed.
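With dynamic provisioning, a persistent volume claim (PVC) names a storage class, and the storage provider creates the backing volume on demand. The following is a minimal sketch of such a claim; the claim name, size, and storage class name are placeholders that you replace with values for your chosen storage option:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc                  # placeholder name, for illustration only
spec:
  accessModes:
    - ReadWriteMany                  # RWX; many services require shared file storage
  resources:
    requests:
      storage: 10Gi                  # placeholder size
  storageClassName: your-storage-class   # replace with the class for your storage option
```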

Important: It is your responsibility to review the documentation for the storage that you plan to use. Ensure that you understand any limitations that are associated with the storage.

As you plan your installation, remember that not all services support all types of storage. For complete information on the storage that each service supports, see Storage requirements. If the services that you want to install don't support the same type of storage, you can have a mixture of different storage types on your cluster. However, it is recommended to use one type of storage, if possible.

Storage option Version Notes
OpenShift Data Foundation
  • Version 4.10 or later fixes
  • Version 4.12 or later fixes
Available in Red Hat OpenShift Platform Plus.

Ensure that you install a version of OpenShift Data Foundation that is compatible with the version of Red Hat OpenShift Container Platform that you are running. For details, see https://access.redhat.com/articles/4731161.

IBM® Storage Fusion Data Foundation
  • Version 2.5.2 or later fixes
  • Version 2.6.0 or later fixes (Recommended)
Available in IBM Storage Fusion.

Ensure that you install a version of IBM Storage Fusion Data Foundation that is compatible with the version of Red Hat OpenShift Container Platform that you are running.

If you want to use IBM Storage Fusion for backup and recovery, you must use Version 2.6.

If you are upgrading to IBM Cloud Pak for Data Version 4.7, and you want to use IBM Storage Fusion, Version 2.6.0, upgrade your storage after you upgrade IBM Cloud Pak for Data.

IBM Storage Fusion Global Data Platform
  • Version 2.4.0 or later fixes
  • Version 2.5.2 or later fixes
  • Version 2.6.0 or later fixes (Recommended)
Available in IBM Storage Fusion.

If you want to use IBM Storage Fusion for backup and recovery, you must use Version 2.6.

If you are upgrading to IBM Cloud Pak for Data Version 4.7, and you want to use IBM Storage Fusion, Version 2.6.0, upgrade your storage after you upgrade IBM Cloud Pak for Data.

IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface)
  • Version 5.1.5 or later fixes
  • CSI Version 2.6.x or later fixes

Available in the following storage:
  • IBM Storage Fusion
  • IBM Storage Suite for IBM Cloud® Paks
Portworx
  • Version 2.9.1.3 or later fixes
  • Version 2.13.3 or later fixes
If you are running Red Hat OpenShift Container Platform Version 4.12, you must use Portworx Version 2.13.3 or later.
NFS
  • Version 3 or 4
Version 3 is recommended if you are using any of the following services:
  • Db2®
  • Db2 Big SQL
  • Db2 Warehouse
  • Watson™ Knowledge Catalog
  • Watson Query

If you use Version 4, ensure that your storage class uses NFS Version 3 as the mount option. For details, see Setting up dynamic provisioning.
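As a sketch of this configuration, a storage class for a dynamic NFS provisioner can force NFS Version 3 mounts through the `mountOptions` field. The provisioner name and parameters depend on the NFS provisioner that you deploy; the values shown are assumptions based on the common NFS subdir external provisioner:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   # assumption; use your provisioner's name
parameters:
  archiveOnDelete: "false"           # provisioner-specific; depends on your NFS provisioner
mountOptions:
  - nfsvers=3                        # force NFS Version 3 mounts, even against an NFSv4-capable server
allowVolumeExpansion: true
```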

Amazon Elastic Block Store (EBS)
Version: Not applicable.
In addition to EBS storage, your environment must also include EFS storage.

Amazon Elastic File System (EFS)
Version: Not applicable.
It is recommended that you use both EBS and EFS storage.

NetApp Trident
  • Version 22.4.0 or later fixes
This information applies to both self-managed and managed NetApp Trident.
Note: The preceding storage options have been evaluated by IBM. However, you should run the Cloud Pak for Data storage validation tool on your Red Hat OpenShift cluster to:
  • Evaluate whether the storage on your cluster is sufficient for use with Cloud Pak for Data.
  • Assess storage provided by other vendors. This tool does not guarantee support for other types of storage. You can use other storage environments at your own risk.

What storage options are supported on my deployment environment?

Even if Cloud Pak for Data supports a storage option, you can use that option only if it is also supported on your deployment environment. Ensure that you select a storage option that:
  • Works on your chosen deployment environment.

    Some storage options are supported only on a specific deployment environment.

    For clusters hosted on third-party infrastructure, such as IBM Cloud or Amazon Web Services, it is recommended that you use storage that is native to the infrastructure or well integrated with the infrastructure, if possible.

  • Supports the services that you plan to install.

    Some services support a subset of the storage options that are supported by the platform. For details, see Storage requirements.

  • Has sufficient I/O performance.

    For information on how to test I/O performance, see Disk requirements.

Deployment environment Managed OpenShift Self-managed OpenShift
On-premises
Managed OpenShift: IBM Cloud Satellite supports the following storage options:
  • OpenShift Data Foundation
  • Portworx
Self-managed OpenShift: The following storage options are supported on bare metal and VMware infrastructure:
  • OpenShift Data Foundation
  • IBM Storage Fusion Data Foundation
  • IBM Storage Fusion Global Data Platform
  • IBM Storage Scale Container Native
  • Portworx
  • NFS
  • NetApp Trident
IBM Cloud
Managed OpenShift: Red Hat OpenShift on IBM Cloud supports the following storage options on VPC infrastructure:
  • OpenShift Data Foundation
  • IBM Storage Fusion Data Foundation
  • Portworx

For up-to-date information about the storage supported on this environment, review Storing data on persistent storage in the Red Hat OpenShift on IBM Cloud documentation.

Self-managed OpenShift: The following storage options are supported on IBM Cloud VPC infrastructure:
  • OpenShift Data Foundation
  • IBM Storage Fusion Data Foundation
  • Portworx
  • NFS
Amazon Web Services (AWS)
Managed OpenShift: Red Hat OpenShift Service on AWS (ROSA) supports the following storage options:
  • IBM Storage Fusion Global Data Platform
  • Amazon Elastic Block Store (EBS)
  • Amazon Elastic File System (EFS)
  • NetApp Trident (includes Amazon FSx for NetApp ONTAP)
Self-managed OpenShift: The following storage options are supported on AWS infrastructure:
  • OpenShift Data Foundation
  • IBM Storage Fusion Data Foundation
  • Amazon Elastic Block Store (EBS)
  • Amazon Elastic File System (EFS)
  • Portworx
  • NFS
  • NetApp Trident (includes Amazon FSx for NetApp ONTAP)
Microsoft Azure
Managed OpenShift: Azure Red Hat OpenShift (ARO) supports the following storage options:
  • OpenShift Data Foundation
  • IBM Storage Fusion Data Foundation
  • Portworx
  • NFS
Self-managed OpenShift: The following storage options are supported on Microsoft Azure infrastructure:
  • OpenShift Data Foundation
  • IBM Storage Fusion Data Foundation
  • Portworx
  • NFS, specifically Microsoft Azure locally redundant Premium SSD storage
Google Cloud
Managed OpenShift: Managed OpenShift on Google Cloud is not supported.
Self-managed OpenShift: The following storage options are supported on Google Cloud infrastructure:
  • OpenShift Data Foundation
  • Portworx
  • NFS

What storage options are supported on the version of Red Hat OpenShift Container Platform that I am running?

Storage option Version 4.10 Version 4.12
OpenShift Data Foundation
IBM Storage Fusion Data Foundation
IBM Storage Fusion Global Data Platform
IBM Storage Scale Container Native
Portworx
NFS
Amazon Elastic Block Store (EBS)
Amazon Elastic File System (EFS)
NetApp Trident

What storage options are supported on my hardware?

Storage option x86-64 Power® s390x
OpenShift Data Foundation  
IBM Storage Fusion Data Foundation  
IBM Storage Fusion Global Data Platform  
IBM Storage Scale Container Native    
Portworx    
NFS
Amazon Elastic Block Store (EBS)    
Amazon Elastic File System (EFS)    
NetApp Trident    

Storage comparison

Use the following information to decide which storage solution is right for you:


License requirements

The following table lists whether you need a separate license to use each storage option.

Storage option Details
OpenShift Data Foundation

IBM Cloud Pak for Data customers can obtain OpenShift Data Foundation Essentials storage entitlement at no charge.

Entitlement terms

The OpenShift Data Foundation Essentials entitlement applies only to self-managed OpenShift.

You are entitled to use OpenShift Data Foundation Essentials with the following limitations:

  • You can use up to 6 TB of OpenShift Data Foundation Essentials storage.
  • You can use OpenShift Data Foundation Essentials for up to 12 months.

If you exceed these terms, a separate license is required.

Contact your IBM Sales representative for access to this storage.

IBM Storage Fusion Data Foundation

IBM Cloud Pak for Data customers can obtain IBM Storage Fusion storage entitlement at no charge.

Entitlement terms

IBM Storage Fusion entitlement applies only to self-managed OpenShift.

You are entitled to use IBM Storage Fusion with the following limitations:

  • You can use up to 6 TB of IBM Storage Fusion storage.
  • You can use IBM Storage Fusion for up to 12 months.

If you exceed these terms, a separate license is required.

Contact your IBM Sales representative for access to this storage.

IBM Storage Fusion Global Data Platform

IBM Cloud Pak for Data customers can obtain IBM Storage Fusion storage entitlement at no charge.

Entitlement terms

IBM Storage Fusion entitlement applies only to self-managed OpenShift.

You are entitled to use IBM Storage Fusion with the following limitations:

  • You can use up to 6 TB of IBM Storage Fusion storage.
  • You can use IBM Storage Fusion for up to 12 months.

If you exceed these terms, a separate license is required.

Contact your IBM Sales representative for access to this storage.

IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface)
You can use IBM Storage Scale Container Native as part of IBM Storage Fusion.
Portworx A separate license is required.
NFS No license is required.
Amazon Elastic Block Store (EBS) A separate subscription is required.
Amazon Elastic File System (EFS) A separate subscription is required.
NetApp Trident
Self-managed NetApp Trident
A separate license is required.
Managed NetApp Trident (Amazon FSx for NetApp ONTAP)
No license is required.


Storage classes

The person who installs Cloud Pak for Data and the services on the cluster must know which storage classes to use during installation. The following table lists the required types of storage. When applicable, the table also lists the recommended storage classes to use and points to additional guidance on how to create the storage classes.

Storage option Details
OpenShift Data Foundation
The recommended storage classes are automatically created when you install OpenShift Data Foundation.
Cloud Pak for Data uses the following storage classes:
  • RWX file storage: ocs-storagecluster-cephfs
  • RWO block storage: ocs-storagecluster-ceph-rbd
IBM Storage Fusion Data Foundation
The recommended storage classes are automatically created when you install IBM Storage Fusion Data Foundation.
Cloud Pak for Data uses the following storage classes:
  • RWX file storage: ocs-storagecluster-cephfs
  • RWO block storage: ocs-storagecluster-ceph-rbd
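For illustration, persistent volume claims that request these two storage classes might look like the following sketch; the claim names and sizes are placeholders:

```yaml
# RWX file storage backed by CephFS
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-file-pvc             # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi                  # placeholder size
  storageClassName: ocs-storagecluster-cephfs
---
# RWO block storage backed by Ceph RBD
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-block-pvc            # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi                  # placeholder size
  storageClassName: ocs-storagecluster-ceph-rbd
```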
IBM Storage Fusion Global Data Platform
The recommended RWX storage class is called ibm-spectrum-scale-sc.

IBM Storage Fusion Global Data Platform supports both ReadWriteMany (RWX access) and ReadWriteOnce (RWO access) with the same storage class.

IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface)
The recommended RWX storage class is called ibm-spectrum-scale-sc.

IBM Storage Scale Container Native supports both ReadWriteMany (RWX access) and ReadWriteOnce (RWO access) with the same storage class.

For details on creating the recommended storage class, see Setting up IBM Storage Fusion Global Data Platform or IBM Storage Scale Container Native storage.

Portworx
The recommended storage classes are listed in Creating Portworx storage classes.

NFS
The recommended RWX storage class is called managed-nfs-storage. For details on setting up dynamic provisioning and creating the recommended storage class, see Setting up NFS storage.
Amazon Elastic Block Store (EBS)
Use either of the following RWO storage classes:
  • gp2-csi
  • gp3-csi
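As an illustrative sketch, a claim against the gp3-csi class might look like the following; the claim name and size are placeholders. Because EBS volumes are RWO, pair them with EFS for services that need RWX file storage:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-ebs-pvc              # placeholder name
spec:
  accessModes:
    - ReadWriteOnce                  # EBS volumes are RWO block storage
  resources:
    requests:
      storage: 100Gi                 # placeholder size
  storageClassName: gp3-csi
```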
Amazon Elastic File System (EFS)
The recommended RWX storage class is called efs-nfs-client. For details on setting up dynamic storage provisioning and creating the recommended storage class, see Setting up Amazon Elastic File System.
NetApp Trident
Self-managed NetApp Trident
The recommended RWX storage class is called ontap-nas. For details on setting up dynamic provisioning and creating the recommended storage class, see Setting up NetApp Trident.
Managed NetApp Trident (Amazon FSx for NetApp ONTAP)
The requirements are the same as self-managed NetApp Trident.


Data replication for high availability
Storage option Details
OpenShift Data Foundation
Supported.

By default, all services use multiple replicas for high availability. OpenShift Data Foundation maintains each replica in a distinct availability zone.

IBM Storage Fusion Data Foundation
Supported.

By default, all services use multiple replicas for high availability. IBM Storage Fusion Data Foundation maintains each replica in a distinct availability zone.

IBM Storage Fusion Global Data Platform
Supported.

Replication can be enabled within IBM Storage Fusion Global Data Platform in various ways. For details, see Data Mirroring and Replication in the IBM Storage Scale documentation.

IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface)
Supported.

Replication can be enabled within the IBM Storage Scale storage cluster in various ways. For details, see Data Mirroring and Replication in the IBM Storage Scale documentation.

Portworx
Supported.

By default, most services use a storage class that supports 3 replicas.

For details about the replicas for each storage class, see Creating Portworx storage classes.

For details about the storage classes required for each service, see Storage requirements.

NFS
Replication support depends on your NFS server.

Amazon Elastic Block Store (EBS)
Supported.

When you create an EBS volume, it is automatically replicated within its Availability Zone to prevent data loss due to failure of any single hardware component.

Amazon Elastic File System (EFS)
Supported.

You can use EFS replication to create a replica of your EFS file system in the AWS Region of your choice. When you enable replication on an EFS file system, Amazon EFS automatically and transparently replicates the data and metadata on the source file system to the target file system. For details, see Amazon EFS replication.

NetApp Trident
Self-managed NetApp Trident

Supported

For details, see Data replication requirements in the NetApp Trident documentation.

Managed NetApp Trident (Amazon FSx for NetApp ONTAP)
The requirements are the same as self-managed NetApp Trident.


Storage-level backup and restore

Storage-level backup and restore does not include backup and restore of Cloud Pak for Data deployments.

Storage option Details
OpenShift Data Foundation
Container Storage Interface (CSI) support for snapshots and clones.

Tight integration with Velero CSI plug-in for Red Hat OpenShift Container Platform backup and recovery.

IBM Storage Fusion Data Foundation
Container Storage Interface (CSI) support for snapshots and clones.

Tight integration with Velero CSI plug-in for Red Hat OpenShift Container Platform backup and recovery.

IBM Storage Fusion Global Data Platform
For storage level backup, see Backing up and restoring your data in the IBM Storage Fusion documentation.
IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface)
For details, see Data protection and disaster recovery in the IBM Storage Scale documentation.
Portworx
On-premises
Limited support.
IBM Cloud
Supported with the Portworx Enterprise Disaster Recovery plan.
NFS
Limited support.
Amazon Elastic Block Store (EBS)
Amazon Elastic File System (EFS)
NetApp Trident  


Cloud Pak for Data backup and restore

Cloud Pak for Data backup and restore is applicable to application-level backups and does not include backing up data on the storage device.

Backup and restore support depends on your storage and on the method that you use:
  • OADP offline backup and restore
  • OADP online backup and restore to the same cluster
  • Fusion online backup and restore to the same cluster
  • Online disaster recovery
Red Hat OpenShift Data Foundation

with IBM Storage Fusion

IBM Storage Fusion Data Foundation

with IBM Storage Fusion

IBM Storage Fusion Global Data Platform

with IBM Storage Fusion

IBM Storage Scale Container Native

with IBM Storage Fusion

Portworx

Requires Portworx v2.12.2 or higher

   
NetApp Trident    
NFS

Restic backups only

     
Amazon Elastic File System

Restore to same cluster with Restic backups only

     
Amazon Elastic File System and Amazon Elastic Block Store

Restore to same cluster with Restic backups only

     
Amazon FSx for NetApp ONTAP      


Encryption of data at rest
Storage option Details
OpenShift Data Foundation
Supported.

OpenShift Data Foundation uses Linux Unified Key System (LUKS) version 2 based encryption with a key size of 512 bits and the aes-xts-plain64 cipher.

You must enable encryption for your whole cluster during cluster deployment to ensure encryption of data at rest. Encryption is disabled by default. Working with encrypted data incurs a small performance penalty. For details, see Security considerations in the OpenShift Data Foundation documentation.
Restriction: If you are using OpenShift Data Foundation Essentials, you can use only keys that are managed on the cluster; you cannot use an external key management system (KMS).
Support for FIPS cryptography
By storing all data in volumes that use RHEL-provided disk encryption and enabling FIPS mode for your cluster, both data at rest and data in motion (network data) are protected by FIPS Validated Modules in Process encryption. You can configure your cluster to encrypt the root file system of each node. For details, see FIPS 140-2 in the OpenShift Data Foundation documentation.
 

If you have OpenShift Data Foundation Advanced, you can also encrypt persistent volume claims (PVCs) in addition to enabling encryption for the whole cluster. You can enable PVC encryption for storage that is created by the ocs-storagecluster-ceph-rbd storage class.
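The following is a hedged sketch of what such an encrypted storage class can look like. The class name and KMS connection ID are placeholders, the encrypted and encryptionKMSID parameter names follow the OpenShift Data Foundation documentation, and you should copy the remaining parameters from the default ocs-storagecluster-ceph-rbd class on your cluster:

```yaml
# Sketch only: copy the other parameters and secrets from the default
# ocs-storagecluster-ceph-rbd storage class on your cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-rbd                   # placeholder name
provisioner: openshift-storage.rbd.csi.ceph.com
parameters:
  encrypted: "true"                     # enables PVC-level encryption (requires OpenShift Data Foundation Advanced)
  encryptionKMSID: my-kms-connection    # placeholder; KMS connection ID configured on the cluster
reclaimPolicy: Delete
```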

IBM Storage Fusion Data Foundation
Supported.

IBM Storage Fusion Data Foundation uses Linux Unified Key System (LUKS) version 2 based encryption with a key size of 512 bits and the aes-xts-plain64 cipher.

You must enable encryption for your whole cluster during cluster deployment to ensure encryption of data at rest. Encryption is disabled by default. Working with encrypted data incurs a small performance penalty. For details, see Security considerations in the IBM Storage Fusion documentation.
Support for FIPS cryptography
By storing all data in volumes that use RHEL-provided disk encryption and enabling FIPS mode for your cluster, both data at rest and data in motion (network data) are protected by FIPS Validated Modules in Process encryption. You can configure your cluster to encrypt the root file system of each node. For details, see FIPS 140-2 in the IBM Storage Fusion documentation.
 
IBM Storage Fusion Global Data Platform
Supported.

For details, see Encryption in the IBM Storage Scale documentation.

IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface)
Supported.

For details, see Encryption in the IBM Storage Scale documentation.

Portworx
Supported with Portworx Enterprise only.

Portworx uses the LUKS format of dm-crypt and AES-256 as the cipher with xts-plain64 as the cipher mode.

On-premises deployments
Refer to Enabling Portworx volume encryption in the Portworx documentation.
IBM Cloud deployments
To protect the data in your Portworx volumes, encrypt the volumes with IBM Key Protect or Hyper Protect Crypto Services.
NFS
Check with your storage vendor on the steps to enable encryption of data at rest.
Amazon Elastic Block Store (EBS)
Amazon Elastic File System (EFS)
NetApp Trident
Self-managed NetApp Trident

Supported

For details, see Encryption of data at rest in the NetApp Trident documentation.

Managed NetApp Trident (Amazon FSx for NetApp ONTAP)
The requirements are the same as self-managed NetApp Trident.


Network and I/O requirements
Storage option Details
OpenShift Data Foundation
Network requirements
Your network must support a minimum of 10 Gbps.
I/O requirements
Each node must have at least one enterprise-grade SSD or NVMe device that meets the Disk requirements in the system requirements.

If SSD or NVMe aren't supported in your deployment environment, use an equivalent or better device.

IBM Storage Fusion Data Foundation
Network requirements
Your network must support a minimum of 10 Gbps.
I/O requirements
Each node must have at least one enterprise-grade SSD or NVMe device that meets the Disk requirements in the system requirements.

If SSD or NVMe aren't supported in your deployment environment, use an equivalent or better device.

IBM Storage Fusion Global Data Platform
Network requirements
You must have sufficient network performance to meet the storage I/O requirements.
I/O requirements
For details, see Disk requirements in the system requirements.
IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface)
Network requirements
You must have sufficient network performance to meet the storage I/O requirements.
I/O requirements
For details, see Disk requirements in the system requirements.
Portworx
Network requirements
Your network must support a minimum of 10 Gbps.

For details, see Prerequisites in the Portworx documentation.

I/O requirements
For details, see Disk requirements in the system requirements.

For details on performance, see FIO performance in the Portworx documentation.

NFS
Network requirements
You must have sufficient network performance to meet the storage I/O requirements.
I/O requirements
For details, see Disk requirements in the system requirements.
Amazon Elastic Block Store (EBS)
Network requirements
You must have sufficient network performance to meet the storage I/O requirements.
I/O requirements
For details, see Disk requirements in the system requirements.
Amazon Elastic File System (EFS)
Network requirements
You must have sufficient network performance to meet the storage I/O requirements.
I/O requirements
For details, see Disk requirements in the system requirements.
NetApp Trident
Network requirements
Self-managed NetApp Trident
You must have sufficient network performance to meet the storage I/O requirements.
Managed NetApp Trident (Amazon FSx for NetApp ONTAP)
The requirements are the same as self-managed NetApp Trident.
I/O requirements
Self-managed NetApp Trident
For details, see Disk requirements in the system requirements.
Managed NetApp Trident (Amazon FSx for NetApp ONTAP)
The requirements are the same as self-managed NetApp Trident.


Resource requirements

This section describes the resource requirements for the various storage options.

For information about the minimum amount of storage that is required for your environment, see Storage requirements.

Important: Work with your IBM Sales representative to ensure that you have sufficient storage for the services that you plan to run on Cloud Pak for Data and for your expected workload.
Storage option vCPU Memory Storage
OpenShift Data Foundation
  • 10 vCPU per node on three initial nodes.
  • 2 vCPU per node on any additional nodes
For details, see Resource requirements.
  • 24 GB of RAM per node on initial three nodes.
  • 5 GB of RAM on any additional nodes.
For details, see Resource requirements.
A minimum of three nodes.

On each node, you must have at least one SSD or NVMe device. Each device should have at least 1 TB of available storage.

For details, see Storage device requirements.
IBM Storage Fusion Data Foundation
  • 10 vCPU per node on three initial nodes.
  • 2 vCPU per node on any additional nodes

For details, see System requirements.

  • 24 GB of RAM on initial three nodes.
  • 5 GB of RAM on any additional nodes.

For details, see System requirements.

A minimum of three nodes.

On each node, you must have at least one SSD or NVMe device. Each device should have at least 1 TB of available storage.

For details, see System requirements.

IBM Storage Fusion Global Data Platform
8 vCPU on each worker node to deploy IBM Storage Scale Container Native and the IBM Storage Scale Container Storage Interface driver.

See the IBM Storage Scale Container Native hardware requirements.

16 GB of RAM on each worker node.

For details, see the IBM Storage Scale Container Native requirements

1 TB or more of available space
IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface)
8 vCPU on each worker node to deploy IBM Storage Scale Container Native and the IBM Storage Scale Container Storage Interface driver.

See the IBM Storage Scale Container Native requirements.

16 GB of RAM on each worker node.

For details, see the IBM Storage Scale Container Native requirements

1 TB or more of available space
Portworx
On-premises
4 vCPU on each storage node
IBM Cloud
For details see the following sections of Storing data on software-defined-storage (SDS) with Portworx:
  • What worker node flavor in Red Hat OpenShift on IBM Cloud is the right one for Portworx?
  • What if I want to run Portworx in a classic cluster with non-SDS worker nodes?
4 GB of RAM on each storage node.

A minimum of three storage nodes.
On each storage node, you must have:
  • A minimum of 1 TB of raw, unformatted disk
  • An additional 100 GB of raw, unformatted disk for a key-value database.
NFS
  • vCPU: 8 vCPU on the NFS server
  • Memory: 32 GB of RAM on the NFS server
  • Storage: 1 TB or more of available space
Amazon Elastic Block Store (EBS)
Amazon Elastic File System (EFS)
NetApp Trident
  • vCPU: Not applicable for both self-managed and managed NetApp Trident.
  • Memory: Not applicable for both self-managed and managed NetApp Trident.
  • Storage: Self-managed NetApp Trident requires 1 TB or more of available space. For managed NetApp Trident (Amazon FSx for NetApp ONTAP), the requirements are the same as self-managed NetApp Trident.

Additional documentation

Storage option Documentation links
OpenShift Data Foundation
Installation
See the Product Documentation for Red Hat OpenShift Data Foundation.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
See the product documentation for Troubleshooting OpenShift Data Foundation.
IBM Storage Fusion Data Foundation
Installation
  1. To deploy IBM Storage Fusion, see Deploying IBM Storage Fusion in the IBM Storage Fusion documentation.
  2. To install the Data Foundation service, see IBM Storage Fusion Data Foundation in the IBM Storage Fusion documentation.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
See Troubleshooting IBM Storage Fusion in the IBM Storage Fusion documentation.
IBM Storage Fusion Global Data Platform
Installation
  1. To deploy IBM Storage Fusion, see Deploying IBM Storage Fusion in the IBM Storage Fusion documentation.
  2. To install the Global Data Platform service, see IBM Storage Fusion Global Data Platform in the IBM Storage Fusion documentation.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
See Troubleshooting IBM Storage Fusion in the IBM Storage Fusion documentation.
IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface)
Installation
See Installing the IBM Storage Scale container native operator and cluster in the IBM Storage Scale Container Native documentation.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
Portworx
Installation
See Install Portworx on OpenShift in the Portworx documentation.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
See the product documentation for Troubleshoot Portworx on Kubernetes.
NFS
Installation
Refer to the installation documentation for your NFS storage provider.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
Refer to the documentation from your NFS provider.
Amazon Elastic Block Store (EBS)
Installation
Managed OpenShift
EBS is provisioned by default when you install Red Hat OpenShift Service on AWS (ROSA).
Self-managed OpenShift
EBS is provisioned by default when you install Red Hat OpenShift Container Platform on AWS infrastructure.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
See the AWS documentation.
Amazon Elastic File System (EFS)
Installation
Install EFS from the AWS Console. It is recommended that you create a regional file system. For details, see Getting started in the Amazon Elastic File System documentation.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
See Troubleshooting Amazon EFS in the AWS documentation.
NetApp Trident
Installation
Self-managed NetApp Trident
See Learn about Astra Trident installation in the NetApp Astra Trident documentation.
Managed NetApp Trident (Amazon FSx for NetApp ONTAP)

The provisioned throughput should be 128 MB per second or higher.

See Learn about Astra Trident installation in the NetApp Astra Trident documentation.

Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
See Troubleshooting in the product documentation.