Storage requirements
Learn about the storage requirements for Infrastructure Automation.
Persistent storage requirements
Infrastructure Automation requires persistent storage that supports the RWO (ReadWriteOnce) access mode.
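As a minimal sketch, a PersistentVolumeClaim that requests RWO storage looks like the following; the claim name, storage class name, and size are placeholders that you replace with values for your environment.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-pvc                        # placeholder name
spec:
  accessModes:
    - ReadWriteOnce                        # the RWO access mode required here
  storageClassName: <your-storage-class>   # placeholder storage class
  resources:
    requests:
      storage: 10Gi                        # placeholder size
```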
Storage class requirements
For production environments, storage classes must have allowVolumeExpansion enabled. This setting allows persistent volumes to be expanded when necessary, preventing storage from filling up and causing unrecoverable failures. It is also highly recommended for starter deployments; without it, you are limited to the default capacity, which might not be sufficient for your specific needs. To enable allowVolumeExpansion, edit the storage class to enable expansion. Follow the instructions in the Red Hat documentation Enabling volume expansion support.
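As a sketch of the approach that the Red Hat documentation describes, expansion can be enabled on an existing storage class with a single patch; the storage class name below is a placeholder, and the commands assume you are logged in to the cluster with `oc` as a cluster administrator.

```shell
# Enable volume expansion on an existing storage class (name is a placeholder).
oc patch storageclass <storage-class-name> \
  -p '{"allowVolumeExpansion": true}'

# Verify the change; this prints "true" when expansion is enabled.
oc get storageclass <storage-class-name> \
  -o jsonpath='{.allowVolumeExpansion}'
```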
Recommended storage providers
The recommended storage providers are:
- IBM Cloud
- Red Hat OpenShift Data Foundation (ODF)
- IBM Fusion Data Foundation
- IBM Fusion Global Data Platform
- IBM Storage Scale Container Native
- Portworx
- AWS native storage
The following table shows the tested and supported storage providers for the platforms where Infrastructure Automation can be deployed.
Platform | IBM Cloud Storage (Block and File) | Red Hat® OpenShift® Data Foundation | IBM Fusion Data Foundation | IBM Fusion Global Data Platform | IBM Storage Scale Container Native | Portworx | AWS native storage |
---|---|---|---|---|---|---|---|
Azure Red Hat OpenShift (ARO) | Yes | Yes | | | | | |
Red Hat OpenShift Container Platform | Yes | Yes | Yes | Yes | Yes | Yes | |
Red Hat OpenShift on IBM Cloud (ROKS) | Yes | Yes | Yes | Yes | Yes | | |
Red Hat OpenShift Service on AWS (ROSA) | | | | | | Yes | Yes |
Note:
- IBM Storage Scale Container Native and Red Hat® OpenShift® Data Foundation are part of IBM Storage Suite for IBM Cloud Paks.
The preceding storage providers are the only providers that are tested and validated for a deployment of Infrastructure Automation. You can choose an alternate storage provider if it meets the requirements for deploying Infrastructure Automation, including the same storage and hardware requirements as the recommended storage providers. If you use an alternate provider, your overall performance can differ from the sizings, throughput rates, and other performance metrics that are listed in the Infrastructure Automation documentation. Work with your IBM Sales representative (or Business Partner) to ensure that your chosen storage provider is sufficient for your deployment plan.
If you are installing Infrastructure Automation with Cloud Pak for AIOps, see Storage requirements for installation on OpenShift.
IBM Cloud
The following storage classes are required by Infrastructure Automation, and are created when Red Hat OpenShift on IBM Cloud (ROKS) is installed:
Storage provider | Storage class name | Large block storage class name |
---|---|---|
IBM Cloud® | ibmc-file-gold-gid | ibmc-block-gold |
Installing IBM Cloud storage
For more information about IBM Cloud storage, see Storing data on classic IBM Cloud File Storage and Storing data on classic IBM Cloud Block Storage in the IBM Cloud Docs.
Red Hat OpenShift Data Foundation (ODF)
The following storage classes are required by Infrastructure Automation, and are created when Red Hat OpenShift Data Foundation is installed:
Storage provider | Storage class name | Large block storage class name |
---|---|---|
Red Hat® OpenShift® Data Foundation | ocs-storagecluster-cephfs | ocs-storagecluster-ceph-rbd |
Installing Red Hat OpenShift Data Foundation
Red Hat OpenShift Data Foundation is available for purchase through the IBM Storage Suite for IBM Cloud Paks. Red Hat OpenShift Data Foundation is an implementation of the open source Ceph storage software, which is engineered to provide data and storage services on Red Hat OpenShift. Use version 4.12 or higher.
For more information about deploying Red Hat OpenShift Data Foundation, see Deploying OpenShift Data Foundation in the Red Hat documentation. Choose the appropriate deployment instructions for your deployment platform.
IBM Fusion Data Foundation
The following storage classes are required by Infrastructure Automation:
Storage provider | Storage class name | Large block storage class name |
---|---|---|
IBM Fusion Data Foundation | ocs-storagecluster-cephfs | ocs-storagecluster-ceph-rbd |
Installing IBM Fusion Data Foundation
IBM Fusion Software version 2.9.0 and IBM Fusion HCI Systems version 2.9.x are compatible with Infrastructure Automation.
To learn more about IBM Fusion and how to install the IBM Fusion Data Foundation service, use one of the following two methods:
IBM Fusion Software
IBM Fusion Data Foundation is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform. It is available as part of the Red Hat OpenShift Container Platform service catalog, packaged as an operator to facilitate simple deployment and management.
The IBM Fusion Software provides the following features:
- IBM Fusion software is a software-only solution that can be deployed on a variety of hardware platforms.
- IBM Fusion software is designed to scale up to meet the needs of larger environments.
- IBM Fusion software can be integrated with a variety of third-party solutions.
Before you install IBM Fusion, ensure that you meet all of the prerequisites. For more information, see Prerequisites.
For more information about deploying IBM Fusion, see Deploying IBM Fusion.
For more information about how to install the Data Foundation service by using IBM Fusion, see Data Foundation.
For more information about the IBM Fusion Software storage class, see Data Foundation.
IBM Fusion HCI Systems
The IBM Fusion Data Foundation service provides a foundational data layer for applications to function and interact with data in a simplified, consistent, and scalable manner.
The IBM Fusion HCI Systems provides the following features:
- IBM Fusion HCI system is a pre-integrated, pre-tested, and pre-configured appliance that combines hardware and software.
- IBM Fusion HCI system is designed to scale out to meet growing storage needs.
- IBM Fusion HCI system is tightly integrated with IBM's other hybrid cloud solutions, such as IBM Cloud and IBM Power Systems.
For more information about the prerequisites of the IBM Fusion HCI System, see Planning and prerequisites.
For more information about deploying the IBM Fusion HCI System, see Deploying IBM Fusion HCI System.
For more information about how to install the Data Foundation service by using IBM Fusion HCI Systems, see Data Foundation.
For more information about the IBM Fusion HCI Systems storage class, see Data Foundation.
IBM Fusion Global Data Platform
The following storage classes are required by Infrastructure Automation:
Storage provider | Storage class name | Large block storage class name |
---|---|---|
IBM Fusion Global Data Platform | If you are using IBM Fusion, use ibm-spectrum-scale-sc. If you are using IBM Fusion HCI System, use ibm-storage-fusion-cp-sc | If you are using IBM Fusion, use ibm-spectrum-scale-sc. If you are using IBM Fusion HCI System, use ibm-storage-fusion-cp-sc |
Installing IBM Fusion Global Data Platform
The Global Data Platform storage type provides the following features:
- File storage
- High availability via capacity-efficient erasure coding
- Metro and regional disaster recovery
- CSI snapshot support with built-in application consistency
- Encryption at rest
- Ability to mount file systems hosted by remote IBM Storage Scale clusters.
You can deploy Global Data Platform service by using the following two methods:
IBM Fusion Software
Before you install IBM Fusion, ensure that you meet all of the prerequisites. For more information, see Prerequisites.
For more information about deploying IBM Fusion, see Deploying IBM Fusion.
For more information about how to install Global Data Platform by using IBM Fusion Software, see Global Data Platform.
For more information about the IBM Fusion Software storage class, see IBM Storage Scale.
IBM Fusion HCI Systems
For more information about the prerequisites of the IBM Fusion HCI System, see Planning and prerequisites.
For more information about deploying the IBM Fusion HCI System, see Deploying IBM Fusion HCI System.
For more information about how to deploy the Global Data Platform service by using IBM Fusion HCI Systems, see Global Data Platform.
For more information about the IBM Fusion HCI Systems storage class, see IBM Storage Scale.
IBM Storage Scale Container Native
The following storage class is required by Infrastructure Automation, and is created when IBM Storage Scale Container Native is installed.
Storage provider | Storage class name | Large block storage class name |
---|---|---|
IBM Storage Scale Container Native | ibm-spectrum-scale-sc | ibm-spectrum-scale-sc |
This class is used as both the ReadWriteMany storage class and the large block storage class when you are installing Infrastructure Automation. The storage class includes a parameter that sets the permissions of data within the storage class to shared: true, which is required to support the Kubernetes subPath feature. For more information about the permissions field for the class, see the IBM Spectrum Scale CSI Driver documentation.
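As an illustrative sketch only: the provisioner name below is the IBM Spectrum Scale CSI provisioner, and volBackendFs is an assumed, cluster-specific parameter that you set to your own file system; consult the IBM Spectrum Scale CSI Driver documentation for the authoritative parameter list. A storage class carrying the shared parameter might look like:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ibm-spectrum-scale-sc
provisioner: spectrumscale.csi.ibm.com
parameters:
  volBackendFs: <filesystem-name>   # assumed field; set to your Storage Scale file system
  shared: "true"                    # enables support for the Kubernetes subPath feature
reclaimPolicy: Delete
```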
Installing IBM Storage Scale Container Native
IBM Storage Scale Container Native is available for purchase through the IBM Storage Suite for IBM Cloud Paks. IBM Spectrum Scale is a cluster file system that provides concurrent access to a single file system or set of file systems from multiple nodes. The nodes can be SAN-attached, network attached, a mixture of SAN-attached and network attached, or in a shared nothing cluster configuration. This enables high-performance access to this common set of data to support a scale-out solution or to provide a high availability platform. IBM Storage Scale Container Native Storage Access must be at version 5.1.1.3 or higher, with IBM Spectrum Scale Container Storage Interface version 2.3.0 or higher.
For more information, see the IBM Storage Scale Container Native documentation.
To use IBM Storage Scale Container Native with Infrastructure Automation, you do not require a separate license. You can use up to 12 TB of IBM Storage Scale Container Native storage for up to 36 months, fully supported by IBM, within your production environments (Level 1 and Level 2). If you exceed these terms, a separate license is required.
To install and use IBM Storage Scale Container Native, your cluster must meet the following requirements:
- Minimum amount of storage: 1 TB or more of available space.
- Minimum amount of vCPU: 8 vCPU on each worker node. This minimum is a general requirement for the node, not a dedicated resource requirement for IBM Storage Scale Container Native. For more information, see the IBM Spectrum Scale requirements documentation.
- Minimum amount of memory: 16 GB of RAM on each worker node. This minimum is a general requirement for the node, not a dedicated resource requirement for IBM Storage Scale Container Native. For more information, see the IBM Spectrum Scale requirements documentation.
- In addition, your network must have sufficient performance to meet the storage I/O requirements. When I/O performance is not sufficient, services can experience poor performance or cluster instability while handling a heavy load, such as functional failures with timeouts. Run a disk latency test and a disk throughput test to verify whether your network meets the I/O requirements. You can use the dd command-line utility to run the following tests, which measure the performance of writing data to representative storage locations by using two chosen block sizes (4 KB and 1 GB). Use the MB/s metric from the tests and ensure that your test result is comparable to, or better than, the expected targets:
- Disk latency test:
  dd if=/dev/input.file of=/directory_path/output.file bs=block-size count=number-of-blocks oflag=dsync
  The value must be comparable to, or better than, 2.5 MB/s.
- Disk throughput test:
  dd if=/dev/input.file of=/directory_path/output.file bs=block-size count=number-of-blocks oflag=dsync
  The value must be comparable to, or better than, 209 MB/s.

Where:
- if=/dev/input.file identifies the input file that the dd command reads.
- of=/directory_path/output.file identifies the output file that the dd command writes.
- bs=block-size indicates the block size to use. For example, use 4096 (4 KB) for the latency test and 1G for the throughput test.
- count=number-of-blocks specifies the number of blocks to use in the test. For example, use 1000 for the latency test and 1 for the throughput test.
- oflag=dsync indicates synchronous I/O for data.
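As a concrete sketch of the two tests, assuming /dev/zero as the input source and the current directory as the representative storage location (point the output paths at the storage you actually want to test):

```shell
# Disk latency test: 1000 synchronous 4 KB writes.
dd if=/dev/zero of=./dd-latency.out bs=4096 count=1000 oflag=dsync

# Disk throughput test: a single synchronous 1 GB write.
dd if=/dev/zero of=./dd-throughput.out bs=1G count=1 oflag=dsync
```

dd reports the MB/s figure on its final status line; compare it against the 2.5 MB/s and 209 MB/s targets, and delete the output files when you are done.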
- To install IBM Storage Scale Container Native and the IBM Spectrum Scale Container Storage Interface, follow the IBM Storage Scale Container Native installation documentation.

If you encounter any errors or issues with installing or using IBM Storage Scale Container Native or the IBM Spectrum Scale Container Storage Interface, refer to the IBM Storage Scale Container Native documentation.
Portworx
The following storage classes are required by Infrastructure Automation, and must be created using the instructions in the following section:
Note: The px-csi-aiops-mz storage class is only applicable if you are installing Infrastructure Automation with Cloud Pak for AIOps. To use this storage class, you must use multizone HADR for Cloud Pak for AIOps.
Storage provider | Storage class name | Large block storage class name |
---|---|---|
Portworx | px-csi-aiops | px-csi-aiops |
Portworx (multi-zone HA) | px-csi-aiops-mz | px-csi-aiops-mz |
Installing Portworx
Important: sharedv4 volumes require NFS ports to be open. For more information, see the Portworx documentation Open NFS Ports.
- Install the Portworx operator and configure a Portworx StorageCluster.
  Two editions of Portworx are available: Portworx Enterprise and Portworx Essentials. Portworx Enterprise is suitable for production deployments. Portworx Essentials is suitable only for demonstration deployments, as it can be used only on clusters of five nodes or fewer, and includes storage size limits. For more information, see the Portworx documentation Installing Portworx on OpenShift.
  Note: You must be a cluster administrator to install Portworx on the cluster.
- Define a custom Portworx storage class.
  The custom Portworx storage class is used for file and block storage. The storage class is scoped to the cluster, so setting the project (namespace) is not required.
  If you are not using the multi-zone HA technical preview, use Option 1 to define px-csi-aiops. If you are using the multi-zone HA technical preview, use Option 2 to define px-csi-aiops-mz.

Option 1: Define px-csi-aiops
Log in to your OpenShift cluster's console. Click the plus icon on the upper right to open the Import YAML dialog box, paste in the following content, and then click Create.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-csi-aiops
provisioner: pxd.portworx.com
parameters:
  fs: xfs
  io_profile: db_remote
  repl: '2'
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
```
Option 2: Define px-csi-aiops-mz
Log in to your OpenShift cluster's console. Click the plus icon on the upper right to open the Import YAML dialog box, paste in the following content, and then click Create.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-csi-aiops-mz
provisioner: pxd.portworx.com
parameters:
  fs: xfs
  io_profile: db_remote
  repl: '3'
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
```
AWS native storage
The following storage classes are required by Infrastructure Automation. The gp3-csi storage class is created for you, but you must create efs-sc by using the instructions in the following section.
Storage provider | Storage class name | Large block storage class name |
---|---|---|
AWS native storage | efs-sc | gp3-csi |
Amazon Elastic Block Store (EBS) provides block storage; its storage class, gp3-csi, is created when ROSA is installed. Amazon Elastic File System (EFS) provides file storage; its storage class is efs-sc. You must create the efs-sc storage class by using the instructions in the following section.
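As a hedged sketch of what the efs-sc storage class can look like when you use the AWS EFS CSI Driver Operator with dynamic provisioning through EFS access points: the fileSystemId below is a placeholder that you replace with the ID of your own EFS file system, and directoryPerms is one common choice for the access-point directory permissions.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap              # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0    # placeholder; use your EFS file system ID
  directoryPerms: "700"                 # permissions for provisioned directories
```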
Installing AWS native storage
For more information about Amazon Web Services (AWS) native storage, see Cloud Storage on AWS in the AWS documentation.
For more information, see Setting up the AWS EFS CSI Driver Operator and Creating the AWS EFS storage class in the Red Hat OpenShift documentation.
You can also review the information in the Red Hat article Enabling the AWS EFS CSI Driver Operator on ROSA.