Storage providers and requirements

You need persistent storage configured for your cluster to install and use IBM Cloud Pak® for Watson AIOps and Infrastructure Automation.

Storage requirements

IBM Cloud Pak for Watson AIOps and Infrastructure Automation require persistent storage that supports the ReadWriteOnce (RWO) and ReadWriteMany (RWX) access modes.
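As a quick smoke test that a storage class supports ReadWriteMany, you can create a small test claim and check that it binds. The following sketch uses a hypothetical storage class name, `rwx-storage-class`; substitute the class name for your provider.

```shell
# Hypothetical smoke test: request a ReadWriteMany (RWX) volume and verify
# that it binds. "rwx-storage-class" is a placeholder storage class name.
cat << EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-smoke-test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rwx-storage-class
EOF

# The STATUS column should reach Bound if the provider supports RWX.
oc get pvc rwx-smoke-test

# Clean up the test claim afterward.
oc delete pvc rwx-smoke-test
```

Note that with provisioners that use the WaitForFirstConsumer binding mode, the claim binds only when a pod mounts it, so a Pending status alone does not prove that RWX is unsupported.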

For information about the persistent volume and sizing requirements, see Hardware requirements.

Federal Information Processing Standards (FIPS) storage requirements: If your environment must support FIPS, then you must enable FIPS support when you install IBM Cloud Pak for Watson AIOps or Infrastructure Automation. For more information, see Federal Information Processing Standards (FIPS).

Recommended storage providers

The following table shows the tested and supported storage providers that are recommended for a deployment of IBM Cloud Pak for Watson AIOps or Infrastructure Automation.

Table 1. Supported storage providers
| Platform | IBM Cloud Storage (Block and File) | Red Hat® OpenShift® Data Foundation | IBM Storage Fusion | IBM Spectrum Scale Container Native | Portworx |
| --- | --- | --- | --- | --- | --- |
| Azure Red Hat OpenShift (ARO) | | Yes | | | Yes |
| Google Cloud Platform (GCP) | | Yes | | | Yes |
| Red Hat OpenShift Container Platform | | Yes | Yes | Yes | Yes |
| Red Hat OpenShift on IBM Cloud (ROKS) | Yes | | | | Yes |
| Red Hat OpenShift Service on AWS (ROSA) | | | | | Yes |

Notes:

  • IBM Spectrum Scale Container Native and Red Hat® OpenShift® Data Foundation are part of IBM Storage Suite for IBM Cloud Paks.
  • IBM Cloud Pak for Watson AIOps requires persistent ReadWriteMany (RWX) storage. Red Hat® does not currently support Red Hat® OpenShift® Data Foundation (ODF) on ROSA, so Portworx is the only recommended storage provider that offers an RWX storage solution for a deployment of IBM Cloud Pak for Watson AIOps on ROSA. Portworx is available for a free 30-day trial but requires a license for longer use and for production deployments.

Other storage providers

The preceding storage providers are the only providers that are tested and validated for a deployment of IBM Cloud Pak for Watson AIOps or Infrastructure Automation. You can choose an alternative storage provider if it meets the requirements for deploying IBM Cloud Pak for Watson AIOps or Infrastructure Automation.

Your chosen storage provider must meet the same storage and hardware requirements as the recommended storage providers. For instance, to deploy IBM Cloud Pak for Watson AIOps, your chosen provider must support the required access modes.

Note: If you choose to use an alternate storage provider, your overall performance can differ from any listed sizings, throughput rates or other performance metrics that are listed in the IBM Cloud Pak for Watson AIOps documentation. Work with your IBM Sales representative to ensure that your chosen storage provider is sufficient for your deployment plan.

Storage classes for recommended storage providers

If you choose to use one of the recommended storage providers, you need to configure specific storage classes, which are identified in the following table. Then, when you are installing IBM Cloud Pak for Watson AIOps, you need to specify the appropriate storage class for your chosen storage provider.

Table 2. Storage classes for the recommended storage providers

| Storage provider | storage_class_name | large_block_storage_class_name |
| --- | --- | --- |
| IBM Cloud® | ibmc-file-gold-gid | ibmc-block-gold |
| Red Hat® OpenShift® Data Foundation | ocs-storagecluster-cephfs | ocs-storagecluster-ceph-rbd |
| IBM Storage Fusion | ibm-spectrum-scale-sc | ibm-spectrum-scale-sc |
| IBM Spectrum Scale Container Native | ibm-spectrum-scale-sc | ibm-spectrum-scale-sc |
| Portworx | portworx-fs | portworx-aiops |

Storage class requirements

For production environments, storage classes must have allowVolumeExpansion enabled. This setting allows persistent volumes to be expanded when necessary, which prevents storage from filling up and causing unrecoverable failures. Enabling it is also highly recommended for starter deployments; without it, you are limited to the default capacity, which might not be sufficient for your specific needs.

To enable allowVolumeExpansion, edit the storage class definition. Follow the instructions in the Red Hat documentation Enabling volume expansion support.
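For example, assuming an existing storage class named `portworx-aiops` (any class name from Table 2 works the same way), the setting can be enabled and verified from the command line:

```shell
# Enable volume expansion on an existing storage class;
# "portworx-aiops" is used here only as an example name.
oc patch storageclass portworx-aiops -p '{"allowVolumeExpansion": true}'

# Confirm the setting; this prints "true" when expansion is enabled.
oc get storageclass portworx-aiops -o jsonpath='{.allowVolumeExpansion}{"\n"}'
```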

Installing recommended storage providers and configuring storage classes

Expand the following collapsed sections to learn more about installing a recommended storage provider and configuring storage classes.

  • For more information about IBM Cloud storage, see Storing data on classic IBM Cloud File Storage and Storing data on classic IBM Cloud Block Storage in the IBM Cloud Docs.


    Installing IBM Cloud File Storage and IBM Cloud Block Storage

    The storage classes that are required by IBM Cloud Pak for Watson AIOps are created when Red Hat OpenShift on IBM Cloud (ROKS) is installed. For more information, see Storage Classes in the Red Hat OpenShift on IBM Cloud documentation.
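If you want to confirm that the classes are present on your ROKS cluster, a quick check (a sketch, assuming the default class names from Table 2) is:

```shell
# List the IBM Cloud file and block storage classes created with the cluster.
oc get storageclass | grep -E 'ibmc-(file|block)-gold'
```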

  • Red Hat OpenShift Data Foundation version 4.10 or higher is required.

    Red Hat OpenShift Data Foundation is available for purchase through the IBM Storage Suite for IBM Cloud Paks. Red Hat OpenShift Data Foundation is an implementation of the open source Ceph storage software, which is engineered to provide data and storage services on OpenShift. Prior to version 4.9, Red Hat OpenShift Data Foundation was called Red Hat OpenShift Container Storage.

    Installing Red Hat OpenShift Data Foundation

    For more information about installing and using Red Hat OpenShift Data Foundation, see Deploying OpenShift Data Foundation in the Red Hat documentation. Choose the appropriate deployment instructions for your deployment platform.

    The following storage classes are installed when you deploy Red Hat OpenShift Data Foundation:

    • ocs-storagecluster-ceph-rbd
    • ocs-storagecluster-cephfs
    • openshift-storage.noobaa.io
    • ocs-storagecluster-ceph-rgw
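A quick way to confirm that these classes were created after the deployment completes:

```shell
# The ODF storage classes listed above should appear in the output.
oc get storageclass | grep -E 'ocs-storagecluster|noobaa'
```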

  • IBM Storage Fusion (previously called IBM Spectrum Fusion up to version 2.4) is a supported storage option for IBM Cloud Pak for Watson AIOps. Version 2.2 or higher is supported.

    IBM Storage Fusion is a container-native hybrid cloud data platform that offers simplified deployment and data management for Kubernetes applications on Red Hat OpenShift Container Platform. IBM Storage Fusion is designed to meet the storage requirements of modern, stateful Kubernetes applications and to make it easy to deploy and manage container-native applications and their data on Red Hat OpenShift Container Platform. For more information, see the IBM Storage Fusion documentation.

    Installing IBM Storage Fusion

    Before you install IBM Storage Fusion, ensure that you meet all of the prerequisites.

    Important: When you use IBM Cloud Pak for Watson AIOps with IBM Storage Fusion, ensure that your version of Red Hat OpenShift Container Platform is supported for your version of IBM Storage Fusion. For more information, see IBM Storage Fusion Prerequisites.

    After you meet all of the prerequisites, you can begin the installation. For more information, see Deploying IBM Storage Fusion.

  • IBM Spectrum Scale Container Native Storage Access must be at version 5.1.1.3 or higher, with IBM Spectrum Scale Container Storage Interface version 2.3.0 or higher.

    IBM Spectrum Scale Container Native is available for purchase through the IBM Storage Suite for IBM Cloud Paks. IBM Spectrum Scale is a cluster file system that provides concurrent access to a single file system or set of file systems from multiple nodes. The nodes can be SAN-attached, network attached, a mixture of SAN-attached and network attached, or in a shared nothing cluster configuration. This enables high-performance access to this common set of data to support a scale-out solution or to provide a high availability platform. For more information, see the IBM Spectrum Scale Container Native documentation.

    Requirements

    To use IBM Spectrum Scale Container Native with IBM Cloud Pak for Watson AIOps, you do not require a separate license. You can use up to 12 TB of IBM Spectrum Scale Container Native storage for up to 36 months, fully supported by IBM, within your production environments (Level 1 and Level 2). If you exceed these terms, a separate license is required.

    To install and use IBM Spectrum Scale Container Native, your cluster must meet the following requirements:

    • Minimum amount of storage: 1 TB or more of available space.
    • Minimum amount of vCPU: 8 vCPU on each worker node. This minimum is a general requirement for the node, not a dedicated resource requirement for IBM Spectrum Scale Container Native. For more information, see the IBM Spectrum Scale documentation about IBM Spectrum Scale Container Native requirements.
    • Minimum amount of memory: 16 GB of RAM on each worker node. This minimum is a general requirement for the node, not a dedicated resource requirement for IBM Spectrum Scale Container Native. For more information, see the IBM Spectrum Scale documentation about IBM Spectrum Scale Container Native requirements.
    • In addition, your network must have sufficient performance to meet the storage I/O requirements. When I/O performance is not sufficient, services can experience poor performance or cluster instability under heavy load, including functional failures with timeouts. Run a disk latency test and a disk throughput test to verify that your storage meets the I/O requirements. You can use the dd command-line utility to run the following tests, which measure the performance of writing data to representative storage locations with two block sizes (4 KB and 1 GB). Use the MB/s metric from each test and ensure that your result is comparable to, or better than, the expected targets.
      • Disk latency test
        dd if=/dev/input.file of=/directory_path/output.file bs=block-size count=number-of-blocks oflag=dsync
        The value must be comparable to or better than: 2.5 MB/s.
      • Disk throughput test
        dd if=/dev/input.file of=/directory_path/output.file bs=block-size count=number-of-blocks oflag=dsync
        The value must be comparable to or better than: 209 MB/s.
      where:
      • `if=/dev/input.file` - The input file that the dd command reads.
      • `of=/directory_path/output.file` - The output file that the dd command writes.
      • `bs=block-size` - The block size to use. For example, use `4096` (4 KB) for the latency test and `1G` for the throughput test.
      • `count=number-of-blocks` - The number of blocks to write. For example, use `1000` for the latency test and `1` for the throughput test.
      • `oflag=dsync` - Uses synchronized I/O for data, so that each block is physically written before the command continues.
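For example, substituting the suggested block sizes and counts into the command (the target directory below is a placeholder; point it at a path backed by the storage that you want to test):

```shell
# Placeholder target directory; replace with a path on the storage under test.
TARGET_DIR="${TARGET_DIR:-$(mktemp -d)}"

# Disk latency test: 1000 writes of 4 KB, each synchronized to disk.
dd if=/dev/zero of="$TARGET_DIR/latency.out" bs=4096 count=1000 oflag=dsync

# Disk throughput test: a single 1 GB block, synchronized to disk.
dd if=/dev/zero of="$TARGET_DIR/throughput.out" bs=1G count=1 oflag=dsync
```

dd reports the MB/s figure on its final status line; compare that figure against the 2.5 MB/s and 209 MB/s targets above.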

    Installing IBM Spectrum Scale Container Native

    To install IBM Spectrum Scale Container Native and the IBM Spectrum Scale Container Storage Interface, follow the IBM Spectrum Scale Container Native installation documentation.

    When IBM Spectrum Scale Container Native is installed, the `ibm-spectrum-scale-sc` storage class is created. This class is used as both the ReadWriteMany storage class and the large block storage class when you install IBM Cloud Pak for Watson AIOps. The storage class includes a parameter that sets the permissions of data within the volume to 777, which is required to support the Kubernetes subPath feature. For more information about the permissions field for the class, see the IBM Spectrum Scale CSI Driver documentation.
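As an illustration only, a minimal sketch of such a storage class follows. The provisioner name matches the IBM Spectrum Scale CSI driver; the `volBackendFs` value is a placeholder file system name, and your environment may require additional parameters.

```shell
# Sketch only: a StorageClass with the permissions parameter set to 777.
# "fs1" is a placeholder file system name for your environment.
cat << EOF | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibm-spectrum-scale-sc
provisioner: spectrumscale.csi.ibm.com
parameters:
  volBackendFs: "fs1"
  permissions: "777"
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
```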

    If you encounter any errors or issues when installing or using IBM Spectrum Scale Container Native or the IBM Spectrum Scale Container Storage Interface, see the IBM Spectrum Scale Container Native troubleshooting documentation.

  • Important: sharedv4 volumes require NFS ports to be open. For more information, see Open NFS Ports in the Portworx documentation.

    Installing Portworx

    1. Install the Portworx operator and configure a Portworx StorageCluster.
      Two editions of Portworx are available: Portworx Enterprise and Portworx Essentials. Portworx Enterprise is suitable for production deployments. Portworx Essentials is suitable only for demonstration deployments because it can be used only on clusters of five nodes or fewer and includes storage size limits.
      For more information, see the Portworx documentation Installing the Portworx operator and Deploying Portworx using the Operator.
      Note: You must be a cluster administrator to install Portworx on the cluster.
    2. Define two storage classes: `portworx-fs` and `portworx-aiops`.
      The storage classes are scoped to the cluster, so setting the project (namespace) is not required.
      Portworx recommends a replication factor (`parameters.repl`) of 2 or 3. For more information, see the Portworx documentation Running in Production Opens in a new tab.

      You can define the storage classes with either of the following methods:

      • Option 1: Define the Portworx storage classes with the OpenShift console
      • Option 2: Define the Portworx storage classes with the OpenShift CLI

      Option 1: Define the Portworx storage classes with the OpenShift console

      1. Log in to your OpenShift cluster's console.
      2. Create the `portworx-fs` storage class. This storage class is used for other components, such as Vault and IBM Cloud Pak foundational services.

        Click the plus icon on the upper right to open the **Import YAML** dialog box, paste in the following content, and then click **Create**.

        kind: StorageClass
        apiVersion: storage.k8s.io/v1
        metadata:
            name: portworx-fs
        provisioner: kubernetes.io/portworx-volume
        parameters:
            repl: "3"
            io_profile: "db"
            priority_io: "high"
            sharedv4: "true"
        allowVolumeExpansion: true

      3. Create the `portworx-aiops` storage class.

        Click the plus icon on the upper right to open the **Import YAML** dialog box, paste in the following content, and then click **Create**.

        kind: StorageClass
        apiVersion: storage.k8s.io/v1
        metadata:
            name: portworx-aiops
        provisioner: kubernetes.io/portworx-volume
        parameters:
            repl: "3"
            priority_io: "high"
            snap_interval: "0"
            io_profile: "db"
            block_size: "64k"
        allowVolumeExpansion: true

      Option 2: Define the Portworx storage classes with the OpenShift CLI
      Run the following commands to configure the cluster with the following storage class resources.

      1. Create the `portworx-fs` storage class.
        cat << EOF | oc apply -f -
        kind: StorageClass
        apiVersion: storage.k8s.io/v1
        metadata:
            name: portworx-fs
        provisioner: kubernetes.io/portworx-volume
        parameters:
            repl: "3"
            io_profile: "db"
            priority_io: "high"
            sharedv4: "true"
        allowVolumeExpansion: true
        EOF
      2. Create the `portworx-aiops` storage class.
        cat << EOF | oc apply -f -
        kind: StorageClass
        apiVersion: storage.k8s.io/v1
        metadata:
            name: portworx-aiops
        provisioner: kubernetes.io/portworx-volume
        parameters:
            repl: "3"
            priority_io: "high"
            snap_interval: "0"
            io_profile: "db"
            block_size: "64k"
        allowVolumeExpansion: true
        EOF
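After creating the classes with either method, you can optionally confirm that both exist and have volume expansion enabled:

```shell
# Prints each class name followed by "true" when expansion is enabled.
for sc in portworx-fs portworx-aiops; do
  oc get storageclass "$sc" \
    -o jsonpath='{.metadata.name}{" "}{.allowVolumeExpansion}{"\n"}'
done
```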