Platform storage

Learn about the storage options for Cloud Pak for Data System 2.0.x.

Cloud Pak for Data System 2.0 uses Red Hat OpenShift Data Foundation (formerly known as OpenShift Container Storage) and OpenShift Container Platform to supply storage to Cloud Pak for Data services. Portworx is no longer used in versions 2.0.x.

Red Hat OpenShift Data Foundation

Red Hat OpenShift Data Foundation (ODF) is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform.

  • Installs and manages a Rook-Ceph cluster as the data plane
  • ODF supports four storage class families (a sample volume claim is sketched after this list):
    • RWO block and file (RBD)
    • RWX filesystem (CephFS)
    • Local S3-compatible object store (RGW)
    • Multi-cloud S3 gateway (MCG)
    The last two are disabled by default to save 6.5 vCPUs. They can be enabled if they are needed.
  • Cloud Pak for Data System uses the internal mode of deployment: the Ceph cluster is deployed automatically on OpenShift nodes, and the OpenShift Data Foundation operator pods co-reside with the raw storage.
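
For reference, the following is a minimal sketch of requesting volumes from these classes with the Kubernetes Python client. The storage class names (ocs-storagecluster-ceph-rbd for RWO block and ocs-storagecluster-cephfs for RWX file) are the typical ODF defaults and the namespace is an assumption; verify the names on your cluster with oc get storageclass.

  # Minimal sketch: create PVCs against the assumed default ODF storage classes.
  from kubernetes import client, config

  config.load_kube_config()  # or config.load_incluster_config() inside a pod
  core = client.CoreV1Api()

  def pvc_body(name, storage_class, access_mode, size):
      # Build a PersistentVolumeClaim manifest for the given class and access mode.
      return {
          "apiVersion": "v1",
          "kind": "PersistentVolumeClaim",
          "metadata": {"name": name},
          "spec": {
              "accessModes": [access_mode],
              "storageClassName": storage_class,
              "resources": {"requests": {"storage": size}},
          },
      }

  # RWO block/file volume backed by Ceph RBD (assumed default class name).
  rbd_pvc = pvc_body("demo-rbd", "ocs-storagecluster-ceph-rbd", "ReadWriteOnce", "10Gi")
  # RWX shared filesystem volume backed by CephFS (assumed default class name).
  cephfs_pvc = pvc_body("demo-cephfs", "ocs-storagecluster-cephfs", "ReadWriteMany", "10Gi")

  for pvc in (rbd_pvc, cephfs_pvc):
      core.create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)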

Storage options

On Cloud Pak for Data System hardware, Cloud Pak for Data System supplies storage through the storage class types provided by OpenShift Data Foundation: block and file.

Connectivity to an external NFS server, which applications connect to through the Cloud Pak for Data user interface, is still supported.

Netezza® Performance Server, if installed on the system, consists of:
  • NPS host
  • NPS SPUs
The SPUs use legacy storage and remain outside Red Hat OpenShift, while the NPS host runs inside OpenShift. The NPS host uses ODF for /nz, /nzscratch, and /nz/export. It runs on a worker node (typically the fourth) and can fail over between nodes. In a large deployment, it requests 20 vCPUs.

Deployment options on Cloud Pak for Data System

Figure: deployment with 8 nodes.

ODF vCPU usage

Deployment with 8 nodes:

  • 10 vCPUs that float across the worker nodes in chunks of 1-3 vCPUs per pod, scheduled wherever they fit. This is the general ODF overhead.
  • 8 vCPUs with hard affinity to the first three worker nodes of the cluster. These are for the ODF OSD pods, which are essentially daemons, one per drive.

ODF storage capacity

Given for all configurations (a calculation sketch follows the list):
  • 3 replicas for all data
  • 4 NVMe drives per node, each 3.84 TB
Lenovo base cluster:
  • 5 worker nodes
  • 4 drives × 3.84 TB × 5 nodes ÷ 3 replicas = 25.60 TB usable
Expansion Lenovo enclosure:
  • 4 nodes × 4 drives × 3.84 TB ÷ 3 replicas = 20.48 TB usable each
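
As a quick check, the usable capacities above follow from drives per node × drive size × node count ÷ replica count. A minimal sketch of that arithmetic, using only the values given in this section:

  # Usable ODF capacity = drives per node x drive size x node count / replica count.
  REPLICAS = 3          # 3 replicas for all data
  DRIVES_PER_NODE = 4   # 4 NVMe drives per node
  DRIVE_TB = 3.84       # 3.84 TB per drive

  def usable_tb(nodes, drives_per_node=DRIVES_PER_NODE, drive_tb=DRIVE_TB, replicas=REPLICAS):
      """Raw capacity divided by the replica count gives usable capacity in TB."""
      return drives_per_node * drive_tb * nodes / replicas

  print(usable_tb(5))  # Lenovo base cluster, 5 worker nodes -> 25.6 TB
  print(usable_tb(4))  # Expansion Lenovo enclosure, 4 nodes -> 20.48 TB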