Integration with Red Hat OpenShift
Containerized workloads on IBM Z and IBM LinuxONE
Red Hat OpenShift Container Platform (RHOCP) is Red Hat's enterprise Kubernetes distribution. It provides developers, IT organizations, and business leaders with a hybrid cloud application platform for developing and deploying new containerized applications. Developers can create and deploy applications in a consistent environment throughout the application lifecycle, including development, deployment, and maintenance. The self-service capabilities and minimal maintenance requirements reduce the burden on IT organizations that is associated with deploying and maintaining application platforms.
An RHOCP cluster can reside entirely on an IBM Z or IBM LinuxONE platform, virtualized with either the IBM z/VM or RHEL KVM hypervisor. It consists of three control plane nodes and multiple compute nodes, with a minimum of two compute nodes. For performance reasons, infrastructure nodes are commonly used to isolate non-production pods.

Applications that run in containers require persistent storage because a container is stateless: if a container ends its activity, all data within that container is lost unless it was written to a persistent volume. An application might be spread across multiple nodes in an RHOCP cluster for load balancing and high availability reasons. Therefore, container workloads require shared persistent storage so that the containers can work with the same persistent data. For high availability reasons, shared persistent storage solutions are also implemented as cluster storage solutions, such as IBM Storage Scale.

The implementation mode for RHOCP can be:
- User-Provisioned Infrastructure (UPI), or
- Installer-Provisioned Infrastructure (IPI), which is not available for IBM Z and IBM LinuxONE
In a UPI environment, the user is responsible for establishing the infrastructure before the automated installation of RHOCP.
Adding persistent storage to a Red Hat OpenShift cluster
You need to decide which storage types to use:
- The hypervisor storage: The installation of the hypervisor and its guests, which represent RHOCP nodes, can use various disks, typically disks that are locally attached to the IBM Z or IBM LinuxONE system.
- The container storage: The data disks, which are used by RHOCP container applications for their persistent data, can differ from the hypervisor storage disks. They are typically on shared storage or Network Attached Storage (NAS).

The hypervisor storage can be FICON-attached DASD storage that uses the Extended Count Key Data (ECKD) format, or it can be Fibre Channel Protocol (FCP) attached storage that uses the SCSI block device format. Applications can use only hypervisor (non-shared) storage, but most containerized applications require shared, dynamically provisioned storage. For RHOCP, it is mandatory that at least the container image registry resides on shared storage.
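As an illustration of this requirement, the internal image registry can be backed by a persistent volume claim (described in the next paragraphs) that resides on shared storage. The following sketch uses the image registry operator configuration; the claim name image-registry-storage and the replica count are assumptions for illustration and must be adapted to your environment.

# Sketch: back the internal image registry with a claim on shared storage.
# The claim name "image-registry-storage" is an assumption; use a claim that
# is bound to a volume on your shared storage.
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed
  replicas: 2                      # multiple replicas require shared (ReadWriteMany) storage
  storage:
    pvc:
      claim: image-registry-storage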
In RHOCP, container applications request storage through Persistent Volume Claims (PVCs), which are fulfilled by the Persistent Volumes (PVs) that are made available in the cluster. For shared storage, a typical interface for making PVs available is the Container Storage Interface (CSI). Therefore, software-defined storage such as IBM Storage Scale uses the CSI API with dynamic provisioning.
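As a minimal sketch of this flow, the following persistent volume claim requests shared, dynamically provisioned storage. The StorageClass name ibm-storage-scale-sc, the namespace, and the capacity are assumptions for illustration; use the StorageClass that your CSI driver provides.

# Sketch: a PVC that requests shared, dynamically provisioned storage.
# The StorageClass name "ibm-storage-scale-sc" is an assumption.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  namespace: my-app
spec:
  accessModes:
    - ReadWriteMany              # shared read/write access from pods on multiple nodes
  resources:
    requests:
      storage: 10Gi
  storageClassName: ibm-storage-scale-sc

A pod references the claim by name in its volumes section; the CSI driver then provisions a matching PV and binds it to the claim.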

For the container workload within RHOCP, you can use the following shared file systems:
- NFS-attached storage (recommended for testing and development only)
- IBM Fusion Data Foundation, which is part of the IBM Fusion offering (previously known as Red Hat OpenShift Data Foundation) and is based on the open source component Ceph. Fusion Data Foundation is deployed as an operator and runs on storage nodes inside the Red Hat OpenShift cluster.
- IBM Storage Scale, which is deployed outside of the Red Hat OpenShift cluster directly on RHEL, but offers container-native storage to Red Hat OpenShift workloads through IBM Storage Scale Container Native Storage Access (CNSA)
IBM Storage Scale Container Native Storage Access
IBM Storage Scale Container Native Storage Access (CNSA) implements the integration between the Red Hat OpenShift cluster and the IBM Storage Scale storage cluster. CNSA is a Storage Scale client cluster that remotely mounts storage from an external, non-containerized IBM Storage Scale deployment.
CNSA comprises a lightweight containerized stack of GPFS that runs inside Red Hat OpenShift. CNSA leverages the Kubernetes Container Storage Interface (CSI) to mount the remote Storage Scale cluster (a cross-cluster mount as described in the previous chapter). The CSI driver also manages the lifecycle of the persistent storage that is used by container workloads and can dynamically provision the required storage capacity. Both CNSA and the CSI driver are installed as Red Hat OpenShift operators.
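The following StorageClass sketch shows how dynamic provisioning through the Storage Scale CSI driver is typically wired up. The provisioner name and the parameter keys (volBackendFs, clusterId) follow the IBM Storage Scale CSI documentation but are assumptions here; verify them against the driver release that you deploy.

# Sketch: a StorageClass for dynamic provisioning through the Storage Scale CSI driver.
# Provisioner name and parameter keys are assumptions; verify against your driver version.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibm-storage-scale-sc
provisioner: spectrumscale.csi.ibm.com
parameters:
  volBackendFs: "fs1"                # file system on the remote storage cluster
  clusterId: "215057217487177715"    # ID of the storage cluster that owns the file system
reclaimPolicy: Delete

Persistent volume claims that reference this StorageClass, such as the claim shown earlier, are then provisioned dynamically in the remote file system.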
CNSA fits well into IBM's broader strategy for hybrid multi-cloud environments: while containerized workloads are orchestrated across different cloud environments and hardware platforms (even across architectures) by Red Hat OpenShift, CNSA ensures that the stored data is shared and accessible to the applications.
Deploying CNSA and IBM Storage Scale also gives flexibility in platform and hardware architecture choices. You can deploy IBM Storage Scale, CNSA, and the Red Hat OpenShift cluster entirely on IBM Z and IBM LinuxONE, or mix different hardware in hybrid deployment topologies.

It is also possible to directly attach remote disks to the CNSA/OpenShift cluster to optimize performance.

Running IBM Storage Scale CNSA on IBM Z and IBM LinuxONE
If the IBM Storage Scale storage cluster is implemented on IBM Z or IBM LinuxONE, you can take advantage of the following:
- Scalability of disk attachments
- Security hardware in IBM Z
- HiperSockets, the internal network communication within the hardware, which provides a short network path from RHOCP to the storage cluster