Deployment considerations
Before deployment, make sure that you are aware of the Red Hat OpenShift version, cluster network, persistent storage, and the IBM Storage Scale storage cluster considerations.
Red Hat OpenShift cluster considerations
The following list includes the Red Hat OpenShift cluster considerations:
- A minimum configuration of three master nodes and three worker nodes is required, with a maximum of 128 worker nodes. A quick way to verify the node counts is sketched after this list.
- Deploying IBM Storage Scale pods on master nodes is not supported. An exception is in a "compact" cluster configuration. For more information, see Master node configuration (compact cluster).
- Single node Red Hat OpenShift clusters are not supported. Instead, access data on an IBM Storage Scale storage cluster through NFS.
- Red Hat Enterprise Linux™ CoreOS (RHCOS) restricts new file system mounts to the /mnt subtree. IBM Storage Scale mounts any file system under /mnt on the Red Hat OpenShift cluster, regardless of the default mount point that is defined on the storage cluster.
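For example, the node counts and any file systems that are already mounted under the /mnt subtree can be inspected with standard oc commands. The following is a minimal sketch; <worker-node> is a placeholder for a node name returned by oc get nodes.
```
# Count control plane (master) and worker nodes; at least three of each are
# required, and no more than 128 workers are supported.
oc get nodes -l node-role.kubernetes.io/master= --no-headers | wc -l
oc get nodes -l node-role.kubernetes.io/worker= --no-headers | wc -l

# Optionally list any file systems already mounted under the /mnt subtree on
# one worker node.
oc debug node/<worker-node> -- chroot /host sh -c \
  'findmnt --list --output TARGET,SOURCE,FSTYPE | grep "^/mnt" || echo "no mounts under /mnt"'
```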
Red Hat OpenShift cluster network considerations
IBM Storage Scale container native comes with a collection of different pods. A subset of these pods behave like typical application pods: the operator, the GUI pods, and the performance data collector pods. The exception is the set of pods referred to as the "core pods", which provide the actual file system services. The core pods are not deployed by the Kubernetes scheduler through a regular DaemonSet. Instead, the IBM Storage Scale container native operator manages those pods.
- The file system daemon inside the core pods requires a static IP address for communication between daemons on different nodes.
- All core pods must be able to communicate with each other through the chosen network.
There are two network configurations that can be employed: host network or container network interface (CNI) network. Only one network configuration can be chosen.
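After the pods are deployed, one way to confirm which network configuration is in effect is to check whether the core pods run on the host network. The following sketch assumes the commonly used ibm-spectrum-scale namespace; adjust it to match your deployment.
```
# List the IBM Storage Scale pods with the node and pod IP that each one uses.
oc get pods -n ibm-spectrum-scale -o wide

# Show which pods run on the host network (for those pods, the pod IP equals
# the node IP).
oc get pods -n ibm-spectrum-scale \
  -o custom-columns=NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork,IP:.status.podIP
```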
Host network
By default, the IBM Storage Scale pods use the host network. This is the simplest configuration but has some disadvantages:
- Using the host network breaks the network isolation that usually comes with containers. For example, any network port opened by IBM Storage Scale may conflict with a network port opened by another component on the host.
- Security features, like network policies, are not available for the host network.
- If the node has multiple network adapters, there is no way to select a different adapter. The host network always uses the network adapter that the worker node IP is assigned to.
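For example, before choosing the host network you might confirm that TCP port 1191, which is commonly used by the IBM Storage Scale daemon, is not already bound by another process on a worker node. This is an illustrative sketch; <worker-node> is a placeholder.
```
# Check on one worker node whether TCP port 1191 is already in use.
oc debug node/<worker-node> -- chroot /host sh -c \
  'ss -tlnp | grep -w 1191 || echo "port 1191 is free"'
```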
Container Network Interface (CNI) network
As an alternative to host network, IBM Storage Scale can use a CNI network. There is more configuration effort to set up the CNI:
- In this configuration, core pods have an IP address on the usual Red Hat OpenShift SDN and another one on the CNI network.
- Red Hat OpenShift SDN is used for communication with other pods.
- CNI network is used for communication between file system daemons, both inter-cluster and with the storage cluster.
- If the node is equipped with a high-speed network adapter, the CNI interface should use the high-speed network. This is the daemon network that the file system I/O runs on, so high bandwidth and low latency are highly beneficial for performance.
- The CNI network is used exclusively by IBM Storage Scale, which eliminates the potential for port conflicts with other components.
- Security features like network policies work on MACVLAN CNIs.
- The DNS must be configured properly so that the worker nodes can resolve the storage cluster nodes.
- For more information, see Host aliases.
Advanced features of SR-IOV type CNIs, such as RDMA and GPUdirect, are not yet supported.
For more information about configuring CNI with IBM Storage Scale container native, see Container network interface (CNI) configuration.
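As an illustration only, a MACVLAN network is typically defined through a NetworkAttachmentDefinition. The following sketch assumes a high-speed adapter named ens1f0, an example address range, the whereabouts IPAM plugin, and the ibm-spectrum-scale namespace; all of these are placeholders that must be adapted to your environment and to the referenced CNI configuration documentation.
```
# Hypothetical MACVLAN NetworkAttachmentDefinition; the name, namespace,
# adapter, and IP range are placeholders.
cat <<'EOF' | oc apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: scale-daemon-network
  namespace: ibm-spectrum-scale
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens1f0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.100.0/24"
      }
    }
EOF
```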
Red Hat OpenShift cluster persistent storage considerations
The following list includes the Red Hat OpenShift cluster persistent storage considerations:
- The IBM Storage Scale pods use host path mounts to store IBM Storage Scale cluster metadata and various logs.
- The IBM Storage Scale container native operator creates two local PersistentVolumes (PVs) on two eligible worker nodes. At least 25 GB of free space must be available in the file system that contains the /var directory on all eligible worker nodes to avoid potential failures during the deployment; a quick check is sketched after this list. These PVs are created with the ReadWriteOnce (RWO) access mode.
- Neither the host path mounts nor the local PVs are automatically cleaned up when you delete the associated IBM Storage Scale container native cluster. You must manually clean these up. For more information about cleaning up the persistent storage, see Cleaning up the worker nodes and Cleaning up IBM Storage Scale container native.
- IBM Storage Scale container native pmcollector does not support the use of dynamically created or pre-created PVs.
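For example, the free space in the file system that contains /var can be checked on each eligible worker node before deployment. This is a minimal sketch; <worker-node> is a placeholder.
```
# Verify that the file system containing /var has at least 25 GB free.
oc debug node/<worker-node> -- chroot /host df -h /var
```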
Considerations for enterprise grade image registry
The following list includes the considerations for enterprise grade image registry:
- In a restricted network environment where the Red Hat OpenShift Container Platform cluster cannot pull IBM Storage Scale images from the IBM Container Repository, images must be mirrored to a production grade enterprise image registry that the Red Hat OpenShift Container Platform cluster can access.
- In a restricted network environment, there must be a node that can communicate externally and also with the target Red Hat OpenShift Container Platform cluster.
- Any registry that is used for hosting the container images of IBM Storage Scale container native must not be accessible to external users. Also, it must be restricted to the service account used for IBM Storage Scale container native management. All users and machines that are accessing these container images must be authorized per the IBM Storage Scale license agreement.
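As an illustrative sketch only, images can be mirrored from the IBM Container Repository to an enterprise registry with a tool such as skopeo. The registry host, namespaces, image name, and tag below are placeholders; mirror every image that your release requires and authenticate to both registries first (for example, with skopeo login).
```
# Copy one image (all architectures) from the IBM Container Repository to an
# internal enterprise registry. All names and tags are placeholders.
skopeo copy --all \
  docker://icr.io/<source-namespace>/<image>:<tag> \
  docker://registry.example.com/<target-namespace>/<image>:<tag>
```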
Considerations for direct storage attachment
The following list includes the considerations for direct storage attachment:
- Direct storage attachment is supported on x86_64, ppc64le (Power), and s390x (IBM Z) servers. In a direct storage attachment configuration, the worker nodes use the SAN fabric instead of the IBM Storage Scale NSD protocol for I/O traffic.
- If you use x86_64 or ppc64le (Power) servers, it might be necessary to load multipath drivers through Red Hat Enterprise Linux CoreOS before the storage can be seen (see the sketch after this list).
- The virtualization layers of an IBM Z server allow the physical connection of the disks containing the IBM Storage Scale file system data to both the storage cluster and the IBM Storage Scale container native cluster.
- For more information about setting up a direct storage attachment, see Attaching direct storage on IBM Z in IBM Storage Scale documentation.
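For example, after the multipath drivers are loaded on an x86_64 or ppc64le (Power) worker node, the SAN-attached disks should be visible as block and multipath devices. This is an illustrative sketch; <worker-node> is a placeholder.
```
# List block devices and multipath devices on a worker node to confirm that
# the SAN-attached disks are visible.
oc debug node/<worker-node> -- chroot /host sh -c 'lsblk; multipath -ll'
```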