Hybrid storage

Create persistent storage before your deployment of IBM® Netcool® Operations Insight® on Red Hat® OpenShift®.

Note: If you want to deploy IBM Netcool Operations Insight on Red Hat OpenShift on a cloud platform, such as Red Hat OpenShift Kubernetes Service (ROKS), assess your storage requirements.

Configuring persistent storage

To ensure that your current and future storage requirements are met, regularly audit capacity usage for persistent storage.

Red Hat OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework. Persistent volumes are storage resources in the cluster, and persistent volume claims (PVCs) are storage requests that are made on those PV resources by Netcool Operations Insight. For more information on persistent storage in Red Hat OpenShift clusters, see Understanding persistent storage in the Red Hat OpenShift documentation.

Note: Persistence must be enabled. Set persistence.enabled to true in the custom resource.
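For illustration, the following fragment sketches where the setting goes in the NOI custom resource. The storage class name is a placeholder, and the exact field layout can vary by release, so check the custom resource reference for your version:

```yaml
apiVersion: noi.ibm.com/v1beta1
kind: NOI
metadata:
  name: evtmanager
spec:
  persistence:
    # Required: persistence must be enabled for production deployments.
    enabled: true
    # Placeholder -- substitute a storage class configured on your
    # cluster (list them with: oc get sc).
    storageClassName: my-storage-class
```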

You can deploy Netcool Operations Insight on OpenShift with the following persistent storage options.

Note: If local storage is used (in a nonproduction environment), the noi-cassandra-* and noi-cassandra-bak-* PVs must be on the same local node. If this requirement is not met, the Cassandra pods fail to bind to their PVCs.

Storage class requirements

For production environments, storage classes must support allowVolumeExpansion.
Note: Enable allowVolumeExpansion to prevent storage from filling up, which can cause unrecoverable failures.
To enable allowVolumeExpansion, complete the following steps:
  1. Edit the storage class to enable expansion. For more information, see https://docs.openshift.com/container-platform/4.12/storage/expanding-persistent-volumes.html#add-volume-expansion_expanding-persistent-volumes.
  2. Increase the individual PVCs to increase capacity. For more information, see https://docs.openshift.com/container-platform/4.12/storage/expanding-persistent-volumes.html#expanding-pvc-filesystem_expanding-persistent-volumes.
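The steps above can be sketched with the oc CLI. The storage class name, PVC name, namespace, and target size below are placeholders, not values from a real deployment:

```shell
# Step 1: enable volume expansion on an existing storage class
# ("my-storage-class" is a placeholder).
oc patch storageclass my-storage-class \
  -p '{"allowVolumeExpansion": true}'

# Verify the change.
oc get storageclass my-storage-class \
  -o jsonpath='{.allowVolumeExpansion}'

# Step 2: expand an individual PVC by raising its storage request
# (PVC name, namespace, and size are illustrative).
oc patch pvc data-noi-cassandra-0 -n netcool \
  -p '{"spec":{"resources":{"requests":{"storage":"2000Gi"}}}}'
```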

Configuring storage classes

During the installation, you are asked to specify the storage classes for components that require persistence. You must create the persistent volumes and storage classes yourself, or use a preexisting storage class.

Note: Do not use CephFS storage. For more information, see OCS / ODF Database Workloads Must Not Use CephFS PVs/PVCs (RDBMSs, NoSQL, PostgreSQL, Mongo DBs, etc.) in the Red Hat OpenShift documentation.
Check which storage classes are configured on your cluster by using the oc get sc command. This command lists all available classes to choose from on the cluster. If no storage classes exist, ask your cluster administrator to configure a storage class by following the guidance in the Red Hat OpenShift documentation.
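For example, you can list the classes and check the expansion setting in one step. The output columns shown in the comment reflect typical oc get sc output; a class marked "(default)" is used when a PVC does not name a class explicitly:

```shell
# List the storage classes available on the cluster. Typical columns
# include NAME, PROVISIONER, RECLAIMPOLICY, VOLUMEBINDINGMODE, and
# ALLOWVOLUMEEXPANSION.
oc get sc
```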

Configuring storage Security Context Constraint (SCC)

Before you configure storage, you need to determine and declare your storage SCC for a chart that runs in a non-root environment across various storage solutions. For more information about how to secure your storage environment, see the Red Hat OpenShift documentation: Managing security context constraints.

Persistent volume size requirements

Table 1 shows information about persistent volume size and access mode requirements for a full deployment.
Note: Ensure that your storage provider supports ReadWriteMany (RWX) volumes. The spark-shared-state storage is shared between the Spark pods, so it must support multi-node access.
Table 1. Persistent volume size requirements
Name | Replicas (trial) | Replicas (production) | Size per replica (trial) | Size per replica (production) | Access mode
cassandra-data | 1 | 3 | 200 Gi | 1500 Gi | ReadWriteOnce
cassandra-bak | 1 | 3 | 50 Gi | 50 Gi | ReadWriteOnce
zookeeper | 1 | 3 | 10 Gi | 10 Gi | ReadWriteOnce
couchdb | 1 | 3 | 20 Gi | 20 Gi | ReadWriteOnce
fileobserver | 1 | 1 | 5 Gi | 10 Gi | ReadWriteOnce
MinIO | 1 | 4 | 50 Gi | 200 Gi | ReadWriteOnce
postgres-cluster-1 | 1 | 1 | 100 Gi | 100 Gi | ReadWriteOnce
postgres-cluster-1-wal | 1 | 1 | 50 Gi | 50 Gi | ReadWriteOnce
spark-shared-state | 1 | 1 | 100 Gi | 100 Gi | ReadWriteMany
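Because the sizes in Table 1 are per replica, the total raw capacity a deployment needs is each size multiplied by its replica count, summed over all volumes. A minimal sketch using the production column of Table 1:

```python
# Rough capacity planning from Table 1: per-replica size multiplied by
# the replica count, summed over all volumes. Values are the production
# column of Table 1 (replica count, size in Gi).
production_volumes = {
    "cassandra-data":         (3, 1500),
    "cassandra-bak":          (3, 50),
    "zookeeper":              (3, 10),
    "couchdb":                (3, 20),
    "fileobserver":           (1, 10),
    "minio":                  (4, 200),
    "postgres-cluster-1":     (1, 100),
    "postgres-cluster-1-wal": (1, 50),
    "spark-shared-state":     (1, 100),
}

total_gi = sum(replicas * size for replicas, size in production_volumes.values())
print(f"Total raw production capacity: {total_gi} Gi")  # 5800 Gi
```

This total excludes Application Discovery (Table 2) and any overhead that your storage provider adds for replication at the storage layer.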

If Application Discovery is enabled for topology management, further storage is required. All components of Application Discovery require persistent storage, including Application Discovery state data that is stored outside of the database. For more information, see Table 2.

Table 2. Persistent storage requirements for Application Discovery
Application Discovery component | Replicas (trial) | Replicas (production) | Size per replica (trial) | Size per replica (production) | Access mode
Primary storage server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce
Secondary storage server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce
Discovery server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce

Portworx storage

Portworx version 2.6.3 or later is a supported storage option for IBM Netcool Operations Insight on Red Hat OpenShift. For more information, see https://docs.portworx.com/portworx-install-with-kubernetes/openshift/operator/1-prepare/.

Portworx uses FIPS 140-2 certified cryptographic modules. Portworx can encrypt the whole storage cluster by using a storage class with encryption enabled. For more information, see the Encrypting PVCs using StorageClass with Kubernetes Secrets topic in the Portworx documentation.
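As a sketch, a Portworx storage class with encryption enabled might look like the following. The class name and replication factor are placeholders, and the provisioner name depends on how Portworx was installed (the CSI driver uses pxd.portworx.com; older in-tree installs use kubernetes.io/portworx-volume), so confirm the details against the Portworx documentation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-secure-sc        # placeholder name
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"                 # placeholder replication factor
  secure: "true"            # enables Portworx volume encryption
allowVolumeExpansion: true
```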

Federal Information Processing Standards (FIPS) storage requirements

If you want the storage for your IBM Netcool Operations Insight on Red Hat OpenShift deployment to be FIPS-compliant, refer to your storage provider's documentation to ensure that your storage meets this requirement.