Storage
Learn about the storage options for your deployment of IBM® Netcool® Operations Insight® on Red Hat® OpenShift®.
Configuring persistent storage
To ensure that your current and future storage requirements are met, complete regular audits of persistent storage capacity usage.
Red Hat OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework. Persistent volumes are storage resources in the cluster, and persistent volume claims (PVCs) are storage requests that are made on those PV resources by Netcool Operations Insight. For more information about persistent storage in Red Hat OpenShift clusters, see Understanding persistent storage.
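As an illustrative way to complete such an audit (the namespace, pod name, and mount path in the following commands are placeholders, not values defined by this documentation), you can compare the capacity that each PVC requests with the filesystem usage inside the pods that mount the volumes:
```
# List the PVCs in the deployment namespace and the capacity that each one requests.
oc get pvc -n <noi-namespace>

# Compare the requested capacity with the actual filesystem usage inside a pod
# that mounts the volume. The pod name and mount path are placeholders.
oc exec -n <noi-namespace> <pod-name> -- df -h <mount-path>
```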
Storage is required when persistence is enabled, which you specify by setting persistence.enabled to true in the custom resource. You can deploy Netcool Operations Insight on Red Hat OpenShift with the following persistent storage options; a sketch of the related custom resource settings follows the list.
- VMware vSphere storage. For more information, see Persistent storage using VMware vSphere volumes.
- Local storage. For trial, demonstration, or development systems, you can configure local storage with the Red Hat OpenShift operator method. For more information, see Persistent storage using local volumes. Note: Do not use local storage for a production environment.
- Portworx. For more information, see Portworx storage.
- Other. Any storage that implements the Container Storage Interface (CSI) or Red Hat OpenShift Container Storage (OCS). For more information, see Configuring CSI volumes and Red Hat OpenShift Container Storage.
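The following command is a minimal sketch only. The persistence.enabled property is described above, but the resource type (noi), the instance name, and the storageClassCassandraData property name are assumptions that can vary by release; confirm the exact names in the custom resource reference for your version before you apply any change.
```
# Illustrative sketch: enable persistence and select a storage class in the
# custom resource with a merge patch. The resource type, instance name, and
# storage class property name are assumptions; verify them for your release.
oc patch noi <noi-instance> -n <noi-namespace> --type merge \
  -p '{"spec":{"persistence":{"enabled":true,"storageClassCassandraData":"<storage-class-name>"}}}'
```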
Storage class requirements
Use a storage class that has allowVolumeExpansion enabled so that volumes can be expanded before they fill up and cause unrecoverable failures; a command sketch follows this list.
- Edit the storage class to enable expansion. For more information, see https://docs.openshift.com/container-platform/4.12/storage/expanding-persistent-volumes.html#add-volume-expansion_expanding-persistent-volumes.
- Increase the size of individual PVCs to increase capacity. For more information, see https://docs.openshift.com/container-platform/4.12/storage/expanding-persistent-volumes.html#expanding-pvc-filesystem_expanding-persistent-volumes.
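The following commands are a hedged sketch of both steps; the storage class name, PVC name, namespace, and new size are placeholders.
```
# Enable volume expansion on an existing storage class.
oc patch storageclass <storage-class-name> -p '{"allowVolumeExpansion": true}'

# Request more capacity for an individual PVC. The storage class must allow
# expansion, and the new size must be larger than the current size.
oc patch pvc <pvc-name> -n <noi-namespace> \
  -p '{"spec":{"resources":{"requests":{"storage":"<new-size>"}}}}'
```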
Configuring storage classes
During the installation, you are asked to specify the storage classes for components that require persistence. Create the persistent volumes and storage classes yourself, or use a preexisting storage class.
To see which storage classes are available on the cluster, run the oc get sc command. If no storage classes exist, ask your cluster administrator to configure a storage class by following the guidance in the Red Hat OpenShift documentation.
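For example, you can list the available storage classes and inspect a specific one as follows (the storage class name is a placeholder):
```
# List the storage classes that are available on the cluster.
oc get sc

# Show the full definition of a storage class, including its provisioner and
# whether allowVolumeExpansion is enabled.
oc get sc <storage-class-name> -o yaml
```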
Configuring storage Security Context Constraint (SCC)
Before you configure storage, determine and declare your storage SCC for a chart that runs in a non-root environment across various storage solutions. For more information about how to secure your storage environment, see the Red Hat OpenShift documentation: Managing security context constraints.
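As an illustrative sketch only (the SCC name, service account, and namespace are placeholders, not values that Netcool Operations Insight requires), you can review the SCCs on the cluster and grant one to a service account as follows:
```
# List the security context constraints that are defined on the cluster.
oc get scc

# Grant an SCC to a service account in the deployment namespace.
# The SCC name, service account, and namespace are placeholders.
oc adm policy add-scc-to-user <scc-name> -z <service-account> -n <noi-namespace>
```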
Persistent volume size requirements
Name | Replicas (trial) | Replicas (production) | Recommended size per replica (trial) | Recommended size per replica (production) | Access mode |
---|---|---|---|---|---|
cassandra-data | 1 | 3 | 200 Gi | 1500 Gi | ReadWriteOnce |
cassandra-bak | 1 | 3 | 50 Gi | 50 Gi | ReadWriteOnce |
kafka | 3 | 6 | 50 Gi | 100 Gi | ReadWriteOnce |
zookeeper | 1 | 3 | 10 Gi | 10 Gi | ReadWriteOnce |
couchdb | 1 | 3 | 20 Gi | 20 Gi | ReadWriteOnce |
nciserver | 1 | 1 | 10 Gi | 20 Gi | ReadWriteOnce |
impactgui | 1 | 1 | 5 Gi | 5 Gi | ReadWriteOnce |
ncobackup | 1 | 1 | 5 Gi | 10 Gi | ReadWriteOnce |
ncoprimary | 1 | 1 | 5 Gi | 10 Gi | ReadWriteOnce |
openldap | 1 | 1 | 1 Gi | 1 Gi | ReadWriteOnce |
fileobserver | 1 | 1 | 5 Gi | 10 Gi | ReadWriteOnce |
MinIO | 1 | 4 | 50 Gi | 200 Gi | ReadWriteOnce |
postgres-cluster-1 | 1 | 1 | 100 Gi | 100 Gi | ReadWriteOnce |
postgres-cluster-1-wal | 1 | 1 | 50 Gi | 50 Gi | ReadWriteOnce |
spark-shared-state | 1 | 1 | 100 Gi | 100 Gi | ReadWriteMany |
If Application Discovery is enabled for topology management, provision extra storage. All components of Application Discovery require persistent storage, including the Application Discovery state data that is stored outside of the database. For more information, see the following table.
Application Discovery component | Replicas (trial) | Replicas (production) | Recommended size per replica (trial) | Recommended size per replica (production) | Access mode |
---|---|---|---|---|---|
Primary storage server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce |
Secondary storage server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce |
Discovery server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce |
Portworx storage
Portworx version 2.6.3 or later is a supported storage option for IBM Netcool Operations Insight on Red Hat OpenShift. For more information, see https://docs.portworx.com/portworx-install-with-kubernetes/openshift/operator/1-prepare/.
Portworx uses FIPS 140-2 certified cryptographic modules. Portworx can encrypt the whole storage cluster by using a storage class with encryption enabled. For more information, see the Encrypting PVCs using StorageClass with Kubernetes Secrets topic in the Portworx documentation.
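The following is a hedged sketch of such a storage class, applied as a heredoc. The provisioner and the secure and repl parameters follow the pattern shown in the Portworx documentation, but treat the exact names and values as assumptions and confirm them for your Portworx version.
```
# Illustrative sketch of a Portworx storage class with encryption enabled
# through the secure parameter. Confirm the provisioner and parameter names
# against the Portworx documentation for your version.
oc apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-secure-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  secure: "true"
  repl: "3"
allowVolumeExpansion: true
EOF
```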
Federal Information Processing Standards (FIPS) storage requirements
If you want the storage for your IBM Netcool Operations Insight on Red Hat OpenShift deployment to be FIPS-compliant, refer to your storage provider's documentation to ensure that your storage meets this requirement.