Hybrid storage
Create persistent storage before your deployment of IBM® Netcool® Operations Insight® on Red Hat® OpenShift®.
Note: To expand a Kafka PVC after deployment, log in to the Red Hat OpenShift web console, for example https://console-openshift-console.apps.your-cluster.acme.com, and navigate to Storage > PersistentVolumeClaims. Search for the Kafka PVC or PVCs. For each Kafka PVC, select the options (three dots) icon on the right-hand side and select Expand PVC. Increase the PVC size by 2Gi for every 10,000 KPIs being processed.
Configuring persistent storage
To ensure that your current and future storage requirements are met, regularly audit your persistent storage capacity usage.
Red Hat OpenShift uses the Kubernetes persistent volume (PV) framework. Persistent volumes are storage resources in the cluster, and persistent volume claims (PVCs) are storage requests that are made on those PV resources by Netcool Operations Insight. For more information on persistent storage in Red Hat OpenShift clusters, see Understanding persistent storage.
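For example, a quick way to review requested capacity and actual usage from the command line is sketched below. The namespace, pod name, and mount path are placeholders for your own deployment.

```sh
# List the PVCs in the Netcool Operations Insight namespace, with their
# requested sizes, bound PVs, and storage classes (the namespace is a placeholder).
oc get pvc -n noi

# Check actual usage of a mounted volume from inside a pod that uses it.
# The pod name and mount path are placeholders; adjust for your deployment.
oc exec -n noi <pod-name> -- df -h /data
```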
You can deploy Netcool Operations Insight on OpenShift with the following persistent storage options.
- VMware vSphere storage. For more information, see Persistent storage using VMware vSphere volumes.
- Local storage. For trial, demonstration, or development systems, you can configure local storage with the Red Hat OpenShift operator method. For more information, see Persistent storage using local volumes. Note: Do not use local storage for a production environment.
- Any storage that implements the Container Storage Interface (CSI), or Red Hat OpenShift Container Storage (OCS). For more information, see Configuring CSI volumes and Red Hat OpenShift Container Storage.
- Portworx. For more information, see Portworx storage.
Storage class requirements
Use a storage class that has allowVolumeExpansion enabled, to avoid storage filling up and causing unrecoverable failures. To expand a PVC when allowVolumeExpansion is enabled, complete the following steps (a command-line sketch follows this list):
- Edit the storage class to enable expansion. For more information, see https://docs.openshift.com/container-platform/4.10/storage/expanding-persistent-volumes.html#add-volume-expansion_expanding-persistent-volumes.
- Increase the individual PVCs to increase capacity. For more information, see https://docs.openshift.com/container-platform/4.10/storage/expanding-persistent-volumes.html#expanding-pvc-filesystem_expanding-persistent-volumes.
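The following is a minimal command-line sketch of these two steps. The storage class name, PVC name, namespace, and target size are placeholders that you must replace with values from your own deployment.

```sh
# Step 1: enable volume expansion on the storage class (requires cluster-admin).
oc patch storageclass <storage-class-name> \
  -p '{"allowVolumeExpansion": true}'

# Step 2: increase the requested size of an individual PVC.
oc patch pvc <pvc-name> -n <namespace> \
  -p '{"spec": {"resources": {"requests": {"storage": "60Gi"}}}}'
```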
Configuring storage classes
During the installation, you are asked to specify the storage classes for components that require persistence. You must create the persistent volumes and storage classes yourself, or use a preexisting storage class.
Check which storage classes are available on the cluster by running the oc get sc command. This command lists all available classes to choose from on the cluster. If no storage classes exist, then ask your cluster administrator to configure a storage class by following the guidance in the Red Hat OpenShift documentation that is linked from the storage options listed earlier in this topic.
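For example, the following commands list the storage classes on the cluster and check whether volume expansion is enabled for each of them. The storage class names in your output will be specific to your cluster.

```sh
# List all storage classes available on the cluster.
oc get sc

# Show each storage class with its provisioner and whether expansion is allowed.
oc get sc -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,ALLOWEXPANSION:.allowVolumeExpansion
```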
Configuring storage Security Context Constraint (SCC)
Before you configure storage, you need to determine and declare your storage SCC for a chart that runs in a non-root environment, across a number of storage solutions. For more information about how to secure your storage environment, see the Red Hat OpenShift documentation: Managing security context constraints.
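As a starting point for determining which SCC applies to your storage workloads, you can inspect the SCCs that are defined on the cluster. The SCC name restricted is used here only as an example.

```sh
# List the security context constraints that are defined on the cluster.
oc get scc

# Inspect one SCC to review settings that affect storage access,
# such as fsGroup, supplementalGroups, and the SELinux context options.
oc describe scc restricted
```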
Persistent volume size requirements
Table 1 shows information about persistent volume size and access mode requirements for a full deployment.
Name | Number of replicas (trial) | Number of replicas (production) | Recommended size per replica (trial) | Recommended size per replica (production) | Access mode |
---|---|---|---|---|---|
cassandra-data | 1 | 3 | 200 Gi | 1500 Gi | ReadWriteOnce |
cassandra-bak | 1 | 3 | 50 Gi | 50 Gi | ReadWriteOnce |
kafka | 3 | 6 | 50 Gi | 100 Gi | ReadWriteOnce |
zookeeper | 1 | 3 | 10 Gi | 10 Gi | ReadWriteOnce |
couchdb | 1 | 3 | 20 Gi | 20 Gi | ReadWriteOnce |
elasticsearch | 1 | 3 | 50 Gi | 50 Gi | ReadWriteOnce |
elasticsearch-topology | 1 | 3 | 100 Gi | 375 Gi | ReadWriteOnce |
fileobserver | 1 | 1 | 5 Gi | 10 Gi | ReadWriteOnce |
MinIO | 1 | 4 | 50 Gi | 200 Gi | ReadWriteOnce |
If Application Discovery is enabled for topology management, then further storage is required. All of the Application Discovery components require persistent storage, including the state of Application Discovery data that is stored outside of the database. Refer to Table 2 for more information.
Application Discovery component | Number of replicas (trial) | Number of replicas (production) | Recommended size per replica (trial) | Recommended size per replica (production) | Access mode |
---|---|---|---|---|---|
Primary storage server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce |
Secondary storage server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce |
Discovery server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce |
Non-production deployments only: configuring persistent volumes with the local storage script
The script facilitates the creation of local storage PVs, which are mapped to directories off the root file system on the parent node. The script also generates example SSH scripts that create those directories on the local file system of the node. Because the directories are created on the local hard disk that is associated with the virtual machine, they are suitable only for proof-of-concept or development work.
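The following is an illustrative sketch only of the kind of local PV that such a script creates. The PV name, storage class name, capacity, directory path, and node name are all placeholders and will differ in your environment; a real deployment would use the values generated by the script.

```sh
# Illustrative only: create a local PV that maps to a directory on one node.
# All names, sizes, and paths below are placeholders.
oc apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/local-storage/example
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1
EOF
```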
Portworx storage
Portworx version 2.6.3 or higher is a supported storage option for IBM Netcool Operations Insight on Red Hat OpenShift. For more information, see https://docs.portworx.com/portworx-install-with-kubernetes/openshift/operator/1-prepare/.
Portworx uses FIPS 140-2 certified cryptographic modules. Portworx can encrypt the whole storage cluster by using a storage class with encryption enabled. For more information, see the Encrypting PVCs using StorageClass with Kubernetes Secrets topic in the Portworx documentation.
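As an illustrative sketch only, an encrypted Portworx storage class might look like the following. The provisioner and parameter names (for example, secure and repl) are assumptions here rather than confirmed values; verify them against the Portworx documentation for your Portworx version.

```sh
# Illustrative sketch only: a Portworx storage class with encryption enabled.
# Verify the provisioner and parameter names against the Portworx documentation
# for your Portworx version before using anything like this.
oc apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-secure-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"
  secure: "true"
allowVolumeExpansion: true
EOF
```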
Federal Information Processing Standards (FIPS) storage requirements
If you want the storage for your IBM Netcool Operations Insight on Red Hat OpenShift deployment to be FIPS compliant, refer to your storage provider's documentation to ensure that your storage meets this requirement.