Storage
Learn about the storage options for your deployment of IBM® Netcool® Operations Insight® on Red Hat® OpenShift®.
- Reduce the retention.ms setting on the following two topics to 2 hours (7200000 ms):
noi-actions
itsm.baselines
For example, to alter the noi-actions topic:
oc exec evtmanager-kafka-0 -- bash -c "/opt/kafka/bin/kafka-topics.sh --zookeeper 10.254.51.37:2181 --alter --topic noi-actions --config retention.ms=7200000"
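The same retention change must be applied to the second topic, itsm.baselines. A minimal sketch follows, assuming the pod name (evtmanager-kafka-0) and ZooKeeper address from the example above; substitute the values from your own deployment. The sketch builds and prints the command rather than running it, so you can review it first.

```shell
# Sketch only: construct the retention-change command for itsm.baselines.
# Pod name and ZooKeeper address are taken from the example above;
# substitute your own values before running.
RETENTION_MS=$((2 * 60 * 60 * 1000))   # 2 hours expressed in milliseconds
CMD="/opt/kafka/bin/kafka-topics.sh --zookeeper 10.254.51.37:2181 --alter --topic itsm.baselines --config retention.ms=${RETENTION_MS}"
# Print the full command; run it directly once you have verified the values.
echo "oc exec evtmanager-kafka-0 -- bash -c \"${CMD}\""
```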
Configuring persistent storage
To ensure that your current and future storage requirements are met, regularly audit your persistent storage capacity usage.
Red Hat OpenShift uses the Kubernetes persistent volume (PV) framework. Persistent volumes are storage resources in the cluster, and persistent volume claims (PVCs) are storage requests that Netcool Operations Insight makes on those PV resources. For more information on persistent storage in OpenShift clusters, see Understanding persistent storage.
You can deploy Netcool Operations Insight on OpenShift with the following persistent storage options.
- VMware vSphere storage. For more information, see Persistent storage using VMware vSphere volumes.
- Local storage. For trial, demonstration, or development systems, you can configure local storage with the Red Hat OpenShift operator method. For more information, see Non-production deployments only: configuring persistent volumes with the local storage script and Persistent storage using local volumes. Note: Do not use local storage for a production environment.
- Any storage that implements the Container Storage Interface (CSI), or Red Hat OpenShift Container Storage (OCS). For more information, see Configuring CSI volumes and Red Hat OpenShift Container Storage.
- Portworx. For more information, see Portworx storage.
Storage class requirements
Use storage classes that support volume expansion (allowVolumeExpansion: true) to avoid storage filling up and causing unrecoverable failures. If your storage class supports allowVolumeExpansion, complete the following steps to increase capacity:
- Edit the storage class to enable expansion. For more information, see https://docs.openshift.com/container-platform/4.8/storage/expanding-persistent-volumes.html#add-volume-expansion_expanding-persistent-volumes .
- Increase the size of the individual PVCs. For more information, see https://docs.openshift.com/container-platform/4.8/storage/expanding-persistent-volumes.html#expanding-pvc-filesystem_expanding-persistent-volumes .
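The two steps above can be sketched as follows. The storage class name and provisioner here are placeholders, not values from your deployment; substitute your own before applying.

```shell
# Sketch: a storage class with volume expansion enabled.
# "expandable-storage" and the provisioner are placeholder values.
cat > expandable-sc.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-storage                  # placeholder name
provisioner: kubernetes.io/vsphere-volume   # substitute your provisioner
allowVolumeExpansion: true                  # permits PVC resize requests
EOF
# Step 1: apply the storage class:
#   oc apply -f expandable-sc.yaml
# Step 2: grow an individual PVC (example value of 100 Gi):
#   oc patch pvc <pvc-name> -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
```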
Configuring storage classes
To list the storage classes that are available on your cluster, run the command oc get sc. If no storage classes exist, ask your cluster administrator to configure a storage class by following the guidance in the OpenShift documentation.
Configuring storage Security Context Constraint (SCC)
Before configuring storage, determine and declare your storage SCC for a chart that runs in a non-root environment, across the range of supported storage solutions. For more information about how to secure your storage environment, see the OpenShift documentation: Managing security context constraints.
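As an illustration of what the non-root storage requirement means in practice, the fragment below sketches the pod-level security context implied by the User and fsGroup columns in Table 1 (runAsUser 1001, fsGroup 2001). This is an assumption-laden fragment, not a complete pod spec from the product charts.

```shell
# Sketch: pod-spec fragment matching the User/fsGroup columns in Table 1.
# Volumes mounted by such a pod are group-owned by fsGroup 2001,
# so the non-root user 1001 can write to them.
cat > storage-security-context.yaml <<'EOF'
securityContext:
  runAsUser: 1001   # non-root user from Table 1
  fsGroup: 2001     # group ownership applied to mounted volumes
EOF
```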
Persistent volume size requirements
Table 1 shows information about persistent volume size and access mode requirements for a full deployment.
Name | Trial | Production | Recommended size per replica (trial) | Recommended size per replica (production) | Access mode | User | fsGroup |
---|---|---|---|---|---|---|---|
cassandra-data | 1 | 3 | 200 Gi | 450 Gi | ReadWriteOnce | 1001 | 2001 |
cassandra-bak | 1 | 3 | 50 Gi | 50 Gi | ReadWriteOnce | 1001 | 2001 |
kafka | 3 | 6 | 50 Gi | 100 Gi | ReadWriteOnce | 1001 | 2001 |
zookeeper | 1 | 3 | 10 Gi | 10 Gi | ReadWriteOnce | 1001 | 2001 |
couchdb | 1 | 3 | 20 Gi | 20 Gi | ReadWriteOnce | 1001 | 2001 |
impact | 1 | 1 | 10 Gi | 20 Gi | ReadWriteOnce | 1001 | 2001 |
impactgui | 1 | 1 | 5 Gi | 5 Gi | ReadWriteOnce | 1001 | 2001 |
ncobackup | 1 | 1 | 5 Gi | 10 Gi | ReadWriteOnce | 1001 | 2001 |
ncoprimary | 1 | 1 | 5 Gi | 10 Gi | ReadWriteOnce | 1001 | 2001 |
openldap | 1 | 1 | 1 Gi | 1 Gi | ReadWriteOnce | 1001 | 2001 |
elasticsearch | 1 | 3 | 50 Gi | 50 Gi | ReadWriteOnce | 1001 | 2001 |
elasticsearch-topology | 1 | 3 | 100 Gi | 250 Gi | ReadWriteOnce | 1000 | 1000 |
fileobserver | 1 | 1 | 5 Gi | 10 Gi | ReadWriteOnce | 1001 | 2001 |
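To show how one row of Table 1 translates into a claim, the sketch below writes a PVC for the cassandra-data production figures (450 Gi, ReadWriteOnce). The claim name and storage class name are illustrative placeholders, not values generated by the product.

```shell
# Sketch: a PVC matching the cassandra-data production row of Table 1.
# The claim name and storageClassName are placeholders; substitute
# the names used in your deployment.
cat > cassandra-data-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-cassandra-0            # placeholder claim name
spec:
  accessModes:
    - ReadWriteOnce                 # access mode from Table 1
  resources:
    requests:
      storage: 450Gi                # production size from Table 1
  storageClassName: my-block-storage   # substitute your storage class
EOF
# Apply with: oc apply -f cassandra-data-pvc.yaml
```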
If Application Discovery is enabled for topology management, then further storage is required. All components of Application Discovery require persistent storage, including the state of Application Discovery data that is stored outside the database. Refer to Table 2 for more information.
Application Discovery component | Trial | Production | Recommended size per replica (trial) | Recommended size per replica (production) | Access mode | User | fsGroup |
---|---|---|---|---|---|---|---|
Primary storage server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce | 1001 | 2001 |
Secondary storage server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce | 1001 | 2001 |
Discovery server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce | 1001 | 2001 |
Non-production deployments only: configuring persistent volumes with the local storage script
The script facilitates the creation of local storage PVs. The PVs are volumes that map to directories on the root file system of the parent node. The script also generates example SSH scripts that create those directories on the local file system of the node. Because the directories are on the local hard disk that is associated with the virtual machine, this approach is suitable only for proof-of-concept or development work.
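The kind of PV the script produces can be sketched as below: a hostPath volume bound to a directory on the node. The PV name, path, and storage class name here are illustrative only, not the values the script actually generates, and this pattern is not for production use.

```shell
# Sketch: a local-storage PV of the kind the script creates.
# Name, path, and storageClassName are illustrative placeholders.
cat > local-pv-example.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-cassandra-0        # illustrative name
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # illustrative class name
  hostPath:
    path: /opt/ibm/noi/cassandra-0  # directory created on the node by SSH script
EOF
# Apply with: oc apply -f local-pv-example.yaml
```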
Portworx storage
Portworx version 2.6.3 or later is a supported storage option for IBM Netcool Operations Insight on Red Hat OpenShift. For more information, see https://docs.portworx.com/portworx-install-with-kubernetes/openshift/operator/1-prepare/ .