Hybrid storage

Create persistent storage before your deployment of IBM® Netcool® Operations Insight® on Red Hat® OpenShift®.

Note: If you want to deploy IBM Netcool Operations Insight on Red Hat OpenShift on a cloud platform, such as Red Hat OpenShift Kubernetes Service (ROKS), assess your storage requirements.
Note: Data collection from a metric data source commonly starts with a load of historical data. For this reason, it is good practice to increase the size of the Kafka PVCs. You can increase PVC sizes from the Red Hat OpenShift Container Platform web console, for example: https://console-openshift-console.apps.your-cluster.acme.com.

Navigate to Storage > PersistentVolumeClaims and search for the Kafka PVCs. For each Kafka PVC, select the options (three dots) icon on the right-hand side and select Expand PVC. Increase the PVC size by 2 Gi for every 10,000 KPIs that are processed.
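
You can also expand a PVC from the command line. The following is a minimal sketch; the PVC name, namespace, and target size are placeholders that you must replace with values from your own deployment:

    oc patch pvc <kafka-pvc-name> -n <namespace> --type merge \
      -p '{"spec":{"resources":{"requests":{"storage":"60Gi"}}}}'

The expansion succeeds only if the storage class that backs the PVC has allowVolumeExpansion enabled, as described in the Storage class requirements section of this topic.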

Configuring persistent storage

To ensure that your current and future storage requirements are met, regularly audit your persistent storage capacity usage.

Red Hat OpenShift uses the Kubernetes persistent volume (PV) framework. Persistent volumes are storage resources in the cluster, and persistent volume claims (PVCs) are storage requests that are made on those PV resources by Netcool Operations Insight. For more information on persistent storage in Red Hat OpenShift clusters, see Understanding persistent storage in the Red Hat OpenShift documentation.
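
As a simple illustration of the claim side of this framework, the following is a minimal PVC manifest. The claim name, namespace, and storage class are placeholders for illustration only, not names that are used by Netcool Operations Insight:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-data        # placeholder name
      namespace: netcool        # placeholder namespace
    spec:
      accessModes:
        - ReadWriteOnce         # the access mode used by most components (see Table 1)
      resources:
        requests:
          storage: 50Gi
      storageClassName: rook-ceph-block   # replace with a class from 'oc get sc'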

You can deploy Netcool Operations Insight on OpenShift with the following persistent storage options.

Note: If local storage is used (in a non-production environment), the noi-cassandra-* and noi-cassandra-bak-* PVs must be on the same local node. Cassandra pods fail to bind to their PVCs if this requirement is not met.

Storage class requirements

For production environments, storage classes must support allowVolumeExpansion.
Note: Enable allowVolumeExpansion to avoid the storage filling up, which can cause unrecoverable failures.
To enable allowVolumeExpansion, complete the following steps:
  1. Edit the storage class to enable expansion, as shown in the sketch after these steps. For more information, see https://docs.openshift.com/container-platform/4.10/storage/expanding-persistent-volumes.html#add-volume-expansion_expanding-persistent-volumes.
  2. Increase the individual PVCs to increase capacity. For more information, see https://docs.openshift.com/container-platform/4.10/storage/expanding-persistent-volumes.html#expanding-pvc-filesystem_expanding-persistent-volumes.
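
The following is a minimal command-line sketch of step 1; the storage class name is a placeholder:

    oc patch storageclass <storage-class-name> --type merge \
      -p '{"allowVolumeExpansion": true}'

Individual PVCs can then be expanded from the web console, or with an oc patch command like the one shown earlier in this topic.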

Configuring storage classes

During the installation, you are asked to specify the storage classes for components that require persistence. You must create the persistent volumes and storage classes yourself, or use a preexisting storage class.

Check which storage classes are configured on your cluster by using the oc get sc command. This command lists all available classes to choose from on the cluster. If no storage classes exist, then ask your cluster administrator to configure a storage class by following the guidance in the Red Hat OpenShift documentation.
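
For example, the command and output look similar to the following. The storage class name, provisioner, and values shown here are illustrative only:

    $ oc get sc
    NAME                        PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    rook-ceph-block (default)   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   42d

For production environments, check that the ALLOWVOLUMEEXPANSION column shows true for the class that you intend to use.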

Configuring storage Security Context Constraint (SCC)

Before configuring storage, you need to determine and declare your storage SCC for a chart that runs in a non-root environment, across a number of storage solutions. For more information about how to secure your storage environment, see Managing security context constraints in the Red Hat OpenShift documentation.
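
As a quick check, you can list the SCCs that are available on the cluster, and inspect which SCC was assigned to a running pod. The pod name and namespace in the following sketch are placeholders:

    oc get scc
    oc get pod <pod-name> -n <namespace> \
      -o jsonpath='{.metadata.annotations.openshift\.io/scc}'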

Persistent volume size requirements

Table 1 shows information about persistent volume size and access mode requirements for a full deployment.

Table 1. Persistent volume size requirements
Name | Replicas (trial) | Replicas (production) | Recommended size per replica (trial) | Recommended size per replica (production) | Access mode
cassandra-data | 1 | 3 | 200 Gi | 1500 Gi | ReadWriteOnce
cassandra-bak | 1 | 3 | 50 Gi | 50 Gi | ReadWriteOnce
kafka | 3 | 6 | 50 Gi | 100 Gi | ReadWriteOnce
zookeeper | 1 | 3 | 10 Gi | 10 Gi | ReadWriteOnce
couchdb | 1 | 3 | 20 Gi | 20 Gi | ReadWriteOnce
elasticsearch | 1 | 3 | 50 Gi | 50 Gi | ReadWriteOnce
elasticsearch-topology | 1 | 3 | 100 Gi | 375 Gi | ReadWriteOnce
fileobserver | 1 | 1 | 5 Gi | 10 Gi | ReadWriteOnce
MinIO | 1 | 4 | 50 Gi | 200 Gi | ReadWriteOnce

If Application Discovery is enabled for topology management, then further storage is required. All the components of Application Discovery require persistent storage, including Application Discovery state data that is stored outside of the database. See Table 2 for more information.

Table 2. Persistent storage requirements for Application Discovery
Application Discovery component | Replicas (trial) | Replicas (production) | Recommended size per replica (trial) | Recommended size per replica (production) | Access mode
Primary storage server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce
Secondary storage server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce
Discovery server | 1 | 4 | 50 Gi | 50 Gi | ReadWriteOnce

Non-production deployments only: configuring persistent volumes with the local storage script

Note: Do not use local storage for a production environment.
For trial, demonstration, or development systems, you can download the createStorageAllNodes.sh script from the IT Operations Management Developer Center at http://ibm.biz/local_storage_script. This script must not be used in production environments.

The script facilitates the creation of local storage PVs, which are mapped to directories under the root file system of the parent node. The script also generates example SSH scripts that create those directories on the local file system of the node. Because the directories are created on the local hard disk that is associated with the virtual machine, the resulting storage is only suitable for proof of concept or development work.

Note: If local storage is used, the noi-cassandra-* and noi-cassandra-bak-* PVs must be on the same local node. Cassandra pods fail to bind to their PVCs if this requirement is not met.
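
For reference, the local PVs that the script creates are similar in shape to the following sketch. The names, path, and node in this example are illustrative only; the nodeAffinity section is what pins a PV, and therefore the pod that binds to it, to a specific node:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: noi-cassandra-0               # illustrative name
    spec:
      capacity:
        storage: 200Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage     # illustrative class name
      local:
        path: /data/cassandra-0           # directory created on the node by the SSH script
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - worker-1            # noi-cassandra-* and noi-cassandra-bak-* PVs must name the same node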

Portworx storage

Portworx version 2.6.3 or higher is a supported storage option for IBM Netcool Operations Insight on Red Hat OpenShift. For more information, see https://docs.portworx.com/portworx-install-with-kubernetes/openshift/operator/1-prepare/.

Portworx uses FIPS 140-2 certified cryptographic modules. Portworx can encrypt the whole storage cluster by using a storage class with encryption enabled. For more information, see the Encrypting PVCs using StorageClass with Kubernetes Secrets topic in the Portworx documentation.
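
As an illustration, a Portworx storage class with encryption enabled looks similar to the following sketch, which is based on the Portworx documentation. Verify the provisioner and parameters against your Portworx version before use:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: portworx-secure-sc        # illustrative name
    provisioner: kubernetes.io/portworx-volume
    allowVolumeExpansion: true        # required for production environments
    parameters:
      repl: "3"                       # replication factor
      secure: "true"                  # enables Portworx volume encryption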

Federal Information Processing Standards (FIPS) storage requirements

If you want the storage for your IBM Netcool Operations Insight on Red Hat OpenShift deployment to be FIPS compliant, refer to your storage provider's documentation to ensure that your storage meets this requirement.