Products and components on a container platform

You can install the base Netcool® Operations Insight® solution, Operations Management, within a private cloud by using a container platform. Learn about the architecture of a deployment of Operations Management on a container platform.

Operations Management can be deployed on IBM Cloud® Private. In Fix Pack 2 (version 1.6.0.2) and later versions, Operations Management can also be deployed on Red Hat OpenShift.

The Operations Management cluster is made up of a set of virtual machines that serve as nodes within the cluster. There is one master node and one management node, and the remaining virtual machines serve as worker nodes, where the Kubernetes pods and containers, known as workloads, are deployed. Each pod contains one or more containers.

The following figure shows the container architecture of a deployment of Operations Management on a container platform.
Figure 1. Architecture of a deployment of Operations Management on a container platform
Some of the elements of the diagram are described here:
Ingress
A Kubernetes ingress is a collection of rules that can be configured to give services externally reachable URLs. This list represents the ingresses that are needed for a deployment on IBM Cloud Private (a way to inspect them is sketched after the list):
  • noi-ibm-ea-ui-api
  • noi-ibm-hdm-analytics-dev-backend
  • noi-ibm-hdm-common-ui
  • noi-impactgui
  • noi-nci
  • noi-scala
  • noi-webgui
  • was-noi-webgui
  • noi-common-dash-auth-im-repo
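A deployment-specific way to inspect these ingresses is sketched below with the Kubernetes Python client. This is an illustrative sketch only, not part of the product: the namespace name noi is an assumption, the client must be installed (pip install kubernetes), and older IBM Cloud Private clusters may serve ingresses from the extensions/v1beta1 API group rather than networking.k8s.io/v1.

    # Sketch: list the ingresses in the Operations Management namespace and the
    # externally reachable hosts they expose. The namespace "noi" is an assumption.
    from kubernetes import client, config

    config.load_kube_config()            # reads the current kubectl context
    net = client.NetworkingV1Api()       # older clusters: client.ExtensionsV1beta1Api()

    for ing in net.list_namespaced_ingress(namespace="noi").items:
        hosts = [rule.host for rule in (ing.spec.rules or [])]
        print(ing.metadata.name, hosts)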
NodePort
This list represents the NodePort services that are needed for a container platform deployment (a way to look up the assigned port numbers is sketched after the list):
  • noi-log-analysis-service
  • noi-objserv-agg-primary-nodeport
  • noi-objserv-agg-backup-nodeport
  • noi-proxy
  • noi-webgui
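The port number that a NodePort service exposes is assigned by Kubernetes, so probes and other external clients need to look it up. The following sketch, again with the Kubernetes Python client, reads the node port of the primary ObjectServer service; the service name and the namespace assume a Helm release called noi and should be adjusted to match your deployment.

    # Sketch: discover the external node port of the primary ObjectServer service,
    # which is the port that on-premises probes connect to on any worker node.
    # The service name and the namespace "noi" are assumptions.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    svc = core.read_namespaced_service(
        name="noi-objserv-agg-primary-nodeport", namespace="noi"
    )
    for port in svc.spec.ports:
        print(port.name, port.port, "->", port.node_port)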
Local PVC
The PVCs in the following list are local by default. They can be customized to use distributed storage if required (a sketch for checking the storage class of each claim follows the list).
  • cassandra
  • couchdb
  • kafka
  • zookeeper
  • db2ese
  • impactgui
  • ncoprimary
  • ncobackup
  • openldap
  • scala
  • nciserver
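To check whether a claim is bound to local or distributed storage, you can read its storage class. The sketch below is illustrative only and assumes the same noi namespace; the storage class names that you see depend on how the cluster was configured.

    # Sketch: print the storage class and binding state of each persistent volume
    # claim. The namespace "noi" is an assumption.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    for pvc in core.list_namespaced_persistent_volume_claim(namespace="noi").items:
        print(pvc.metadata.name, pvc.spec.storage_class_name, pvc.status.phase)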
Routes
This list represents the routes that are needed for a Red Hat OpenShift deployment (a way to list them is sketched after the routes):
  • helm_releasename-common-dash-auth-im-repo
  • helm_releasename-ibm-ea-ui-api-graphql
  • helm_releasename-ibm-hdm-analytics-dev-backend-ingress-0
  • helm_releasename-ibm-hdm-analytics-dev-backend-ingress-1
  • helm_releasename-ibm-hdm-analytics-dev-backend-ingress-2
  • helm_releasename-ibm-hdm-analytics-dev-backend-ingress-3
  • helm_releasename-ibm-hdm-analytics-dev-backend-ingress-4
  • helm_releasename-ibm-hdm-common-ui
  • helm_releasename-impactgui-xyz
  • helm_releasename-nci-0
  • helm_releasename-nci-1
  • helm_releasename-scala-xyz
  • helm_releasename-webgui-3pi
  • helm_releasename-webgui-dashravewidget
  • helm_releasename-webgui-ibm
  • helm_releasename-webgui-impact-dashlet
  • helm_releasename-webgui-isc
  • helm_releasename-webgui-iscadmin
  • helm_releasename-webgui-ischa
  • helm_releasename-webgui-iscwire
  • helm_releasename-webgui-mybox
  • helm_releasename-webgui-oauth2
  • helm_releasename-webgui-tip-iscadmin
  • helm_releasename-webgui-tiputil
  • helm_releasename-webgui-tipwebwidget
  • helm_releasename-webgui-twl-ssd
  • helm_releasename-webgui-xyza
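Routes are OpenShift-specific resources in the route.openshift.io/v1 API group, so they are usually listed with oc get routes -n <namespace>. As a non-authoritative sketch, the same information can be read through the custom objects API of the Kubernetes Python client; the namespace noi is an assumption.

    # Sketch: list the OpenShift routes and the external hosts they expose.
    # Routes are custom resources (route.openshift.io/v1); namespace "noi" is assumed.
    from kubernetes import client, config

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    routes = custom.list_namespaced_custom_object(
        group="route.openshift.io", version="v1", namespace="noi", plural="routes"
    )
    for route in routes["items"]:
        print(route["metadata"]["name"], "->", route["spec"]["host"])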
Container Platform
This is the underlying Red Hat OpenShift or IBM Cloud Private system on which the Kubernetes cluster is deployed. IBM Cloud Pak supports the Operations Management workload, together with other deployed workloads. For Operations Management, the cluster is made up of a minimum number of virtual machines, which are deployed as a master node (including management, proxy, and boot functions) and worker nodes within the cluster, together with a local storage file system.
For more information, see the following sections:
Operations Management cluster

The diagram displays an installation on a container platform, where the Operations Management cluster is deployed as containerized Operations Management applications within pods. The cluster is made up of a set of virtual machines that serve as nodes within the cluster. There is one master node, one management node, and the remaining virtual machines serve as worker nodes, where the Kubernetes pods and containers, which are known as workloads, are deployed. You can also create namespaces within your cluster. This enables multiple independent installations of Operations Management within the cluster, with each installation deployed in a separate namespace.
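For example, before deploying a second, independent installation you would create a dedicated namespace for it. The sketch below does this with the Kubernetes Python client; the namespace name noi2 is purely illustrative, and kubectl create namespace noi2 is the equivalent command-line step.

    # Sketch: create an additional namespace to hold a second, independent
    # Operations Management installation. The name "noi2" is illustrative only.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name="noi2"))
    )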

Interaction with the cluster is managed by the master node, as follows:
  • Administration of the cluster is performed by connecting to the master node, either with the catalog UI, or with the Kubernetes command-line interface, kubectl.
  • Users log in to applications provided by the pods and containers within the cluster, such as Web GUI and Operations Analytics - Log Analysis, by opening a web browser and navigating to a URL made up of the master node hostname and the port number used by the relevant application.
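As an illustrative sketch under the same assumptions (namespace noi, Kubernetes Python client), the Web GUI login URL can be assembled from the master node hostname and the node port of the webgui service; the master hostname and the /ibm/console path shown here are placeholders to adapt to your environment.

    # Sketch: build the Web GUI URL from the master node hostname and the node
    # port of the webgui service. Hostname, namespace, and path are placeholders.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    svc = core.read_namespaced_service(name="noi-webgui", namespace="noi")
    node_port = svc.spec.ports[0].node_port
    master_host = "master.example.com"    # replace with your master node hostname
    print(f"https://{master_host}:{node_port}/ibm/console")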
Kubernetes manages how pods are deployed across the worker nodes and orchestrates communication between the pods. Pods are only deployed on worker nodes that meet the minimum resource requirements that are specified for that pod. Kubernetes uses a mechanism called affinity to ensure that pods that must be deployed on different worker nodes are deployed correctly. For example, it ensures that the primary ObjectServer container is deployed on a different worker node to the backup ObjectServer container.
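One way to confirm that placement, sketched here under the same assumptions as the earlier examples, is to compare the worker nodes that the primary and backup ObjectServer pods were scheduled onto.

    # Sketch: check that the primary and backup ObjectServer pods landed on
    # different worker nodes. Namespace "noi" and the pod name prefixes are
    # assumptions based on the pod names listed later in this topic.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    placement = {}
    for pod in core.list_namespaced_pod(namespace="noi").items:
        if "ncoprimary" in pod.metadata.name or "ncobackup" in pod.metadata.name:
            placement[pod.metadata.name] = pod.spec.node_name

    print(placement)   # the two ObjectServer pods should report different nodes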

Storage within the cluster
Storage within the cluster is provided by local Persistent Volumes. Distributed shared storage can be provided by vSphere. Currently no other distributed storage technologies are supported.
Persistent Volume Claims
Scalable network file systems within the cluster that provide storage to the pods in the cluster on demand. Local storage, vSphere, and Red Hat Ceph storage are supported.

Pods and containers in the Operations Management cluster
Discrete Operations Management applications are deployed as containers within the cluster. One or more containers are deployed within pods. Where containers need to be tightly coupled, for example, the backup ObjectServer container and its associated bidirectional gateway container, they are deployed together within a pod to enable a shared context: the same IP address and port space.
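To see this grouping in a running deployment, the following sketch prints each pod with the containers it holds; the ncobackup pod, for example, should list both the backup ObjectServer container and the gateway container. As before, the namespace noi is an assumption.

    # Sketch: print each pod and the containers that run inside it, showing which
    # containers are co-located and therefore share an IP address and port space.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    for pod in core.list_namespaced_pod(namespace="noi").items:
        containers = [c.name for c in pod.spec.containers]
        print(pod.metadata.name, "->", containers)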
The pods and containers in the cluster are listed in the following table.
Table 1. Pods and containers in the cluster
Component or Capability | Application | Container | Pod
Netcool/OMNIbus | Primary ObjectServer | ncoprimary | ncoprimary
Netcool/OMNIbus | Backup ObjectServer | ncobackup-agg-b | ncobackup
Netcool/OMNIbus | Bidirectional gateway | ncobackup-agg-gate | ncobackup
Netcool/Impact | Primary Netcool/Impact core server | nciserver-0 | nciserver-0
Netcool/Impact | Backup Netcool/Impact core server | nciserver-1 | nciserver-1
Netcool/Impact | Netcool/Impact GUI server | impactgui | impactgui
Db2® container | Db2 database | db2ese | db2ese
Event Search | Operations Analytics - Log Analysis | unity | scala
Event Search | Gateway for Message Bus | gateway | scala
Dashboard Application Services Hub GUIs | Dashboard Application Services Hub | webgui | webgui
LDAP | LDAP proxy server | openldap | openldap
Proxy | Proxy | proxy | proxy
Cloud Native Analytics | Cassandra | cassandra | cassandra
Cloud Native Analytics | Cassandra | cassandra-change-super-user-post-install | cassandra
Cloud Native Analytics | CouchDB | couchdb | couchdb
Cloud Native Analytics | Cloud Native Analytics Action service | ea-noi-layer-eanoiactionservice | ea-noi-layer-eanoiactionservice
Cloud Native Analytics | Cloud Native Analytics Gateway | ea-noi-layer-eanoigateway | ea-noi-layer-eanoigateway
Cloud Native Analytics | Cloud Native Analytics UI API | ea-ui-api-graphql | ea-ui-api-graphql
Cloud Native Analytics | Cloud Native Analytics Collator service | ibm-hdm-analytics-dev-collater-aggregationservice | ibm-hdm-analytics-dev-collater-aggregationservice
Cloud Native Analytics | Cloud Native Analytics Deduplication service | ibm-hdm-analytics-dev-dedup-aggregationservice | ibm-hdm-analytics-dev-dedup-aggregationservice
Cloud Native Analytics | Cloud Native Analytics Normalization service | ibm-hdm-analytics-dev-normalizer-aggregationservice | ibm-hdm-analytics-dev-normalizer-aggregationservice
Cloud Native Analytics | Cloud Native Analytics Archiving service | ibm-hdm-analytics-dev-archivingservice | ibm-hdm-analytics-dev-archivingservice
Cloud Native Analytics | Cloud Native Analytics Event Query service | ibm-hdm-analytics-dev-eventsqueryservice | ibm-hdm-analytics-dev-eventsqueryservice
Cloud Native Analytics | Cloud Native Analytics Inference service | ibm-hdm-analytics-dev-inferenceservice | ibm-hdm-analytics-dev-inferenceservice
Cloud Native Analytics | Cloud Native Analytics Ingestion service | ibm-hdm-analytics-dev-ingestionservice | ibm-hdm-analytics-dev-ingestionservice
Cloud Native Analytics | Cloud Native Analytics Policy Registry service | ibm-hdm-analytics-dev-policyregistryservice | ibm-hdm-analytics-dev-policyregistryservice
Cloud Native Analytics | Cloud Native Analytics Service Monitor service | ibm-hdm-analytics-dev-servicemonitorservice | ibm-hdm-analytics-dev-servicemonitorservice
Cloud Native Analytics | Cloud Native Analytics Setup service | ibm-hdm-analytics-dev-setup | ibm-hdm-analytics-dev-setup
Cloud Native Analytics | Cloud Native Analytics Trainer service | ibm-hdm-analytics-dev-trainer | ibm-hdm-analytics-dev-trainer
Cloud Native Analytics | Cloud Native Analytics UI server | ibm-hdm-common-ui-uiserver | ibm-hdm-common-ui-uiserver
Cloud Native Analytics | Kafka | kafka | kafka
Cloud Native Analytics | Redis sentinel | redis-sentinel | redis-sentinel
Cloud Native Analytics | Redis server | redis-server | redis-server
Cloud Native Analytics | Spark master | spark-master | spark-master
Cloud Native Analytics | Spark slave | spark-slave | spark-slave
Cloud Native Analytics | Zookeeper | zookeeper | zookeeper
Cloud Native Analytics and Agile Service Manager topology analytics integration (version 1.6.0.1 and later) | Agile Service Manager Normaliser Mirror maker | ibm-ea-asm-normalizer-mirrormaker | ibm-ea-asm-normalizer-mirrormaker
Cloud Native Analytics and Agile Service Manager topology analytics integration (version 1.6.0.1 and later) | Agile Service Manager Normaliser Streams | ibm-ea-asm-normalizer-normalizerstreams | ibm-ea-asm-normalizer-normalizerstreams
Cloud Native Analytics (version 1.6.0.2 and later) | Nginx | common-dash-auth-im-repo-dashauth | common-dash-auth-im-repo-dashauth
Primary ObjectServer
This pod is made up of the Aggregation ObjectServer - primary container (ncoprimary). Containerized probes and on-premises probes connect to this container to send alert information by using the external node port for the pod.
Backup ObjectServer
This pod is made up of the Aggregation ObjectServer - backup container (ncobackup-agg-b) and a bidirectional gateway container (ncobackup-agg-gate). As in a traditional Netcool/OMNIbus ObjectServer pair, this ObjectServer provides failover if the ObjectServer in the primary ObjectServer pod fails.
Primary Netcool/Impact core server
This pod is made up of the Netcool/Impact core server - primary container (nciserver-0). This container provides standard Netcool/Impact core server functionality.
Backup Netcool/Impact core server
This pod is made up of the Netcool/Impact core server - secondary container (nciserver-1). As in a traditional Netcool/Impact core server pair, this core server provides failover if the core server in the primary Netcool/Impact core server pod fails.
Db2 database
This pod is made up of the Db2 container (db2ese), and is used by webgui.
Netcool/Impact GUI server
This pod is made up of the Netcool/Impact GUI server container (impactgui). This container provides standard Netcool/Impact GUI server functionality.
Operations Analytics - Log Analysis
This pod is made up of the Operations Analytics - Log Analysis container (unity) and the Gateway for Message Bus container (gateway). The Gateway for Message Bus container uses the Accelerated Event Notification (AEN) client to send event data to the Operations Analytics - Log Analysis container, where the event data is indexed.
Dashboard Application Services Hub
This pod is made up of the Web GUI container (webgui), which implements all of the GUIs used in Operations Management:
  • Web GUI Event Viewer
  • Event Analytics GUIs
  • Event Search dashboards and GUIs
LDAP proxy server
By default, LDAP proxy functionality is provided in the LDAP proxy container (openldap), which includes default LDAP configuration, predefined users, and single sign-on capability for those users. All of the other pods communicate with the LDAP proxy pod to support this default functionality. You can configure the LDAP container to change the LDAP server that the pods in the cluster connect to.
Cloud Native Analytics Action service
This service is responsible for updating the ObjectServer with the findings from the Cloud Native Analytics Inference service and the Cloud Native Analytics Collator service. It updates and enriches entries in the ObjectServer with correlation, seasonal and other data.
Cloud Native Analytics Gateway
This service sends ObjectServer alerts.status insertions to the Cloud Native Analytics Ingestion service, for archiving and processing by the Cloud Native Analytics Inference Service.
Cloud Native Analytics UI API
Internal API service used by the UI.
Cloud Native Analytics Collator service
This service generates supergroups where groups have one or more common events. It also generates metrics on the groups and supergroups that are applied to individual events.
Cloud Native Analytics Deduplication service
This service de-duplicates event actions that are received from the Inference service into single entries. These entries are then used to update the ObjectServer, and are also consumed by the Cloud Native Analytics Collator service.
Cloud Native Analytics Normalization service
This service takes the output of the Cloud Native Analytics Collator service from Kafka, and posts the items to a REST endpoint. Currently, the endpoint is hosted by the Cloud Native Analytics Action service, but it can be any service that supports the API endpoint.
Cloud Native Analytics Archiving service
This service receives events that are published by the Cloud Native Analytics Ingestion service on the internal Cloud Native Analytics Kafka events topic, and creates occurrence and instance records in the underlying Cassandra database. This event data is used for training algorithms, and by the Cloud Native Analytics UI to query events that appear in correlation groups or seasonal enrichment, by using the Cloud Native Analytics Event Query service.
Cloud Native Analytics Event Query service
This service provides a REST API for querying the Cassandra database to find NOI event instances.
Cloud Native Analytics Inference service
This service receives events and then applies relevant policies to infer relevant correlations and enrichment actions for events.
Cloud Native Analytics Ingestion service
The Ingestion service provides a REST API for the ingestion of events in NOI format into the Cloud Native Analytics backend. Events are validated and converted into an internal Cloud Native Analytics event representation before they are published onto the internal Cloud Native Analytics Kafka events topic.
Cloud Native Analytics Policy Registry service
This service provides a REST API for managing and querying policies that have been created by an algorithm or a user for correlation or event enrichment. It is used by the Cloud Native Analytics Inference service to fetch the set of policies that are related to a set of event IDs, and by the Cloud Native Analytics UI for fetching details on specific policies that have run.
Cloud Native Analytics Service Monitor service
This service provides a single REST API that can be queried to get the current service health status of all other services in the deployment. The service has a list of deployed service endpoints, and queries each one and then returns a consolidated service view to the caller.
Cloud Native Analytics Setup service
The setup job is created and run as part of the deployment of NOI. It performs the initial setup and creation tasks that are required by the deployment, such as creating the necessary table schemas in the underlying Cassandra database, creating the default 'ScopeID' based scope correlation policy in the Cloud Native Analytics Policy Registry service, and creating the initial training schedule for the available algorithms in the trainer.
Cloud Native Analytics Trainer service
Manages the training schedules for all of the algorithms, and starts retraining jobs in the Spark cluster.
Cloud Native Analytics UI server
This service hosts and serves the key UI components of Cloud Native Analytics.
Agile Service Manager Normaliser Mirror maker (version 1.6.0.1 and later)
Mirrors the Agile Service Manager Kafka status topic.
Agile Service Manager Normaliser Streams (version 1.6.0.1 and later)
Combines Agile Service Manager status with event data from Operations Management's Kafka topic.