Architecture of a hybrid system

Learn about the architecture of a hybrid deployment of IBM® Netcool® Operations Insight®.

Architecture

A hybrid deployment integrates an on-premises Operations Management installation with a smaller deployment of IBM Netcool Operations Insight on OpenShift®, called the cloud native Netcool Operations Insight components.

The cloud native Netcool Operations Insight components deployed on Red Hat® OpenShift provide cloud native analytics, event management, runbook automation, service and topology management, and topology analytics. The on-premises Operations Management installation provides the IBM Tivoli® Netcool/OMNIbus ObjectServer and Web GUI, IBM Tivoli Netcool/Impact, and probes and gateways.

The IBM Netcool Operations Insight cluster on Red Hat OpenShift is composed of a set of virtual machines, which are deployed as master nodes or worker nodes, together with a local storage file system. The master nodes provide management, proxy, and boot functions, and the worker nodes are where the Kubernetes pods are deployed.
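For example, you can list the nodes in the cluster and their roles with the Red Hat OpenShift command-line interface, oc; the exact output depends on your cluster:

    oc get nodes -o wide    # shows each node, its role (master or worker), and its addresses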

The following figure shows the architecture of a hybrid deployment.

Figure 1. Architecture of a Netcool Operations Insight hybrid deployment

On-premises IBM Netcool Operations Insight

The on-premises Operations Management installation is composed of one or more ObjectServers, the Web GUI, the Impact server and UI, and the probes and gateways. Additional authentication is configured at installation time so that the on-premises services and cloud services can access each other. The hybrid solution can be deployed with multiple on-premises Web GUI instances in high availability (HA) mode to provide redundancy.

IBM Netcool Operations Insight on OpenShift cluster

The IBM Netcool Operations Insight cluster is deployed as containerized IBM Netcool Operations Insight applications within pods on Red Hat OpenShift. Each pod has one or more containers.

Kubernetes orchestrates communication between the pods and manages how the pods are deployed across the worker nodes. Pods are deployed only on worker nodes that meet the minimum resource requirements that are specified for that pod. Kubernetes uses affinity and anti-affinity rules to ensure that pods that must run on different worker nodes are scheduled accordingly.
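If you want to see the rules and resource requirements that the scheduler applies to a particular pod, you can inspect its specification with oc; the pod name and namespace in this sketch are placeholders:

    oc get pod <pod-name> -n <namespace> -o jsonpath='{.spec.affinity}'                   # prints the affinity and anti-affinity rules in the pod specification
    oc get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[*].resources}'    # prints the resource requests and limits for each container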

Interaction with the cluster is managed by the master node, as follows.

  • Administration of the cluster is performed by connecting to the master node, either with the catalog UI or with the Red Hat OpenShift command-line interface, oc (see the example commands after this list).
  • Users log in to applications that are provided by the pods and containers within the cluster, with the on-premises Web GUI and the Cloud GUI. These GUIs are accessed by browsing to a URL that is made up of the hostname and the port number that is used by the relevant application.
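The following commands are a minimal sketch; the API server hostname, user, and namespace are placeholders that you replace with the values for your own cluster and deployment:

    oc login https://api.<cluster-domain>:6443 -u <admin-user>    # connect to the cluster API server on the master node
    oc get pods -n <namespace>                                     # list the Netcool Operations Insight pods
    oc get routes -n <namespace>                                   # show the hostnames that the GUIs are served from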

If you require multiple independent installations of IBM Netcool Operations Insight, then you can create namespaces within your cluster and deploy each instance into a separate namespace.
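For example, you can create a separate namespace (project) for each instance; the project names in this sketch are arbitrary placeholders:

    oc new-project noi-production     # hypothetical namespace for one instance
    oc new-project noi-development    # hypothetical namespace for a second, independent instance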

For more information, see the Red Hat product documentation for OpenShift Container Platform 4.10: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.10/.

Routes

A Netcool Operations Insight on OpenShift deployment requires several routes to be created, both to direct traffic from clients, such as web browsers, to the Netcool Operations Insight services, and to allow services to communicate internally. For a full list of routes, run the oc get routes command on a deployed instance of Netcool Operations Insight.
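For example, assuming a placeholder namespace and route name, the following commands list the routes and retrieve the hostname of a single route; the actual route names depend on your deployment:

    oc get routes -n <namespace>                                            # lists all routes and their hostnames
    oc get route <route-name> -n <namespace> -o jsonpath='{.spec.host}'     # prints the hostname of one route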

Storage

Storage for the cloud native Netcool Operations Insight components must be created before you deploy Netcool Operations Insight on OpenShift. For more information, see Hybrid storage.
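As a quick check before you deploy, you can confirm that suitable storage classes and persistent volumes exist; the required storage classes and sizes are described in Hybrid storage:

    oc get storageclass    # lists the storage classes that are available in the cluster
    oc get pv              # lists the persistent volumes, if storage is statically provisioned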