Monitoring and logging

IBM Storage Fusion provides multiple ways to monitor the health of the hardware and software.

  • Events are fed into OpenShift® event manager

    Example IBM Storage Fusion events:

    • Hardware events for IBM Storage Fusion HCI System, including switches and nodes
    • Events related to the data services of IBM Storage Fusion, such as backup and restore

    As IBM Storage Fusion feeds events into the OpenShift event manager, they show up in any integration that you use to monitor OpenShift events. A command-line example follows this list.

  • Metrics are fed into Prometheus

    IBM Storage Fusion feeds metrics into the OpenShift Prometheus instance.

    The advantage is that the metrics are available in any monitoring tool that you use with Prometheus, such as Grafana. IBM Storage Fusion provides a set of default Grafana dashboards and an access link to the interface.

  • IBM Storage Fusion provides logs for its services
  • IBM Storage Fusion provides a user interface that shows the status of the system
  • IBM Storage Fusion provides CRs that reflect the status of the system

Events and metrics provide observability that can be fed into other monitoring systems, such as Grafana and ELK.
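
For example, because IBM Storage Fusion events are standard OpenShift events, you can view them with the OpenShift CLI. The following is a minimal sketch; the namespace name is an assumption, so substitute the namespace that your installation uses:

# List recent IBM Storage Fusion events, newest last. The namespace
# name is illustrative; adjust it for your installation.
oc get events -n ibm-spectrum-fusion-ns --sort-by='.lastTimestamp'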

Important:
  • IBM Storage Fusion does not install Grafana and ELK as part of its installation.
  • If you already have monitoring tools that you use in your environments, then you can feed IBM Storage Fusion observability into these existing tools. For example, if you use Grafana to monitor your OpenShift environment, then you can import IBM Storage Fusion metrics and dashboards into that instance.

Monitoring

Metrics are collected from the storage, network, and compute components and sent to Prometheus; Grafana then reads them from Prometheus for visualization.

Follow these steps to access IBM Storage Fusion information in Grafana. After Grafana is installed and the Prometheus data source is configured, you can import the IBM Storage Fusion dashboards as follows:
Important:
  • Ensure that you have a working Grafana instance.
  • Ensure that you are on OpenShift Container Platform version 4.12 or later.
  • Steps 1 - 5 explain how to configure the Prometheus data source. If it is already configured, go directly to step 6.
Important: Ensure that you switch to the correct namespace or project before you follow the steps. Run the following command to change the namespace:
oc project <your-grafana-namespace>
  1. Create a ServiceAccount associated with your Grafana instance, and replace the namespace placeholder:
    oc create serviceaccount grafana -n <your-grafana-namespace>
  2. Create a cluster role binding for the ServiceAccount that is associated with your data source in Grafana, so that it can read from the cluster Prometheus instance. Replace the namespace placeholder:
    oc create clusterrolebinding grafana-cluster-monitoring-view \
      --clusterrole=cluster-monitoring-view \
      --serviceaccount=<your-grafana-namespace>:grafana
  3. Create a secret of type service-account-token by using the following YAML:
    apiVersion: v1
    kind: Secret
    type: kubernetes.io/service-account-token
    metadata:
      name: grafana-secret
      annotations:
        kubernetes.io/service-account.name: grafana
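    Save the YAML to a file and apply it in your Grafana namespace. The file name here is illustrative:
    # Create the token secret next to the grafana ServiceAccount.
    oc apply -f grafana-secret.yaml -n <your-grafana-namespace>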
  4. Run the following command to get a token.
    oc get secret grafana-secret -ojsonpath='{.data.token}' | base64 -d
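    Optionally, confirm that the token can query the cluster Prometheus route before you configure Grafana. The following is a sketch, assuming curl is available on your workstation:
    # Read the token and the Prometheus route host, then issue a test query.
    # The -k flag mirrors the Skip TLS Verify setting used later in Grafana.
    TOKEN=$(oc get secret grafana-secret -n <your-grafana-namespace> -ojsonpath='{.data.token}' | base64 -d)
    HOST=$(oc get route prometheus-k8s -n openshift-monitoring -ojsonpath='{.spec.host}')
    curl -k -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query?query=up"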
  5. Configure Grafana with a Prometheus datasource and authentication as follows:
    1. Log in to the Grafana user interface.
    2. From the menu, go to Configuration and select Data Sources.
    3. Click Add data source and select Prometheus.
    4. Update the following values:
      • URL
        Run the following command to get the host of the cluster Prometheus route, and enter it with an https:// prefix as the URL:
        oc get route prometheus-k8s -n openshift-monitoring
      • Auth
        Enable the Skip TLS Verify parameter under the Auth section.
      • Custom HTTP Headers
        Header: Authorization
        Value: Bearer <token-from-step-4>
      • HTTP Method: POST

  6. Download the JSON file for each of the IBM Storage Fusion dashboards.
  7. Import the files into the Grafana dashboard. For the procedure to import files into the Grafana dashboard, see Importing files.
    Note: Select the Upload JSON file option and upload the JSON files that you downloaded in step 6.
  8. Verify that the Grafana dashboards are imported correctly.
  9. Click the Dashboards menu and choose a newly imported dashboard.
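
If you prefer to script the import instead of using the Grafana user interface, you can post the downloaded JSON to Grafana's HTTP API. The following is a sketch, not the documented IBM procedure; it assumes jq is installed, that <your-grafana-host> is your Grafana route, and that <grafana-api-key> is an API token with permission to create dashboards:

# Wrap the downloaded dashboard JSON (here saved as dashboard.json) in the
# payload that the /api/dashboards/db endpoint expects, then POST it.
jq -n --slurpfile d dashboard.json '{dashboard: $d[0], overwrite: true}' \
  | curl -sk -X POST "https://<your-grafana-host>/api/dashboards/db" \
      -H "Authorization: Bearer <grafana-api-key>" \
      -H "Content-Type: application/json" \
      --data-binary @-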

Viewing dashboards

In Grafana, you can view separate dashboards for storage, network, and compute. The following metrics are captured for each component and represented graphically:
Storage          Network               Compute
Throughput       High-speed switches   Energy status
Disk read time   Management switches   Thermal status
Read IOPs        Spine switches
Write IOPs
Disk capacity

An example Grafana screen shows the IBM Storage Fusion networking dashboard for a high-speed switch (the snmp-exporter-hspeed1 switch).

An example Grafana screen shows the IBM Storage Fusion storage dashboard, which includes Throughput, Disk Read Time, Write IOPs, Read IOPs, and Disk Capacity.

For more information about Grafana, see https://grafana.com/docs/. For more information about Prometheus, see https://prometheus.io/docs/introduction/overview/.

Logs and audit logs

IBM Storage Fusion logs and audit logs are collected in Elasticsearch and viewed in Kibana. You can view IBM Storage Fusion HCI System component logs, as well as OpenShift Container Platform logs and audit logs, in Kibana. For more information about searching the logs, see https://docs.openshift.com/container-platform/4.12/logging/cluster-logging-visualizer.html.

To open Kibana from the IBM Storage Fusion HCI System user interface, click the Kibana outbound link. As a prerequisite, you must create a basic index pattern. For the steps to create the basic index pattern, see Creating an audit index pattern and visualize audit data in Kibana.
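
You can also find the Kibana URL from the command line. The following is a minimal sketch, assuming the OpenShift logging stack is deployed in its default openshift-logging namespace:

# Print the host of the Kibana route exposed by OpenShift logging.
oc get route kibana -n openshift-logging -ojsonpath='{.spec.host}'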

For more information about Kibana, see https://www.elastic.co/guide/en/kibana/index.html.

To export OpenShift Container Platform logs from Kibana or Elasticsearch, see https://access.redhat.com/solutions/4599791.
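
For a quick look at raw log documents from inside the cluster, you can also query Elasticsearch directly. The following is a sketch under stated assumptions: the OpenShift logging Elasticsearch pods run in the openshift-logging namespace, they ship the es_util helper, and application logs land in the app-* indices:

# Pick an Elasticsearch pod, then run a small search against the app-* indices.
ES_POD=$(oc get pods -n openshift-logging -l component=elasticsearch -o name | head -1)
oc exec -n openshift-logging -c elasticsearch "$ES_POD" -- \
  es_util --query='app-*/_search?size=5&pretty'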