Maximo Application Suite
Customer-managed

Configuring Red Hat OpenShift cluster monitoring

Maximo® Application Suite applications provide application-level metrics and dashboards for monitoring various aspects of application health and performance. Maximo Application Suite uses the Prometheus monitoring stack within OCP to store application-level metrics, and uses Grafana to render those metrics in integrated dashboards.

Red Hat® OpenShift® Container Platform (OCP) is preconfigured with a Prometheus-based monitoring stack that collects resource-level metrics from compute nodes in the cluster, for example compute node CPU, memory, disk, and I/O metrics. Maximo Application Suite applications cannot use the preconfigured Prometheus cluster to collect Maximo Application Suite application metrics because it is reserved for OCP cluster metrics. Instead, a second Prometheus cluster can be enabled and configured to collect metrics from user-defined projects.

For more information about the Red Hat OpenShift monitoring stack, see Red Hat OpenShift Container Platform: Monitoring overview.

Tip: This task maps to the following Ansible role: cluster_monitoring. For more information, see IBM Maximo Application Suite installation with Ansible collection.

Before you begin

Consider how many days to store Prometheus metrics. The number of retention days determines how much storage to configure for both the base and user workload Prometheus clusters; allocate 5 GB to 10 GB of storage for each retention day. The total storage that Prometheus requires depends on the number of compute nodes in the cluster, the number of Maximo Application Suite applications that are installed, and the number of retention days.
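The sizing rule can be checked with a quick calculation. A minimal sketch, assuming the upper bound of 10 GB of storage for each retention day and the 15-day retention period that is used later in this task:

```shell
# Estimate the Prometheus volume size from the retention period.
# Assumes 10 GB per retention day (the upper bound of the guidance above).
RETENTION_DAYS=15
GB_PER_DAY=10
PROMETHEUS_STORAGE="$((RETENTION_DAYS * GB_PER_DAY))Gi"
echo "$PROMETHEUS_STORAGE"   # prints 150Gi
```

This matches the 150Gi volume request in the example YAML in the procedure.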

About this task

Use the following storage classes to configure Prometheus storage, according to the Cloud Service Provider that hosts your Red Hat OpenShift cluster:

Table 1. Storage classes

Cloud Service Provider    Prometheus Storage Class (${PROMETHEUS_STORAGE_CLASS})
On-premises               ocs-storagecluster-ceph-rbd
AWS                       ocs-storagecluster-ceph-rbd
Azure
IBM Cloud                 ibmc-block-bronze
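Before you import the YAML in the procedure, you can preview the value that replaces the ${PROMETHEUS_STORAGE_CLASS} placeholder. A minimal sketch, using the IBM Cloud storage class from Table 1 as an example:

```shell
# Choose the storage class for your Cloud Service Provider from Table 1
# (the IBM Cloud value is shown here as an example).
export PROMETHEUS_STORAGE_CLASS=ibmc-block-bronze

# Render the storageClassName line as it should appear after substitution.
cat <<EOF
storageClassName: "${PROMETHEUS_STORAGE_CLASS}"
EOF
```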

Procedure

Configure monitoring by using the Red Hat OpenShift Container Platform web console.

  1. Update the cluster-monitoring-config and user-workload-monitoring-config ConfigMaps.
    1. Click Import YAML (Plus icon).
    2. Enter the following YAML to configure both the base and user workload Prometheus clusters.
      • Replace ${PROMETHEUS_STORAGE_CLASS} with the Prometheus Storage Class from the preceding table for the Cloud Service Provider that hosts your installation.
      • Any storage class that supports RWO access mode and file system volume mode is sufficient. The I/O requirements for the Prometheus persistent volumes are not significant.
      • In the example YAML, both the base and user workload Prometheus clusters are configured to retain metrics for 15 days.
      
      ---
      apiVersion: v1
      kind: ConfigMap
      data:
        config.yaml: |
          prometheusOperator:
            baseImage: quay.io/coreos/prometheus-operator
            prometheusConfigReloaderBaseImage: quay.io/coreos/prometheus-config-reloader
            configReloaderBaseImage: quay.io/coreos/configmap-reload
          prometheusK8s:
            retention: "15d"
            baseImage: openshift/prometheus
            volumeClaimTemplate:
              spec:
                storageClassName: "${PROMETHEUS_STORAGE_CLASS}"
                resources:
                  requests:
                    storage: "150Gi"
          alertmanagerMain:
            baseImage: openshift/prometheus-alertmanager
            volumeClaimTemplate:
              spec:
                storageClassName: "${PROMETHEUS_STORAGE_CLASS}"
                resources:
                  requests:
                    storage: "20Gi"
          enableUserWorkload: true
          nodeExporter:
            baseImage: openshift/prometheus-node-exporter
          kubeRbacProxy:
            baseImage: quay.io/coreos/kube-rbac-proxy
          kubeStateMetrics:
            baseImage: quay.io/coreos/kube-state-metrics
          grafana:
            baseImage: grafana/grafana
          auth:
            baseImage: openshift/oauth-proxy
      metadata:
        name: cluster-monitoring-config
        namespace: openshift-monitoring
      
      
      ---
      apiVersion: v1
      kind: ConfigMap
      data:
        config.yaml: |
          prometheus:
            retention: "15d"
            volumeClaimTemplate:
              spec:
                storageClassName: "${PROMETHEUS_STORAGE_CLASS}"
                resources:
                  requests:
                    storage: "150Gi"
      metadata:
        name: user-workload-monitoring-config
        namespace: openshift-user-workload-monitoring
      
    3. Click Create.
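If you prefer the oc CLI to the web console, the same ConfigMaps can be saved to files and applied from a terminal. A sketch for the user workload ConfigMap only; the file name is illustrative, replace ${PROMETHEUS_STORAGE_CLASS} with a value from Table 1 before you apply, and the oc apply step (commented out here) requires a login to your cluster:

```shell
# Write the user workload monitoring ConfigMap to a file (illustrative name).
# The ${PROMETHEUS_STORAGE_CLASS} placeholder is kept literal by the quoted
# heredoc delimiter; replace it with a value from Table 1 before applying.
cat > user-workload-monitoring-config.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      retention: "15d"
      volumeClaimTemplate:
        spec:
          storageClassName: "${PROMETHEUS_STORAGE_CLASS}"
          resources:
            requests:
              storage: "150Gi"
EOF

# Apply the ConfigMap against your cluster (requires oc login):
# oc apply -f user-workload-monitoring-config.yaml
```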
  2. On the Workloads > StatefulSets page, switch to the openshift-user-workload-monitoring project and wait for the prometheus-user-workload StatefulSet to show two pods in the Running state.
    No configuration is required in Maximo Application Suite. PodMonitor and ServiceMonitor resources that are created by Maximo Application Suite and Maximo Application Suite applications are automatically registered with the user workload Prometheus cluster, which then scrapes the Maximo Application Suite metrics.