Installing and configuring cluster logging

IBM Cloud Pak for Integration supports both Red Hat OpenShift cluster logging and user-defined logging solutions.

For running pods, you can view the logs that are available in the OpenShift console. Because those logs are not retained after a pod restarts or is deleted, you need to install a logging solution for your cluster to make logging data persistent.
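
For example, you can also view a running pod's logs from the CLI (the pod and namespace names here are placeholders):

oc logs <pod-name> -n <namespace>
oc logs -f <pod-name> -n <namespace>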

The following procedures require logging in to either the OpenShift web console or CLI.

Installing Red Hat OpenShift cluster logging

To install OpenShift cluster logging, begin by following the procedure in Installing cluster logging in the Red Hat OpenShift documentation.
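
As an illustration only, installing the Red Hat OpenShift Logging Operator from the CLI typically involves creating a Subscription in the openshift-logging namespace. This sketch assumes that the namespace and an OperatorGroup already exist, as described in the Red Hat procedure, and the channel name varies by OpenShift release, so confirm it against that documentation:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: stable          # confirm the channel for your OpenShift version
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace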

Configuring the custom resource for cluster logging

This section offers guidance on common settings for cluster logging in Cloud Pak for Integration. For detailed guidance, see Red Hat OpenShift cluster logging.

Minimal install: If this is for a proof of concept where data loss or log loss is not a concern, and the cluster has limited resources, you can run a single-node Elasticsearch cluster. To do this, set redundancyPolicy to ZeroRedundancy and nodeCount to 1, as shown in the following snippet. If the cluster has no persistent storage and you still want to test the logging setup, you can set storage to an empty object ({}), which means Elasticsearch uses ephemeral storage.

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 1
      storage: {}
      redundancyPolicy: ZeroRedundancy
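
After saving the custom resource to a file (the file name here is arbitrary), you can apply it and watch the logging pods start:

oc apply -f cluster-logging-instance.yaml
oc get pods -n openshift-logging -w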

Example custom resource: This is an example ClusterLogging custom resource snippet for deploying cluster logging using the ibmc-block-gold RWO storage class:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      type: fluentd
      fluentd: {}
  curation:
    curator:
      schedule: "30 3 * * *"
    type: curator
  logStore:
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
      storage:
        size: 200G
        storageClassName: ibmc-block-gold
    retentionPolicy:
      application:
        maxAge: 7d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      replicas: 1
    type: kibana
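
After applying the resource, you can confirm that the Elasticsearch nodes are running and that persistent volume claims were bound from the ibmc-block-gold storage class. The label selector shown here is the one commonly set by the logging operator, but it may vary by version:

oc get pods -n openshift-logging -l component=elasticsearch
oc get pvc -n openshift-logging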

Deploying components individually: If you do not want to deploy all the components of the OpenShift cluster logging resource, you can install only the ones you want. For example, this snippet deploys only the fluentd collector:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      type: fluentd
      fluentd: {}
  managementState: Managed
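
With this minimal resource, only the collector pods should appear in the openshift-logging namespace. A quick check follows; note that the exact daemon set name (for example, fluentd or collector) varies by logging version:

oc get daemonset -n openshift-logging
oc get pods -n openshift-logging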

Verifying cluster logging

Verify that Elasticsearch responds at the cluster IP address of the service by running the following commands. Replace the placeholders with the name of an Elasticsearch pod and the cluster IP address of the elasticsearch service. Make sure that you are logged in to the OpenShift cluster so that a token is available:

token=$(oc whoami -t)
oc exec <THE_POD_NAME> -n openshift-logging -- curl -sS -k -H "Authorization: Bearer ${token}" https://<THE_CLUSTER_IP_ADDRESS>:9200/_cat/health

You should get output similar to this example:

1611854452 17:20:52 elasticsearch green 3 3 414 207 0 0 0 0 - 100.0%
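
If you need to look up values for the <THE_POD_NAME> and <THE_CLUSTER_IP_ADDRESS> placeholders, commands like the following can help. The component=elasticsearch label is the one commonly set by the logging operator, but verify it for your version:

oc get pods -n openshift-logging -l component=elasticsearch
oc get service elasticsearch -n openshift-logging -o jsonpath='{.spec.clusterIP}'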

Accessing cluster logging in IBM Cloud Pak Platform UI

  1. If you are not already logged in to the OpenShift web console or CLI, log in now.

  2. Log in to IBM Cloud Pak Platform UI.

  3. Navigate to the instance view that lists the instances for which you need to access logging.

  4. Click Logs.

  5. By default, no index patterns are created, and therefore Kibana does not show any logs from the instance. To see the logs, create an index pattern of app-* in Kibana.

Exposing cluster logging

  1. Extract the CA certificate by running oc extract secret/elasticsearch --to=. --keys=admin-ca -n openshift-logging

  2. Create a route file called es-route.yaml with this snippet:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: elasticsearch
  namespace: openshift-logging
spec:
  host:
  to:
    kind: Service
    name: elasticsearch
  tls:
    termination: reencrypt
    destinationCACertificate: |

  3. To add the CA certificate content to the YAML file and create the route, run:

cat ./admin-ca | sed -e "s/^/      /" >> es-route.yaml
oc create -f es-route.yaml

  4. Check that the route is working as expected:

token=$(oc whoami -t)
routeES=$(oc get route elasticsearch -n openshift-logging -o jsonpath={.spec.host})
curl -sS -k -H "Authorization: Bearer ${token}" "https://${routeES}/_cat/health"
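
You should see the same kind of health output as in the earlier verification step. As an additional check, you can list the indices through the route; this uses the standard Elasticsearch _cat API:

curl -sS -k -H "Authorization: Bearer ${token}" "https://${routeES}/_cat/indices?v"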

Installing the cluster log forwarder

For detailed guidance on how to set up a log forwarder, see Forwarding logs to external third-party logging systems in the Red Hat OpenShift documentation.
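
As a sketch of what a forwarder can look like, a ClusterLogForwarder custom resource defines outputs and pipelines. The output name and URL below are hypothetical, and the supported output types and options depend on your logging version, so confirm them against the Red Hat documentation:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: remote-elasticsearch                    # hypothetical output name
      type: elasticsearch
      url: https://elasticsearch.example.com:9200   # hypothetical external endpoint
  pipelines:
    - name: forward-app-logs
      inputRefs:
        - application                               # forward application logs only
      outputRefs:
        - remote-elasticsearch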

Using a custom logging solution

When using a custom logging solution, configure the loggingUrl parameter of the Platform Navigator custom resource. This allows the deployment interface to link to the logging stack in the UI. For more information, see "Custom resource values" in Using the Platform UI.
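
As a sketch only, assuming loggingUrl is a field under spec (confirm the exact schema in "Custom resource values"), the setting in the Platform Navigator custom resource might look like the following, where the metadata values and URL are placeholders:

apiVersion: integration.ibm.com/v1beta1
kind: PlatformNavigator
metadata:
  name: integration-navigator
  namespace: integration
spec:
  loggingUrl: https://kibana.example.com   # hypothetical URL of your logging UI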

After the configuration is complete, you can access persistent logging by clicking Logs in the overflow menu of each instance that is provisioned in the Platform UI. To find your instances, open the navigation menu from the common header and click Integration instances.