For Linux platforms

Analyzing application logs on Red Hat OpenShift Container Platform with Elasticsearch, Fluentd, and Kibana

You can deploy the open source Elasticsearch, Fluentd, and Kibana stack on a Kubernetes cluster to aggregate application logs on Red Hat® OpenShift® Container Platform 4.16 and analyze these logs on the Kibana dashboard.

Pod processes running in Kubernetes frequently produce application logs. To effectively manage the application log data and ensure that no loss of log data occurs when a pod stops, deploy log aggregation tools on the Kubernetes cluster. Log aggregation tools help you persist, search, and visualize the log data that is gathered from the pods across the cluster.

The following information describes how to deploy the Elasticsearch, Fluentd, and Kibana (EFK) stack by using the Elasticsearch Operator and the Red Hat OpenShift Logging Operator. Use this preconfigured EFK stack to aggregate all container logs. After a successful installation, the EFK pods exist inside the openshift-logging namespace of the cluster. You can view the application log data on the Kibana dashboard.

Installing Red Hat OpenShift Container Platform logging

  1. Install the Red Hat OpenShift logging component.

    You can use either the Elasticsearch Operator or the Loki Operator to manage the default log storage. To use the Loki Operator to manage the default log storage, see Installing Logging. To use the Elasticsearch Operator to manage the default log storage, complete the following steps.

    • Before you deploy the example Cluster Logging instance YAML in the guide, install the Red Hat OpenShift Elasticsearch Operator into all namespaces in your cluster.
    • Ensure that you set up storage for Elasticsearch through persistent volumes. When you deploy the .yaml file for the Red Hat OpenShift logging instance, the Elasticsearch pods that are created automatically search for persistent volumes to bind to. If no persistent volumes are available, the Elasticsearch pods are stuck in a pending state.

    In-memory storage is also possible when you remove the storage definition from the Red Hat OpenShift logging instance .yaml file, but this in-memory storage is not suitable for production.
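    If you use the Elasticsearch Operator, the ClusterLogging instance .yaml file that you deploy resembles the following sketch. The node count, storage class name, and storage size shown here are placeholder values only; adjust them to match the capacity and storage classes of your own cluster.
    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      managementState: Managed
      logStore:
        type: elasticsearch
        elasticsearch:
          nodeCount: 3
          storage:
            storageClassName: <your-storage-class>   #modify this value to a storage class that exists in your cluster
            size: 200G
          redundancyPolicy: SingleRedundancy
      visualization:
        type: kibana
        kibana:
          replicas: 1
      collection:
        logs:
          type: fluentd
          fluentd: {}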

  2. Verify that the installation completes without any errors and that the Red Hat OpenShift logging, Elasticsearch, Fluentd, and Kibana pods are running in the openshift-logging namespace. The number of pods that are running for each of the EFK components varies depending on the configuration that is specified in the ClusterLogging custom resource (CR). The following example shows the pods that are running in the openshift-logging namespace.
    oc get pods -n openshift-logging
    
    NAME                                            READY   STATUS      RESTARTS   AGE
    cluster-logging-operator-874597bcb-qlmlf        1/1     Running     0          150m
    curator-1578684600-2lgqp                        0/1     Completed   0          4m46s
    elasticsearch-cdm-4qrvthgd-1-5444897599-7rqx8   2/2     Running     0          9m6s
    elasticsearch-cdm-4qrvthgd-2-865c6b6d85-69b4r   2/2     Running     0          8m3s
    collector-rmdbn                                 1/1     Running     0          9m5s
    collector-vtk48                                 1/1     Running     0          9m5s
    kibana-756fcdb7f-rw8k8                          2/2     Running     0          9m6s

The Red Hat OpenShift logging instance also exposes a route for external access to the Kibana console, as shown in the following example.

oc get routes -n openshift-logging

NAME     HOST/PORT                                               PATH   SERVICES   PORT    TERMINATION          WILDCARD
kibana   kibana-openshift-logging.apps.host.kabanero.com                kibana     <all>   reencrypt/Redirect   None
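If you need only the host name, one way to retrieve it is to query the route with a JSONPath expression, as shown in the following example.
oc get route kibana -n openshift-logging -o jsonpath='{.spec.host}'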

Parsing JSON container logs

In Red Hat OpenShift, the Red Hat OpenShift logging Fluentd collectors capture the application container logs and put each log in the message field of a Fluentd JSON document as a string. If you output the logs in JSON format, they are nested in the message field of the Fluentd JSON document. To use the JSON log data inside a Kibana dashboard, the individual fields inside the nested JSON logs must be parsed.

You can parse these nested JSON application container logs by deploying a Cluster Log Forwarder instance. The deployed Cluster Log Forwarder instance copies the nested JSON logs into a separate structured field inside the Fluentd JSON document. The individual fields from the JSON container log can be accessed in the structured.<field_name> format.
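As a simplified illustration only (values abridged, not an exact record), a parsed Liberty log entry in the log store might resemble the following document, with the original JSON string kept in the message field and the parsed copy placed in the structured field.
{
  "message": "{\"type\":\"liberty_message\",\"loglevel\":\"INFO\",\"message\":\"...\"}",
  "structured": {
    "type": "liberty_message",
    "loglevel": "INFO",
    "message": "..."
  },
  "kubernetes": {
    "namespace_name": "liberty-app",
    "labels": {
      "logFormat": "liberty"
    }
  }
}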

Different products or applications can use the same JSON field names to represent different data types. To avoid conflicting JSON fields, the Cluster Log Forwarder instance requires JSON container logs from different products or applications to be separated into unique indexes. In the following instructions, the ClusterLogForwarder CR creates these unique indexes by using a label that is attached to the service of your application.

  1. Add the logFormat: liberty label to your WebSphereLibertyApplication CR. The Cluster Log Forwarder instance uses this label later to create a unique index for the application logs of the container.
    kind: WebSphereLibertyApplication
    apiVersion: liberty.websphere.ibm.com/v1
    metadata:
      name: <your-liberty-app>
      labels:
        logFormat: liberty
    ....
  2. Restart your application deployment to include the updated label in the service and pod of your application.
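    If your application runs as a Deployment, one way to trigger the restart is the following command; replace the placeholder names with the name and namespace of your own application.
    oc rollout restart deployment/<your-liberty-app> -n <your-app-namespace>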
  3. Create the following cluster-logging-forwarder.yaml file to configure a Cluster Log Forwarder instance that parses your JSON container logs.
    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      namespace: openshift-logging
      name: instance
    spec:
      inputs:
        - name: liberty-logs
          application:
            namespaces:
              - liberty-app   #modify this value to be your own app namespace
      outputDefaults:
        elasticsearch:
          structuredTypeKey: kubernetes.labels.logFormat
          structuredTypeName: nologformat
      pipelines:
        - name: parse-liberty-json
          inputRefs:
            - liberty-logs
          outputRefs:
            - default
          parse: json

The .yaml file creates a parse-liberty-json pipeline for the ClusterLogForwarder kind. This pipeline takes its input from the liberty-logs input reference, which selects all the container logs from the liberty-app namespace. The pipeline outputs the container logs to the Red Hat OpenShift default Elasticsearch log store for the cluster. The parse: json definition enables the JSON log parsing.

The configured outputDefaults.elasticsearch.structuredTypeKey parameter builds a unique index for the container logs by adding the app- prefix to the value of the logFormat label on the container. Previously, the logFormat: liberty label was added to the service of your WebSphereLibertyApplication CR. Therefore, the log files that are forwarded to the Elasticsearch default log store follow the app-liberty-* index pattern. If no logFormat label exists in your application container, the outputDefaults.elasticsearch.structuredTypeName parameter provides a fallback index name.

Deploy the Cluster Log Forwarder instance by using the following command:
oc create -f cluster-logging-forwarder.yaml
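After the command completes, you can confirm that the instance exists with the following command; the name instance matches the metadata.name value in the .yaml file.
oc get clusterlogforwarder instance -n openshift-logging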

For more information about parsing JSON logs, see the Red Hat OpenShift guide on enabling JSON logging.

Viewing application logs in Kibana

In cases where the application server provides the option, output application logs in JSON format. With the JSON format, you can take advantage of the Kibana dashboard functions. Kibana is then able to process the data from each field of the JSON object to create customized visualizations for that field.
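For Liberty, one way to enable JSON console logging is to set the Liberty logging environment variables in the env list of your WebSphereLibertyApplication CR, as in the following sketch; the list of sources shown here is only an example, so include the sources that you need.
spec:
  env:
    - name: WLP_LOGGING_CONSOLE_FORMAT
      value: "json"
    - name: WLP_LOGGING_CONSOLE_SOURCE
      value: "message,trace,ffdc,accessLog"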
  1. View the Kibana dashboard by using the Kibana route URL.
    Run the following command to get the Kibana route URL.
    oc get routes -n openshift-logging
  2. Log in to the Kibana dashboard with your Kubernetes user ID and password.

    The browser redirects you to Management > Create index pattern on the Kibana dashboard.

  3. For the index pattern field, enter the app-liberty-* value to select all the Elasticsearch indexes used for your application logs.

    The following image shows the Create index pattern page where you enter the index value.

    Kibana dashboard page where you enter the index pattern
  4. Click Discover to view the application logs that are generated by the deployed application and that are stored in Elasticsearch indexes that match the app-liberty-* pattern.

    The following image displays the application logs for the app-liberty-* value.

    Kibana dashboard page that displays the application logs for the app-liberty-* value
  5. Expand an individual log file entry to see the structured.* formatted individual fields, parsed and copied out of the nested JSON log entry.

    The following image displays these structured.* formatted individual fields.

    Kibana dashboard page that displays the expanded log file entry
You can import and use the following Kibana dashboards for WebSphere® Application Server Liberty logging events:
To import a dashboard and its associated objects, complete the following steps:
  1. Click Management > Saved Objects > Import.
  2. Select the dashboard file and click Yes, overwrite all.
  3. Click Dashboard and view the log files.

The following image displays the logs on an imported Kibana dashboard. Imported Kibana dashboard page that displays the logs; visualizes message, trace, and FFDC information (by namespace, pod, container, host, user directory, or server); and shows counts of fatal errors, errors, warnings, and system errors

Configuring and uninstalling Red Hat OpenShift logging

To change the installed EFK stack, edit the ClusterLogging CR of the deployed Red Hat OpenShift logging instance.
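For example, if your instance uses the default name instance, you can edit the CR directly from the command line.
oc edit clusterlogging instance -n openshift-logging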

To uninstall the EFK stack, remove the Red Hat OpenShift logging instance from the Red Hat OpenShift Logging Operator Details page.
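Alternatively, you can remove the instance from the command line; the following example assumes the default instance name.
oc delete clusterlogging instance -n openshift-logging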