Collecting performance data for IBM System Dashboard

The IBM System Dashboard is a performance monitoring tool that is distributed with Content Services. The System Dashboard displays data that administrators can use to proactively identify and resolve potential performance problems. The performance data can also be archived for management reporting and trend analysis.

About this task

You can collect data from a Cloud Pak for Business Automation deployment that includes the Content Platform Engine (CPE) by using the Archiving Manager. The Archiving Manager is a utility that is distributed with IBM System Dashboard and gathers the data that is needed for later analysis.

The Archiving Manager (archiver.jar) must be run from within the CPE container. The exported data is then copied to a local machine where the System Dashboard can access it. Archived data looks identical in the dashboard to data that is streamed into the dashboard directly.

A script is needed to run archiver.jar, and an XML file is needed to designate the servers from which data is collected within the CPE container. To collect data from multiple CPE containers, run archiver.jar in each container.

Procedure

  1. Copy all the Archiving Manager files into a folder (archiver) on a local client that has access to the cluster.

    For more information, see Installing the System Dashboard.
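
    For example, the local archiver folder might contain the following files before you copy it to the cluster (archiver.jar comes from the System Dashboard installation; archiver.sh and cluster.xml are the example files that are shown in step 6, and the exact contents depend on your installation):

    ls ./archiver
    archiver.jar  archiver.sh  cluster.xml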

  2. Establish a connection to the OCP cluster from the client by using the installed oc command-line tools.
  3. Log in to the OCP cluster from a bastion host or the infrastructure node.
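
    For example, a token-based login might look like the following, where the token, API server URL, and project name are placeholders for your environment:

    oc login --token=<token> --server=https://<api-server>:6443
    oc project <namespace>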
  4. Set the following environment variables.
    export namespace=<namespace>
    export podname=$(oc get pod | grep cpe-deploy | awk '{print $1}')
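
    To confirm that the variables resolve to the expected namespace and pod, you can print them before you continue:

    echo $namespace $podname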
  5. Copy the files to the CPE text extraction (cpe-textextstore) volume or to any available persistent volume for the CPE, except the cpe-cfgstore volume, which holds the Liberty overrides.
    oc cp ./archiver $namespace/$podname:/opt/ibm/textext

    Make sure that the volume has enough space allocated to hold the number of archive files that you want to keep.
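
    As a quick check (not required by the procedure), you can display the free space on the volume before you copy the files, assuming the same /opt/ibm/textext mount path that is used in this procedure:

    oc exec $podname -- df -h /opt/ibm/textext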

  6. Connect to the CPE pod and start the archiver.sh script.
    oc exec $podname -i -t -- bash
    cd /opt/ibm/textext/archiver
    chmod 770 archiver.sh
    ./archiver.sh &

    The following archiver.sh script sets values for how long to capture data (-t 12:00, 12 hours), the collection interval (-i 60, 60 seconds), and how often the output is rolled over to a new file (-n 06:00, every six hours). For more information, see Archiving Manager.

    #!/bin/sh
    # Script assumes archiver.jar and cluster.xml exist on the volume for the cpe-textextstore in the /opt/ibm/textext/archiver directory.
     
    cd /opt/ibm/textext
    mkdir /opt/ibm/textext/archiver/logs
    chmod -R 770 /opt/ibm/textext/archiver/logs
     
    JAVA_HOME=/opt/ibm/java/jre/bin

    $JAVA_HOME/java -jar /opt/ibm/textext/archiver/archiver.jar -d /opt/ibm/textext/archiver/logs -t 12:00 -n 06:00 -i 60 /opt/ibm/textext/archiver/cluster.xml
     
    exit 0

    The example cluster.xml file that is used by the script sets the Host to localhost so that archives are captured from the local CPE container. Similar XML files can be created in the System Dashboard.

    <?xml version='1.0' encoding='UTF-8'?>
    <Clusters>
       <Cluster>
          <Name>local</Name>
          <Interval>60</Interval>
          <MaxDataPoints>1500</MaxDataPoints>
          <Host>localhost</Host>
       </Cluster>
    </Clusters>
    Note: If the persistent volume uses storage with a reclaimPolicy of Retain, the files remain in the persistent volume even when the CPE container is stopped. All the pods can see the files that are copied to that location because the volumes that are used with the CPE instances are mounted as RWX. The archived files can be identified only by their timestamps because they are all named after the localhost from which they are gathered.
  7. When you are ready, copy the archiver files from the CPE pod to a local directory.
    The following commands copy the files to a directory called archiver_logs.
    mkdir ./archiver_logs
    oc cp $namespace/$podname:/opt/ibm/textext/archiver/logs ./archiver_logs

    If you want to examine the data while collection is still in progress, you can copy the files repeatedly at various intervals, as in the sketch that follows.
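
    For example, a minimal loop like the following (assuming the $namespace and $podname variables from step 4) copies a fresh snapshot of the logs into a new timestamped directory every hour:

    while true; do
      dir=./archiver_logs/$(date +%Y%m%d-%H%M)
      mkdir -p "$dir"
      oc cp $namespace/$podname:/opt/ibm/textext/archiver/logs "$dir"
      sleep 3600
    done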

  8. Use the System Dashboard to process the archive files.

    You can open an archive file for analysis within the Dashboard. From the File menu, click the Open Archives option. After the archive file is loaded, its data is placed in a virtual cluster named Archives.

    For more information, see Using System Dashboard for Enterprise Content Management.