Monitoring Tomcat applications in a Kubernetes environment

Before you monitor Tomcat applications in IBM Cloud Private or OpenShift, you must connect the data collector to the Monitoring server by creating a secret. Then, you can update your application deployment to monitor the Tomcat applications.

Before you begin

About this task

Configure the Tomcat lightweight data collector to connect to the server by creating a secret, and then update the application deployment to use the Docker image that you build. You create the secret from the global.environment file and keyfiles that are extracted from the Monitoring configuration package, and you mount this secret when you deploy the application as a Kubernetes deployment.

Procedure

  1. Go to the ibm-cloud-apm-dc-configpack directory where you extracted the configuration package in Obtaining the server configuration information, and run the following command to create a secret that is used to connect to the server. For example, name the secret icam-server-secret:

     kubectl -n my_namespace create secret generic icam-server-secret \
     --from-file=keyfiles/keyfile.jks \
     --from-file=keyfiles/keyfile.p12 \
     --from-file=keyfiles/keyfile.kdb \
     --from-file=global.environment
    

    where my_namespace is the namespace where you want to create the secret.
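     You can optionally confirm that the secret was created with the expected keys by describing it, for example:

      kubectl -n my_namespace describe secret icam-server-secret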

  2. Update the Dockerfile of your Tomcat application to include the Tomcat lightweight data collector details. The following is a sample Dockerfile:

       RUN chmod a+w <tomcat_home>/bin/
       ADD <path_to_tomcatdc_folder> /opt/tomcat_datacollector
       # add edited silent config file
       ADD silent_config_tomcat_dc.txt <tomcat_datacollector>/bin/
       RUN chmod +x <tomcat_datacollector>/bin/*.sh
       RUN <tomcat_datacollector>/bin/config_dc.sh -silent
       WORKDIR <tomcat_datacollector>/runtime
       RUN cp <tomcat_datacollector>/runtime/<TestServer>/setenv.sh <tomcat_home>/bin
       RUN chmod +x <tomcat_home>/bin/setenv.sh
       # Optional commands if JMX authentication needs to be enabled
       ADD jmxremote.access <tomcat_home>/conf
       ADD jmxremote.password <tomcat_home>/conf
       RUN chmod -R 600 <tomcat_home>/conf/jmxremote.password
       RUN chmod -R 600 <tomcat_home>/conf/jmxremote.access
    

    where:

    • path_to_tomcatdc_folder is the folder that is extracted from the downloaded tomcat_datacollector.tgz file
    • tomcat_datacollector is the path to the Tomcat lightweight data collector home directory
    • tomcat_home is the Tomcat server path
    • TestServer must be the same value that is specified in the silent configuration file
    • Ensure that the jmxremote.access and jmxremote.password files are present with the appropriate credential contents
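
      For example, a minimal pair of JMX credential files might look like the following; the monitorRole role name and the password value are placeholders that you replace with your own credentials:

        # jmxremote.access: grant read-only JMX access to the role
        monitorRole readonly

        # jmxremote.password: set the password for the role
        monitorRole <password>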

      Note: You can configure the Tomcat data collector without JMX authentication by omitting the following lines:

      ADD jmxremote.access <tomcat_home>/conf
      ADD jmxremote.password <tomcat_home>/conf
      RUN chmod -R 600 <tomcat_home>/conf/jmxremote.password
      RUN chmod -R 600 <tomcat_home>/conf/jmxremote.access
      

      Note: The RUN cp <tomcat_datacollector>/runtime/<TestServer>/setenv.sh <tomcat_home>/bin command overwrites any existing setenv.sh contents in the Tomcat server image. If your existing setenv.sh file contains parameters that you want to retain, merge them with the data collector settings so that both are present in the final setenv.sh file. You can skip this step if you do not need specific parameters in setenv.sh. The sample setenv.sh contents are as follows:

      export ITCAMDCHOME=<tomcat_datacollector_path>
      export NLSPATH=$ITCAMDCHOME/toolkit/msg/%L/%N.cat
      export LD_LIBRARY_PATH=$ITCAMDCHOME/toolkit/lib/lx8266:$ITCAMDCHOME/toolkit/lib/lx8266/ttapi:$LD_LIBRARY_PATH
      export LIBPATH=$ITCAMDCHOME/toolkit/lib/lx8266:$ITCAMDCHOME/toolkit/lib/lx8266/ttapi:$LIBPATH
      export ITCAMDCVERSION=tomcat_datacollector
      export HOSTNAME=localhost
      export JRE_HOME=<Java_home_in_silent_configFile>
      export SERVER_NAME=<TestServer>
      
      export CATALINA_OPTS=" -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=10050 -Dcom.sun.management.jmxremote.rmi.port=10050 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Djava.rmi.server.hostname=127.0.0.1 $CATALINA_OPTS"
      
      export JAVA_OPTS=" -DSERVER_NAME=<TestServer> -DAppServerName=$ITCAMDCHOME/runtime/<TestServer> -javaagent:$ITCAMDCHOME/toolkit/lib/tk_nodyninst.jar -Xbootclasspath/p:$ITCAMDCHOME/toolkit/lib/bcm-bootstrap.jar -DisDeepDiveEnabled=true -Djava.security.policy=${ITCAMDCHOME}/itcamdc/etc/datacollector.policy -Dcom.ibm.tivoli.itcam.ai.runtimebuilder.inputs=$ITCAMDCHOME/runtime/<TestServer>/<TestServer>_DCManualInput.txt -Dcom.ibm.tivoli.itcam.toolkit.runtime.dir=$ITCAMDCHOME/runtime/<TestServer> -Dcom.ibm.tivoli.itcam.toolkit.ai.runtimebuilder.enable.rebuild=true  -Dam.home=$ITCAMDCHOME/itcamdc -Dappserver.platform=tomcat90 -DRUNTIME_DIR=$ITCAMDCHOME/runtime/<TestServer> -Dam.camtoolkit.gpe.dchome.directory=$ITCAMDCHOME -Djlog.propertyFileDir.CYN=$ITCAMDCHOME/runtime/<TestServer> $JAVA_OPTS"
      export TOMCAT_SERVER_DIR=<tomcat_server_home_dir>
      

      where:

    • tomcat_datacollector_path is the Tomcat data collector installation path in the container
    • Java_home_in_silent_configFile is the Java home that is specified in the silent configuration file
    • TestServer is the SERVER_NAME property in the silent configuration file
    • tomcat_server_home_dir is the Tomcat server home directory
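
     For reference, the following is a minimal sketch of the same Dockerfile with the placeholders filled in. It assumes a hypothetical base image my-tomcat-app:latest, the data collector extracted to a local tomcat_datacollector directory, and the Tomcat home and server name from the sample silent configuration file in step 3; the optional JMX authentication lines are omitted:

       FROM my-tomcat-app:latest
       RUN chmod a+w /opt/tomcat/apache-tomcat-9.0.12/bin/
       ADD tomcat_datacollector /opt/tomcat_datacollector
       # add edited silent config file
       ADD silent_config_tomcat_dc.txt /opt/tomcat_datacollector/bin/
       RUN chmod +x /opt/tomcat_datacollector/bin/*.sh
       RUN /opt/tomcat_datacollector/bin/config_dc.sh -silent
       WORKDIR /opt/tomcat_datacollector/runtime
       RUN cp /opt/tomcat_datacollector/runtime/TestServer/setenv.sh /opt/tomcat/apache-tomcat-9.0.12/bin
       RUN chmod +x /opt/tomcat/apache-tomcat-9.0.12/bin/setenv.sh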
  3. Create a silent_config_tomcat_dc.txt silent configuration file in the same directory as your Dockerfile. The following is an example of a silent configuration file for the Tomcat lightweight data collector:

       JAVA_HOME=/usr
       TT_STATUS=true
       DD_STATUS=false
       MT_STATUS=false
       SERVER_NAME=TestServer
       SERVER_HOME=/opt/tomcat/apache-tomcat-9.0.12
       SERVER_VERSION=8
       SERVER_JMX_HOSTNAME=127.0.0.1
       SERVER_JMX_PORT_NUMBER=9988
       SERVER_JMX_USER_NAME=
       SERVER_JMX_PASSWORD=
    

    where:

    • JAVA_HOME is the Java home that is used by the Tomcat server. The default value is /usr.
    • TT_STATUS is the flag to enable transaction tracking.
    • DD_STATUS and MT_STATUS are not currently used and can be left at their default values.
    • SERVER_NAME is the name of the Tomcat server that is monitored by the data collector. The default value is TestServer.
    • SERVER_HOME is the Tomcat server home directory. The default value is /opt/tomcat.
    • SERVER_VERSION is the Tomcat server version. The default value is 8. Supported values are 8 and 9.
    • SERVER_JMX_HOSTNAME is the host name where JMX for the corresponding Tomcat server is accessible.
    • SERVER_JMX_PORT_NUMBER is the port number that is configured for accessing JMX for the containerized Tomcat server.
    • SERVER_JMX_USER_NAME is the user name for authenticating the JMX connection. The default value is blank, which indicates no authentication.
    • SERVER_JMX_PASSWORD is the password of the JMX user for authenticating the JMX connection. The default value is blank, which indicates no authentication.
  4. Build and tag the new Docker image of the application, and push the new image to the private registry. Ensure that the image name includes the image group when you build and push the image, as shown here:

    docker build -t <openshift_image_registry>/<image_group>/<image-name>:<tag> .
    docker push <openshift_image_registry>/<image_group>/<image-name>:<tag>
    

    For example,

    docker build -t default-route-openshift-image-registry.apps.tomdcocp.os.fyre.ibm.com/openshift/tomcatdc:latest .
    docker push default-route-openshift-image-registry.apps.tomdcocp.os.fyre.ibm.com/openshift/tomcatdc:latest
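
    Depending on your registry, you might need to authenticate before you push the image. For example, for the OpenShift internal registry route that is shown above, assuming that the oc CLI is already logged in:

    docker login -u $(oc whoami) -p $(oc whoami -t) default-route-openshift-image-registry.apps.tomdcocp.os.fyre.ibm.com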
    
  5. To update the application deployment yaml file to mount the secret, add the volume mount information to the Containers definition. For example:

     volumeMounts:
         - name: global-environment
           mountPath: /opt/tomcat_datacollector/itcamdc/etc/global.environment
           subPath: global.environment
         - name: keyfile
           mountPath: /opt/tomcat_datacollector/itcamdc/etc/keyfile.jks
           subPath: keyfile.jks
    
  6. Add the volume information to the Spec: object in the application deployment yaml file as shown here:

     volumes:
       - name: global-environment
         secret:
           optional: true
           secretName: icam-server-secret
       - name: keyfile
         secret:
           optional: true
           secretName: icam-server-secret
    

    where:

    • <tomcat_datacollector>/itcamdc/etc/ is the fixed location in the Docker container where the files are stored.
    • icam-server-secret is the name of the secret that is created in step 1.
     Example of an updated yaml file:

  ```
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: tomcat-dc-app
    namespace: icam
    labels:
      app: tomcat-dc
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: tomcat-dc
        pod: tomcat-dc
    template:
      metadata:
        name: tomcat-dc
        labels:
          app: tomcat-dc
          pod: tomcat-dc
      spec:
        imagePullSecrets:
          - name: icam-server-secret
        containers:
        - name: tomcat-dc
          image: image-registry.openshift-image-registry.svc:5000/default/tomcatdc:latest
          imagePullPolicy: Always
          env:
          - name: LATENCY_SAMPLER_PARAM
            value: "1"
          volumeMounts:
          - name: global-environment
            mountPath: /opt/tomcat_datacollector/itcamdc/etc/global.environment
            subPath: global.environment
          # Add new entry for the key file:
          - name: keyfile
            mountPath: /opt/tomcat_datacollector/itcamdc/etc/keyfile.jks
            subPath: keyfile.jks
        volumes:
        - name: global-environment
          secret:
            optional: true
            secretName: icam-server-secret
        - name: keyfile
          secret:
            optional: true
            secretName: icam-server-secret
  ```
  7. Deploy the Kubernetes service. The following is an example service definition:

    
     apiVersion: v1
     kind: Service
     metadata:
       name: tomservice
       namespace: default
     spec:
       ports:
       - name: port1
         port: 8088
         protocol: TCP
         targetPort: 8080
       - name: port2
         port: 9988
         protocol: TCP
         targetPort: 9988
       selector:
         app: tomcat-dc
    

     where image-registry.openshift-image-registry.svc is the internal image registry that is used in the deployment example.
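
     If you save this service definition to a file, for example a hypothetical tomservice.yaml, you can create and verify the service with commands such as:

      kubectl create -f tomservice.yaml -n default
      kubectl get service tomservice -n default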

  8. If you are working with a local application deployment yaml file, run the following command for the changes to take effect:

    kubectl create -f application_deployment_yaml_file -n my_namespace
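
    After the deployment is created, you can verify that the application pods are running and that the secret files are mounted at the expected path. For example, assuming the labels and names from the example deployment shown earlier:

    kubectl -n my_namespace get pods -l app=tomcat-dc
    kubectl -n my_namespace exec deploy/tomcat-dc-app -- ls /opt/tomcat_datacollector/itcamdc/etc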