Monitoring IBM MQ

You can monitor IBM MQ with the IBM MQ sensor. After you install the Instana agent, IBM MQ sensor is installed automatically. You can view metrics that are related to IBM MQ in the Instana UI after you configure IBM MQ sensor as outlined in the Configuring section.

Support information

To make sure that the IBM MQ sensor is compatible with your current setup, check the following support information sections:

Supported operating systems

Supported versions and support policy

The sensor supports IBM MQ 8.0.0 and later.

Instana IBM MQ Tracing supports IBM MQ 9.0.0 and later.

The following table shows the latest supported version and support policy:

Table 1. Latest supported version and support policy
Technology | Support policy | Latest version | Latest supported version
IBM MQ | 45 days | 9.4 | 9.4

For more information about the support policy, see Support strategy for sensors.

Supported client-side tracing

Instana supports client-side tracing for Java.

The configuration steps in the Configuring section are for monitoring and collecting IBM MQ metrics. You can continue to use IBM MQ Tracing on your IBM MQ hosts as you need.

To use IBM MQ server-side tracing, you must install and enable IBM MQ Tracing first. For detailed steps, see IBM MQ Tracing. For IBM i servers, see IBM MQ Tracing on IBM i.

Common scenarios

IBM MQ sensor can work in a Kubernetes cluster, Docker, a virtual machine, or on bare metal in hybrid environments. It can also monitor both local and remote Queue Manager instances in complex environments. See the following common scenarios for reference:

  • Whether IBM MQ Queue Manager instances are monitored through the local or client binding mode depends on whether the user account that runs the Instana host agent is in the mqm group of IBM MQ:
Table 3. IBM MQ sensor connection monitoring scenarios
Instana host agent and IBM MQ location | Type of monitoring | Connection mode (to queue manager) | User account requirement | Configuration for monitoring
In the same virtual machine | Local monitoring | Local binding mode | User account in the mqm group | Automatically monitors all IBM MQ Queue Manager instances (for privileged user account)
In the same virtual machine | Local monitoring | Client binding mode | User account is not in the mqm group | Configure IBM MQ connection parameters in the agent configuration file <agent_install_dir>/etc/instana/configuration.yaml
In the same Kubernetes cluster or Docker environment | Local monitoring | Client binding mode | Not applicable | Configure IBM MQ connection parameters in the agent configuration file. For more information, see the Instana agent in a Kubernetes cluster and Instana agent in a Docker container sections
In different environments | Remote monitoring | Client binding mode | Not applicable | Configure IBM MQ connection parameters in the agent configuration file. For more information, see the Instana agent in a Docker container section

For more information about MQ connection parameters, see the Instana agent in a virtual machine section.

High availability (HA) scenarios

If you want to monitor IBM MQ HA queue managers, install the Instana host agent on each monitored Queue Manager instance locally and set the support_ha parameter to true in the <agent_install_dir>/etc/instana/configuration.yaml file. The IBM MQ HA queue manager support is not enabled by default to ensure consistency with a normal queue manager. For more information, see the Configuring section. Enabling the HA support for the queue manager activates monitoring of the following IBM MQ HA environments: multi-instance queue managers, RDQM, native HA, and single instances in Red Hat OpenShift Container Platform.
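The following is a minimal sketch of the support_ha setting in the agent configuration file; the full parameter list is shown in the Configuring section:

com.instana.plugin.ibmmq:
  enabled: true
  support_ha: true # Monitor the HA Queue Manager instances as one aggregated queue manager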

When the monitoring of HA queue managers is supported, the infrastructure link between the Instana IBM MQ sensor and IBM MQ Tracing in call details continues to work after you configure tracing. For more information, see Troubleshooting.

After you enable the HA support, you can get the following features:

  • Monitor the HA queue managers as one queue manager with a unified view of continuous historical data that is obtained by aggregating data from all the HA Queue Manager instances.
  • Identify the specific HA node on which the current queue manager is running.
  • Trigger an event by default when the HA queue manager switches over.

After you enable the IBM MQ HA queue manager, you can see the following changes in the Instana UI:

  • Before IBM MQ HA queue manager monitoring is enabled, all instances of HA queue managers are monitored and shown separately with the label qmName@host in the Instana UI. Each instance is shown as a layer of the host where it is running. After IBM MQ HA queue manager monitoring is enabled, all instances of HA queue managers are monitored as a unified queue manager, and their data is aggregated. In this case, all instances of HA queue managers are displayed as one queue manager with the label qmName in a separate box. The name qmName@host1-host2-host3 is displayed in the summary information on the IBM MQ Queue Manager instance dashboard in the Instana UI.
  • The correlation with the host where the Queue Manager instance is running is removed because of the aggregation of the HA Queue Manager instances, so the original filters that are related to the queue manager host no longer work. A new filter like entity.ibmmq.qm.activeNode:* can be used to search for the HA queue manager.
  • You need to set the availability zone in the ibmmq plug-in section of the agent configuration.yaml file instead of the host section because the queue manager correlation with the original single host is removed. If the zone is set in the host section, the queue manager is displayed in an undefined zone after the HA support is enabled. In this case, reset the availability zone in the ibmmq plug-in section, as shown in the sketch after this list.
  • In the Red Hat OpenShift Container Platform environment, if you want to manually set support_ha in the configuration.yaml file to activate HA support, you must edit the deployment file and redeploy the Instana host agent. In other environments, setting support_ha in the configuration.yaml file activates the HA support without restarting an agent.
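The following sketch shows where to set the availability zone for an HA queue manager in the ibmmq plug-in section; the zone name is only an example:

com.instana.plugin.ibmmq:
  enabled: true
  queueManagers:
    QMGR01:    # Your Queue Manager name here
      availabilityZone: 'IBM MQ HA Zone' # Set the zone here instead of in the host section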

Replicated data queue manager (RDQM) support

You need to install the Instana host agent on each RDQM host. The host must be a virtual machine or physical machine of the Linux platform. Then, Instana automatically discovers and monitors the running and standby Queue Manager instances through local monitoring. The Instana UI displays only 1 RDQM queue manager name with the aggregated data of 3 Queue Manager instances. You can view the information about HA Type, HA Role, HA Preferred Location, HA Floating IP address, Active Node, and Standby Nodes values on the dashboard of the running queue manager in the Instana UI.

Multi-instance queue managers support

All the distributed platforms and Kubernetes environments are supported. For the distributed platform, you need to install the Instana host agent on each Queue Manager instance of the multi-instance host. For the Kubernetes environment, you need to install the Instana host agent in the Kubernetes environment. Then, Instana automatically discovers and monitors queue managers that are in running, standby, and elsewhere status through local monitoring. The Instana UI displays only 1 multi-instance queue manager name with the aggregated data of the other Queue Manager instances. You can view the information about HA Type, Active Node, Standby Nodes, and Elsewhere Nodes values on the dashboard of the running queue manager in the Instana UI.

Native HA queue manager support

You need to install the Instana host agent in the Kubernetes environment. Then, Instana automatically discovers and monitors queue managers that are in running and replica status through local monitoring. The Instana UI displays only 1 Native HA queue manager name with the aggregated data of the other Queue Manager instances. You can view the information about HA Type, Active Node, and Standby Nodes on the dashboard of the running queue manager in the Instana UI.

Single-instance queue manager in Red Hat OpenShift Container Platform support

You need to install the Instana host agent in the Kubernetes environment. Then, Instana automatically discovers and monitors the running Queue Manager instance through local monitoring. The Instana UI displays the information about HA Type and Active Node of this Queue Manager instance on the dashboard.

Important IBM MQ monitoring concepts

The following concepts are used in the Configuring section. Make sure that you fully understand these concepts and their differences before you start the configuration.

Local binding mode and client binding mode

Local and client binding modes refer to the connection mode that IBM MQ sensor uses to connect to the queue manager.

Table 4. Local binding and client binding modes
Local binding mode | Client binding mode
By using the local binding mode, IBM MQ sensor can do fully automatic discovery and monitoring. | By using the client binding mode, IBM MQ sensor can do only partially automatic discovery. In addition, you need to provide some connection parameters.
By using the local binding mode, more monitoring metrics, including metrics from local files, can be provided. | By using the client binding mode, IBM MQ sensor can provide only metrics that are retrieved from the IBM MQ PCF interface and IBM MQ queues.
If you use the local binding mode, you only need to add the Instana agent user to the mqm group of IBM MQ, and then all the IBM MQ data can be monitored. | If you use the client binding mode, you need to configure the queue manager name, channel name, and related authority to connect to the queue manager as a client application.

Local monitoring and remote monitoring

Local and remote monitoring indicate whether IBM MQ sensor runs in the same environment as the queue manager. If IBM MQ sensor and the queue manager are in the same environment, it is local monitoring. If they are in different environments, it is remote monitoring.

Table 5. Local monitoring and remote monitoring methods
Local monitoring | Remote monitoring
The locally monitored Queue Manager instance is shown as one layer in the separated host or Kubernetes cluster node in the Infrastructure Map page of the Instana UI. | The remotely monitored Queue Manager instances are shown in the Infrastructure Map page as separate boxes in an availability zone, and these instances are grouped by the availabilityZone configuration property or IBM MQ cluster name if present.
With local monitoring, IBM MQ sensor can connect to the queue manager by either the local binding mode or the client binding mode. | With remote monitoring, IBM MQ sensor can connect to the queue manager only by the client binding mode.

The monitoring metrics that can be collected are determined by the connection mode: local binding mode or client binding mode. The local binding mode can additionally collect metrics from local files.

Configuring

Instana supports monitoring of both remote and local IBM MQ Queue Manager instances.

  • By using the local binding mode, IBM MQ sensor can do fully automatic discovery and monitoring, so you do not need to complete the configuration steps in this section. However, the Instana agent user must be privileged (a member of the mqm group) for all the IBM MQ data to be monitored.

  • By using the client binding mode, IBM MQ sensor can do only partially automatic discovery. You need to configure the following fields in the agent configuration file <agent_install_dir>/etc/instana/configuration.yaml:

  com.instana.plugin.ibmmq:
    enabled: true
    poll_rate: 60 # The default is 60 seconds. The minimum value is 30 seconds.
    support_ha: false # true or false. The default value is false. If the value is true, the HA Queue Manager instances will be shown as 1 aggregated queue manager.
    queueManagers:
      QMGR01:    # Your Queue Manager name here. If there are queue managers with the same name, it is required to append '-<instance>' in the queue manager name to distinguish them. You can select any string for <instance>.
        host: '127.0.0.1' # Queue Manager host, required for remote monitoring. Remove it for local monitoring or when Instana agent is on Kubernetes cluster. (Optional)
        port: '1414' # Remote administration channel port, required for remote monitoring. Remove it for local monitoring or when Instana agent is on Kubernetes cluster. (Optional)
        channel: 'SYSTEM.ADMIN.SVRCONN' # Server connection channel
        username: 'mqmuser' # User ID to connect to MQ. (Required only when user ID and password checking is enabled. Optional)
        password: 'mqmuser' # User password to connect to MQ. (Required only when user ID and password checking is enabled. Optional)
        queuesIncludeRegex: '.*' # Regex for filtering inclusive queues. An example for multiple conditions: (^AMQ\..*)|(^ECHO\..*)|(^SYSTEM\.DEAD\..*) (Optional)
        queuesExcludeRegex: '' # Regex for filtering exclusive queues. An example for multiple conditions: (^AMQ\..*)|(^ECHO\..*)|(^SYSTEM\.DEAD\..*) (Optional)
        customEventQueues: 'SYSTEM.ADMIN.PERFM.EVENT, SYSTEM.ADMIN.CHANNEL.EVENT, SYSTEM.ADMIN.QMGR.EVENT' # User defined queue names to read performance/channel/qmgr events. Separated by comma. (Optional)
        customEvents: 'Alias Base Queue Type Error, Bridge Stopped, Channel Auto-definition Error, Channel Blocked, Channel Conversion Error, Channel Not Activated, Channel Not Available, Channel SSL Error, Channel SSL Warning, Channel Stopped, Channel Stopped By User, Default Transmission Queue Type Error, Default Transmission Queue Usage Error, Get Inhibited, Not Authorized, Put Inhibited, Queue Depth High, Queue Full, Queue Manager Not Active, Queue Service Interval High, Queue Type Error, Remote Queue Name Error, Transmission Queue Type Error, Transmission Queue Usage Error, Unknown Alias Base Queue, Unknown Default Transmission Queue, Unknown Object Name, Unknown Remote Queue Manager, Unknown Transmission Queue' # Filter custom events to trigger. Separated by comma. (Optional)
        availabilityZone: 'IBM MQ Custom Zone' # Cluster name will be used by default. (Optional)
        keystore: '/tmp/application.jks' # Keystore path for TLS connection. (Required only when TLS is enabled)
        keystorePassword: 'password' # Keystore password for TLS connection. (Required only when TLS is enabled)
        cipherSuite: 'TLS_RSA_WITH_AES_256_CBC_SHA256' # TLS cipher suite for TLS connection. (Required only when TLS is enabled)
        poll_rate: 60 # Metrics poll rate in seconds. The minimum value is 30 seconds. (Optional)
      QMGR02:    # Your Queue Manager name here. If there are queue managers with the same name, it is required to append '-<instance>' in the queue manager name to distinguish them. You can select any string for <instance>.
        host: '127.0.0.1' # Queue Manager host, required for remote monitoring. Remove it for local monitoring or when Instana agent is on Kubernetes cluster. (Optional)
        port: '1415' # Remote administration channel port, required for remote monitoring. Remove it for local monitoring or when Instana agent is on Kubernetes cluster. (Optional)
        channel: 'SYSTEM.ADMIN.SVRCONN'  # Server connection channel
        username: 'mqmuser' # User ID to connect to MQ. (Required only when user ID and password checking is enabled)
        password: 'mqmuser' # User password to connect to MQ. (Required only when user ID and password checking is enabled)
        queuesIncludeRegex: '.*' # Regex for filtering inclusive queues. An example for multiple conditions: (^AMQ\..*)|(^ECHO\..*)|(^SYSTEM\.DEAD\..*) (Optional)
        queuesExcludeRegex: '' # Regex for filtering exclusive queues. An example for multiple conditions: (^AMQ\..*)|(^ECHO\..*)|(^SYSTEM\.DEAD\..*) (Optional)
        customEventQueues: 'SYSTEM.ADMIN.PERFM.EVENT, SYSTEM.ADMIN.CHANNEL.EVENT, SYSTEM.ADMIN.QMGR.EVENT' # User defined queue names to read performance/channel/qmgr events. Separated by comma. (Optional)
        customEvents: 'Alias Base Queue Type Error, Bridge Stopped, Channel Auto-definition Error, Channel Blocked, Channel Conversion Error, Channel Not Activated, Channel Not Available, Channel SSL Error, Channel SSL Warning, Channel Stopped, Channel Stopped By User, Default Transmission Queue Type Error, Default Transmission Queue Usage Error, Get Inhibited, Not Authorized, Put Inhibited, Queue Depth High, Queue Full, Queue Manager Not Active, Queue Service Interval High, Queue Type Error, Remote Queue Name Error, Transmission Queue Type Error, Transmission Queue Usage Error, Unknown Alias Base Queue, Unknown Default Transmission Queue, Unknown Object Name, Unknown Remote Queue Manager, Unknown Transmission Queue' # Filter custom events to trigger. Separated by comma. (Optional)
        availabilityZone: 'IBM MQ Custom Zone' # Cluster name will be used by default. (Optional)
        keystore: '/tmp/application.jks' # Keystore path for TLS connection. (Required only when TLS is enabled)
        keystorePassword: 'password' # Keystore password for TLS connection. (Required only when TLS is enabled)
        cipherSuite: 'TLS_RSA_WITH_AES_256_CBC_SHA256' # TLS cipher suite for TLS connection. (Required only when TLS is enabled)
        poll_rate: 60 # Metrics poll rate in seconds. The minimum value is 30 seconds. (Optional)
    
    

Notes:

  • If two IBM MQ Queue Manager instances with the same name are configured in the same configuration file, you need to append a different -<instance> suffix to each queue manager name to distinguish them. You can select any string for <instance>.

  • The queuesIncludeRegex and queuesExcludeRegex parameters are regular expressions that filter queues. If both are defined, the exclude list takes priority over the include list. See the sketch after these notes.

  • The support_ha field can be set to true or false to indicate whether the IBM MQ HA queue managers are monitored as a whole. If the value is true, the data of all IBM MQ HA queue managers is aggregated and shown as qmName in a separate box in the Instana UI as one queue manager, and the historical data is continuous even when the queue managers fail over or switch over. If the value is false, the IBM MQ HA queue managers are monitored separately and shown in the Instana UI as qmName@host like normal queue managers, so an HA queue manager is shown as 2 or 3 separate queue managers, according to its number of instances. The default value of this field is false. If this field is not set, its value is considered false. The value of this field is hot-loaded, so you do not need to restart the agent after you change it.

  • The customEventQueues parameter is a list of user-defined queue names to read the performance, channel, and queue manager events. This field is optional. If this field is not defined, the default queue names SYSTEM.ADMIN.PERFM.EVENT, SYSTEM.ADMIN.CHANNEL.EVENT, and SYSTEM.ADMIN.QMGR.EVENT are used to read the performance, channel, and queue manager events. You can define 1 to 3 items so that the specified event queues are used to read events.

  • The customEvents parameter is a list of custom event names that must be triggered. Remove the events that you don't need from the list. This field is optional. If this field is not defined, all the listed events are triggered. You must enable the corresponding event queues in the customEventQueues parameter for your required custom events. The following table shows the relationship between the events and event queues:

    Table 6. Relation between the events and event queues
    Event queues Events
    SYSTEM.ADMIN.QMGR.EVENT Alias Base Queue Type Error, Default Transmission Queue Type Error, Default Transmission Queue Usage Error, Get Inhibited, Not Authorized, Put Inhibited, Queue Manager Not Active, Queue Type Error, Remote Queue Name Error, Transmission Queue Type Error, Transmission Queue Usage Error, Unknown Alias Base Queue, Unknown Default Transmission Queue, Unknown Object Name, Unknown Remote Queue Manager, and Unknown Transmission Queue
    SYSTEM.ADMIN.PERFM.EVENT Queue Depth High, Queue Full, and Queue Service Interval High
    SYSTEM.ADMIN.CHANNEL.EVENT Bridge Stopped, Channel Auto-definition Error, Channel Blocked, Channel Conversion Error, Channel Not Activated, Channel Not Available, Channel SSL Error, Channel SSL Warning, Channel Stopped, and Channel Stopped By User
  • Setting the environment variable FORCE_CLIENT_BINDING to true forces the sensor to use client binding mode.
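The following sketch shows how the include and exclude filters interact: all queues are included except those that match the exclude pattern. The patterns and names are examples only:

com.instana.plugin.ibmmq:
  enabled: true
  queueManagers:
    QMGR01:
      channel: 'SYSTEM.ADMIN.SVRCONN'
      queuesIncludeRegex: '.*' # Include all queues
      queuesExcludeRegex: '(^SYSTEM\..*)|(^AMQ\..*)' # But exclude SYSTEM.* and AMQ.* queues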

For more information about IBM MQ HA Queue Managers, see High availability (HA) scenarios.

Support matrix of IBM MQ connection mode

The following table outlines the support matrix for IBM MQ connection mode:

Table 7. Support matrix for IBM MQ connection mode
Support matrix | Queue manager in a virtual or physical machine | Queue manager in a Kubernetes cluster | Queue manager in a Docker container
Instana agent in a virtual or physical machine | Local monitoring and remote monitoring | Remote monitoring | Remote monitoring
Instana agent in a Kubernetes cluster | Not supported | Local monitoring | Not supported
Instana agent in a Docker container | Local monitoring and remote monitoring | Remote monitoring | Local monitoring

Monitoring IBM MQ in a virtual or physical machine

When the Instana host agent and IBM MQ run in the same machine, two connection modes are supported: local binding mode and client binding mode.

Local monitoring with the local binding mode

If the Instana host agent runs under a privileged user account (a member of the mqm group), the IBM MQ sensor uses the local binding mode to retrieve data from IBM MQ.

By using the local binding mode, IBM MQ sensor can discover IBM MQ Queue Manager instances automatically and show all the data. You don't need to set anything in the Instana agent configuration file.
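For example, on a Linux host you can add the agent service user to the mqm group as follows. The user name instana-agent is only an example; use the user that runs your agent:

# Add the agent user to the mqm group (run as root); the user name is an example
usermod -aG mqm instana-agent
# Restart the agent so that the new group membership takes effect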

Local monitoring with the client binding mode

If the user account to run Instana agent is not privileged, the IBM MQ sensor must use the client binding mode to retrieve data from IBM MQ. You must configure the IBM MQ connection parameters in the agent configuration file.

  1. Configure the ibmmq plugin section in the agent configuration file as follows:

    com.instana.plugin.ibmmq:
      enabled: true
      poll_rate: 60
      queueManagers:
        QMGR03:    # Your Queue Manager name here. If there are queue managers with the same name, it is required to append '-<instance>' in the queue manager name to distinguish them. You can select any string for <instance>.
          channel: '<INSERT_CHANNEL_HERE>' # Remote administration channel
    
  2. Configure other authorities based on your IBM MQ configurations. If the security of IBM MQ is enabled, you need to configure the user and password. If TLS of IBM MQ is enabled, you need to provide keystore-related information in the agent configuration file. See the following configuration example:

    For the keystore certificate file, only the JKS format is supported.

    com.instana.plugin.ibmmq:
      enabled: true
      poll_rate: 60
      queueManagers:
        QMGR03:    # Your Queue Manager name here. If there are queue managers with the same name, it is required to append '-<instance>' in the queue manager name to distinguish them. You can select any string for <instance>.
          channel: '<INSERT_CHANNEL_HERE>' # Remote administration channel
          username: '<INSERT_USERNAME_HERE>' # User ID to connect to MQ (optional)
          password: '<INSERT_PASSWORD_HERE>' # User password to connect to MQ (optional)
          keystore: '<INSERT_KEYSTORE_PATH_HERE>' # Keystore path for TLS connection (required only when TLS is enabled for remote monitoring. Optional)
          keystorePassword: '<INSERT_KEYSTORE_PASSWORD_HERE>' # Keystore password for TLS connection (required only when TLS is enabled for remote monitoring. Optional)
          cipherSuite: '<INSERT_CIPHER_SUITE_HERE>' # TLS cipher suite for TLS connection (required only when TLS is enabled for remote monitoring. Optional)
    

    By using the client binding mode, IBM MQ sensor can partially discover some configurations such as host and port automatically in the local monitoring and show metrics in the Instana UI.

    When the Instana host agent and IBM MQ run in different machines, only the client binding mode is supported.

Remote monitoring with the client binding mode

In this mode, IBM MQ sensor also uses the client binding mode to retrieve and show data from IBM MQ, and you need to configure IBM MQ connection parameters in the agent configuration file. The configuration steps are the same as in the Local monitoring with the client binding mode section. In addition, you need to configure the host and port because IBM MQ sensor cannot discover them in remote monitoring. The metrics are then shown in the Instana UI.
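The following sketch shows the additional host and port parameters for remote monitoring; the values are examples:

com.instana.plugin.ibmmq:
  enabled: true
  poll_rate: 60
  queueManagers:
    QMGR03:    # Your Queue Manager name here
      host: 'mq.example.com' # Queue Manager host, required for remote monitoring
      port: '1414' # Remote administration channel port, required for remote monitoring
      channel: '<INSERT_CHANNEL_HERE>' # Server connection channel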

Monitoring IBM MQ in a Kubernetes cluster

When the Instana host agent and IBM MQ run in the same Kubernetes cluster, only local monitoring with the client binding mode is supported.

Client binding mode configurations for local monitoring in Kubernetes clusters

In this mode, when the Instana host agent and IBM MQ run in the same Kubernetes cluster, only the client binding mode is supported because the host agent and the queue manager cannot run in the same pod. You need to configure IBM MQ connection parameters in the agent configuration file as follows. Because the host and port can be automatically discovered, do not set them, to avoid duplicate querying of all Queue Manager instances in the Kubernetes cluster.

com.instana.plugin.ibmmq:
  enabled: true
  poll_rate: 60
  queueManagers:
    QMGR03:    # Your Queue Manager name here. If there are queue managers with the same name, it is required to append '-<instance>' in the queue manager name to distinguish them. You can select any string for <instance>.
      channel: '<INSERT_CHANNEL_HERE>' # Remote administration channel

For the Instana host agent on a Kubernetes cluster, if TLS is enabled for the monitored Queue Manager instances on the Kubernetes cluster, you also need to configure TLS in the agent configuration file. Complete the following steps:

  1. Create a secret with the user's keystore file in the Instana agent's namespace:

    kubectl create secret generic keystore-secret-name --from-file=./<jks-file-name>.jks -n instana-agent
    

    Replace <jks-file-name> with the name of the user JKS file that you use. For the certificate file, only the JKS (Java KeyStore) format is supported.

  2. Mount the secret as a volume into the Instana agent pod by specifying the volume and volume mounts in the agent's CustomResource or in the Helm chart's values.yaml file as follows:

    agent:
      pod:
        volumeMounts:
        - mountPath: /opt/instana/agent/etc/jks-file-name.jks
          name: mq-key-jks-name
          subPath: jks-file-name.jks
        volumes:
        - name: mq-key-jks-name
          secret:
            secretName: keystore-secret-name
    

    For more information about mounting secrets into the Instana agent pod, see the Instana agent Helm chart documentation.

  3. Configure the keystore file in the com.instana.plugin.ibmmq section of the agent configuration.yaml file, either in the CustomResource or the Helm chart's values.yaml file:

    com.instana.plugin.ibmmq:
      enabled: true
      ...
      keystore: /opt/instana/agent/etc/jks-file-name.jks
      keystorePassword: <optional-keystore-password>
      ...
    

    Do not set the host and port in the agent configuration file, but grant authority to connect to queue manager in the Kubernetes cluster. The metrics are also displayed in the Instana UI.

Monitoring IBM MQ in Docker

When the Instana host agent and IBM MQ run in different Docker containers on the same machine, only local monitoring with the client binding mode is supported because the Instana host agent and IBM MQ run in separate container environments. In this scenario, you need to configure IBM MQ connection parameters in the agent configuration file. Because IBM MQ sensor discovers IBM MQ Queue Manager instances automatically, you don't need to set the host and port in the agent configuration file.

Local monitoring with client binding mode

The configuration steps are the same as in the Local monitoring with the client binding mode section. You don't need to configure the host and port in the agent configuration file. The metrics are then shown in the Instana UI.

Remote monitoring with client binding mode

The configuration steps are the same as in the Remote monitoring with the client binding mode section.

Extra IBM MQ configurations

To collect specific IBM MQ metrics, you must enable queue statistics, queue manager performance events, or channel events. Follow these steps to configure the necessary settings:

  1. To collect the queue metrics Last Put Time, Last Get Time, Oldest Message Time, and On Queue Message Time, enable queue statistics at either the queue level or the queue manager level.
  • At the queue level, run the following RUNMQSC command:
    alter ql(QUEUE_NAME) MONQ(LOW)
    
  • At the queue manager level, run the following RUNMQSC command:
    alter qmgr MONQ(LOW)
    
    For more information about the alter qmgr command, see IBM MQ documents.
  2. To collect the IBM MQ performance built-in events, such as Queue Depth High, Queue Full, and Queue Service Interval High, you must enable the queue manager performance events (PERFMEV) by running the following RUNMQSC command:

    alter qmgr PERFMEV(ENABLED) CHLEV(ENABLED)
    
    1. To collect built-in events that are related to queue manager, such as Alias Base Queue Type Error, Default Transmission Queue Type Error, Default Transmission Queue Usage Error, Get Inhibited, Not Authorized, Put Inhibited, Queue Manager Not Active, Queue Type Error, Remote Queue Name Error, Transmission Queue Type Error, Transmission Queue Usage Error, Unknown Alias Base Queue, Unknown Default Transmission Queue, Unknown Object Name, Unknown Remote Queue Manager, and Unknown Transmission Queue, you must enable the following queue manager events:

      • AUTHOREV (Authorization events)
      • INHIBTEV (Inhibition events)
      • LOCALEV (Local events)
      • REMOTEEV (Remote events)
      • STRSTPEV (Start and stop events)

      Run the following RUNMQSC command:

      alter qmgr AUTHOREV(ENABLED) INHIBTEV(ENABLED) LOCALEV(ENABLED) REMOTEEV(ENABLED) STRSTPEV(ENABLED)
      

      For more information about controlling queue manager events, see Controlling queue manager events.

    2. To collect built-in events that are related to channel, such as Bridge Stopped, Channel Auto-definition Error, Channel Blocked, Channel Conversion Error, Channel Not Activated, Channel Not Available, Channel SSL Error, Channel SSL Warning, Channel Stopped, and Channel Stopped By User, you must enable the following channel-related events:

      • CHLEV (Channel events)
      • BRIDGEEV (Bridge events)
      • SSLEV (SSL events)
      • CHADEV (Channel auto-definition events)

      Run the following RUNMQSC command:

      alter qmgr CHLEV(ENABLED) BRIDGEEV(ENABLED) SSLEV(ENABLED) CHADEV(ENABLED)
      

      For more information about controlling channel events, see Controlling channel and bridge events.

    For more information about the alter qmgr command, see IBM MQ documents.

  3. To collect MQ queue statistics metrics such as Persistent Put Bytes, Nonpersistent Put Bytes, Persistent Get Bytes, Nonpersistent Get Bytes, Expired Msg Count, Put Fail Count, Put1 Fail Count, and Get Fail Count, complete the following steps in IBM MQ:

    1. At the queue manager level, enable queue statistics data collection for all queues by running the following command:
      alter qmgr STATQ(ON)
      
      At the queue level, enable statistics for a specific queue by running the following command, where Queue_Name is the name of the queue:
      alter qlocal(Queue_Name) STATQ(ON)
      
      You can also change the queue statistics interval by using the IBM MQ Queue Manager parameter STATINT, which is 1800 seconds by default. For example, to set it to 900 seconds, run the following command:
      alter qmgr STATINT(900)
      
    2. To collect statistics data from the SYSTEM.ADMIN.STATISTICS.QUEUE, you need to grant the get authority to the Instana agent user because the statistics data is collected from this queue. For example, run the following command:
      setmqaut -m QmgrName -n SYSTEM.ADMIN.STATISTICS.QUEUE -t q -p user +get
      
      For more information about the setmqaut command, see IBM MQ documents. A consolidated RUNMQSC sketch of the queue manager settings in this section follows these steps.
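The queue manager settings that are described in the preceding steps can also be applied in a single RUNMQSC call. The following is a sketch only; adjust the list of attributes and the statistics interval to your needs:

echo "ALTER QMGR MONQ(LOW) STATQ(ON) STATINT(900) PERFMEV(ENABLED) CHLEV(ENABLED) AUTHOREV(ENABLED) INHIBTEV(ENABLED) LOCALEV(ENABLED) REMOTEEV(ENABLED) STRSTPEV(ENABLED) BRIDGEEV(ENABLED) SSLEV(ENABLED) CHADEV(ENABLED)" | runmqsc QMGR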

Configuring IBM MQ authority

  • Using privileged user for the Instana agent

    If the Instana agent is installed by using a privileged user, it has all the necessary authorities. Therefore, you can view all the monitoring data in the Instana UI.

  • Using non-privileged user for the Instana agent

    When the Instana agent is installed by using a non-privileged user to connect to queue manager, you must grant appropriate Object Authority Manager (OAM) authorities to the user to query all the monitoring data.

    To provide the IBM MQ OAM authorities to a non-privileged user, use the setmqaut control command. The user that you use to issue the setmqaut command must be a privileged user. The setmqaut command is in the $MQ_INSTALLED_DIR/bin directory.


  1. To grant the user the appropriate authorities to access queue manager that you want to monitor, run the following command:

    setmqaut -m QMGR -t qmgr -p UserID +inq +connect +dsp
    

    where:

    • QMGR: Name of Queue Manager
    • UserID: UserID of the user.

    You must specify the fully qualified user for the -p option. For example, -p user@domain or -p user@host.

    To specify a user group name, replace the -p option with the -g option in the command.

  2. To provide the appropriate authorities to the user to access the system queues of queue manager, run the following commands:

    setmqaut -m QMGR -t q -n SYSTEM.ADMIN.COMMAND.QUEUE -p UserID +inq +dsp +chg +get +put
    setmqaut -m QMGR -t q -n SYSTEM.DEFAULT.MODEL.QUEUE -p UserID +dsp +get
    setmqaut -m QMGR -t q -n SYSTEM.ADMIN.QMGR.EVENT -p UserID +inq +dsp +chg +get
    setmqaut -m QMGR -t q -n SYSTEM.ADMIN.PERFM.EVENT -p UserID +inq +dsp +chg +get
    setmqaut -m QMGR -t q -n SYSTEM.ADMIN.CHANNEL.EVENT -p UserID +inq +dsp +chg +get
    setmqaut -m QMGR -t q -n SYSTEM.ADMIN.STATISTICS.QUEUE -p UserID +inq +dsp +chg +get
    setmqaut -m QMGR -t q -n SYSTEM.AUTH.DATA.QUEUE -p UserID +dsp
    setmqaut -m QMGR -t q -n SYSTEM.ADMIN.LOGGER.EVENT -p UserID +inq +dsp +chg
    setmqaut -m QMGR -t q -n SYSTEM.ADMIN.CONFIG.EVENT -p UserID +inq +dsp +chg
    setmqaut -m QMGR -t q -n SYSTEM.ADMIN.COMMAND.EVENT -p UserID +inq +dsp +chg
    
  3. For the Instana UI to display data, the user specified by UserID needs the display access to various objects. To grant the appropriate authorities, run the following commands:

    setmqaut -m QMGR -t q -n "**" -p UserID +dsp +chg
    setmqaut -m QMGR -t channel -n "**" -p UserID +dsp
    setmqaut -m QMGR -t clntconn -n "**" -p UserID +dsp
    setmqaut -m QMGR -t listener -n "**" -p UserID +dsp
    setmqaut -m QMGR -t topic -n "**" -p UserID +dsp
    
  4. To ensure all the commands work, restart queue manager or refresh security:

    echo "REFRESH SECURITY" | runmqsc QMGR
    

These permissions cover all the necessary access rights for the IBM MQ sensor to query and gather monitoring data. Typically, read-only access is adequate for most monitoring needs. However, some metrics require additional permissions. Adjust the permissions according to the monitoring metrics that you require, as described in the following list and summarized in the sketch after it.

  • Queue reset statistics data:

    To retrieve queue reset statistics data, queue objects require the change authority (+chg).

  • Statistics data from SYSTEM.ADMIN.STATISTICS.QUEUE queue:

    Access to statistics data from SYSTEM.ADMIN.STATISTICS.QUEUE queue requires the get authority (+get) because the statistics data is collected from this queue.

  • IBM MQ events:

    To obtain IBM MQ events, such as queue manager events, performance events, and channel events, grant the get authority (+get) to the SYSTEM.ADMIN.PERFM.EVENT, SYSTEM.ADMIN.CHANNEL.EVENT, and SYSTEM.ADMIN.QMGR.EVENT system queues.

  • MQ monitoring data from the PCF interface:

    The IBM MQ sensor calls the PCF interface to inquire data. Putting PCF commands requires the put authority (+put) on the SYSTEM.ADMIN.COMMAND.QUEUE queue. To create a temporary queue that is based on the queue template and receive the PCF responses, the sensor needs the get authority (+get) on the SYSTEM.DEFAULT.MODEL.QUEUE queue.
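The following sketch consolidates these additional grants for a non-privileged user. QMGR and UserID are placeholders, as in the previous commands:

setmqaut -m QMGR -t q -n "**" -p UserID +chg                          # Queue reset statistics data
setmqaut -m QMGR -t q -n SYSTEM.ADMIN.STATISTICS.QUEUE -p UserID +get # Queue statistics data
setmqaut -m QMGR -t q -n SYSTEM.ADMIN.PERFM.EVENT -p UserID +get      # Performance events
setmqaut -m QMGR -t q -n SYSTEM.ADMIN.CHANNEL.EVENT -p UserID +get    # Channel events
setmqaut -m QMGR -t q -n SYSTEM.ADMIN.QMGR.EVENT -p UserID +get       # Queue manager events
setmqaut -m QMGR -t q -n SYSTEM.ADMIN.COMMAND.QUEUE -p UserID +put    # PCF commands
setmqaut -m QMGR -t q -n SYSTEM.DEFAULT.MODEL.QUEUE -p UserID +get    # Temporary reply queue template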

Viewing metrics

After you complete the configuration steps in the Configuring section, you can view metrics that are related to IBM MQ in the Instana UI.

To view the metrics, complete the following steps:

  1. In the sidebar of the Instana UI, select Infrastructure.

  2. Click a specific monitored host.

    • Locally monitored Queue Manager instance is shown as one layer in the separated host or Kubernetes cluster node in the Infrastructure Map page of the Instana UI.
    • Remotely monitored Queue Manager instances are shown in the Infrastructure Map page as separate boxes in an availability zone. These instances are grouped by the availabilityZone configuration property or IBM MQ cluster name if present. If a Queue Manager instance is not running in cluster mode and the availabilityZone configuration property is not defined, queue manager is shown in the Undefined zone.
  3. After you click the host or Queue Manager instance from the Infrastructure map, you need to click Open Dashboard to see the metrics data.

For detailed metrics that IBM MQ sensor supports, see Viewing IBM MQ metrics.

Health signatures

For each sensor, there is a curated knowledge base of health signatures that are evaluated continuously against the incoming metrics and are used to raise issues or incidents depending on user impact.

Built-in events trigger issues or incidents based on failing health signatures on entities, and custom events trigger issues or incidents based on the thresholds of an individual metric of any given entity.

For information about built-in events for the IBM MQ sensor, see the Built-in events reference.

Troubleshooting IBM MQ sensor

Most problems that you might encounter are related to IBM MQ connection and authority. See the following problems:

Problems with the local binding mode

For the queue managers that are running, after you make the user privileged, make sure that the security settings take effect. Either refresh security by running the following IBM MQ runmqsc command or restart the queue manager so that the authority that you granted takes effect.

REFRESH SECURITY

Monitoring might stop after a sensor update due to a known issue in local binding mode. To resume monitoring, restart the agent.
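For example, on a Linux host where the agent runs as a systemd service, you can restart it as follows. The service name instana-agent is an assumption; use the service name of your installation:

sudo systemctl restart instana-agent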

Connection or authority problems with the client binding mode

For the client binding mode, the IBM MQ sensor acts as a client application to connect to the queue manager. Therefore, the sensor requires the same authority as any other client application that connects to the queue manager and inquires data, such as MQ Explorer or the amqsputc MQ sample application. If the queue manager cannot be connected with your configured parameters, try the same connection parameters with MQ Explorer to check whether you can connect to the queue manager.
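Another quick check, assuming that the IBM MQ client samples are installed, is the amqsputc sample. The MQSERVER value mirrors the channel, host, and port that you configured for the sensor; the values are examples:

# Channel/transport/host(port) must match the sensor configuration
export MQSERVER='SYSTEM.ADMIN.SVRCONN/TCP/mq.example.com(1414)'
# Try to put a test message; a 2035 or 2538 reason code here points to the same problem that the sensor hits
amqsputc SYSTEM.DEFAULT.LOCAL.QUEUE QMGR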

See the following issues for reference:

  1. RC 2538 - MQRC_HOST_NOT_AVAILABLE

    Instana agent log message: "Listener not started on{host}:{port} ({exceptionCode}) {message}".

    • Listener is not started.

      Solution: Start the listener. To start the listener, run the following IBM MQ runmqsc command:

      START LISTENER($Listener_Name)
      
    • Qmgr@host cannot be found.

      Solution: Check if the connection parameters for queue manager and host names are correct and can be connected. Or check if a firewall is blocking the connection.

    • The host and port are configured in the agent configuration file in the Kubernetes cluster.

      In the Kubernetes cluster, when queue manager is restarted, the host IP changes. Therefore, the IBM MQ sensor fails to connect to queue manager.

      Solution: To automatically discover the host and port information, delete the host and port in the agent configuration file for the host agent in the Kubernetes cluster.

  2. RC 2540 - Channel is not defined.

    Instana agent log message: "Channel {channel} is not defined ({exceptionCode}). {message}".

    Solution: Check if the correct SVRCONN channel is configured in the configuration.yaml agent configuration file.

  3. RC 2035 - Authority problem.

    Instana agent log message: "Channel {channel} authorization failed for user {username} ({exceptionCode}). {message}".

    This is an authority problem that might have different causes, depending on your MQ configuration. To debug the authority problem, check whether channel security is enabled:

    • If channel security is disabled but a no-authority error is reported for the user, the application-asserted user does not have the required authority. Check which user is used, and either grant the correct authority to this user or use another suitable user to connect to the queue manager.
    • If channel security is enabled, check whether the provided user and password have the correct authority to connect to the queue manager. If TLS is enabled, you need to provide the correct keystore, keystorePassword, and the corresponding cipherSuite to connect to the queue manager.

The following image shows the debug flow chart:

(Image: MQ authority debug flow chart)

The following frequently encountered problems might cause the 2035 error:

  1. Channel security is disabled, but a “no authority” error with “root” is reported.

    Usually, when the CHLAUTH parameter is disabled and the CONNAUTH parameter is not set, MQ channel security is disabled. You need to determine which user is used for authorization. The order of precedence for the security features is as follows.

    If you do not configure a security exit, a channel authentication record with USERSRC(MAP), CLNTUSER, or MCAUSER, then the application-asserted user is used, which for a remote connection is the operating system user. In this scenario, “root” is used as the application-asserted user because the Instana agent is running as root. If root doesn’t have the required authority, a “no authority” error with “root” appears in the IBM MQ error log. For more details about user authority priority, see https://www.ibm.com/docs/en/ibm-mq/9.3?topic=objects-determining-which-user-is-used-authorization

    Solution: Configure a channel authentication record and define CLNTUSER or MCAUSER for the channel. Define MCAUSER as a user with MQ access authority for your server connection channel, and then use this user to connect to the queue manager.

    For example: alter channel(SVRCONN) chltype(SVRCONN) MCAUSER('mqmtest')

  2. No authority is granted to connect to the SYSTEM server connection channel.

    Some system server connection channels, such as SYSTEM.AUTO.SVRCONN, are blocked by default. The MQ BLOCKUSERS rules include three default rules for CHLAUTH processing:

    • NO ACCESS to all channels by any MQ-admin* users
    • NO ACCESS to all SYSTEM.* channels by all users
    • ALLOW access to SYSTEM.ADMIN.SVRCONN channel (non MQ-admin users)

    The first two rules block access to all channels. The third rule is more specific and takes precedence over the other two; that is, CHLAUTH allows access only to the SYSTEM.ADMIN.SVRCONN channel. For more information, see Resolving CHLAUTH access issues.

    Solution: Unblock the user for the system server connection channel before the user is used, or define your own server connection channel for the connection, which works around this blocked-user problem. See the sketch after this list.

  3. Security is enabled or TLS is enabled

    When channel security is enabled (CHLAUTH(ENABLED)) or TLS is enabled, but the corresponding username and password or keystore parameters (keystore, keystorePassword, and cipherSuite) are not provided, the connection fails. After you change MQ security-related configurations, you need to run refresh security type(CONNAUTH) from the runmqsc prompt to make the changes take effect.

    Solution: Provide the corresponding username and password or keystore parameters in the agent configuration file.

  4. Security is enabled for both CHLAUTH and CONNAUTH, but the user still has an authority problem when connecting to the queue manager.

    Solution: Check the CHLAUTH and CONNAUTH configurations in the queue manager, and then review the CHLAUTH and CONNAUTH interaction flow to confirm which security record takes effect, and correct the problem. For more information, see Interaction of CHLAUTH and CONNAUTH.

  5. The MQ connection has no problem, but the object authority is not sufficient for the user to get other monitoring data.

    Solution: Check the MQ error log to see which object and which user have an authority problem, and provide the correct authority to the object to solve the problem. For more information, see Configuring IBM MQ authority.
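The following sketch shows one way to define a dedicated server connection channel with an explicit MCAUSER and then refresh security; the channel name, user name, and queue manager name are examples only:

echo "DEFINE CHANNEL(INSTANA.SVRCONN) CHLTYPE(SVRCONN) MCAUSER('mqmtest')" | runmqsc QMGR
echo "REFRESH SECURITY TYPE(CONNAUTH)" | runmqsc QMGR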

Insufficient authority to access SYSTEM.AUTH.DATA.QUEUE

You might encounter the following error when the Instana agent user has insufficient authority to obtain the queue reset statistics data. To obtain queue reset statistics data, the user needs the change authority on SYSTEM.AUTH.DATA.QUEUE. However, SYSTEM.AUTH.DATA.QUEUE is a special queue for which you cannot grant the change authority.

AMQ8077W: Entity 'user' has insufficient authority to access object
'SYSTEM.AUTH.DATA.QUEUE'.

EXPLANATION:
The specified entity is not authorized to access the required object. The
following requested permissions are unauthorized: chg
ACTION:
Ensure that the correct level of authority has been set for this entity against
the required object, or ensure that the entity is a member of a privileged
group.

Solution: If you want to stop such error messages from appearing in the IBM MQ log for SYSTEM.AUTH.DATA.QUEUE, make the user of the Instana agent privileged.