Table 1 lists problems that can occur with the agent after it has been installed, along with their solutions.
Problem | Solution |
---|---|
Log data accumulates too rapidly. | Check the RAS trace option settings, which are described in Setting RAS trace parameters by using the GUI. The trace option settings that you can set on the KBB_RAS1= and KDC_DEBUG= lines potentially generate large amounts of data. |
When using the itmcmd agent commands to start or stop this monitoring agent, you receive the following error message: MKCIIN0201E Specified product is not configured. | Include the command option -o to specify the instance to start or stop. The instance name must match the name used for configuring the agent. For example: ./itmcmd agent -o Test1 start rz. For more information about using the itmcmd commands, see the IBM Tivoli Monitoring Command Reference. |
A configured and running instance of the monitoring agent is not displayed in the Tivoli® Enterprise Portal, but other instances of the monitoring agent on the same system are displayed in the portal. | IBM® Tivoli Monitoring products use Remote Procedure Call (RPC) to define and control product behavior. RPC is the mechanism that a client process uses to make a subroutine call (such as GetTimeOfDay or ShutdownServer) to a server process somewhere in the network. Tivoli processes can be configured to use TCP/UDP, TCP/IP, SNA, or SSL as the protocol (or delivery mechanism) for RPCs. IP.PIPE is the name given to the Tivoli TCP/IP protocol for RPCs. The RPCs are socket-based operations that use TCP/IP ports to form socket addresses. IP.PIPE implements virtual sockets and multiplexes all virtual socket traffic across a single physical TCP/IP port (visible from the netstat command). A Tivoli process derives the physical port for IP.PIPE communications from the configured, well-known port for the hub Tivoli Enterprise Monitoring Server. (This well-known port, or BASE_PORT, is configured by using the 'PORT:' keyword on the KDC_FAMILIES / KDE_TRANSPORT environment variable and defaults to '1918'.) The physical port allocation method is defined as (BASE_PORT + 4096*N), where N=0 for a Tivoli Enterprise Monitoring Server process and N={1, 2, ..., 15} for another type of monitoring server process. Two architectural limits result from the physical port allocation method. First, a single system image can support any number of Tivoli Enterprise Monitoring Server processes (address spaces) only if each Tivoli Enterprise Monitoring Server on that image reports to a different hub. Because, by definition, only one Tivoli Enterprise Monitoring Server hub is available per monitoring enterprise, this limit simplifies to one Tivoli Enterprise Monitoring Server per system image. Second, no more than 15 IP.PIPE processes or address spaces can be active on a single system image. Given the first limit, this second limit refers specifically to Tivoli Enterprise Monitoring Agent processes: no more than 15 agents per system image. This limitation can be circumvented (at current maintenance levels, IBM Tivoli Monitoring V6.1, Fix Pack 4 and later) if the Tivoli Enterprise Monitoring Agent process is configured to use the EPHEMERAL IP.PIPE process. (This process is IP.PIPE configured with the 'EPHEMERAL:Y' keyword in the KDC_FAMILIES / KDE_TRANSPORT environment variable.) The number of ephemeral IP.PIPE connections per system image has no limitation. If ephemeral endpoints are used, the Warehouse Proxy agent must be accessible from the Tivoli Enterprise Monitoring Server with which the ephemeral agents are associated, either by running the Warehouse Proxy agent on the same computer or by using the Firewall Gateway feature. (The Firewall Gateway feature relays the Warehouse Proxy agent connection from the Tivoli Enterprise Monitoring Server computer to the Warehouse Proxy agent computer when the Warehouse Proxy agent cannot coexist on the same computer.) |
In the krzagent log file, the following message is repeatedly displayed: 4A6A1D11.1C79-9:kraahbin.cpp,519,"WriteRow") Error writing to file install_dir/arch/rz/hist/instance_name/KRZattribute_group_ID errno = 1 | Check the size of the KRZattribute_group_ID historical file. This problem occurs when the file size exceeds the file size limit of the file system, for example, 2 GB on Linux for System x® RH4. This problem also causes high CPU usage and might cause the agent to fail. Use the following steps to solve this problem: |
A 0 (zero) is displayed in the following columns for the monitored Oracle RDBMS 10g instance: | 0 is the value that is reported by the Oracle database. The value of the free_mb column in the Oracle views v$asm_disk and v$asm_diskgroup is 0 if the value is queried from a database instance. This problem exists for Oracle RDBMS 10g. For detailed information, see Oracle metalink 294325.1. The free_mb attribute value is the free space in an ASM DISKGROUP (V$ASM_DISKGROUP) or in an ASM DISK (V$ASM_DISK). Configure the agent to connect to an ASM instance; the correct values are then displayed in the Unused Capacity and % Free columns of the ASM Disk Group Capacity workspace under the ASM subnode, and in the Unused Capacity and % Free columns of the ASM Disk Capacity workspace under the ASM subnode. |
The memory usage of the Oracle Database Extended agent processes, krzstart or krzclient, increases continually when the Oracle Database Extended agent instance has inactive database connections. | Configure Oracle Database Extended agent instances with the Oracle database or Oracle instant client version 11.1.0.6 or later. |
Processes of the Oracle Database Extended agent consume a high amount of CPU. | |
I cannot find my queries. | Agents that include subnodes display their queries within the element in the Query Editor list that represents the location of the attribute group. The queries are most often found under the name of the subnode, not the name of the agent. |
Historical data collection and the situation that uses the TOP SQL attribute group do not work. | To enable historical collection for the TOP SQL attribute group, configure historical collection. Define a situation to monitor the TOP SQL workspace, and add the following additional attributes to the situation formula: Begin Hour Before Current, End Hour Before Current, Order By, and Row Order. See Attributes in each attribute group for attribute descriptions. The following workspaces use the Order By attribute value indicated: |
After upgrading to V6.3.1 Fix Pack 1, the subnode ID is truncated from 25 to 24 characters; the original 25-character subnode turns grey and a new 24-character subnode is displayed. | This behavior is expected. In 6.3.1-TIV-ITM_KRZ-IF0001, the agent reduced the maximum subnode ID length to 24 characters to avoid a known APAR for ITM622FP4: "pure event cannot fire when subnode name length is 32 characters". To avoid the truncation, reduce the length of the database connection name, agent instance name, or host name; alternatively, see Changing default naming conventions for database connections for more information. |
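The physical-port formula described in the IP.PIPE row of the table above, (BASE_PORT + 4096*N), can be sanity-checked with a small sketch. This is only an illustration of the arithmetic; BASE_PORT 1918 is the documented default:

```shell
# Sketch: derive physical IP.PIPE ports from the documented formula
# BASE_PORT + 4096*N, where N=0 is the Tivoli Enterprise Monitoring Server
# and N=1..15 are other IP.PIPE processes on the same system image.
BASE_PORT=1918
for N in 0 1 2 15; do
  echo "N=$N port=$((BASE_PORT + 4096 * N))"
done
```

With the default BASE_PORT of 1918 this prints ports 1918, 6014, 10110, and 63358. It also illustrates why the slots stop at N=15: a 16th non-hub slot (1918 + 4096*16 = 67454) would exceed the 16-bit TCP port range.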
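The krzagent historical-file row in the table above suggests checking whether a KRZ history file has reached the file-system size limit. A minimal sketch of such a check follows; the directory layout install_dir/arch/rz/hist/instance_name and the file name KRZRZSUMM are placeholders, and the demo directory under /tmp is made up for illustration:

```shell
# Sketch: list KRZ historical files larger than 2 GB, the example limit
# cited for Linux for System x RH4. HIST_DIR stands in for the real
# install_dir/arch/rz/hist/instance_name directory.
HIST_DIR="${HIST_DIR:-/tmp/krz_hist_demo}"
mkdir -p "$HIST_DIR"
: > "$HIST_DIR/KRZRZSUMM"                  # empty stand-in history file for the demo
find "$HIST_DIR" -name 'KRZ*' -size +2G -print   # prints nothing for the empty demo file
```

On a real system, point HIST_DIR at the agent's hist/instance_name directory; any file the find command prints is a candidate cause of the WriteRow errors and high CPU usage described in the table.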
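The last row of the table describes subnode IDs being truncated from 25 to 24 characters after 6.3.1-TIV-ITM_KRZ-IF0001. The effect can be sketched as a simple string truncation; the subnode ID below is hypothetical, not an ID format guaranteed by the product:

```shell
# Sketch: a 25-character subnode ID loses its last character when the
# agent enforces the 24-character maximum. The ID below is made up.
SUBNODE_ID="RZ:myhost_dbconn_inst_ab1"   # 25 characters
printf '%.24s\n' "$SUBNODE_ID"           # truncated to 24 characters
```

Shortening the database connection name, agent instance name, or host name, as the table advises, keeps the composed ID within 24 characters so no truncation occurs.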