Performance logging is designed to assist when you are troubleshooting
performance issues. The reports are generated by a madconfig target
and based on performance logs that are gathered by the application
server.
The date and time that the report was started and stopped are recorded,
along with the total amount of time that the reporting process ran.
The Instance Summary section identifies:
- Host name - identifies the name of the system on which the instance
is running.
- Instance name - identifies the instance name.
- Operational server threading - shows the average, median, minimum,
maximum, and 95th percentile of free contexts.
- Throughput data - shows the number of transactions that were processed
during the report interval, the average number of interactions per
second, and the average number of interactions per millisecond.
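The two throughput rates follow directly from the transaction count and the length of the report interval. A minimal sketch with hypothetical numbers (a 300-second interval processing 48,600 transactions):

```python
# Hypothetical values: 48,600 transactions in a 300-second report interval.
transactions = 48_600
interval_seconds = 300.0

# Average interactions per second, then per millisecond.
interactions_per_second = transactions / interval_seconds
interactions_per_ms = interactions_per_second / 1000.0

print(interactions_per_second)  # 162.0
print(interactions_per_ms)      # 0.162
```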
For each interaction that was called during the report interval,
the Instance Ixn Summary section shows:
- Count - the number of times this interaction was called during
the interval.
- Total seconds - the total number of seconds this interaction took
to process during the interval.
- Total latency - the amount of time, in milliseconds, that the interaction
took to process, shown for each percentile.
In the Detailed Instance IXN Latency section, you can see the following
for each interaction:
- Count - the number of times the interaction was called during
the interval.
- Total latency - the average, median, and 95th percentile, in milliseconds,
of the amount of time the interaction took to process.
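These latency statistics can be reproduced from raw per-call samples. The exact percentile convention that the report uses is not documented here, so the sketch below uses Python's inclusive method on hypothetical data:

```python
import statistics

# Hypothetical per-call latencies for one interaction, in milliseconds.
latencies_ms = [42, 38, 45, 41, 39, 44, 40, 43, 90, 41]

count = len(latencies_ms)
avg = statistics.mean(latencies_ms)
med = statistics.median(latencies_ms)
# 95th percentile; the inclusive method is an assumption about the convention.
p95 = statistics.quantiles(latencies_ms, n=100, method='inclusive')[94]
```

Note how the single 90 ms outlier pulls the 95th percentile well above the median, which is exactly the kind of spike the report is designed to surface.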
The Detailed Instance IXN Payload section
is used to determine the number of attributes that are returned in
an interaction. The greater the number of attributes that are configured
to be returned, the longer the interaction takes to process. For each
interaction that was called during the interval, you can see:
- Input source count - the average, median, and maximum number of
attribute inputs from the source.
- Input segment count - the average, median, and maximum number
of attribute inputs from the segment.
- Input member count - the average, median, and maximum number of
attribute inputs per member.
- Input row count - the average, median, and maximum number of attribute
inputs per row.
- Output member count - the average, median, and maximum number
of attribute outputs per member.
- Output row count - the average, median, and maximum number of
attribute outputs per row.
The Detailed Queue Management section shows
the performance of the entity manager (ENTMNGMEM) and relationship
linker (RELLINKER) processes for each entity type that is defined
for the instance.
For the entity manager, you see:
- Count - the number of times the entity management process ran
during the interval.
- Total time - the total amount of time, in milliseconds, that it
took for the entity manager to run during the interval.
- Total latency - the average, median, and percentile, in milliseconds,
of the time it took to process each entity.
- Candidate slope - this metric measures the throughput capabilities
(candidates that are selected versus candidate selection time) of
the MDM supporting infrastructure. Lower numbers indicate better database
and storage subsystem performance.
- Bucket selection - the average, median, and percentile, in milliseconds,
of the bucket selection process.
- Candidate match - the average, median, and percentile, in milliseconds,
of the candidate matching process.
- Bucket count - the average, median, and percentile number of buckets
that were created.
- Candidate count - the average, median, and percentile number of
candidates that were processed.
- Matched count - the number of matched candidates that are processed
during the interval.
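The candidate slope metric relates candidate selection time to the number of candidates selected, so lower values mean the database and storage subsystem are returning candidates faster. The report's exact formula is not documented here; one plausible reading, sketched with hypothetical per-run figures, is milliseconds of selection time per candidate:

```python
# Hypothetical (candidates_selected, selection_time_ms) pairs, one per run.
runs = [(120, 36.0), (200, 58.0), (80, 26.0), (150, 44.0)]

total_candidates = sum(count for count, _ in runs)
total_time_ms = sum(time_ms for _, time_ms in runs)

# "Slope" read as selection time per candidate; lower is better.
slope = total_time_ms / total_candidates
```

Under this reading, a slope that grows over time would suggest that candidate selection is degrading even if candidate volumes are steady.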
For the relationship linker:
- Count - the number of times the relationship linker process ran
during the interval.
- Total time - the total amount of time, in milliseconds, that it
took to process relationships during the interval.
- Total latency - the average, median, and percentile, in milliseconds,
of the time it took to process each relationship.
- Old links count - the number of previously linked records that
were reprocessed during the interval.
- New links count - the number of new links that are created during
the interval.
- New tasks count - the number of new tasks that are created during
the interval.
Tip: Percentiles are an indication of system stability.
If the percentile number is close to the average, the system
is considered stable. A percentile number that is significantly lower
or higher than the average indicates a spike, and you should monitor
system activity more closely.
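The stability heuristic in the tip can be expressed as a simple check: flag a metric when its percentile diverges from its average by more than a chosen fraction. A minimal sketch; the 50% threshold is a hypothetical choice, not a documented value:

```python
def is_spike(average: float, percentile: float, threshold: float = 0.5) -> bool:
    """Flag a spike when the percentile deviates from the average
    by more than `threshold`, expressed as a fraction of the average."""
    if average == 0:
        return percentile != 0
    return abs(percentile - average) / average > threshold

# Stable: the percentile is close to the average.
print(is_spike(40.0, 44.0))  # False
# Spike: the percentile is far above the average; monitor more closely.
print(is_spike(40.0, 90.0))  # True
```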