What's new in 5.1

The Z Common Data Provider 5.1 documentation was updated in May 2024. Review this summary of changes for information about the updates.

Summary of new features in the May 2024 update

New currency support:
  • LOGREC
  • User CICS® dictionary records
  • SMF 42 subtypes 15, 16, 17, 18, and 19
  • SMF 90 subtype 37
  • SMF 99 subtypes 12 and 14
  • SMF 115 subtypes 5, 6, 7, and 216
  • New and updated data streams:
    • SMF_016_V2
    • SMF_023_V2
    • SMF_100_5_V2
    • SMF_102_QW106
    • SMF_102_QW402, SMF_102_QW402_DAT
    • SMF_102_QW411, SMF_102_QW411_DAT
    • SMF_102_QW412, SMF_102_QW412_DAT
    • SMF_102_QW172 (updated)
    • SMF_102_QW196 (updated)
    • SMF_102_QW365 (updated)
    • SMF_113_1_X
Enhancements to the Configuration Tool:

The setup script savingpolicy.sh now provides a step to automatically copy the needed configuration files from the IBM Z® Operational Log and Data Analytics installation directory to the working directory for the Z Common Data Provider Configuration Tool. This time-saving enhancement eliminates manual file transfer and ensures the Configuration Tool functions correctly.
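
For reference, running the script is a single step; the following invocation is a sketch, and the directory that you run it from depends on your installation:

    # Run the setup script for the Configuration Tool; paths and any
    # prompts depend on your installation
    ./savingpolicy.sh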

Enhancements to the System Data Engine:

You can set the new parameter IBM_MSG_MAXSIZE in the System Data Engine started task to specify the maximum size of logical messages that are sent to the Data Streamer. This capability gives you control over message sizes, ensuring compatibility with subscribers that have limitations on the maximum packet size they can handle.

The SET statement
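
For illustration, such a SET statement in the System Data Engine input might look like the following minimal sketch; the value shown is an example only, and the exact syntax and valid range are described in The SET statement:

    SET IBM_MSG_MAXSIZE = 524288;
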
Enhancements to the Log Forwarder:
The Log Forwarder now supports collection of the following data types:
  • RMF III STORM
  • Z Workload Scheduler Audit data from the UNIX System Services system log
Configuration reference for data gathered by Log Forwarder
Enhancements to the Data Collector:
The Data Collector now supports collection of the following data types:
  • RMF III STORM
  • WebSphere® SYSPRINT in Distributed Format data
  • WebSphere Liberty Console log
  • WebSphere USS Sysout data
  • WebSphere USS Sysprint in Distributed Format data
  • WebSphere Liberty log
  • Z Workload Scheduler Audit data from the UNIX System Services system log
  • WebSphere HPEL logs
Configuration reference for data gathered by Data Collector
Enhancements to the Data Streamer:
  • You can now enable the 'CSV Header Included' function to include CSV headers in the data that is transmitted to certain subscribers. This feature gives you more control over how streamed data is formatted for those subscribers.
  • You can now use the Data Streamer to consume log data from Apache Kafka topics and stream it to other subscribers.
Enhancements to TLS connections between Data Streamer and its subscribers:
  • You can now renew an expired or expiring certificate in the keystore or truststore files of the Data Receiver and Data Streamer.
  • You can now set up secure communications between the Data Streamer and its subscribers with one-way or two-way TLS by using a RACF® key ring.

Summary of new features in the November 2023 update

New currency support:
  • CICS journal records
  • SMF 1154 subtype 84
  • SMF 1154 subtype 113
  • SMF 1154 subtype 114

Enhancements to secure communications:

The structure and format of the keystore files have been updated to eliminate the impact of the version upgrade from Java™ 8 to Java 11.

Enhancements to the Log Forwarder:

The configuration now supports applying the same policy to collect data from different NetView® domains in different LPARs.

Enhancements to the Data Collector:

The Data Collector now supports collection of WebSphere Sysout data.
Configuration reference for data gathered by Data Collector

Summary of new features in the May 2023 update

Currency support:
  • The record definitions of MQS_115_1 and MQS_115_2 are updated to accommodate the latest log manager statistics and shared message data set (SMDS) statistics in IBM® MQ 9.3.
  • The record definitions of IMS_07 and IMS_56FA are updated to accommodate the latest log records X'07' and X'56FA' in IBM Information Management System 15.3.
  • The SMF_1154_97 data stream and its record definition are updated to support new fields in SMF 1154 subtype 97.
SMF data stream reference.
Enhancements to the Configuration Tool:
  • The Configuration Tool is redesigned and enhanced to support the collection of a comprehensive set of data types by the Data Collector.
  • A new file <policy_name>.summary that contains an overview of the data streams and subscribers defined in the policy is now generated when you create or re-save a policy. This update enables you to quickly view a policy by referring to the summary file.
Enhancements to the System Data Engine:
  • The System Data Engine now opens all output data sets based on the policy file or the definition members specified in the HBOIN DD concatenations. This update enables the RLSE option in batch jobs that process records from SMF dump data sets to release any unused space, regardless of whether data has been written.
  • When processing CICS CMF records, a warning message is now displayed in real time if no matching dictionary record is found. Also, statistics are provided to indicate the number of missed CICS CMF records during the previous collection cycle. This update enhances serviceability and enables prompt actions to be taken.
  • The System Data Engine started task is updated to synchronize with the Data Collector on topic name resolution.
For more information about the updated started task, see Customizing the System Data Engine started task to collect SMF and LOGREC data.
Enhancements to the Data Collector:
  • On the Configuration Tool web interface, you can generate the policy file <policy>.collection-config.json under the Policy for streaming data to Apache Kafka through Data Collector section. This enhancement makes defining policies for the Data Collector more efficient and intuitive.
  • The Data Collector now supports the collection of a broader range of data streams from the following sources:
    • z/OS SYSLOG
    • SMF
    • RMF Monitor III report
    • Job log
    • z/OS UNIX log file
    • Entry-sequenced VSAM cluster
    • z/OS sequential data set
    • IBM Z NetView messages
    • IBM WebSphere Application Server for z/OS HPEL log
  • The configuration of the application.properties file is optional unless further customizations are required.
  • In the Global setting window, you can now define User defined resume point to set the resume point for collecting OPERLOG data and RMF III report data.
  • In the Global setting window, you can now define Save file threshold to update the size of the staging file for storing unsent data when Kafka is down.
  • If you updated the policy files in the Configuration Tool, you can issue the MVS MODIFY command to the address spaces of the Data Collector to load the updated policy files dynamically.
Enhancements to the Data Streamer:
  • The workload report function is now available for you to generate a statistics report that records how much data has been received by the Data Streamer and the Java heap usage. This capability enables you to quickly diagnose Data Streamer storage issues.
  • The message ID and message text of a z/OS SYSLOG record are separated into individual fields for Humio to support easier data analysis based on message ID.
For more information about the workload report function, see Enabling the workload report function for the Data Streamer.
Enhancements to the security communications between the Data Streamer and its subscribers:
  • New scripts are provided for setting up one-way or two-way Transport Layer Security (TLS) authentication between the Data Streamer and the Data Receiver.
  • A migration script is provided for eliminating the migration impact if you manually configured two-way TLS (mutual TLS) authentication and want to use the new scripts to configure two-way TLS authentication.
A new step-by-step procedure is available to show how to configure the Z Common Data Provider components to establish secure communications with the Apache Kafka brokers via Simple Authentication and Security Layer (SASL). Configuring SASL authentications with Apache Kafka
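
For context, SASL settings for an Apache Kafka client are typically expressed as standard Kafka client properties like the following sketch; the PLAIN mechanism and the credentials are placeholders, and the Z Common Data Provider-specific steps are in Configuring SASL authentications with Apache Kafka:

    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<user>" password="<password>";
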
A new section is available to show how to enable secure communications for the Z Common Data Provider by using Application Transparent Transport Layer Security (AT-TLS).
The configuration section for the Data Collector is now restructured and simplified in line with the Configuration Tool enhancements. You can refer to the data stream configuration to obtain details on the configuration values that you can update in the Configure Data Resources window.

Summary of new features in the November 2022 update

New currency support:
  • New SMF_102 data streams
  • New SMF_110 data streams
  • New SMF_111 data streams
  • New SMF_124 data streams
  • SMF_125_1
  • MQS_115_QEST
For more information about the newly supported data streams, see SMF data stream reference.
If you updated the policy files in the Configuration Tool, you can issue the MVS MODIFY command to the address spaces of the System Data Engine, Log Forwarder, and Data Streamer on z/OS to load the updated policy files dynamically. Refreshing policy files for z/OS address spaces
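
An MVS MODIFY command has the general form F jobname,parameters. The following sketch is hypothetical; the job name is a placeholder and the REFRESH option is an assumption, so check Refreshing policy files for z/OS address spaces for the actual syntax:

    F <jobname>,REFRESH
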
You can group Sysplex-level resources to collect them on a given LPAR.
Enhancements to the System Data Engine:
  • You can view the SMF record exit with a MODIFY command.
  • You can set the maximum size of logical messages that are sent to the Data Streamer, which gives you control over the limit.
Enhanced configuration for the Log Forwarder:
  • To avoid the Java OutOfMemory issue, new parameters are available to set the heap value and the maximum heap value that are used by the Log Forwarder Java™ application.
See the parameters DEFAULT_HEAP and MAXIMUM_HEAP in Customizing the Log Forwarder started task to collect z/OS log data.
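
For illustration, the two parameters might be set as follows; the sizes are example values only, and where to set them is described in the linked topic:

    DEFAULT_HEAP=512m
    MAXIMUM_HEAP=2048m
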
Simplified configuration for the Log Forwarder:
  • In the Configuration Tool, you only need to update the values for Discovery Interval and Pattern Discovery Interval.
  • The Sample1.zlf.conf file is no longer provided for the Log Forwarder. If you resave a policy that was created before you deploy the Z Common Data Provider PTF UJ09384, the Sample1.zlf.conf file is renamed to Sample1.zlf.conf.hidden and is not updated.
  • You do not need to copy the Log Forwarder configuration files to the environment directory.
Enhancements to the Data Collector:
  • You can specify the Apache Kafka topic name for a group of SMF data types when you create a policy for streaming data to Apache Kafka through Data Collector.
  • JSON format is supported for the output of RMF III data that is collected by the Data Collector.
  • You can enable dynamic tracing for the Data Collector without restarting the Data Collector.
  • RMF III CRYOVW report is supported.
Newly supported or enhanced subscribers:
  • A new analytics platform, Fluentd, is supported.
  • You can stream OMEGAMON® data to Instana or to a different Kafka.
  • To improve high availability, the Data Streamer now supports backup subscribers. If the primary subscriber server is not available, data is streamed to the backup subscriber servers. You can also reset to the primary server when it becomes available again.

Summary of new features in the May 2022 update

New currency support:
  • New SMF_030 data streams
  • SMF 132
  • SMF 1153
  • SMF 1154
  • IMS records X'40' subtype 1, X'45' subtypes 2-8, and X'47'
  • Db2® 102 Class 7 statistics
  • RMF III CRYOVW report type
  • z/OS sequential data set
For more information about the newly supported data streams, see the data stream reference topics.
You can stream OMEGAMON data from Apache Kafka to analytics platforms like Splunk, the Elastic Stack, and Humio. Streaming OMEGAMON data from Apache Kafka to analytics platforms
You can create a policy to stream OMEGAMON data with a script. By running the shell script CDPParseYaml2Policy.sh, you can generate policies to stream OMEGAMON data without using the Configuration Tool.

See Creating a policy to stream OMEGAMON data.
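
As a hypothetical sketch, an invocation might look like the following; the YAML file name is a placeholder, and the expected content of the YAML definition is described in the linked topic:

    ./CDPParseYaml2Policy.sh omegamon-policy.yaml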

The Data Streamer can read and stream RMF III reports from Kafka topics to the supported subscribers. When you configure the Data Streamer, you must update the environment variables SYSLOG_TOPIC_NAME and RMF_TOPIC_NAME in the procedure HBODSPRO.

See Customizing the Data Streamer started task.
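
For illustration, the two variables might be set like the following in the environment that HBODSPRO uses; the topic names are placeholders for the Kafka topics in your installation:

    SYSLOG_TOPIC_NAME=<your-syslog-topic>
    RMF_TOPIC_NAME=<your-rmf3-topic>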

The Configuration Tool is enhanced to support creating policies to stream data to Apache Kafka through Data Collector. Managing policies for the Data Collector
The Data Collector and Log Forwarder remove the 500 KB buffer size limitation on SYSLOG messages and can now stream large (over 500 KB) SYSLOG messages by allocating an extra data buffer instead of discarding the message. If you use the Data Collector to stream SYSLOG data, a new parameter FULLDATA is available to decide whether to allocate the extra data buffer to collect large (over 500 KB) SYSLOG messages.

See Customizing the Data Collector started task to collect SMF data and log data.
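
As a hedged sketch, the parameter might be specified as follows, assuming it takes a Y/N value; the exact syntax and placement are described in the linked topic:

    FULLDATA=Y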

Collection of OPERLOG data can be resumed with a warm start of the Data Collector. When you use the Data Collector to stream OPERLOG data, a new parameter Start is available for you to specify the start mode of the Data Collector. A warm start resumes data collection where it was previously stopped, while a cold start starts data collection anew.

See Customizing the Data Collector started task to collect SMF data and log data.
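
For illustration, a warm start might be requested as follows; the value format is an assumption, and the exact syntax is described in the linked topic:

    Start=WARM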

The Data Collector supports specifying a prefix for the configuration files to distinguish between different policies. When you use the Data Collector to collect data, a new parameter POLICY is available for you to specify the prefix of the Data Collector configuration files.

See Customizing the Data Collector started task to collect SMF data and log data or Configuring the Data Collector to collect data in batch mode.
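
As a hypothetical example, setting the prefix as follows would select the configuration files whose names begin with that prefix; the value PROD is a placeholder:

    POLICY=PROD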

IBM Z Operational Log and Data Analytics now supports a standardized and extensible naming schema for Apache Kafka topics.
Exact file names for the log records that you send to subscribers are supported even when wildcard characters are used in the names. Example:

If you configure Common Data Provider to collect and send all logs under the /tmp/logs directory to a subscriber, with the file path set as /tmp/logs/logs-2021*.log, you can get the exact file names for the log records in the subscriber, for example, /tmp/logs/logs-2021-05-26.log.