What's new in 5.1
The Z Common Data Provider 5.1 documentation was updated in May 2024. Review this summary of changes for information about the updates.
Summary of new features in the May 2024 update
Feature | More information |
---|---|
New currency support: | |
Enhancements to the Configuration Tool: The setup script | |
Enhancements to the System Data Engine: You can set the new parameter `IBM_MSG_MAXSIZE` in the System Data Engine started task to specify the maximum size of the logical messages that are sent to the Data Streamer. This capability gives you control over message sizes, ensuring compatibility with subscribers that limit the maximum packet size they can handle. | The SET statement |
Enhancements to the Log Forwarder: The Log Forwarder newly supports collection of the following data types: | Configuration reference for data gathered by Log Forwarder |
Enhancements to the Data Collector: The Data Collector newly supports collection of the following data types: | Configuration reference for data gathered by Data Collector |
Enhancements to the Data Streamer: | |
Enhancements to TLS connections between the Data Streamer and its subscribers: | |
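The `IBM_MSG_MAXSIZE` parameter noted above is supplied to the System Data Engine through a SET statement in the started task input. The one-line fragment below is only a hedged sketch: the parameter name comes from this summary, but the value shown, its unit, and the exact statement syntax are assumptions; see the SET statement reference for the actual rules.

```
SET IBM_MSG_MAXSIZE = 512;
```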
Summary of new features in the November 2023 update
Feature | More information |
---|---|
New currency support: | |
Enhancements to security communications: The structure and format of the keystore files have been updated to eliminate the impact of the version upgrade from Java™ 8 to Java 11. | |
Enhancements to the Log Forwarder: The configuration now supports applying the same policy to collect data from different NetView® domains in different LPARs. | |
Enhancements to the Data Collector: The Data Collector now supports collection of WebSphere Sysout data. | Configuration reference for data gathered by Data Collector |
Summary of new features in the May 2023 update
Feature | More information |
---|---|
Currency support: | SMF data stream reference |
Enhancements to the Configuration Tool: | |
Enhancements to the System Data Engine: | For more information about the updated started task, see Customizing the System Data Engine started task to collect SMF and LOGREC data. |
Enhancements to the Data Collector: | |
Enhancements to the Data Streamer: | For more information about the workload report function, see Enabling the workload report function for the Data Streamer. |
Enhancements to secure communications between the Data Streamer and its subscribers: | |
A new step-by-step procedure shows how to configure the Z Common Data Provider components to establish secure communications with Apache Kafka brokers via Simple Authentication and Security Layer (SASL). | Configuring SASL authentications with Apache Kafka |
A new section shows how to enable secure communications for the Z Common Data Provider by using Application Transparent Transport Layer Security (AT-TLS). | |
The configuration section for the Data Collector is restructured and simplified in line with the Configuration Tool enhancements. Refer to the data stream configuration for details on the configuration values that you can update in the Configure Data Resources window. | |
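The SASL procedure for Apache Kafka mentioned above ultimately rests on standard Kafka client security properties. For general orientation only (the CDP-specific steps are in the linked topic), a Kafka client configured for SASL over TLS typically sets properties like the following; the broker host, port, mechanism, and credentials are placeholders, not values from this documentation:

```properties
# Standard Apache Kafka client properties for SASL over TLS.
# Host, port, and credentials below are illustrative placeholders.
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="cdp-user" \
  password="changeit";
bootstrap.servers=kafka.example.com:9093
```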
Summary of new features in the November 2022 update
Feature | More information |
---|---|
New currency support: | For more information about the newly supported data streams, see SMF data stream reference. |
If you updated the policy files in the Configuration Tool, you can issue the MVS MODIFY command to the address spaces of the System Data Engine, Log Forwarder, and Data Streamer on z/OS to load the updated policy files dynamically. | Refreshing policy files for z/OS address spaces |
You can group Sysplex-level resources to collect on a given LPAR. | |
Enhancements to the System Data Engine: | |
Enhanced configuration for the Log Forwarder: | See the parameters `DEFAULT_HEAP` and `MAXIMUM_HEAP` in Customizing the Log Forwarder started task to collect z/OS log data. |
Simplified configuration for the Log Forwarder: | |
Enhancements to the Data Collector: | |
Newly supported or enhanced subscribers: | |
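The heap parameters called out above are coded in the Log Forwarder started task configuration. The fragment below is only a sketch: the parameter names come from this summary, but the values and the exact member where they are coded are assumptions; the linked customization topic defines the real syntax and defaults.

```
# Hypothetical Log Forwarder heap settings; the 512M/2G values are assumptions.
DEFAULT_HEAP=512M
MAXIMUM_HEAP=2G
```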
Summary of new features in the May 2022 update
Feature | More information |
---|---|
New currency support: | See the following topics for more information about the newly supported data streams: |
You can stream OMEGAMON data from Apache Kafka to analytics platforms like Splunk, the Elastic Stack, and Humio. | Streaming OMEGAMON data from Apache Kafka to analytics platforms |
You can create a policy to stream OMEGAMON data with a script. | By running the shell script `CDPParseYaml2Policy.sh`, you can now generate policies to stream OMEGAMON data without using the Configuration Tool. |
The Data Streamer can read and stream RMF III reports from Kafka topics to the supported subscribers. | When you configure the Data Streamer, you must update the environment variables `SYSLOG_TOPIC_NAME` and `RMF_TOPIC_NAME` in the procedure `HBODSPRO`. |
The Configuration Tool is enhanced to support creating policies to stream data to Apache Kafka through the Data Collector. | Managing policies for the Data Collector |
The Data Collector and Log Forwarder remove the 500 KB buffer size limitation on SYSLOG messages and can now stream large (over 500 KB) SYSLOG messages by allocating an extra data buffer instead of discarding the message. | If you use the Data Collector to stream SYSLOG data, a new parameter `FULLDATA` is available to decide whether to allocate an extra data buffer to collect large (over 500 KB) SYSLOG messages. See Customizing the Data Collector started task to collect SMF data and log data. |
The data collection of OPERLOG data can be resumed with a warm start of the Data Collector. | When you use the Data Collector to stream OPERLOG data, a new parameter `Start` is available for you to specify the start mode of the Data Collector. A warm start resumes data collection where it previously stopped, while a cold start starts data collection anew. See Customizing the Data Collector started task to collect SMF data and log data. |
The Data Collector supports specifying a prefix for the configuration files to distinguish between different policies. | When you use the Data Collector to collect data, a new parameter `POLICY` is available for you to specify the prefix of the Data Collector configuration files. See Customizing the Data Collector started task to collect SMF data and log data or Configuring the Data Collector to collect data in batch mode. |
IBM Z Operational Log and Data Analytics now supports a standardized and extensible naming schema for Apache Kafka topics. | |
Exact file names for the log records that you send to subscribers are preserved even when wildcard characters are used in the configured file paths. | Example: If you configure Common Data Provider to collect and send all logs under the /tmp/logs directory to a subscriber with the file path set as /tmp/logs/logs-2021*.log, in the subscriber you can get the exact file names for the log records, for example, /tmp/logs/logs-2022-05-26.log. |
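Several of the May 2022 items introduce new Data Collector parameters (`FULLDATA`, `Start`, `POLICY`). The fragment below gathers them into a single hedged sketch: the parameter names come from this summary, but the value spellings, the keyword=value layout, and the prefix shown are assumptions; the linked customization topics define the actual syntax and valid values.

```
# Hypothetical Data Collector settings; names from this summary, values assumed.
POLICY=CDPPROD      # prefix that selects this policy's configuration files
FULLDATA=Y          # allocate an extra buffer for SYSLOG messages over 500 KB
Start=warm          # resume OPERLOG collection where it previously stopped
```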