What's new in 5.1

IBM Z® Operational Log and Data Analytics 5.1 documentation was updated in May 2024. Review this summary of changes for information about the updates.

Summary of new features in the May 2024 update

Table 1. Summary of new features in the May 2024 update
Updated part | Feature | More information
Z Common Data Provider
New currency support:
  • LOGREC
  • User CICS® dictionary records
  • SMF 42 subtype 15, 16, 17, 18, 19
  • SMF 90 subtype 37
  • SMF 99 subtype 12, 14
  • SMF 115 subtype 5, 6, 7, 216
New and updated data streams:
  • SMF_016_V2
  • SMF_023_V2
  • SMF_100_5_V2
  • SMF_102_QW106
  • SMF_102_QW402, SMF_102_QW402_DAT
  • SMF_102_QW411, SMF_102_QW411_DAT
  • SMF_102_QW412, SMF_102_QW412_DAT
  • SMF_102_QW172 (updated)
  • SMF_102_QW196 (updated)
  • SMF_102_QW365 (updated)
  • SMF_113_1_X
Enhancements to the Configuration Tool:

The setup script savingpolicy.sh now provides a step to automatically copy the needed configuration files from the IBM Z Operational Log and Data Analytics installation directory to the working directory for the Z Common Data Provider Configuration Tool. This enhancement eliminates manual file transfers, saving time and ensuring that the Configuration Tool functions correctly.

Enhancements to the System Data Engine:

You can set the parameter IBM_MSG_MAXSIZE in the System Data Engine started task to specify the maximum size of logical messages that are sent to the Data Streamer. This capability gives you control over message sizes, ensuring compatibility with subscribers that have limitations on the maximum packet size they can handle (see the sketch that follows).

The SET statement
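To illustrate the effect of the IBM_MSG_MAXSIZE limit, here is a minimal Python sketch that batches records into messages no larger than a configured maximum. IBM_MSG_MAXSIZE is the real parameter name, but the batching logic, function name, and the 4 MB value are illustrative assumptions, not the System Data Engine's implementation.

    # Illustrative sketch only: enforcing a maximum logical message size.
    IBM_MSG_MAXSIZE = 4 * 1024 * 1024  # assumed 4 MB limit for the sketch

    def batch_records(records, max_size=IBM_MSG_MAXSIZE):
        """Group encoded records into messages of at most max_size bytes.
        A single record larger than max_size is still emitted on its own."""
        message, used = [], 0
        for record in records:
            encoded = record.encode("utf-8")
            if used + len(encoded) > max_size and message:
                yield b"".join(message)  # flush the full message
                message, used = [], 0
            message.append(encoded)
            used += len(encoded)
        if message:
            yield b"".join(message)  # flush the final partial message

    for msg in batch_records(["record-1\n", "record-2\n"]):
        print(len(msg))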
Enhancements to the Log Forwarder:
The Log Forwarder now supports collection of the following data types:
  • RMF III STORM
  • Z Workload Scheduler Audit data from the UNIX System Services system log
Configuration reference for data gathered by Log Forwarder
Enhancements to the Data Collector:
The Data Collector now supports collection of the following data types:
  • RMF III STORM
  • WebSphere® SYSPRINT in Distributed Format data
  • WebSphere Liberty Console log
  • WebSphere USS Sysout data
  • WebSphere USS Sysprint in Distributed Format data
  • WebSphere Liberty log
  • Z Workload Scheduler Audit data from the UNIX System Services system log
Configuration reference for data gathered by Data Collector
Enhancements to the Data Streamer:
  • You can now enable the 'CSV Header Included' function to include CSV headers within data transmitted to certain subscribers. This feature significantly enhances the flexibility and customization of data handling in streaming scenarios.
  • You can now use the Data Streamer to consume log data from Apache Kafka topics and stream it to other subscribers, as shown in the sketch after this list.
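The consume-and-forward flow is comparable to the following Python sketch that uses the kafka-python package. The topic name, broker address, and forwarding function are hypothetical placeholders; the actual work is done by the Data Streamer itself.

    # Conceptual sketch only: consume log data from a Kafka topic and
    # forward it to another subscriber. All names are hypothetical.
    from kafka import KafkaConsumer  # pip install kafka-python

    def forward_to_subscriber(payload: str) -> None:
        # Stand-in for the downstream send (for example, an HTTP POST).
        print(payload)

    consumer = KafkaConsumer(
        "zcdp.syslog",                    # hypothetical topic name
        bootstrap_servers="broker:9092",  # hypothetical broker address
        value_deserializer=lambda v: v.decode("utf-8"),
    )

    for record in consumer:
        forward_to_subscriber(record.value)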
Enhancements to TLS connections between Data Streamer and its subscribers:
  • You can now renew an expired or an expiring certificate in the keystore or truststore files of the Data Receiver and Data Streamer.
  • You can now set up secure communications between the Data Streamer and its subscribers with one-way or two-way TLS by using a RACF® key ring (see the sketch after this list).
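Conceptually, the difference between the two modes is whether the client also presents its own certificate. The following Python ssl sketch shows that distinction in generic terms; it is not the product's setup procedure, and the file names are placeholders (on z/OS, a RACF key ring takes the place of these files).

    import ssl

    # One-way TLS: the client verifies the server against a trusted CA.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile="truststore.pem")  # placeholder

    # Two-way (mutual) TLS: the client additionally presents its own
    # certificate so that the server can authenticate the client as well.
    ctx.load_cert_chain(certfile="client-cert.pem",  # placeholder
                        keyfile="client-key.pem")    # placeholder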
Reference updates:
  • A table is now provided to help you understand the data format of the Apache Kafka records for the SMF data that is processed by the Data Collector.
  • You can now refer to the list of configuration values that you can update for the Z Workload Scheduler Audit data from the UNIX System Services system log data stream.
Documentation

The Z Common Data Provider deployment guide now features a structured approach and clear instructions, helping you to streamline the deployment process and save valuable time.

Deploying the Z Common Data Provider
Z Data Analytics Platform
Role-based access control and data segregation by tenant:
  • You can now use the new role-based access control and data segregation by tenant features to exercise fine-grained control over which data users can access and which actions they can perform with that data.
  • A new section is available to provide information that is related to workspace functionality, workspace access at login, and steps to switch workspaces after login.
New GUI-based alerting and notification features:
  • The Z Data Analytics Platform now offers a new web UI-based alerting and notification capability.
  • The new capability offers great configuration flexibility as well as compatibility with the new data segregation feature.
  • It is offered as an alternative to the existing event forwarding capability that is built into the Problem Insights server.
Serviceability enhancement:

The kafka-topics command for the OCI containers now provides a broader set of capabilities to improve serviceability.

Command reference for OCI containers
Support for z/OS Container Extensions:
The Z Data Analytics Platform can now be deployed on z/OS Container Extensions (zCX).
Note: zCX Foundation for Red Hat OpenShift is currently not supported.
Planning for deployment of the Z Data Analytics Platform
Splunk
New IBM® MQ dashboards:

New IBM MQ for z/OS queue statistics dashboards are now available for the Splunk platform. You can leverage these new out-of-the-box dashboards to view queue-level statistics, such as queue depth, message put and get counts, average time on queue, input and output handle counts, and numerous message failure statistics, for all queue sharing groups and queue managers.

Z Data Analytics Platform, the Elastic Stack, and Splunk

Enhanced data support:

Enhanced data support is now available with Db2® for z/OS SMF 100 and SMF 101 curated data records.

Summary of new features in the November 2023 update

Table 2. Summary of new features in the November 2023 update
Updated part | Feature | More information
Z Common Data Provider
New currency support:
  • CICS journal records
  • SMF 1154 subtypes 84, 113, and 114

Enhancements to secure communications:

The structure and format of the keystore files have been updated to eliminate the impact of the version upgrade from Java™ 8 to Java 11.

Enhancements to the Log Forwarder:

The configuration now supports applying the same policy to collect data from different NetView® domains in different LPARs.

Enhancements to the Data Collector:

The Data Collector now supports collection of WebSphere Sysout data.
Configuration reference for data gathered by Data Collector
Software containers
Enhancements to the management of authentication service:
  • You can now use Keycloak to delegate authentication to a user authentication provider that uses the Lightweight Directory Access Protocol (LDAP).
  • For both IBM Z Anomaly Analytics and IBM Z Operational Log and Data Analytics, Keycloak provides support for multifactor authentication through various methods.

Enhancements to the logging configuration:

The logging of the OCI containers is now more flexible and easier to integrate with the underlying operating system. You can configure the logging driver to be either json-file or journald, based on your needs.
Configuring logging drivers
Major changes to the configuration files and commands:
  • Runtime configuration files have been moved into OCI volumes to remove dependency on host storage.
  • Commands for administering the software containers have been updated.
Z Data Analytics Platform, the Elastic Stack, and Splunk

New z/OS Connect Enterprise Edition API Requester dashboards:

New z/OS Connect Enterprise Edition API Requester dashboards are available on the Z Data Analytics Platform, the Elastic Stack, and Splunk. You can leverage these new out-of-the-box dashboards to view API requester data for z/OS, CICS Transaction Server for z/OS, and IMS for z/OS.
Elastic Stack
New scripted tooling to help efficiently deploy the configuration files in Logstash:
  • The scripted tooling allows you to automatically copy the Logstash configuration files for raw data.
  • The scripted tooling allows you to update the default values in the Elasticsearch connection definition in Logstash configuration files for curated and raw data.
Z Data Analytics Platform
Miscellaneous changes to the Z Data Analytics Platform:
  • A new reporting capability that can be used to create ad hoc and scheduled reports from dashboards and searches is available.
  • A new menu with options to log off and refresh your current session is provided to help you better manage your sessions.
  • The IBM Operations Analytics - Log Analysis data migration tool is no longer shipped. If you still need this tool, make sure to retain a copy from a prior fix pack image.
Reporting

Summary of new features in the May 2023 update

Table 3. Summary of new features in the May 2023 update
Updated part | Feature | More information
Z Common Data Provider
Currency support:
  • The record definitions of MQS_115_1 and MQS_115_2 are updated to accommodate the latest log manager statistics and shared message data set (SMDS) statistics in IBM MQ 9.3.
  • The record definitions of IMS_07 and IMS_56FA are updated to accommodate the latest log records X'07' and X'56FA' in IBM Information Management System 15.3.
  • The SMF_1154_97 data stream and its record definition are updated to support new fields in SMF 1154 subtype 97.
SMF data stream reference.
Enhancements to the Configuration Tool:
  • The Configuration Tool is redesigned and enhanced to support the collection of a comprehensive set of data types by the Data Collector.
  • A new file <policy_name>.summary that contains an overview of the data streams and subscribers defined in the policy is now generated when you create or re-save a policy. This update enables you to quickly view a policy by referring to the summary file.
Enhancements to the System Data Engine:
  • The System Data Engine now opens all output data sets based on the policy file or the definition members specified in the HBOIN DD concatenations. This update enables the RLSE option in batch jobs that process records from SMF dump data sets to release any unused space, regardless of whether data has been written.
  • When processing CICS CMF records, a warning message is now displayed in real-time if no matching dictionary record is found. Also, statistics are provided to indicate the number of missed CICS CMF records during the previous collection cycle. This update enhances serviceability and enables prompt actions to be taken.
  • The System Data Engine started task is updated to synchronize with the Data Collector on topic name resolving.
For more information about the updated started task, see Customizing the System Data Engine started task to collect SMF and LOGREC data.
Enhancements to the Data Collector:
  • On the Configuration Tool web interface, you can generate the policy file <policy>.collection-config.json under the Policy for streaming data to Apache Kafka through Data Collector section. This enhancement makes defining policies for the Data Collector more efficient and intuitive.
  • The Data Collector now supports the collection of a broader range of data streams from the following sources:
    • z/OS SYSLOG
    • SMF
    • RMF Monitor III report
    • Job log
    • z/OS UNIX log file
    • Entry-sequenced VSAM cluster
    • z/OS sequential data set
    • IBM Z NetView messages
    • IBM WebSphere Application Server for z/OS HPEL log
  • The configuration of the application.properties file is optional unless further customizations are required.
  • In the Global setting window, you can now define User defined resume point to set the resume point for collecting OPERLOG data and RMF III report data.
  • In the Global setting window, you can now define Save file threshold to update the size of the staging file for storing unsent data when Kafka is down.
  • If you updated the policy files in the Configuration Tool, you can issue the MVS MODIFY command to the address spaces of the Data Collector to load the updated policy files dynamically.
Enhancements to the Data Streamer:
  • The workload report function is now available for you to generate a statistics report that records how much data has been received by the Data Streamer and the Java heap usage. This capability enables you to quickly diagnose Data Streamer storage issues.
  • The message ID and message text of a z/OS SYSLOG record are separated into individual fields for Humio to support easier data analysis based on message ID.
For more information about the workload report function, see Enabling the workload report function for the Data Streamer.
Enhancements to the security communications between the Data Streamer and its subscribers:
  • New scripts are provided for setting up one-way or two-way Transport Layer Security (TLS) authentication between the Data Streamer and the Data Receiver.
  • A migration script is provided for eliminating the migration impact if you manually configured two-way TLS (mutual TLS) authentication and want to use the new scripts to configure two-way TLS authentication.
Problem Insights server

Enhancements to the deployment and management of the Problem Insights server

On Linux, the Problem Insights server is no longer a separately installable component. You can now deploy and manage the Problem Insights server as an OCI software container along with other container-based components included in Z Operational Log and Data Analytics.
Z Data Analytics Platform

Enhancements to the deployment and management scripts for the Z Data Analytics Platform

The scripts for deploying and managing the Z Data Analytics Platform are now merged into the OCI container deployment and management scripts that have been in use for Z Anomaly Analytics. This enhancement provides the ability to manage Z Operational Log and Data Analytics and Z Anomaly Analytics either jointly or separately.
Problem Insights server and Z Data Analytics Platform

Enhancements to the deployment script for common services

The deployment script is enhanced to exclusively support the installation of the common services. If you don't want to use the Z Data Analytics Platform, but need access to the Problem Insights server and authentication service for integration with Splunk or the Elastic Stack, you can now install only the common services you need.
Deploying the Z Data Analytics Platform and the common services
Problem Insights server and Z Data Analytics Platform

Enhancements to the security configuration

The Keycloak security realm (zdap) that was previously used to administer access to the Z Data Analytics Platform is now merged into the IzoaKeycloak security realm used for administering access to the Problem Insights server and to Z Anomaly Analytics components.
Managing authorization and authentication
Coexistence of Z Anomaly Analytics and Z Operational Log and Data Analytics Container-based components for Z Anomaly Analytics and Z Operational Log and Data Analytics can now share key components, simplifying deployment and management, reducing overall hardware requirements and eliminating potential port conflicts. Coexistence with IBM Z Anomaly Analytics
Z Data Analytics Platform and the Elastic Stack

New IBM MQ dashboards

New IBM MQ dashboards are available for the Z Data Analytics Platform and the Elastic Stack. You can leverage these new out-of-the-box dashboards to uncover IBM MQ insights and accelerate hybrid incident identification.
Documentation and digital assets

The IBM Z Operational Log and Data Analytics - Foundations badge is available!

Take the self-paced digital course to earn the badge and check out how IBM Z Operational Log and Data Analytics can enhance availability across a hybrid cloud environment with faster hybrid incident identification.

Digital course
A new step-by-step procedure is available to show how to configure the Z Common Data Provider components to establish secure communications with the Apache Kafka brokers via Simple Authentication and Security Layer (SASL). Configuring SASL authentications with Apache Kafka
A new section is available to show how to enable secure communications for the Z Common Data Provider by using Application Transparent Transport Layer Security (AT-TLS).
The configuration section for the Data Collector is now restructured and simplified in line with the Configuration Tool enhancements. You can refer to the data stream configuration to obtain details on the configuration values that you can update in the Configure Data Resources window.
Earlier releases of PDF manuals are available for download. PDF files

Summary of new features in the February 2023 update

Table 4. Summary of new features in the February 2023 update
Updated part | Feature | More information
Z Data Analytics Platform and the Elastic Stack

The configuration process now allows you to create index templates based on Z Common Data Provider policy files. These index templates provide improved ease of use for searches and graphing of non-curated (raw) data.
Z Data Analytics Platform

Enhancements are made to prevent shard exhaustion, search bucket exhaustion, and other performance issues for the Z Data Analytics Platform.

In the configuration file zdap_env.config, new properties are added for sharding management. You can customize shard allocation, search buckets, and other configurations to suit your business needs.

Documentation and digital assets

A new video library that contains a series of demo videos is available! The video library includes the following types of videos to help you quickly learn and adopt IBM Z Operational Log and Data Analytics:
  • Overview videos that show product features and benefits
  • End-to-end deployment demo videos that help you quickly deploy the major components on z/OS systems and distributed systems
  • Use case videos that show how you can resolve IT problems by using Z Operational Log and Data Analytics
Z Operational Log and Data Analytics video library

A self-paced digital course is available! The digital course features a well-organized information architecture and comprehensive multimedia assets. Whether you are being introduced to Z Operational Log and Data Analytics for the first time or looking to gain a working knowledge of the product, the new digital course is the entry point to starting that journey. It takes you through the product overview, architecture, end-to-end deployment, and use cases with various multimedia assets.

Through diverse interactive H5P content and demo videos, you can quickly gain holistic knowledge of Z Operational Log and Data Analytics through an informative and interactive content experience.

Z Operational Log and Data Analytics digital course
The documentation now includes detailed descriptions for dashboards on analytics platforms. You can learn which metrics are visualized in each dashboard.

Summary of new features in the November 2022 update

Table 5. Summary of new features in the November 2022 update
Updated part | Feature | More information
Z Common Data Provider
New currency support:
  • New SMF_102 data streams
  • New SMF_110 data streams
  • New SMF_111 data streams
  • New SMF_124 data streams
  • SMF_125_1
  • MQS_115_QEST
For more information about the newly supported data streams, see SMF data stream reference.
If you updated the policy files in the Configuration Tool, you can issue the MVS MODIFY command to the address spaces of the System Data Engine, Log Forwarder, and Data Streamer on z/OS to load the updated policy files dynamically. Refreshing policy files for z/OS address spaces
You can group sysplex-level resources to collect them on a given LPAR.
Enhancements to the System Data Engine:
  • You can view the SMF record exit with a MODIFY command.
  • You can set the maximum size of logical messages that are sent to the Data Streamer.
Enhanced configuration for the Log Forwarder:
  • To avoid Java OutOfMemory issues, new parameters are available to set the default heap size and the maximum heap size that are used by the Log Forwarder Java™ application.
See the parameters DEFAULT_HEAP and MAXIMUM_HEAP in Customizing the Log Forwarder started task to collect z/OS log data.
Simplified configuration for the Log Forwarder:
  • In the Configuration Tool, you only need to update the values for Discovery Interval and Pattern Discovery Interval.
  • The Sample1.zlf.conf file is no longer provided for the Log Forwarder. If you resave a policy that was created before you deployed the Z Common Data Provider PTF UJ09384, the Sample1.zlf.conf file is renamed to Sample1.zlf.conf.hidden and is no longer updated.
  • You do not need to copy the Log Forwarder configuration files to the environment directory.
Enhancements to the Data Collector:
  • You can specify the Apache Kafka topic name for a group of SMF data types when you create a policy for streaming data to Apache Kafka through Data Collector.
  • JSON format is supported for the output of RMF III data that is collected by the Data Collector.
  • You can enable dynamic tracing for the Data Collector without restarting the Data Collector.
  • RMF III CRYOVW report is supported.
Newly supported or enhanced subscribers:
  • A new analytics platform, Fluentd, is supported.
  • You can stream OMEGAMON® data to Instana or to a different Kafka.
  • To improve high availability, the Data Streamer now supports backup subscribers. If the primary subscriber server is not available, data is streamed to backup subscriber servers. You can also reset to the primary server when it becomes available again (see the sketch after this list).
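The failover behavior is comparable to the following Python sketch; the function, the subscriber objects, and the retry policy are illustrative assumptions, not the Data Streamer's actual logic.

    # Conceptual sketch only: primary/backup subscriber failover.
    def send_with_failover(payload, primary, backups):
        for subscriber in [primary, *backups]:
            try:
                subscriber.send(payload)  # stand-in for the actual transmit
                return subscriber         # remember which server accepted it
            except ConnectionError:
                continue                  # try the next backup server
        raise RuntimeError("no subscriber available")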
Problem Insights server
The Problem Insights server now supports email as a new event destination. Sending events to an email server
The Problem Insights server on z/OS now supports authentication and authorization through a Keycloak authentication service.
For events that are sent from the Problem Insights server for single message IDs or search strings, the search result string is now included in events that are sent to IBM Tivoli® Netcool®/OMNIbus, cloud-based event management services, or email. This feature enables you to view and act on the original source message.
Splunk platform
New IBM MQ dashboards are available for the Splunk platform. You can leverage these new out-of-the-box dashboards to uncover IBM MQ insights and accelerate hybrid incident identification.
IBM Z Operational Log and Data Analytics provides a Splunk application that can be installed into the Splunk Cloud environment as a private application. This provides faster delivery, increased security, and decreased environment complexity. Deploying the Z Operational Log and Data Analytics application in a Splunk Cloud environment
Z Data Analytics Platform
The Z Data Analytics Platform now supports Podman as an OCI container runtime and management environment. Podman supported: Planning for components on zCX or Linux on X or Z
The management scripts for the Z Data Analytics Platform are renamed.
  • The script dockerManageZdap.sh is renamed as dockerDeployZdap.sh.
  • The script ibmzdap.sh is renamed as dockerManageZdap.sh.
The Z Data Analytics Platform now provides a data cleanup utility for deleting data that is no longer needed. Deleting data that is no longer needed
Documentation and digital assets
The documentation now includes a detailed procedure for upgrading the product from earlier releases to the latest release. Upgrading to the latest release of Z Operational Log and Data Analytics
The documentation now includes a new deployment roadmap and an upgrade roadmap to help you better plan and complete tasks needed for deployment or upgrade.
A new video that introduces the features and benefits of IBM Z Operational Log and Data Analytics is available. Watch the video on IBM MediaCenter: Overview of IBM Z Operational Log and Data Analytics

Summary of new features in the May and June 2022 update

Table 6. Summary of new features in the May and June 2022 update
Feature | More information
New features in the June update
Enhanced authentication mechanism for the Problem Insights server

On Linux, the Problem Insights server now uses Keycloak for access control and for managing user IDs and roles. This enhancement enables you to integrate the Problem Insights server into the advanced identity and access management features that are offered by Keycloak.
New setup utility for deploying the Elastic Stack application into a containerized environment

If you run and manage the Elastic Stack on Docker, a setup utility is provided for you to install the IBM Z Operational Log and Data Analytics Elastic Stack application into the containerized environment. The setup utility enables you to quickly create a reference implementation of the predefined dashboards and saved searches on the Elastic Stack running in Docker containers. You can adapt this reference implementation as needed to your own container management environment.

See Deploying the Z Operational Log and Data Analytics Elastic Stack application into the Elastic Stack Docker containers.

Two new fields, MESSAGEPREFIX and SUBSYSTEM, for the Z Data Analytics Platform and the Elastic Stack
  • MESSAGEPREFIX: For records with a message ID, this field contains the first three characters of the message ID, which is useful for identifying the owning product (see the sketch after this list).
  • SUBSYSTEM: This field identifies the subsystem, such as CICS, Db2, or IBM MQ, that generated the record.
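For example, deriving the prefix from a message ID amounts to taking its first three characters, as in this small Python sketch; the sample message IDs are ordinary z/OS-style IDs used for illustration.

    # MESSAGEPREFIX is the first three characters of the message ID,
    # which typically identify the owning product.
    for message_id in ["IEF403I", "DFHSI1517", "DSNL004I"]:
        print(message_id, "->", message_id[:3])
    # IEF -> z/OS, DFH -> CICS, DSN -> Db2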
Dashboards, saved searches, and Logstash pipelines support all releases of the Elastic Stack 7.x and 8.x that are still supported by Elastic NV.

Planning for deployment of the Elastic Stack platform
New features in the May update - Z Common Data Provider PTF UJ08472, also included in the June update
New currency support:
  • New SMF_030 data streams
  • SMF 132
  • SMF 1153
  • SMF 1154
  • IMS log records X'40' subtype 1, X'45' subtypes 2-8, and X'47'
  • Db2 102 Class 7 statistics
  • RMF III CRYOVW report type
  • z/OS sequential data set
For more information about the newly supported data streams, see the data stream reference topics.
You can stream OMEGAMON data from Apache Kafka to analytics platforms like Splunk, the Elastic Stack, and Humio.

Streaming OMEGAMON data from Apache Kafka to analytics platforms
You can create a policy to stream OMEGAMON data with a script. By running the shell script CDPParseYaml2Policy.sh, you can now easily generate policies to stream OMEGAMON data without using the Configuration Tool.

See Creating a policy to stream OMEGAMON data.

The Data Streamer can read and stream RMF III reports from Kafka topics to the supported subscribers. When you configure the Data Streamer, you must update the environment variables SYSLOG_TOPIC_NAME and RMF_TOPIC_NAME in the procedure HBODSPRO.

See Customizing the Data Streamer started task.

The Configuration Tool is enhanced to support creating policies to stream data to Apache Kafka through the Data Collector.

Managing policies for the Data Collector
The Data Collector and Log Forwarder remove the 500 KB buffer size limitation for SYSLOG messages and can now stream large (over 500 KB) SYSLOG messages by allocating an extra data buffer instead of discarding the message. If you use the Data Collector to stream SYSLOG data, a new parameter FULLDATA is available to decide whether to allocate the extra data buffer to collect large (over 500 KB) SYSLOG messages.

See Customizing the Data Collector started task to collect SMF data and log data.

The data collection of OPERLOG data can be resumed with a warm start of the Data Collector. When you use the Data Collector to stream OPERLOG data, a new parameter Start is available for you to specify the start mode of the Data Collector. A warm start resumes data collection where it previously stopped, while a cold start begins data collection anew, as illustrated in the sketch that follows this entry.

See Customizing the Data Collector started task to collect SMF data and log data.
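A warm start works conceptually like the following Python sketch, which persists a resume point and reuses it on restart. The checkpoint file and logic are illustrative assumptions, not the Data Collector's implementation.

    import json, os

    CHECKPOINT = "resume-point.json"  # hypothetical checkpoint file

    def start_position(mode: str) -> int:
        """Warm start resumes from the last saved position;
        cold start begins from the start again."""
        if mode == "warm" and os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return json.load(f)["position"]
        return 0  # cold start: collect anew

    def save_position(position: int) -> None:
        with open(CHECKPOINT, "w") as f:
            json.dump({"position": position}, f)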

The Data Collector supports specifying a prefix for the configuration files to distinguish between different policies. When you use the Data Collector to collect data, a new parameter POLICY is available for you to specify the prefix of the Data Collector configuration files.

See Customizing the Data Collector started task to collect SMF data and log data or Configuring the Data Collector to collect data in batch mode.

IBM Z Operational Log and Data Analytics now supports a standardized and extensible naming schema for Apache Kafka topics.
Exact file names for the log records that you send to subscribers are supported even when wildcard characters are used in the configured file names. Example:

If you configure Common Data Provider to collect and send all logs under the /tmp/logs directory to a subscriber with the file path set as /tmp/logs/logs-2022*.log, the subscriber receives the exact file name for each log record, for example, /tmp/logs/logs-2022-05-26.log.
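The way a wildcard path resolves to concrete file names is the same idea as Python's glob matching, as this illustrative sketch shows:

    import glob

    # A wildcard file path resolves to exact file names, which is what
    # the subscriber now receives for each log record.
    for path in glob.glob("/tmp/logs/logs-2022*.log"):
        print(path)  # for example, /tmp/logs/logs-2022-05-26.log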