With z/OS at the core of your business, a wealth of data is at your disposal. By charting your journey to open data analytics, you can derive new insights and advantages from each transaction. The following capabilities allow you to accelerate this journey:
IBM Z Platform for Apache Spark allows multiple, disconnected data sources to be virtually integrated within a single application. Apache Spark is the enterprise engine for processing unified data and scaling integrations with business applications.
Python AI Toolkit for IBM z/OS makes hundreds of open source packages available on z/OS for your data science, machine learning, and deep learning needs. You can review and acquire the packages of your choice through an intuitive, secure IBM repository.
IBM® Data Virtualization Manager for z/OS® provides virtual, integrated views of data residing on IBM Z. It gives users and applications read/write access to IBM Z data in place, without having to move, replicate, or transform the data.
Build and train models anywhere, and deploy them on IBM Z and LinuxONE infrastructure.
IBM Z Platform for Apache Spark is a distribution of the open source, in-memory Apache Spark processing engine. Designed for big data, it enables z/OS users to combine the capabilities of Apache Spark with the advantage of analyzing business-critical data in place.
Apache Spark offers a number of benefits, including:
- Streamlined development. Users can leverage expertise with common programming languages, including Scala, Python, and SQL.
- Simplified data access. Apache Spark offers access to enterprise data with familiar tools, and its built-in libraries offer easy querying and fast responses.
- Wide algorithmic support. Users can rapidly develop and deploy all workloads, including machine learning, iterative, and batch.
- Accelerated analytics results. With in-memory data processing, users can rapidly derive results.
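To illustrate these benefits, the following minimal PySpark sketch joins records from two separate sources in memory and queries them with familiar SQL. It is an illustrative sketch, not a product sample: the CSV file, JDBC URL, credentials, and table and column names are assumptions, and it presumes PySpark and a suitable JDBC driver are available in your environment.

```python
# Illustrative sketch only: combine two disconnected sources and query them
# with Spark SQL. All file, connection, and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("CombineAndQuery").getOrCreate()

# Source 1: a CSV file, loaded into an in-memory DataFrame.
orders = (spark.read.option("header", True)
          .option("inferSchema", True)
          .csv("orders.csv"))                               # placeholder file
orders.createOrReplaceTempView("orders")

# Source 2: a relational table read over JDBC (driver must be on the classpath).
customers = (spark.read.format("jdbc")
             .option("url", "jdbc:db2://dbhost:446/LOCDB")  # placeholder URL
             .option("dbtable", "CUSTOMERS")                # placeholder table
             .option("user", "MYUSER")
             .option("password", "MYPASSWORD")
             .load())
customers.createOrReplaceTempView("customers")

# Query the unified, in-memory view with standard SQL.
top_customers = spark.sql("""
    SELECT c.NAME, SUM(o.AMOUNT) AS TOTAL
    FROM customers c
    JOIN orders o ON c.ID = o.CUSTOMER_ID
    GROUP BY c.NAME
    ORDER BY TOTAL DESC
    LIMIT 5
""")
top_customers.show()
spark.stop()
```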
Python AI Toolkit for IBM z/OS is a secure repository of popular open source packages for machine learning and data science. The interface lets users quickly browse hundreds of available options, read license and usage notes, and plan installation in a seamless flow.
Python AI Toolkit for IBM z/OS provides the following benefits:
- Unlocks vetted open source software with IBM supply chain security
- Provides industry-leading AI Python packages
- Enables a familiar, flexible, and agile delivery experience
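As a hedged illustration of the kind of workflow these packages enable, the short sketch below trains and scores a simple model on synthetic data. It assumes that numpy and scikit-learn are among the packages you have installed from the toolkit repository.

```python
# Illustrative sketch: a typical train/evaluate loop using packages that may be
# installed from the Python AI Toolkit repository (numpy and scikit-learn assumed).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small synthetic dataset standing in for transaction features and labels.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```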
IBM Data Virtualization Manager for z/OS unlocks IBM Z data using popular, industry-standard APIs. Mainframe data is virtualized and mapped in place, making live data instantly accessible. Applications can be enabled to update and modify data with minimal impact on IBM Z systems.
IBM Data Virtualization Manager for z/OS can deliver the following benefits:
- Create a single, integrated view of IBM Z and non-IBM Z data
- Minimize risks associated with Extract, Transform, and Load (ETL) data movement by keeping data in place within the security-rich IBM Z infrastructure
- Enable the modernization of batch applications with direct real-time read/write access to relational and traditional non-relational IBM Z data sources
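As an illustration of what access through industry-standard APIs can look like from a client application, the hedged Python sketch below queries a virtual table over JDBC with the jaydebeapi package (which requires a local JVM). The driver class, JDBC URL, credentials, driver JAR path, and table name are all placeholders; substitute the values supplied by your Data Virtualization Manager administrator.

```python
# Illustrative sketch only: read a virtual table through a JDBC connection.
# Every connection detail below is a placeholder, not a documented value.
import jaydebeapi

conn = jaydebeapi.connect(
    "com.example.dv.Driver",                      # placeholder driver class
    "jdbc:dv://dvm-host:1200",                    # placeholder JDBC URL
    ["MYUSER", "MYPASSWORD"],                     # placeholder credentials
    "/path/to/dv-jdbc-driver.jar",                # placeholder driver JAR
)
try:
    cursor = conn.cursor()
    # Read live mainframe data that is exposed as a relational (virtual) table.
    cursor.execute("SELECT * FROM VSAM_CUSTOMERS FETCH FIRST 10 ROWS ONLY")
    for row in cursor.fetchall():
        print(row)
finally:
    conn.close()
```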
Note that IBM Z Platform for Apache Spark works only if the prerequisite hardware and software are installed and operational.
The solution requires the use of one of the following IBM Z servers:
- IBM z16
- IBM z15 Models T01 or T02
- IBM z14 Models M01-M05
- IBM z14 Model ZR1
- IBM z13
- IBM z13s
The solution requires the following software:
- z/OS V2R3 or higher
- z/OS ICSF V2R1 or higher
- IBM 64-bit SDK for z/OS Java Version 8, SR7 or higher
- Bourne Again Shell (bash) version 4.3.48 or higher
For further detail about planning, see the 'Before you begin' section of the IBM Z Platform for Apache Spark Installation and Customization Guide.
The tasks detailed in this section will likely require the collaboration of the following IT roles:
- z/OS system programmer
- z/OS system programmer with UNIX skills
- Security administrator
- Network administrator
You can install IBM Z Platform for Apache Spark through a Custom-Built Product Delivery Offering (CBPDO), or through a SystemPac or ServerPac.
To install through a CBPDO, follow these steps:
To install through SystemPac or ServerPac, refer to the IBM Documentation chapter on ServerPac: Using the Installation Dialog.
For further information to support your installation process, refer to Program Directory for IBM Z Platform for Apache Spark, GI13-4318-00 (PDF).
Complete the following steps in order to configure IBM Z Platform for Apache Spark successfully:
You may access Python AI Toolkit for IBM z/OS through the provided link.
Plan key actions to enable the successful configuration and use of Python AI Toolkit for IBM z/OS.
Note: IBM Open Enterprise SDK for Python is a prerequisite for Python AI Toolkit for IBM z/OS. For more information, view the product page.
The pip package installer is required to pull packages from the repository into your local environment. Using the provided requirements.txt file offers the simplest installation experience. If you prefer to install individual packages, note that pip installs from pypi.org by default, but it can be configured to pull packages only from Python AI Toolkit for IBM z/OS.
To find the requirements.txt file and instructions to configure pip as described, refer to the Get Started tab of the application.
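As a minimal sketch of that configuration step (the actual index URL and procedure are on the Get Started tab), the snippet below uses pip's config subcommand to point pip at the toolkit repository. The URL shown is a placeholder.

```python
# Illustrative sketch: point pip at the Python AI Toolkit repository.
# The index URL is a placeholder; use the one shown on the Get Started tab.
import subprocess

TOOLKIT_INDEX = "https://example.ibm.com/python-ai-toolkit/simple"  # placeholder

# Writes the user-level pip configuration; an administrator could apply the
# same setting in the global pip configuration file for all users.
subprocess.run(
    ["python3", "-m", "pip", "config", "set", "global.index-url", TOOLKIT_INDEX],
    check=True,
)
```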
Once pip is properly configured, you can browse and install the available packages of your choice.
With the environment configured for use with Python AI Toolkit for IBM z/OS, installing packages from the repository only requires a few steps.
To install all the packages available, simply use the provided requirements.txt along with the pip install command.
Otherwise, if you want specific packages, browse the repository to find a package you would like to use, then copy the provided installation command and run it on your local system.
For further instructions to install packages as described, refer to 'Acquire packages' in the Get Started tab of the application.
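For illustration only, the sketch below runs both installation paths; the single package name is a placeholder, and the exact commands to copy are the ones shown in the application.

```python
# Illustrative sketch: install packages from the configured repository.
import subprocess

# Path 1: install everything listed in the provided requirements.txt.
subprocess.run(
    ["python3", "-m", "pip", "install", "-r", "requirements.txt"], check=True)

# Path 2: install a single package of your choice (name is a placeholder).
subprocess.run(["python3", "-m", "pip", "install", "numpy"], check=True)
```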
When installing with the given requirements.txt file, no configuration is required.
If individual packages are desired, you may opt to configure pip to perform the installation globally for all users.
For instructions to configure pip as described, refer to 'Set up environment' in the Get Started tab of the application.
While there are no other specific requirements for configuring Python AI Toolkit for IBM z/OS, you should periodically monitor the repository for updated package versions.
Complete the following prerequisite tasks prior to the installation and configuration of IBM Data Virtualization Manager for z/OS:
- Review required naming conventions
- Create server data sets
- Define security authorizations
- Configure Workload Manager (WLM)
- APF-authorize LOAD library data sets
- Optionally, copy your target libraries
For further detail about each prerequisite step, refer to the IBM Documentation chapter on Prerequisites.
Note that IBM Data Virtualization Manager for z/OS supports a broad range of data sources across a variety of platforms, including mainframe relational/non-relational databases, distributed data stores running on Linux, Unix, and Windows, and non-mainframe IBM databases such as IBM
For further detail, refer to the IBM Documentation chapter on Supported data sources.
Complete the following steps to successfully install IBM Data Virtualization Manager for z/OS:
Note that Data Virtualization Manager studio and JDBC Gateway also have a number of prerequisites spanning permissions, memory, storage, software, and more.
The following configuration steps are possible with the Data Virtualization Manager server:
Access Python AI Toolkit for IBM z/OS
Python AI Toolkit for IBM z/OS: FAQs
Overview of Python AI Toolkit for IBM z/OS
Getting started with Python AI Toolkit for IBM z/OS
IBM is evolving the Open Data Analytics offering to give customers newer versions of the IzODA software faster.
Access technical content for the planning, installation, configuration, and use of IBM Z Platform for Apache Spark
This IBM Redbooks publication presents an overview of the IBM Data Virtualization Manager for IBM z/OS® offering and the role it plays in accessing traditional non-relational data on IBM Z
Access technical content for IBM Data Virtualization Manager for z/OS
Access technical content for IBM Open Enterprise SDK for Python
Read blog posts about IBM Open Enterprise SDK for Python
Watch an overview about IBM Open Enterprise SDK for Python
View featured resources for IBM Open Enterprise SDK for Python
Visit the portal to request enhancements to IBM Open Enterprise SDK for Python
The content solution page for IBM Open Data Analytics for z/OS has been sunset. All traffic to that page is being redirected here.
Two new videos for Python AI Toolkit for IBM z/OS added. Updated FAQ document.
New content for Python AI Toolkit for IBM z/OS added.
New resource (blog post) added.