Cloud Pak for Data common core services |
10.0.0 |
This release of Common core services includes the following features:
- Access more data with new connectors
- You can now connect to the following data sources:
- Denodo
- IBM watsonx.data™ Milvus
- Microsoft Azure PostgreSQL
- Integrated service connections
- You can now add new connections using the information from existing service instances. This
means that parameter values for the new connection can be automatically filled in from the existing
instance.
- Multi-node writing from the watsonx.data Presto connector
- You can now use the watsonx.data Presto connector in DataStage to write data by using multiple nodes for parallel writes.
- New engine connections for the watsonx.data Presto data source
- You can now use the Presto (C++) engine with the watsonx.data Presto data source, giving you more options for querying your data.
To read more about these features, see What's new and changed in Common core services
in the IBM Cloud Pak for Data documentation.
If you install or upgrade a service that requires the common core services, the common core services will also be installed or upgraded.
|
Cloud Pak for Data scheduling service |
1.40.0 |
- Related documentation:
- Cloud Pak for Data scheduling service
|
AI Factsheets |
5.1.0 |
This release of AI Factsheets includes the following features:
- AI Factsheets on IBM Z and LinuxONE
- Now you can use AI Factsheets to track machine learning models from request to production in AI use cases on IBM Z and LinuxONE. Use the detailed factsheets to meet your governance and compliance goals. For details, see Planning for IBM Software Hub on IBM Z and LinuxONE.
- Related documentation:
- AI Factsheets
|
Analytics Engine powered by Apache Spark |
5.1.0 |
This release of Analytics Engine powered by Apache Spark includes the following features:
- Automatic daily database snapshot backups
- IBM Analytics Engine now automatically backs up the metastore database each day. Administrators
can restore the database from the snapshots.
- For more information, see Automated Backup and Restore.
- Improved flexibility when managing Spark environment variables
- When configuring your Spark environment variables, you can now decide whether your changes apply to the following scopes (see the sketch after this list):
- All Spark instances and jobs
- A single instance of the Analytics Engine
- An individual Spark job
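For example, job-scoped variables can be supplied in the Spark application submission payload. The following is a minimal sketch; the endpoint path, instance ID, and token handling are assumptions to verify against your cluster, not confirmed API details:

```python
# Sketch: setting a job-scoped environment variable when submitting a Spark
# application to an Analytics Engine instance. Host, instance ID, and token
# are placeholders (assumptions).
import requests

CPD_URL = "https://cpd-host.example.com"   # assumption: your IBM Software Hub URL
INSTANCE_ID = "spark-instance-id"          # assumption: your Analytics Engine instance ID
TOKEN = "..."                              # assumption: a bearer token you already obtained

payload = {
    "application_details": {
        "application": "/myapp/job.py",
        # "env" here applies only to this job; instance-wide or service-wide
        # variables would be set in the instance default configuration instead.
        "env": {"MY_FEATURE_FLAG": "on"},
        "conf": {"spark.executor.memory": "4g"},
    }
}

resp = requests.post(
    f"{CPD_URL}/v4/analytics_engines/{INSTANCE_ID}/spark/applications",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    verify=False,  # self-signed certificates are common on private clusters
)
print(resp.status_code, resp.json())
```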
- IBM Power (ppc64le) supports Spark with R4.3
- Spark with R4.3 is supported on IBM Power (ppc64le) starting in 5.1.
- Schedule Spark workloads on remote physical locations
- You can now install Analytics Engine powered by Apache Spark on a
remote physical location so that you can run Spark workloads on remote clusters. This capability is
not enabled by default.
- For information about how to enable it, see Setting up a remote physical location for
IBM Software Hub and Installing Analytics Engine powered by Apache Spark on a remote physical
location.
- Related documentation:
- Analytics Engine powered by Apache Spark
|
Cognos Analytics |
27.0.0 |
This release of Cognos Analytics includes the following features:
- Optimize individual pod memory and ephemeral storage
- You can now use a script to fine-tune memory and ephemeral storage in pods to improve service
performance. For details, see Fine-tuning memory and ephemeral storage in pods.
- Save report outputs to file system
- You can now use scripts to configure Cognos Analytics to save report outputs to a file system. Report outputs are saved on a persistent volume in the project where the Cognos Analytics service instance is provisioned. For details, see Saving report outputs to file system.
- Updated software version
- This release of the service provides Version 12.0.4 of the Cognos Analytics software. For details, see Release 12.0.4 in the Cognos Analytics documentation.
- Related documentation:
- Cognos Analytics
|
Cognos Dashboards |
5.1.0 |
This release of Cognos Dashboards includes the following features:
- Updated software version
- This release of the service provides Version 12.0.4 of the Cognos Analytics dashboards software. For details, see Release 12.0.4 - Dashboards in the Cognos Analytics documentation.
To read more about these features, see What's new and changed in Cognos Dashboards in the IBM Cloud Pak for Data documentation.
- Related documentation:
- Cognos Dashboards
|
Data Gate |
7.0.0 |
- Related documentation:
- Data Gate
|
Data Privacy |
5.1.0 |
- Related documentation:
- Data Privacy
|
Data Product Hub |
5.1.0 |
This release of Data Product Hub includes the following features:
- Monitor your insights with dashboards
- Both administrators and data producers can now use the Insights dashboard to monitor their data
products and community activity. The Insights dashboard provides a comprehensive, centralized
overview of open tasks, delivered data products, and more. By delivering real-time metrics, the
Insights dashboard provides detailed data insights at scale and increases workflow efficiency across
the data community.
- Create custom business domains to organize your data products
- Improve your data community's organization and optimize your data products' searchability by
creating custom business domains. With new, custom business domains, you can easily organize your
data products into intuitive categories and curate your community for your business needs.
- Add custom properties to data products
- You can now create and add custom properties to data products to optimize searchability,
classification, and organization. By adding custom properties, you can structure information and
curate your data product to meet specific business needs.
- Enhance and update your published data products
- To further enhance and improve data products, you can now create new versions of your published
data products. When creating a new version, you can edit data assets, change delivery methods, and
manage the access level. By continuously creating new versions of published data products, you can
help ensure data accuracy and currency.
- Pre-approve data consumers for data products requiring approval
- For data consumers who frequently access data products requiring approval, you can now create a
list of pre-approved users or user groups. By defining a pre-approved list, you streamline the
subscription process for both data consumers and producers and improve efficiency in delivering data
products.
- Send notifying comments for requests for new data products
- To ensure that comments do not go unnoticed, you can now send comments with notifications for new data product requests from your task inbox. The approver for a data product request can enter comments or questions for the requester. The requester receives notifications directly in the user interface or by email. The approver is also notified when the requester responds.
- Expedite data product approvals with custom approval workflows
- Administrators can now create custom approval workflows for data products that require approval.
With custom approvals, multiple levels for approving a data product are controlled by a single
workflow. Each user in the workflow receives a task and a notification at the appropriate time to
ensure all approval levels are met before the data product is delivered. The approval process is
based upon a workflow configuration, which is in turn based upon an imported workflow template file.
To read more about these features, see What's new and changed in Data Product Hub
in the IBM Data Product Hub documentation.
- Related documentation:
- Data Product Hub
|
Data Refinery |
10.0.0 |
This release of Data Refinery includes the following features:
- Schedule Data Refinery jobs in Git-based projects
- You can now schedule Data Refinery jobs in Git-based projects. You can set up scheduling when you create the job.
- Related documentation:
- Data Refinery
|
Data Replication |
5.1.0 |
This release of Data Replication includes the following features:
- Audit logging
- Data Replication now integrates with the IBM Software Hub audit logging service. Auditable events for Data Replication are forwarded to the security information and event management (SIEM) solution that you integrate with.
- Add multiple licenses to your Data Replication service installation
- If you have more than one replication license and want to gain access to all of the capabilities provided by your licenses within the same IBM Software Hub instance, you can now add these licenses to your Data Replication service installation as replication extensions. For details, see Extending Data Replication capabilities.
- Use an IBM Data Replication Access Server connection to replicate data between remote source and target data stores in your project
- You can now replicate data between remote source and target data stores with the Data Replication service by using an IBM Data Replication Access Server connection to a remote Change Data Capture (CDC) replication engine deployment. For details, see Setting up remote CDC replication.
- Use watsonx.data as a target data store for replicating data in your project
- You can now replicate data to watsonx.data with the Data Replication service.
- Use IBM Db2 for z/OS as a source data store for replicating data in your project
- You can now replicate data from IBM Db2 for z/OS with the Data Replication service.
- Use IBM Db2 Database as a source data store for replicating data in your project
- You can now replicate data from IBM Db2 Database with the Data Replication service.
To read more about these features, see What's new and changed in Data Replication in the IBM Cloud Pak for Data documentation.
- Related documentation:
- Data Replication
|
DataStage |
5.1.0 |
This release of DataStage includes the following features:
- Implement version control in your projects by enabling Git integration
- Enable Git integration to sync your project with a Git repository. You can clone changes into your project or commit selected changes to the specified repository and branch.
- Test your flows by using MettleCI unit testing
- Create a test case component to run tests on a DataStage flow. Generate, upload, or intercept data to run tests and compare the expected and actual results.
- Run jobs on remote engines by using DataStage Anywhere
- Use DataStage Anywhere to run jobs on a remote engine that is deployed in a location of your choice. You can deploy an engine within your own environment, in an on-premises location, or in any cloud or data center.
- New user interface for the Transformer stage
- You can now enable the new Transformer stage UI in your flow settings to see the stage's visual
enhancements and usability improvements. In the new tile-based interface, you can view all elements
on one screen, with mapping links between input and output columns. The new interface includes the
following enhancements:
- Zoom in and out
- Drag and drop columns into position
- Undo and redo actions
- Search all elements with find
- Prompt to save on exit
- Manage workloads for your DataStage instance
- You can now specify workload management settings, including job limits and memory usage, in the settings of your DataStage instance.
- Run the watsonx.data connector in ELT run mode
- You can now run the watsonx.data connector in ELT mode with SQL Pushdown. Running in ELT mode transforms data directly on the target system and can increase the efficiency of your flows.
- Assign the Super operator role to users
- You can now assign the Super operator role to users. Users with this role can create and run jobs, but cannot view or edit DataStage assets.
To read more about these
features, see What's new and changed in DataStage in the IBM Cloud Pak for Data documentation.
- Related documentation:
- DataStage
|
Data Virtualization |
3.1.0 |
This release of Data Virtualization includes the following feature:
- Speed up schema listings and table counts by configuring your source setup
- You can now improve the performance of operations such as listing schemas in the Explore view
and counting tables on the Data sources page. Configure whether the service uses a custom query or
an API method to list schemas and count tables at the source type level or the Connection Identifier
(CID) level.
To read more about these features, see What's new and changed in Data Virtualization
in the IBM Cloud Pak for Data documentation.
- Related documentation:
- Data Virtualization
|
Db2 |
5.1.0 |
- Related documentation:
- Db2
|
Db2 Big SQL |
7.8.0 |
- Related documentation:
- Db2 Big SQL
|
Db2 Data Management Console |
5.1.0 |
- Related documentation:
- Db2 Data Management Console
|
Db2 Warehouse |
5.1.0 |
- Related documentation:
- Db2 Warehouse
|
Decision Optimization |
10.0.0 |
This release of Decision Optimization includes the following features:
- Compare tables in Decision Optimization experiments to see differences between scenarios
- You can now compare tables in a Decision Optimization experiment in both the Prepare data and Explore solution views. This comparison shows data value differences between scenarios displayed next to each other.
- Related documentation:
- Decision Optimization
|
EDB Postgres |
12.20, 13.16, 14.13, 15.8, 16.4 |
- Related documentation:
- EDB Postgres
|
Execution Engine for Apache Hadoop |
5.1.0 |
This release of Execution Engine for Apache Hadoop includes the following features:
- Execution Engine for Apache Hadoop now uses Java 11
- The Hadoop Execution Engine RPM is now upgraded to use Java 11 instead of Java 8. All of the edge nodes must also be upgraded to Java 11 before you can install the RPM. For more information, see Installing Execution Engine for Apache Hadoop on Apache Hadoop clusters.
- Related documentation:
- Execution Engine for Apache Hadoop
|
IBM Knowledge Catalog |
5.1.0 |
This release of IBM Knowledge Catalog includes the following features:
- Enhanced gen AI-based enrichment (IBM Knowledge Catalog Premium and IBM Knowledge Catalog Standard)
- The granite-8b-code-instruct model replaces the previously used granite 13b model for generating
asset and column descriptions. The new model provides more accurate results and needs less memory
and storage.
- Business term abbreviations are now taken into account when display names are generated during
metadata enrichment. If a source asset or column name matches any defined business term
abbreviation, this abbreviation is used to expand the name.
- In the metadata enrichment results, you can now remove suggested display names or descriptions
in bulk.
- Enhanced management and scheduling of metadata enrichment jobs
- You can now configure execution windows for your metadata enrichment jobs to balance workloads.
Jobs then run only within the configured time frames.
- On the new run metrics dashboard, you can monitor the progress of the individual enrichment
tasks for an active metadata enrichment job run. In addition, you can explore run information for
completed job runs to identify if and where issues occurred.
- Enhanced data quality monitoring (IBM Knowledge Catalog and IBM Knowledge Catalog Premium)
- You can now better target the data elements for monitoring of data quality:
- You can now configure data quality SLA rules without asset-level filters. The rules can be
applied to any number of columns that have the same name or the same terms assigned, regardless of
the containing data asset.
- You can now select and run data quality SLA rules as part of metadata enrichment. The rules are
no longer enabled in the enrichment settings for the project.
- Segment data assets by column values to focus on the information you need
- You can now chunk data assets into smaller data assets based on selected column values to help
you access only the data that you’re interested in. You can work with connected data assets in your
project or directly select a data asset and column from a connection in your project without
creating a connected data asset first.
- Import, enrich, and assess data quality of data from additional data sources
- You can now import metadata from Dremio data lakes, enrich that data, and assess its quality.
- Simplify the importing of metadata to better understand your data
- You can now import metadata by using a new experience that is integrated with the IBM Manta Data Lineage service. The metadata import process is simplified and provides more lineage import configuration options, which can help you to understand how data flows in more detail.
- IBM Knowledge Catalog, IBM Knowledge Catalog Premium, and IBM Knowledge Catalog Standard now store data in a Neo4j graph database
- All editions of IBM Knowledge Catalog now use a Neo4j graph database to store lineage and relationship information. Neo4j provides greater data consistency while improving scaling and performance. For new installations of IBM Knowledge Catalog, IBM Knowledge Catalog Premium, or IBM Knowledge Catalog Standard, the Neo4j graph database is installed automatically with the service.
Neo4j is the graph database that is used with the IBM Manta Data Lineage service. If you want to use the MANTA Automated Data Lineage service as your lineage service, or if you want to enable the relationship explorer feature, you can enable the use of FoundationDB instead of Neo4j during installation or upgrade. For details, see Determining the optional features to enable.
To read more about these features, see What's new and changed in IBM Knowledge Catalog
in the IBM Cloud Pak for Data documentation.
- Related documentation:
- IBM Knowledge Catalog
|
IBM Match 360 |
4.3.49 |
This release of IBM Match 360 includes the following features:
- Navigate the service in a new way
- You now use a new navigation menu to move between different tools and capabilities within IBM Match 360. The navigation options are improved and reorganized into a single menu. You can minimize the navigation menu to get more screen space to configure, view, and work with your master data. You can also now switch between different data types more easily while configuring or working with your master data. Additionally, record and entity profiles are now enhanced to give you a clearer view of their associated attribute and relationship details.
- IBM Match 360 now stores data in a Neo4j graph database
- IBM Match 360 now uses a Neo4j graph database to store your master data records, entities, and relationships. Neo4j provides greater data consistency while improving scaling and performance. For new installations of IBM Match 360 on Version 5.1, the Neo4j graph database is installed automatically with the service. For upgrades to Version 5.1 from previous releases, IBM Match 360 will continue to use FoundationDB and OpenSearch by default.
- Master data entities can now be stored in the database
- Data engineers can now configure IBM Match 360 to store and persist entity composite views in the graph database instead of composing them on demand. When an entity type is configured to persist, the composited attributes of each entity are stored in the database similar to the way that record attributes are stored, meaning that entity data is now more stable and resilient.
When entities are configured to persist, data stewards and business users can search directly on
entity data, including supplementary attributes, audit attributes, and system properties such as
record count and entity ID. Users can search for persisted entities by using the simple or advanced
search mechanisms in the master data explorer interface.
As an additional benefit, searches and exports of persisted entity data are faster than was
previously possible.
To read more about these features, see What's new and changed in IBM Match 360 in the IBM Cloud Pak for Data documentation.
- Related documentation:
- IBM Match 360
|
Informix |
8.2.0 |
This release of Informix includes the following features:
- Informix now uses Informix Dynamic Server 15.0
- The Informix service has been upgraded to use Informix Dynamic Server 15.0, which is a major
release that includes several enhancements in areas such as administration, ease of use, and
storage.
- Related documentation:
- Informix
|
MANTA Automated Data Lineage |
42.8.0 |
- Related documentation:
- MANTA Automated Data Lineage
|
MongoDB |
7.0.14-ent, 8.0.0-ent |
- Related documentation:
- MongoDB
|
OpenPages |
9.4.0 |
This release of OpenPages includes the following features:
- Use global search to find OpenPages records
- You can now use global search in the search panel on the dashboard to find more records relevant to your search terms, and not only exact text matches.
For example, if you search for “management”, the search now finds records that contain all
variations of the root word “manage”, such as “management”, “managements”, “manager”, and so on.
Search results are ranked in order from most to least relevant.
Version 9.4.0 of the OpenPages service includes various fixes.
For details, see What's new and changed in OpenPages.
- Related documentation:
- OpenPages
|
Orchestration Pipelines |
5.1.0 |
This release of Orchestration Pipelines includes the following features:
- Streamline pipeline configuration by specifying settings at the project level
- You can now configure pipeline settings at the project level to specify how all assets in the project are created. You can still configure settings, such as autosave, cache, and error policy settings, at the asset level. Any settings that you specify for an individual asset override the project settings.
- Simplify job selection by adding multiple jobs to your canvas at the same time
- You can now add jobs to pipelines in batch. You can drag the Run multiple jobs node from the node panel onto the canvas, then select one or more job assets, such as the Run DataStage job or the Run Pipelines job. All of the selected assets are added to the canvas with one click.
- Related documentation:
- Orchestration Pipelines
|
Planning Analytics |
5.1.0 |
This release of Planning Analytics includes the following features:
- Back up the Planning Analytics service online
- You can now create online backups of the Planning Analytics service by using the backup and restore utilities on IBM Software Hub. Previously, you could create online backups only by using the Planning Analytics administration console. For more information about how to create online backups of the IBM Software Hub cluster and how to restore from an online backup, see Backing up and restoring IBM Software Hub.
- Updated versions of Planning Analytics software
- This release of the service provides the following software versions:
- Planning Analytics Workspace Version 2.0.99. For details, see 2.0.99 - What's new in the Planning Analytics Workspace documentation.
- Planning Analytics Spreadsheet Services Version 2.0.99. For details, see 2.0.99 - Feature updates in the TM1 Web documentation.
- Planning Analytics for Microsoft Excel Version 2.0.99. For details, see 2.0.99 - Feature updates in the Planning Analytics for Microsoft Excel documentation.
- TM1 Database Version 12.4.2 (formerly Planning Analytics Engine). For details, see What's new in TM1 Database Version 12 in the TM1 Database Version 12 documentation.
- Related documentation:
- Planning Analytics
|
Product Master |
7.0.0 |
- Related documentation:
- Product Master
|
RStudio® Server Runtimes |
10.0.0 |
- Related documentation:
- RStudio Server Runtimes
|
SPSS Modeler |
10.0.0 |
This release of SPSS Modeler includes the following features:
- Promote flows to deployment spaces
- You can now directly promote SPSS Modeler flows from projects to deployment spaces without having to export the project and then import it into the deployment space.
- Analyze Japanese text data in SPSS Modeler with Text Analytics
- You can now use the Text Analytics nodes in SPSS Modeler to analyze text data that is written in Japanese. Text Analytics nodes use advanced linguistic technologies and text mining techniques to analyze text data and extract concepts, patterns, and categories.
- Connect to new data sources with SPSS Modeler
- You can now connect SPSS Modeler to the following new data sources for read and write access:
- Microsoft Azure Databricks
- Microsoft Azure Synapse Analytics
- Use Kerberos for Apache Impala
- You can now use Kerberos for authentication with an Apache Impala connector. However, when using Kerberos authentication, you cannot use SQL Pushback.
- Related documentation:
- SPSS Modeler
|
Synthetic Data Generator |
10.0.0 |
This release of Synthetic Data Generator includes the following features:
- Connect to new data sources with Synthetic Data Generator
- You can now connect Synthetic Data Generator to the following new data sources for read and write access:
- Microsoft Azure Databricks
- Microsoft Azure Synapse Analytics
- Use Kerberos for Apache Impala
- You can now use Kerberos for authentication with an Apache Impala connector. However, when using Kerberos authentication, you cannot use SQL Pushback.
For more information, see Creating synthetic data in the IBM watsonx documentation.
To read more about these features, see What's new and changed in Synthetic Data Generator
in the IBM watsonx documentation.
- Related documentation:
- Synthetic Data Generator
|
Voice Gateway |
1.6.0 |
- Related documentation:
- Voice Gateway
|
Watson Discovery |
5.1.0 |
- Related documentation:
- Watson Discovery
|
Watson Machine Learning |
5.1.0 |
This release of Watson Machine Learning includes the following features:
- Deploy multi-source SPSS Modeler flows
- You can now create deployments for SPSS Modeler flows that use multiple input streams to provide data to the model.
- Create deep learning experiments with Watson Machine Learning
- You no longer need Watson Machine Learning Accelerator to train deep learning experiments. If you have the IBM Software Hub scheduling service and Watson Machine Learning installed, you can now train a neural network by using the Deep learning experiment builder in Watson Studio. Train deeper neural networks and explore more complicated hyperparameter spaces.
- New runtime available to deploy AI assets on s390x hardware
- You can now use Runtime 24.1 to deploy AI assets on s390x hardware.
- Related documentation:
- Watson Machine Learning
|
Watson OpenScale |
5.1.0 |
This release of Watson OpenScale includes the following features:
- Import model deployment configuration settings
- When you’re adding deployments to configure evaluations for production models, you can now
import the settings from your preproduction model deployment to provide model details.
- Configure global explanations with LIME
- You can now use the LIME (Local Interpretable Model-agnostic Explanations) algorithm to configure global explanations. To use LIME to configure global explanations, you must enable the global explanation parameter when you configure explainability.
- Run quality evaluations with historical data
- You can now use an API to evaluate historical feedback data for online deployments and prompt
templates. By running quality evaluations with historical data, you can analyze your model
performance over time with a wider scope.
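As a minimal sketch of what such a call might look like: the v2 monitor-instance runs resource exists in the Watson OpenScale REST API, but the instance IDs and the historical-window parameters shown here are assumptions to verify against the API reference:

```python
# Sketch: triggering a quality evaluation over a historical window of feedback
# data via the Watson OpenScale REST API. IDs and parameter names are
# placeholders (assumptions).
import requests

OPENSCALE_URL = "https://cpd-host.example.com/openscale/00000000-0000-0000-0000-000000000000"
MONITOR_INSTANCE_ID = "quality-monitor-instance-id"  # assumption
TOKEN = "..."                                        # assumption: bearer token

body = {
    # Assumption: a parameter block that selects the historical time window.
    "parameters": {
        "start_time": "2024-01-01T00:00:00Z",
        "end_time": "2024-06-30T23:59:59Z",
    }
}

resp = requests.post(
    f"{OPENSCALE_URL}/v2/monitor_instances/{MONITOR_INSTANCE_ID}/runs",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,
)
print(resp.status_code, resp.json())
```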
- Related documentation:
- Watson OpenScale
|
Watson Speech services |
5.1.0 |
- Related documentation:
- Watson Speech services
|
Watson Studio |
10.0.0 |
This release of Watson Studio includes the following features:
- Schedule jobs in Git-based projects
- You can now schedule jobs within Git-based projects. You can set up scheduling when you create the job.
- Use your IBM watsonx.data Spark engine with Jupyter notebooks in Watson Studio
- If you have IBM watsonx.data and Watson Studio provisioned, you can now create new Python environment templates that are based on the Spark engine that runs on your IBM watsonx.data instance. Then you can run your code in a Jupyter notebook directly from the Watson Studio user interface. For more information, see the watsonx.data documentation.
- Related documentation:
- Watson Studio
|
Watson Studio Runtimes |
10.0.0 |
- Related documentation:
- Watson Studio Runtimes
|
watsonx.ai |
10.0.0 |
This release of watsonx.ai includes the following features:
- Install watsonx.ai on a single-node OpenShift® (SNO) cluster
- You can now install watsonx.ai in the full service or lightweight engine mode on a single-node OpenShift (SNO) cluster if high availability and scalability are not requirements. For details, see IBM Software Hub platform hardware requirements and Choosing an installation mode in IBM watsonx.ai.
- New software specification for deploying custom foundation models
- You can now deploy custom foundation models by using the new
watsonx-cfm-caikit-1.1 software specification. This software specification is not
available with every model architecture.
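As a rough sketch of how the new software specification might be referenced when deploying, the following uses the ML v4 deployments route; the asset ID, space ID, hardware specification, and exactly where the software specification is attached (model asset versus deployment) are assumptions to confirm in the deployment documentation:

```python
# Sketch: creating an online deployment that references the
# watsonx-cfm-caikit-1.1 software specification named in this release.
# All IDs and the hardware spec name are placeholders (assumptions).
import requests

CPD_URL = "https://cpd-host.example.com"
TOKEN = "..."  # assumption: bearer token

payload = {
    "name": "my-custom-fm-deployment",
    "space_id": "your-space-id",                  # assumption
    "asset": {"id": "custom-model-asset-id"},     # assumption
    "online": {"parameters": {"serving_name": "my_custom_fm"}},  # assumption
    "software_spec": {"name": "watsonx-cfm-caikit-1.1"},
    "hardware_spec": {"name": "your-gpu-hardware-spec"},          # assumption
}

resp = requests.post(
    f"{CPD_URL}/ml/v4/deployments?version=2024-10-25",  # version date: assumption
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
print(resp.status_code, resp.json())
```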
- New model architectures for deploying custom foundation models
- You can now deploy custom foundation models from the following model architectures with the vLLM runtime:
- Bloom
- Databricks
- exaone
- Falcon
- GPTJ
- Gemma
- Gemma2
- GPT_BigCode
- GPT_Neox
- GPT2
- Granite
- Jais
- Llama
- Marlin
- Mistral
- Mixtral
- MPT
- Nemotron
- Olmo
- Persimmon
- Phi
- Phi3
- Qwen2
- Deploy custom foundation models on MIG-enabled clusters
- You can now deploy custom foundation models on a cluster with Multi-Instance GPU (MIG) enablement. MIG is useful when you want to deploy an application that does not require the full power of an entire GPU. Review hardware requirements and configure MIG support to deploy your custom foundation models. For more information, see Requirements for deploying custom foundation models on MIG-enabled clusters.
- Deploy custom foundation models on specific GPU nodes
- You can now deploy custom foundation models on specific GPU nodes when you have multiple GPU
nodes available for deployment. Review the process of creating a customized hardware specification
to use a specific GPU node for deploying your custom foundation model.
- Automate the building of RAG patterns with AutoAI
- Use AutoAI to automate the
retrieval-augmented generation (RAG) process for a generative AI solution. Upload a collection of
documents and transform them into vectors that can be used to improve the output from a large
language model. Compare optimized pipelines to select the best RAG pattern for your
application.
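As context for what AutoAI automates here, the following toy sketch shows the basic RAG pattern (vectorize documents, retrieve the most relevant passage, ground the prompt). TF-IDF stands in for a real embedding model; the passages and question are invented for illustration:

```python
# Conceptual RAG sketch: retrieve the passage most similar to a question and
# prepend it to the prompt as grounding. Not the AutoAI implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm, Monday through Friday.",
    "Premium plans include priority support and a dedicated account manager.",
]
question = "When can customers return a product?"

vectorizer = TfidfVectorizer().fit(passages + [question])
doc_vecs = vectorizer.transform(passages)
q_vec = vectorizer.transform([question])

# Rank passages by similarity to the question and keep the best one.
scores = cosine_similarity(q_vec, doc_vecs)[0]
best = passages[scores.argmax()]

# The grounded prompt that would be sent to a foundation model.
prompt = f"Answer using only this context:\n{best}\n\nQuestion: {question}"
print(prompt)
```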
- Simplify complex documents by using the text extraction API
- Simplify your complex documents so that they can be processed by foundation models as part of a
generative AI workflow. The text extraction API uses document understanding processing to extract
text from document structures such as images, diagrams, and tables that foundation models often
cannot interpret correctly.
- Chat with multimodal foundation models about images
- Add an image to your prompt and chat about the content of the image with a multimodal foundation
model that supports image-to-text tasks. You can chat about images from the Prompt Lab in chat mode or by using the Chat
API.
- Build conversational workflows with the watsonx.ai chat API
- Use the watsonx.ai chat API to add
generative AI capabilities, including agent-driven calls to third-party tools and services, into
your applications.
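A minimal sketch of a chat request follows, combining the two preceding items: it sends a user message that contains both text and an image. The /ml/v1/text/chat route and the message shape follow the watsonx.ai REST API for recent releases, but the host, model ID, project ID, and version date are assumptions to adapt:

```python
# Sketch: watsonx.ai chat request with text plus an image. Host, model ID,
# project ID, and version date are placeholders (assumptions).
import base64
import requests

CPD_URL = "https://cpd-host.example.com"
TOKEN = "..."  # assumption: bearer token

with open("chart.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

body = {
    "model_id": "meta-llama/llama-3-2-11b-vision-instruct",  # assumption
    "project_id": "your-project-id",                          # assumption
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What trend does this chart show?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }
    ],
}

resp = requests.post(
    f"{CPD_URL}/ml/v1/text/chat?version=2024-10-25",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,
)
print(resp.json()["choices"][0]["message"]["content"])
```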
- Add contextual information to foundation model prompts in Prompt Lab
- Help a foundation model generate factual and up-to-date answers in RAG use cases by adding
relevant contextual information to your prompt as grounding data. You can upload relevant documents
or connect to a third-party vector store that has relevant data. When a new question is submitted,
the question is used to query the grounding data for relevant facts. The top search results plus the
original question are submitted as model input to help the foundation model incorporate relevant
facts in its output.
- Work with new foundation models in Prompt Lab
- You can now use the following foundation models for inferencing from the API and from the
Prompt Lab in watsonx.ai:
- Granite Guardian 3.0 models in 2 billion and 8 billion parameter sizes
- Granite Instruct 3.0 models in 2 billion and 8 billion parameter sizes
- granite-20b-code-base-sql-gen
- granite-20b-code-base-schema-linking
- codestral-22b
- Llama 3.2 Instruct models in 1 billion and 3 billion parameter sizes
- Llama 3.2 Vision Instruct models in 11 billion and 90 billion parameter sizes
- llama-guard-3-11b-vision
- mistral-small
- ministral-8b
- pixtral-12b
For details, see Foundation models in watsonx.ai.
- Work with new embedding models for text matching and retrieval tasks
- You can now use new embedding models to vectorize text in watsonx.ai. For details, see Foundation models in watsonx.ai.
- Enhance search and retrieval tasks with the text rerank API
- Use the text rerank method in the watsonx.ai REST API together with the
ms-marco-minilm-l-12-v2 reranker model to reorder a set of document passages based
on their similarity to a specified query. Reranking adds precision to your answer retrieval
workflows. For details about the ms-marco-minilm-l-12-v2 model,
see Foundation models
in watsonx.ai.
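A minimal sketch of a rerank call follows, using the ms-marco-minilm-l-12-v2 model named above. The route and field names mirror the watsonx.ai rerank API; the host, project ID, version date, and sample passages are assumptions:

```python
# Sketch: reorder candidate passages by similarity to a query with the text
# rerank method. Host, project ID, and version date are placeholders.
import requests

CPD_URL = "https://cpd-host.example.com"
TOKEN = "..."  # assumption: bearer token

body = {
    "model_id": "cross-encoder/ms-marco-minilm-l-12-v2",
    "project_id": "your-project-id",  # assumption
    "query": "How do I reset my password?",
    "inputs": [
        {"text": "Passwords can be reset from the account settings page."},
        {"text": "Our data centers are certified for ISO 27001."},
        {"text": "Contact the help desk if the reset email does not arrive."},
    ],
}

resp = requests.post(
    f"{CPD_URL}/ml/v1/text/rerank?version=2024-10-25",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,
)
# Results come back scored; sort to get the most relevant passage first.
for result in sorted(resp.json()["results"], key=lambda r: r["score"], reverse=True):
    print(result["index"], result["score"])
```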
- Review benchmarks that show how foundation models perform on common tasks
- Review foundation model benchmarks to learn about the capabilities of foundation models deployed in watsonx.ai before you try them out. Compare how various foundation models perform on the tasks that matter most for your use case.
- Configure AI guardrails for user input and foundation model output separately in Prompt Lab
- Adjust the sensitivity of the AI guardrails that find and remove harmful content when you
experiment with foundation model prompts in Prompt Lab. You can set different filter
sensitivity levels for user input and model output text, and you can save effective AI guardrails
settings in prompt templates.
To read more about these features, see What's new and changed in watsonx.ai in the IBM watsonx documentation.
- Related documentation:
- watsonx.ai
|
watsonx Assistant |
5.1.0 |
This release of watsonx Assistant includes the following features:
- Conversational search analytics
- You can now analyze the performance of your conversational search by using conversational search
analytics. For more information, see Conversational search analytics.
- Debug conversational search with Inspector
- You can now debug conversational search issues by using the Inspector in your watsonx Assistant. For more information, see Debugging failures for Conversational search or skill-based actions.
- Respond using a table format
- The watsonx Assistant web chat can now provide
responses in table format. The table response type is only available in the web chat. For more
information, see Response types reference.
- Pass values to skill-based actions
- watsonx Assistant users can now pass values to
skill-based actions within the conversation flow. To enable this feature, external skill providers
must implement the new endpoint for skill-based actions. For more information, see Passing values to a subaction.
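To illustrate the provider side of this feature, here is an entirely hypothetical sketch of an endpoint that accepts values passed from the conversation flow. The route and payload schema are invented for illustration; the real contract is defined in the watsonx Assistant documentation:

```python
# Hypothetical sketch: a skill provider endpoint that receives values passed
# from a watsonx Assistant conversation. Route and schema are invented.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/skills/reset-password/actions/run")  # hypothetical route
def run_action():
    payload = request.get_json(force=True)
    # Hypothetical shape: values supplied by the conversation flow.
    user_id = payload.get("parameters", {}).get("user_id")
    if not user_id:
        return jsonify({"status": "error", "message": "user_id is required"}), 400
    # ... perform the action with the passed-in value ...
    return jsonify({"status": "completed", "user_id": user_id})

if __name__ == "__main__":
    app.run(port=8080)
```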
- Related documentation:
- watsonx Assistant
|
watsonx Code Assistant for Red Hat® Ansible® Lightspeed |
5.1.0 |
- Related documentation:
- watsonx Code Assistant for Red Hat Ansible Lightspeed
|
watsonx Code Assistant for Z |
5.1.0 |
- Related documentation:
- watsonx Code Assistant for Z
|
watsonx.data |
2.1.0 |
This release of watsonx.data includes the following features:
- Data sources, catalogs, and storage enhancements
- This release includes the following data sources, catalogs, and storage enhancements:
- You can now connect to Apache Phoenix data sources. For more information, see Apache Phoenix.
- You can create or add a new data source to the engine without attaching a catalog to it. You can
attach a catalog to the data source at a later stage.
- Related documentation:
- watsonx.data
|
watsonx.governance |
2.1.0 |
This release of watsonx.governance includes the following features:
- Enhancements for governance management
- These features extend the ways that you can manage governance activity.
- Distribute governance activities across multiple clusters
- You can now distribute governance activities across multiple servers and sync data from the remote servers to a primary governance cluster. Use this capability to isolate production assets on one server and control access to them for greater security. For details, see Managing multiple clusters for watsonx.governance.
- Track governance activity for custom models and tuned models
- You can now extend governance to include tracking prompt templates for custom foundation models and tuned models. You can capture the data for the prompt templates, including evaluation results, in a factsheet as part of an AI use case.
- Enhancements for asset evaluation
- Use these features to improve the quality of your asset evaluations and drive decision-making.
- Run quality evaluations with historical data
- You can now use an API to evaluate historical feedback data records for online deployments and prompt templates to analyze your model performance over time with a wider scope.
- Configure generative AI quality evaluations with LLM-as-a-judge
- When you configure generative AI quality evaluations, you can now configure settings to calculate metrics with LLM-as-a-judge models. LLM-as-a-judge models are LLMs that you can use to evaluate the performance of other models.
- Import model deployment configuration settings
- When you're adding deployments to configure evaluations for machine learning models in production, you can import settings from your pre-production model deployment to provide model details for evaluations.
- Configure global explanations with LIME
- You can now use the LIME (Local Interpretable Model-agnostic Explanations) algorithm to configure global explanations for machine learning models. To use LIME to configure global explanations, you must enable the global explanation parameter when you configure explainability. An illustrative sketch of the LIME approach follows this list.
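The sketch below shows the general idea behind LIME-based global explanations: generate local explanations for a sample of rows, then aggregate the feature weights into a model-level view. It uses the open-source lime package on a toy model, not the service's internal implementation:

```python
# Illustrative LIME sketch: aggregate local explanation weights over a sample
# of rows to approximate a global explanation.
from collections import defaultdict

import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Average absolute local weights over a random sample of rows.
global_weights = defaultdict(float)
rows = np.random.default_rng(0).choice(len(data.data), 20, replace=False)
for row in data.data[rows]:
    explanation = explainer.explain_instance(row, model.predict_proba, num_features=4)
    for feature, weight in explanation.as_list():
        global_weights[feature] += abs(weight) / len(rows)

# Features with the largest aggregate weight dominate the global explanation.
for feature, weight in sorted(global_weights.items(), key=lambda kv: -kv[1]):
    print(f"{weight:.3f}  {feature}")
```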
- Enable GPU metrics computation
- You can use detectors on GPUs to increase the speed of data safety, answer quality, and retrieval quality metric computations. For details, see Enabling GPU metrics computation.
- Enhancements for the governance console
- Use these features to better manage your use cases in the Governance console.
- Add custom tabs to the governance console dashboard
- You can now add up to three custom tabs to the dashboard. For example, you might want to use custom tabs to group related panels and widgets in separate places on the Home page.
- Create stacked bar charts in the governance console
- You can now configure a stacked bar chart on the dashboard and in the View Designer panel. Use a stacked bar chart to compare the proportional contributions of each item to the total within a category. For example, for the Model object type, you might create a stacked bar chart that shows risks grouped by Model Category and stacked by Computed Tier.
- Use expressions to set field values based on a respondent's questionnaire answers
- When you create response actions in the governance console, you can now enter an expression for the value of a field. For example, you can enter the following values:
- [$TODAY$] for the current date
- [$END_USER$] for the name of the signed-on user
- [$System Fields:Description$] to set the field to the value of the Description field of the object
- Enhancements to the watsonx.governance Model Risk Governance solution
- This release includes the following enhancements:
- New Model Group object type
- Groups similar models together. For example, versions of a model that use a similar approach to solve a business problem might be grouped under a Model Group object.
- New Use Case Risk Scoring calculation
- Aggregates metrics by breach status into risk scores to give an overall view into how the underlying models of a use case are performing.
- New Discovered AI library business entity
- Provides a default place to store any AI deployments that are not following sanctioned governance practices within an organization (also known as "shadow AI").
To read more about these features, see What's new and changed in watsonx.governance in the IBM watsonx documentation.
- Related documentation:
- watsonx.governance
|
watsonx Orchestrate |
5.1.0 |
This release of watsonx Orchestrate includes the following features:
- Create projects in Skill studio to automate complex tasks and processes
- As a builder, you can create projects in Skill studio and publish them as skills to the Skills and apps page, where you enhance the skills and make them available in the skill catalog. You can use a skill by entering a phrase in the watsonx Orchestrate chat or by adding skills as actions on AI assistants. You can create skills from types like workflows, decision models, and generative AI responses. For details, see https://www.ibm.com/docs/en/watsonx/watson-orchestrate/current?topic=building-projects.
- Use formatting for skill descriptions
- As a builder, you can create skills that have skill input descriptions in plain text. You can format any of the descriptions by using bold, italics, and underline, and you can add hyperlinks.
- Use slot logic for validations
- As a builder, when you enhance a conversational skill, you can specify validations on the inputs
(slot filling) that are obtained from the nonlinear multi-turn conversation interaction.
- Related documentation:
- watsonx Orchestrate
|