| Cloud Pak for Data
common core services |
12.0.0 |
This release of common core services
includes the following features:
- Access more data with new connectors
-
- Amazon Aurora for MySQL
- Amazon Aurora for PostgreSQL
- ClickHouse
- Confluence
- DataStax HCD
- Iceberg Metastore
- SAP Business Warehouse for DataStage
- Write data to Microsoft Azure
Data Lake Storage with the watsonx.data connection
- You can now write data to the Microsoft Azure Data Lake Storage data source by using the
watsonx.data connection.
- Write data in the compressed gzip format to Amazon S3
- You can now write data in the compressed gzip format to the Amazon S3 data source.
You can then load data that is in this format into Snowflake by using the Snowflake
connection.
- Access new versions of your data sources
- You can now connect to updated versions of several data sources to take advantage of the
latest features and improvements.
If you install or upgrade a service that requires the common core services, the common core services will also be installed or upgraded.
|
| Cloud Pak for Data
scheduling service |
1.60.0 |
This release of scheduling service includes the following features:
- Schedule ARM CPUs and GPUs on remote physical locations
-
Premium If you use remote physical
locations to expand your IBM Software
Hub deployment
to remote clusters, you can now use the scheduling service to schedule ARM CPUs and ARM GPUs on
the remote physical locations.
If the remote cluster has a cluster autoscaler, you can use the following options:
- Use the
--max_cpu_arm option to allow the scheduling service to schedule additional ARM-based CPUs
if the workload exceeds the currently available ARM CPUs.
- Use the
--max_gpu_arm option to allow the scheduling service to schedule additional ARM-based GPUs
if the workload exceeds the currently available ARM GPUs.
For more information, see Registering a remote physical location with an instance of IBM Software
Hub.
Restriction: You cannot schedule ARM-based CPUs or ARM-based GPUs on existing remote
physical locations.
- Related documentation:
- Cloud Pak for Data scheduling service
|
| AI Factsheets |
7.0.0 |
- Related documentation:
- AI Factsheets
|
| Analytics Engine powered by Apache Spark |
5.3.0 |
Version 5.3.0 of the Analytics Engine powered by Apache Spark service includes various fixes.
- Related documentation:
- Analytics Engine powered by Apache Spark
|
| Cognos
Analytics |
29.0.0 |
This release of Cognos
Analytics
includes the following features:
- Manage fonts and style sheets with the Artifacts API
- The new version, 2.3.0, of the Cognos
Analytics
Artifacts API contains new artifact types: fonts and stylesheets. You can now use the API to list,
upload, download, and delete font and stylesheet files. For details, see Managing artifacts with Cognos
Analytics APIs.
- Updated software version for Cognos Analytics
- This release of the service provides Version 12.1.1 of the Cognos
Analytics software. For details, see Release 12.1.1 in the Cognos
Analytics documentation.
- Related documentation:
- Cognos
Analytics
|
| Cognos Dashboards |
5.3.0 |
This release of Cognos Dashboards
includes the following features:
- Updated software version
- This release of the service provides Version 12.1.1 of the Cognos
Analytics dashboards software. For details, see Release 12.1.1 - Dashboards in the Cognos Analytics documentation.
- Related documentation:
- Cognos Dashboards
|
| Data Gate |
9.0.0 |
This release of Data Gate
includes the following features:
- Remote Db2 support for Power
- You can now connect an additional type of target database to Data Gate: a remote Db2 target database that is running on an IBM Power system (ppc64le architecture). For details, see Connecting to a remote Db2 instance.
- Certificates signed by third-party CA
- Data Gate now supports SSL-encrypted
connections to the Db2 or Db2 Warehouse target
database that are secured with certificates signed by an external certificate authority (CA).
- Related documentation:
- Data Gate
|
| Data Privacy |
5.3.0 |
This release of Data Privacy
includes the following features:
- Duplicate data protection rules
- You can now duplicate existing data protection rules or edit the rule details to create new
rules.
- Activate and deactivate data protection rules in the UI
- You no longer have to delete rules that you don't want to use, and then recreate them later if
you need them again. Now, you can deactivate and activate data protection rules and revoke the rules
without deleting them. See and manage the status of any rule on the Rules
page.
- Related documentation:
- Data Privacy
|
| Data Product Hub |
5.3.0 |
This release of Data Product Hub
includes the following features:
- Create and access data contracts in Open Data Contract Standard v3
-
Streamline your management of data contracts by using Open Data Contract Standard v3 (ODCS v3)
format in Data Product Hub
-
Producers: You can now create data contracts in ODCS v3 format. Create contracts from
scratch or by using a predefined template.
-
Consumers: You can access and review data contracts directly in Data Product Hub or
download them in YAML format, along with any associated test status information.
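As an illustration of the ODCS v3 format, the following sketch builds a minimal contract and serializes it to the YAML form that consumers can download. The field values are hypothetical, and the structure follows the public ODCS v3 specification rather than any Data Product Hub template:
```python
# Minimal sketch of a data contract in Open Data Contract Standard v3 (ODCS v3)
# format. All identifiers and values below are hypothetical examples.
import yaml  # PyYAML

contract = {
    "apiVersion": "v3.0.0",        # ODCS specification version
    "kind": "DataContract",
    "id": "orders-contract-001",   # hypothetical contract ID
    "name": "Orders data contract",
    "version": "1.0.0",            # version of this contract
    "status": "active",
    "schema": [
        {
            "name": "orders",
            "physicalName": "ORDERS",
            "properties": [
                {"name": "order_id", "logicalType": "integer", "required": True},
                {"name": "order_date", "logicalType": "date"},
            ],
        }
    ],
}

# Produce the YAML representation that a consumer can download and review.
print(yaml.safe_dump(contract, sort_keys=False))
```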
- Deliver data products from Azure Databricks
-
You can now subscribe to a data product that is created in Azure Databricks by using the Access
in Azure Databricks delivery method. Consumers can directly access Azure Databricks resources by
using Data Product Hub. After delivery of the data products, consumers see details on how to access
the specific resources in Azure Databricks.
- Deliver data assets to a project by using the access in watsonx.data delivery method
-
You can now choose to import data product assets to a project by using the access in watsonx.data delivery method.
- Manage and view data product reviews
-
Consumers can now create, edit, and delete reviews of data products. Producers cannot manage
reviews.
- Related documentation:
- Data Product Hub
|
| Data Refinery |
12.0.0 |
- Related documentation:
- Data Refinery
|
| Data Replication |
5.3.0 |
This release of Data Replication includes the following features:
- Replicate range-partitioned Db2 tables to
supported target data stores
- You can now use the Data Replication
service to replicate Db2 tables that are
partitioned based on the range of values in one or more columns. These types of tables are also
known as range-partitioned tables. You can replicate range-partitioned tables to Db2
on Cloud, Db2 Warehouse on Cloud, IBM
watsonx.data, and Apache Kafka target data stores.
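For context, a range-partitioned table is defined with standard Db2 DDL, as in the sketch below; Data Replication can now replicate tables of this shape to the supported targets. The connection details and table names are hypothetical, and ibm_db is the standard IBM Db2 driver for Python:
```python
# Sketch: create a Db2 table that is partitioned by a range of date values.
# Connection details and table names are hypothetical.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=BLUDB;HOSTNAME=db2.example.com;PORT=50001;"
    "UID=user;PWD=password;SECURITY=SSL;",
    "", "",
)

# One partition per quarter, based on the range of values in SALE_DATE.
ibm_db.exec_immediate(conn, """
    CREATE TABLE SALES (
        SALE_DATE DATE NOT NULL,
        AMOUNT    DECIMAL(10, 2)
    )
    PARTITION BY RANGE (SALE_DATE)
    (STARTING FROM ('2024-01-01') ENDING ('2024-12-31') EVERY (3 MONTHS))
""")
ibm_db.close(conn)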
- Related documentation:
- Data Replication
|
| DataStage |
5.3.0 |
This release of DataStage
includes the following features:
- Connect to Microsoft Fabric Warehouse
-
You can now connect to Microsoft Fabric Warehouse by using the new connector in DataStage, so that
you can work with Fabric Warehouse data directly in your DataStage flows.
- Use IBM Instana to validate your workload runs
-
You can now automatically discover, monitor, and visualize components of your cluster in real
time. With IBM Instana observability features, you can efficiently detect and address performance
problems, minimizing the time that you spend on troubleshooting.
- Related documentation:
- DataStage
|
| Data Virtualization |
3.3.0 |
This release of Data Virtualization
includes the following features:
- Enable high concurrency and greater scalability of query processing by using Data Virtualization agents
-
Data Virtualization agents now run in their own
dedicated pods instead of within the Data Virtualization
primary head pod, for better system scalability.
-
For new installations, Data Virtualization
agent pods are automatically provisioned based on the sizing option that you choose in the web
client.
-
If your existing Data Virtualization instance uses
custom sizing, then upgrading your Data Virtualization
instance automatically adds five agent pods, each requiring two CPUs. The increased resource usage
is typically balanced if your custom cluster was deployed with sufficient resources to accommodate
the extra load without dropping below a stable minimum. However, if you have custom sizing and
limited resources, then you might experience a net increase in resource usage.
To customize the number of Data Virtualization agent pods, or adjust the CPU usage and memory
settings, see Customizing the
pod size and resource usage of Data Virtualization
agents.
- Automatically apply personal credentials setting when importing data sources
-
Personal credentials are now enabled by default in Data Virtualization. When you add a data source as a platform
connection with personal credentials turned on, the same setting is automatically applied when the
data source is imported into Data Virtualization.
To successfully access, virtualize, and query the data source through Data Virtualization, each user must configure their own
credentials for the platform connection.
- Migrate Data Virtualization assets to and from your
Git repository
-
You can now export and import your Data Virtualization
assets across different environments (for example, from development to QA or production) from your
Git repository by using Data Virtualization APIs. By
using Git, Admin users can quickly synchronize assets, like tables, nicknames, and views, by
promoting the Data Virtualization objects from a Data Virtualization instance to a Git branch, and by pulling
updates from Git back into Data Virtualization.
You can migrate the following objects with Git:
- Nicknames (excluding those with personal credentials)
- Schemas
- Tables (excluding those with personal credentials)
- Views
- Authorization statements (GRANTs)
- Statistics
See Migrating Data Virtualization objects by
using Git.
- Use new views to simplify troubleshooting and admin tasks
-
-
You can now troubleshoot connection failures by using automated diagnostic tests. When a
data source connection fails, Data Virtualization
automatically runs a series of connectivity tests (including ping, OpenSSL, netcat, and traceroute)
to identify the root cause. The results are logged in ConnectivityTest.log on each
qpagent, along with a unique DIAGID that is included in the error message. You can use the DIAGID
with the LISTCONNECTIVITYTESTWARNINGS view to retrieve detailed logs. The DIAGID is cleared when the
data source connection becomes available again.
-
You can now display the list of columns of the tables in an RDBMS source by using the LISTCOLUMNS
view.
-
You can now set configuration properties specific to Data Virtualization and Federation directly for your Data Virtualization connection by using the
SETCONNECTIONCONFIGPROPERTY and SETCONNECTIONCONFIGPROPERTIES stored procedures. Additionally, you
can now set Federation-specific options for existing SETRDBCX procedures.
- See the full list of Stored
procedures and Views.
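A short sketch of how these views might be queried from Python with the ibm_db driver follows. The connection details and the DIAGID value are placeholders, and the DVSYS schema and column names are assumptions based on where Data Virtualization exposes its objects:
```python
# Sketch: retrieve diagnostic logs for a failed data source connection.
# Connection string, schema (DVSYS), and the DIAGID value are assumptions.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=BIGSQL;HOSTNAME=dv.example.com;PORT=32051;"
    "UID=user;PWD=password;SECURITY=SSL;",
    "", "",
)

# Look up detailed logs for the DIAGID reported in the error message.
stmt = ibm_db.prepare(
    conn,
    "SELECT * FROM DVSYS.LISTCONNECTIVITYTESTWARNINGS WHERE DIAGID = ?",
)
ibm_db.execute(stmt, ("a1b2c3d4",))  # hypothetical DIAGID
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row)
    row = ibm_db.fetch_assoc(stmt)

# List the columns of tables in an RDBMS source through the new view.
cols = ibm_db.exec_immediate(conn, "SELECT * FROM DVSYS.LISTCOLUMNS")
ibm_db.close(conn)
```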
- Grant collaborators the new INSPECT data source privilege to view source metadata
-
You can now grant the INSPECT privilege to users or to the DV_METADATA_READER role to enable those users
to import lineage metadata with MANTA Automated Data Lineage.
To get started in the web client, navigate to the Data sources page, select the
Manage access setting on your data source, and then select the
grantees. You can also grant the INSPECT privilege to the DV_METADATA_READER role by selecting
Grant INSPECT privilege to the DV_METADATA_READER role. In the
INSPECT column, you can grant or revoke the INSPECT privilege for the
grantee.
See INSPECT privilege in Data source connection access restrictions in Data
Virtualization and Configuring Data Virtualization connections for lineage imports.
- Connect to Apache Cassandra and watsonx.data Presto data sources
- You can now connect to Apache Cassandra and
watsonx.data Presto from Data Virtualization.
- Related documentation:
- Data Virtualization
|
| Db2 |
5.3.0 |
This release of Db2
includes the following features:
- Deploy Db2 database with non-root
deployment (Restricted-v2 SCC)
-
Now, when deploying a new Db2 database in
your IBM Software
Hub cluster, you can enable non-root
deployment by selecting the checkbox Deploy Db2 with non-root deployment on the Advanced
Configurations page. Selecting the Restricted-v2 option uses Red Hat®
OpenShift’s default restricted-v2 Security Context
Constraint (SCC) to meet strict security requirements while maintaining full functionality.
This SCC ensures that:
- Workloads run with non-root privileges.
- Use of
sudo or elevated permissions is not allowed.
For more information on permission levels and requirements, see Deploying Db2 with non-root access in a restricted-v2 SCC on
IBM Software
Hub.
- Related documentation:
- Db2
|
| Db2
Big SQL |
8.3.0 |
- Related documentation:
- Db2
Big SQL
|
| Db2
Data Management Console |
5.3.0 |
- Related documentation:
- Db2
Data Management Console
|
| Db2 Warehouse |
5.3.0 |
This release of Db2 Warehouse includes the following
features:
- Deploy Db2 Warehouse database
with non-root deployment (Restricted-v2 SCC)
-
Now, when deploying a new Db2 Warehouse database in your IBM Software
Hub cluster, you can enable non-root deployment by
selecting the checkbox Deploy Db2 Warehouse with non-root deployment
on the Advanced Configurations page. Selecting the Restricted-v2 option uses Red Hat
OpenShift’s default restricted-v2 Security Context
Constraint (SCC) to meet strict security requirements while maintaining full functionality.
This SCC ensures that:
- Workloads run with non-root privileges.
- Use of
sudo or elevated permissions is not allowed.
For more information on permission levels and requirements, see Deploying Db2 Warehouse with non-root access in a
restricted-v2 SCC on IBM Software
Hub.
- Query external data with Datalake tables
-
You can now use Datalake tables to work with data stored in open formats like PARQUET and ORC
directly from Db2 Warehouse,
without moving the data into the database.
With Datalake tables, you can do the following:
- Query external data: Define a Datalake table in Db2 Warehouse and use it in complex queries
with other Db2 Warehouse
tables.
- Export Db2 data to object storage while keeping it queryable, by using
INSERT INTO ... SELECT statements or
CREATE DATALAKE TABLE AS SELECT statements.
- Import data from a Datalake table into a table in the database. You can perform operations such
as casts, joins, and dropping columns to manipulate data during importing.
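A sketch of what defining and querying a Datalake table can look like, again through the ibm_db Python driver; the DDL options, bucket location, and table names are illustrative assumptions, so check the CREATE DATALAKE TABLE reference for the exact syntax:
```python
# Sketch: define a Datalake table over PARQUET files in object storage and
# join it with a regular Db2 Warehouse table. Names and options are examples.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=BLUDB;HOSTNAME=db2wh.example.com;PORT=50001;"
    "UID=user;PWD=password;SECURITY=SSL;",
    "", "",
)

# The data stays in the bucket; it is not moved into the database.
ibm_db.exec_immediate(conn, """
    CREATE DATALAKE TABLE CLICKS (
        USER_ID INTEGER,
        URL     VARCHAR(2048),
        TS      TIMESTAMP
    )
    STORED AS PARQUET
    LOCATION 's3a://my-bucket/clicks/'
""")

# Use the Datalake table in a query together with a database table.
ibm_db.exec_immediate(conn, """
    SELECT U.NAME, COUNT(*) AS CLICK_COUNT
    FROM CLICKS C JOIN USERS U ON U.ID = C.USER_ID
    GROUP BY U.NAME
""")
ibm_db.close(conn)
```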
- Related documentation:
- Db2 Warehouse
|
| Decision Optimization |
12.0.0 |
This release of Decision Optimization
includes the following features:
- Compare models in Decision Optimization experiments
- You can now compare models from different scenarios in a Decision Optimization experiment and compare the log files when the models
are solved. When you compare models this way, you can see the different scenarios side by side.
- Related documentation:
- Decision Optimization
|
| EDB Postgres |
13.22, 14.19, 15.14, 16.10, 17.6 |
- Related documentation:
- EDB Postgres
|
| Execution Engine for Apache Hadoop |
5.3.0 |
Version 5.3.0 of the Execution Engine for Apache Hadoop service includes various fixes.
- Related documentation:
- Execution Engine for Apache Hadoop
|
| IBM Knowledge Catalog |
5.3.0 |
This release of IBM Knowledge Catalog includes the following features:
- Create SQL-based assets and data quality rules with text instead of SQL
- Now you can describe the data asset or the data quality rule that you want to create in plain
English and convert this text query into an SQL query. You can then run the generated query to
create the asset or the rule.
Tech preview This is a
technology preview and is not supported for use in production environments.
- Disable certain generative AI capabilities for selected projects
- Even if the product
is installed with
generative AI capabilities, you might not want to use these capabilities in all of your projects.
You can now disable these capabilities per project. In projects where the capabilities are disabled,
you can't work with natural language queries to create SQL-based assets and data quality rules. In
addition, LLM-based name, description, or term generation and term assignment in metadata enrichment
are disabled.
- Define catalog-specific custom properties for assets
- You can now restrict custom properties for assets to a specific catalog. By using
catalog-specific custom properties, you can more effectively display values that pertain only to
selected domains and ensure that the right information is available to the right users.
- To list custom properties that are restricted by a given catalog, use the sort by scope option
and scroll down to the items for the catalog that you're interested in.
- Manage columns for catalogs
- You can now select which columns to display in the asset listing grid by clicking
Manage columns on the catalog page. Select your columns, reorder them if necessary,
and save your preferences to keep the information that is most relevant for you readily available.
For example, you can modify the view to show you a list of assets with the display name, owners, and
date added columns only.
- Optimize term assignment
- With the new tuning options for term assignment, now you can influence the weighting of term
suggestions for better precision or recall.
- Import primary keys and foreign keys and visualize them in Relationship Explorer
- Import primary keys and foreign keys with metadata import instead of metadata enrichment. After
import, you can access the associated relationships through the right-side panel and Relationship Explorer.
- Versioning of governance artifacts
- Track historical changes for the artifacts, schedule new versions to be published in the future,
and restore or archive previous versions with the new Versions panel.
- Related documentation:
- IBM Knowledge Catalog
|
| IBM Manta Data
Lineage |
5.3.0 |
This release of IBM Manta Data
Lineage includes the following features:
- Export data lineage to Collibra
- You can now export data lineage and view it in Collibra. If you transfer lineage information into
the Collibra data governance platform, you can see
a comprehensive view of your data flows and dependencies within your governance framework.
- Starting parents are introduced in the data lineage graph
- When you select an asset to be a starting asset in the lineage, all assets that are higher in
the hierarchy are marked as starting parents. Also, all child assets of the selected asset are
marked as starting assets. This distinction clarifies which assets are selected as the starting
points for the lineage.
- Related documentation:
- IBM Manta Data
Lineage
|
| IBM
Master Data Management |
4.10.28 |
This release of IBM
Master Data Management
includes the following features:
- IBM Match 360 is now known as IBM
Master Data Management
-
The IBM Match 360 service is renamed to IBM
Master Data Management.
- View historical data for entities, records, and relationships in your master data
-
You can now view the history of each entity, record, and relationship in your master data and
compare historical attribute values to the current version. Select any past update to view the
attribute values at that point in time and also see whether each update was initiated by a user,
source system, or linkage action. You can use this capability to help with audit tracking and
analysis of data changes over time.
A data engineer can configure whether the service keeps historical data. Storing history details
increases the storage requirements of your database.
- Configure potential match workflows for each entity type
-
You can now configure a potential match workflow for each entity type in your data model.
Potential match workflows identify matching issues within your data, then create and assign tasks
for data stewards to resolve them. Potential matches are now also searchable by source to help data
stewards to identify and resolve potential duplicate records and entities. Configure all your task
workflows from the Task types page.
- Edit any attribute of a master data entity, including composite values
-
You can now edit any of an entity's attribute values, even if the original value was derived from
the entity's member records by applying attribute composition rules. After a data steward overrides
a composite value, the user-defined value will continue to be maintained even if the composition of
the entity changes.
This capability is available only for entity types that a data engineer has configured to enable
entity persistence.
- View more information about group types and hierarchy types
-
When you open a hierarchy type or group type from the data types pages, you can now see more
details at a glance, including who created the type, when they created it, and a list of group or
hierarchy instances that are based on the selected type. You can also navigate directly from the
data types page to the Workspace view to manage each hierarchy and group instance, and its
members.
- Related documentation:
- IBM
Master Data Management
|
| IBM
StreamSets |
6.3.0 |
- Related documentation:
- IBM
StreamSets
|
| Informix |
10.0.0 |
- Related documentation:
- Informix
|
| MANTA Automated Data Lineage |
42.14.1 |
- Related documentation:
- MANTA Automated Data Lineage
|
| MongoDB |
7.0.18-ent, 8.0.6-ent |
- Related documentation:
- MongoDB
|
| OpenPages |
9.6.0 |
Version 9.6.0 of the OpenPages service includes various fixes.
- Related documentation:
- OpenPages
|
| Orchestration Pipelines |
5.3.0 |
This release of Orchestration Pipelines
includes the following features:
- See the parameters of utility scripts in the Expression Builder
- When you use the Expression Builder, you can now see the parameters (inputs) that the utility
scripts require. This enhancement shows you what arguments you need to input, which leads to fewer
errors.
The utility scripts run using a CEL (Common Expression Language) expression.
- New default embedded runtime
- You can now use a new default embedded runtime, removing the dependency on the Tekton runtime.
This update eliminates compatibility issues with OpenShift Pipelines, ensuring stable coexistence. The
new runtime delivers faster performance and greater scalability for your pipeline workloads.
- Related documentation:
- Orchestration Pipelines
|
| Planning Analytics |
5.3.0 |
This release of Planning Analytics
includes the following features:
- Updated versions of Planning Analytics software
- This release of the service provides the following software versions:
- Planning Analytics Workspace Version 3.1.2
For details, see 3.1.2 - What's new in the Planning Analytics Workspace documentation.
- Planning Analytics Spreadsheet Services Version 3.1.2
For details, see
3.1.2 - Feature updates in the TM1
Web documentation.
- Planning Analytics for Microsoft Excel Version 3.1.2
For details, see
3.1.2 - Feature updates in the Planning Analytics for Microsoft Excel documentation.
Version 5.3.0 of the Planning Analytics service includes various fixes.
- Related documentation:
- Planning Analytics
|
| Product Master |
9.0.0 |
This release of Product Master
includes the following features:
- Categorize multiple items at once
- You can now categorize more than one item at a time by using the bulk categorization capability
on the Data explorer, Search, or Free text search pages.
- Create rules on attribute collections
- You can now create rules on attribute collections, and across different specifications.
- Generate report with only selected data
- In the Generate report feature, you can now specify to include only the selected attributes,
rows, and columns that get included in the exported Microsoft Excel worksheet.
- Suspect Duplicate Processing with Data survivorship
- Now, when you run Suspect Duplicate Processing, the Data survivorship rules improve how IBM
Product Master matches and merges duplicate records. You can also match data that is still being
processed.
- Enhanced Data completeness dashboard
- You can now use the Data completeness dashboard to check the completeness of your data based on
categories and items within workflows.
- Dashboard builder enhancements
- You can now use the new Criteria console to
view the mapping of all criteria to their associated charts.
- You can now drag and drop charts, click select chart data to view details, and also export
dashboard data from the Dashboard tab.
- Related documentation:
- Product Master
|
| RStudio® Server
Runtimes |
12.0.0 |
Version 12.0.0 of the RStudio Server
Runtimes service includes various fixes.
- Related documentation:
- RStudio Server
Runtimes
|
| SPSS
Modeler |
12.0.0 |
This release of SPSS
Modeler
includes the following features:
- Override storage type in a Data Asset node
- You can now easily override the inferences that the Data Asset node makes about storage types
when it imports data. The Data Asset node reads a sample of the data that it imports to infer what
type of data is in a table field, such as an integer, date, or string. Previously, changing the
storage type after this inference was difficult. Now you can override the storage type for a field
in the settings for the Data Asset node.
- Analyze Chinese text data in SPSS
Modeler with
Text Analytics
- You can now use the Text Analytics nodes in SPSS Modeler to analyze text data that is written in
Simplified Chinese. Text Analytics nodes use advanced linguistic technologies and text mining
techniques to analyze text data and extract concepts, patterns, and categories.
- Save visualizations to the project
- When you create an SPSS
Modeler job, you can now
choose to save any visualizations that are generated to the project. For example, you can save the
graphical outputs from a Chart node, Plot node, or Evaluation node. You can view the files in the
Assets tab and download them.
- Related documentation:
- SPSS
Modeler
|
| Synthetic Data Generator |
12.0.0 |
This release of Synthetic Data Generator
includes the following features:
- Create jobs for generating unstructured synthetic data by using the user interface
- In addition to using the REST API, you can now use the user interface to create jobs for
generating unstructured synthetic data.
- Use multi-table nodes to generate synthetic data with referential integrity
- You can now create synthetic data by using several data tables from a database connection
instead of just one data table. The multi-table nodes use referential integrity to preserve
parent–child dependencies across multiple tables, and they also preserve distributions and
correlations within the same table. When you generate synthetic data by using these nodes, the
synthetic data mimics all these dependencies, which allows you to make synthetic data that more
closely mimics the entire dataset.
- Scripting with Synthetic Data Generator
- You can now programmatically make changes to nodes and flows in Synthetic Data Generator by using scripts written in Python or Jython. Use
either the Scripting pane or embedded API to run your scripts. You can use
scripting to automate tasks that are highly repetitive or to quickly change all of the nodes in a
flow.
- Scripting with parameters
- You can also use parameters in the Python scripts that you write for Synthetic Data Generator. You can use parameters as a way of passing values
at runtime rather than hard coding them directly in a script. This flexibility makes it easier to
adjust settings for a Synthetic Data Generator flow while reusing
the same scripts.
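As a sketch of what such a script can look like, assuming Synthetic Data Generator flows expose the SPSS Modeler-style scripting API (modeler.script); the node type, property, and parameter names below are hypothetical:
```python
# Sketch of a flow script with a runtime parameter. The scripting API shown
# here is the SPSS Modeler style; the names below are hypothetical examples.
import modeler.api

stream = modeler.script.stream()  # the flow that this script runs against

# Read a value that is passed at run time instead of hard coding it.
table_name = stream.getParameterValue("source_table")

# Find the first import node of a given type and repoint it.
src = stream.findByType("database", None)  # hypothetical node type
src.setPropertyValue("table_name", table_name)

# Execute the flow's terminal nodes.
results = []
stream.runAll(results)
```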
- Override storage type in an Import node
- You can now override the inferences that the Import node makes about storage types when it
imports data. The Import node reads a sample of the data that it imports to infer what type of data
is in a table field, such as an integer, date, or string. Previously, changing the storage type
after this inference was difficult. Now you can easily override the storage type for a field in the
settings for the Import node.
- Save evaluation metrics to the project
- When you create a Synthetic Data Generator job, you can now
choose to save the data that is generated when an Evaluate node runs. You can then view the data in
the Assets tab and download it.
- Related documentation:
- Synthetic Data Generator
|
| Voice Gateway |
1.12.0 |
Version 1.12.0 of the Voice Gateway service includes various fixes.
- Related documentation:
- Voice Gateway
|
| Watson Discovery |
5.3.0 |
- Related documentation:
- Watson Discovery
|
| Watson
Machine Learning |
5.3.0 |
This release of Watson
Machine Learning
includes the following features:
- Deploy your Python code, R code, and TensorFlow models by using the latest software
specifications
-
For R scripts, you can now use the latest software specification that is based on R 4.4 to deploy
your code.
You can also deploy Python functions, Python scripts, and TensorFlow models with the new
runtime-25.1-py3.12-cuda version of the 25.1 software specification, and use GPUs for
inferencing.
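A minimal sketch of storing a deployable Python function with the new software specification through the ibm-watsonx-ai Python client follows; the URL, credentials, and space ID are placeholders, and the exact meta-property names can vary by client version:
```python
# Sketch: store a Python function with the runtime-25.1-py3.12-cuda
# software specification. Credentials and IDs are placeholders.
from ibm_watsonx_ai import APIClient, Credentials

client = APIClient(Credentials(
    url="https://cpd.example.com",  # your IBM Software Hub route
    username="admin",
    password="password",
    instance_id="openshift",
    version="5.3",
))
client.set.default_space("<deployment_space_id>")

def deployable_function():
    """Hypothetical deployable function that returns a scoring closure."""
    def score(payload):
        values = payload["input_data"][0]["values"]
        return {"predictions": [{"values": values}]}
    return score

sw_spec_id = client.software_specifications.get_id_by_name(
    "runtime-25.1-py3.12-cuda"
)
client.repository.store_function(
    deployable_function,
    meta_props={
        client.repository.FunctionMetaNames.NAME: "gpu-scoring-fn",
        client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: sw_spec_id,
    },
)
```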
- Optimize score and runtime during hyperparameter tuning in AutoAI for Machine Learning
experiments
-
You can now optimize hyperparameter tuning for the highest score and shortest runtime when you
run AutoAI for Machine Learning experiments.
Previously, this optimization was available only during algorithm selection. Now, you can apply
score and runtime optimization during hyperparameter tuning or across both phases for a balanced
approach, helping you build high-performing models faster while reducing training time.
- Related documentation:
- Watson
Machine Learning
|
| Watson
OpenScale |
5.3.0 |
Version 5.3.0 of the Watson
OpenScale service includes various fixes.
- Related documentation:
- Watson
OpenScale
|
| Watson Speech services |
5.3.0 |
This release of Watson Speech services includes the following features:
- Spoken language identification
- Language Identification (LID) now automatically detects spoken languages in audio streams. The
model continuously analyzes incoming audio and returns the identified language when it reaches a
certain confidence threshold. The model updates the detected language throughout the session and
enables early session termination after the initial response. The client applications can switch
languages for downstream processing, making it ideal for multilingual environments such as call
centers. For details, see Spoken language identification.
- Speech transcript enrichment
- Speech transcript enrichment improves the readability and usability of raw Automatic Speech
Recognition (ASR) transcripts. This post-processing service automatically adds punctuation and
applies intelligent capitalization to enhance the structure and clarity of spoken content. For
details, see Speech transcript enrichment. For details on how to configure
the enrichment features, see Configure Speech transcript enrichment.
- Natural voices
- You can now use the following natural voices with Watson Speech services:
- US English (en-US) Ellie Natural voice
- US English (en-US) Emma Natural voice
- US English (en-US) Ethan Natural voice
- US English (en-US) Jackson Natural voice
- US English (en-US) Victoria Natural voice
- UK English (en-GB) Chloe Natural voice
- UK English (en-GB) George Natural voice
- CA English (en-CA) Hannah Natural voice
- Brazilian Portuguese (pt-BR) Lucas Natural voice
- Brazilian Portuguese (pt-BR) Camila Natural voice
For details, see Natural voices.
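For illustration, this is roughly how a natural voice can be used through the Watson Text to Speech Python SDK; the service URL and credentials are placeholders, and the exact voice identifier for "Ellie Natural" is an assumption, so check the Natural voices documentation for the identifier that your instance uses:
```python
# Sketch: synthesize speech with a natural voice on IBM Software Hub.
# The URL, credentials, and voice identifier are assumptions.
from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator

authenticator = CloudPakForDataAuthenticator(
    "admin", "password", "https://cpd.example.com/icp4d-api"
)
tts = TextToSpeechV1(authenticator=authenticator)
tts.set_service_url("https://cpd.example.com/text-to-speech/api")

response = tts.synthesize(
    "Hello from the new natural voices.",
    voice="en-US_EllieNatural",  # assumed identifier for Ellie Natural
    accept="audio/wav",
).get_result()

with open("hello.wav", "wb") as f:
    f.write(response.content)
```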
- Related documentation:
- Watson Speech services
|
| Watson Studio |
12.0.0 |
This release of Watson Studio includes the following features:
- Secure projects by using permanent admins
-
You can now set yourself or other users as permanent admins to keep projects secure and prevent
orphaned projects. If you have permission to manage projects, you become a permanent admin
automatically when you create or join a project. Permanent admins have more permissions than regular
admins. They can maintain full control of projects and can manage other permanent admins.
- Easily maintain clear project documentation for projects with deprecated Git integration
-
You can now create and manage project documentation directly in the new Documentation editor for
projects that have deprecated Git integration. By using the Documentation editor, you can easily
organize details, collaborate with your team, and maintain clear records. Access the Documentation
editor from the project Overview page.
- Related documentation:
- Watson Studio
|
| Watson Studio Runtimes |
12.0.0 |
This release of Watson Studio Runtimes includes the following features:
- Run Python and R code by using environments that are based on Spark 4 engine
-
You can now run your Python and R code by using the latest environments that are based on
the Spark 4 engine.
- Run R 4.4 code in Jupyter Notebooks
-
You can now run R 4.4 code in Jupyter Notebooks by using Runtime 25.1 on R 4.4.
- Related documentation:
- Watson Studio Runtimes
|
| watsonx.ai™ |
12.0.0 |
This release of watsonx.ai includes the following features:
- New foundation models in watsonx.ai
-
You can now use the following foundation models for inferencing from the Prompt Lab and the API:
- granite-4-h-tiny
- granite-docling-258M
- ibm-defense-4-0-micro
For details, see Foundation models in watsonx.ai.
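A minimal sketch of inferencing one of these models through the ibm-watsonx-ai Python client; the URL, credentials, and project ID are placeholders, and the model identifier is taken from the list above (the exact ID can differ in your deployment):
```python
# Sketch: prompt a newly added foundation model. Credentials and the
# project ID are placeholders; the model ID is from this release note.
from ibm_watsonx_ai import APIClient, Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

client = APIClient(Credentials(
    url="https://cpd.example.com",
    username="admin",
    password="password",
    instance_id="openshift",
    version="5.3",
))

model = ModelInference(
    model_id="granite-4-h-tiny",
    api_client=client,
    project_id="<project_id>",
)
print(model.generate_text("Summarize the new features in this release."))
```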
- Access models across multiple providers with the model gateway
-
You can now securely configure and interact with foundation models from multiple providers with
the model gateway by using the API. In addition, you can manage the model gateway with integrated
load-balancing, access policies, and rate limits.
- Capture semantic meaning and refine retrieved results by using custom embedding and reranking
models
-
You can now add custom embedding and reranking models to watsonx.ai and use them to capture semantic
meaning and refine retrieved results.
For details, see Registering custom foundation models for global deployment.
- All custom foundation models now use the vLLM inferencing server
-
All custom foundation models now use the vLLM inferencing server. If your deployed models use the
TGIS inferencing server, you might have to migrate them.
- New text classification API in watsonx.ai
- You can now use the new text classification method in the watsonx.ai REST API to classify your document
before you extract textual content to use in a RAG solution.
With the classification API, you can classify your document
into one of several supported common document types without running a longer
extraction task. By pre-processing the document, you can then customize your text extraction request
to efficiently extract relevant details from your document.
- Use the user interface for Synthetic Data Generator to generate
unstructured synthetic data
- The user interface for creating jobs to generate unstructured synthetic data is now generally
available. The user interface in Synthetic Data Generator makes
creating and running jobs easier by organizing all the settings and seed document requirements into
simple options and fields.
- Improvements for AutoAI for RAG
experiments
- You can now use the following features for your AutoAI for RAG experiments:
-
- Use semantic chunking in AutoAI for RAG
experiments
-
You can now use the semantic chunking method to break down documents in an AutoAI for RAG experiment. Semantic chunking splits
documents based on meaning, making it well-suited for complex or unstructured data.
-
- Use chat API models in AutoAI for RAG
experiments
-
You can now use chat API models in AutoAI
for RAG experiments, instead of prompt template models. These models must have chat capabilities to
work in AutoAI for RAG experiments.
-
- Auto-deploy top pattern in AutoAI for RAG
experiments
-
You can now enable automatic deployment of the top-performing pattern after an AutoAI for RAG experiment completes. You can turn on
auto-deployment when you set up the experiment. Auto-deployment helps reduce manual steps and
further automates the experiment workflow.
-
- Use multiple vector indexes in AutoAI for
RAG experiments
-
You can now select up to 20 vector indexes for your document collection in an AutoAI for RAG experiment. During experiment setup,
when you add document and evaluation sources, choose Knowledge bases and then select up to 20
connections. You can then define details for each connection, such as index name and embedding
models. Using multiple indexes gives you more flexibility and can improve the quality and
performance of your experiments.
-
- Use SQL database schemas in AutoAI for RAG
experiments
-
You can now choose an SQL database schema as a knowledge base in an AutoAI for RAG experiment. You can use SQL connections
such as Db2, PostgreSQL, and MySQL. When using SQL sources, chunking settings are disabled,
and only answer correctness metrics are available for optimization. With an SQL RAG, structured data
can be retrieved directly from the relational database, which can improve answer accuracy and
relevance when compared with document-based sources.
- Related documentation:
- watsonx.ai
|
| watsonx Assistant |
5.3.0 |
Version 5.3.0 of the watsonx Assistant service includes various security
fixes.
- Related documentation:
- watsonx Assistant
|
| watsonx™
BI |
3.3.0 |
This release of watsonx
BI includes the following features:
- Manage new roles for watsonx
BI
users through IBM Software
Hub Access control
- As a watsonx
BI user, you can now
use IBM Software
Hub Access control to assign roles to
your team that are specific to watsonx
BI. You can assign the following roles to users or groups: Business Intelligence Administrator,
Business Intelligence Analyst, or Business Intelligence Consumer.
- For more information, see Roles and permissions for watsonx
BI.
- Configure flat file service to upload files to watsonx
BI
- You can configure flat file service to upload files and use them as a source of data in
Conversations.
- For more information, see Configuring flat file service.
- Integrate with IBM
watsonx.data Premium
- You can now easily access watsonx
BI
from a tile in
watsonx.data Premium. By combining
watsonx.data Premium and watsonx
BI, you can use business intelligence
tools to get deeper insights into your structured and unstructured data.
- For more information, see IBM
watsonx.data Premium.
Version 3.3.0 of the watsonx
BI service includes various security
fixes.
- Related documentation:
- watsonx
BI
|
| watsonx Code Assistant™ |
5.3.0 |
- Related documentation:
- watsonx Code
Assistant
|
| watsonx Code Assistant for Red Hat
Ansible® Lightspeed |
5.3.0 |
- Related documentation:
- watsonx Code Assistant for Red Hat
Ansible Lightspeed
|
| watsonx Code Assistant for Z |
5.3.0 |
This release of watsonx Code Assistant for Z includes the following features:
- Transform COBOL files to Java by using SQLite
- You can now transform COBOL files to Java by using SQLite as a local alternative to the Db2 on
Cloud instance, with certain limitations. For more information, see Transform COBOL files to Java by using SQLite.
- Bulk generation provides better synchronization with Red Hat extension's Outline view
- The bulk generation feature now operates in single-threaded mode, translating one program at
a time, for better synchronization with the Red Hat extension's Outline view.
- Translate large paragraphs of COBOL code
- You can now translate large paragraphs of COBOL code because the default timeout is now set to
10 minutes across all workflows.
- Related documentation:
- watsonx Code Assistant for Z
|
| watsonx Code Assistant for Z Agentic |
2.7.0 |
- Related documentation:
- watsonx Code Assistant for Z
|
| watsonx Code Assistant for Z Code Explanation |
2.8.0 |
- Related documentation:
- watsonx Code Assistant for Z Code Explanation
|
| watsonx Code Assistant for Z Code Generation |
2.8.0 |
- Related documentation:
- watsonx Code Assistant for Z Code Generation
|
| watsonx.data |
2.3.0 |
For
a complete list of new and updated features in this release, see the Release notes in the watsonx.data documentation.
Version 2.3.0 of the watsonx.data service includes various
fixes.
- Related documentation:
- watsonx.data
|
|
watsonx.data Premium |
2.3.0 |
For
a complete list of new and updated features in this release, see the Release notes in the
watsonx.data Premium documentation.
Version 2.3.0 of the
watsonx.data Premium service includes various
fixes.
- Related documentation:
-
watsonx.data Premium
|
|
watsonx.data intelligence |
2.3.0 |
This release of
watsonx.data intelligence includes the following features:
- Curate unstructured data with a new tool
- With the new unstructured data curation tool, you can now import and analyze unstructured
documents, group these documents based on the analysis results, and process the documents further
based on the grouping.
You set up an analysis flow where you import metadata, detect the format
and the language of documents, and classify the documents based on predefined or custom document
classes. As a second step, you set up a processing flow where you transform these grouped documents,
generate entities and embeddings, and create document sets and document libraries that you can then
use in your gen AI projects.
The unstructured
data curation tool replaces the unstructured data import and unstructured data enrichment tools. See
the Unstructured data import, unstructured data enrichment, and
base document sets deprecation notice.
- Create SQL-based assets and data quality rules with text instead of SQL
- Now you can describe the data asset or the data quality rule that you want to create in plain
English and convert this text query into an SQL query. You can then run the generated query to
create the asset or the rule.
Tech preview This is a
technology preview and is not supported for use in production environments.
- Disable certain generative AI capabilities for selected projects
- Even if the product
is installed with
generative AI capabilities, you might not want to use these capabilities in all of your projects.
You can now disable these capabilities per project. In projects where the capabilities are disabled,
you can't work with natural language queries to create SQL-based assets and data quality rules. In
addition, LLM-based name, description, or term generation and term assignment in metadata enrichment
are disabled.
- Define catalog-specific custom properties for assets
- You can now restrict custom properties for assets to a specific catalog. By using
catalog-specific custom properties, you can more effectively display values that pertain only to
selected domains and ensure that the right information is available to the right users.
- To list custom properties that are restricted by a given catalog, use the sort by scope option
and scroll down to the items for the catalog that you're interested in.
- Manage columns for catalogs
- You can now select which columns to display in the asset listing grid by clicking
Manage columns on the catalog page. Select your columns, reorder them if necessary,
and save your preferences to keep the information that is most relevant for you readily available.
For example, you can modify the view to show you a list of assets with the display name, owners, and
date added columns only.
- Optimize term assignment
- With the new tuning options for term assignment, now you can influence the weighting of term
suggestions for better precision or recall.
- Import primary keys and foreign keys and visualize them in Relationship Explorer
- Import primary keys and foreign keys with metadata import instead of metadata enrichment. After
import, you can access the associated relationships through the RHS panel and Relationship Explorer.
- Versioning of governance artifacts
- Track historical changes for the artifacts, schedule new versions to be published in the future,
and restore or archive previous versions with the new Versions panel.
- Export data lineage to Collibra
- You can now export data lineage and view it in Collibra. If you transfer lineage information into
Collibra data governance platform, you can see
a comprehensive view of your data flows and dependencies within your governance framework.
- Starting parents are introduced in the data lineage graph
- When you select an asset to be a starting asset in the lineage, all assets that are higher in
the hierarchy are marked as starting parents. Also, all child assets of the selected asset are
marked as starting assets. This distinction clarifies which assets are selected as the starting
points for the lineage.
- Disable data lineage for the unstructured data flows
- Data lineage is generated for Unstructured Data Integration and unstructured data curation flows
by default. You can disable the lineage generation for unstructured data to control when lineage is
created.
- Create and access data contracts in Open Data Contract Standard v3
- Streamline your management of data contracts by using Open Data Contract Standard v3 (ODCS v3)
format in Data Product Hub
- Producers: You can now create data contracts in ODCS v3 format. Create contracts from
scratch or by using a predefined template.
- Consumers: You can access and review data contracts directly in Data Product Hub or
download them in YAML format, along with any associated test status information.
This optimized process enhances collaboration, ensures data quality, and enhances trust
between producers and consumers.
- Deliver data products from Microsoft Azure Databricks
- You can now subscribe to a data product that is created in Azure Databricks by using the Access
in Azure Databricks delivery method. Consumers can directly access Azure Databricks resources. After
delivery of the data products, consumers see details on how to access the specific resources in
Azure Databricks.
- Deliver data assets to a project by using the access in watsonx.data delivery method
- You can now choose to import data product assets to a project by using the access in watsonx.data delivery method.
- Manage and view data product reviews
-
Consumers can now create, edit, and delete reviews of data products. Producers cannot manage
reviews.
- Related documentation:
-
watsonx.data intelligence
|
| watsonx.governance™ |
2.3.0 |
This release of watsonx.governance includes the following features:
- Model groups in Governance Console are synchronized with watsonx.governance
-
Model groups in Governance Console are now synchronized with AI use cases in watsonx.governance.
When you create or update a model group in Governance Console, your changes are synchronized with
the AI use case approaches in watsonx.governance. Similarly, when you create or update an approach
in watsonx.governance, your changes are synchronized with the model groups in Governance Console.
The synchronization of model groups and approaches helps to ensure consistent tracking and
governance of AI models.
You can turn this feature on or off in Configuration and settings.
- Migrate assets to inventories in watsonx.governance
-
You can now migrate AI use cases and external models to inventories by using a command-line tool.
Previously, these assets were stored in the platform assets catalog or other catalogs.
- Evaluate and govern prompt templates that use chat as the input
-
You can now evaluate and govern prompt templates that use chat as the input. To
get started, track the prompt template in a use case.
Note the following restrictions for prompt templates that use chat:
- You can evaluate the prompt templates only in a production deployment space.
- The evaluation of feedback data is not supported.
- The prompt templates are evaluated only on the payload data, which gets scored against the
deployment endpoint. Therefore, metrics that compute values based on the similarity between
reference texts and predictions, such as BLEU, ROUGE, and so on, are not evaluated.
- Only the GENAIQ monitor is supported.
- Use Guardrail Manager to manage AI guardrails
-
Configure the predefined guardrails, select actions, and manage them in an inventory through the
UI by using the new Guardrail Manager. You can also manage different guardrail configurations in
Guardrail Manager. Your guardrail configurations are stored in your inventory, and you can share the
inventory with others.
- Related documentation:
- watsonx.governance
|
| watsonx
Orchestrate |
7.0.0 |
This release of watsonx
Orchestrate
includes the following features:
- Manage user access and show or hide reasoning traces
- You can now set user access at the agent level to control whether the agent’s reasoning trace is
visible in chat. Reasoning traces show how an agent generates a response during a conversation. Your
users can use this visibility to debug or validate the agent’s logic and to view how an agent forms
a response, even if some domains choose to hide traces to keep the chat interface simple. For
details, see Using agents in Orchestrate Chat.
- Enter new input types and customization in forms
- You can now use forms in user activities to organize multiple types of interactions within a
structured layout. You can complete the following tasks:
- Add a File upload field to collect documents or images.
- Edit column labels in Multi choice and Single
choice fields.
- Use the Field option to display static information during a chat.
For details, see Forms in user activities.
- Use chat session context as input to improve agentic workflows
- By using chat session context, you can get improved responses, reduce repetitive inputs, and build
context-aware workflows. Chat session context provides context from the chat as input to the prompt.
You can enable the chat session context and get the last five conversations from the chat history.
These conversations are converted to form fields as input to the prompt. If the conversation mapping
fails, the user is prompted to provide input in the chat. For details, see Using the chat session context as input.
- Handle interruptions in voice conversations by using VAD settings
- You can define how your voice agent responds to user interruptions. By using the Voice Activity
Detection (VAD) settings, you can optimize the responsiveness and accuracy of speech recognition by
identifying active voice segments and ignoring silence or background noise. VAD reduces latency and
improves the user experience in voice-driven workflows. For details, see Configuring voice settings for agents.
- Configure ElevenLabs TTS for natural voice output
- You can configure ElevenLabs as a text-to-speech (TTS) provider in an agent builder to create
voice agents with natural, human-like speech. Add your own API key, choose voice settings like speed
and stability, and preview voices before you apply them. For details, see Configuring voice settings for agents.
- Related documentation:
- watsonx
Orchestrate
|