What's new in IBM Software Hub

See what new features and improvements are available in the latest release of IBM® Software Hub.

What's new in Version 5.3

IBM Software Hub Version 5.3 introduces support for IBM Instana® Observability, which helps you monitor your environment for performance and stability issues. This release also introduces enhancements to the IBM Software Hub AI assistant and simplifies automatically scaling your environment.

Version 5.3 also introduces support for Argo CD installation with a limited number of services.

The release includes significant updates to many services and introduces two new services:

  • IBM watsonx Code Assistant™ for Z Understand
  • IBM watsonx.data™ integration

For more information, review the following sections.

Platform enhancements

The following table lists the new features that were introduced in IBM Software Hub Version 5.3.

What's new | What does it mean for me?
Send monitoring data to Instana®

If you have an existing IBM Instana Observability installation, you can deploy an Instana agent on the cluster where IBM Software Hub is installed so that you can send data to IBM Instana Observability.

You can enable Instana metrics collection for the platform and services to see real-time data and root-cause analysis so that you can quickly act on performance and stability issues that affect IBM Software Hub.

For details, see Integrating IBM Software Hub with IBM Instana Observability.

Integrate with Git to promote assets

If you have multiple instances of IBM Software Hub, you can use a web-based Git repository to manage assets across instances. For example, you can use GitHub to promote assets from your development instance to your staging instance to ensure that the assets are the same in both environments.

You can use the following web-based Git repositories with IBM Software Hub:

  • GitHub
  • GitHub Enterprise
  • GitLab
  • GitLab Self-Managed


Manage automatic scaling from the web client

Previously, if you wanted to enable automatic scaling through the Horizontal Pod Autoscaler, you needed to use the cpd-cli. You can now enable and disable automatic scaling from the IBM Software Hub web client.

IBM Software Hub Premium Cartridge

The following features are available in IBM Software Hub Premium Cartridge:

What's new | What does it mean for me?
Bring your own applications to IBM Software Hub

Premium You can use local or remote physical locations to deploy your own applications on IBM Software Hub. Whether you want to integrate an existing application with IBM Software Hub or develop a new application, you can use custom applications to deploy it on your cluster.

You can deploy the following types of applications:

  • Git-based Dockerfile applications
  • OpenShift® template applications
  • Kubernetes resource applications
Easy-to-read health reports
Premium The IBM Software Hub AI assistant now uses tables to display monitoring data and health reports. The AI assistant returns a summary of the information and includes an option to see detailed information for various metrics, such as:
  • VPC and memory use by pod
  • Storage performance health check
  • Network performance health check
Dig deeper into monitoring data
GitOps with Argo CD

Premium You can use Argo CD to install and upgrade IBM Software Hub. Store your IBM Software Hub Helm charts in a Git repository so that you have a single source of truth for your IBM Software Hub deployments.

Argo CD is supported for a subset of the services on IBM Software Hub.

For details, see Argo CD installation documentation in GitHub.

Schedule ARM CPUs and GPUs on remote physical locations

Premium If you use remote physical locations to expand your IBM Software Hub deployment to remote clusters, you can now use the scheduling service to schedule ARM CPUs and ARM GPUs on the remote physical locations.

If the remote cluster has a cluster autoscaler, you can use the following options:
  • Use the --max_cpu_arm option to allow the scheduling service to schedule additional ARM-based CPUs if the workload exceeds the currently available ARM CPU capacity.
  • Use the --max_gpu_arm option to allow the scheduling service to schedule additional ARM-based GPUs if the workload exceeds the currently available ARM GPU capacity.

For more information, see Registering a remote physical location with an instance of IBM Software Hub.

Restriction: You cannot schedule ARM-based CPUs or GPUs on existing remote physical locations. If you want to schedule GPUs, you must either:

Service enhancements

The following table lists the new features that are introduced for existing services in IBM Software Hub Version 5.3:

Software | Version | What does it mean for me?
Cloud Pak for Data common core services 12.0.0
This release of common core services includes the following features:
Access more data with new connectors
  • Amazon Aurora for MySQL
  • Amazon Aurora for PostgreSQL
  • ClickHouse
  • Confluence
  • DataStax HCD
  • Iceberg Metastore
  • SAP Business Warehouse for DataStage
Write data to Microsoft Azure Data Lake Storage with the watsonx.data connection
You can now write data to the Microsoft Azure Data Lake Storage data source by using the watsonx.data connection.
Write data in the compressed gzip format to Amazon S3
You can now write data in the compressed gzip format to the Amazon S3 data source.

You can load data that is in this format into the Snowflake connection.

Access new versions of your data sources
You can now connect to updated versions of the following data sources to take advantage of the latest features and improvements:
  • PostgreSQL version 18

Version 12.0.0 of the common core services includes various fixes.

For details, see What's new and changed in the common core services.

If you install or upgrade a service that requires the common core services, the common core services are also installed or upgraded.

Cloud Pak for Data scheduling service 1.60.0
This release of the scheduling service includes the following features:
Schedule ARM CPUs and GPUs on remote physical locations

Premium If you use remote physical locations to expand your IBM Software Hub deployment to remote clusters, you can now use the scheduling service to schedule ARM CPUs and ARM GPUs on the remote physical locations.

If the remote cluster has a cluster autoscaler, you can use the following options:
  • Use the --max_cpu_arm option to allow the scheduling service to schedule additional ARM-based CPUs if the workload exceeds the currently available ARM CPU capacity.
  • Use the --max_gpu_arm option to allow the scheduling service to schedule additional ARM-based GPUs if the workload exceeds the currently available ARM GPU capacity.

For more information, see Registering a remote physical location with an instance of IBM Software Hub.

Restriction: You cannot schedule ARM-based CPUs or GPUs on existing remote physical locations. If you want to schedule GPUs, you must either:

Version 1.60.0 of the scheduling service includes various fixes.

For details, see What's new and changed in the scheduling service.

AI Factsheets 7.0.0

Version 7.0.0 of the AI Factsheets service includes various fixes.

For details, see What's new and changed in AI Factsheets.

Related documentation:
AI Factsheets
Analytics Engine powered by Apache Spark 5.3.0

Version 5.3.0 of the Analytics Engine powered by Apache Spark service includes various fixes.

Related documentation:
Analytics Engine powered by Apache Spark
Cognos Analytics 29.0.0
This release of Cognos Analytics includes the following features:
Manage fonts and style sheets with the Artifacts API
Version 2.3.0 of the Cognos Analytics Artifacts API adds two new artifact types: fonts and style sheets. Now you can use the API to list, upload, download, and delete font and style sheet files. For details, see Managing artifacts with Cognos Analytics APIs.
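The following Python sketch shows roughly what scripted use of the Artifacts API could look like. The endpoint paths, form field names, and bearer-token authentication are illustrative assumptions only, not the documented API contract; see Managing artifacts with Cognos Analytics APIs for the actual routes and request formats.

  import requests

  BASE_URL = "https://<cognos-host>/api/v1"      # placeholder host and base path
  HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credentials

  # List font artifacts (hypothetical route).
  fonts = requests.get(f"{BASE_URL}/artifacts/fonts", headers=HEADERS, timeout=30)
  print(fonts.json())

  # Upload a new style sheet file (hypothetical route and form field name).
  with open("corporate.css", "rb") as css:
      requests.post(
          f"{BASE_URL}/artifacts/stylesheets",
          headers=HEADERS,
          files={"file": css},
          timeout=30,
      )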
Updated software version for Cognos Analytics
This release of the service provides Version 12.1.1 of the Cognos Analytics software. For details, see Release 12.1.1 in the Cognos Analytics documentation.

Version 29.0.0 of the Cognos Analytics service includes various fixes.

For details, see What's new and changed in Cognos Analytics.

Related documentation:
Cognos Analytics
Cognos Dashboards 5.3.0
This release of Cognos Dashboards includes the following features:
Updated software version
This release of the service provides Version 12.1.1 of the Cognos Analytics dashboards software. For details, see Release 12.1.1 - Dashboards in the Cognos Analytics documentation.

Version 5.3.0 of the Cognos Dashboards service includes various fixes.

For details, see What's new and changed in Cognos Dashboards.

Related documentation:
Cognos Dashboards
Data Gate 9.0.0
This release of Data Gate includes the following features:
Remote Db2 support for Power
You can now connect an additional type of target database to Data Gate: a remote Db2 target database that is running on a Power computer (PPC64LE architecture). For details, see Connecting to a remote Db2 instance.
Certificates signed by third-party CA
Data Gate now supports SSL-encrypted connections to the Db2 or Db2 Warehouse target database that are secured with certificates signed by an external certificate authority (CA).

Version 9.0.0 of the Data Gate service includes various fixes.

For details, see What's new and changed in Data Gate.

Related documentation:
Data Gate
Data Privacy 5.3.0
This release of Data Privacy includes the following features:
Duplicate data protection rules
You can now duplicate existing data protection rules or edit the rule details to create new rules.
Activate and deactivate data protection rules in the UI
You no longer have to delete rules that you don't want to use, and then recreate them later if you need them again. Now, you can deactivate and activate data protection rules and revoke the rules without deleting them. See and manage the status of any rule on the Rules page.

Version 5.3.0 of the Data Privacy service includes various fixes.

For details, see What's new and changed in Data Privacy.

Related documentation:
Data Privacy
Data Product Hub 5.3.0
This release of Data Product Hub includes the following features:
Create and access data contracts in Open Data Contract Standard v3

Streamline your management of data contracts by using the Open Data Contract Standard v3 (ODCS v3) format in Data Product Hub.

  • Producers: You can now create data contracts in ODCS v3 format. Create contracts from scratch or by using a predefined template.

  • Consumers: You can access and review data contracts directly in Data Product Hub or download them in YAML format, along with any associated test status information.

Deliver data products from Azure Databricks

You can now subscribe to a data product that is created in Azure Databricks by using the Access in Azure Databricks delivery method. Consumers can directly access Azure Databricks resources by using Data Product Hub. After delivery of the data products, consumers see details on how to access the specific resources in Azure Databricks.

Deliver data assets to a project by using the access in watsonx.data delivery method

You can now choose to import data product assets to a project by using the access in watsonx.data delivery method.

Manage and view data product reviews

Consumers can now create, edit, and delete reviews of data products. Producers cannot manage reviews.

Version 5.3.0 of the Data Product Hub service includes various fixes.

For details, see What's new and changed in Data Product Hub.

Related documentation:
Data Product Hub
Data Refinery 12.0.0

Version 12.0.0 of the Data Refinery service includes various fixes.

For details, see What's new and changed in Data Refinery.

Related documentation:
Data Refinery
Data Replication 5.3.0
This release of Data Replication includes the following features:
Replicate range-partitioned Db2 tables to supported target data stores
You can now use the Data Replication service to replicate Db2 tables that are partitioned based on the range of values in one or more columns. These types of tables are also known as range-partitioned tables. You can replicate range-partitioned tables to Db2 on Cloud, Db2 Warehouse on Cloud, IBM watsonx.data, and Apache Kafka target data stores.
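For reference, a range-partitioned table in Db2 is defined with a PARTITION BY RANGE clause. The following sketch creates one through the ibm_db Python driver; the connection values and the table definition are placeholders, not part of the Data Replication feature itself.

  import ibm_db

  # Placeholder connection string for the source Db2 database.
  conn = ibm_db.connect(
      "DATABASE=SAMPLE;HOSTNAME=<host>;PORT=50000;PROTOCOL=TCPIP;"
      "UID=<user>;PWD=<password>;",
      "", "",
  )

  # A table partitioned into quarterly ranges of sale_date. Tables like
  # this can now be replicated to the supported target data stores.
  ibm_db.exec_immediate(conn, """
      CREATE TABLE sales (
          id INT NOT NULL,
          sale_date DATE NOT NULL,
          amount DECIMAL(12,2)
      )
      PARTITION BY RANGE (sale_date)
      (STARTING FROM ('2024-01-01') ENDING ('2024-12-31') EVERY (3 MONTHS))
  """)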

Version 5.3.0 of the Data Replication service includes various fixes.

For details, see What's new and changed in Data Replication.

Related documentation:
Data Replication
DataStage 5.3.0
This release of DataStage includes the following features:
Connect to Microsoft Fabric Warehouse

You can now connect to Microsoft Fabric Warehouse by using the new connector in DataStage, which enables seamless integration with your data workflows.

Use IBM Instana to validate your workload runs

You can now automatically discover, monitor, and visualize components of your cluster in real time. With IBM Instana observability features, you can efficiently detect and address performance problems, minimizing the time that you spend on troubleshooting.

Version 5.3.0 of the DataStage service includes various fixes.

For details, see What's new and changed in DataStage.

Related documentation:
DataStage
Data Virtualization 3.3.0
This release of Data Virtualization includes the following features:
Enable high concurrency and greater scalability of query processing by using Data Virtualization agents
Data Virtualization agents now run in their own dedicated pods instead of within the Data Virtualization primary head pod, for better system scalability.
  • For new installations, the number of Data Virtualization agent pods is automatically provisioned based on the sizing option that you choose in the web client.

  • If your existing Data Virtualization instance uses custom sizing, then upgrading your Data Virtualization instance automatically adds five agent pods, each requiring two CPUs. The increased resource usage is typically balanced if your custom cluster was deployed with sufficient resources to accommodate the extra load without dropping below a stable minimum. However, if you have custom sizing and limited resources, then you might experience a net increase in resource usage.

    To customize the number of Data Virtualization agent pods, or adjust the CPU usage and memory settings, see Customizing the pod size and resource usage of Data Virtualization agents.

Automatically apply personal credentials setting when importing data sources

Personal credentials are now enabled by default in Data Virtualization. When you add a data source on the platform connection with personal credentials turned on, the same setting is automatically applied when the data source is imported into Data Virtualization.

To successfully access, virtualize, and query the data source through Data Virtualization, each user must configure their own credentials on the platform connection.

Migrate Data Virtualization assets to and from your Git repository

You can now export and import your Data Virtualization assets across different environments (for example, from development to QA or production) from your Git repository by using Data Virtualization APIs. By using Git, Admin users can quickly synchronize assets, like tables, nicknames, and views, by promoting the Data Virtualization objects from a Data Virtualization instance to a Git branch, and by pulling updates from Git back into Data Virtualization.

You can migrate the following objects with Git:
  • Nicknames (excluding those with personal credentials)
  • Schemas
  • Tables (excluding those with personal credentials)
  • Views
  • Authorization statements (GRANTs)
  • Statistics

See Migrating Data Virtualization objects by using Git.

Use new views to simplify troubleshooting and admin tasks
  • You can now troubleshoot connection failures by using automated diagnostic tests. When a data source connection fails, Data Virtualization automatically runs a series of connectivity tests (including ping, OpenSSL, netcat, and traceroute) to identify the root cause. The results are logged in ConnectivityTest.log on each qpagent, along with a unique DIAGID that is included in the error message. You can use the DIAGID with the LISTCONNECTIVITYTESTWARNINGS view to retrieve detailed logs, as shown in the sketch after this list. The DIAGID is cleared when the data source connection becomes available again.

  • You can now display the list of columns of the tables in an RDBMS source by using the LISTCOLUMNS view.

  • You can now set configuration properties specific to Data Virtualization and Federation directly for your Data Virtualization connection by using the SETCONNECTIONCONFIGPROPERTY and SETCONNECTIONCONFIGPROPERTIES stored procedures. Additionally, you can now set Federation-specific options for existing SETRDBCX procedures.

  • See the full list of Stored procedures and Views.
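As a rough sketch of how you might query these views from Python, the following example retrieves the diagnostic logs for a failed connection by DIAGID. The DVSYS schema and the DIAGID column name are assumptions based on common Data Virtualization conventions; confirm both in the Stored procedures and Views reference.

  import ibm_db

  # Placeholder connection string for the Data Virtualization instance.
  conn = ibm_db.connect(
      "DATABASE=BIGSQL;HOSTNAME=<dv-host>;PORT=<port>;PROTOCOL=TCPIP;"
      "UID=<user>;PWD=<password>;SECURITY=SSL;",
      "", "",
  )

  # Look up connectivity test warnings by the DIAGID from the error message.
  # The schema and column name are assumptions; check the views reference.
  stmt = ibm_db.prepare(
      conn, "SELECT * FROM DVSYS.LISTCONNECTIVITYTESTWARNINGS WHERE DIAGID = ?"
  )
  ibm_db.execute(stmt, ("<diagid-from-error-message>",))
  row = ibm_db.fetch_assoc(stmt)
  while row:
      print(row)
      row = ibm_db.fetch_assoc(stmt)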
Grant collaborators the new INSPECT data source privilege to view source metadata

You can now grant the INSPECT privilege to users or to the DV_METADATA_READER role to enable those users to import lineage metadata with MANTA Automated Data Lineage.

To get started in the web client, go to the Data sources page, select the Manage access setting on your data source, and then select the grantees. You can also grant the INSPECT privilege to the DV_METADATA_READER role by selecting Grant INSPECT privilege to the DV_METADATA_READER role. In the INSPECT column, you can grant or revoke the INSPECT privilege for the grantee.

See INSPECT privilege in Data source connection access restrictions in Data Virtualization and Configuring Data Virtualization connections for lineage imports.

Connect to Apache Cassandra and watsonx.data Presto data sources
You can now connect to Apache Cassandra and watsonx.data Presto from Data Virtualization.

Version 3.3.0 of the Data Virtualization service includes various fixes.

For details, see What's new and changed in Data Virtualization.

Related documentation:
Data Virtualization
Db2 5.3.0
This release of Db2 includes the following features:
Deploy Db2 database with non-root deployment (Restricted-v2 SCC)

Now, when deploying a new Db2 database in your IBM Software Hub cluster, you can enable non-root deployment by selecting the checkbox Deploy Db2 with non-root deployment on the Advanced Configurations page. Selecting the Restricted-v2 option uses Red Hat® OpenShift’s default restricted-v2 Security Context Constraint (SCC) to meet strict security requirements while maintaining full functionality.

This SCC ensures that:
  • Workloads run with non-root privileges.
  • Use of sudo or elevated permissions is not allowed.

For more information on permission levels and requirements, see Deploying Db2 with non-root access in a restricted-v2 SCC on IBM Software Hub.

Version 5.3.0 of the Db2 service includes various fixes.

For details, see What's new and changed in Db2.

Related documentation:
Db2
Db2 Big SQL 8.3.0

Version 8.3.0 of the Db2 Big SQL service includes various fixes.

For details, see What's new and changed in Db2 Big SQL.

Related documentation:
Db2 Big SQL
Db2 Data Management Console 5.3.0

Version 5.3.0 of the Db2 Data Management Console service includes various fixes.

For details, see What's new and changed in Db2 Data Management Console.

Related documentation:
Db2 Data Management Console
Db2 Warehouse 5.3.0
This release of Db2 Warehouse includes the following features:
Deploy Db2 Warehouse database with non-root deployment (Restricted-v2 SCC)

Now, when deploying a new Db2 Warehouse database in your IBM Software Hub cluster, you can enable non-root deployment by selecting the checkbox Deploy Db2 Warehouse with non-root deployment on the Advanced Configurations page. Selecting the Restricted-v2 option uses Red Hat OpenShift’s default restricted-v2 Security Context Constraint (SCC) to meet strict security requirements while maintaining full functionality.

This SCC ensures that:
  • Workloads run with non-root privileges.
  • Use of sudo or elevated permissions is not allowed.

For more information on permission levels and requirements, see Deploying Db2 Warehouse with non-root access in a restricted-v2 SCC on IBM Software Hub.

Query external data with Datalake tables

You can now use Datalake tables to work with data stored in open formats like PARQUET and ORC directly from Db2 Warehouse, without moving the data into the database.

With Datalake tables, you can do the following:

  • Query external data: Define a Datalake table in Db2 Warehouse and use it in complex queries with other Db2 Warehouse tables, as shown in the sketch after this list.
  • Export Db2 data to object storage while keeping it queryable by using:
    • INSERT
    • SELECT INTO
    • CREATE DATALAKE TABLE AS SELECT
  • Import data from a Datalake table into a table in the database. You can perform operations such as casts, joins, and dropping columns to manipulate data during importing.
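The following sketch, run through the ibm_db Python driver, illustrates the pattern. The storage alias, bucket path, column definitions, and the joined users table are placeholders; the exact LOCATION syntax for your object storage is described in the Db2 Warehouse documentation.

  import ibm_db

  # Placeholder connection string for the Db2 Warehouse database.
  conn = ibm_db.connect(
      "DATABASE=BLUDB;HOSTNAME=<host>;PORT=50001;PROTOCOL=TCPIP;"
      "UID=<user>;PWD=<password>;SECURITY=SSL;",
      "", "",
  )

  # Define a Datalake table over existing Parquet files in object storage.
  # The storage alias and path are placeholders.
  ibm_db.exec_immediate(conn, """
      CREATE DATALAKE TABLE clicks_ext (
          user_id BIGINT,
          url VARCHAR(2048),
          ts TIMESTAMP
      )
      STORED AS PARQUET
      LOCATION 'DB2REMOTE://<storage-alias>//<bucket>/clicks/'
  """)

  # Join the external data with a regular Db2 Warehouse table
  # (the users table is a placeholder) and fetch one result row.
  stmt = ibm_db.exec_immediate(conn, """
      SELECT u.name, COUNT(*) AS clicks
      FROM clicks_ext c JOIN users u ON u.id = c.user_id
      GROUP BY u.name
  """)
  print(ibm_db.fetch_assoc(stmt))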

Version 5.3.0 of the Db2 Warehouse service includes various fixes.

For details, see What's new and changed in Db2 Warehouse.

Related documentation:
Db2 Warehouse
Decision Optimization 12.0.0
This release of Decision Optimization includes the following features:
Compare models in Decision Optimization experiments
You can now compare models from different scenarios in a Decision Optimization experiment and compare the log files when the models are solved. When you compare models this way, you can see the different scenarios side by side.

Version 12.0.0 of the Decision Optimization service includes various fixes and updates.

For details, see What's new and changed in Decision Optimization.

Related documentation:
Decision Optimization
EDB Postgres 13.22, 14.19, 15.14, 16.10, 17.6

This release of the EDB Postgres service includes various fixes.

For details, see What's new and changed in EDB Postgres.

Related documentation:
EDB Postgres
Execution Engine for Apache Hadoop 5.3.0

Version 5.3.0 of the Execution Engine for Apache Hadoop service includes various fixes.

Related documentation:
Execution Engine for Apache Hadoop
IBM Knowledge Catalog 5.3.0
This release of IBM Knowledge Catalog includes the following features:
Create SQL-based assets and data quality rules with text instead of SQL
Now you can describe the data asset or the data quality rule that you want to create in plain English and convert this text query into an SQL query. You can then run the generated query to create the asset or the rule.

Tech preview This is a technology preview and is not supported for use in production environments.

Disable certain generative AI capabilities for selected projects
Even if the product is installed with generative AI capabilities, you might not want to use these capabilities in all of your projects. You can now disable these capabilities per project. In projects where the capabilities are disabled, you can't work with natural language queries to create SQL-based assets and data quality rules. In addition, LLM-based name, description, or term generation and term assignment in metadata enrichment are disabled.
Define catalog-specific custom properties for assets
You can now restrict custom properties for assets to a specific catalog. By using catalog-specific custom properties, you can more effectively display values that pertain only to selected domains and ensure that the right information is available to the right users.
To list custom properties that are restricted by a given catalog, use the sort by scope option and scroll down to the items for the catalog that you're interested in.
Manage columns for catalogs
You can now select which columns to display in the asset listing grid by clicking Manage columns on the catalog page. Select your columns, reorder them if necessary, and save your preferences to keep the information that is most relevant for you readily available. For example, you can modify the view to show you a list of assets with the display name, owners, and date added columns only.
Optimize term assignment
With the new tuning options for term assignment, now you can influence the weighting of term suggestions for better precision or recall.
Import primary keys and foreign keys and visualize them in Relationship Explorer
Import primary keys and foreign keys with metadata import instead of metadata enrichment. After import, you can access the associated relationships through the right-side panel and Relationship Explorer.
Versioning of governance artifacts
Track historical changes for the artifacts, schedule new versions to be published in the future, and restore or archive previous versions with the new Versions panel.

Version 5.3.0 of the IBM Knowledge Catalog service includes various fixes.

For details, see What's new and changed in IBM Knowledge Catalog.

Related documentation:
IBM Knowledge Catalog
IBM Manta Data Lineage 5.3.0
This release of IBM Manta Data Lineage includes the following features:
Export data lineage to Collibra
You can now export data lineage and view it in Collibra. If you transfer lineage information into the Collibra data governance platform, you can see a comprehensive view of your data flows and dependencies within your governance framework.
Starting parents are introduced in the data lineage graph
When you select an asset to be a starting asset in the lineage, all assets that are higher in the hierarchy are marked as starting parents. Also, all child assets of the selected asset are marked as starting assets. This distinction clarifies which assets are selected as the starting points for the lineage.

Version 5.3.0 of the IBM Manta Data Lineage service includes various fixes.

For details, see What's new and changed in IBM Manta Data Lineage.

Related documentation:
IBM Manta Data Lineage
IBM Master Data Management 4.10.28
This release of IBM Master Data Management includes the following features:
IBM Match 360 is now known as IBM Master Data Management

The IBM Match 360 service is renamed to IBM Master Data Management.

View historical data for entities, records, and relationships in your master data

You can now view the history of each entity, record, and relationship in your master data and compare historical attribute values to the current version. Select any past update to view the attribute values at that point in time and also see whether each update was initiated by a user, source system, or linkage action. You can use this capability to help with audit tracking and analysis of data changes over time.

A data engineer can configure whether the service keeps historical data. Storing history details increases the storage requirements of your database.

Configure potential match workflows for each entity type

You can now configure a potential match workflow for each entity type in your data model. Potential match workflows identify matching issues within your data, then create and assign tasks for data stewards to resolve them. Potential matches are now also searchable by source to help data stewards to identify and resolve potential duplicate records and entities. Configure all your task workflows from the Task types page.

Edit any attribute of a master data entity, including composite values

You can now edit any of an entity's attribute values, even if the original value was derived from the entity's member records by applying attribute composition rules. After a data steward overrides a composite value, the user-defined value will continue to be maintained even if the composition of the entity changes.

This capability is available only for entity types that a data engineer has configured to enable entity persistence.

View more information about group types and hierarchy types

When you open a hierarchy type or group type from the data types pages, you can now see more details at a glance, including who created the type, when they created it, and a list of group or hierarchy instances that are based on the selected type. You can also navigate directly from the data types page to the Workspace view to manage each hierarchy and group instance, and its members.

Version 4.10.28 of the IBM Master Data Management service includes various fixes.

For details, see What's new and changed in IBM Master Data Management.

Related documentation:
IBM Master Data Management
IBM StreamSets 6.3.0

Version 6.3.0 of the IBM StreamSets service includes various fixes.

For details, see What's new and changed in IBM StreamSets.

Related documentation:
IBM StreamSets
Informix 10.0.0

Version 10.0.0 of the Informix service includes various fixes.

For details, see Fix list for Informix Server 15.0.0.1 release.

Related documentation:
Informix
MANTA Automated Data Lineage 42.14.1

Version 42.14.1 of the MANTA Automated Data Lineage service includes various fixes.

For details, see What's new and changed in MANTA Automated Data Lineage.

Related documentation:
MANTA Automated Data Lineage
MongoDB 7.0.18-ent, 8.0.6-ent

This release of the MongoDB service includes various fixes.

For details, see What's new and changed in MongoDB.

Related documentation:
MongoDB
OpenPages 9.6.0

Version 9.6.0 of the OpenPages service includes various fixes.

Related documentation:
OpenPages
Orchestration Pipelines 5.3.0
This release of Orchestration Pipelines includes the following features:
See the parameters of utility scripts in the Expression Builder
When you use the Expression Builder, you can now see the parameters (inputs) that the utility scripts require. This enhancement shows you which arguments you need to provide, which leads to fewer errors.

The utility scripts run using a CEL (Common Expression Language) expression.

New default embedded runtime
You can now use a new default embedded runtime, removing the dependency on the Tekton runtime. This update eliminates compatibility issues with OpenShift Pipelines, ensuring stable co-existence. The new runtime delivers faster performance and greater scalability for your pipeline workloads.

Version 5.3.0 of the Orchestration Pipelines service includes various fixes.

For details, see What's new and changed in Orchestration Pipelines.

Related documentation:
Orchestration Pipelines
Planning Analytics 5.3.0
This release of Planning Analytics includes the following features:
Updated versions of Planning Analytics software
This release of the service provides the following software versions:
  • Planning Analytics Workspace Version 3.1.2

    For details, see 3.1.2 - What's new in the Planning Analytics Workspace documentation.

  • Planning Analytics Spreadsheet Services Version 3.1.2

    For details, see 3.1.2 - Feature updates in the TM1 Web documentation.

  • Planning Analytics for Microsoft Excel Version 3.1.2

    For details, see 3.1.2 - Feature updates in the Planning Analytics for Microsoft Excel documentation.

Version 5.3.0 of the Planning Analytics service includes various fixes.

Related documentation:
Planning Analytics
Product Master 9.0.0
This release of Product Master includes the following features:
Categorize multiple items at once
You can now categorize more than one item at a time by using the bulk categorization capability on the Data explorer, Search, or Free text search pages.
Create rules on attribute collections 
You can now create rules on attribute collections, and across different specifications.
Generate report with only selected data
In the Generate report feature, you can now include only the selected attributes, rows, and columns in the exported Microsoft Excel worksheet.
Suspect Duplicate Processing with Data survivorship 
Now, when you run Suspect Duplicate Processing, the Data survivorship rules improve how IBM Product Master matches and merges duplicate records. You can also match data that is still being processed.
Enhanced Data completeness dashboard
You can now use the Data completeness dashboard to check the completeness of your data based on categories and items within workflows.
Dashboard builder enhancements
You can now use the new Criteria console on the Dashboards > Charts tab to view the mapping of all criteria to their associated charts.
You can now drag and drop charts, click Select chart data to view details, and also export dashboard data from the Dashboard tab.

Version 9.0.0 of the Product Master service includes various fixes.

For details, see What's new and changed in Product Master.

Related documentation:
Product Master
RStudio® Server Runtimes 12.0.0

Version 12.0.0 of the RStudio Server Runtimes service includes various fixes.

Related documentation:
RStudio Server Runtimes
SPSS Modeler 12.0.0
This release of SPSS Modeler includes the following features:
Override storage type in a Data Asset node
You can now easily override the inferences that the Data Asset node makes about storage types when it imports data. The Data Asset node reads a sample of the data that it imports to infer what type of data is in a table field, such as an integer, date, or string. Previously, changing the storage type after this inference was difficult. Now you can override the storage type for a field in the settings for the Data Asset node.
Analyze Chinese text data in SPSS Modeler with Text Analytics
You can now use the Text Analytics nodes in SPSS Modeler to analyze text data that is written in Simplified Chinese. Text Analytics nodes use advanced linguistic technologies and text mining techniques to analyze text data and extract concepts, patterns, and categories.
Save visualizations to the project
When you create an SPSS Modeler job, you can now choose to save any visualizations that are generated to the project. For example, you can save the graphical outputs from a Chart node, Plot node, or Evaluation node. You can view the files in the Assets tab and download them.

Version 12.0.0 of the SPSS Modeler service includes various fixes.

For details, see What's new and changed in SPSS Modeler.

Related documentation:
SPSS Modeler
Synthetic Data Generator 12.0.0
This release of Synthetic Data Generator includes the following features:
Create jobs for generating unstructured synthetic data by using the user interface
In addition to using the REST API, you can now use the user interface to create jobs for generating unstructured synthetic data.
Use multi-table nodes to generate synthetic data with referential integrity
You can now create synthetic data by using several data tables from a database connection instead of just one data table. The multi-table nodes use referential integrity to preserve parent–child dependencies across multiple tables, and they also preserve distributions and correlations within the same table. When you generate synthetic data by using these nodes, the synthetic data mimics all these dependencies, which allows you to make synthetic data that more closely mimics the entire dataset.
Scripting with Synthetic Data Generator
You can now programmatically make changes to nodes and flows in Synthetic Data Generator by using scripts written in Python or Jython. Use either the Scripting pane or embedded API to run your scripts. You can use scripting to automate tasks that are highly repetitive or to quickly change all of the nodes in a flow.
Scripting with parameters
You can also use parameters in the Python scripts that you write for Synthetic Data Generator. You can use parameters as a way of passing values at runtime rather than hard coding them directly in a script. This flexibility makes it easier to adjust settings for a Synthetic Data Generator flow while reusing the same scripts.
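Synthetic Data Generator flows are built on the SPSS Modeler flow engine, so a script might follow the classic SPSS Modeler scripting model. The following Jython-style sketch is an assumption about how that model carries over: the node type name, property name, and parameter call are illustrative, so verify them against the Synthetic Data Generator scripting documentation.

  # Assumes the classic SPSS Modeler scripting entry point applies.
  import modeler.api

  stream = modeler.script.stream()  # the current flow

  # Find a node by type and adjust a property; the type and property
  # names here are hypothetical examples.
  node = stream.findByType("import", None)
  node.setPropertyValue("full_filename", "/project_data/seed.csv")

  # Read a runtime parameter instead of hard coding a value (call assumed).
  row_count = stream.getParameterValue("row_count")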
Override storage type in an Import node
You can now override the inferences that the Import node makes about storage types when it imports data. The Import node reads a sample of the data that it imports to infer what type of data is in a table field, such as an integer, date, or string. Previously, changing the storage type after this inference was difficult. Now you can easily override the storage type for a field in the settings for the Import node.
Save evaluation metrics to the project
When you create a Synthetic Data Generator job, you can now choose to save the data that is generated when an Evaluate node runs. You can then view the data in the Assets tab and download it.

Version 12.0.0 of the Synthetic Data Generator service includes various fixes.

For details, see What's new and changed in Synthetic Data Generator.

Related documentation:
Synthetic Data Generator
Voice Gateway 1.12.0

Version 1.12.0 of the Voice Gateway service includes various fixes.

Related documentation:
Voice Gateway
Watson Discovery 5.3.0

Version 5.3.0 of the Watson Discovery service includes various fixes.

For details, see What's new and changed in Watson Discovery.

Related documentation:
Watson Discovery
Watson Machine Learning 5.3.0
This release of Watson Machine Learning includes the following features:
Deploy your Python code, R code, and TensorFlow models by using the latest software specifications

For R scripts, you can now use the latest software specification that is based on R 4.4 to deploy your code.

You can also deploy Python functions, Python scripts, and TensorFlow models with the new runtime-25.1-py3.12-cuda version of the 25.1 software specification and use GPUs for inferencing.

Optimize AutoAI for Machine Learning experiments across hyperparameter tuning for score and runtime

You can now optimize hyperparameter tuning for the highest score and shortest runtime when you run AutoAI for Machine Learning experiments. Previously, this optimization was available only during algorithm selection. Now, you can apply score and runtime optimization during hyperparameter tuning or across both phases for a balanced approach, helping you build high-performing models faster while reducing training time.

Version 5.3.0 of the Watson Machine Learning service includes various fixes.

For details, see What's new and changed in Watson Machine Learning.

Related documentation:
Watson Machine Learning
Watson OpenScale 5.3.0

Version 5.3.0 of the Watson OpenScale service includes various fixes.

Related documentation:
Watson OpenScale
Watson Speech services 5.3.0
This release of Watson Speech services includes the following features:
Spoken language identification
Language Identification (LID) now automatically detects spoken languages in audio streams. The model continuously analyzes incoming audio and returns the identified language when it reaches a certain confidence threshold. The model updates the detected language throughout the session and enables early session termination after the initial response. The client applications can switch languages for downstream processing, making it ideal for multilingual environments such as call centers. For details, see Spoken language identification.
Speech transcript enrichment
Speech transcript enrichment improves the readability and usability of raw Automatic Speech Recognition (ASR) transcripts. This post-processing service automatically adds punctuation and applies intelligent capitalization to enhance the structure and clarity of spoken content. For details, see Speech transcript enrichment. For details on how to configure the enrichment features, see Configure Speech transcript enrichment.
Natural voices
You can now use the following natural voices with Watson Speech services:
  • US English (en-US) Ellie Natural voice
  • US English (en-US) Emma Natural voice
  • US English (en-US) Ethan Natural voice
  • US English (en-US) Jackson Natural voice
  • US English (en-US) Victoria Natural voice
  • UK English (en-GB) Chloe Natural voice
  • UK English (en-GB) George Natural voice
  • CA English (en-CA) Hannah Natural voice
  • Brazilian Portuguese (pt-BR) Lucas Natural voice
  • Brazilian Portuguese (pt-BR) Camila Natural voice
For details, see Natural voices.

Version 5.3.0 of the Watson Speech to Text service includes various fixes.

For details, see What's new and changed in Watson Speech to Text.

Related documentation:
Watson Speech services
Watson Studio 12.0.0
This release of Watson Studio includes the following features:
Secure project by using permanent admins

You can now set yourself or other users as permanent admins to keep projects secure and prevent orphaned projects. If you have permission to manage projects, you become a permanent admin automatically when you create or join a project. Permanent admins have more permissions than regular admins. They can maintain full control of projects and can manage other permanent admins.

Easily maintain clear project documentation for projects with deprecated Git integration

You can now create and manage project documentation directly in the new Documentation editor for projects that have deprecated Git integration. By using the Documentation editor, you can easily organize details, collaborate with your team, and maintain clear records. Access the Documentation editor from the project Overview page.

Version 12.0.0 of the Watson Studio service includes various fixes.

For details, see What's new and changed in Watson Studio.

Related documentation:
Watson Studio
Watson Studio Runtimes 12.0.0
This release of Watson Studio Runtimes includes the following features:
Run Python and R code by using environments that are based on Spark 4 engine

You can now run your Python and R code by using the latest environments that are based on Spark 4 engine.

Run R 4.4 code in Jupyter Notebooks

You can now run R 4.4 code in Jupyter Notebooks by using Runtime 25.1 on R 4.4.

Version 12.0.0 of the Watson Studio Runtimes service includes various fixes.

For details, see What's new and changed in Watson Studio Runtimes.

Related documentation:
Watson Studio Runtimes
watsonx.ai™ 12.0.0
This release of watsonx.ai includes the following features:
New foundation models in watsonx.ai

You can now use the following foundation models for inferencing from the Prompt Lab and the API:

  • granite-4-h-tiny
  • granite-docling-258M
  • ibm-defense-4-0-micro

For details, see Foundation models in watsonx.ai.
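For example, a minimal inference call with the ibm-watsonx-ai Python SDK might look like the following sketch. The host, credentials, project ID, and version string are placeholders, and you should confirm the exact model ID for your deployment in Foundation models in watsonx.ai.

  from ibm_watsonx_ai import Credentials
  from ibm_watsonx_ai.foundation_models import ModelInference

  # Placeholder credentials for an IBM Software Hub deployment.
  credentials = Credentials(
      url="https://<software-hub-host>",
      username="<user>",
      api_key="<api-key>",
      instance_id="openshift",
      version="5.3",
  )

  # The model ID is shown as listed above; confirm the identifier
  # that your deployment uses.
  model = ModelInference(
      model_id="granite-4-h-tiny",
      credentials=credentials,
      project_id="<project-id>",
  )

  print(model.generate_text(prompt="Summarize the new features in this release."))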

Access models across multiple providers with the model gateway

You can now securely configure and interact with foundation models from multiple providers with the model gateway by using the API. In addition, you can manage the model gateway with integrated load-balancing, access policies, and rate limits.

Capture semantic meaning and refine retrieved results by using custom embedding and reranking models

You can now add custom embedding and reranking models to watsonx.ai and use them to capture semantic meaning and refine retrieved results.

For details, see Registering custom foundation models for global deployment.

All custom foundation models now use the vLLM inferencing server

All custom foundation models now use the vLLM inferencing server. If your deployed models use the TGIS inferencing server, you might have to migrate them.

New text classification API in watsonx.ai
You can now use the new text classification method in the watsonx.ai REST API to classify your document before you extract textual content to use in a RAG solution.

With the classification API, you can classify your document into one of several supported common document types without running a longer extraction task. By pre-processing the document, you can then customize your text extraction request to efficiently extract relevant details from your document.
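A request to the new method might follow the shape of the sketch below. The endpoint path, query parameters, and body fields shown here are assumptions for illustration; check the watsonx.ai REST API reference for the actual contract.

  import requests

  HOST = "https://<software-hub-host>"  # placeholder
  HEADERS = {
      "Authorization": "Bearer <token>",  # placeholder token
      "Content-Type": "application/json",
  }

  # Hypothetical endpoint and body: classify a document before extraction.
  response = requests.post(
      f"{HOST}/ml/v1/text/classification?version=2025-11-01",
      headers=HEADERS,
      json={
          "project_id": "<project-id>",
          "document_reference": {
              "connection_id": "<connection-id>",
              "path": "invoices/scan-001.pdf",
          },
      },
      timeout=60,
  )
  print(response.json())  # for example, the detected document type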

Use the user interface for Synthetic Data Generator to generate unstructured synthetic data
The user interface for creating jobs to generate unstructured synthetic data is now generally available. The user interface in Synthetic Data Generator makes creating and running jobs easier by organizing all the settings and seed document requirements into simple options and fields.
Improvements for AutoAI for RAG experiments
You can now use the following features for your AutoAI for RAG experiments:
  • Use semantic chunking in AutoAI for RAG experiments

    You can now use the semantic chunking method to break down documents in an AutoAI for RAG experiment. Semantic chunking splits documents based on meaning, making it well-suited for complex or unstructured data.

  • Use chat API models in AutoAI for RAG experiments

    You can now use chat API models in AutoAI for RAG experiments, instead of prompt template models. These models must have chat capabilities to work in AutoAI for RAG experiments.

  • Auto-deploy top pattern in AutoAI for RAG experiments

    You can now enable automatic deployment of the top-performing pattern after an AutoAI for RAG experiment completes. You can turn on auto-deployment when you set up the experiment. Auto-deployment helps reduce manual steps and further automates the experiment workflow.

  • Use multiple vector indexes in AutoAI for RAG experiments

    You can now select up to 20 vector indexes for your document collection in an AutoAI for RAG experiment. During experiment setup, when you add document and evaluation sources, choose Knowledge bases and then select up to 20 connections. You can then define details for each connection, such as index name and embedding models. Using multiple indexes gives you more flexibility and can improve the quality and performance of your experiments.

  • Use SQL database schemas in AutoAI for RAG experiments

    You can now choose an SQL database schema as a knowledge base in an AutoAI for RAG experiment. You can use SQL connections such as Db2, PostgreSQL, and MySQL. When using SQL sources, chunking settings are disabled, and only answer correctness metrics are available for optimization. With an SQL RAG, structured data can be retrieved directly from the relational database, which can improve answer accuracy and relevance when compared with document-based sources.

Version 12.0.0 of the watsonx.ai service includes various fixes.

For details, see What's new and changed in watsonx.ai.

Related documentation:
watsonx.ai
watsonx Assistant 5.3.0

Version 5.3.0 of the watsonx Assistant service includes various security fixes.

Related documentation:
watsonx Assistant
watsonx™ BI 3.3.0
This release of watsonx BI includes the following features:
Manage new roles for watsonx BI users through IBM Software Hub Access control
As a watsonx BI user, you can now use IBM Software Hub Access control to assign roles to your team that are specific to watsonx BI. You can assign the following roles to users or groups: Business Intelligence Administrator, Business Intelligence Analyst, or Business Intelligence Consumer.
For more information, see Roles and permissions for watsonx BI.
Configure flat file service to upload files to watsonx BI
You can configure flat file service to upload files and use them as a source of data in Conversations.
For more information, see Configuring flat file service.
Integrate with IBM watsonx.data Premium
You can now easily access watsonx BI from a tile in watsonx.data Premium. By combining watsonx.data Premium and watsonx BI, you can use business intelligence tools to get deeper insights into your structured and unstructured data.
For more information, see IBM watsonx.data Premium.

Version 3.3.0 of the watsonx BI service includes various security fixes.

Related documentation:
watsonx BI
watsonx Code Assistant™ 5.3.0

Version 5.3.0 of the watsonx Code Assistant service includes various security fixes.

For details, see What's new and changed in watsonx Code Assistant.

Related documentation:
watsonx Code Assistant
watsonx Code Assistant for Red Hat Ansible® Lightspeed 5.3.0

Version 5.3.0 of the watsonx Code Assistant for Red Hat Ansible Lightspeed service includes various fixes.

For details, see What's new and changed in watsonx Code Assistant for Red Hat Ansible Lightspeed.

Related documentation:
watsonx Code Assistant for Red Hat Ansible Lightspeed
watsonx Code Assistant for Z 5.3.0
This release of watsonx Code Assistant for Z includes the following features:
Transform COBOL files to Java by using SQLite
You can now transform COBOL files to Java by using SQLite as a local alternative to the Db2 on Cloud instance, with certain limitations. For more information, see Transform COBOL files to Java by using SQLite.
Bulk generation provides better synchronization with Red Hat extension's Outline view
The bulk generation feature now operates in the single-threaded mode, translating one program at a time for better synchronization with the Red Hat extension's Outline view.
Translate large paragraphs of COBOL code
You can now translate large paragraphs of COBOL code because the default timeout is now set to 10 minutes across all workflows.

Version 5.3.0 of the watsonx Code Assistant for Z service includes various fixes.

For details, see What's new and changed in watsonx Code Assistant for Z.

Related documentation:
watsonx Code Assistant for Z
watsonx Code Assistant for Z Agentic 2.7.0

Version 2.7.0 of the watsonx Code Assistant for Z Agentic service includes various fixes.

For details, see What's new and changed in watsonx Code Assistant for Z Agentic.

Related documentation:
watsonx Code Assistant for Z
watsonx Code Assistant for Z Code Explanation 2.8.0

Version 2.8.0 of the watsonx Code Assistant for Z Code Explanation service includes various fixes.

For details, see What's new and changed in watsonx Code Assistant for Z Code Explanation.

Related documentation:
watsonx Code Assistant for Z Code Explanation
watsonx Code Assistant for Z Code Generation 2.8.0

Version 2.8.0 of the watsonx Code Assistant for Z Code Generation service includes various fixes.

For details, see What's new and changed in watsonx Code Assistant for Z Code Generation.

Related documentation:
watsonx Code Assistant for Z Code Generation
watsonx.data 2.3.0
For a complete list of new and updated features in this release, see the Release notes in the watsonx.data documentation.

Version 2.3.0 of the watsonx.data service includes various fixes.

Related documentation:
watsonx.data
watsonx.data Premium 2.3.0
For a complete list of new and updated features in this release, see the Release notes in the watsonx.data Premium documentation.

Version 2.3.0 of the watsonx.data Premium service includes various fixes.

Related documentation:
watsonx.data Premium
watsonx.data intelligence 2.3.0
This release of watsonx.data intelligence includes the following features:
Curate unstructured data with a new tool
With the new unstructured data curation tool, you can now import and analyze unstructured documents, group these documents based on the analysis results, and process the documents further based on the grouping.

You set up an analysis flow where you import metadata, detect the format and the language of documents, and classify the documents based on predefined or custom document classes. As a second step, you set up a processing flow where you transform these grouped documents, generate entities and embeddings, and create document sets and document libraries that you can then use in your gen AI projects.

Figure: Results of an analysis flow in an unstructured data curation asset

The unstructured data curation tool replaces the unstructured data import and unstructured data enrichment tools. See the Unstructured data import, unstructured data enrichment, and base document sets deprecation notice.

Create SQL-based assets and data quality rules with text instead of SQL
Now you can describe the data asset or the data quality rule that you want to create in plain English and convert this text query into an SQL query. You can then run the generated query to create the asset or the rule.

Tech preview This is a technology preview and is not supported for use in production environments.

Disable certain generative AI capabilities for selected projects
Even if the product is installed with generative AI capabilities, you might not want to use these capabilities in all of your projects. You can now disable these capabilities per project. In projects where the capabilities are disabled, you can't work with natural language queries to create SQL-based assets and data quality rules. In addition, LLM-based name, description, or term generation and term assignment in metadata enrichment are disabled.
Define catalog-specific custom properties for assets
You can now restrict custom properties for assets to a specific catalog. By using catalog-specific custom properties, you can more effectively display values that pertain only to selected domains and ensure that the right information is available to the right users.
To list custom properties that are restricted by a given catalog, use the sort by scope option and scroll down to the items for the catalog that you're interested in.
Manage columns for catalogs
You can now select which columns to display in the asset listing grid by clicking Manage columns on the catalog page. Select your columns, reorder them if necessary, and save your preferences to keep the information that is most relevant for you readily available. For example, you can modify the view to show you a list of assets with the display name, owners, and date added columns only.
Optimize term assignment
With the new tuning options for term assignment, now you can influence the weighting of term suggestions for better precision or recall.
Import primary keys and foreign keys and visualize them in Relationship Explorer
Import primary keys and foreign keys with metadata import instead of metadata enrichment. After import, you can access the associated relationships through the right-side panel and Relationship Explorer.
Versioning of governance artifacts
Track historical changes for the artifacts, schedule new versions to be published in the future, and restore or archive previous versions with the new Versions panel.
Export data lineage to Collibra
You can now export data lineage and view it in Collibra. If you transfer lineage information into the Collibra data governance platform, you can see a comprehensive view of your data flows and dependencies within your governance framework.
Starting parents are introduced in the data lineage graph
When you select an asset to be a starting asset in the lineage, all assets that are higher in the hierarchy are marked as starting parents. Also, all child assets of the selected asset are marked as starting assets. This distinction clarifies which assets are selected as the starting points for the lineage.
Disable data lineage for the unstructured data flows
Data lineage is generated for Unstructured Data Integration and unstructured data curation flows by default. You can disable the lineage generation for unstructured data to control when lineage is created.
Create and access data contracts in Open Data Contract Standard v3
Streamline your management of data contracts by using the Open Data Contract Standard v3 (ODCS v3) format in Data Product Hub.
  • Producers: You can now create data contracts in ODCS v3 format. Create contracts from scratch or by using a predefined template.
  • Consumers: You can access and review data contracts directly in Data Product Hub or download them in YAML format, along with any associated test status information.

This streamlined process enhances collaboration, ensures data quality, and builds trust between producers and consumers.

Deliver data products from Microsoft Azure Databricks
You can now subscribe to a data product that is created in Azure Databricks by using the Access in Azure Databricks delivery method. Consumers can directly access Azure Databricks resources. After delivery of the data products, consumers see details on how to access the specific resources in Azure Databricks.
Deliver data assets to a project by using the access in watsonx.data delivery method
You can now choose to import data product assets to a project by using the access in watsonx.data delivery method.

Manage and view data product reviews

Consumers can now create, edit, and delete reviews of data products. Producers cannot manage reviews.

Version 2.3.0 of the watsonx.data intelligence service includes various fixes.

For details, see What's new and changed in watsonx.data intelligence.

Related documentation:
watsonx.data intelligence
watsonx.governance™ 2.3.0
This release of watsonx.governance includes the following features:
Model groups in Governance Console are synchronized with watsonx.governance

Model groups in Governance Console are now synchronized with AI use cases in watsonx.governance.

When you create or update a model group in Governance Console, your changes are synchronized with the AI use case approaches in watsonx.governance. Similarly, when you create or update an approach in watsonx.governance, your changes are synchronized with the model groups in Governance Console.

The synchronization of model groups and approaches helps to ensure consistent tracking and governance of AI models.

You can turn this feature on or off in Configuration and settings.

Migrate assets to inventories in watsonx.governance

You can now migrate AI use cases and external models to inventories by using a command-line tool. Previously, these assets were stored in the platform assets catalog or other catalogs.

Evaluate and govern prompt templates that use chat as the input

You can now evaluate and govern prompt templates that use chat as the input. To get started, track the prompt template in a use case.

Note the following restrictions for prompt templates that use chat:
  • You can evaluate the prompt templates only in a production deployment space.
  • The evaluation of feedback data is not supported.
  • The prompt templates are evaluated only on the payload data, which is scored against the deployment endpoint. Therefore, metrics that compute values based on the similarity between reference texts and predictions, such as BLEU and ROUGE, are not evaluated.
  • Only the GENAIQ monitor is supported.
Use Guardrail Manager to manage AI guardrails

Configure the predefined guardrails, select actions, and manage them in an inventory through the UI by using the new Guardrail Manager. You can also manage different guardrail configurations in Guardrail Manager. Your guardrail configurations are stored in your inventory, and you can share the inventory with others.

Version 2.3.0 of the watsonx.governance service includes various fixes.

For details, see What's new and changed in watsonx.governance.

Related documentation:
watsonx.governance
watsonx Orchestrate 7.0.0
This release of watsonx Orchestrate includes the following features:
Manage user access and show or hide reasoning traces
You can now set user access at the agent level to control the visibility of the agent’s reasoning trace in chat. Reasoning traces show how an agent generates a response during a conversation. Users can use this visibility to debug or validate the agent’s logic and to see how an agent forms a response, even though some teams choose to hide traces to keep the chat interface simple. For details, see Using agents in Orchestrate Chat.
Use new input types and customization in forms
You can now use forms in user activities to organize multiple types of interactions within a structured layout. You can complete the following tasks:
  • Add a File upload field to collect documents or images.
  • Edit column labels in Multi choice and Single choice fields.
  • Use the Field option to display static information during a chat.
For details, see Forms in user activities.
Use chat session context as input to improve agentic workflows
By using the chat session context, you get improved responses, fewer repetitive inputs, and context-aware workflows. The chat session context provides context from the chat as input to the prompt. When you enable the chat session context, the last five conversations from the chat history are converted to form fields and used as input to the prompt. If the conversation mapping fails, the user is prompted to provide input in the chat. For details, see Using the chat session context as input.
Handle interruptions in voice conversations by using VAD settings
You can define how your voice agent responds to user interruptions. By using the Voice Activity Detection (VAD) settings, you can optimize the responsiveness and accuracy of speech recognition by identifying active voice segments and ignoring silence or background noise. VAD reduces latency and improves the user experience in voice-driven workflows. For details, see Configuring voice settings for agents.
Configure ElevenLabs TTS for natural voice output
You can configure ElevenLabs as a text-to-speech (TTS) provider in the agent builder to create voice agents with natural, human-like speech. Add your own API key, choose voice settings such as speed and stability, and preview voices before you apply them. For details, see Configuring voice settings for agents.

Version 7.0.0 of the watsonx Orchestrate service includes various fixes.

For details, see What's new and changed in watsonx Orchestrate.

Related documentation:
watsonx Orchestrate

New services

Service Category What does it mean for me?
watsonx Code Assistant for Z Understand AI

watsonx Code Assistant for Z Understand is a new service that can be installed on IBM Software Hub Version 5.3.

Architects and developers can use watsonx Code Assistant for Z Understand to work with generative AI to understand and analyze their mainframe applications. With watsonx Code Assistant for Z Understand, you can use a chat interface and access different data sources to get application insights and discover business rules.

Related documentation:
watsonx Code Assistant for Z Understand
watsonx.data integration Analytics

watsonx.data integration is a new service that can be installed on IBM Software Hub Version 5.3.

IBM watsonx.data integration provides unified tools that you can use to transform, integrate, and observe your data. You can use a range of data integration styles, such as streaming, replication, observability, and bulk or batch processing.

watsonx.data integration provides these capabilities:
Transform batch data
Create DataStage flows that extract data from multiple source systems, transform the data as required, and deliver the data to target systems. With batch data flows, you can use both ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) patterns.
Stream real-time data
Stream real-time data with StreamSets to create streaming data flows that act on time-sensitive data. A streaming data flow runs continuously to read, process, and write data as soon as the data becomes available. You can add processors to streaming data flows to transform the data as it moves from source to target systems.
Replicate data
Build a Data Replication pipeline that synchronizes data between a source and target data store. Use the Data Replication tool for near-real-time data delivery with low impact to source data stores.
Prepare unstructured data
Use Unstructured Data Integration to ingest, transform, and enrich unstructured data from diverse sources.
Observe data
Use Data Observability to create alerts that notify you when a data integration process encounters errors or behaves differently than you expect. Investigate data incidents to solve any problems or issues that occur in data quality, integrity, and access.
Related documentation:
watsonx.data integration

Installation enhancements

What's new What does it mean for me?
Red Hat OpenShift
You can install IBM Software Hub Version 5.3 on the following versions of Red Hat OpenShift Container Platform:
  • Version 4.14.0 or later fixes
  • Version 4.16.4 or later fixes
  • Version 4.17.0 or later fixes
  • Version 4.18.6 or later fixes
  • Version 4.19.0 or later fixes
  • Version 4.20.0 or later fixes
Cluster-scoped resources are created separately

Starting in Version 5.3, many services in IBM Software Hub use Helm for installation and upgrade. For services that support Helm, a cluster administrator must create cluster-scoped resources, such as custom resource definitions, cluster roles, cluster role bindings, and webhooks. This change gives cluster administrators more insight into the cluster-scoped resources that each service requires.

Single command to create operators and custom resources

The install-components command replaces the apply-olm and apply-cr commands. The command ensures that the required operators are created before the custom resources are created.
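
For example, you might run a command similar to the following sketch to create the operators and custom resources in one step. The flags shown here mirror those of the deprecated apply-olm and apply-cr commands, and the component name and environment variables are placeholders; confirm the exact syntax in the cpd-cli command reference.

    # Placeholders: ${PROJECT_CPD_INST_OPERATORS} is the project for the
    # operators and ${PROJECT_CPD_INST_OPERANDS} is the project for the operands.
    cpd-cli manage install-components \
      --release=5.3.0 \
      --components=cpd_platform \
      --cpd_operator_ns=${PROJECT_CPD_INST_OPERATORS} \
      --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
      --license_acceptance=true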

Removals and deprecations

What's changed What does it mean for me?
The IBM Certificate manager is deprecated

IBM Software Hub Version 5.3 uses the Red Hat OpenShift Container Platform cert-manager Operator.

If you are upgrading to IBM Software Hub Version 5.3, ensure that you migrate to the Red Hat OpenShift Container Platform cert-manager Operator before you upgrade IBM Software Hub.

For details, see Migrating from the IBM Certificate manager to the Red Hat OpenShift Certificate manager.
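
Before you upgrade, you can verify that the Red Hat cert-manager Operator is running on the cluster. The following oc commands are a sketch that assumes the Operator's default installation namespaces:

    # Default project for the Red Hat cert-manager Operator
    oc get pods -n cert-manager-operator
    # Default project for the cert-manager operand pods
    oc get pods -n cert-manager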

The setup-instance command is deprecated

The cpd-cli manage setup-instance command is deprecated. Use the new cpd-cli manage install-components command to install and upgrade the IBM Software Hub platform.

For details, see:

Installation
Installing the required components for an instance of IBM Software Hub
Upgrade from 5.1
Upgrading IBM Software Hub
Upgrade from 5.2
Upgrading IBM Software Hub
The apply-olm command is deprecated

The apply-olm command is replaced by the install-components command.

The apply-cr command is deprecated

The apply-cr command is replaced by the install-components command.

Prompt tuning for IBM watsonx Code Assistant for Red Hat Ansible Lightspeed is removed

The ability to tune a model's behavior for the watsonx Code Assistant for Red Hat Ansible Lightspeed service is removed.

Previous releases

Looking for information about previous releases? See the following topics in IBM Documentation: