What's new and changed in Data Virtualization

Data Virtualization updates can include new features and fixes. Releases are listed in reverse chronological order so that the latest release is at the beginning of the topic.

You can see a list of the new features for the platform and all of the services at What's new in IBM Software Hub.

Installing or upgrading Data Virtualization

Ready to install or upgrade Data Virtualization?

  • To install Data Virtualization along with the other IBM® Software Hub services, see Installing IBM Software Hub.
  • To upgrade Data Virtualization along with the other IBM Software Hub services, see Upgrading IBM Software Hub.
  • To install or upgrade Data Virtualization independently, see Data Virtualization.
    Remember: All of the IBM Software Hub components associated with an instance of IBM Software Hub must be installed at the same version.

IBM Software Hub Version 5.3.1

A new version of Data Virtualization was released in February 2026 with IBM Software Hub 5.3.1.

Operand version: 3.3.1

This release includes the following changes:

New features
This release of Data Virtualization includes the following features:
Audit user-initiated SQL statements to monitor user activity
Data Virtualization Admins can now audit the SQL statements that users run, capturing details such as the time, user name, and statement status. Audited statements are sent to the IBM Software Hub auditing system and automatically forwarded to your configured SIEM solution.
See Auditing user-initiated SQL statements in Data Virtualization.
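As a rough sketch only, an audited statement could be represented as a CADF-shaped event carrying the time, user name, and status described above. The field names follow the generic DMTF CADF standard and are assumptions, not the service's actual record schema:

```python
import json

# Hypothetical CADF-shaped audit entry -- illustrative only; the real
# Data Virtualization record layout may differ from this sketch.
audit_entry = {
    "eventType": "activity",
    "eventTime": "2026-02-10T14:32:05Z",   # when the statement ran
    "action": "execute/sql",               # assumed action label
    "outcome": "success",                  # status of the statement
    "initiator": {"name": "dv_user1",
                  "typeURI": "service/security/account/user"},
    "target": {"name": "SELECT * FROM TRADING.STOCKS",
               "typeURI": "data/database/sql"},
}

print(json.dumps(audit_entry, indent=2))
```

A forwarder to a SIEM solution would consume records of roughly this shape from the IBM Software Hub auditing system.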
Updates
The following updates were introduced in this release:
Leverage pagination types for REST API connectors
Documentation is now available on how to use the Autonomous REST Composer (ARC) tool to apply pagination, which breaks high-volume REST API queries into smaller chunks. You can then download the model file and use it to connect to your REST API. See Configuring pagination for the REST API connector.
New requirements for Db2 for z/OS and Db2 for i connections after upgrading to 5.3.1
  • After upgrade, Db2 for z/OS and Db2 for i connections might become invalid. To activate the connections again, edit the connection, and then manually upload the db2jcc4 JDBC driver and license JAR files.
  • To create new Db2 for z/OS and Db2 for i connections, you must manually upload the db2jcc4 JDBC driver and license JAR files.

    For steps on uploading the files, see the Db2 for i and Db2 for z/OS sections in Supported data sources in Data Virtualization.

Issues fixed in this release
The following issues were fixed in this release:
The virtualization status might be stuck on the Virtualized data page even after virtualization is complete
When you go to the Virtualize page and then virtualize an object on the Files tab, the virtualization status for that object on the Virtualized data page might remain Started even after the process completes.
Resolution: This issue is now fixed.
The wrong connection credentials setting is displayed after changing the credentials setting through USER_PERSONAL_CREDENTIAL=TRUE
If you change a personal credential to a shared credential through USER_PERSONAL_CREDENTIAL=TRUE, the connection might incorrectly appear in the web client as a personal credential connection even though it still uses a shared credential.
Resolution: This issue is now fixed.
Cannot create caches after changing credentials setting through USER_PERSONAL_CREDENTIAL=TRUE
If you change a shared credential data source to personal credentials through USER_PERSONAL_CREDENTIAL=TRUE, then Data Virtualization cannot create caches (both user-defined and auto-cache) for virtual objects based on that connection.
Resolution: This issue is now fixed.
Data Virtualization agent pod logs are missing from the log collection diagnostic job
When you run an IBM Cloud Pak® for Data diagnostic for Data Virtualization, the diagnostic zip file does not contain logs from the Data Virtualization agent pods.
Resolution: This issue is now fixed.
Tables are not published to catalog after virtualization
When you add a table to the cart and then virtualize it, the virtualization eventually completes, but the table is not published to the catalog. In addition, this error appears: The operation has timed out.
Resolution: This issue is now fixed.
Audit entries with multibyte characters are converted to hexadecimal string
If an audited attribute contains multibyte characters, those characters are converted to their hexadecimal equivalents when captured in the audit logs. For example, a table named DËMO in schema TEST is logged as "objschema":"TEST","objname":"44C38B4D4F".
Resolution: This issue is now fixed.
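The hexadecimal strings that appear in older logs are simply the UTF-8 bytes of the original name, so affected entries can still be decoded. A minimal sketch in plain Python, using no product APIs:

```python
# Each multibyte name was logged as the hex of its UTF-8 bytes:
# D -> 44, Ë -> C3 8B, M -> 4D, O -> 4F
name = "DËMO"
encoded = name.encode("utf-8").hex().upper()
print(encoded)   # 44C38B4D4F, matching the logged value in the example

# Reversing the conversion recovers the original object name:
decoded = bytes.fromhex(encoded).decode("utf-8")
print(decoded)   # DËMO
```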
Unable to create the schema in the Shopping Cart page after dropping the same schema
If you dropped a schema, you cannot create a schema with the same name on the Shopping Cart page.
Resolution: This issue is now fixed.
You cannot preview long string values in headings in CSV, TSV, or Excel files
When you use the first row as column headings, the string values in that row must not exceed the maximum Db2 identifier length of 128 characters and must not be duplicated. If the header row contains values that are too long or duplicated, an error message is displayed when you try to preview your file in Data Virtualization.
400: Missing ResultSet:java.sql.SQLSyntaxErrorException: Long column type column or parameter 'COLUMN2' not permitted in declared global temporary tables or procedure definitions.
Column heading names are case-insensitive and are converted to uppercase in the API response, which the console displays. Therefore, a column that is named ABC is considered the same as a column that is named abc. However, you can rename the columns to mixed case when you virtualize your data source.
Resolution: This issue is now fixed.
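Because the limits above are fixed by Db2 (identifiers of at most 128 characters, compared case-insensitively after conversion to uppercase), a header row can be checked before you preview a file. A minimal sketch, independent of any product API:

```python
def check_headers(headers, max_len=128):
    """Flag header values that exceed the Db2 identifier length limit or
    that collide after the case-insensitive uppercase conversion."""
    problems = []
    seen = set()
    for h in headers:
        if len(h) > max_len:
            problems.append(f"too long ({len(h)} chars): {h[:30]}...")
        key = h.upper()          # ABC and abc count as the same column
        if key in seen:
            problems.append(f"duplicate after uppercasing: {h}")
        seen.add(key)
    return problems

# "ID" collides with "id", and the 200-character name exceeds the limit.
print(check_headers(["id", "ID", "x" * 200]))
```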
Customer-reported issues fixed in this release
For a list of customer-reported issues that were fixed in this release, see the Fix List for IBM Cloud Pak for Data on the IBM Support website.
Security issues fixed in this release
The following security issues were fixed in this release:
Version 5.3.1 of the Data Virtualization service includes various security fixes. For more information, refer to the security bulletins released for Data Virtualization.
Deprecated features
The following features were deprecated in this release:
SAP HANA connector
Data Virtualization no longer ships a built-in driver for the SAP HANA connector; you must upload the driver yourself. To create new SAP HANA connections in Data Virtualization, upload the SAP HANA JDBC driver (ngdbc.jar) that you obtain from SAP.

If you have existing connections that use the shipped driver, you must obtain and upload the native SAP HANA driver before upgrading, or those connections will stop working after the upgrade.

See Connecting to SAP HANA in Data Virtualization.

IBM Software Hub Version 5.3.0

A new version of Data Virtualization was released in December 2025 with IBM Software Hub 5.3.0.

Operand version: 3.3.0

This release includes the following changes:

New features
This release of Data Virtualization includes the following features:
Enable high concurrency and greater scalability of query processing by using Data Virtualization agents
Data Virtualization agents now run in their own dedicated pods instead of within the Data Virtualization primary head pod, for better system scalability.
  • For new installations, Data Virtualization agent pods are automatically provisioned based on the sizing option that you choose in the web client.

  • If your existing Data Virtualization instance uses custom sizing, then upgrading your Data Virtualization instance automatically adds five agent pods, each requiring two CPUs. If your cluster was deployed with enough spare capacity to absorb this extra load, overall resource usage stays balanced; if your resources are limited, you might experience a net increase in resource usage.

    To customize the number of Data Virtualization agent pods, or adjust the CPU usage and memory settings, see Customizing the pod size and resource usage of Data Virtualization agents.

Automatically apply personal credentials setting when importing data sources

Personal credentials are now enabled by default in Data Virtualization. When you add a data source in platform connections with personal credentials turned on, the same setting is automatically applied when the data source is imported into Data Virtualization.

To access, virtualize, and query the data source through Data Virtualization, each user must configure their own credentials on the platform connections side.

Migrate Data Virtualization assets to and from your Git repository

You can now export and import your Data Virtualization assets across different environments (for example, from development to QA or production) from your Git repository by using Data Virtualization APIs. By using Git, Admin users can quickly synchronize assets, like tables, nicknames, and views, by promoting the Data Virtualization objects from a Data Virtualization instance to a Git branch, and by pulling updates from Git back into Data Virtualization.

You can migrate the following objects with Git:
  • Nicknames (excluding those with personal credentials)
  • Schemas
  • Tables (excluding those with personal credentials)
  • Views
  • Authorization statements (GRANTs)
  • Statistics

See Migrating Data Virtualization objects by using Git.

Use new views to simplify troubleshooting and admin tasks
  • You can now troubleshoot connection failures by using automated diagnostic tests. When a data source connection fails, Data Virtualization automatically runs a series of connectivity tests (including Ping, OpenSSL, Netcat, and Traceroute) to identify the root cause. The results are logged in ConnectivityTest.log on each qpagent, along with a unique DIAGID in the error message, which you can use with the LISTCONNECTIVITYTESTWARNINGS view to retrieve detailed logs. The DIAGID is cleared when the data source connection becomes available again.

  • You can now display the list of columns of the tables in an RDBMS source by using the LISTCOLUMNS view.

  • You can now set configuration properties specific to Data Virtualization and Federation directly for your Data Virtualization connection by using the SETCONNECTIONCONFIGPROPERTY and SETCONNECTIONCONFIGPROPERTIES stored procedures. Additionally, you can now set Federation-specific options for existing SETRDBCX procedures.

  • See the full list of Stored procedures and Views.
Grant collaborators the new INSPECT data source privilege to view source metadata

You can now grant the INSPECT privilege to users or to the DV_METADATA_READER role to enable those users to import lineage metadata with MANTA Automated Data Lineage.

To get started in the web client, go to the Data sources page, select the Manage access setting on your data source, and then select the grantees. You can also grant the INSPECT privilege to the DV_METADATA_READER role by selecting Grant INSPECT privilege to the DV_METADATA_READER role. In the INSPECT column, you can grant or revoke the INSPECT privilege for each grantee.

See INSPECT privilege in Data source connection access restrictions in Data Virtualization and Configuring Data Virtualization connections for lineage imports.

Connect to Apache Cassandra and watsonx.data™ Presto data sources
You can now connect to Apache Cassandra and watsonx.data Presto from Data Virtualization.
Updates
The following updates were introduced in this release:
Improvements to how connection IDs are handled during import

When you import a connection, you can choose to map it to an existing connection on the target cluster by specifying the CID. This mapping is useful when your source and target environments use different data sources that share the same schema.

See step 4 in Modifying Data Virtualization control data to support differences in environments.

Selectively filter groups for migration

You can now migrate specific service instance groups as part of your cpd-cli Data Virtualization migration workflow. Previously, group migration was limited to either all or none.

For more information, see Migrating Data Virtualization data by using the cpd-cli.

Improved access by using personal credentials in reload table scenarios

You can now reload tables when the data source that you added has personal credentials enabled. In addition, the Virtualize page now shows only the tables you can access. Each user must have their credentials configured in Platform Connections to access, virtualize, and query data.

Updated audit logging structure

Audit events that Data Virtualization generates now comply more closely with the CADF and IBM Software Hub audit logging formats, which ensures consistency with the audit events that other services generate.

See the Sample Data Virtualization CADF audit records section.

Additional workflow steps for creating virtualized tables
You can now find additional steps in the documentation for creating a virtualized table.
Issues fixed in this release
The following issues were fixed in this release:
Data Virtualization passthrough route c-db2u-dv-db2u is missing after upgrade
When you upgrade your Data Virtualization instance, the passthrough route is missing and Data Virtualization displays the error FAIL: route c-db2u-dv-db2u does not exist in namespace zen.
Resolution: This issue is now fixed.
Connections created using the cpd-cli connection create command can't be found or used
When you create connections by using the cpd-cli connection create command, the connections created cannot be found or used in Data Virtualization, but they are available and can be used in projects and catalogs. For more information on connection create, see connection create.
Resolution: This issue is now fixed.
Virtualized asset column name not updating after COS column update
For assets that are in your cart, if you attempt to update the data file (COS) column name or choose to change all the column names to uppercase, and then virtualize the asset, the column name fails to update.
Resolution: This issue is now fixed.
Audit records with transaction IDs containing special characters might be missing
Some audit records might be missing from the IBM Software Hub audit service because the Db2U Audit service is unable to process some randomly generated transaction IDs that contain special characters.
Resolution: This issue is now fixed.
Audit log records might display incomplete information
An audit log record might display only some attributes, such as the timestamp and action user ID, when unprintable characters occur in the audit record.
Resolution: This issue is now fixed.
Error displays when column masking or row filtering rules apply to a column with double quotation marks in its name
When you query an object where a column masking or row filtering rule applies to a column with double quotation marks in its name, the query fails to run and this unexpected token error appears: SQLCODE=-104; SQLSTATE=42601.
Resolution: This issue is now fixed.
Data Virtualization instance upgrade fails with 'Error creating module: SYSIBMADM.IDAX_MESS_P_RAISE_MESSAGE'
When you upgrade your Data Virtualization instance, the cpd-cli service-instance upgrade command shows that the instance upgrade remains in the Upgrade in progress stage for more than 2 hours.
Resolution: This issue is now fixed.
Customer-reported issues fixed in this release
For a list of customer-reported issues that were fixed in this release, see the Fix List for IBM Cloud Pak for Data on the IBM Support website.

Security issues fixed in this release

Version 5.3 of the Data Virtualization service includes various security fixes. For more information, refer to the security bulletins released for Data Virtualization.