Supported connectors for discovery, enrichment, and data quality of structured data

You can connect to many data sources from which you can import asset metadata and then enrich those data assets and assess their data quality. You can create dynamic views of data in these sources. You can also write the output of data quality analyses to supported data sources.

Unless otherwise noted, this information applies to all editions of IBM Knowledge Catalog (Base, Premium, and Standard).

A dash (—) in any of the columns indicates that the data source is not supported for this purpose.

By default, data quality rules and the underlying DataStage flows support standard platform connections. Not all connectors that were supported in traditional DataStage and potentially used in custom DataStage flows are supported in IBM Knowledge Catalog.

Requirements and restrictions

Understand the requirements and restrictions for connections to be used in data curation and data quality assessment.

Required permissions

Users must be authorized to access the connections to the data sources. For metadata import, the user who runs the import must have SELECT permission, or an equivalent, on the databases in question.
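As a sanity check before starting an import, the SELECT requirement can be probed with a cheap query. This is a minimal sketch using Python's DB-API, with the sqlite3 module standing in for your actual database driver; the table names are illustrative.

```python
# Minimal sketch: probe whether the importing user can SELECT from each
# table before starting a metadata import. sqlite3 stands in for the
# actual database driver; table names are illustrative.
import sqlite3

def can_select(conn, table: str) -> bool:
    """Return True if the current user can read the table."""
    try:
        # LIMIT 0 fetches no rows, so the probe is cheap even on big tables.
        conn.execute(f'SELECT * FROM "{table}" LIMIT 0')
        return True
    except sqlite3.DatabaseError:
        return False

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
print(can_select(conn, "customers"))  # True
print(can_select(conn, "missing"))    # False: no such table, or no permission
```

The same probe works with any DB-API driver; only the exception class and the connection call change.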

General prerequisites

Connection assets must exist in the project for connections that are used in these cases:

  • For running metadata enrichment, including advanced analysis (in-depth primary key analysis, in-depth relationship analysis, or advanced data profiling), on the assets in a metadata enrichment
  • For running data quality rules
  • For creating query-based data assets (dynamic views)
  • For writing output of data quality checks or frequency distribution tables
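A pre-flight check along these lines can verify that the connection-asset prerequisite is met before a job runs. The asset structures below are illustrative and do not correspond to a specific product API.

```python
# Sketch: pre-flight check that every connection a job references has a
# connection asset in the project. The data structures are illustrative;
# they do not correspond to a specific product API.
def missing_connection_assets(project_assets, required_connections):
    """Return required connections that have no connection asset in the project."""
    available = {a["name"] for a in project_assets if a["type"] == "connection"}
    return sorted(set(required_connections) - available)

project_assets = [
    {"name": "db2-prod", "type": "connection"},
    {"name": "sales.csv", "type": "data_asset"},
]
print(missing_connection_assets(project_assets, ["db2-prod", "s3-landing"]))
# ['s3-landing']
```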

Supported source data formats

In general, metadata import, metadata enrichment, and data quality rules support the following data formats:

  • All: Tables from relational and nonrelational data sources

    Delta Lake and Iceberg table formats are supported for certain file-storage connectors. For analyses to work as expected, import specific files instead of top-level directories:

    • For Delta Lake tables, import _delta_log files.
    • For Iceberg tables, import metadata/version-hint.text files.
  • Metadata import: Any format from file-based connections to the data sources and tool-specific formats from connections to external tools. For Microsoft Excel workbooks, each sheet is imported as a separate data asset. The data asset name equals the name of the Excel sheet.

  • Metadata enrichment: Tabular: CSV, TSV, Avro, Parquet, Microsoft Excel (For workbooks uploaded from the local file system, only the first sheet in a workbook is profiled.)

  • Data quality rules: Tabular: Avro, CSV, Parquet, ORC; for data assets uploaded from the local file system, CSV only
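The file-selection rule for Delta Lake and Iceberg tables noted above can be sketched as a small helper. The bucket layout and the function name are illustrative; the paths follow the _delta_log and metadata/version-hint.text conventions described earlier.

```python
# Sketch: pick the file to import for a lake-format table so that analyses
# work as expected, instead of importing the top-level directory.
# The bucket layout is illustrative.
def import_target(table_dir: str, table_format: str) -> str:
    """Return the path to import for a Delta Lake or Iceberg table."""
    base = table_dir.rstrip("/")
    if table_format == "delta":
        return base + "/_delta_log"
    if table_format == "iceberg":
        return base + "/metadata/version-hint.text"
    raise ValueError(f"unknown table format: {table_format}")

print(import_target("s3://sales/orders", "delta"))
# s3://sales/orders/_delta_log
print(import_target("s3://sales/orders", "iceberg"))
# s3://sales/orders/metadata/version-hint.text
```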

Data quality features are available only in IBM Knowledge Catalog and IBM Knowledge Catalog Premium.

Database support for analysis output tables

In general, output tables with analysis results from data quality analysis that runs as part of metadata enrichment, advanced profiling, or data quality rules can be written to a subset of the supported databases. Database connectors that can serve as targets for such output tables are marked accordingly under Target for output tables.

File-storage connectors

Supported file-based connectors (capabilities: metadata import, metadata enrichment, and definition-based rules):

  • Amazon S3 (Delta Lake tables, Iceberg tables)
  • Apache HDFS
  • Box (see note 1)
  • Generic S3 (Delta Lake tables, Iceberg tables)
  • Google Cloud Storage (Delta Lake tables, Iceberg tables)
  • IBM Cloud Object Storage (Delta Lake tables, Iceberg tables)
  • IBM Cognos Analytics (see notes 1 and 2)
  • IBM Match 360
  • Microsoft Azure Data Lake Storage (Delta Lake tables, Iceberg tables)

Notes:

1 Advanced analysis is not supported for this data source.

2 Cognos Analytics connections that use secrets from a vault as credentials cannot be used for metadata import.

Database connectors

Supported database connectors (capabilities: metadata import, metadata enrichment, definition-based rules, SQL-based rules, SQL-based data assets, and target for output tables):

  • Amazon RDS for MySQL
  • Amazon RDS for Oracle
  • Amazon RDS for PostgreSQL
  • Amazon Redshift (see note 1)
  • Apache Cassandra
  • Apache Hive (see note 5)
  • Apache Impala with Apache Kudu
  • Denodo
  • Dremio
  • Google BigQuery (see note 7)
  • Greenplum
  • IBM Cloud Databases for MongoDB (see note 1)
  • IBM Cloud Databases for MySQL (see note 1)
  • IBM Cloud Databases for PostgreSQL (see note 1)
  • IBM Data Virtualization
  • IBM Data Virtualization Manager for z/OS (see notes 1 and 2)
  • IBM Db2
  • IBM Db2 Big SQL (see note 1)
  • IBM Db2 for z/OS (see note 1)
  • IBM Db2 on Cloud
  • IBM Db2 Warehouse (see note 1)
  • IBM Informix
  • IBM Netezza Performance Server
  • IBM watsonx.data
  • MariaDB
  • Microsoft Azure Databricks (see note 8)
  • Microsoft Azure SQL Database (see note 9)
  • Microsoft SQL Server
  • MongoDB
  • MySQL
  • Oracle (see note 3)
  • PostgreSQL
  • Presto
  • Salesforce.com (see note 4)
  • SAP ASE (see note 1)
  • SAP HANA (see note 1)
  • SAP IQ (see note 1)
  • SAP OData (see note 6; supported authentication method: username and password)
  • SingleStoreDB
  • Snowflake (see note 9)
  • Teradata

Notes:

1 Advanced analysis is not supported for this data source.

2 With Data Virtualization Manager for z/OS, you add data and COBOL copybook assets from mainframe systems to catalogs in IBM Cloud Pak for Data. Copybooks are files that describe the data structure of a COBOL program. Data Virtualization Manager for z/OS helps you create virtual tables and views from COBOL copybook maps. You can then use these virtual tables and views to import and catalog mainframe data into IBM Cloud Pak for Data in the form of data assets and COBOL copybook assets.

The following types of COBOL copybook maps are not imported: ACI, Catalog, and Natural.

When the import is finished, you can go to the catalog to review the imported assets, including the COBOL copybook maps, virtual tables, and views. You can use these assets in the same ways as other assets in Cloud Pak for Data.

For more information, see Adding COBOL copybook assets.

3 Table and column descriptions are imported only if the connection is configured with one of the following Metadata discovery options:

  • No synonyms
  • Remarks and synonyms

4 Some objects in the SFORCE schema are not supported. See Salesforce.com.

5 To create metadata-enrichment output tables in Apache Hive at an earlier version than 3.0.0, you must apply the workaround described in Writing metadata enrichment output to an earlier version of Apache Hive than 3.0.0 in the IBM Software Hub documentation.

6 Information about whether the data asset is a table or a view cannot be retrieved and is therefore not shown in the enrichment results.

7 Output tables for advanced profiling: If you rerun advanced profiling at too short intervals, results might accumulate because the data might not be updated quickly enough in Google BigQuery. Wait at least 90 minutes before rerunning advanced profiling with the same output target. For more information, see Stream data availability. Alternatively, you can define a different output table.
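The minimum-interval guidance in note 7 can be enforced with a simple scheduling check. The 90-minute threshold comes from the note above; the helper itself is illustrative.

```python
# Sketch: guard against rerunning advanced profiling too soon against the
# same Google BigQuery output target. The 90-minute minimum follows the
# guidance above; the helper is illustrative.
from datetime import datetime, timedelta

MIN_RERUN_INTERVAL = timedelta(minutes=90)

def can_rerun(last_run: datetime, now: datetime) -> bool:
    """True if enough time has passed to rerun with the same output table."""
    return now - last_run >= MIN_RERUN_INTERVAL

last = datetime(2024, 5, 1, 12, 0)
print(can_rerun(last, datetime(2024, 5, 1, 13, 0)))   # False: only 60 minutes
print(can_rerun(last, datetime(2024, 5, 1, 13, 30)))  # True: 90 minutes
```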

8 Both the Hive metastore and the Unity Catalog are supported.

9 Advanced analysis is supported for this data source starting with IBM Knowledge Catalog 5.1.1.

Other data sources

An administrator can upload JDBC drivers to enable connections to more data sources. However, the administrator must test these custom JDBC drivers to ensure that they are compatible with the tools that are used to connect to a data source. Metadata import, metadata enrichment, and data quality rules are not guaranteed to work with all JDBC implementations. See Importing JDBC drivers in the IBM Software Hub documentation and Generic JDBC.

Connections that are established by using third-party JDBC drivers were tested for the following data sources:

  • Amazon Redshift (Metadata enrichment)
  • Apache Cassandra
  • Apache Hive
  • Apache Kudu (Data quality rules)
  • Databricks (Data quality rules)
  • Snowflake (Metadata enrichment)
  • Teradata (Metadata enrichment)


Parent topic: Supported connectors for curation and data quality