Creating SQL-based data quality rules

You can create SQL-based data quality rules for assessing the quality of the data in your project. Use such rules for measuring errors rather than for evaluating compliance with specific quality criteria.

To have an SQL-based data quality rule contribute to the data quality scores of a column and the containing asset, add that column as a related item to the corresponding rule. Use the Validates the data quality of relationship type. The same score and issues are reported for all columns that are linked with this relationship type.

Requirement
For AI-generated rule descriptions and expression explanations, the respective project settings must be enabled.

To create an SQL-based data quality rule:

  1. Open a project and click New asset > Measure and monitor data quality.

  2. Define details:

    • Specify a name for the data quality rule.

    • Optional: Provide a description. If AI-generated descriptions are enabled, an expression explanation and a description based on that explanation are generated when you save the rule. The description is then automatically updated whenever you change the SQL statement, unless you edit the description yourself.

    • Optional: Select a data quality dimension to describe the primary data quality metric for the rule logic in this asset. The selected dimension can be used as a report category, for filtering, or for visualizing selected data.

      When you create your rule from a data quality definition, a data quality dimension might already be set. You can keep it, delete the dimension setting, or select a different dimension.

  3. Select the SQL query rule logic.

  4. Specify the source of the data to check by selecting an existing connection or creating a new one. For supported data sources, see Supported data sources for curation and data quality.

    If you select an existing connection that was created with personal credentials, you must unlock the connection.

  5. Create your SQL query. You can craft the SQL query yourself, or enter a plain text query and have that converted to SQL by using one of the available models.

    The option to work with plain text queries is technology preview functionality. Technology preview functionality is available for use in development and testing, but is not meant for use in production.

    To work with plain text queries, the project must be enabled for natural language queries. This is a one-time operation. For details, see Data intelligence tools settings.

    You can provide additional context to the models for more accurate and context-aware SQL output by uploading query samples. For more information, see Providing additional context for text-to-SQL conversions.

    Remember: Generated queries are not guaranteed to be identical from run to run, so you might see variations.
    1. In the free-form text area of the SQL tab, enter your text query, select one of the available models (Select model icon), and generate the SQL statement. Because of how SQL-based rules work, make sure to enter a query that returns the exception records. For example, “Order date is in the future” or “Customer age is below 18”.

      Tip: The meta-llama/llama-3-3-70b model is better at following prompt instructions for converting text into SQL and thus might provide more accurate results compared to Granite models.

      When you enter your text query, you can provide additional information that serves as custom prompt to the model. Precede that additional information with the disclaimer Note: .

    2. Copy the generated SQL statement to the SQL area. You can still update the SQL query as required.

    Whether you craft the query yourself or use a generated query, make sure that the SELECT statement meets the following conditions:

    • The statement returns only columns with unique names. Columns with duplicate names will cause validation errors.
    • The statement returns the number of records that do not meet your data quality condition. SQL-based rules work differently from rules that are created from data quality definitions: they report the records that the SELECT statement returns as failed (Not met). Also, the total number of records that is reported equals the number of returned records, not the number of checked records.

    For example, assuming you have a table db2admin.credit_card with 31 rows and you want to check how many records with card type AMEX are in the table, the difference is as follows:

    Data quality rule from a data quality definition
    You check for records where the card type is AMEX.
    Expression in the data quality definition: Col = 'AMEX'
    Bound expression in the data quality rule: credit_card.card_type = 'AMEX'
    Sample result: Total: 31 | Met: 4 (12.9%) | Not met: 27 (87.1%)

    SQL-based data quality rule
    You check for records where the card type is not AMEX.
    SELECT statement: select card_type from db2admin.credit_card where card_type <> 'AMEX'
    Sample result: Total: 27 | Met: 0 (0%) | Not met: 27 (100%)
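
    The counting behavior can be reproduced locally. The following sketch uses an in-memory SQLite database as a stand-in for the Db2 table in the example; the table name and the data distribution (4 AMEX rows out of 31) are assumptions taken from the sample results above.

```python
import sqlite3

# Hypothetical stand-in for the db2admin.credit_card table:
# 31 rows, 4 of which have the card type AMEX.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE credit_card (card_type TEXT)")
rows = [("AMEX",)] * 4 + [("VISA",)] * 27
conn.executemany("INSERT INTO credit_card VALUES (?)", rows)

# SQL-based rule logic: the SELECT statement returns the exception
# records, that is, every row whose card type is not AMEX.
failed = conn.execute(
    "SELECT card_type FROM credit_card WHERE card_type <> 'AMEX'"
).fetchall()

# The rule reports Total = Not met = the number of returned rows (27),
# not the number of checked rows (31).
print(len(failed))
```

    Note how the 4 records that satisfy the quality condition never appear in the rule output at all, which is why the Met count of an SQL-based rule is always 0.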

    Also check the set of Sample SQL statements for data quality rules. These samples demonstrate how you can write SQL rules to return records that do not meet your quality criteria. You can copy the provided statements into your own data quality rules and adjust them as needed.

    • Consider these conventions for specifying column, table, and schema names in your SELECT statement:

      • Table and schema names in PostgreSQL data sources are case-sensitive. You might need to enclose the names in double quotation marks like in this example: "schema"."table_name"
      • Try to avoid SELECT * queries. Such queries might cause validation errors when column names change. Narrow down the column selection.
      • If a column name does not start with an alphabetic character, or contains characters other than alphabetic characters, numeric characters, and underscores, use an alias for that column.
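
    These naming conventions can be tried out with any SQL engine that supports standard double-quoted identifiers. This minimal sketch uses SQLite; the table and column names are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A column name that starts with a digit and contains a space must be
# enclosed in double quotation marks when referenced, and should get
# an alias in the SELECT list so the rule output has a clean name.
conn.execute('CREATE TABLE "Orders" ("2022 total" REAL)')
conn.execute('INSERT INTO "Orders" VALUES (99.5)')

cur = conn.execute(
    'SELECT "2022 total" AS total_2022 FROM "Orders" WHERE "2022 total" < 100'
)

# The alias, not the raw column name, is reported for the result column.
print(cur.description[0][0])
```

    The same double-quoting applies to case-sensitive schema and table names in PostgreSQL, where unquoted identifiers are folded to lowercase.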

    At any time, you can test the SQL statements. Note that the test returns only the names of the columns selected by your query. No actual processing is done. A validity check is done when you click Next. You can't proceed unless your query passes this check.

    In projects where AI-generated content is enabled, an explanation for the SQL statement is generated when you save the rule. The explanation is updated for every change to the SQL statement.

  6. Configure output settings and content.

    Select whether you want rule output to be written to a database. If not, only some statistical information is provided in the rule's run history. For more information, see Configuring output settings for data quality rules.

  7. Review your configuration. To make sure your rule is configured properly, you can test it before you actually save it to the project. The output of the rule test is directly displayed and matches what you configured in the output settings.

    To make changes to the configuration, click the Edit icon on the tile and update the settings. When your review is complete, click Create. The rule and its related DataStage flow are added to the project. The default name of the DataStage flow is DataStage flow of data rule <rulename>.

If your rule is configured properly without any information missing, it has the status Ready, which means that the rule can be run. The status Not ready indicates that the rule can't be run because of SQL syntax errors, modified dependencies, or other rule definition issues, for example, a changed password for accessing the data source. This status is more likely to occur for data quality rules that were created by using the IBM Knowledge Catalog API (Create data quality rule). When you create data quality rules by using the API, make sure to also test and validate the rule.

To confirm that a rule is still valid before you run it manually, you can check the rule status by selecting Validate from the overflow menu.
