Snowflake lineage configuration
To import lineage metadata from Snowflake, create a data source definition, a connection to the data source, and a metadata import job.
This information applies to IBM Manta Data Lineage service.
To import lineage metadata for Snowflake, complete these steps:
- Create a data source definition.
- Create a connection to the data source in a project.
- Create a metadata import.
Creating a data source definition
Create a data source definition. Select Snowflake as the data source type.
Creating a connection to Snowflake
Create a connection to the data source in a project. For connection details, see Snowflake connection.
Creating a metadata import
Create a metadata import. Learn more about the options that are specific to the Snowflake data source:
Include and exclude lists
You can include or exclude assets up to the schema level. Provide databases and schemas in the format database/schema. Each part is evaluated as a regular expression. Assets that are added to the data source later are also included or excluded if they match the conditions that are specified in the lists. Example values:
- myDB/: all schemas in the myDB database.
- myDB2/.*: all schemas in the myDB2 database.
- myDB3/mySchema1: the mySchema1 schema from the myDB3 database.
- myDB4/mySchema[1-5]: any schema in the myDB4 database with a name that starts with mySchema and ends with a digit between 1 and 5.
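As a rough illustration (not the product's implementation), the database and schema parts of each pattern could be evaluated as full-match regular expressions like this; the `matches` helper and its behavior for an empty schema part are assumptions for this sketch:

```python
import re

# Hypothetical sketch: evaluate an include/exclude pattern in the
# database/schema format against an asset's database and schema names.
# Each part is treated as a regular expression, per the description above.
def matches(pattern: str, database: str, schema: str) -> bool:
    db_pat, _, schema_pat = pattern.partition("/")
    if re.fullmatch(db_pat, database) is None:
        return False
    # Assumption: an empty schema part (for example "myDB/") matches every schema.
    return schema_pat == "" or re.fullmatch(schema_pat, schema) is not None

print(matches("myDB/", "myDB", "anySchema"))                 # True
print(matches("myDB2/.*", "myDB2", "sales"))                 # True
print(matches("myDB4/mySchema[1-5]", "myDB4", "mySchema3"))  # True
print(matches("myDB4/mySchema[1-5]", "myDB4", "mySchema9"))  # False
```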
External inputs
If you use external Snowflake SQL scripts, you can add them in a .zip file as an external input. You can organize the structure of a .zip file as subfolders that represent databases and schemas. After the scripts are scanned, they are added under respective databases and schemas in the selected catalog or project. The .zip file can have the following structure:
<database_name>
<schema_name>
<script_name.sql>
<database_name>
<script_name.sql>
<script_name.sql>
replace.csv
The replace.csv file contains placeholder replacements for the scripts that are added in the .zip file. For more information about the format, see Placeholder replacements.
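A .zip file with the structure that is described above can be assembled with any archiving tool; the following sketch uses Python's zipfile module. The database and schema folder names, script contents, and replace.csv contents are made-up examples (see Placeholder replacements for the actual replace.csv format):

```python
import zipfile

# Example scripts keyed by their archive path: subfolders represent
# databases and schemas, and scripts can also sit at the database level
# or at the top level of the archive.
scripts = {
    "SALES/PUBLIC/load_orders.sql": "INSERT INTO orders SELECT * FROM staged_orders;",
    "SALES/refresh_views.sql": "CREATE OR REPLACE VIEW v AS SELECT 1;",
    "standalone.sql": "SELECT CURRENT_TIMESTAMP();",
    # Hypothetical content; the real format is described in
    # Placeholder replacements.
    "replace.csv": "${ENV},prod\n",
}

with zipfile.ZipFile("external_inputs.zip", "w") as zf:
    for path, sql in scripts.items():
        zf.writestr(path, sql)
```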
Advanced import options
- Table stages extraction
- You can add a regular expression to list table stages from which you want staged files to be extracted. Use a fully qualified name and enclose each segment with double quotation marks. Leave the field empty if you do not want to extract staged files from any table stages. Example value:
\"mydb\"\.\"schema1\"\.\".*\"|\"mydb\"\.\"myschema\"\.\"abc.*\"
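To illustrate how such an expression selects table stages (this is an illustration, not the product's matching code), the pattern can be tested against fully qualified names whose segments are enclosed in double quotation marks:

```python
import re

# The same expression as in the example value above: match every stage in
# mydb.schema1, plus stages in mydb.myschema whose names start with "abc".
pattern = r'\"mydb\"\.\"schema1\"\.\".*\"|\"mydb\"\.\"myschema\"\.\"abc.*\"'

print(bool(re.fullmatch(pattern, '"mydb"."schema1"."any_table"')))   # True
print(bool(re.fullmatch(pattern, '"mydb"."myschema"."abc_orders"'))) # True
print(bool(re.fullmatch(pattern, '"otherdb"."schema1"."t"')))        # False
```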
- Performance profile
- For selected data sources you can choose a performance profile. Depending on your current needs, the lineage metadata import might be faster or more complete. You can choose between the following profiles:
- Fast: Low time and memory consumption are the priorities in this profile. If your input is large, lineage might not be complete.
- Balanced: Both performance and lineage completeness are important. This profile is a compromise between lineage completeness and the time and memory that are spent on the lineage import.
- Complete: Lineage completeness is the priority in this profile. If your input is large, the lineage import might take a significant amount of resources and time.
- Custom profile: You can create your own performance profile by providing values for the following properties:
- Dataflow Analysis Timeout Limit: Specifies the maximum estimated time (in seconds) after which the dataflow analysis of a single input is stopped. The time is checked when each node is added, or in some cases when edges are created. Therefore, in some cases, the timeout might slightly exceed the specified limit. If you set the value to 0, the analysis is not stopped. Example value: 60.
- Dataflow Analysis Edge Limit: Specifies the maximum number of edges that are allowed for a single input during the dataflow analysis. If this limit is exceeded, all filter edges are removed and no more filter edges are added. If the limit is still exceeded even after that, the analysis is stopped and the input fails. To disable the limit, set the value to 0. Example value: 2500.
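The interaction of the two custom-profile limits can be pictured with a small sketch (an assumption-laden illustration, not the actual analysis engine): the timeout is checked as nodes are added, the edge limit stops the analysis when exceeded, and a value of 0 disables either check.

```python
import time

# Hypothetical model of a dataflow analysis bounded by the two limits.
# Each input item is a (node, number_of_new_edges) pair; names are made up.
def analyze(items, timeout_limit=60, edge_limit=2500):
    start = time.monotonic()
    nodes, edges = 0, 0
    for node, new_edges in items:
        # Timeout is checked when each node is added (0 disables it).
        if timeout_limit and time.monotonic() - start > timeout_limit:
            raise TimeoutError("dataflow analysis stopped: timeout limit reached")
        nodes += 1
        edges += new_edges
        # Edge limit stops the analysis for this input (0 disables it).
        if edge_limit and edges > edge_limit:
            raise RuntimeError("dataflow analysis stopped: edge limit exceeded")
    return nodes, edges

print(analyze([("t1", 100), ("t2", 200)]))  # (2, 300)
```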
- Transformation logic extraction
- You can enable building transformation logic descriptions from SQL code in SQL scripts.
Parent topic: Supported connectors for lineage import