Creating jobs with Masking flow

A masking flow job defines the data to be masked: either a full copy of data tables, or a subset of data tables with relationships intact. You can create a job from the Assets page of a project.

Note: Masking flow supports Hadoop Distributed File System (HDFS) as a target; however, writing to HDFS is limited to parquet files.

To create a masking flow job:

  1. From the project's Assets page, locate the asset in the section for its asset type, click the actions icon (three vertical dots) at the end of the table row, and choose Mask.

  2. Optional. Define the job details by entering a name and a description.

  3. On the Select target page, select the target connection where you want to insert the masked copy of the data. The source connection is used to read the data. You can also add a new connection. The schema maps the source table to the target table. Table definitions must already be configured in the source schema.

  4. Optional. On the Partitions page, you can edit the partition details for the asset.

  5. Optional. On the Schedule page, you can add a one-time or repeating schedule. See Job scheduling options.

  6. Review the job settings. Then create the job and run it immediately, or create the job and run it later.

    Once the masking flow job run has been created, it is listed on the Job runs tab. To see job details, click the job run. For a complete list of jobs in your project, go to the Assets tab.

Learn more