About the Data Import tool process

Use the Data Import tool for non-production purposes, such as to import sample data for demonstration or testing.

General steps for the Data Import process:
  1. Create the source system reference. Point the Data Import engine to source data, such as a specific CSV file or JDBC connection. The Data Import engine introspects the source data by examining the metadata for the source system. For CSV files, the metadata includes items such as the number of columns. For JDBC sources, the metadata includes items such as table names, column names, and data types.
  2. Use a text editor to create a mapping file. The mapping file defines the source and destination columns for Data Import to read from and insert into. Any data conversion or custom SQL is also defined at this stage.
  3. Start the mapping to begin importing data. The Data Import engine starts an asynchronous job to process the defined mapping. A unique job ID is created each time that a mapping is started. Data is read from the source columns and inserted into the destination columns according to the mapping statements in the mapping file. The Data Import engine reads from the source until the end of the input CSV file is reached, or until all records are read from a JDBC source connection.
  4. Check the status of a job ID. You can use the Data Import tool to determine the status of the job.
  5. Optionally, use the command line to display any error records for the job.
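The introspection in step 1 can be illustrated with a short sketch. The following Python code is hypothetical and not part of the Data Import tool; it shows the kind of metadata that introspection derives from a CSV source: the header column names and the column count. The sample header names are invented for illustration.

```python
import csv
import io

def introspect_csv(stream):
    """Derive CSV metadata: the header column names and the column count.
    Header columns must be unique, as Data Import requires."""
    reader = csv.reader(stream)
    header = next(reader)  # the CSV data file must have a header row
    if len(set(header)) != len(header):
        raise ValueError("CSV header columns must be unique")
    return {"columns": header, "column_count": len(header)}

# Hypothetical sample source data.
sample = io.StringIO("FIELD1,FIELD2,FIELD3\n1,a,x\n2,b,y\n")
meta = introspect_csv(sample)
print(meta["column_count"])  # 3
```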
Data Import creates and uses identifiers to track systems, import mappings, and import jobs. The following identifiers are created as part of the Data Import process:
system_id
  Specifies the unique identifier for a specific source system or CSV file.
mapping_id
  Specifies the unique identifier for a mapping from source system data to the CFDB.
job_id
  Specifies the unique identifier for each Data Import job.
CAUTION:
For a new installation, the initial set of IDs might have the same value, for example, system_id=1, mapping_id=1, and job_id=1. To avoid errors, take note of the IDs that are generated during the data import process.
Restrictions:
When you use the command-line tool to import data, note the following restrictions:
  • Data Import uses secured web services. A valid user name and password must be supplied.
  • Data Import can import only into the Counter Fraud database.
  • Data Import can import only from a CSV file or through a JDBC connection. If your source data is in another format, such as IXF, use standard database tools to import the source data into a temporary database. Then use the Data Import JDBC capability to import through a JDBC connection.
  • Data Import can import from only one CSV file at a time. If your data spans multiple CSV files, you must combine the CSV files into one file, or configure and run two Data Import jobs.
  • If you use a CSV file, the file must have the same name as the properties (mapping) file; otherwise, Data Import cannot find the file.
  • CSV data files must have a header row and the header columns must be unique.
    Note: CSV header columns are case-sensitive. For example, if a header column in the CSV data files is FIELD1, but the reference in the properties file is Field1, Data Import reports that the column does not exist in the CSV.
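The case-sensitive matching described in the note can be checked before a job is started. The following Python sketch is hypothetical, not a Data Import feature; it reports mapping references that do not exactly match a CSV header column, using the FIELD1/Field1 example from the note.

```python
def check_mapping_columns(csv_header, mapping_columns):
    """Return mapping references that do not match a CSV header column
    exactly. The comparison is case-sensitive, as in Data Import."""
    header = set(csv_header)
    return [col for col in mapping_columns if col not in header]

# Hypothetical header and mapping references.
missing = check_mapping_columns(["FIELD1", "FIELD2"], ["Field1", "FIELD2"])
print(missing)  # ['Field1'] -- 'Field1' does not match 'FIELD1'
```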
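Because Data Import reads only one CSV file at a time, data that spans multiple files must first be combined. The following Python sketch is one way to do that outside the tool; it assumes the files share an identical header row and writes that header only once. The function and file names are illustrative, not part of Data Import.

```python
import csv

def combine_csv(paths, out_path):
    """Combine CSV files that share an identical header row into one file,
    writing the header row only once. A pre-processing sketch; Data Import
    itself does not provide this step."""
    header = None
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for path in paths:
            with open(path, newline="") as src:
                reader = csv.reader(src)
                file_header = next(reader)
                if header is None:
                    header = file_header
                    writer.writerow(header)
                elif file_header != header:
                    raise ValueError(path + ": header row does not match")
                writer.writerows(reader)
```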

Considerations for importing stereotype data

If you are importing data into fields in the Counter Fraud database that are defined as "STEREOTYPE," you might need to remove leading and trailing spaces from the data. Stereotype values cannot contain spaces, and imported data that includes spaces causes errors in ICFM.

For best results, use the DB2® TRIM function to remove spaces from the stereotype fields. For example, use the following mapping statement to remove leading and trailing spaces from the STEREOTYPE field in the PHYSICAL_OBJECT table.
PHYSICAL_OBJECT.STEREOTYPE = INPUT_COLUMN_NAME USING ACTION_MATCH  "TRIM (%1)"
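If you prefer to clean the source data before it reaches the mapping, the spaces can also be stripped from the CSV file itself. The following Python sketch mirrors what the TRIM function does in the mapping statement; it is a hypothetical pre-processing step, not part of the Data Import tool, and the column name passed in is whatever source column feeds the stereotype field.

```python
import csv

def trim_column(in_path, out_path, column):
    """Strip leading and trailing spaces from one column of a CSV file,
    mirroring the effect of DB2 TRIM in the mapping statement."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            row[column] = row[column].strip()
            writer.writerow(row)
```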