The IMPORT utility populates a table with data from an input file of type ASC, DEL, or IXF. The target can be a table, a typed table, or a view. However, imports to detached tables and temporary tables are not permitted. It is also recommended that you use the MESSAGES clause so that errors, warnings, and informational messages are captured during the operation.
To successfully import data, you must have SYSADM or DBADM authority, or underlying privileges (SELECT, INSERT, CONTROL, or CREATETAB) on the target table or database, depending on which option you use. To import data into a table that has protected rows and columns, you must have LBAC credentials that allow write access to all protected data in the table. In addition, importing to table with protected rows requires that your LBAC credentials are part of the security policy protecting the table.
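As a sketch of the privilege requirements, granting a user just enough table-level privileges to run IMPORT with the INSERT option might look like the following (the table and user names here are hypothetical):

```sql
-- Hypothetical names: IMPORT ... INSERT into an existing table requires
-- at least SELECT and INSERT on the target (or CONTROL, or DBADM/SYSADM).
GRANT SELECT, INSERT ON TABLE payroll.employee TO USER dbuser
```

Options such as INSERT_UPDATE or REPLACE need additional privileges (for example UPDATE and DELETE, or CONTROL) on the target table.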
The IMPORT command supports five options:
IMPORT FROM file_name OF file_type MESSAGES message_file [ INSERT | INSERT_UPDATE | REPLACE | REPLACE_CREATE | CREATE ] INTO target_table_name
- The INSERT option inserts the imported data into the table. The target table must already exist.
- The INSERT_UPDATE option inserts data into the table, or updates existing rows of the table that have matching primary keys. The target table must already exist with a primary key defined.
- The REPLACE option deletes all existing data and inserts the imported data into an existing target table.
- With the REPLACE_CREATE option, if the target table exists, the utility deletes the existing data and inserts the new data as if the REPLACE option were specified. If the target table is not defined, the table and its associated indices are created before the data is imported. As you can imagine, the input file must be in PC/IXF format, because that format contains a structured description of the exported table. REPLACE_CREATE cannot be used if the target table is a parent table referenced by a foreign key.
- The CREATE option creates the target table and its indices, and then imports data into the new table. The only supported file format is PC/IXF. You can also specify the name of the tablespace where the new table should be created.
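For instance, an INSERT_UPDATE import from a delimited file might look like the following sketch (the file and table names are hypothetical; the target table must already have a primary key):

```sql
-- Rows in emp.del whose primary key already exists in employee are
-- updated; rows with new keys are inserted.
IMPORT FROM emp.del OF DEL MESSAGES msg.out INSERT_UPDATE INTO employee
```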
IMPORT FROM emp.ixf OF IXF MESSAGES msg.out CREATE INTO employee IN datatbsp INDEX IN indtbsp
IMPORT is basically a utility to insert data
into a table in bulk. This bulk insert operation is just like a normal
insert statement in that the activity is logged, indices are updated,
referential integrity is checked, and table constraints are checked.
IMPORT commits only once, at
the end of the operation. If a large number of rows are imported or
inserted into the table, sufficient transaction logs are required for
rollback and recovery. You can request periodic commits to prevent the
logs from getting full. By committing the inserts regularly, you also
reduce the number of rows being lost if a failure occurs during the
import operation. The COMMITCOUNT n option forces a COMMIT after the specified number of records is imported. You can also specify the AUTOMATIC option, which lets the import utility internally determine when a commit needs to be performed. The utility will consider issuing a commit to avoid filling the transaction logs or to avoid lock escalation.
IMPORT FROM myfile.ixf OF IXF COMMITCOUNT 500 MESSAGES msg.out INSERT INTO newtable
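The same command with automatic commits, as a sketch, would be:

```sql
-- Let the utility decide when to commit, based on log space
-- consumption and lock escalation considerations.
IMPORT FROM myfile.ixf OF IXF COMMITCOUNT AUTOMATIC MESSAGES msg.out INSERT INTO newtable
```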
If for some reason the above command fails during its execution, you could use the message file to determine the last row that was successfully imported and committed. Then, you could restart the import with the RESTARTCOUNT option. Note that the behavior of the SKIPCOUNT option is the same as RESTARTCOUNT. In the command below, the utility will skip the first 30,000 records before beginning the IMPORT operation.
IMPORT FROM myfile.ixf OF IXF COMMITCOUNT 500 RESTARTCOUNT 30000 ROWCOUNT 100000 MESSAGES msg.out INSERT INTO newtable
In the example, notice that the ROWCOUNT option is also used. It specifies the number of physical records to be imported. Because RESTARTCOUNT is also specified, the import utility will skip the first 30,000 records and import the next 100,000 records into the table.
By default, the import utility will acquire an exclusive lock on the
target table before any rows are inserted. The exclusive lock is
released as soon as the import completes. This is the behavior of the
ALLOW NO ACCESS option. To allow concurrent applications to access the table data, you can use the ALLOW WRITE ACCESS option. Note that this option is not compatible with the REPLACE, CREATE, or REPLACE_CREATE import options.
ALLOW WRITE ACCESS option example
IMPORT FROM myfile.ixf OF IXF ALLOW WRITE ACCESS MESSAGES msg.out INSERT INTO newtable
To import XML files, use the
XML FROM option
to specify one or more paths where XML files are stored. Otherwise,
the import utility will look for the XML files in the current
directory. You can choose how the XML documents are parsed: strip whitespace or preserve whitespace. If the XMLPARSE option is not specified, the parsing behavior for XML documents is determined by the CURRENT XMLPARSE OPTION special register.
IMPORT FROM myfile.ixf OF IXF XML FROM d:\xmlpath XMLPARSE PRESERVE WHITESPACE MESSAGES msg.out INSERT INTO newtable
When you insert or update an XML document, you might want to determine
whether the structure, content, and data types of the XML document are
valid. The import utility also supports XML validation through the use of the XMLVALIDATE option. There are three ways to specify the validating schema:
- USING XDS — Recall that you can export XML schema information and store it in the SCH attribute of the XML Data Specifier (XDS). The value of the SCH attribute is used to perform validation. If there is no SCH attribute in the XDS, the DEFAULT, IGNORE, or MAP clauses will be considered.
- USING SCHEMA schema-sqlid — Use the XML schema specified in this clause.
- USING SCHEMALOCATION HINTS — Validate the XML documents against the schemas identified by the XML schema location hints in the source XML documents.
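As a sketch of the second method, the following command validates every imported XML document against one explicitly named schema (S1.SCHEMA_A here is a hypothetical schema assumed to be registered in the XML schema repository):

```sql
-- Every XML document in the input is validated against S1.SCHEMA_A,
-- regardless of any SCH attribute in the XDS.
IMPORT FROM myfile.ixf OF IXF XML FROM d:\xmlpath XMLVALIDATE USING SCHEMA S1.SCHEMA_A MESSAGES msg.out INSERT INTO newtable
```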
IMPORT FROM myfile.ixf OF IXF XML FROM d:\xmlpath XMLPARSE PRESERVE WHITESPACE XMLVALIDATE USING XDS DEFAULT S1.SCHEMA_A IGNORE (S1.SCHEMA_X, S1.SCHEMA_Y, S1.SCHEMA_Z) MAP (S1.SCHEMA_A, S1.SCHEMA_B) COMMITCOUNT 500 RESTARTCOUNT 30000 MESSAGES msg.out INSERT INTO newtable
This IMPORT command will:
- Insert data from myfile.ixf and XML files located in d:\xmlpath.
- Preserve whitespace when the XML documents are parsed.
- Validate each XML document using the schema information identified in the SCH attribute of the XDS. If the XDS for a particular row does not contain a SCH attribute, S1.SCHEMA_A is used instead.
- Skip validation for any imported XML document whose SCH attribute specifies S1.SCHEMA_X, S1.SCHEMA_Y, or S1.SCHEMA_Z.
- Map an SCH attribute of S1.SCHEMA_A to S1.SCHEMA_B. Note that although the DEFAULT clause specifies S1.SCHEMA_A, this mapping is not applied to the default schema.
- Issue a commit after every 500 rows are imported.
- Start the import operation at record 30,001, skipping the first 30,000 records.
- Write any errors, warnings, and informational messages to the msg.out file.
- Insert (append) the new data into newtable.
This example only gives you some idea of how imported XML documents can be validated. There are more examples in the DB2 Information Center that demonstrate the power of the XMLVALIDATE option.
The IMPORT utility also supports file type modifiers to customize the import operation. A complete list of modifiers can be found in the DB2 Command Reference, under IMPORT. A few of them are outlined here:
- compound=x — Uses non-atomic compound SQL to insert data; x statements are attempted each time.
- indexschema=schema — Uses the specified schema for the index during index creation.
- striptblanks — Truncates any trailing blank spaces when loading data into a variable-length field.
- lobsinfile — Indicates that LOB data is being imported. The utility will check the LOBS FROM clause to get the paths of the input LOB files.
Listing 16. File type modifiers example
IMPORT FROM inputfile.asc OF ASC LOBS FROM /u/db2load/lob1, /u/db2load/lob2 MODIFIED BY compound=5 lobsinfile INSERT INTO newtable
Data Studio provides an easy-to-use graphical interface for performing import operations. All the import options and file type modifiers discussed in the previous sections are also available in this interface.