DB2 Version 9.7 for Linux, UNIX, and Windows

IMPORT command

Inserts data from an external file with a supported file format into a table, hierarchy, view, or nickname. LOAD is a faster alternative, but the load utility does not support loading data at the hierarchy level.

Quick link to File type modifiers for the import utility.

Authorization

Required connection

Database. If implicit connect is enabled, a connection to the default database is established. Utility access to Linux, UNIX, or Windows database servers from Linux, UNIX, or Windows clients must be a direct connection through the engine and not through a DB2 Connect™ gateway or loop back environment.

Command syntax

>>-IMPORT FROM--filename--OF--filetype-------------------------->

>--+-------------------------+--+------------------------+------>
   |            .-,--------. |  |           .-,--------. |   
   |            V          | |  |           V          | |   
   '-LOBS FROM----lob-path-+-'  '-XML FROM----xml-path-+-'   

>--+-------------------------------+---------------------------->
   |              .--------------. |   
   |              V              | |   
   '-MODIFIED BY----filetype-mod-+-'   

>--+-------------------------------------------------------------------------------------------------------+-->
   |                 .-,------------------------.                                                          |   
   |                 V                          |                                                          |   
   '-METHOD--+-L--(----column-start--column-end-+--)--+------------------------------------------------+-+-'   
             |                                        |                     .-,-------------------.    | |     
             |                                        |                     V                     |    | |     
             |                                        '-NULL INDICATORS--(----null-indicator-list-+--)-' |     
             |       .-,-----------.                                                                     |     
             |       V             |                                                                     |     
             +-N--(----column-name-+--)------------------------------------------------------------------+     
             |       .-,---------------.                                                                 |     
             |       V                 |                                                                 |     
             '-P--(----column-position-+--)--------------------------------------------------------------'     

>--+------------------------------------+----------------------->
   '-XMLPARSE--+-STRIP----+--WHITESPACE-'   
               '-PRESERVE-'                 

>--+--------------------------------------------------------------------------------------+-->
   '-XMLVALIDATE USING--+-XDS--+-----------------------+--| Ignore and Map parameters |-+-'   
                        |      '-DEFAULT--schema-sqlid-'                                |     
                        +-SCHEMA--schema-sqlid------------------------------------------+     
                        '-SCHEMALOCATION HINTS------------------------------------------'     

   .-ALLOW NO ACCESS----.                                   
>--+--------------------+--+----------------------------+------->
   '-ALLOW WRITE ACCESS-'  '-COMMITCOUNT--+-n---------+-'   
                                          '-AUTOMATIC-'     

>--+---------------------+--+-------------+--------------------->
   '-+-RESTARTCOUNT-+--n-'  '-ROWCOUNT--n-'   
     '-SKIPCOUNT----'                         

>--+-----------------+--+-----------+--------------------------->
   '-WARNINGCOUNT--n-'  '-NOTIMEOUT-'   

>--+------------------------+----------------------------------->
   '-MESSAGES--message-file-'   

>--+-+-INSERT---------+--INTO--+-table-name--+-------------------------+-+----------------------+-><
   | +-INSERT_UPDATE--+        |             |    .-,-------------.    | |                      |   
   | +-REPLACE--------+        |             |    V               |    | |                      |   
   | '-REPLACE_CREATE-'        |             '-(----insert-column-+--)-' |                      |   
   |                           '-| hierarchy description |---------------'                      |   
   '-CREATE--INTO--+-table-name--+-------------------------+--------------+--| tblspace-specs |-'   
                   |             |    .-,-------------.    |              |                         
                   |             |    V               |    |              |                         
                   |             '-(----insert-column-+--)-'              |                         
                   '-| hierarchy description |--+-AS ROOT TABLE---------+-'                         
                                                '-UNDER--sub-table-name-'                           

Ignore and Map parameters

|--+--------------------------------+--------------------------->
   |            .-,------------.    |   
   |            V              |    |   
   '-IGNORE--(----schema-sqlid-+--)-'   

>--+----------------------------------------------------+-------|
   |         .-,-----------------------------------.    |   
   |         V                                     |    |   
   '-MAP--(----(--schema-sqlid--,--schema-sqlid--)-+--)-'   

hierarchy description

   .-ALL TABLES---------.           
|--+-| sub-table-list |-+--+----+------------------------------->
                           '-IN-'   

>--HIERARCHY--+-STARTING--sub-table-name-+----------------------|
              '-| traversal-order-list |-'   

sub-table-list

      .-,-------------------------------------------.      
      V                                             |      
|--(----sub-table-name--+-------------------------+-+--)--------|
                        |    .-,-------------.    |        
                        |    V               |    |        
                        '-(----insert-column-+--)-'        

traversal-order-list

      .-,--------------.      
      V                |      
|--(----sub-table-name-+--)-------------------------------------|

tblspace-specs

|--+----------------------------------------------------------------------------------+--|
   '-IN--tablespace-name--+---------------------------+--+--------------------------+-'   
                          '-INDEX IN--tablespace-name-'  '-LONG IN--tablespace-name-'     

Command parameters

ALL TABLES
An implicit keyword for hierarchy only. When importing a hierarchy, the default is to import all tables specified in the traversal order.
ALLOW NO ACCESS
Runs import in the offline mode. An exclusive (X) lock on the target table is acquired before any rows are inserted. This prevents concurrent applications from accessing table data. This is the default import behavior.
ALLOW WRITE ACCESS
Runs import in the online mode. An intent exclusive (IX) lock on the target table is acquired when the first row is inserted. This allows concurrent readers and writers to access table data. Online mode is not compatible with the REPLACE, CREATE, or REPLACE_CREATE import options. Online mode is not supported in conjunction with buffered inserts. The import operation will periodically commit inserted data to prevent lock escalation to a table lock and to avoid running out of active log space. These commits will be performed even if the COMMITCOUNT option was not used. During each commit, import will lose its IX table lock, and will attempt to reacquire it after the commit. This parameter is required when you import to a nickname, and COMMITCOUNT must be specified with a valid number (AUTOMATIC is not considered a valid option).
AS ROOT TABLE
Creates one or more sub-tables as a stand-alone table hierarchy.
COMMITCOUNT n | AUTOMATIC
Performs a COMMIT after every n records are imported. When compound inserts are used, a user-specified commit frequency of n is rounded up to the first integer multiple of the compound count value. When AUTOMATIC is specified, import internally determines when a commit needs to be performed. The utility will commit for either of two reasons:
  • to avoid running out of active log space
  • to avoid lock escalation from row level to table level
If the ALLOW WRITE ACCESS option is specified, and the COMMITCOUNT option is not specified, the import utility will perform commits as if COMMITCOUNT AUTOMATIC had been specified.
The ability of the import operation to avoid running out of active log space is affected by the DB2® registry variable DB2_FORCE_APP_ON_MAX_LOG:
  • If DB2_FORCE_APP_ON_MAX_LOG is set to FALSE and the COMMITCOUNT AUTOMATIC command option is specified, the import utility will be able to automatically avoid running out of active log space.
  • If DB2_FORCE_APP_ON_MAX_LOG is set to FALSE and the COMMITCOUNT n command option is specified, the import utility will attempt to resolve the log full condition if it encounters an SQL0964C (Transaction Log Full) while inserting or updating a record. It will perform an unconditional commit and then will reattempt to insert or update the record. If this does not help resolve the issue (which would be the case when the log full is attributed to other activity on the database), then the IMPORT command will fail as expected; however, the number of rows committed might not be a multiple of the COMMITCOUNT n value. To avoid processing the rows that were already committed when you retry the import operation, use the RESTARTCOUNT or SKIPCOUNT command parameters.
  • If DB2_FORCE_APP_ON_MAX_LOG is set to TRUE (which is the default), the import operation will fail if it encounters an SQL0964C while inserting or updating a record. This can occur irrespective of whether you specify COMMITCOUNT AUTOMATIC or COMMITCOUNT n.

    The application is forced off the database and the current unit of work is rolled back. To avoid processing the rows that were already committed when you retry the import operation, use the RESTARTCOUNT or SKIPCOUNT command parameters.
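The rounding of a user-specified commit frequency under compound inserts can be sketched as follows; the standalone helper is illustrative only (DB2 performs this calculation internally), not part of any DB2 API:

```python
import math

def effective_commitcount(n: int, compound_count: int) -> int:
    """Round a user-specified COMMITCOUNT n up to the first integer
    multiple of the compound insert count, as described above.
    (Hypothetical helper for illustration.)"""
    return math.ceil(n / compound_count) * compound_count

# With a compound count of 7, COMMITCOUNT 100 commits every 105 records.
```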

CREATE
Note: The CREATE parameter is deprecated and may be removed in a future release. For additional details, see "IMPORT command options CREATE and REPLACE_CREATE are deprecated".

Creates the table definition and row contents in the code page of the database. If the data was exported from a DB2 table, sub-table, or hierarchy, indexes are created. If this option operates on a hierarchy, and data was exported from DB2, a type hierarchy will also be created. This option can only be used with IXF files.

This parameter is not valid when you import to a nickname.

Note: If the data was exported from an MVS™ host database, and it contains LONGVAR fields whose lengths, calculated on the page size, are more than 254, CREATE might fail because the rows are too long. See "Imported table re-creation" for a list of restrictions. In this case, the table should be created manually, and IMPORT with INSERT should be invoked, or, alternatively, the LOAD command should be used.
DEFAULT schema-sqlid
This option can only be used when the USING XDS parameter is specified. The schema specified through the DEFAULT clause identifies a schema to use for validation when the XML Data Specifier (XDS) of an imported XML document does not contain an SCH attribute identifying an XML Schema.

The DEFAULT clause takes precedence over the IGNORE and MAP clauses. If an XDS satisfies the DEFAULT clause, the IGNORE and MAP specifications will be ignored.

FROM filename
Specifies the file that contains the data to be imported. If the path is omitted, the current working directory is used.
HIERARCHY
Specifies that hierarchical data is to be imported.
IGNORE schema-sqlid
This option can only be used when the USING XDS parameter is specified. The IGNORE clause specifies a list of one or more schemas to ignore if they are identified by an SCH attribute. If an SCH attribute exists in the XML Data Specifier for an imported XML document, and the schema identified by the SCH attribute is included in the list of schemas to ignore, then no schema validation will occur for the imported XML document.

If a schema is specified in the IGNORE clause, it cannot also be present in the left side of a schema pair in the MAP clause.

The IGNORE clause applies only to the XDS. A schema that is mapped by the MAP clause will not be subsequently ignored if specified by the IGNORE clause.

IN tablespace-name
Identifies the table space in which the table will be created. The table space must exist, and must be a REGULAR table space. If no other table space is specified, all table parts are stored in this table space. If this clause is not specified, the table is created in a table space created by the authorization ID. If none is found, the table is placed into the default table space USERSPACE1. If USERSPACE1 has been dropped, table creation fails.
INDEX IN tablespace-name
Identifies the table space in which any indexes on the table will be created. This option is allowed only when the primary table space specified in the IN clause is a DMS table space. The specified table space must exist, and must be a REGULAR or LARGE DMS table space.
Note: Specifying which table space will contain an index can only be done when the table is created.
insert-column
Specifies the name of a column in the table or the view into which data is to be inserted.
INSERT
Adds the imported data to the table without changing the existing table data.
INSERT_UPDATE
Adds rows of imported data to the target table, or updates existing rows (of the target table) with matching primary keys.
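The INSERT_UPDATE behavior can be illustrated with a minimal sketch that keys rows by their primary-key values; the dict-based table is an illustration only, not the DB2 storage model:

```python
def insert_update(table, rows, key_cols):
    """Sketch of INSERT_UPDATE semantics: a row whose primary-key
    values match an existing row updates it; any other row is
    inserted. `table` maps primary-key tuples to row dicts
    (illustrative structure, not a DB2 API)."""
    for row in rows:
        pk = tuple(row[c] for c in key_cols)
        table[pk] = row  # update on a key match, insert otherwise
    return table
```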
INTO table-name
Specifies the database table into which the data is to be imported. This table cannot be a system table, a created temporary table, a declared temporary table, or a summary table.

An alias can be used for INSERT, INSERT_UPDATE, or REPLACE, except when connecting to an earlier server, in which case a fully qualified or an unqualified table name should be used. A qualified table name is in the form schema.tablename, where schema is the user name under which the table was created.

LOBS FROM lob-path
Specifies one or more paths that store LOB files. The names of the LOB data files are stored in the main data file (ASC, DEL, or IXF), in the column that will be loaded into the LOB column. The maximum number of paths that can be specified is 999. This will implicitly activate the LOBSINFILE behavior.

This parameter is not valid when you import to a nickname.

LONG IN tablespace-name
Identifies the table space in which the values of any long columns (LONG VARCHAR, LONG VARGRAPHIC, LOB data types, or distinct types with any of these as source types) will be stored. This option is allowed only if the primary table space specified in the IN clause is a DMS table space. The table space must exist, and must be a LARGE DMS table space.
MAP schema-sqlid
This option can only be used when the USING XDS parameter is specified. Use the MAP clause to specify alternate schemas to use in place of those specified by the SCH attribute of an XML Data Specifier (XDS) for each imported XML document. The MAP clause specifies a list of one or more schema pairs, where each pair represents a mapping of one schema to another. The first schema in the pair represents a schema that is referred to by an SCH attribute in an XDS. The second schema in the pair represents the schema that should be used to perform schema validation.

If a schema is present in the left side of a schema pair in the MAP clause, it cannot also be specified in the IGNORE clause.

Once a schema pair mapping is applied, the result is final. The mapping operation is non-transitive, and therefore the schema chosen will not be subsequently applied to another schema pair mapping.

A schema cannot be mapped more than once, meaning that it cannot appear on the left side of more than one pair.

MESSAGES message-file
Specifies the destination for warning and error messages that occur during an import operation. If the file already exists, the import utility appends the information. If the complete path to the file is not specified, the utility uses the current directory and the default drive as the destination. If message-file is omitted, the messages are written to standard output.
METHOD
L
Specifies the start and end column numbers from which to import data. A column number is a byte offset from the beginning of a row of data. It is numbered starting from 1.
Note: This method can only be used with ASC files, and is the only valid option for that file type.
N
Specifies the names of the columns in the data file to be imported. The case of these column names must match the case of the corresponding names in the system catalogs. Each table column that is not nullable should have a corresponding entry in the METHOD N list. For example, given data fields F1, F2, F3, F4, F5, and F6, and table columns C1 INT, C2 INT NOT NULL, C3 INT NOT NULL, and C4 INT, method N (F2, F1, F4, F3) is a valid request, while method N (F2, F1) is not valid.
Note: This method can only be used with IXF files.
P
Specifies the field numbers (numbered from 1) of the input data fields to be imported. Each table column that is not nullable should have a corresponding entry in the METHOD P list. For example, given data fields F1, F2, F3, F4, F5, and F6, and table columns C1 INT, C2 INT NOT NULL, C3 INT NOT NULL, and C4 INT, method P (2, 1, 4, 3) is a valid request, while method P (2, 1) is not valid. This method can only be used with file types IXF or DEL, and is the only valid method for the DEL file type.
For each of the fields specified by method P, you need to define a corresponding column in the action statement, unless all columns are accounted for or the first x columns are going to be loaded, as shown in the following example:
  db2 import from datafile1.del of del method P(1, 3, 4)
      replace into table1 (c1, c3, c4)
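The field selection that METHOD P performs can be sketched as follows; the function is illustrative only (DB2 applies this mapping internally), not a DB2 API:

```python
def method_p(record, positions):
    """Sketch of METHOD P: select input fields by their 1-based
    positions, in the order listed, for mapping to the target
    columns. (Hypothetical helper for illustration.)"""
    return [record[p - 1] for p in positions]

# method_p(["F1","F2","F3","F4","F5","F6"], (2, 1, 4, 3)) selects
# ["F2", "F1", "F4", "F3"], matching the valid request described above.
```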
MODIFIED BY filetype-mod
Specifies file type modifier options. See File type modifiers for the import utility.
NOTIMEOUT
Specifies that the import utility will not time out while waiting for locks. This option supersedes the locktimeout database configuration parameter. Other applications are not affected.
NULL INDICATORS null-indicator-list
This option can only be used when the METHOD L parameter is specified. That is, the input file is an ASC file. The null indicator list is a comma-separated list of positive integers specifying the column number of each null indicator field. The column number is the byte offset of the null indicator field from the beginning of a row of data. There must be one entry in the null indicator list for each data field defined in the METHOD L parameter. A column number of zero indicates that the corresponding data field always contains data.

A value of Y in the NULL indicator column specifies that the column data is NULL. Any character other than Y in the NULL indicator column specifies that the column data is not NULL, and that column data specified by the METHOD L option will be imported.

The NULL indicator character can be changed using the MODIFIED BY option, with the nullindchar file type modifier.
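How METHOD L byte positions combine with a null indicator list can be sketched as follows; the function and parameter names are hypothetical, and the default indicator character Y stands in for any nullindchar override:

```python
def parse_asc_row(row, columns, null_indicators, null_char="Y"):
    """Sketch of METHOD L with NULL INDICATORS for one ASC row.
    `columns` is a list of (start, end) byte positions, numbered
    from 1; `null_indicators` gives the 1-based indicator position
    for each column, where 0 means the field always contains data.
    (Illustrative helper, not a DB2 API.)"""
    values = []
    for (start, end), ind in zip(columns, null_indicators):
        if ind != 0 and row[ind - 1] == null_char:
            values.append(None)                 # indicator says NULL
        else:
            values.append(row[start - 1:end])   # raw field bytes, padding kept
    return values
```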

OF filetype
Specifies the format of the data in the input file:
  • ASC (non-delimited ASCII format)
  • DEL (delimited ASCII format), which is used by a variety of database manager and file manager programs
  • WSF (work sheet format), which is used by programs such as:
    • Lotus® 1-2-3®
    • Lotus Symphony®
  • IXF (Integration Exchange Format, PC version) is a binary format that is used exclusively by DB2.
Important: Support for the WSF file format is deprecated and might be removed in a future release. It is recommended that you start using a supported file format instead of WSF files before support is removed.

The WSF file type is not supported when you import to a nickname.

REPLACE
Deletes all existing data from the table by truncating the data object, and inserts the imported data. The table definition and the index definitions are not changed. This option can only be used if the table exists. If this option is used when moving data between hierarchies, only the data for an entire hierarchy, not individual subtables, can be replaced.

This parameter is not valid when you import to a nickname.

This option does not honor the CREATE TABLE statement's NOT LOGGED INITIALLY (NLI) clause or the ALTER TABLE statement's ACTIVE NOT LOGGED INITIALLY clause.

If an import with the REPLACE option is performed within the same transaction as a CREATE TABLE or ALTER TABLE statement where the NLI clause is invoked, the import will not honor the NLI clause. All inserts will be logged.

Workaround 1
Delete the contents of the table using the DELETE statement, then invoke the import with the INSERT option.
Workaround 2
Drop the table and recreate it, then invoke the import with the INSERT option.

This limitation applies to DB2 Universal Database™ Version 7 and DB2 UDB Version 8.

REPLACE_CREATE
Note: The REPLACE_CREATE parameter is deprecated and may be removed in a future release. For additional details, see "IMPORT command options CREATE and REPLACE_CREATE are deprecated".

If the table exists, deletes all existing data from the table by truncating the data object, and inserts the imported data without changing the table definition or the index definitions.

If the table does not exist, creates the table and index definitions, as well as the row contents, in the code page of the database. See Imported table re-creation for a list of restrictions.

This option can only be used with IXF files. If this option is used when moving data between hierarchies, only the data for an entire hierarchy, not individual subtables, can be replaced.

This parameter is not valid when you import to a nickname.

RESTARTCOUNT n
Specifies that an import operation is to be started at record n+1. The first n records are skipped. This option is functionally equivalent to SKIPCOUNT. RESTARTCOUNT and SKIPCOUNT are mutually exclusive.
ROWCOUNT n
Specifies the number n of physical records in the file to be imported (inserted or updated). Allows a user to import only n rows from a file, starting from the record determined by the SKIPCOUNT or RESTARTCOUNT options. If the SKIPCOUNT or RESTARTCOUNT options are not specified, the first n rows are imported. If SKIPCOUNT m or RESTARTCOUNT m is specified, rows m+1 to m+n are imported. When compound inserts are used, user specified ROWCOUNT n is rounded up to the first integer multiple of the compound count value.
SKIPCOUNT n
Specifies that an import operation is to be started at record n+1. The first n records are skipped. This option is functionally equivalent to RESTARTCOUNT. SKIPCOUNT and RESTARTCOUNT are mutually exclusive.
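The interaction between SKIPCOUNT (or RESTARTCOUNT) and ROWCOUNT reduces to a simple record range, sketched below; the function is illustrative only, and applies before any compound-insert rounding of ROWCOUNT:

```python
def imported_records(skipcount, rowcount):
    """Records m+1 through m+n are imported when SKIPCOUNT (or
    RESTARTCOUNT) m and ROWCOUNT n are both specified; records
    are numbered from 1. (Hypothetical helper, not a DB2 API.)"""
    return range(skipcount + 1, skipcount + rowcount + 1)

# SKIPCOUNT 50 with ROWCOUNT 10 imports records 51 through 60.
```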
STARTING sub-table-name
A keyword for hierarchy only, requesting the default order, starting from sub-table-name. For PC/IXF files, the default order is the order stored in the input file. The default order is the only valid order for the PC/IXF file format.
sub-table-list
For typed tables with the INSERT or the INSERT_UPDATE option, a list of sub-table names is used to indicate the sub-tables into which data is to be imported.
traversal-order-list
For typed tables with the INSERT, INSERT_UPDATE, or the REPLACE option, a list of sub-table names is used to indicate the traversal order of the importing sub-tables in the hierarchy.
UNDER sub-table-name
Specifies a parent table for creating one or more sub-tables.
WARNINGCOUNT n
Stops the import operation after n warnings. Set this parameter if no warnings are expected, but verification that the correct file and table are being used is desired. If the import file or the target table is specified incorrectly, the import utility will generate a warning for each row that it attempts to import, which will cause the import to fail. If n is zero, or this option is not specified, the import operation will continue regardless of the number of warnings issued.
XML FROM xml-path
Specifies one or more paths that contain the XML files.
XMLPARSE
Specifies how XML documents are parsed. If this option is not specified, the parsing behavior for XML documents will be determined by the value of the CURRENT IMPLICIT XMLPARSE OPTION special register.
STRIP WHITESPACE
Specifies to remove whitespace when the XML document is parsed.
PRESERVE WHITESPACE
Specifies not to remove whitespace when the XML document is parsed.
XMLVALIDATE
Specifies that XML documents are validated against a schema, when applicable.
USING XDS
XML documents are validated against the XML schema identified by the XML Data Specifier (XDS) in the main data file. By default, if the XMLVALIDATE option is invoked with the USING XDS clause, the schema used to perform validation will be determined by the SCH attribute of the XDS. If an SCH attribute is not present in the XDS, no schema validation will occur unless a default schema is specified by the DEFAULT clause.

The DEFAULT, IGNORE, and MAP clauses can be used to modify the schema determination behavior. These three optional clauses apply directly to the specifications of the XDS, and not to each other. For example, if a schema is selected because it is specified by the DEFAULT clause, it will not be ignored if also specified by the IGNORE clause. Similarly, if a schema is selected because it is specified as the first part of a pair in the MAP clause, it will not be re-mapped if also specified in the second part of another MAP clause pair.
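The precedence rules above can be summarized in a short sketch; the function and its parameter names are hypothetical, and `sch` stands for the SCH attribute value from the XDS (or None when the attribute is absent):

```python
def resolve_validation_schema(sch, default=None, ignore=(), schema_map=()):
    """Sketch of XMLVALIDATE USING XDS schema determination:
    DEFAULT applies only when the XDS carries no SCH attribute;
    IGNORE suppresses validation; MAP pairs apply once each,
    non-transitively.  Returns the schema to validate against,
    or None when no validation occurs. (Illustrative, not a DB2 API.)"""
    if sch is None:
        return default                # DEFAULT clause, if specified
    if sch in ignore:
        return None                   # IGNORE clause: skip validation
    for left, right in schema_map:
        if sch == left:
            return right              # applied once; no re-mapping
    return sch                        # SCH attribute used directly
```

Applied to Example 2 in the XMLVALIDATE examples below, an XDS with no SCH attribute resolves to the DEFAULT schema, SCH='S10.SCHEMA_J' resolves to no validation, and SCH='S6.SCHEMA_F' maps once to S3.SCHEMA_C.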

USING SCHEMA schema-sqlid
XML documents are validated against the XML schema with the specified SQL identifier. In this case, the SCH attribute of the XML Data Specifier (XDS) will be ignored for all XML columns.
USING SCHEMALOCATION HINTS
XML documents are validated against the schemas identified by XML schema location hints in the source XML documents. If a schemaLocation attribute is not found in the XML document, no validation will occur. When the USING SCHEMALOCATION HINTS clause is specified, the SCH attribute of the XML Data Specifier (XDS) will be ignored for all XML columns.
See examples of the XMLVALIDATE option below.

Examples

Example 1

The following example shows how to import information from myfile.ixf to the STAFF table:
   db2 import from myfile.ixf of ixf messages msg.txt insert into staff

SQL3150N  The H record in the PC/IXF file has product "DB2    01.00", date
"19970220", and time "140848".
 
SQL3153N  The T record in the PC/IXF file has name "myfile", 
qualifier "        ", and source "            ".
 
SQL3109N  The utility is beginning to load data from file "myfile".
 
SQL3110N  The utility has completed processing.  "58" rows were read 
from the input file.
 
SQL3221W  ...Begin COMMIT WORK. Input Record Count = "58".
 
SQL3222W  ...COMMIT of any database changes was successful.
 
SQL3149N  "58" rows were processed from the input file.  "58" rows were
successfully inserted into the table.  "0" rows were rejected.

Example 2 (Importing into a table with an identity column)

TABLE1 has 4 columns:
  • C1 VARCHAR(30)
  • C2 INT GENERATED BY DEFAULT AS IDENTITY
  • C3 DECIMAL(7,2)
  • C4 CHAR(1)

TABLE2 is the same as TABLE1, except that C2 is a GENERATED ALWAYS identity column.

Data records in DATAFILE1 (DEL format):
   "Liszt"
   "Hummel",,187.43, H
   "Grieg",100, 66.34, G
   "Satie",101, 818.23, I
Data records in DATAFILE2 (DEL format):
   "Liszt", 74.49, A
   "Hummel", 0.01, H
   "Grieg", 66.34, G
   "Satie", 818.23, I
The following command generates identity values for rows 1 and 2, since no identity values are supplied in DATAFILE1 for those rows. Rows 3 and 4, however, are assigned the user-supplied identity values of 100 and 101, respectively.
   db2 import from datafile1.del of del replace into table1
To import DATAFILE1 into TABLE1 so that identity values are generated for all rows, issue one of the following commands:
   db2 import from datafile1.del of del method P(1, 3, 4)
      replace into table1 (c1, c3, c4)
   db2 import from datafile1.del of del modified by identityignore
      replace into table1
To import DATAFILE2 into TABLE1 so that identity values are generated for each row, issue one of the following commands:
   db2 import from datafile2.del of del replace into table1 (c1, c3, c4)
   db2 import from datafile2.del of del modified by identitymissing
      replace into table1

If DATAFILE1 is imported into TABLE2 without using any of the identity-related file type modifiers, rows 1 and 2 will be inserted, but rows 3 and 4 will be rejected, because they supply their own non-NULL values, and the identity column is GENERATED ALWAYS.

Examples of using the XMLVALIDATE clause

Example 1 (XMLVALIDATE USING XDS)

For the following XMLVALIDATE clause:
XMLVALIDATE USING XDS
  IGNORE (S1.SCHEMA_A)
  MAP ((S1.SCHEMA_A, S2.SCHEMA_B))
The import would fail due to invalid syntax, since the IGNORE of S1.SCHEMA_A would conflict with the MAP of S1.SCHEMA_A to S2.SCHEMA_B.

Example 2 (XMLVALIDATE USING XDS)

For the following XMLVALIDATE clause:
XMLVALIDATE USING XDS
  DEFAULT S8.SCHEMA_H
  IGNORE (S9.SCHEMA_I, S10.SCHEMA_J)
  MAP ((S1.SCHEMA_A, S2.SCHEMA_B), (S3.SCHEMA_C, S5.SCHEMA_E),
    (S6.SCHEMA_F, S3.SCHEMA_C), (S4.SCHEMA_D, S7.SCHEMA_G))
For an XML column that contains the following XDS:
<XDS FIL='xmlfile.001.xml' />
The XML schema with SQL identifier "S8.SCHEMA_H" is used to validate the document in file "xmlfile.001.xml", since "S8.SCHEMA_H" was specified as the default schema to use.
For an XML column that contains the following XDS:
<XDS FIL='xmlfile.002.xml' OFF='10' LEN='500' SCH='S10.SCHEMA_J' />
No schema validation occurs for the document in file "xmlfile.002.xml", since although the XDS specifies "S10.SCHEMA_J" as the schema to use, that schema is part of the IGNORE clause. The document contents can be found at byte offset 10 in the file (meaning the 11th byte), and is 500 bytes long.
For an XML column that contains the following XDS:
<XDS FIL='xmlfile.003.xml' SCH='S6.SCHEMA_F' />
The XML schema with SQL identifier "S3.SCHEMA_C" is used to validate the document in file "xmlfile.003.xml". This is because the MAP clause specifies that schema "S6.SCHEMA_F" should be mapped to schema "S3.SCHEMA_C". Note that further mapping does not take place, therefore the mapping of schema "S3.SCHEMA_C" to schema "S5.SCHEMA_E" does not apply in this case.
For an XML column that contains the following XDS:
<XDS FIL='xmlfile.004.xml' SCH='S11.SCHEMA_K' />
The XML schema with SQL identifier "S11.SCHEMA_K" is used to validate the document in file "xmlfile.004.xml". Note that none of the DEFAULT, IGNORE, or MAP specifications apply in this case.

Example 3 (XMLVALIDATE USING XDS)

For the following XMLVALIDATE clause:
XMLVALIDATE USING XDS
  DEFAULT S1.SCHEMA_A
  IGNORE (S1.SCHEMA_A)
For an XML column that contains the following XDS:
<XDS FIL='xmlfile.001.xml' />
The XML schema with SQL identifier "S1.SCHEMA_A" is used to validate the document in file "xmlfile.001.xml", since "S1.SCHEMA_A" was specified as the default schema to use.
For an XML column that contains the following XDS:
<XDS FIL='xmlfile.002.xml' SCH='S1.SCHEMA_A' />
No schema validation occurs for the document in file "xmlfile.002.xml", since although the XDS specifies "S1.SCHEMA_A" as the schema to use, that schema is part of the IGNORE clause.

Example 4 (XMLVALIDATE USING XDS)

For the following XMLVALIDATE clause:
XMLVALIDATE USING XDS
  DEFAULT S1.SCHEMA_A
  MAP ((S1.SCHEMA_A, S2.SCHEMA_B), (S2.SCHEMA_B, S1.SCHEMA_A))
For an XML column that contains the following XDS:
<XDS FIL='xmlfile.001.xml' />
The XML schema with SQL identifier "S1.SCHEMA_A" is used to validate the document in file "xmlfile.001.xml", since "S1.SCHEMA_A" was specified as the default schema to use. Note that since the DEFAULT clause was applied, the MAP clause is not subsequently applied. Therefore the mapping of schema "S1.SCHEMA_A" to schema "S2.SCHEMA_B" does not apply in this case.
For an XML column that contains the following XDS:
<XDS FIL='xmlfile.002.xml' SCH='S1.SCHEMA_A' />
The XML schema with SQL identifier "S2.SCHEMA_B" is used to validate the document in file "xmlfile.002.xml". This is because the MAP clause specifies that schema "S1.SCHEMA_A" should be mapped to schema "S2.SCHEMA_B". Note that further mapping does not take place, therefore the mapping of schema "S2.SCHEMA_B" to schema "S1.SCHEMA_A" does not apply in this case.
For an XML column that contains the following XDS:
<XDS FIL='xmlfile.003.xml' SCH='S2.SCHEMA_B' />
The XML schema with SQL identifier "S1.SCHEMA_A" is used to validate the document in file "xmlfile.003.xml". This is because the MAP clause specifies that schema "S2.SCHEMA_B" should be mapped to schema "S1.SCHEMA_A". Note that further mapping does not take place, therefore the mapping of schema "S1.SCHEMA_A" to schema "S2.SCHEMA_B" does not apply in this case.

Example 5 (XMLVALIDATE USING SCHEMA)

For the following XMLVALIDATE clause:
XMLVALIDATE USING SCHEMA S2.SCHEMA_B
For an XML column that contains the following XDS:
<XDS FIL='xmlfile.001.xml' />
The document in file "xmlfile.001.xml" is validated using the XML schema with SQL identifier "S2.SCHEMA_B".
For an XML column that contains the following XDS:
<XDS FIL='xmlfile.002.xml' SCH='S1.SCHEMA_A' />
The document in file "xmlfile.002.xml" is validated using the XML schema with SQL identifier "S2.SCHEMA_B". Note that the SCH attribute is ignored, since validation is being performed using a schema specified by the USING SCHEMA clause.

Example 6 (XMLVALIDATE USING SCHEMALOCATION HINTS)

For the following XMLVALIDATE clause:
XMLVALIDATE USING SCHEMALOCATION HINTS

For an XML column that contains the following XDS:
<XDS FIL='xmlfile.001.xml' />
The XML schema used is determined by the schemaLocation attribute in the document contents, and no validation would occur if one is not present.
For an XML column that contains the following XDS:
<XDS FIL='xmlfile.002.xml' SCH='S1.SCHEMA_A' />
The XML schema used is determined by the schemaLocation attribute in the document contents, and no validation would occur if one is not present. Note that the SCH attribute is ignored, since validation is being performed using SCHEMALOCATION HINTS.

Usage notes

Be sure to complete all table operations and release all locks before starting an import operation. This can be done by issuing a COMMIT after closing all cursors opened WITH HOLD, or by issuing a ROLLBACK.

The import utility adds rows to the target table using the SQL INSERT statement. The utility issues one INSERT statement for each row of data in the input file. If an INSERT statement fails, one of two actions results:
  • If it is likely that subsequent INSERT statements can be successful, a warning message is written to the message file, and processing continues.
  • If it is likely that subsequent INSERT statements will fail, and there is potential for database damage, an error message is written to the message file, and processing halts.

The utility performs an automatic COMMIT after the old rows are deleted during a REPLACE or a REPLACE_CREATE operation. Therefore, if the system fails, or the application interrupts the database manager after the table object is truncated, all of the old data is lost. Ensure that the old data is no longer needed before using these options.

If the log becomes full during a CREATE, REPLACE, or REPLACE_CREATE operation, the utility performs an automatic COMMIT on inserted records. If the system fails, or the application interrupts the database manager after an automatic COMMIT, a table with partial data remains in the database. Use the REPLACE or the REPLACE_CREATE option to rerun the whole import operation, or use INSERT with the RESTARTCOUNT parameter set to the number of rows successfully imported.

Updates from the IMPORT command are always committed at the end of an IMPORT task. The IMPORT command can also perform automatic commits during its execution to reduce the size of the lock list and the active log space. The IMPORT command rolls back if the active log becomes full during IMPORT processing.
  • By default, automatic commits are not performed for the INSERT or the INSERT_UPDATE option. They are, however, performed if the COMMITCOUNT parameter is not zero.
  • Offline import does not perform automatic COMMITs if any of the following conditions are true:
    • The target is a view, not a table
    • Compound inserts are used
    • Buffered inserts are used
  • By default, online import performs automatic commits to free both the active log space and the lock list. Automatic commits are suppressed only if a COMMITCOUNT value of zero is specified.

Whenever the import utility performs a COMMIT, two messages are written to the message file: one indicates the number of records to be committed, and the other is written after a successful COMMIT. When restarting the import operation after a failure, specify the number of records to skip, as determined from the last successful COMMIT.

The import utility accepts input data with minor incompatibility problems (for example, character data can be imported using padding or truncation, and numeric data can be imported with a different numeric data type), but data with major incompatibility problems is not accepted.

You cannot REPLACE or REPLACE_CREATE an object table if it has any dependents other than itself, or an object view if its base table has any dependents (including itself). To replace such a table or a view, do the following:
  1. Drop all foreign keys in which the table is a parent.
  2. Run the import utility.
  3. Alter the table to recreate the foreign keys.

If an error occurs while recreating the foreign keys, modify the data to maintain referential integrity.

Referential constraints and foreign key definitions are not preserved when recreating tables from PC/IXF files. (Primary key definitions are preserved if the data was previously exported using SELECT *.)

Importing to a remote database requires enough disk space on the server for a copy of the input data file, the output message file, and potential growth in the size of the database.

If an import operation is run against a remote database, and the output message file is very long (more than 60 KB), the message file returned to the user on the client might be missing messages from the middle of the import operation. The first 30 KB of message information and the last 30 KB of message information are always retained.

Importing PC/IXF files to a remote database is much faster if the PC/IXF file is on a hard drive rather than on diskettes.

The database table or hierarchy must exist before data in the ASC, DEL, or WSF file formats can be imported; however, if the table does not already exist, IMPORT CREATE or IMPORT REPLACE_CREATE creates the table when it imports data from a PC/IXF file. For typed tables, IMPORT CREATE can create the type hierarchy and the table hierarchy as well.

PC/IXF import should be used to move data (including hierarchical data) between databases. If character data containing row separators is exported to a delimited ASCII (DEL) file and processed by a text transfer program, fields containing the row separators will shrink or expand. The file copying step is not necessary if the source and the target databases are both accessible from the same client.

The data in ASC and DEL files is assumed to be in the code page of the client application performing the import. PC/IXF files, which allow for different code pages, are recommended when importing data in different code pages. If the PC/IXF file and the import utility are in the same code page, processing occurs as for a regular application. If the two differ:
  • If the FORCEIN option is specified, the import utility assumes that data in the PC/IXF file has the same code page as the application performing the import, even if there is a conversion table for the two code pages.
  • If the FORCEIN option is not specified and there is a conversion table, all data in the PC/IXF file is converted from the file code page to the application code page.
  • If the FORCEIN option is not specified and there is no conversion table, the import operation fails.
This applies only to PC/IXF files on DB2 clients on the AIX® operating system.
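The code page rules above reduce to a small decision table. The following sketch is purely illustrative; the function name, parameters, and returned action strings are invented for this summary and are not part of DB2:

```python
def pcixf_codepage_action(file_cp, app_cp, forcein, has_conversion_table):
    """Illustrative summary of the PC/IXF code page handling rules."""
    if file_cp == app_cp:
        return "process"        # same code page: processing as for a regular application
    if forcein:
        return "assume-app-cp"  # data treated as if already in the application code page
    if has_conversion_table:
        return "convert"        # file code page -> application code page
    return "fail"               # differing code pages, no conversion table: import fails
```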

For table objects on an 8 KB page that are close to the limit of 1012 columns, import of PC/IXF data files might cause DB2 to return an error, because the maximum size of an SQL statement was exceeded. This situation can occur only if the columns are of type CHAR, VARCHAR, or CLOB. The restriction does not apply to import of DEL or ASC files. If PC/IXF files are being used to create a new table, an alternative is to use db2look to dump the DDL statement that created the table, and then to issue that statement through the CLP.

DB2 Connect can be used to import data to DRDA® servers such as DB2 for OS/390®, DB2 for VM and VSE, and DB2 for OS/400®. Only PC/IXF import (INSERT option) is supported. The RESTARTCOUNT parameter, but not the COMMITCOUNT parameter, is also supported.

When using the CREATE option with typed tables, create every sub-table defined in the PC/IXF file; sub-table definitions cannot be altered. When using options other than CREATE with typed tables, the traversal order list enables you to specify the traverse order; this list must match the one used during the export operation. For the PC/IXF file format, you need only specify the target sub-table name and use the traverse order stored in the file.

The import utility can be used to recover a table previously exported to a PC/IXF file. The table returns to the state it was in when exported.

Data cannot be imported to a system table, a created temporary table, a declared temporary table, or a summary table.

Views cannot be created through the import utility.

Importing a multiple-part PC/IXF file whose individual parts are copied from a Windows system to an AIX system is supported. Only the name of the first file must be specified in the IMPORT command. For example, IMPORT FROM data.ixf OF IXF INSERT INTO TABLE1. The files data.002, data.003, and so on must be available in the same directory as data.ixf.

On the Windows operating system:
  • Importing logically split PC/IXF files is not supported.
  • Importing bad format PC/IXF or WSF files is not supported.

Security labels in their internal format might contain newline characters. If you import the file using the DEL file format, those newline characters can be mistaken for delimiters. If you have this problem, use the older default priority for delimiters by specifying the delprioritychar file type modifier in the IMPORT command.

The IMPORT utility does not require the number of fields in a data file to match the number of columns in the table; it checks only that each row contains a sufficient amount of data. If a row in the data file contains fewer fields than the table has columns, the row is either rejected with a warning message (if the table columns left without data are defined as NOT NULL) or inserted successfully without a warning message (if those columns are nullable). Conversely, if a row contains more fields than required, the required number of fields is processed, the remaining data is ignored, and no warning message is given.
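The column-count behavior just described can be summarized in a short sketch (a hypothetical helper; `nullable_flags` stands in for the table's column definitions and is not a DB2 name):

```python
def map_row_to_columns(fields, nullable_flags):
    """Illustrate how a row with too few or too many fields is handled.
    nullable_flags[i] is True if table column i accepts NULL."""
    ncols = len(nullable_flags)
    if len(fields) >= ncols:
        # Extra fields are simply ignored; no warning is given.
        return fields[:ncols]
    # Too few fields: the missing trailing columns get no data.
    padded = list(fields) + [None] * (ncols - len(fields))
    for value, nullable in zip(padded, nullable_flags):
        if value is None and not nullable:
            # A NOT NULL column without data causes the row to be rejected.
            raise ValueError("row rejected: missing value for NOT NULL column")
    return padded
```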

Federated considerations

When using the IMPORT command with the INSERT, UPDATE, or INSERT_UPDATE command parameters, you must have CONTROL privilege on the participating nickname, and the nickname you want to use for the import operation must already exist. There are also several restrictions you should be aware of, as shown in the IMPORT command parameters section.

Some data sources, such as ODBC, do not support importing into nicknames.

File type modifiers for the import utility

Table 1. Valid file type modifiers for the import utility: All file formats
Modifier Description
compound=x x is a number between 1 and 100 inclusive. Uses nonatomic compound SQL to insert the data, and x statements will be attempted each time.

If this modifier is specified, and the transaction log is not sufficiently large, the import operation will fail. The transaction log must be large enough to accommodate either the number of rows specified by COMMITCOUNT, or the number of rows in the data file if COMMITCOUNT is not specified. It is therefore recommended that the COMMITCOUNT option be specified to avoid transaction log overflow.

This modifier is incompatible with INSERT_UPDATE mode, hierarchical tables, and the following modifiers: usedefaults, identitymissing, identityignore, generatedmissing, and generatedignore.

generatedignore This modifier informs the import utility that data for all generated columns is present in the data file but should be ignored. This results in all values for the generated columns being generated by the utility. This modifier cannot be used with the generatedmissing modifier.
generatedmissing If this modifier is specified, the utility assumes that the input data file contains no data for the generated columns (not even NULLs), and will therefore generate a value for each row. This modifier cannot be used with the generatedignore modifier.
identityignore This modifier informs the import utility that data for the identity column is present in the data file but should be ignored. This results in all identity values being generated by the utility. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT identity columns. This means that for GENERATED ALWAYS columns, no rows will be rejected. This modifier cannot be used with the identitymissing modifier.
identitymissing If this modifier is specified, the utility assumes that the input data file contains no data for the identity column (not even NULLs), and will therefore generate a value for each row. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT identity columns. This modifier cannot be used with the identityignore modifier.
lobsinfile lob-path specifies the path to the files containing LOB data.

Each path contains at least one file that contains at least one LOB pointed to by a Lob Location Specifier (LLS) in the data file. The LLS is a string representation of the location of a LOB in a file stored in the LOB file path. The format of an LLS is filename.ext.nnn.mmm/, where filename.ext is the name of the file that contains the LOB, nnn is the offset in bytes of the LOB within the file, and mmm is the length of the LOB in bytes. For example, if the string db2exp.001.123.456/ is stored in the data file, the LOB is located at offset 123 in the file db2exp.001, and is 456 bytes long.

The LOBS FROM clause specifies where the LOB files are located when the "lobsinfile" modifier is used. The LOBS FROM clause will implicitly activate the LOBSINFILE behavior. The LOBS FROM clause conveys to the IMPORT utility the list of paths to search for the LOB files while importing the data.

To indicate a null LOB, enter the size as -1. If the size is specified as 0, it is treated as a zero-length LOB. For null LOBs with a length of -1, the offset and the file name are ignored. For example, the LLS of a null LOB might be db2exp.001.7.-1/.
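An LLS of the form filename.ext.nnn.mmm/ can be unpacked as follows (a hypothetical helper for illustration; real LOB resolution also searches the LOBS FROM paths):

```python
def parse_lls(lls):
    """Parse a Lob Location Specifier: filename.ext.nnn.mmm/ .
    Returns (filename, offset_bytes, length_bytes); length -1 means a null LOB."""
    body = lls.rstrip("/")
    # The file name itself may contain dots, so split the last two fields off.
    filename, offset, length = body.rsplit(".", 2)
    return filename, int(offset), int(length)
```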

no_type_id Valid only when importing into a single sub-table. Typical usage is to export data from a regular table, and then to invoke an import operation (using this modifier) to convert the data into a single sub-table.
nodefaults If a source column for a target table column is not explicitly specified, and the table column is not nullable, default values are not loaded. Without this option, if a source column for one of the target table columns is not explicitly specified, one of the following occurs:
  • If a default value can be specified for a column, the default value is loaded
  • If the column is nullable, and a default value cannot be specified for that column, a NULL is loaded
  • If the column is not nullable, and a default value cannot be specified, an error is returned, and the utility stops processing.
norowwarnings Suppresses all warnings about rejected rows.
rowchangetimestampignore This modifier informs the import utility that data for the row change timestamp column is present in the data file but should be ignored. This results in all ROW CHANGE TIMESTAMP being generated by the utility. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT columns. This means that for GENERATED ALWAYS columns, no rows will be rejected. This modifier cannot be used with the rowchangetimestampmissing modifier.
rowchangetimestampmissing If this modifier is specified, the utility assumes that the input data file contains no data for the row change timestamp column (not even NULLs), and will therefore generate a value for each row. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT columns. This modifier cannot be used with the rowchangetimestampignore modifier.
seclabelchar Indicates that security labels in the input source file are in the string format for security label values rather than in the default encoded numeric format. IMPORT converts each security label into the internal format as it is loaded. If a string is not in the proper format, the row is not loaded and a warning (SQLSTATE 01H53) is returned. If the string does not represent a valid security label that is part of the security policy protecting the table, the row is not loaded and a warning (SQLSTATE 01H53, SQLCODE SQL3243W) is returned.

This modifier cannot be specified if the seclabelname modifier is specified, otherwise the import fails and an error (SQLCODE SQL3525N) is returned.

seclabelname Indicates that security labels in the input source file are indicated by their name rather than by the default encoded numeric format. IMPORT will convert the name to the appropriate security label if it exists. If no security label exists with the indicated name for the security policy protecting the table, the row is not loaded and a warning (SQLSTATE 01H53, SQLCODE SQL3244W) is returned.

This modifier cannot be specified if the seclabelchar modifier is specified, otherwise the import fails and an error (SQLCODE SQL3525N) is returned.

Note: If the file type is ASC, any spaces following the name of the security label will be interpreted as being part of the name. To avoid this, use the striptblanks file type modifier to ensure that the spaces are removed.
usedefaults If a source column for a target table column has been specified, but it contains no data for one or more row instances, default values are loaded. Examples of missing data are:
  • For DEL files: two adjacent column delimiters (",,") or two adjacent column delimiters separated by an arbitrary number of spaces (", ,") are specified for a column value.
  • For DEL/ASC/WSF files: A row that does not have enough columns, or is not long enough for the original specification.
    Note: For ASC files, NULL column values are not considered explicitly missing, and a default will not be substituted for NULL column values. NULL column values are represented by all space characters for numeric, date, time, and timestamp columns, or by using the NULL INDICATOR for a column of any type to indicate the column is NULL.
Without this option, if a source column contains no data for a row instance, one of the following occurs:
  • For DEL/ASC/WSF files: If the column is nullable, a NULL is loaded. If the column is not nullable, the utility rejects the row.
Do not use the usedefaults file type modifier in INSERT_UPDATE mode if the input data is missing the value for a column that is part of a primary key and you are trying to update an existing row; in that case, the existing row is not updated and warning SQL3116W is returned.
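As a rough illustration, missing DEL fields of the kind described above (two adjacent column delimiters, possibly separated by spaces) can be detected with a naive split. This is a hypothetical sketch only: it ignores character (string) delimiters entirely, so it is not a real DEL parser, and the helper name is invented:

```python
def del_missing_fields(row, coldel=","):
    """Flag fields that usedefaults would treat as missing:
    empty, or spaces-only, between column delimiters.
    NOTE: naive split; character delimiters are not honored."""
    return [f.strip() == "" for f in row.split(coldel)]
```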
Table 2. Valid file type modifiers for the import utility: ASCII file formats (ASC/DEL)
Modifier Description
codepage=x x is an ASCII character string. The value is interpreted as the code page of the data in the input data set. Converts character data from this code page to the application code page during the import operation.
The following rules apply:
  • For pure DBCS (graphic), mixed DBCS, and EUC, delimiters are restricted to the range of x00 to x3F, inclusive.
  • nullindchar must specify symbols included in the standard ASCII set between code points x20 and x7F, inclusive. This refers to ASCII symbols and code points.
Note: If data expansion occurs when the code page is converted from the application code page to the database code page, the data might be truncated and loss of data can occur.
dateformat="x" x is the format of the date in the source file. Valid date elements are:
  YYYY - Year (four digits ranging from 0000 - 9999)
  M    - Month (one or two digits ranging from 1 - 12)
  MM   - Month (two digits ranging from 01 - 12;
            mutually exclusive with M)
  D    - Day (one or two digits ranging from 1 - 31)
  DD   - Day (two digits ranging from 01 - 31;
            mutually exclusive with D)
  DDD  - Day of the year (three digits ranging
            from 001 - 366; mutually exclusive
            with other day or month elements)
A default value of 1 is assigned for each element that is not specified. Some examples of date formats are:
   "D-M-YYYY"
   "MM.DD.YYYY"
   "YYYYDDD"
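As an illustration of how these elements compose, the following sketch parses a value according to a dateformat string for a subset of the elements (YYYY, MM, DD, and DDD only; the one- or two-digit M and D variants are omitted for brevity). The function and its token table are hypothetical, not part of DB2:

```python
import re
from datetime import date, timedelta

# (element, regex group name, digit width) -- longest tokens first
_TOKENS = [("YYYY", "Y", 4), ("DDD", "J", 3), ("MM", "M", 2), ("DD", "D", 2)]

def parse_with_dateformat(value, fmt):
    """Turn a dateformat string into a regex, then parse `value`.
    Unspecified elements default to 1, as the modifier description states."""
    pattern, i = "", 0
    while i < len(fmt):
        for tok, name, width in _TOKENS:
            if fmt.startswith(tok, i):
                pattern += r"(?P<%s>\d{%d})" % (name, width)
                i += len(tok)
                break
        else:
            pattern += re.escape(fmt[i])  # literal separator such as '.' or '-'
            i += 1
    parts = {k: int(v) for k, v in re.fullmatch(pattern, value).groupdict().items()}
    if "J" in parts:  # DDD: day of the year, exclusive with month/day elements
        return date(parts.get("Y", 1), 1, 1) + timedelta(days=parts["J"] - 1)
    return date(parts.get("Y", 1), parts.get("M", 1), parts.get("D", 1))
```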
implieddecimal The location of an implied decimal point is determined by the column definition; it is no longer assumed to be at the end of the value. For example, the value 12345 is loaded into a DECIMAL(8,2) column as 123.45, not 12345.00.
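The effect of implieddecimal can be sketched with Python's Decimal (the helper name is hypothetical; in DB2 the scale comes from the DECIMAL(p,s) column definition):

```python
from decimal import Decimal

def implied_decimal(raw, scale):
    """Place the implied decimal point `scale` digits from the right,
    as the target column definition dictates."""
    return Decimal(raw).scaleb(-scale)
```

With this rule, the value 12345 loads into a DECIMAL(8,2) column as 123.45 rather than 12345.00.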
timeformat="x" x is the format of the time in the source file. Valid time elements are:
  H     - Hour (one or two digits ranging from 0 - 12
             for a 12 hour system, and 0 - 24
             for a 24 hour system)
  HH    - Hour (two digits ranging from 00 - 12
             for a 12 hour system, and 00 - 24
             for a 24 hour system; mutually exclusive
             with H)
  M     - Minute (one or two digits ranging
             from 0 - 59)
  MM    - Minute (two digits ranging from 00 - 59;
             mutually exclusive with M)
  S     - Second (one or two digits ranging
             from 0 - 59)
  SS    - Second (two digits ranging from 00 - 59;
             mutually exclusive with S)
  SSSSS - Second of the day after midnight (5 digits
             ranging from 00000 - 86400; mutually
             exclusive with other time elements)
  TT    - Meridian indicator (AM or PM)
A default value of 0 is assigned for each element that is not specified. Some examples of time formats are:
   "HH:MM:SS"
   "HH.MM TT"
   "SSSSS"
timestampformat="x" x is the format of the time stamp in the source file. Valid time stamp elements are:
  YYYY   - Year (four digits ranging from 0000 - 9999)
  M      - Month (one or two digits ranging from 1 - 12)
  MM     - Month (two digits ranging from 01 - 12;
              mutually exclusive with M and MMM)
  MMM    - Month (three-letter case-insensitive abbreviation for 
              the month name; mutually exclusive with M and MM)              
  D      - Day (one or two digits ranging from 1 - 31)
  DD     - Day (two digits ranging from 01 - 31; mutually exclusive with D)
  DDD    - Day of the year (three digits ranging from 001 - 366; 
              mutually exclusive with other day or month elements)
  H      - Hour (one or two digits ranging from 0 - 12
              for a 12 hour system, and 0 - 24 for a 24 hour system)
  HH     - Hour (two digits ranging from 00 - 12 
              for a 12 hour system, and 00 - 24 for a 24 hour system; 
              mutually exclusive with H)
  M      - Minute (one or two digits ranging from 0 - 59)
  MM     - Minute (two digits ranging from 00 - 59;
              mutually exclusive with M, minute)
  S      - Second (one or two digits ranging from 0 - 59)
  SS     - Second (two digits ranging from 00 - 59;
              mutually exclusive with S)
  SSSSS  - Second of the day after midnight (5 digits
              ranging from 00000 - 86400; mutually
              exclusive with other time elements)
  U (1 to 12 times)
         - Fractional seconds (the number of occurrences of U
              represents the number of digits, with each digit
              ranging from 0 to 9)
  TT     - Meridian indicator (AM or PM)
A default value of 1 is assigned for unspecified YYYY, M, MM, D, DD, or DDD elements. A default value of 'Jan' is assigned to an unspecified MMM element. A default value of 0 is assigned for all other unspecified elements. Following is an example of a time stamp format:
   "YYYY/MM/DD HH:MM:SS.UUUUUU"

The valid values for the MMM element include: 'jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov' and 'dec'. These values are case insensitive.

The following example illustrates how to import data containing user defined date and time formats into a table called schedule:
   db2 import from delfile2 of del
      modified by timestampformat="yyyy.mm.dd hh:mm tt"
      insert into schedule
usegraphiccodepage If usegraphiccodepage is given, the assumption is made that data being imported into graphic or double-byte character large object (DBCLOB) data fields is in the graphic code page. The rest of the data is assumed to be in the character code page. The graphic code page is associated with the character code page. IMPORT determines the character code page through either the codepage modifier, if it is specified, or through the code page of the application if the codepage modifier is not specified.

This modifier should be used in conjunction with the delimited data file generated by drop table recovery only if the table being recovered has graphic data.

Restrictions

The usegraphiccodepage modifier MUST NOT be specified with DEL files created by the EXPORT utility, as these files contain data encoded in only one code page. The usegraphiccodepage modifier is also ignored by the double-byte character large objects (DBCLOBs) in files.

xmlchar Specifies that XML documents are encoded in the character code page.

This option is useful for processing XML documents that are encoded in the specified character code page but do not contain an encoding declaration.

For each document, if a declaration tag exists and contains an encoding attribute, the encoding must match the character code page, otherwise the row containing the document will be rejected. Note that the character codepage is the value specified by the codepage file type modifier, or the application codepage if it is not specified. By default, either the documents are encoded in Unicode, or they contain a declaration tag with an encoding attribute.

xmlgraphic Specifies that XML documents are encoded in the specified graphic code page.

This option is useful for processing XML documents that are encoded in a specific graphic code page but do not contain an encoding declaration.

For each document, if a declaration tag exists and contains an encoding attribute, the encoding must match the graphic code page, otherwise the row containing the document will be rejected. Note that the graphic code page is the graphic component of the value specified by the codepage file type modifier, or the graphic component of the application code page if it is not specified. By default, documents are either encoded in Unicode, or they contain a declaration tag with an encoding attribute.

Note: If the xmlgraphic modifier is specified with the IMPORT command, the XML document to be imported must be encoded in the UTF-16 code page. Otherwise, the XML document may be rejected with a parsing error, or it may be imported into the table with data corruption.
Table 3. Valid file type modifiers for the import utility: ASC (non-delimited ASCII) file format
Modifier Description
nochecklengths If nochecklengths is specified, an attempt is made to import each row, even if the source data has a column definition that exceeds the size of the target table column. Such rows can be successfully imported if code page conversion causes the source data to shrink; for example, 4-byte EUC data in the source could shrink to 2-byte DBCS data in the target, and require half the space. This option is particularly useful if it is known that the source data will fit in all cases despite mismatched column definitions.
nullindchar=x x is a single character. Changes the character denoting a null value to x. The default value of x is Y.

This modifier is case sensitive for EBCDIC data files, except when the character is an English letter. For example, if the null indicator character is specified to be the letter N, then n is also recognized as a null indicator.

reclen=x x is an integer with a maximum value of 32 767. x characters are read for each row, and a new-line character is not used to indicate the end of the row.
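Reading fixed-length records as reclen describes can be sketched as follows (a hypothetical helper; a real ASC reader would also apply the METHOD L column positions within each record):

```python
def read_fixed_records(data, reclen):
    """Split input into records of exactly `reclen` characters each.
    Newline characters are ordinary data, not record terminators."""
    return [data[i:i + reclen] for i in range(0, len(data), reclen)]
```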
striptblanks Truncates any trailing blank spaces when loading data into a variable-length field. If this option is not specified, blank spaces are kept.
In the following example, striptblanks causes the import utility to truncate trailing blank spaces:
   db2 import from myfile.asc of asc
      modified by striptblanks
      method l (1 10, 12 15) messages msgs.txt
      insert into staff

This option cannot be specified together with striptnulls. These are mutually exclusive options. This option replaces the obsolete t option, which is supported for backward compatibility only.

striptnulls Truncates any trailing NULLs (0x00 characters) when loading data into a variable-length field. If this option is not specified, NULLs are kept.

This option cannot be specified together with striptblanks. These are mutually exclusive options. This option replaces the obsolete padwithzero option, which is supported for backward compatibility only.

Table 4. Valid file type modifiers for the import utility: DEL (delimited ASCII) file format
Modifier Description
chardelx x is a single character string delimiter. The default value is a double quotation mark ("). The specified character is used in place of double quotation marks to enclose a character string. If you want to explicitly specify the double quotation mark as the character string delimiter, it should be specified as follows:
    modified by chardel""
The single quotation mark (') can also be specified as a character string delimiter. In the following example, chardel'' causes the import utility to interpret any single quotation mark (') it encounters as a character string delimiter:
   db2 "import from myfile.del of del
      modified by chardel''
      method p (1, 4) insert into staff (id, years)"
coldelx x is a single character column delimiter. The default value is a comma (,). The specified character is used in place of a comma to signal the end of a column.
In the following example, coldel; causes the import utility to interpret any semicolon (;) it encounters as a column delimiter:
   db2 import from myfile.del of del
      modified by coldel;
      messages msgs.txt insert into staff
decplusblank Plus sign character. Causes positive decimal values to be prefixed with a blank space instead of a plus sign (+). The default action is to prefix positive decimal values with a plus sign.
decptx x is a single character substitute for the period as a decimal point character. The default value is a period (.). The specified character is used in place of a period as a decimal point character.
In the following example, decpt; causes the import utility to interpret any semicolon (;) it encounters as a decimal point:
   db2 "import from myfile.del of del
      modified by chardel''
      decpt; messages msgs.txt insert into staff"
delprioritychar The current default priority for delimiters is: record delimiter, character delimiter, column delimiter. This modifier protects existing applications that depend on the older priority by reverting the delimiter priorities to: character delimiter, record delimiter, column delimiter. Syntax:
   db2 import ... modified by delprioritychar ...
For example, given the following DEL data file:
   "Smith, Joshua",4000,34.98<row delimiter>
   "Vincent,<row delimiter>, is a manager", ...
   ... 4005,44.37<row delimiter>

With the delprioritychar modifier specified, there will be only two rows in this data file. The second <row delimiter> will be interpreted as part of the first data column of the second row, while the first and the third <row delimiter> are interpreted as actual record delimiters. If this modifier is not specified, there will be three rows in this data file, each delimited by a <row delimiter>.
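The effect of delprioritychar on the example above can be sketched in Python. This is an illustrative simulation only, not the utility's actual parser; it assumes a newline as the row delimiter and the default double quotation mark as the character delimiter:

```python
def split_records(data, rowdel="\n", chardel='"', char_priority=True):
    """Split DEL data into records.

    With char_priority (delprioritychar behavior), a row delimiter found
    inside a character-delimited string is treated as column data, not as
    a record boundary. Without it, every row delimiter ends a record.
    """
    records, current, in_string = [], [], False
    for ch in data:
        if ch == chardel and char_priority:
            in_string = not in_string
            current.append(ch)
        elif ch == rowdel and not in_string:
            records.append("".join(current))
            current = []
        else:
            current.append(ch)
    if current:
        records.append("".join(current))
    return records

# The sample data file from the text, with <row delimiter> as newline:
data = ('"Smith, Joshua",4000,34.98\n'
        '"Vincent,\n, is a manager",4005,44.37\n')

print(len(split_records(data, char_priority=True)))   # 2 rows
print(len(split_records(data, char_priority=False)))  # 3 rows
```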

keepblanks Preserves the leading and trailing blanks in each field of type CHAR, VARCHAR, LONG VARCHAR, or CLOB. Without this option, all leading and trailing blanks that are not inside character delimiters are removed, and a NULL is inserted into the table for all blank fields.
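The trimming rule can be illustrated with a small Python sketch. This is not the import utility's parser; it only mimics the stated behavior for a single field with the default double quotation mark delimiter:

```python
def parse_field(raw, chardel='"', keepblanks=False):
    """Illustrative only: trim leading/trailing blanks that fall outside
    the character delimiters, then strip the delimiters themselves.
    With keepblanks, blanks survive even in unquoted fields."""
    s = raw if keepblanks else raw.strip(" ")
    if len(s) >= 2 and s[0] == chardel and s[-1] == chardel:
        s = s[1:-1]
    return s

print(parse_field('   padded   '))                   # 'padded'
print(parse_field('   padded   ', keepblanks=True))  # '   padded   '
print(parse_field('  "  padded  "  '))               # '  padded  '
```

Note in the last call that blanks inside the character delimiters are preserved even without keepblanks; only the blanks outside them are removed.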
nochardel The import utility will assume all bytes found between the column delimiters to be part of the column's data. Character delimiters will be parsed as part of column data. This option should not be specified if the data was exported using DB2 (unless nochardel was specified at export time). It is provided to support vendor data files that do not have character delimiters. Improper usage might result in data loss or corruption.

This option cannot be specified with chardelx, delprioritychar or nodoubledel. These are mutually exclusive options.

nodoubledel Suppresses recognition of double character delimiters.
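By default, a doubled character delimiter inside a delimited field stands for one literal delimiter; nodoubledel suppresses that recognition. A minimal Python sketch of the difference (illustrative only, not the utility's parser):

```python
def parse_delimited(field, chardel='"', nodoubledel=False):
    """Strip the surrounding character delimiters. By default, collapse a
    doubled delimiter inside the field to a single literal delimiter;
    with nodoubledel, the doubled pair is left as-is."""
    if len(field) >= 2 and field[0] == chardel and field[-1] == chardel:
        field = field[1:-1]
    if not nodoubledel:
        field = field.replace(chardel * 2, chardel)
    return field

print(parse_delimited('"He said ""hello"""'))                    # He said "hello"
print(parse_delimited('"He said ""hello"""', nodoubledel=True))  # He said ""hello""
```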
Table 5. Valid file type modifiers for the import utility: IXF file format
Modifier Description
forcein Directs the utility to accept data despite code page mismatches, and to suppress translation between code pages.

Fixed length target fields are checked to verify that they are large enough for the data. If nochecklengths is specified, no checking is done, and an attempt is made to import each row.

indexixf Directs the utility to drop all indexes currently defined on the existing table, and to create new ones from the index definitions in the PC/IXF file. This option can only be used when the contents of a table are being replaced. It cannot be used with a view, or when an insert-column is specified.
indexschema=schema Uses the specified schema for the index name during index creation. If schema is not specified (but the keyword indexschema is specified), uses the connection user ID. If the keyword is not specified, uses the schema in the IXF file.
nochecklengths If nochecklengths is specified, an attempt is made to import each row, even if the source data has a column definition that exceeds the size of the target table column. Such rows can be successfully imported if code page conversion causes the source data to shrink; for example, 4-byte EUC data in the source could shrink to 2-byte DBCS data in the target, and require half the space. This option is particularly useful if it is known that the source data will fit in all cases despite mismatched column definitions.
forcecreate Specifies that the table should be created with possible missing or limited information after returning SQL3311N during an import operation.
Table 6. IMPORT behavior when using codepage and usegraphiccodepage
codepage=N | usegraphiccodepage | IMPORT behavior
Absent | Absent | All data in the file is assumed to be in the application code page.
Present | Absent | All data in the file is assumed to be in code page N.

Warning: Graphic data will be corrupted when imported into the database if N is a single-byte code page.

Absent | Present | Character data in the file is assumed to be in the application code page. Graphic data is assumed to be in the code page of the application graphic data.

If the application code page is single-byte, then all data is assumed to be in the application code page.

Warning: If the application code page is single-byte, graphic data will be corrupted when imported into the database, even if the database contains graphic columns.

Present | Present | Character data is assumed to be in code page N. Graphic data is assumed to be in the graphic code page of N.

If N is a single-byte or double-byte code page, then all data is assumed to be in code page N.

Warning: Graphic data will be corrupted when imported into the database if N is a single-byte code page.

Note:
  1. The import utility does not issue a warning if an attempt is made to use unsupported file types with the MODIFIED BY option. If this is attempted, the import operation fails, and an error code is returned.
  2. Double quotation marks around the date format string are mandatory. Field separators cannot contain any of the following: a-z, A-Z, and 0-9. The field separator should not be the same as the character delimiter or field delimiter in the DEL file format. A field separator is optional if the start and end positions of an element are unambiguous. Ambiguity can exist if (depending on the modifier) elements such as D, H, M, or S are used, because of the variable length of the entries.
    For time stamp formats, care must be taken to avoid ambiguity between the month and the minute descriptors, since they both use the letter M. A month field must be adjacent to other date fields. A minute field must be adjacent to other time fields. Following are some ambiguous time stamp formats:
       "M" (could be a month, or a minute)
       "M:M" (Which is which?)
       "M:YYYY:M" (Both are interpreted as month.)
       "S:M:YYYY" (adjacent to both a time value and a date value)
    In ambiguous cases, the utility will report an error message, and the operation will fail.
    Following are some unambiguous time stamp formats:
       "M:YYYY" (Month)
       "S:M" (Minute)
       "M:YYYY:S:M" (Month....Minute)
       "M:H:YYYY:M:D" (Minute....Month)

    Some characters, such as double quotation marks and backslashes, must be preceded by an escape character (for example, \).
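    The adjacency rule above can be sketched in Python. This is a rough illustration of the stated rule only, not the utility's actual format parser (which also handles multi-character fields such as MM, separators, and escapes); it assumes a colon-separated format string:

```python
DATE_FIELDS = {"YYYY", "D"}
TIME_FIELDS = {"H", "S"}

def classify_m(fmt):
    """For each 'M' element in a colon-separated format string, decide
    month vs. minute by adjacency: a month must sit next to another date
    field, a minute next to another time field. 'ambiguous' is returned
    when neither neighbour decides it, or both do."""
    parts = fmt.split(":")
    labels = []
    for i, p in enumerate(parts):
        if p != "M":
            continue
        neighbours = {parts[j] for j in (i - 1, i + 1) if 0 <= j < len(parts)}
        is_month = bool(neighbours & DATE_FIELDS)
        is_minute = bool(neighbours & TIME_FIELDS)
        if is_month and not is_minute:
            labels.append("month")
        elif is_minute and not is_month:
            labels.append("minute")
        else:
            labels.append("ambiguous")
    return labels

print(classify_m("M"))            # ['ambiguous']
print(classify_m("M:YYYY:S:M"))   # ['month', 'minute']
print(classify_m("S:M:YYYY"))     # ['ambiguous']
```

This reproduces the classifications given for the sample formats above: "M:YYYY" and "M:YYYY:M" yield month, "S:M" yields minute, and "M", "M:M", and "S:M:YYYY" are ambiguous.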

  3. Character values provided for the chardel, coldel, or decpt file type modifiers must be specified in the code page of the source data.
    The character code point (instead of the character symbol) can be specified using the syntax xJJ or 0xJJ, where JJ is the hexadecimal representation of the code point. For example, to specify the # character as a column delimiter, use one of the following:
       ... modified by coldel# ...
       ... modified by coldel0x23 ...
       ... modified by coldelX23 ...
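    The three spellings resolve to the same delimiter character. The following Python helper (hypothetical, for illustration only; it is not part of DB2 and does not handle a literal delimiter that itself begins with x) shows the resolution:

```python
def resolve_delimiter(spec):
    """Resolve a chardel/coldel/decpt value: either a literal character,
    or a code point written as 0xJJ or xJJ (JJ = hex code point).
    Caveat: a literal delimiter starting with 'x' would need the hex form,
    since the prefix check would otherwise misread it."""
    if spec.lower().startswith("0x"):
        return chr(int(spec[2:], 16))
    if spec.lower().startswith("x") and len(spec) > 1:
        return chr(int(spec[1:], 16))
    return spec

print(resolve_delimiter("#"))     # '#'
print(resolve_delimiter("0x23"))  # '#'  (0x23 is the code point of '#')
print(resolve_delimiter("X23"))   # '#'
```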
  4. Delimiter considerations for moving data lists restrictions that apply to the characters that can be used as delimiter overrides.
  5. The following file type modifiers are not allowed when importing into a nickname:
    • indexixf
    • indexschema
    • dldel
    • nodefaults
    • usedefaults
    • no_type_id
    • generatedignore
    • generatedmissing
    • identityignore
    • identitymissing
    • lobsinfile
  6. The WSF file format is not supported for XML columns. Support for this file format is also deprecated and might be removed in a future release. It is recommended that you start using a supported file format instead of WSF files before support is removed.
  7. The CREATE mode is not supported for XML columns.
  8. All XML data must reside in XML files that are separate from the main data file. An XML Data Specifier (XDS) (or a NULL value) must exist for each XML column in the main data file.
  9. XML documents are assumed to be in Unicode format or to contain a declaration tag that includes an encoding attribute, unless the XMLCHAR or XMLGRAPHIC file type modifier is specified.
  10. Rows containing documents that are not well-formed will be rejected.
  11. If the XMLVALIDATE option is specified, documents that successfully validate against their matching schema will be annotated with the schema information as they are inserted. Rows containing documents that fail to validate against their matching schema will be rejected. To successfully perform the validation, the privileges held by the user invoking the import must include at least one of the following:
    • DBADM authority
    • USAGE privilege on the XML schema to be used in the validation
  12. When importing into a table containing an implicitly hidden row change timestamp column, the implicitly hidden property of the column is not honored. Therefore, the rowchangetimestampmissing file type modifier must be specified in the import command if data for the column is not present in the data to be imported and there is no explicit column list present.