IMPORT command
The IMPORT command inserts data from an external file with a supported file format into a table, hierarchy, view, or nickname. LOAD is a faster alternative, but the load utility does not support loading data at the hierarchy level.
Authorization
- The IMPORT command with the INSERT option requires one of the following authorities:
- DATAACCESS authority
- DATAACCESS authority on the schema of each participating table or view
- INSERTIN and SELECTIN privilege on the schema of each participating table or view
- CONTROL privilege on each participating table, view, or nickname
- INSERT and SELECT privilege on each participating table or view
- The IMPORT command to an existing table with the INSERT_UPDATE option requires one of the following authorities:
- DATAACCESS authority
- DATAACCESS authority on the schema of each participating table or view
- INSERTIN, SELECTIN, UPDATEIN, and DELETEIN privilege on the schema of each participating table or view
- CONTROL privilege on each participating table, view, or nickname
- INSERT, SELECT, UPDATE, and DELETE privilege on each participating table or view
- The IMPORT command to an existing table that uses the REPLACE or REPLACE_CREATE option requires one of the following authorities:
- DATAACCESS authority
- DATAACCESS authority on the schema of each participating table or view
- INSERTIN, SELECTIN, and DELETEIN privilege on the schema of each participating table or view
- CONTROL privilege on the table or view
- INSERT, SELECT, and DELETE privilege on the table or view
- The IMPORT command to a new table that uses the CREATE or REPLACE_CREATE option requires one of the following authorities:
- DBADM authority
- CREATETAB authority on the database and USE privilege on the table space, and one of:
- IMPLICIT_SCHEMA authority on the database, if the implicit or explicit schema name of the table does not exist.
- CREATEIN privilege or SCHEMAADM authority on the schema, if the schema name of the table refers to an existing schema.
- The IMPORT command to a hierarchy that does not exist, using the CREATE or REPLACE_CREATE option, requires one of the following authorities:
- DBADM authority
- CREATETAB authority on the database and USE privilege on the table space and one of:
- IMPLICIT_SCHEMA authority on the database, if the schema name of the table does not exist.
- CREATEIN privilege or SCHEMAADM authority on the schema, if the schema of the table exists
- CONTROL privilege on every subtable in the hierarchy, if the REPLACE_CREATE option on the entire hierarchy is used.
- The IMPORT command to an existing hierarchy that uses the REPLACE option requires one of the following authorities:
- DATAACCESS authority
- DATAACCESS authority on the schema of every subtable in the hierarchy
- CONTROL privilege on every subtable in the hierarchy
- To import data into a table that protects columns, the session authorization ID must have LBAC credentials that allow write access to all protected columns in the table. Otherwise, the import fails and an error (SQLSTATE 42512) is returned.
- To import data into a table that protects rows, the session authorization ID must hold LBAC credentials that meet these criteria:
- It is part of the security policy that protects the table.
- It was granted to the session authorization ID for write access.
- For a table with protected rows, if the REPLACE or REPLACE_CREATE option is specified, the session authorization ID must have the authority to drop the table.
- To import data into a nickname, the session authorization ID must have the privilege to access and use a specified data source in pass-through mode.
- If the table activates row access control, IMPORT REPLACE on that table requires the ability to drop the table. Specifically, you must have CONTROL privilege on the table, or SCHEMAADM or DBADM authority.
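As an illustrative sketch of the first authorization list above (schema, table, and user names are hypothetical), the minimum table-level privileges for IMPORT with the INSERT option could be granted like this:

```
db2 grant select, insert on table myschema.staff to user importer
```

For the INSERT_UPDATE option, the grant would also include the UPDATE and DELETE privileges.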
Required connection
Database. If implicit connect is enabled, a connection to the default database is established. Utility access to Linux®, UNIX, or Windows database servers from Linux, UNIX, or Windows clients must be a direct connection through the engine and not through a Db2 Connect gateway or loopback environment.
Command syntax
Command parameters
- FROM filename
- Specifies the file that contains the data to be imported. If the path is omitted, the current working directory is used.
- OF filetype
- Specifies the format of the data in the input file:
- ASC (non-delimited ASCII format)
- DEL (delimited ASCII format), which is used by various database manager and file manager programs
- IXF (Integration Exchange Format, PC version), a binary format that is used exclusively by Db2®
- LOBS FROM lob-path
- Specifies one or more paths that store LOB files. The names of the LOB data files are stored in the main data file (ASC, DEL, or IXF), in the column that is loaded into the LOB column. The maximum number of paths that can be specified is 999. This parameter implicitly activates the LOBSINFILE behavior. This parameter is not valid when you import to a nickname.
- XML FROM xml-path
- Specifies one or more paths that contain the XML files.
- MODIFIED BY filetype-mod
- Specifies file type modifier options. See File type modifiers for the import utility.
- METHOD
-
- L
- Specifies the start and end column numbers from which to import data. A column number is a byte offset from the beginning of a row of data, numbered starting from 1. Note: This method can only be used with ASC files, and is the only valid option for that file type.
- N
- Specifies the names of the columns in the data file to be imported. The case of these column names must match the case of the corresponding names in the system catalogs. Each table column that is not nullable must have a corresponding entry in the METHOD N list. For example, given data fields F1, F2, F3, F4, F5, and F6, and table columns C1 INT, C2 INT NOT NULL, C3 INT NOT NULL, and C4 INT, method N (F2, F1, F4, F3) is a valid request, while method N (F2, F1) is not valid. Note: This method can only be used with IXF files.
- P
- Specifies the field numbers (numbered from 1) of the input data fields to be imported. Each table column that is not nullable must have a corresponding entry in the METHOD P list. For example, given data fields F1, F2, F3, F4, F5, and F6, and table columns C1 INT, C2 INT NOT NULL, C3 INT NOT NULL, and C4 INT, method P (2, 1, 4, 3) is a valid request, while method P (2, 1) is not valid. This method can only be used with file types IXF or DEL, and is the only valid method for the DEL file type. For each of the fields specified by method P, you need to define a corresponding column in the action statement, unless all columns are accounted for or the first x columns are going to be imported, as shown in the following example:
db2 import from datafile1.del of del method P(1, 3, 4) replace into table1 (c1, c3, c4)
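For an ASC input file, METHOD L takes pairs of byte positions, optionally combined with the NULL INDICATORS clause described below. A hedged sketch (the file name, table, column positions, and indicator positions are all illustrative):

```
db2 import from datafile1.asc of asc method L (1 10, 12 15) null indicators (16, 0) insert into table1 (c1, c2)
```

Here bytes 1-10 and 12-15 of each record supply the two columns, byte 16 is the null indicator for the first field, and the 0 states that the second field always contains data.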
- NULL INDICATORS null-indicator-list
- This option can only be used when the METHOD L parameter is specified; that is, when the input file is an ASC file. The null indicator list is a comma-separated list of positive integers that specifies the column number of each null indicator field. The column number is the byte offset of the null indicator field from the beginning of a row of data. One entry is needed in the null indicator list for each data field that is defined in the METHOD L parameter. A column number of zero indicates that the corresponding data field always contains data. A value of Y in the NULL indicator column specifies that the column data is NULL. Any character other than Y in the NULL indicator column specifies that the column data is not NULL, and that the column data specified by the METHOD L option is imported. The NULL indicator character can be changed by using the MODIFIED BY option, with the nullindchar file type modifier.
- XMLPARSE
- Specifies how XML documents are parsed. If this option is not specified, the parsing behavior for XML documents is determined by the value of the CURRENT IMPLICIT XMLPARSE OPTION special register.
- STRIP WHITESPACE
- Specifies to remove whitespace when the XML document is parsed.
- PRESERVE WHITESPACE
- Specifies not to remove whitespace when the XML document is parsed.
- XMLVALIDATE
- Specifies that XML documents are validated against a schema, when applicable.
- USING XDS
- XML documents are validated against the XML schema that is identified by the XML Data Specifier (XDS) in the main data file. By default, if the XMLVALIDATE option is called with the USING XDS clause, the schema that is used to perform validation is determined by the SCH attribute of the XDS. If an SCH attribute is not present in the XDS, no schema validation occurs unless a default schema is specified by the DEFAULT clause.
The DEFAULT, IGNORE, and MAP clauses can be used to modify the schema determination behavior. These three optional clauses apply directly to the specifications of the XDS, and not to each other. For example, if a schema is selected because it is specified by the DEFAULT clause, it is not ignored if also specified by the IGNORE clause. Similarly, if a schema is selected because it is specified as the first part of a pair in the MAP clause, it is not remapped if also specified in the second part of another MAP clause pair.
- USING SCHEMA schema-sqlid
- XML documents are validated against the XML schema with the specified SQL identifier. In this case, the SCH attribute of the XML Data Specifier (XDS) is ignored for all XML columns.
- USING SCHEMALOCATION HINTS
- XML documents are validated against the schemas that are identified by XML schema location hints in the source XML documents. If a schemaLocation attribute is not found in the XML document, no validation occurs. When the USING SCHEMALOCATION HINTS clause is specified, the SCH attribute of the XML Data Specifier (XDS) is ignored for all XML columns.
- DEFAULT schema-sqlid
- This option can only be used when the USING XDS parameter is specified. The schema that is specified through the DEFAULT clause identifies a schema to use for validation when the XML Data Specifier (XDS) of an imported XML document does not contain an SCH attribute that identifies an XML schema.
The DEFAULT clause takes precedence over the IGNORE and MAP clauses. If an XDS satisfies the DEFAULT clause, the IGNORE and MAP specifications are ignored.
- IGNORE schema-sqlid
- This option can only be used when the USING XDS parameter is specified. The IGNORE clause specifies a list of one or more schemas to ignore if they are identified by an SCH attribute. If an SCH attribute exists in the XML Data Specifier for an imported XML document, and the schema that is identified by the SCH attribute is included in the list of schemas to ignore, then no schema validation occurs for the imported XML document.
If a schema is specified in the IGNORE clause, it cannot also be present in the left side of a schema pair in the MAP clause.
The IGNORE clause applies only to the XDS. A schema that is mapped by the MAP clause is not ignored later if specified by the IGNORE clause.
- MAP schema-sqlid
- This option can only be used when the USING XDS parameter is specified. Use the MAP clause to specify alternative schemas to use in place of those schemas specified by the SCH attribute of an XML Data Specifier (XDS) for each imported XML document. The MAP clause specifies a list of one or more schema pairs, where each pair represents a mapping of one schema to another. The first schema in the pair represents a schema that is referred to by an SCH attribute in an XDS. The second schema in the pair represents the schema that is used to perform schema validation.
If a schema is present in the left side of a schema pair in the MAP clause, it cannot also be specified in the IGNORE clause.
Once a schema pair mapping is applied, the result is final. Therefore, the mapping operation is non-transitive, and the schema that is chosen is not applied later to another schema pair mapping.
A schema cannot be mapped more than once, meaning that it cannot appear on the left side of more than one pair.
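The DEFAULT, IGNORE, and MAP rules above amount to a short decision procedure. The following Python sketch is illustrative only (the `resolve_schema` function is hypothetical, not part of Db2); it shows the order in which the clauses are consulted and that mapping is applied at most once:

```python
def resolve_schema(sch, default=None, ignore=(), schema_map=()):
    """Illustrative model of how XMLVALIDATE USING XDS chooses a
    validating schema for one XDS entry.

    sch        -- value of the SCH attribute, or None if absent
    default    -- schema named in the DEFAULT clause, if any
    ignore     -- schemas listed in the IGNORE clause
    schema_map -- (from_schema, to_schema) pairs from the MAP clause
    Returns the schema to validate against, or None for no validation.
    """
    if sch is None:
        # DEFAULT applies only when the XDS has no SCH attribute;
        # IGNORE and MAP are never consulted for the default schema.
        return default
    if sch in ignore:
        return None  # no schema validation for ignored schemas
    for from_schema, to_schema in schema_map:
        if sch == from_schema:
            # Mapping is applied at most once (non-transitive).
            return to_schema
    return sch
```

Running this against the XDS entries of the XMLVALIDATE examples later in this topic reproduces the outcomes described there.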
- ALLOW NO ACCESS
- Runs import in the offline mode. An exclusive (X) lock on the target table is acquired before any rows are inserted. This parameter prevents concurrent applications from accessing table data. This is the default import behavior.
- ALLOW WRITE ACCESS
- Runs import in the online mode. An intent exclusive (IX) lock on the target table is acquired when the first row is inserted. This parameter allows concurrent readers and writers to access table data. Online mode is not compatible with the REPLACE, CREATE, or REPLACE_CREATE import options. Online mode is not supported by buffered inserts. The import operation periodically commits inserted data to prevent lock escalation to a table lock and to avoid running out of active log space. These commits are performed even if the COMMITCOUNT option was not used. During each commit, import loses its IX table lock, and attempts to reacquire it after the commit. This parameter is required when you import to a nickname, and COMMITCOUNT must be specified with a valid number (AUTOMATIC is not considered a valid option).
- COMMITCOUNT n | AUTOMATIC
- Performs a COMMIT after every n records are imported. When compound inserts are used, a user-specified commit frequency of n is rounded up to the first integer multiple of the compound count value. When AUTOMATIC is specified, import internally determines when a commit needs to be performed. The utility commits for one of two reasons:
- To avoid running out of active log space.
- To avoid lock escalation from row level to table level.
The ability of the import operation to avoid running out of active log space is affected by the Db2 registry variable DB2_FORCE_APP_ON_MAX_LOG:
- If DB2_FORCE_APP_ON_MAX_LOG is set to FALSE and the COMMITCOUNT AUTOMATIC command option is specified, the import utility is able to automatically avoid running out of active log space.
- If DB2_FORCE_APP_ON_MAX_LOG is set to FALSE and the COMMITCOUNT n command option is specified, the import utility attempts to resolve the log full condition if it encounters an SQL0964C (Transaction Log Full) error while inserting or updating a record. It performs an unconditional commit and then reattempts the insert or update. If this action does not resolve the issue (which would be the case when the log full condition is attributable to other activity on the database), the IMPORT command fails as expected; however, the number of rows committed might not be a multiple of the COMMITCOUNT n value. To avoid processing the rows that were already committed when you retry the import operation, use the RESTARTCOUNT or SKIPCOUNT command parameters.
- If DB2_FORCE_APP_ON_MAX_LOG is set to TRUE (the default), the import operation fails if it encounters an SQL0964C error while inserting or updating a record. This can occur irrespective of whether you specify COMMITCOUNT AUTOMATIC or COMMITCOUNT n. The application is forced off the database and the current unit of work is rolled back. To avoid processing the rows that were already committed when you retry the import operation, use the RESTARTCOUNT or SKIPCOUNT command parameters.
- RESTARTCOUNT n
- Specifies that an import operation is to be started at record n+1. The first n records are skipped. This option is functionally equivalent to SKIPCOUNT. RESTARTCOUNT and SKIPCOUNT are mutually exclusive.
- SKIPCOUNT n
- Specifies that an import operation is to be started at record n+1. The first n records are skipped. This option is functionally equivalent to RESTARTCOUNT. SKIPCOUNT and RESTARTCOUNT are mutually exclusive.
- ROWCOUNT n
- Specifies the number n of physical records in the file to be imported (inserted or updated). Allows a user to import only n rows from a file, starting from the record determined by the SKIPCOUNT or RESTARTCOUNT options. If the SKIPCOUNT or RESTARTCOUNT options are not specified, the first n rows are imported. If SKIPCOUNT m or RESTARTCOUNT m is specified, rows m+1 to m+n are imported. When compound inserts are used, a user-specified ROWCOUNT n is rounded up to the first integer multiple of the compound count value.
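The interaction of SKIPCOUNT/RESTARTCOUNT, ROWCOUNT, and the compound-insert rounding rule can be sketched in a few lines of Python. These helpers are hypothetical, purely to illustrate the arithmetic described above:

```python
import math

def effective_count(n, compound_count=1):
    # When compound inserts are used, COMMITCOUNT n and ROWCOUNT n are
    # rounded up to the first integer multiple of the compound count.
    return math.ceil(n / compound_count) * compound_count

def imported_record_range(m, n, compound_count=1):
    # With SKIPCOUNT m (or RESTARTCOUNT m) and ROWCOUNT n, records
    # m+1 through m+n of the input file are imported.
    n = effective_count(n, compound_count)
    return (m + 1, m + n)
```

For example, SKIPCOUNT 100 with ROWCOUNT 10 imports physical records 101 through 110.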
- WARNINGCOUNT n
- Stops the import operation after n warnings. Set this parameter if no warnings are expected, but verification that the correct file and table are being used is required. If the import file or the target table is specified incorrectly, the import utility generates a warning for each row that it attempts to import, which causes the import to fail. If n is zero, or this option is not specified, the import operation continues regardless of the number of warnings issued.
- NOTIMEOUT
- Specifies that the import utility does not time out when waiting for locks. This option supersedes the locktimeout database configuration parameter. Other applications are not affected.
- MESSAGES message-file
- Specifies the destination for warning and error messages that occur during an import operation. If the file exists, the import utility appends the information. If the complete path to the file is not specified, the utility uses the current directory and the default drive as the destination. If message-file is omitted, the messages are written to standard output.
- INSERT
- Adds the imported data to the table without changing the existing table data.
- INSERT_UPDATE
- Adds rows of imported data to the target table, or updates existing rows (of the target table) with matching primary keys.
- REPLACE
- Deletes all existing data from the table by truncating the data object, and inserts the imported data. The table definition and the index definitions are not changed. This option can only be used if the table exists. If this option is used when moving data between hierarchies, only the data for an entire hierarchy, not individual subtables, can be replaced.
This parameter is not valid when you import to a nickname.
This option does not honor the CREATE TABLE statement's NOT LOGGED INITIALLY (NLI) clause or the ALTER TABLE statement's ACTIVATE NOT LOGGED INITIALLY clause.
This option cannot be used to import data into system-period temporal tables.
If an import with the REPLACE option is performed within the same transaction as a CREATE TABLE or ALTER TABLE statement in which the NLI clause is invoked, the import does not honor the NLI clause. All inserts are logged.
- Workaround 1
- Delete the contents of the table by using the DELETE statement, then run import with the INSERT option.
- Workaround 2
- Drop the table and re-create it, then run import with the INSERT option.
- REPLACE_CREATE
-
Note: The REPLACE_CREATE parameter is deprecated and might be removed in a future release. For more information, see IMPORT command options CREATE and REPLACE_CREATE are deprecated.
If the table exists, deletes all existing data from the table by truncating the data object, and inserts the imported data without changing the table definition or the index definitions.
If the table does not exist, creates the table and index definitions, and the row contents, in the code page of the database. See the Imported table re-creation section for a list of restrictions.
This option can only be used with IXF files. If this option is used when moving data between hierarchies, only the data for an entire hierarchy, not individual subtables, can be replaced.
This parameter is not valid when you import to a nickname.
- INTO table-name
- Specifies the database table into which the data is to be imported. This table cannot be a system table, a created temporary table, a declared temporary table, or a summary table.
An alias can be used for INSERT, INSERT_UPDATE, or REPLACE, except when the target is an earlier server; in that case, use the fully qualified or the unqualified table name. A qualified table name is in the form schema.tablename, where schema is the user name under which the table was created.
If the database table contains implicitly hidden columns, you must specify whether data for the hidden columns is included in the import operation. Use one of the following methods to indicate whether data for hidden columns is included:
- Use insert-column to explicitly specify the columns into which data is to be inserted.
db2 import from delfile1 of del insert into table1 (c1, c2, c3,...)
- Use one of the hidden column file type modifiers: specify implicitlyhiddeninclude when the input file contains data for the hidden columns, or implicitlyhiddenmissing when the input file does not.
db2 import from delfile1 of del modified by implicitlyhiddeninclude insert into table1
- Use the DB2_DMU_DEFAULT registry variable on the client side to set the default behavior when data movement utilities encounter tables with implicitly hidden columns.
db2set DB2_DMU_DEFAULT=IMPLICITLYHIDDENINCLUDE
db2 import from delfile1 of del insert into table1
- insert-column
- Specifies the name of a column in the table or the view into which data is to be inserted.
- ALL TABLES
- An implicit keyword for hierarchy only. When importing a hierarchy, the default is to import all tables specified in the traversal order.
- sub-table-list
- For typed tables with the INSERT or the INSERT_UPDATE option, a list of subtable names is used to indicate the subtables into which data is to be imported.
- HIERARCHY
- Specifies that hierarchical data is to be imported.
- STARTING sub-table-name
- A keyword for hierarchy only, requesting the default order, starting from sub-table-name. For PC/IXF files, the default order is the order that is stored in the input file. The default order is the only valid order for the PC/IXF file format.
- traversal-order-list
- For typed tables with the INSERT, INSERT_UPDATE, or the REPLACE option, a list of subtable names is used to indicate the traversal order of the importing subtables in the hierarchy.
- CREATE
-
Note: The CREATE parameter is deprecated and might be removed in a future release. For more information, see IMPORT command options CREATE and REPLACE_CREATE are deprecated.
Creates the table definition and row contents in the code page of the database. If the data was exported from a Db2 table, subtable, or hierarchy, indexes are created. If this option operates on a hierarchy, and data was exported from Db2, a type hierarchy is also created. This option can only be used with IXF files.
This parameter is not valid when you import to a nickname.
Note: If the data was exported from an MVS host database, and it contains LONGVAR fields whose lengths, calculated on the page size, are more than 254, CREATE might fail because the rows are too long. See the Imported table re-creation section for a list of restrictions. In this case, the table can be created manually and IMPORT with INSERT can be invoked, or, alternatively, the LOAD command can be used.
- AS ROOT TABLE
- Creates one or more subtables as a stand-alone table hierarchy.
- UNDER sub-table-name
- Specifies a parent table for creating one or more subtables.
- IN tablespace-name
- Identifies the table space in which the table is created. The table space must exist, and must be a REGULAR table space or LARGE table space. If no other table space is specified, all table parts are stored in this table space. If this clause is not specified, the table is created in a table space that was created by the authorization ID. If none is found, the table is placed into the default table space USERSPACE1. If USERSPACE1 was dropped, table creation fails.
- INDEX IN tablespace-name
- Identifies the table space in which any indexes on the table are created. This option is allowed only when the primary table space that is specified in the IN clause is a DMS table space. The specified table space must exist, and must be a REGULAR or LARGE DMS table space.
Note: Specifying which table space contains an index can only be done when the table is created.
- LONG IN tablespace-name
- Identifies the table space in which the values of any long columns (LONG VARCHAR, LONG VARGRAPHIC, LOB data types, or distinct types with any of these as source types) are stored. This option is allowed only if the primary table space specified in the IN clause is a DMS table space. The table space must exist, and must be a LARGE DMS table space.
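Taken together, the three placement clauses might be combined like this when a table is re-created from an IXF file (the table and table space names are illustrative; INDEX IN and LONG IN require the primary table space named in the IN clause to be DMS):

```
db2 import from staff.ixf of ixf create into newschema.staff in ts_data index in ts_idx long in ts_long
```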
Examples
Example 1
db2 import from myfile.ixf of ixf messages msg.txt insert into staff
SQL3150N The H record in the PC/IXF file has product "DB2 01.00", date
"19970220", and time "140848".
SQL3153N The T record in the PC/IXF file has name "myfile",
qualifier " ", and source " ".
SQL3109N The utility is beginning to load data from file "myfile".
SQL3110N The utility has completed processing. "58" rows were read
from the input file.
SQL3221W ...Begin COMMIT WORK. Input Record Count = "58".
SQL3222W ...COMMIT of any database changes was successful.
SQL3149N "58" rows were processed from the input file. "58" rows were
successfully inserted into the table. "0" rows were rejected.
Example 2 (Importing into a table with an identity column)
TABLE1 has 4 columns:
- C1 VARCHAR(30)
- C2 INT GENERATED BY DEFAULT AS IDENTITY
- C3 DECIMAL(7,2)
- C4 CHAR(1)
TABLE2 is the same as TABLE1, except that C2 is a GENERATED ALWAYS identity column.
DATAFILE1 (DEL format):
"Liszt"
"Hummel",,187.43, H
"Grieg",100, 66.34, G
"Satie",101, 818.23, I
DATAFILE2 (DEL format):
"Liszt", 74.49, A
"Hummel", 0.01, H
"Grieg", 66.34, G
"Satie", 818.23, I
db2 import from datafile1.del of del replace into table1
db2 import from datafile1.del of del method P(1, 3, 4) replace into table1 (c1, c3, c4)
db2 import from datafile1.del of del modified by identityignore replace into table1
db2 import from datafile2.del of del replace into table1 (c1, c3, c4)
db2 import from datafile2.del of del modified by identitymissing replace into table1
If DATAFILE1 is imported into TABLE2 without using any of the identity-related file type modifiers, rows 1 and 2 will be inserted, but rows 3 and 4 will be rejected, because they supply their own non-NULL values, and the identity column is GENERATED ALWAYS.
Examples of using the XMLVALIDATE clause
Example 1 (XMLVALIDATE USING XDS)
XMLVALIDATE USING XDS
IGNORE (S1.SCHEMA_A)
MAP ((S1.SCHEMA_A, S2.SCHEMA_B))
The import would fail due to invalid syntax, since the IGNORE of S1.SCHEMA_A would conflict with the MAP of S1.SCHEMA_A to S2.SCHEMA_B.
Example 2 (XMLVALIDATE USING XDS)
XMLVALIDATE USING XDS
DEFAULT S8.SCHEMA_H
IGNORE (S9.SCHEMA_I, S10.SCHEMA_J)
MAP ((S1.SCHEMA_A, S2.SCHEMA_B), (S3.SCHEMA_C, S5.SCHEMA_E),
(S6.SCHEMA_F, S3.SCHEMA_C), (S4.SCHEMA_D, S7.SCHEMA_G))
<XDS FIL='xmlfile.001.xml' />
The XML schema with SQL identifier "S8.SCHEMA_H" is used to validate the document in file "xmlfile.001.xml", since "S8.SCHEMA_H" was specified as the default schema to use.
<XDS FIL='xmlfile.002.xml' OFF='10' LEN='500' SCH='S10.SCHEMA_J' />
No schema validation occurs for the document in file "xmlfile.002.xml": although the XDS specifies "S10.SCHEMA_J" as the schema to use, that schema is part of the IGNORE clause. The document contents can be found at byte offset 10 in the file (meaning the 11th byte), and are 500 bytes long.
<XDS FIL='xmlfile.003.xml' SCH='S6.SCHEMA_F' />
The XML schema with SQL identifier "S3.SCHEMA_C" is used to validate the document in file "xmlfile.003.xml", because the MAP clause specifies that schema "S6.SCHEMA_F" is to be mapped to schema "S3.SCHEMA_C". Note that further mapping does not take place; therefore, the mapping of schema "S3.SCHEMA_C" to schema "S5.SCHEMA_E" does not apply in this case.
<XDS FIL='xmlfile.004.xml' SCH='S11.SCHEMA_K' />
The XML schema with SQL identifier "S11.SCHEMA_K" is used to validate the document in file "xmlfile.004.xml". Note that none of the DEFAULT, IGNORE, or MAP specifications apply in this case.
Example 3 (XMLVALIDATE USING XDS)
XMLVALIDATE USING XDS
DEFAULT S1.SCHEMA_A
IGNORE (S1.SCHEMA_A)
<XDS FIL='xmlfile.001.xml' />
The XML schema with SQL identifier "S1.SCHEMA_A" is used to validate the document in file "xmlfile.001.xml", since "S1.SCHEMA_A" was specified as the default schema to use.
<XDS FIL='xmlfile.002.xml' SCH='S1.SCHEMA_A' />
No schema validation occurs for the document in file "xmlfile.002.xml": although the XDS specifies "S1.SCHEMA_A" as the schema to use, that schema is part of the IGNORE clause.
Example 4 (XMLVALIDATE USING XDS)
XMLVALIDATE USING XDS
DEFAULT S1.SCHEMA_A
MAP ((S1.SCHEMA_A, S2.SCHEMA_B), (S2.SCHEMA_B, S1.SCHEMA_A))
<XDS FIL='xmlfile.001.xml' />
The XML schema with SQL identifier "S1.SCHEMA_A" is used to validate the document in file "xmlfile.001.xml", since "S1.SCHEMA_A" was specified as the default schema to use. Because the DEFAULT clause was applied, the MAP clause is not applied afterward; therefore, the mapping of schema "S1.SCHEMA_A" to schema "S2.SCHEMA_B" does not apply in this case.
<XDS FIL='xmlfile.002.xml' SCH='S1.SCHEMA_A' />
The XML schema with SQL identifier "S2.SCHEMA_B" is used to validate the document in file "xmlfile.002.xml", because the MAP clause specifies that schema "S1.SCHEMA_A" is to be mapped to schema "S2.SCHEMA_B". Note that further mapping does not take place; therefore, the mapping of schema "S2.SCHEMA_B" to schema "S1.SCHEMA_A" does not apply in this case.
<XDS FIL='xmlfile.003.xml' SCH='S2.SCHEMA_B' />
The XML schema with SQL identifier "S1.SCHEMA_A" is used to validate the document in file "xmlfile.003.xml", because the MAP clause specifies that schema "S2.SCHEMA_B" is to be mapped to schema "S1.SCHEMA_A". Note that further mapping does not take place; therefore, the mapping of schema "S1.SCHEMA_A" to schema "S2.SCHEMA_B" does not apply in this case.
Example 5 (XMLVALIDATE USING SCHEMA)
XMLVALIDATE USING SCHEMA S2.SCHEMA_B
<XDS FIL='xmlfile.001.xml' />
The document in file "xmlfile.001.xml" is validated by using the XML schema with SQL identifier "S2.SCHEMA_B".
<XDS FIL='xmlfile.002.xml' SCH='S1.SCHEMA_A' />
The document in file "xmlfile.002.xml" is validated by using the XML schema with SQL identifier "S2.SCHEMA_B". Note that the SCH attribute is ignored, since validation is being performed by using a schema that is specified by the USING SCHEMA clause.
Example 6 (XMLVALIDATE USING SCHEMALOCATION HINTS)
<XDS FIL='xmlfile.001.xml' />
The XML schema used is determined by the schemaLocation attribute in the document contents, and no validation occurs if one is not present.
<XDS FIL='xmlfile.002.xml' SCH='S1.SCHEMA_A' />
The XML schema used is determined by the schemaLocation attribute in the document contents, and no validation occurs if one is not present. Note that the SCH attribute is ignored, since validation is being performed by using SCHEMALOCATION HINTS.
Usage notes
Be sure to complete all table operations and release all locks before starting an import operation. This can be done by issuing a COMMIT after closing all cursors opened WITH HOLD, or by issuing a ROLLBACK.
If an INSERT statement fails, one of two actions results:
- If it is likely that subsequent INSERT statements can succeed, a warning message is written to the message file, and processing continues.
- If it is likely that subsequent INSERT statements will fail, and there is potential for database damage, an error message is written to the message file, and processing halts.
The utility performs an automatic COMMIT after the old rows are deleted during a REPLACE or a REPLACE_CREATE operation. Therefore, if the system fails, or the application interrupts the database manager after the table object is truncated, all of the old data is lost. Ensure that the old data is no longer needed before using these options.
If the log becomes full during a CREATE, REPLACE, or REPLACE_CREATE operation, the utility performs an automatic COMMIT on inserted records. If the system fails, or the application interrupts the database manager after an automatic COMMIT, a table with partial data remains in the database. Use the REPLACE or the REPLACE_CREATE option to rerun the whole import operation, or use INSERT with the RESTARTCOUNT parameter set to the number of rows successfully imported.
- By default, automatic commits are not performed for the INSERT or the INSERT_UPDATE option. They are, however, performed if the COMMITCOUNT parameter is not zero.
- Offline import does not perform automatic COMMITs if any of the following conditions are true:
- The target is a view, not a table
- Compound inserts are used
- Buffered inserts are used
- By default, online import performs automatic commits to free both the active log space and the lock list. Automatic commits are performed unless a COMMITCOUNT value of zero is specified.
Whenever the import utility performs a COMMIT, two messages are written to the message file: one indicates the number of records to be committed, and the other is written after a successful COMMIT. When restarting the import operation after a failure, specify the number of records to skip, as determined from the last successful COMMIT.
The import utility accepts input data with minor incompatibility problems (for example, character data can be imported by using padding or truncation, and numeric data can be imported with a different numeric data type), but data with major incompatibility problems is not accepted.
To replace data in a table whose primary key is referenced by foreign keys in dependent tables:
- Drop all foreign keys in which the table is a parent.
- Run the import utility.
- Alter the table to re-create the foreign keys.
If an error occurs while recreating the foreign keys, modify the data to maintain referential integrity.
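For example, the drop-and-re-create sequence might look like the following (the table, constraint, column, and file names are illustrative):

```
db2 alter table child drop constraint fk_child_parent
db2 import from parent.ixf of ixf replace into parent
db2 alter table child add constraint fk_child_parent foreign key (parent_id) references parent
```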
Referential constraints and foreign key definitions are not preserved when recreating tables from PC/IXF files. (Primary key definitions are preserved if the data was previously exported by using SELECT *.)
Importing to a remote database requires enough disk space on the server for a copy of the input data file, the output message file, and potential growth in the size of the database.
If an import operation is run against a remote database, and the output message file is very long (more than 60 KB), the message file returned to the user on the client might be missing messages from the middle of the import operation. The first 30 KB of message information and the last 30 KB of message information are always retained.
Importing PC/IXF files to a remote database is much faster if the PC/IXF file is on a hard drive rather than on diskettes.
You cannot use the IMPORT CREATE option with a PC/IXF file format on a table that has an index defined with an expression-based key.
The database table or hierarchy must exist before data in the ASC or DEL file formats can be imported; however, if the table does not already exist, IMPORT CREATE or IMPORT REPLACE_CREATE creates the table when it imports data from a PC/IXF file. For typed tables, IMPORT CREATE can create the type hierarchy and the table hierarchy as well.
PC/IXF import can be used to move data (including hierarchical data) between databases. If character data containing row separators is exported to a delimited ASCII (DEL) file and processed by a text transfer program, fields containing the row separators will shrink or expand. The file copying step is not necessary if the source and the target databases are both accessible from the same client.
The data in ASC and DEL files is assumed to be in the code page of the client application performing the import. PC/IXF files, which allow for different code pages, are recommended when importing data in different code pages. If the PC/IXF file and the import utility are in the same code page, processing occurs as for a regular application. If the two differ:
- If the FORCEIN option is specified, the import utility assumes that data in the PC/IXF file has the same code page as the application performing the import, even if there is a conversion table for the two code pages.
- If the FORCEIN option is not specified and there is a conversion table, all data in the PC/IXF file is converted from the file code page to the application code page.
- If the FORCEIN option is not specified and there is no conversion table, the import operation fails.
This applies only to PC/IXF files on Db2 clients on the AIX® operating system.
For table objects on an 8 KB page that are close to the limit of 1012 columns, import of PC/IXF data files might cause Db2 to return an error, because the maximum size of an SQL statement was exceeded. This situation can occur only if the columns are of type CHAR, VARCHAR, or CLOB. The restriction does not apply to import of DEL or ASC files. If PC/IXF files are being used to create a new table, an alternative is to use db2look to dump the DDL statement that created the table, and then to issue that statement through the CLP.
Db2 Connect can be used to import data to DRDA servers such as Db2 for z/OS®, Db2 for VM and VSE, and Db2 for OS/400®. Only PC/IXF import (INSERT option) is supported. The RESTARTCOUNT parameter, but not the COMMITCOUNT parameter, is also supported.
When using the CREATE option with typed tables, create every sub-table defined in the PC/IXF file; sub-table definitions cannot be altered. When using options other than CREATE with typed tables, the traversal order list enables one to specify the traverse order; therefore, the traversal order list must match the one used during the export operation. For the PC/IXF file format, one need only specify the target sub-table name, and use the traverse order that is stored in the file.
The import utility can be used to recover a table previously exported to a PC/IXF file. The table returns to the state it was in when exported.
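For example, a table exported as follows can later be restored with a matching import (the names are illustrative):

```
db2 export to mytable.ixf of ixf select * from mytable
db2 import from mytable.ixf of ixf replace into mytable
```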
Data cannot be imported to a system table, a created temporary table, a declared temporary table, or a summary table.
Views cannot be created through the import utility.
Importing a multiple-part PC/IXF file whose individual parts are copied from a Windows system to an AIX system is supported. Only the name of the first file must be specified in the IMPORT command. For example, IMPORT FROM data.ixf OF IXF INSERT INTO TABLE1. The files data.002, and so on, must be available in the same directory as data.ixf.
- Importing logically split PC/IXF files is not supported.
- Importing badly formatted PC/IXF files is not supported.
Security labels in their internal format might contain newline characters. If you import the file by using the DEL file format, those newline characters can be mistaken for delimiters. If you have this problem, use the older default priority for delimiters by specifying the delprioritychar file type modifier in the IMPORT command.
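For example, the modifier is specified as follows (the file and table names are illustrative):

```
db2 import from seclabels.del of del modified by delprioritychar insert into t1
```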
If the database table contains implicitly hidden columns, you must specify whether data for the hidden columns is included in the import operation.
The import utility does not require the number of fields in a data file to match the number of columns in the target table. The utility checks for a sufficient amount of data in the data file. If a row in the data file does not contain enough columns of data, the row is either rejected with a warning message, if the corresponding table columns without data are defined as NOT NULL, or inserted successfully without a warning message, if the corresponding table columns are nullable. Conversely, if a row contains more columns than required, the required number of columns is processed, the remaining columns of data are ignored, and no warning message is given.
Federated considerations
When using the IMPORT command with the INSERT, UPDATE, or INSERT_UPDATE command parameters, you must have CONTROL privilege on the participating nickname, and the nickname that you want to use in the import operation must exist. There are also several restrictions that you should be aware of, as shown in the IMPORT command parameters section.
Some data sources, such as ODBC, do not support importing into nicknames.
Column-organized tables
- The IMPORT command inherits all INSERT, UPDATE, or DELETE statement restrictions that pertain to column-organized tables.
- The CREATE INTO parameter of the IMPORT command cannot create a column-organized table.
- Import operations against column-organized tables use the CS isolation level (which is needed for insert operations), not the default RS isolation level, which is set when import packages are bound to the database.
- You cannot use an IXF file that you created by exporting data from a column-organized table with the CREATE parameter or the REPLACE_CREATE parameter of the IMPORT command.
File type modifiers for the import utility
Modifier | Description |
---|---|
compound=x | x is a number between 1 and 100 inclusive. Uses nonatomic compound SQL to
insert the data, and x statements will be attempted each time. If this modifier is specified, and the transaction log is not sufficiently large, the import operation will fail. The transaction log must be large enough to accommodate either the number of rows specified by COMMITCOUNT, or the number of rows in the data file if COMMITCOUNT is not specified. It is therefore recommended that the COMMITCOUNT option be specified to avoid transaction log overflow. This modifier is incompatible with:
|
generatedignore | This modifier informs the import utility that data for all generated columns is present in
the data file but can be ignored. This action results in all values for the generated columns being
generated by the utility. This modifier cannot be used with the generatedmissing
modifier. Random distribution tables that use the random by generation method have an internally generated column called the RANDOM_DISTRIBUTION_KEY. This modifier does not apply to that column, only to other generated columns in the table. Values for the RANDOM_DISTRIBUTION_KEY will be regenerated unless explicitly referenced in the column list. |
generatedmissing | If this modifier is specified, the utility assumes that the input data file contains no data
for the generated columns (not even NULLs), and will therefore generate a value for each row. This
modifier cannot be used with the generatedignore modifier.
Random distribution tables that use the random by generation method have an internally generated column called the RANDOM_DISTRIBUTION_KEY. This modifier does not apply to that column, only to other generated columns in the table. Values for the RANDOM_DISTRIBUTION_KEY will be regenerated unless explicitly referenced in the column list. |
identityignore | This modifier informs the import utility that data for the identity column is present in the
data file but can be ignored. This results in all identity values being generated by the utility.
The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT identity columns.
This means that for GENERATED ALWAYS columns, no rows will be rejected. This modifier cannot be used
with the identitymissing modifier. |
identitymissing | If this modifier is specified, the utility assumes that the input data file contains no data
for the identity column (not even NULLs), and will therefore generate a value for each row. The
behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT identity columns. This
modifier cannot be used with the identityignore modifier. |
implicitlyhiddeninclude | If this modifier is specified, the utility assumes that the input data file
contains data for the implicitly hidden columns and this data will also be imported. This modifier
cannot be used with the implicitlyhiddenmissing modifier. See the Note: section for information about the precedence when
multiple modifiers are specified. This modifier does not apply to the hidden RANDOM_DISTRIBUTION_KEY column of a random distribution table that uses the random by generation method. The value for that column will be read from the input data file only if the column was explicitly referenced in the column list. |
implicitlyhiddenmissing | If this modifier is specified, the utility assumes that the input data file
does not contain data for the implicitly hidden columns and the utility will generate values for
those hidden columns. This modifier cannot be used with the
implicitlyhiddeninclude modifier. See the Note: section for information about the precedence when
multiple modifiers are specified. This modifier does not apply to the hidden RANDOM_DISTRIBUTION_KEY column of a random distribution table that uses the random by generation method. The value for that column will be read from the input data file only if the column was explicitly referenced in the column list. |
lobsinfile | lob-path specifies the path to the files containing LOB data. Each path contains at least one file that contains at least one LOB pointed to by a Lob Location Specifier (LLS) in the data file. The LLS is a string representation of the location of a LOB in a file stored in the LOB file path. The format of an LLS is filename.ext.nnn.mmm/, where filename.ext is the name of the file that contains the LOB, nnn is the offset in bytes of the LOB within the file, and mmm is the length of the LOB in bytes. For example, if the string db2exp.001.123.456/ is stored in the data file, the LOB is located at offset 123 in the file db2exp.001, and it is 456 bytes long. The LOBS FROM clause specifies where the LOB files are located when the lobsinfile modifier is used. To indicate a null LOB, enter the size as -1. If the size is specified as 0, it is treated as a 0 length LOB. For null LOBS with length of -1, the offset and the file name are ignored. For example, the LLS of a null LOB might be db2exp.001.7.-1/. |
no_type_id | Valid only when importing into a single sub-table. Typical usage is to export data from a regular table, and then to invoke an import operation (by using this modifier) to convert the data into a single sub-table. |
nodefaults | If a source column for a target table column is not explicitly specified, and the table
column is not nullable, default values are not loaded. Without this option, if a source column for
one of the target table columns is not explicitly specified, one of the following occurs:
|
norowwarnings | Suppresses all warnings about rejected rows. |
periodignore | This modifier informs the import utility that data for the period columns is present in the data file but can be ignored. When this modifier is specified, all period column values are generated by the utility. This modifier cannot be used with the periodmissing modifier. |
periodmissing | If this modifier is specified, the utility assumes that the input data file contains no data for the period columns. When this modifier is specified, all period column values are generated by the utility. This modifier cannot be used with the periodignore modifier. |
rowchangetimestampignore | This modifier informs the import utility that data for the row change
timestamp column is present in the data file but can be ignored. This results in all ROW CHANGE
TIMESTAMP being generated by the utility. The behavior will be the same for both GENERATED ALWAYS
and GENERATED BY DEFAULT columns. This means that for GENERATED ALWAYS columns, no rows will be
rejected. This modifier cannot be used with the rowchangetimestampmissing
modifier. |
rowchangetimestampmissing | If this modifier is specified, the utility assumes that the input data file
contains no data for the row change timestamp column (not even NULLs), and will therefore generate a
value for each row. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT
columns. This modifier cannot be used with the rowchangetimestampignore
modifier. |
seclabelchar | Indicates that security labels in the input source file are in the string
format for security label values rather than in the default encoded numeric format.
IMPORT converts each security label into the internal format as it is loaded. If a string is not in the proper format, the row is not loaded and a warning (SQLSTATE 01H53) is returned. If the string does not represent a valid security label that is part of the security policy protecting the table, then the row is not loaded and a warning (SQLSTATE 01H53, SQLCODE SQL3243W) is returned. This modifier cannot be specified together with the seclabelname modifier; otherwise the import fails and an error (SQLCODE SQL3525N) is returned. |
seclabelname | Indicates that security labels in the input source file are indicated by their
name rather than the default encoded numeric format. IMPORT will convert the name to the appropriate security label if it exists. If no security label exists with the indicated name for the security policy protecting the table, the row is not loaded and a warning (SQLSTATE 01H53, SQLCODE SQL3244W) is returned. This modifier cannot be specified together with the seclabelchar modifier; otherwise the import fails and an error (SQLCODE SQL3525N) is returned. Note: If the file type is ASC, any spaces following the name of the security label are interpreted as part of the name. To avoid this, use the striptblanks file type modifier to make sure the spaces are removed.
|
transactionidignore | This modifier informs the import utility that data for the TRANSACTION START ID column is present in the data file but can be ignored. When this modifier is specified, the value for the TRANSACTION START ID column is generated by the utility. This modifier cannot be used with the transactionidmissing modifier. |
transactionidmissing | If this modifier is specified, the utility assumes that the input data file contains no data for the TRANSACTION START ID columns. When this modifier is specified, the value for the TRANSACTION START ID column is generated by the utility. This modifier cannot be used with the transactionidignore modifier. |
usedefaults | If a source column for a target table column has been specified, but it contains no data for
one or more row instances, default values are loaded. Examples of missing data are:
You cannot use the usedefaults file type modifier in INSERT_UPDATE mode if the input data is missing the value for a column that is part of a primary key when trying to update an existing row. In this case, the existing row is not updated and SQL3116W is returned. |
Modifier | Description |
---|---|
codepage=x | x is an ASCII character string. The value is interpreted as the code page
of the data in the input data set. Converts character data from this code page to the application
code page during the import operation. The following rules apply:
Note:
|
dateformat="x" | x is the format of the date in the source file.2 Valid date
elements are:
A default value of 1 is assigned for each element that is not specified. Some examples of
date formats are:
|
implieddecimal | The location of an implied decimal point is determined by the column definition; it is no longer assumed to be at the end of the value. For example, the value 12345 is loaded into a DECIMAL(8,2) column as 123.45, not 12345.00. |
timeformat="x" | x is the format of the time in the source file.2 Valid time
elements are:
A default value of 0 is assigned for each element that is not specified.
Some examples of time formats are:
|
timestampformat="x" | x is the format of the time stamp in the source file.2 Valid
time stamp elements are:
A default value of 1 is assigned for unspecified YYYY, M, MM, D, DD, or DDD
elements. A default value of 'Jan' is assigned to an unspecified MMM element. A
default value of 0 is assigned for all other unspecified elements. Following is an
example of a time stamp format:
The valid values for the MMM element include: 'jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov' and 'dec'. These values are case insensitive. The
following example illustrates how to import data containing user defined date and time formats into
a table called schedule:
|
usegraphiccodepage | If usegraphiccodepage is given, the assumption is made that data being
imported into graphic or double-byte character large object (DBCLOB) data fields is in the graphic
code page. The rest of the data is assumed to be in the character code page. The graphic code page
is associated with the character code page. IMPORT determines the character code page through either
the codepage modifier, if it is specified, or through the code page of the
application if the codepage modifier is not specified. This modifier can be used in conjunction with the delimited data file generated by drop table recovery only if the table being recovered has graphic data. |
xmlchar | Specifies that XML documents are encoded in the character code page. This option is useful for processing XML documents that are encoded in the specified character code page but do not contain an encoding declaration. For each document, if a declaration tag exists and contains
an encoding attribute, the encoding must match the character code page, otherwise the row containing
the document will be rejected. Note that the character code page is the value specified by the
|
xmlgraphic | Specifies that XML documents are encoded in the specified graphic code page. This option is useful for processing XML documents that are encoded in a specific graphic code page but do not contain an encoding declaration. For each document, if a declaration tag exists and contains
an encoding attribute, the encoding must match the graphic code page, otherwise the row containing
the document will be rejected. Note that the graphic code page is the graphic component of the value
specified by the Note: If the
xmlgraphic modifier is specified with the IMPORT command, the
XML document to be imported must be encoded in the UTF-16 code page. Otherwise, the XML document can
be rejected with a parsing error, or it can be imported into the table with data
corruption. |
Modifier | Description |
---|---|
nochecklengths | If nochecklengths is specified, an attempt is made to import each row, even
if the source data has a column definition that exceeds the size of the target table column. Such
rows can be successfully imported if code page conversion causes the source data to shrink; for
example, 4-byte EUC data in the source could shrink to 2-byte DBCS data in the target, and require
half the space. This option is particularly useful if it is known that the source data will fit in
all cases despite mismatched column definitions. |
nullindchar=x | x is a single character. Changes the character denoting a null value to
x. The default value of x is Y.3
This modifier is case sensitive for EBCDIC data files, except when the character is an English
letter. For example, if the null indicator character is specified to be the letter
|
reclen=x | x is an integer with a maximum value of 32 767. x characters are read for each row, and a new-line character is not used to indicate the end of the row. |
striptblanks | Truncates any trailing blank spaces when loading data into a variable-length field. If this
option is not specified, blank spaces are kept. In the following example,
striptblanks causes the import utility to truncate trailing blank spaces:
This option cannot be specified together with striptnulls. |
striptnulls | Truncates any trailing NULLs (0x00 characters) when loading data into a variable-length
field. If this option is not specified, NULLs are kept. This option cannot be specified together with striptblanks. |
Modifier | Description |
---|---|
chardelx | x is a single character string delimiter. The default value is a double
quotation mark ("). The specified character is used in place of double quotation marks to enclose a
character string.34 If you want to explicitly specify the double quotation
mark as the character string delimiter, it can be specified as follows:
The single quotation mark (') can also be
specified as a character string delimiter. In the following example,
chardel''
causes the import utility to interpret any single quotation mark (') it encounters as a character
string delimiter:
|
coldelx | x is a single character column delimiter. The default value is a comma
(,). The specified character is used in place of a comma to signal the end of a
column.34
In the following example,
coldel; causes the import utility to interpret any
semicolon (;) it encounters as a column delimiter:
|
decplusblank | Plus sign character. Causes positive decimal values to be prefixed with a blank space instead of a plus sign (+). The default action is to prefix positive decimal values with a plus sign. |
decptx | x is a single character substitute for the period as a decimal point
character. The default value is a period (.). The specified character is used in place of a period
as a decimal point character.34
In the following example,
decpt; causes the import utility to interpret any
semicolon (;) it encounters as a decimal point:
|
delprioritychar | The current default priority for delimiters is: record delimiter, character delimiter, column
delimiter. This modifier protects existing applications that depend on the older priority by
reverting the delimiter priorities to: character delimiter, record delimiter, column delimiter.
Syntax:
For example,
given the following DEL data file:
With
the |
keepblanks | Preserves the leading and trailing blanks in each field of type CHAR, VARCHAR, LONG VARCHAR, or CLOB. Without this option, all leading and trailing blanks that are not inside character delimiters are removed, and a NULL is inserted into the table for all blank fields. |
nochardel | The import utility will assume all bytes found between the column delimiters to be part of the column's data. Character delimiters will be parsed as part of column data. This option should not be specified if the data was exported by using Db2 (unless nochardel was specified at export time). It is provided to support vendor data files that do not have character delimiters. Improper usage might result in data loss or corruption. This option cannot be specified with chardelx. |
nodoubledel | Suppresses recognition of double character delimiters. |
Modifier | Description |
---|---|
forcein | Directs the utility to accept data despite code page mismatches, and to suppress translation
between code pages. Fixed length target fields are checked to verify that they are large enough
for the data. If |
indexixf | Directs the utility to drop all indexes currently defined on the existing table, and to
create new ones from the index definitions in the PC/IXF file. This option can only be used when the
contents of a table are being replaced. It cannot be used with a view, or when an insert-column is specified. Note: The indexixf parameter is deprecated and might be removed in a future release. For more details, see IMPORT command options CREATE and REPLACE_CREATE have been deprecated. |
indexschema=schema | Uses the specified schema for the index name during index creation. If
schema is not specified (but the keyword indexschema
is specified), uses the connection user ID. If the keyword is not specified, uses the schema
in the IXF file. |
nochecklengths | If nochecklengths is specified, an attempt is made to import each row, even
if the source data has a column definition that exceeds the size of the target table column. Such
rows can be successfully imported if code page conversion causes the source data to shrink; for
example, 4-byte EUC data in the source could shrink to 2-byte DBCS data in the target, and require
half the space. This option is particularly useful if it is known that the source data will fit in
all cases despite mismatched column definitions. |
forcecreate | Specifies that the table can be created with possible missing or limited information after returning SQL3311N during an import operation. |
codepage=N | usegraphiccodepage | IMPORT behavior |
---|---|---|
Absent | Absent | All data in the file is assumed to be in the application code page. |
Present | Absent | All data in the file is assumed to be in code page N . Warning:
Graphic data will be corrupted when imported into the database if |
Absent | Present | Character data in the file is assumed to be in the application code page. Graphic data is
assumed to be in the code page of the application graphic data. If the application code page is single-byte, then all data is assumed to be in the application code page. Warning: If the application code page is single-byte, graphic data will be corrupted when imported into the database, even if the database contains graphic columns. |
Present | Present | Character data is assumed to be in code page N . Graphic data is assumed to
be in the graphic code page of N . If Warning: Graphic data will be corrupted when imported into the
database if |
- The import utility does not issue a warning if an attempt is made to use unsupported file types with the MODIFIED BY option. If this is attempted, the import operation fails, and an error code is returned.
- Double quotation marks around the date format string are mandatory. Field separators cannot
contain any of the following: a-z, A-Z, and 0-9. The field separator cannot be the same as the
character delimiter or field delimiter in the DEL file format. A field separator is optional if the
start and end positions of an element are unambiguous. Ambiguity can exist if (depending on the
modifier) elements such as D, H, M, or S are used, because of the variable length of the entries.
For time stamp formats, care must be taken to avoid ambiguity between the month and the minute descriptors, since they both use the letter M. A month field must be adjacent to other date fields. A minute field must be adjacent to other time fields. Following are some ambiguous time stamp formats:
"M" (could be a month, or a minute)
"M:M" (Which is which?)
"M:YYYY:M" (Both are interpreted as month.)
"S:M:YYYY" (adjacent to both a time value and a date value)
In ambiguous cases, the utility will report an error message, and the operation will fail.
Following are some unambiguous time stamp formats:
"M:YYYY" (Month)
"S:M" (Minute)
"M:YYYY:S:M" (Month....Minute)
"M:H:YYYY:M:D" (Minute....Month)
Some characters, such as double quotation marks and back slashes, must be preceded by an escape character (for example, \).
- Character values provided for the chardel, coldel, or decpt file type modifiers must be
specified in the code page of the source data. The character code point (instead of the character symbol) can be specified by using the syntax xJJ or 0xJJ, where JJ is the hexadecimal representation of the code point. For example, to specify the # character as a column delimiter, use one of the following statements:
... modified by coldel# ... ... modified by coldel0x23 ... ... modified by coldelX23 ...
- Delimiter considerations for moving data lists restrictions that apply to the characters that can be used as delimiter overrides.
- The following file type modifiers are not allowed when importing into a nickname:
indexixf
indexschema
dldel
nodefaults
usedefaults
no_type_id
generatedignore
generatedmissing
identityignore
identitymissing
lobsinfile
- The CREATE mode is not supported for XML columns.
- All XML data must reside in XML files that are separate from the main data file. An XML Data Specifier (XDS) (or a NULL value) must exist for each XML column in the main data file.
- XML documents are assumed to be in Unicode format or to contain a declaration tag that includes an encoding attribute, unless the XMLCHAR or XMLGRAPHIC file type modifier is specified.
- Rows containing documents that are not well-formed will be rejected.
- If the XMLVALIDATE option is specified, documents that successfully
validate against their matching schema will be annotated with the schema information as they are
inserted. Rows containing documents that fail to validate against their matching schema will be
rejected. To successfully perform the validation, the privileges held by the user invoking the
import must include at least one of the following:
- DBADM authority
- USAGE privilege on the XML schema to be used in the validation
- When multiple modifiers suffixed with ignore, include, missing, and override are specified, they are applied in the order that they are listed. In the following statement, data for implicitly hidden columns that are not identity columns is included in the input data, while data for all identity columns, regardless of their implicitly hidden status, is not:
db2 import from delfile1 of del modified by implicitlyhiddeninclude identitymissing insert into table1
However, changing the order of the file type modifiers, as in the following statement, means that data for all implicitly hidden columns (including hidden identity columns) is included in the input data, while data for identity columns that are not implicitly hidden is not:
db2 import from delfile1 of del modified by identitymissing implicitlyhiddeninclude insert into table1
If the DB2_DMU_DEFAULT registry variable is set to IMPLICITLYHIDDENINCLUDE, then:
db2set DB2_DMU_DEFAULT=IMPLICITLYHIDDENINCLUDE
db2 import from delfile1 of del modified by identitymissing insert into table1
is equivalent to:
db2 import from delfile1 of del modified by implicitlyhiddeninclude identitymissing insert into table1