LOAD command

The LOAD command efficiently loads large amounts of data into a Db2® table.

The LOAD command loads data at the page level, bypasses trigger firing and logging, and delays constraint checking and index building until after the data is loaded into the Db2 table.

Data that is stored on the server can be in the form of a file, tape, or named pipe. Data that is stored on a remotely connected client can be in the form of a fully qualified file or named pipe. Data can also be loaded from a user-defined cursor or by using a user-written script or application. If the COMPRESS attribute for the table is set to YES, the data that is loaded is subject to compression on every data and database partition for which a dictionary exists in the table, including data in the XML storage object of the table.


Restrictions

The load utility does not support loading data at the hierarchy level. The load utility is not compatible with range-clustered tables. The load utility does not support the NOT LOGGED INITIALLY parameter for the CREATE TABLE or ALTER TABLE statements.

Scope

This command can be issued against multiple database partitions in a single request.

Authorization

You must have one or more of the following authorities to run the LOAD command:
  • DATAACCESS authority on the database
  • DATAACCESS authority on the schema of the table
  • LOAD authority on the database or LOAD authority on the schema and the following privileges:
    • INSERT or INSERTIN privilege on the table when the load utility is called in the following modes:
      • INSERT mode
      • TERMINATE mode (to end a previous load insert operation)
      • RESTART mode (to restart a previous load insert operation)
    • INSERT and DELETE privileges on the table or INSERTIN and DELETEIN privileges on the schema when the load utility is called in the following modes:
      • REPLACE mode
      • TERMINATE mode (to end a previous load replace operation)
      • RESTART mode (to restart a previous load replace operation)
    • INSERT privilege on the exception table or INSERTIN privilege on the schema that contains the exception table, if such a table is used as part of the load operation.
  • To load data into a table that contains protected columns, the session authorization ID must have LBAC credentials. These credentials can be direct or indirect, through a group or a role that allows write access to all protected columns in the table. Otherwise, the load fails and an error is returned (SQLSTATE 5U014).
  • To load data into a table that contains protected rows, the session authorization ID must hold a security label that meets these criteria:
    • The security label is part of the security policy that protects the table.
    • The security label was granted to the session authorization ID directly or indirectly through a group or a role for write access or for all access.
    If the session authorization ID does not hold such a security label, the load fails and an error (SQLSTATE 5U014) is returned. That security label is used to protect a loaded row if the LBAC credentials of the session authorization ID do not allow it to write to the security label that protects that row in the data. However, this substitution does not happen when the security policy that protects the table was created with the RESTRICT NOT AUTHORIZED WRITE SECURITY LABEL option of the CREATE SECURITY POLICY statement. In this case, the load fails and an error (SQLSTATE 42519) is returned.

    When you load data into a table with protected rows, the target table has one column with a data type of DB2SECURITYLABEL. If the input row of data does not contain a value for that column, that row is rejected unless the usedefaults file type modifier is specified in the load command. In this case, the security label you hold for write access from the security policy that protects the table is used. If you do not hold a security label for write access, the row is rejected and processing continues on to the next row.

  • For a table with protected rows, if the REPLACE option is specified, the session authorization ID must have the authority to drop the table.
  • If the LOCK WITH FORCE option is specified, any of SYSMAINT, SYSCTRL, or SYSADM authority is required.
  • If row access control is activated on the table, LOAD REPLACE on that table requires the ability to drop the table. Specifically, you must have either CONTROL privilege on the table, DROPIN privilege or SCHEMAADM authority on the schema that contains the table, or DBADM authority on the database.

Db2 server processes, including all load processes, are owned by the instance owner. Because these processes use the identification of the instance owner to access needed files, the instance owner must have read access to the input data files, regardless of who runs the command.

Required connection

Database. If implicit connect is enabled, a connection to the default database is established.

Instance. An explicit attachment is not required. If a connection to the database is established, an implicit attachment to the local instance is attempted.

Command syntax

In this bracketed rendering of the syntax diagram, [ ] marks optional items, { | } marks alternatives, ,... marks repeatable items, and (1)-(3) refer to the notes that follow:

LOAD [CLIENT] FROM { filename | remotefilename | pipename | device | cursorname } [,...] OF filetype
  [LOBS FROM lob-path [,...]] [XML FROM xml-path [,...]]
  [MODIFIED BY file-type-mod [...]]
  [METHOD { L (col-start col-end [,...]) [NULL INDICATORS (null-indicator-list)]
          | N (col-name [,...])
          | P (col-position [,...]) }]
  [XMLPARSE {STRIP | PRESERVE} WHITESPACE]
  [XMLVALIDATE USING { XDS [DEFAULT schema-sqlid] [IGNORE (schema-sqlid [,...])]
                           [MAP ((schema-sqlid, schema-sqlid) [,...])]
                     | SCHEMA schema-sqlid
                     | SCHEMALOCATION HINTS }]
  [SAVECOUNT n] [ROWCOUNT n] [WARNINGCOUNT n] [MESSAGES message-file]
  [TEMPFILES PATH temp-pathname]
  { INSERT | REPLACE [KEEPDICTIONARY | RESETDICTIONARY | RESETDICTIONARYONLY] | RESTART | TERMINATE }
  INTO table-name [(insert-column [,...])]
  [FOR EXCEPTION table-name [NORANGEEXC (1)(2)] [NOUNIQUEEXC (1)(2)]]
  [STATISTICS { NO (3) | USE PROFILE }]
  [COPY { NO | YES { USE TSM [OPEN num-sess SESSIONS]
                   | TO device/directory [,...] [MULTIPARTSIZEMB <size>]
                   | LOAD lib-name [OPEN num-sess SESSIONS] } }
   | NONRECOVERABLE]
  [WITHOUT PROMPTING] [DATA BUFFER buffer-size] [SORT BUFFER buffer-size]
  [CPU_PARALLELISM n] [DISK_PARALLELISM n]
  [INDEXING MODE {AUTOSELECT | REBUILD | INCREMENTAL | DEFERRED}]
  [ALLOW NO ACCESS | ALLOW READ ACCESS [USE tablespace-name]]
  [FETCH_PARALLELISM {YES | NO}]
  [SET INTEGRITY PENDING CASCADE {IMMEDIATE | DEFERRED}]
  [LOCK WITH FORCE]
  [SOURCEUSEREXIT executable
     [REDIRECT [INPUT FROM {BUFFER input-buffer | FILE input-file}] [OUTPUT TO FILE output-file]]
     [PARALLELIZE]]
  [PARTITIONED DB CONFIG partitioned-db-option [...]]
Notes:
  • 1 These keywords can appear in any order.
  • 2 Each of these keywords can appear only one time.
  • 3 For column-organized tables, the default is the STATISTICS USE PROFILE parameter.

Command parameters

CLIENT
Specifies that the data to be loaded exists on a remotely connected client. This option is ignored if the load operation is not being run from a remote client, or if it is specified with the CURSOR file type.
Note:
  1. The dumpfile and lobsinfile modifiers and the XML FROM option refer to files on the server even when the CLIENT keyword is specified.
  2. Code page conversion is not done during a remote load operation. If the code page of the data is different than the code page of the server, specify the data code page by using the codepage modifier.
  3. The CLIENT keyword, when specified along with a remote storage object and run on a remote host, is deprecated and will be discontinued in a future release. See Deprecated Functionality 11.5.7. When the DB2_ENABLE_COS_SDK registry variable is set to ON, LOAD CLIENT with a remote storage alias and run on a remote host silently uses the existing libcurl method (subject to its known limitations). See DB2_ENABLE_COS_SDK.
In the following example, a data file (/u/user/data.del) that exists on a remotely connected client is loaded into MYTABLE in the server database:
db2 load client from /u/user/data.del of del
   modified by codepage=850 insert into mytable
FROM filename | remotefilename | pipename | device | cursorname
Specifies the file, pipe, device, or cursor of an SQL statement that contains the data to load. If the input source is a file, pipe, or device, it must exist on the database partition where the database exists, unless the CLIENT option is specified.
A remotefilename refers to a file that is on remote storage, such as IBM® Cloud Object Storage or Amazon Simple Storage Service (S3), and is accessed by using a storage access alias. Local staging space is required to temporarily store the file that is transferred from the remote storage server; refer to Remote storage requirements. The syntax of remote file names is:
DB2REMOTE://<alias>//<storage-path>/<file-name>
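For example, assuming a storage access alias named MYCOSALIAS has already been cataloged for the remote storage server (the alias, storage path, and file name here are illustrative only), a load from remote storage might look like this:

db2 load from DB2REMOTE://MYCOSALIAS//staging/data.del of del
   insert into mytable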
If several names are specified, they are processed in sequence. If the last item specified is a tape device, the user is prompted for another tape. The following list shows valid response options:
c
Continue to use the device that generated the warning message (for example, when a new tape is mounted).
d
Stop using the device that generated the warning message (for example, when you have no more tapes).
t
Stop all devices.
Note:
  • If the server is remote, a fully qualified file name is required. If the database exists on the same database partition as the caller, relative paths can be used.
  • If data is exported into a file with the EXPORT command using the ADMIN_CMD procedure, the data file is owned by the fenced user ID. This file is not usually accessible by the instance owner. To run the LOAD from the CLP or the ADMIN_CMD procedure, the data file must be accessible by the instance owner ID. Therefore, read access to the data file must be granted to the instance owner.
  • Loading data from multiple IXF files is supported if the files are physically separate but logically one file. It is not supported if the files are both logically and physically separate. If more than one logically and physically separate file is specified, any file after the first one is ignored. (Multiple physical files are considered one logical file if you created them with a single invocation of the EXPORT command.)
  • If you load data that exists on a client computer, the data must be in the form of either a fully qualified file or a named pipe.
  • When you load XML data from files into tables in a partitioned database environment, the XML data files must be read-accessible to all the database partitions where loading is taking place.
OF filetype
Specifies the format of the data:
  • ASC (nondelimited ASCII format)
  • DEL (delimited ASCII format)
  • IXF (Integration Exchange Format, PC version) is a binary format that is used exclusively by Db2 databases.
  • CURSOR (a cursor declared against a SELECT or VALUES statement).
Note:
  • When you use a CURSOR file type to load XML data into a table in a distributed database environment, the PARTITION_ONLY and LOAD_ONLY modes are not supported.
  • Note the following difference when you run the LOAD command with a CURSOR file type:
    • If the DATABASE keyword was specified during the DECLARE CURSOR statement, the LOAD command internally creates a separate application to fetch the data.
    • If the DATABASE keyword is not specified during the DECLARE CURSOR statement, the LOAD command fetches data within the same application.
    This difference between the two cases can also cause different locking behavior. In particular, lock contention (such as a lock wait or a lock timeout, depending on the database configuration) might occur if the following conditions are met:
    • You specify the DATABASE keyword for the same database as the currently connected database.
    • You connect to the same database as the current connection with the same user ID and password.

    The workaround for this lock issue is to omit the DATABASE keyword.

LOBS FROM lob-path
The path to the data files that contain LOB values to be loaded. The path must end with a slash. If the CLIENT option is specified, the path must be fully qualified. The names of the LOB data files are stored in the main data file (ASC, DEL, or IXF), in the column that is loaded into the LOB column. The maximum number of paths that can be specified is 999. This clause implicitly activates the LOBSINFILE behavior.

This option is ignored when specified with the CURSOR file type.
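For example, the following sketch (paths and names are illustrative) loads a DEL file whose LOB column values name files that are resolved against two LOB paths:

db2 load from inputfile.del of del
   lobs from /u/user/lobs1/, /u/user/lobs2/
   insert into mytable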

MODIFIED BY file-type-mod
Specifies file type modifier options. See File type modifiers for the load utility.
METHOD
L
Specifies the start and end column numbers from which to load data. A column number is a byte offset from the beginning of a row of data. It is numbered starting from 1. This method can be used only with ASC files, and is the only valid method for that file type.
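For example, the following sketch (byte positions and names are illustrative) loads two fixed-position fields from an ASC file, mapping bytes 1 through 10 to the first column and bytes 11 through 15 to the second:

db2 load from datafile.asc of asc
   method L (1 10, 11 15)
   insert into mytable (c1, c2)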
NULL INDICATORS null-indicator-list
This option can be used only when the METHOD L parameter is specified (to indicate that the input file is an ASC file). The null indicator list is a comma-separated list of positive integers that specify the column number of each null indicator field. The column number is the byte offset of the null indicator field from the beginning of a row of data. The null indicator list must contain one entry for each data field that is defined in the METHOD L parameter. A column number of zero indicates that the corresponding data field always contains data.

A value of Y in the NULL indicator column specifies that the column data is NULL. Any character other than Y in the NULL indicator column specifies that the column data is not NULL and column data that is specified by the METHOD L option is loaded.

The NULL indicator character can be changed by using the MODIFIED BY option.
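Continuing the METHOD L sketch (positions are illustrative), bytes 16 and 17 hold the null indicator characters for the two data fields; an entry of zero would instead mark a field that always contains data:

db2 load from datafile.asc of asc
   method L (1 10, 11 15)
   null indicators (16, 17)
   insert into mytable (c1, c2)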

N
Specifies the names of the columns in the data file to be loaded. The case of these column names must match the case of the corresponding names in the system catalogs. Each table column that is not nullable must have a corresponding entry in the METHOD N list. For example, assume that data fields are F1, F2, F3, F4, F5, and F6, and table columns are C1 INT, C2 INT NOT NULL, C3 INT NOT NULL, and C4 INT. In this scenario, method N (F2, F1, F4, F3) is a valid request. However, method N (F2, F1) is not a valid request. This method can be used only with file types IXF or CURSOR.
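Using the scenario above, a load by column name from an IXF file (the file name is illustrative) might look like this:

db2 load from datafile.ixf of ixf
   method N (F2, F1, F4, F3)
   insert into mytable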
P
Specifies the field numbers (numbered from 1) of the input data fields to be loaded. Each table column that is not nullable must have a corresponding entry in the METHOD P list. For example, assume that data fields are F1, F2, F3, F4, F5, and F6, and table columns are C1 INT, C2 INT NOT NULL, C3 INT NOT NULL, and C4 INT. In this scenario, method P (2, 1, 4, 3) is a valid request. However, method P (2, 1) is not a valid request. This method can be used only with file types IXF, DEL, or CURSOR, and is the only valid method for the DEL file type.
For each of the fields specified by method P, you must define a corresponding column in the action statement, unless all columns are accounted for or only the first x columns are to be loaded. This scenario is shown in the following example:
  db2 load from datafile1.del of del method P(1, 3, 4)
      replace into table1 (c1, c3, c4) 
XML FROM xml-path
Specifies one or more paths that contain the XML files. XDSs are contained in the main data file (ASC, DEL, or IXF), in the column that is loaded into the XML column.
XMLPARSE
Specifies how XML documents are parsed. If this option is not specified, the parsing behavior for XML documents is determined by the value of the CURRENT IMPLICIT XMLPARSE OPTION special register.
STRIP WHITESPACE
Specifies to remove white space when the XML document is parsed.
PRESERVE WHITESPACE
Specifies not to remove white space when the XML document is parsed.
XMLVALIDATE
Specifies that XML documents are validated against a schema, when applicable.
USING XDS
XML documents are validated against the XML schema that is identified by the XML Data Specifier (XDS) in the main data file. By default, if the XMLVALIDATE option is used with the USING XDS clause, the schema that is used for validation is determined by the SCH attribute of the XDS. If an SCH attribute is not present in the XDS, no schema validation occurs unless a default schema is specified by the DEFAULT clause.
The DEFAULT, IGNORE, and MAP clauses can be used to modify the schema determination behavior. These three optional clauses apply directly to the specifications of the XDS, and not to each other. For example, if a schema is selected because it is specified by the DEFAULT clause, it is not ignored if it is also specified by the IGNORE clause. Similarly, a schema is not remapped again if the following conditions are met:
  • The schema is selected because it is specified as the second part of a pair in the MAP clause.
  • The schema is also specified in the first part of another MAP clause pair.
USING SCHEMA schema-sqlid
XML documents are validated against the XML schema with the specified SQL identifier. In this case, the SCH attribute of the XDS is ignored for all XML columns.
USING SCHEMALOCATION HINTS
XML documents are validated against the schemas that are identified by XML schema location hints in the source XML documents. If a schemaLocation attribute is not found in the XML document, no validation occurs. When the USING SCHEMALOCATION HINTS clause is specified, the SCH attribute of the XDS is ignored for all XML columns.
See examples of the XMLVALIDATE option in the following section.
IGNORE schema-sqlid
This option can be used only when the USING XDS parameter is specified. The IGNORE clause specifies a list of one or more schemas to ignore if they are identified by an SCH attribute. No schema validation occurs for the loaded XML document if both of the following conditions are met:
  • An SCH attribute exists in the XML Data Specifier (XDS) for a loaded XML document.
  • The schema that is identified by the SCH attribute is included in the list of schemas to ignore.
Note:

If a schema is specified in the IGNORE clause, it cannot also be present in the left side of a schema pair in the MAP clause.

The IGNORE clause applies only to the XDS. A schema that is mapped by the MAP clause is not ignored later if it is specified by the IGNORE clause.

DEFAULT schema-sqlid
This option can be used only when the USING XDS parameter is specified. The schema that is specified in the DEFAULT clause identifies a schema to use for validation when the XDS of an XML document does not contain an SCH attribute that identifies an XML schema.

The DEFAULT clause takes precedence over the IGNORE and MAP clauses. If an XDS satisfies the DEFAULT clause, the IGNORE and MAP specifications are ignored.

MAP schema-sqlid
This option can be used only when the USING XDS parameter is specified. Use the MAP clause to specify alternative schemas to use in place of schemas specified by the SCH attribute of an XDS for each loaded XML document. The MAP clause specifies a list of one or more schema pairs, where each pair represents a mapping of one schema to another. The first schema in the pair represents a schema that is referred to by an SCH attribute in an XDS. The second schema in the pair represents the schema to be used for schema validation.

If a schema is present in the left side of a schema pair in the MAP clause, it cannot also be specified in the IGNORE clause.

After a schema pair mapping is applied, the result is final. The mapping operation is nontransitive. Therefore, the schema that is chosen is not applied to another schema pair mapping later.

A schema cannot be mapped more than one time, meaning that it cannot appear on the left side of more than one pair.
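The following sketch (all schema identifiers, paths, and names are hypothetical) combines the three clauses: documents whose XDS has no SCH attribute are validated against S1.SCHEMA_A, documents whose SCH attribute names S1.SCHEMA_B are not validated, and documents whose SCH attribute names S1.SCHEMA_C are validated against S1.SCHEMA_D:

db2 load from data.del of del xml from /u/user/xmlpath/
   xmlvalidate using xds
   default S1.SCHEMA_A
   ignore (S1.SCHEMA_B)
   map ((S1.SCHEMA_C, S1.SCHEMA_D))
   insert into mytable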

SAVECOUNT n
Specifies that the load utility is to establish consistency points after every n rows. This value is converted to a page count, and rounded up to intervals of the extent size. Because a message is issued at each consistency point, select this option if the load operation is monitored by using LOAD QUERY. If the value of n is not sufficiently high, the synchronization of activities that are done at each consistency point might impact performance.

The default value is zero, meaning that no consistency points are established, unless necessary.

This option is not allowed when specified with the CURSOR file type or when you load a table that contains an XML column.

The SAVECOUNT parameter is not supported for column-organized tables.
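For example, the following sketch (values and names are illustrative) establishes a consistency point approximately every 50000 rows; progress can then be checked from another session with the LOAD QUERY command:

db2 load from datafile.del of del savecount 50000
   messages load.msg insert into mytable
db2 load query table mytable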

ROWCOUNT n
Specifies the number n of physical records in the file to be loaded. If the anyorder file type modifier is enabled, any n rows can be loaded from the file. Otherwise, the first n rows are loaded.
Note: The anyorder file type modifier is enabled by default for certain table types. For more information, see anyorder.
WARNINGCOUNT n
Stops the load operation after n warnings. Set this parameter if no warnings are expected, but you want verification that the correct file and table are being used. If the load file or the target table is specified incorrectly, the load utility generates a warning for each row that it attempts to load, which causes the load to fail. If n is zero, or this option is not specified, the load operation continues regardless of the number of warnings issued.
If the load operation is stopped because the threshold of warnings is encountered, another load operation can be started in RESTART mode. The load operation automatically continues from the last consistency point. Alternatively, another load operation can be initiated in REPLACE mode, starting at the beginning of the input file.
In a partitioned database environment, a LOAD operation can have multiple load and partition agents. Each agent has a WARNINGCOUNT value. If the value of n is reached on a single agent, the LOAD operation fails. The n values are not cumulative. For example, if n is 3 and two agents each have a WARNINGCOUNT of 2, the LOAD operation is successful.
MESSAGES message-file
Specifies the destination for warning and error messages that occur during the load operation. If a message file is not specified, messages are written to standard output unless the instance is configured as a partitioned database environment. If the complete path to the file is not specified, the load utility uses the current directory and the default drive as the destination. If you specify the name of an existing file, the utility appends the information.

The message file is usually populated with messages at the end of the load operation. Therefore, it is not suitable for monitoring the progress of the operation.

TEMPFILES PATH temp-pathname
Specifies the name of the path to be used when temporary files are created during a load operation, and must be fully qualified according to the server database partition.
Temporary files take up file system space. Sometimes, this space requirement is substantial. The following list provides an estimate of how much file system space to allocate for all temporary files:
  • 136 bytes for each message that the load utility generates.
  • 15 KB if the data file contains long field data or LOBs. This quantity can grow significantly if you specify the INSERT parameter and the table already has a large amount of long field or LOB data.
  • On the server, storage space that is equivalent to the raw size of the input data, if the following conditions are met:
    • The column compression dictionary is to be built.
    • The data source, such as a pipe or a socket, cannot be reopened.
INSERT
Adds the loaded data to the table without changing the existing table data.

A LOAD INSERT operation into a column-organized table updates the table statistics by default if the table is new, or was truncated, and is empty at the start of the load operation.

REPLACE
Deletes all data from the table, and inserts the new data. The table definition and index definitions are not changed. If you specify this parameter when you move data between hierarchies, you can replace only the data for an entire hierarchy, not individual subtables.

You cannot use this parameter to load data into system-period temporal tables.

A LOAD REPLACE operation into a column-organized table maintains table statistics by default.

KEEPDICTIONARY
An existing compression dictionary is preserved across the LOAD REPLACE operation.

This option is the default for row-organized tables.

If the table COMPRESS attribute is YES, the newly replaced data is subject to being compressed by using the dictionary that existed before the invocation of the load. A new dictionary is built by using the data that is being replaced into the table if the following conditions are met:
  • A dictionary didn't exist previously in the table.
  • The table COMPRESS attribute is YES.

The amount of data that is required to build the compression dictionary in this case is subject to the policies of automatic dictionary creation (ADC). This data is inserted into the table as uncompressed data. After the dictionary is inserted into the table, the remaining data to be loaded is subject to being compressed with this dictionary. For a summary, see the following table.

Table 1. LOAD REPLACE KEEPDICTIONARY keyword.
Compress | Table row data dictionary exists | XML storage object dictionary exists (1) | Compression dictionary | Data compression
YES | YES | YES | Preserve table row data and XML dictionaries. | Data to be loaded is subject to compression.
YES | YES | NO | Preserve table row data dictionary and build a new XML dictionary. | Table row data to be loaded is subject to compression. After the XML dictionary is built, remaining XML data to be loaded is subject to compression.
YES | NO | YES | Build table row data dictionary and preserve XML dictionary. | After the table row data dictionary is built, remaining table row data to be loaded is subject to compression. XML data to be loaded is subject to compression.
YES | NO | NO | Build new table row data and XML dictionaries. | After the dictionaries are built, remaining data to be loaded is subject to compression.
NO | YES | YES | Preserve table row data and XML dictionaries. | Data to be loaded is not compressed.
NO | YES | NO | Preserve table row data dictionary. | Data to be loaded is not compressed.
NO | NO | YES | No effect on table row dictionary. Preserve XML dictionary. | Data to be loaded is not compressed.
NO | NO | NO | No effect. | Data to be loaded is not compressed.
Note:
  1. A compression dictionary can be created for the XML storage object of a table only when either of the following conditions are met:
    • The XML columns are added to the table in Db2 Version 9.7 or later.
    • The table is migrated by using an online table move.
  2. If a LOAD REPLACE KEEPDICTIONARY operation is interrupted, the load utility can recover after either a LOAD RESTART or a LOAD TERMINATE command is issued. An existing XML storage object dictionary cannot be preserved after recovery from an interrupted LOAD REPLACE KEEPDICTIONARY operation. A new XML storage object dictionary is created if LOAD RESTART is used.
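A minimal sketch of the option (file and table names are illustrative):

db2 load from newdata.del of del
   replace keepdictionary into mytable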
RESETDICTIONARY
This directive instructs LOAD REPLACE processing to build a new dictionary for the table data object, if the table COMPRESS attribute is YES.

If the COMPRESS attribute is NO and a dictionary was already present in the table, it is removed and a new dictionary is not inserted into the table.

This option is the default for column-organized tables.

For column-organized tables, you cannot specify LOAD REPLACE … RESETDICTIONARY on a subset of database partitions (SQL27906N). If you include the OUTPUT_DBPARTNUMS option, all database partitions must be specified.

A compression dictionary can be built with just one user record. If the loaded data set size is zero and a dictionary exists, the existing dictionary is not preserved. The amount of data required to build a dictionary with this directive is not subject to the policies of ADC. For a summary, see the following table.

Table 2. LOAD REPLACE RESETDICTIONARY keyword.
Compress | Table row data dictionary exists | XML storage object dictionary exists (1) | Compression dictionary | Data compression
YES | YES | YES | Build new dictionaries (2). If the DATA CAPTURE CHANGES option is enabled on the CREATE TABLE or ALTER TABLE statement, the current table row data dictionary is kept (and referred to as the historical compression dictionary). | After the dictionaries are built, remaining data to be loaded is subject to compression.
YES | YES | NO | Build new dictionaries (2). If the DATA CAPTURE CHANGES option is enabled on the CREATE TABLE or ALTER TABLE statement, the current table row data dictionary is kept (and referred to as the historical compression dictionary). | After the dictionaries are built, remaining data to be loaded is subject to compression.
YES | NO | YES | Build new dictionaries. | After the dictionaries are built, remaining data to be loaded is subject to compression.
YES | NO | NO | Build new dictionaries. | After the dictionaries are built, remaining data to be loaded is subject to compression.
NO | YES | YES | Remove dictionaries. | Data to be loaded is not compressed.
NO | YES | NO | Remove table row data dictionary. | Data to be loaded is not compressed.
NO | NO | YES | Remove XML storage object dictionary. | Data to be loaded is not compressed.
NO | NO | NO | No effect. | All table data is not compressed.
Notes:
  1. A compression dictionary can be created for the XML storage object of a table only when either of the following conditions are met:
    • The XML columns are added to the table in Db2 Version 9.7 or later.
    • The table is migrated by using an online table move.
  2. If a dictionary exists and the compression attribute is enabled, but no records exist to load into the table partition, a new dictionary cannot be built. In this scenario, the RESETDICTIONARY operation does not keep the existing dictionary.
RESETDICTIONARYONLY
This option creates a column compression dictionary that is based on the input file, without loading any rows. You can use this option to create the compression dictionary before you ingest any data by using SQL-based utilities.

This option is applicable to column-organized tables only.

You cannot specify LOAD REPLACE … RESETDICTIONARYONLY on a subset of database partitions (SQL27906N). If you include the OUTPUT_DBPARTNUMS option, all database partitions must be specified.
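A minimal sketch of the option (file and table names are illustrative); the command builds the column compression dictionary from the input file without loading any rows:

db2 load from datafile.del of del
   replace resetdictionaryonly into mycoltable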

TERMINATE
One of four modes under which the load utility can run. This mode stops a previously interrupted load operation, and rolls back the operation to the point in time at which it started, even if consistency points were passed. The states of any table spaces that are involved in the operation return to normal, and all table objects are made consistent. Index objects might be marked as invalid, in which case index rebuild automatically takes place at next access. If you are ending a LOAD REPLACE operation, the table is truncated to an empty table after the LOAD TERMINATE operation. If you stop a LOAD INSERT operation, the table retains all of its original records after the LOAD TERMINATE operation. For a summary of dictionary management, see Table 3.

The LOAD TERMINATE option does not remove a backup pending state from table spaces.

RESTART
Restarts an interrupted load operation. The load operation automatically continues from the last consistency point in the load, build, or delete phase. For a summary of dictionary management, see Table 4.
The RESTART parameter is not supported for the following tables:
  • Column-organized tables.
  • Random distribution tables that use the random by generation method.
To recover a table of this type after a failed load operation, use the TERMINATE or REPLACE parameter.
INTO table-name
Specifies the database table into which the data is to be loaded. This table cannot be a system table, a declared temporary table, or a created temporary table. An alias, or the fully qualified or unqualified table name can be specified. A qualified table name is in the form schema.tablename. If an unqualified table name is specified, the table is qualified with the current schema.
If the database table contains implicitly hidden columns, you must specify whether data for the hidden columns is included in the load operation. Use one of the following methods to indicate whether data for hidden columns is included:
  • Use insert-column to explicitly specify the columns into which data is to be inserted.
    db2 load from delfile1 of del
       insert into table1 (c1, c2, c3,...)
  • Use one of the hidden column file type modifiers: specify implicitlyhiddeninclude when the input file contains data for the hidden columns, or implicitlyhiddenmissing when the input file does not.
    db2 load from delfile1 of del modified by implicitlyhiddeninclude
       insert into table1
  • Use the DB2_DMU_DEFAULT registry variable on the server side to set the default behavior when data movement utilities encounter tables with implicitly hidden columns. Specify IMPLICITLYHIDDENINCLUDE so that utilities assume that the implicitly hidden columns are included, or IMPLICITLYHIDDENMISSING so that utilities assume that they are not included.
    db2set DB2_DMU_DEFAULT=IMPLICITLYHIDDENINCLUDE
    db2 load from delfile1 of del insert into table1
insert-column
Specifies the table column into which the data is to be inserted.
The load utility cannot parse columns whose names contain one or more spaces. The following example fails:
db2 load from delfile1 of del noheader 
   method P (1, 2, 3, 4, 5, 6, 7, 8, 9)
   insert into table1 (BLOB1, S2, I3, Int 4, I5, I6, DT7, I8, TM9)
The failure occurs because of the Int 4 column. The solution is to enclose such column names with double quotation marks:
db2 load from delfile1 of del noheader 
   method P (1, 2, 3, 4, 5, 6, 7, 8, 9)
   insert into table1 (BLOB1, S2, I3, "Int 4", I5, I6, DT7, I8, TM9)
FOR EXCEPTION table-name
Specifies the exception table into which rows in error are copied. Any row that violates a unique index or a primary key index is copied. If you specify an unqualified table name, the table name is qualified with the current schema. The table cannot be a column-organized table.

Information that is written to the exception table is not written to the dump file; the dump file contains only rows that are rejected because they are invalid or have syntax errors. In a partitioned database environment, an exception table must be defined for those database partitions on which the loading table is defined.

When you load XML data, you cannot use the FOR EXCEPTION clause to specify a load exception table in the following cases:
  • When you use label-based access control (LBAC).
  • When you load data into a partitioned table.
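For example, assuming an exception table named excptab1 that was created with the same columns as table1 plus the optional timestamp and message columns, rows that violate a unique index can be diverted to it (the table and file names are illustrative):

db2 load from delfile1 of del insert into table1 for exception excptab1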
NORANGEEXC
Indicates that if a row is rejected because of a range violation, it is not inserted into the exception table.
NOUNIQUEEXC
Indicates that if a row is rejected because it violates a unique constraint, it is not inserted into the exception table.
STATISTICS USE PROFILE
Instructs the load utility to collect statistics during the load according to the profile defined for this table. Collecting statistics is the default behavior for column-organized tables. The profile must be created by the RUNSTATS command before you run the LOAD command. For row-organized tables, if the profile does not exist and this parameter is specified, a warning is returned and no statistics are collected. For column-organized tables, if the profile does not exist and this parameter is specified, the load utility uses the same default RUNSTATS command options that are used during an automatic RUNSTATS operation.

During load, distribution statistics are not collected for columns of type XML.
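For example, a statistics profile can be registered with the RUNSTATS command, after which the load collects statistics according to that profile (the schema, table, and file names are illustrative):

db2 runstats on table user1.table1 with distribution and indexes all set profile only
db2 load from delfile1 of del replace into table1 statistics use profile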

STATISTICS NO
Specifies that no statistics are to be collected, and that the statistics in the catalogs are not to be altered. This parameter is the default for row-organized tables.
COPY NO
Specifies that the table space in which the table exists is placed in backup pending state, if forward recovery is enabled. Forward recovery is enabled if the logarchmeth1 or logarchmeth2 configuration parameter is set to a value other than OFF. The COPY NO option also places the table space in the transient Load in Progress state, which disappears when the load completes or fails. The data in any table in the table space cannot be updated or deleted until a table space backup or a full database backup is made. However, it is possible to access the data in any table by using the SELECT statement.

Specifying COPY NO on a recoverable database leaves the table spaces in a backup pending state. For example, indexes need to be refreshed if you run LOAD with COPY NO and INDEXING MODE DEFERRED. Certain queries on the table might require an index scan but fail until the indexes are refreshed. The index cannot be refreshed if it exists in a table space that is in the backup pending state. In that case, access to the table is not allowed until a backup is taken. Index refresh is done automatically by the database when the index is accessed by a query.

COPY NO is the default option when the following conditions are met:
  • COPY NO, COPY YES, or NONRECOVERABLE is not specified.
  • The database is recoverable, which is the case when the logarchmeth1 or logarchmeth2 configuration parameter is set to a value other than OFF.
COPY YES
Saves a copy of the loaded data. This parameter is invalid if forward recovery is turned off.
USE TSM
Specifies that the copy is stored by using IBM Tivoli® Storage Manager.
OPEN num-sess SESSIONS
The number of I/O sessions to be used with IBM Tivoli Storage Manager or the vendor product. The default value is 1.
TO device/directory
Specifies the device or directory on which the copy image is created.

The directory can be on remote storage, such as IBM® Cloud Object Storage or Amazon Simple Storage Service (S3), and can be accessed by using a storage access alias. When DB2_ENABLE_COS_SDK is set to OFF, local staging space is required to temporarily store the copy image that is to be transferred to the remote storage server, and the maximum copy image size for the remote storage is 5 GB. Refer to Remote Storage Requirements. When DB2_ENABLE_COS_SDK is set to ON, local staging space is not required, and the maximum copy image size is determined by the MULTIPARTSIZEMB size multiplied by the maximum number of parts that are allowed by the Cloud Object Storage provider.
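For example, a recoverable load that writes its copy image to a local directory might be issued as follows (the directory path and names are illustrative):

db2 load from delfile1 of del insert into table1 copy yes to /home/user1/copyimages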

LOAD lib-name
The name of the shared library (DLL on Windows operating systems) containing the vendor backup and restore I/O functions to be used. It can contain the full path. If the full path is not given, the default path is the location of the user exit programs.
NONRECOVERABLE
Specifies that the load transaction is to be marked as unrecoverable and that it is not possible to recover it by a subsequent rollforward action. The rollforward utility skips the transaction and marks the table into which data was being loaded as invalid. The utility also ignores any subsequent transactions against that table. After the rollforward operation completes, the only supported operations on such a table are:
  • Drop the table.
  • Restore the table from a backup (full or table space) that was taken after a commit point that follows the completion of the unrecoverable load operation.

With this option, table spaces are not put in backup pending state after the load operation. A copy of the loaded data is not made during the load operation. If one of COPY NO, COPY YES, or NONRECOVERABLE is not specified, and the database is not recoverable (logarchmeth1 and logarchmeth2 are both set to OFF), then NONRECOVERABLE is the default.

WITHOUT PROMPTING
Specifies that the list of data files contains all the files that are to be loaded, and that the devices or directories that are listed are sufficient for the entire load operation. If a continuation input file is not found, or the copy targets are filled before the load operation finishes, the load operation fails, and the table remains in load pending state.
If this option is not specified, you are prompted for a new tape in the following scenarios:
  • The tape device encounters an end of tape for the copy image.
  • The last item that is listed is a tape device.
DATA BUFFER buffer-size
Specifies the number of 4 KB pages (regardless of the degree of parallelism) to use as buffered space for transferring data within the utility. If you specify a value that is less than the algorithmic minimum, the minimum required resource is used, and no warning is returned.

This memory is allocated directly from the utility heap, whose size can be modified through the util_heap_sz database configuration parameter. Beginning in version 9.5, the value of the DATA BUFFER option of the LOAD command can temporarily exceed util_heap_sz if more memory is available in the system. In this situation, the utility heap is dynamically increased as needed until the database_memory limit is reached. This memory is released after the load operation completes.

If a value is not specified, an intelligent default is calculated by the utility at run time. The calculation is based on the following items:
  • A percentage of the free space available in the utility heap at the instantiation time of the loader.
  • Some characteristics of the table.
SORT BUFFER buffer-size
This option specifies a value that overrides the sortheap database configuration parameter during a load operation. It is relevant only when loading tables with indexes and only when the INDEXING MODE parameter is not specified as DEFERRED. The value that is specified cannot exceed the value of sortheap. This parameter is useful for throttling the sort memory that is used when you load tables with many indexes without changing the value of sortheap, which would also affect general query processing.
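For example, to cap the sort memory for a load into a heavily indexed table at 4000 4 KB pages without changing sortheap (the value and names are illustrative; the value must not exceed sortheap):

db2 load from delfile1 of del insert into table1 sort buffer 4000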
CPU_PARALLELISM n
Specifies the number of processes or threads that the load utility creates for parsing, converting, and formatting records when table objects are built. The value applies per database partition. This parameter is useful when you load presorted data, because record order in the source data is preserved. If the value of this parameter is zero, or was not specified, the load utility uses an intelligent default value (usually based on the number of CPUs available) at run time.
Note:
  1. If this parameter is used with tables that contain LOB or LONG VARCHAR fields, its value becomes 1, regardless of the number of system CPUs or the value that was specified by the user.
  2. Specifying a small value for the SAVECOUNT parameter causes the loader to run many more I/O operations to flush both data and table metadata. When CPU_PARALLELISM is greater than one, the flushing operations are asynchronous, which allows the loader to maximize CPU resources. When CPU_PARALLELISM is set to one, the loader waits on I/O during consistency points. A load operation with CPU_PARALLELISM set to two, and SAVECOUNT set to 10 000, completes faster than the same operation with CPU_PARALLELISM set to one, even with only one CPU.
DISK_PARALLELISM n
Specifies the number of processes or threads that the load utility creates for writing data to the table space containers. If a value is not specified, the utility selects an intelligent default based on the number of table space containers and the characteristics of the table.
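For example, the buffer and parallelism options can be combined in a single invocation; the values shown are illustrative and would normally be tuned to the available CPUs, table space containers, and utility heap:

db2 load from delfile1 of del insert into table1
   data buffer 8000 cpu_parallelism 4 disk_parallelism 2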
INDEXING MODE
Specifies whether the load utility rebuilds indexes or extends them incrementally. The following values are valid:
AUTOSELECT
The load utility automatically decides between REBUILD and INCREMENTAL mode. The decision is based on the amount of data that is loaded and the depth of the index tree. Information relating to the depth of the index tree is stored in the index object. RUNSTATS is not required to populate this information. AUTOSELECT is the default indexing mode.
REBUILD
All indexes are rebuilt. The utility must have sufficient resources to sort all index key parts for both old and appended table data.

If the LogIndexBuild database configuration parameter is turned on, the transaction log contains the image of each index page after it is created. If the LogIndexBuild database configuration parameter is turned off, only the allocation and initialization of each page is logged by the Index Manager. The amount of logging is approximately 250 bytes per page, as opposed to logging the non-empty portion of each page.

INCREMENTAL
Indexes are extended with new data. This approach uses index free space. It requires only enough sort space to append index keys for the inserted records. This method is only supported in cases where the index object is valid and accessible at the start of a load operation. Conversely, it is not valid immediately following a load operation in which the DEFERRED mode was specified. If this mode is specified, but not supported due to the state of the index, a warning is returned, and the load operation continues in REBUILD mode. Similarly, if a load restart operation is begun in the load build phase, INCREMENTAL mode is not supported.

If the LogIndexBuild database configuration parameter is turned on, Db2 generates the log records for the insertion of every key into the index, and for any pages that are split. The LogIndexBuild database configuration parameter is commonly turned off when HADR is not used. If this database configuration parameter is turned off, the amount of indexing that is logged by the Index Manager depends on whether the ALLOW READ ACCESS option was specified. If the ALLOW READ ACCESS option is specified, log records are generated, including logs for page splits. If the ALLOW READ ACCESS option is not specified, no log records from the Index Manager are generated.

DEFERRED
The load utility does not attempt index creation if this mode is specified. Indexes are marked as needing a refresh. The first access to such indexes that is unrelated to a load operation might force a rebuild, or indexes might be rebuilt when the database is restarted. This approach requires enough sort space for all key parts for the largest index. When you take this approach, the total time that is needed for index construction is longer than the time required in REBUILD mode. Therefore, when you run multiple load operations with deferred indexing, it is advisable (from a performance viewpoint) to allow the last load operation in the sequence to run an index rebuild. Otherwise, the indexes need to be rebuilt at the first non-load access.

Deferred indexing is only supported for tables with non-unique indexes so that duplicate keys that are inserted during the load phase are not persistent after the load operation.
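For example, in a sequence of loads into the same table, the intermediate operations can defer indexing and the final operation can rebuild the indexes (the file and table names are illustrative):

db2 load from delfile1 of del insert into table1 indexing mode deferred
db2 load from delfile2 of del insert into table1 indexing mode rebuild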

ALLOW NO ACCESS
The load operation locks the target table for exclusive access during the load. The table state is set to Load In Progress during the load. ALLOW NO ACCESS is the default behavior. It is the only valid option for LOAD REPLACE.

When the table has constraints, the table state is set to Set Integrity Pending and Load In Progress. The SET INTEGRITY statement must be used to take the table out of Set Integrity Pending state.

ALLOW READ ACCESS
The load operation locks the target table in a share mode. The table state is set to both Load In Progress and Read Access. Readers can access the non-delta portion of the data while the table is being loaded. In other words, data that existed before the start of the load is accessible by readers to the table. Data that is being loaded is not available until the load is complete.
Important: The ALLOW READ ACCESS parameter is deprecated and might be removed in a future release. For more details, see ALLOW READ ACCESS parameter in the LOAD command is deprecated.

The ALLOW READ ACCESS parameter is not supported for column-organized tables.

LOAD TERMINATE or LOAD RESTART of an ALLOW READ ACCESS load can use this parameter. LOAD TERMINATE or LOAD RESTART of an ALLOW NO ACCESS load cannot use this parameter. Furthermore, this option is not valid if the indexes on the target table are marked as requiring a rebuild.

When the table has constraints, the table state is set to Set Integrity Pending, Load In Progress, and Read Access. At the end of the load, the table state Load In Progress is removed but the table states Set Integrity Pending and Read Access remain. The SET INTEGRITY statement must be used to take the table out of Set Integrity Pending. While the table is in Set Integrity Pending and Read Access states, the non-delta portion of the data is still accessible to readers. The new (delta) portion of the data remains inaccessible until the SET INTEGRITY statement completes. You can run multiple loads on the same table without issuing a SET INTEGRITY statement. However, only the original (checked) data remains visible until the SET INTEGRITY statement is issued.

ALLOW READ ACCESS also supports the following modifiers:
USE tablespace-name
If the indexes are being rebuilt, a shadow copy of the index is built in table space tablespace-name. The indexes are copied over to the original table space at the end of the load during an INDEX COPY PHASE. Only system temporary table spaces can be used with this option. If not specified, then the shadow index is created in the same table space as the index object. If the shadow copy is created in the same table space as the index object, the copy of the shadow index object over the old index object is instantaneous. If the shadow copy is in a different table space from the index object, a physical copy is created. This copy operation might involve considerable I/O and time. The copy happens while the table is offline at the end of a load during the INDEX COPY PHASE.

Without this option, the shadow index is built in the same table space as the original. By default, the original index and the shadow index exist in the same table space simultaneously, so the table space might have insufficient space to hold both indexes. Use this option to ensure that you retain enough table space for the indexes.

This option is ignored if the user does not specify INDEXING MODE REBUILD or INDEXING MODE AUTOSELECT. This option is also ignored if INDEXING MODE AUTOSELECT is chosen and load chooses to incrementally update the index.
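For example, assuming a system temporary table space named tempspace1, the shadow index can be built outside the original index table space (the names are illustrative):

db2 load from delfile1 of del insert into table1
   indexing mode rebuild allow read access use tempspace1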

FETCH_PARALLELISM YES | NO
When this option is set to YES, the load utility attempts to parallelize fetching from the remote data source in either of the following scenarios:
  • When you run a load from a cursor where the cursor is declared by using the DATABASE keyword.
  • When you use the API sqlu_remotefetch_entry media entry.

You can parallelize fetching of data only if the cursor's select-statement is of the simple form "SELECT * FROM <tablename>". If set to NO, no parallel fetching is done. The default value is YES. For more information, see Moving data using the CURSOR file type.

SET INTEGRITY PENDING CASCADE
If LOAD puts the table into Set Integrity Pending state, use SET INTEGRITY PENDING CASCADE to specify whether the Set Integrity Pending state of the loaded table immediately cascades to all descendant tables. Descendant tables include descendant foreign key tables, descendant immediate materialized query tables, and descendant immediate staging tables.
IMMEDIATE
Indicates that Set Integrity Pending state is immediately extended to all descendant foreign key tables, descendant immediate materialized query tables, and descendant staging tables. For a LOAD INSERT operation, Set Integrity Pending state is not extended to descendant foreign key tables even if the IMMEDIATE option is specified.

When the loaded table is later checked for constraint violations, descendant foreign key tables that were placed in Set Integrity Pending Read Access state are put into Set Integrity Pending No Access state. The loaded table is checked for constraint violations by using the IMMEDIATE CHECKED option of the SET INTEGRITY statement.

DEFERRED
Indicates that only the loaded table is placed in the Set Integrity Pending state. The states of the descendant foreign key tables, descendant immediate materialized query tables, and descendant immediate staging tables remain unchanged.

Descendant foreign key tables might later be implicitly placed in Set Integrity Pending state when their parent tables are checked for constraint violations. Constraint violations are checked by using the IMMEDIATE CHECKED option of the SET INTEGRITY statement. Descendant immediate materialized query tables and descendant immediate staging tables are implicitly placed in Set Integrity Pending state when one of their underlying tables is checked for integrity violations. A query of a table that is in the Set Integrity Pending state might succeed if an eligible materialized query table that is not in the Set Integrity Pending state is accessed by the query instead of the specified table. A warning (SQLSTATE 01586) is issued to indicate that descendant tables were placed in Set Integrity Pending state. See the Notes section of the SET INTEGRITY statement for when these descendant tables are put into Set Integrity Pending state.

If the SET INTEGRITY PENDING CASCADE option is not specified, only the loaded table is placed in Set Integrity Pending state. The states of descendant foreign key tables, descendant immediate materialized query tables, and descendant immediate staging tables remain unchanged. These tables can later be implicitly put into Set Integrity Pending state when the loaded table is checked for constraint violations.

If LOAD does not put the target table into Set Integrity Pending state, the SET INTEGRITY PENDING CASCADE option is ignored.
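For example, a load that cascades Set Integrity Pending state to descendant tables, followed by the statement that takes the loaded table out of that state, might look like the following commands (the names are illustrative):

db2 load from delfile1 of del insert into table1
   set integrity pending cascade immediate
db2 set integrity for table1 immediate checked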

LOCK WITH FORCE
While loading data, the utility acquires various locks, including table locks. Rather than wait and possibly time out when a lock is acquired, this option allows load to force off other applications that hold conflicting locks on the target table. Applications that hold conflicting locks on the system catalog tables are not forced off by the load utility. Forced applications roll back and release the locks that the load utility needs, and the load utility can then proceed. This option requires the same authority as the FORCE APPLICATIONS command (SYSADM, SYSCTRL, or SYSMAINT).

In ALLOW NO ACCESS mode, the load utility can force applications that hold conflicting locks at the start of the load operation. At the start of the load operation, the utility can force applications that are attempting to either query or modify the table.

In ALLOW READ ACCESS mode, the load utility can force applications that hold conflicting locks at the start or end of the load operation. At the start of the load operation, the load utility can force applications that are attempting to modify the table. At the end of the load operation, the load utility can force applications that are attempting to either query or modify the table.
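For example, a read-access load that forces off applications holding conflicting table locks might be issued as follows (the names are illustrative):

db2 load from delfile1 of del insert into table1
   allow read access lock with force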

SOURCEUSEREXIT executable
Specifies the name of an executable file that is called to feed data into the utility.

The SOURCEUSEREXIT parameter is not supported for column-organized tables.

REDIRECT
INPUT FROM
BUFFER input-buffer
The stream of bytes specified in input-buffer is passed into the STDIN file descriptor of the process that is running the executable file.
FILE input-file
The contents of this client-side file are passed into the STDIN file descriptor of the process that is running the executable file.
OUTPUT TO
FILE output-file
The STDOUT and STDERR file descriptors are captured to the fully qualified server-side file specified.
PARALLELIZE
Increases the throughput of data that enters the load utility by using multiple user exit processes simultaneously. This option is applicable only in multi-partition database environments and is ignored in single-partition database environments.
For more information, see Moving data using a customized application (user exit).
PARTITIONED DB CONFIG partitioned-db-option
Use this option to load into a table that is distributed across multiple database partitions. You can specify configuration options that are exclusive to partitioned databases. The partitioned-db-option values can be any of the following options:
PART_FILE_LOCATION x
OUTPUT_DBPARTNUMS x
PARTITIONING_DBPARTNUMS x
MODE x
MAX_NUM_PART_AGENTS x
ISOLATE_PART_ERRS x
STATUS_INTERVAL x
PORT_RANGE x
CHECK_TRUNCATION
MAP_FILE_INPUT x
MAP_FILE_OUTPUT x
TRACE x
NEWLINE
DISTFILE x
OMIT_HEADER
RUN_STAT_DBPARTNUM x
Detailed descriptions of these options are provided in Load configuration options for partitioned database environments.
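For example, a load into a distributed table that partitions the data and loads it on specific database partitions might look like the following command (the partition numbers and names are illustrative):

db2 load from delfile1 of del replace into table1
   partitioned db config mode partition_and_load output_dbpartnums (1, 3)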
RESTARTCOUNT
Deprecated.
USING directory
Deprecated.

Example 1

TABLE1 has five columns:
  • COL1 VARCHAR 20 NOT NULL WITH DEFAULT
  • COL2 SMALLINT
  • COL3 CHAR 4
  • COL4 CHAR 2 NOT NULL WITH DEFAULT
  • COL5 CHAR 2 NOT NULL
ASCFILE1 has seven elements:
  • ELE1 positions 01 - 20
  • ELE2 positions 21 - 22
  • ELE3 positions 23 - 23
  • ELE4 positions 24 - 27
  • ELE5 positions 28 - 31
  • ELE6 positions 32 - 32
  • ELE7 positions 33 - 40
ASCFILE1 has the following data records:
1...5....10....15....20....25....30....35....40
Test data 1         XXN 123abcdN
Test data 2 and 3   QQY    wxyzN
Test data 4,5 and 6 WWN6789    Y
The following command loads the table from the file:
db2 load from ascfile1 of asc modified by striptblanks reclen=40
   method L (1 20, 21 22, 24 27, 28 31)
   null indicators (0,0,23,32)
   insert into table1 (col1, col5, col2, col3)
Note:
  1. The specification of striptblanks in the MODIFIED BY parameter forces the truncation of blanks in VARCHAR columns (COL1, for example, which is 11, 17 and 19 bytes long, in rows 1, 2 and 3).
  2. The specification of reclen=40 in the MODIFIED BY parameter indicates that each input record does not end with a newline character, and that each record is 40 bytes long. The last 8 bytes are not used to load the table.
  3. Since COL4 is not provided in the input file, it is inserted into TABLE1 with its default value (it is defined NOT NULL WITH DEFAULT).
  4. Positions 23 and 32 are used to indicate whether COL2 and COL3 of TABLE1 are loaded with NULL for a specific row. If a column's null indicator position for a specific record contains Y, the column is NULL. If a column's null indicator contains N, the data values in the column's data positions of the input record (as defined in L(........)) are used as the source of column data for the row. In this example, neither column in row 1 is NULL; COL2 in row 2 is NULL; and COL3 in row 3 is NULL.
  5. In this example, the NULL INDICATORS for COL1 and COL5 are specified as 0 (zero), indicating that the data is not nullable.
  6. The NULL INDICATOR for a specific column can be anywhere in the input record, but the position must be specified, and the Y or N values must be supplied.

Example 2 (Loading LOBs from Files)

TABLE1 has three columns:
  • COL1 CHAR 4 NOT NULL WITH DEFAULT
  • LOB1 LOB
  • LOB2 LOB
ASCFILE1 has three elements:
  • ELE1 positions 01 - 04
  • ELE2 positions 06 - 13
  • ELE3 positions 15 - 22
The following files are located in either /u/user1 or /u/user1/bin:
  • ASCFILE2 has LOB data.
  • ASCFILE3 has LOB data.
  • ASCFILE4 has LOB data.
  • ASCFILE5 has LOB data.
  • ASCFILE6 has LOB data.
  • ASCFILE7 has LOB data.
The following data records are in ASCFILE1:
1...5....10...15...20...25...30.
REC1 ASCFILE2 ASCFILE3
REC2 ASCFILE4 ASCFILE5
REC3 ASCFILE6 ASCFILE7
The following command loads the table from the file:
db2 load from ascfile1 of asc
   lobs from /u/user1, /u/user1/bin
   modified by lobsinfile reclen=22
   method L (1 4, 6 13, 15 22)
   insert into table1
Note:
  1. The specification of lobsinfile in the MODIFIED BY parameter tells the loader that all LOB data is to be loaded from files.
  2. The specification of reclen=22 in the MODIFIED BY parameter indicates that each input record does not end with a newline character, and that each record is 22 bytes long.
  3. LOB data is contained in six files, ASCFILE2 through ASCFILE7. Each file contains the data that is used to load a LOB column for a specific row. The relationship between LOBs and other data is specified in ASCFILE1. The first record of this file tells the loader to place REC1 in COL1 of row 1. The contents of ASCFILE2 are used to load LOB1 of row 1, and the contents of ASCFILE3 are used to load LOB2 of row 1. Similarly, ASCFILE4 and ASCFILE5 are used to load LOB1 and LOB2 of row 2, and ASCFILE6 and ASCFILE7 are used to load the LOBs of row 3.
  4. The LOBS FROM parameter contains two paths that are searched for the named LOB files when those files are required by the loader.
  5. To load LOBs directly from ASCFILE1 (a nondelimited ASCII file), without the lobsinfile modifier, the following rules must be observed:
    • The total length of any record, including LOBs, cannot exceed 32 KB.
    • LOB fields in the input records must be of fixed length, and LOB data must be padded with blanks as necessary.
    • The striptblanks modifier must be specified so that the trailing blanks used to pad LOBs can be removed as the LOBs are inserted into the database.

Example 3 (Using dump files)

Table FRIENDS is defined as:
table friends "( c1 INT NOT NULL, c2 INT, c3 CHAR(8) )"
An attempt is made to load the following data records into this table:
23, 24, bobby
, 45, john
4,, mary

However, the second row is rejected because the first INT is NULL, and the column definition specifies NOT NULL. Columns that contain initial characters that are not consistent with the DEL format generate an error, and the record is rejected. Such records can be written to a dump file.

DEL data that appears in a column outside of character delimiters is ignored, but generates a warning. In the following example, the utility loads sam in the third column of the table and the characters sdf are flagged in a warning. The record is not rejected.
22,34,"bob"
24,55,"sam" sdf
In the following example, the utility loads 22,34,"bob", and generates a warning that some data in column one following the 22 was ignored. The record is not rejected.
22 3, 34,"bob"

Example 4 (Loading a Table with an Identity Column)

TABLE1 has four columns:
  • C1 VARCHAR(30)
  • C2 INT GENERATED BY DEFAULT AS IDENTITY
  • C3 DECIMAL(7,2)
  • C4 CHAR(1)

TABLE2 is the same as TABLE1, except that C2 is a GENERATED ALWAYS identity column.

The following data records are in DATAFILE1 (DEL format):
"Liszt"
"Hummel",,187.43, H
"Grieg",100, 66.34, G
"Satie",101, 818.23, I
The following data records are in DATAFILE2 (DEL format):
"Liszt", 74.49, A
"Hummel", 0.01, H
"Grieg", 66.34, G
"Satie", 818.23, I
Note:
  1. The following command generates identity values for rows 1 and 2, since no identity values are supplied in DATAFILE1 for those rows. However, row 3 is assigned the user-supplied identity value of 100 and row 4 is assigned the user-supplied identity value of 101.
    db2 load from datafile1.del of del replace into table1
  2. To load DATAFILE1 into TABLE1 so that identity values are generated for all rows, issue one of the following commands:
    db2 load from datafile1.del of del method P(1, 3, 4)
       replace into table1 (c1, c3, c4)
    db2 load from datafile1.del of del modified by identityignore
       replace into table1
  3. To load DATAFILE2 into TABLE1 so that identity values are generated for each row, issue one of the following commands:
    db2 load from datafile2.del of del replace into table1 (c1, c3, c4)
    db2 load from datafile2.del of del modified by identitymissing
       replace into table1
  4. To load DATAFILE1 into TABLE2 so that the identity values of 100 and 101 are assigned to rows 3 and 4, issue the following command:
    db2 load from datafile1.del of del modified by identityoverride
       replace into table2

    In this case, rows 1 and 2 are rejected because load options were specified to override system-generated identity values in favor of user-supplied values. When user-supplied values are not present, the row must be rejected, because identity columns are implicitly NOT NULL.

  5. If DATAFILE1 is loaded into TABLE2 without using any of the identity-related file type modifiers, rows 1 and 2 are loaded. However, rows 3 and 4 are rejected because they supply their own non-NULL values, and the identity column is GENERATED ALWAYS.
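The behavior described in notes 1 through 5 reduces to a single decision rule per row. The following Python sketch models that rule for illustration only; the function name and its return values are invented and are not part of Db2:

```python
# Illustrative decision function for how the load utility treats an
# identity column value for one row. NOT a Db2 API; the names and
# return values are invented for this sketch.

def identity_action(column_kind, supplied, modifier=None):
    """Return 'use-supplied', 'generate', or 'reject'.

    column_kind: 'BY DEFAULT' or 'ALWAYS'
    supplied:    the identity value present in the data file, or None
    modifier:    None, 'identityignore', 'identitymissing',
                 or 'identityoverride'
    """
    if modifier in ('identityignore', 'identitymissing'):
        # Both modifiers result in the utility generating every value.
        return 'generate'
    if modifier == 'identityoverride':
        # Only user-supplied values are accepted; identity columns are
        # implicitly NOT NULL, so rows without a value are rejected.
        return 'reject' if supplied is None else 'use-supplied'
    if supplied is None:
        # No value in the file: the utility generates one.
        return 'generate'
    # A value is present: GENERATED BY DEFAULT keeps it,
    # GENERATED ALWAYS rejects the row.
    return 'use-supplied' if column_kind == 'BY DEFAULT' else 'reject'
```

For example, loading DATAFILE1 into TABLE1 with no modifier gives `'generate'` for rows 1 and 2 and `'use-supplied'` for rows 3 and 4, matching note 1.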

Example 5 (Loading a Table with a Row Change Timestamp Column)

TABLE1 has four columns:

  • C1 VARCHAR(30)
  • C2 ROW CHANGE TIMESTAMP GENERATED BY DEFAULT
  • C3 DECIMAL(7,2)
  • C4 CHAR(1)

TABLE2 is the same as TABLE1, except that C2 is a GENERATED ALWAYS column.

The following data records are in DATAFILE1 (DEL format):

"Liszt"
"Hummel",,187.43, H
"Grieg", 2006-05-23-15.55.53.209971, 66.34, G
"Satie", 2006-05-22-19.34.14.947681, 818.23, I

The following data records are in DATAFILE2 (DEL format):

"Liszt", 74.49, A
"Hummel", 0.01, H
"Grieg", 66.34, G
"Satie", 818.23, I
Note:
  1. The following command generates ROW CHANGE TIMESTAMP values for rows 1 and 2, since no ROW CHANGE TIMESTAMP values are supplied in DATAFILE1 for those rows. Rows 3 and 4, however, are assigned the user-supplied ROW CHANGE TIMESTAMP values of 2006-05-23-15.55.53.209971 and 2006-05-22-19.34.14.947681.
    db2 load from datafile1.del of del replace
    into table1
  2. To load DATAFILE1 into TABLE1 so that ROW CHANGE TIMESTAMP values are generated for all rows, issue one of the following commands:
    db2 load from datafile1.del of del method P(1, 3, 4) replace into table1
     (c1, c3, c4)
    
    db2 load from datafile1.del of del modified by rowchangetimestampignore
     replace into table1
    
  3. To load DATAFILE2 into TABLE1 so that ROW CHANGE TIMESTAMP values are generated for each row, issue one of the following commands:
    db2 load from datafile2.del of del replace into table1 (c1, c3, c4)
    
    db2 load from datafile2.del of del modified by rowchangetimestampmissing
     replace into table1
    
  4. To load DATAFILE1 into TABLE2 so that the ROW CHANGE TIMESTAMP values of 2006-05-23-15.55.53.209971 and 2006-05-22-19.34.14.947681 are assigned to rows 3 and 4, issue the following command:
    db2 load from datafile1.del of del modified by rowchangetimestampoverride
     replace into table2

    In this case, rows 1 and 2 are rejected, because the utility was instructed to override system-generated ROW CHANGE TIMESTAMP values in favor of user-supplied values. However, if user-supplied values are not present, the row must be rejected, because row change timestamp columns are implicitly not NULL.

  5. If DATAFILE1 is loaded into TABLE2 without using any of the file type modifiers that relate to ROW CHANGE, then rows 1 and 2 are loaded. However, rows 3 and 4 are rejected because they supply their own non-NULL values, and the row change timestamp column is GENERATED ALWAYS.

Example 6 (Loading a table by using the CURSOR file type)

Table ABC.TABLE1 has three columns:
  • ONE INT
  • TWO CHAR(10)
  • THREE DATE

Table ABC.TABLE2 has three columns:
  • ONE VARCHAR
  • TWO INT
  • THREE DATE
By running the following commands, you can load all the data from ABC.TABLE1 into ABC.TABLE2:
db2 declare mycurs cursor for select two,one,three from abc.table1
db2 load from mycurs of cursor insert into abc.table2
If ABC.TABLE1 exists in a different database from the one that contains ABC.TABLE2, you can use the DATABASE, USER, and USING options of the DECLARE CURSOR statement to run the load. For example, if ABC.TABLE1 exists in database DB1, and the user ID and password for DB1 are user1 and pwd1, running the following commands loads all the data from ABC.TABLE1 into ABC.TABLE2:
db2 declare mycurs cursor database DB1 user user1 using pwd1
   for select two,one,three from abc.table1
db2 load from mycurs of cursor insert into abc.table2

Example - Loading XML data

The user created a data file with XDS fields to describe the documents that are to be inserted into the table. It might appear like this:

1, "<XDS FIL=""file1.xml"" />"
2, "<XDS FIL='file2.xml' OFF='23' LEN='45' />"

For the first row, the XML document is identified by the file named file1.xml. Since the character delimiter is the double quotation mark, and double quotation marks exist inside the XDS, the double quotation marks that are contained within the XDS are doubled. For the second row, the XML document is identified by the file named file2.xml, starts at byte offset 23, and is 45 bytes long.

The user issues a load command without any parsing or validation options for the XML column, and the data is loaded successfully:
LOAD FROM data.del OF DEL INSERT INTO mytable
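An XDS field in such a data file can be decoded mechanically. The following Python sketch is for illustration only (the parser and its name are invented, not a Db2 API): it strips the DEL character delimiters, un-doubles the embedded quotation marks, and extracts the XDS attributes:

```python
import re

# Minimal sketch of reading an XDS value out of a DEL field whose
# character delimiter is the double quotation mark. Doubled quotation
# marks inside the field are the escaped form of a single one.

def parse_xds_field(field):
    """field is the raw DEL field, including its surrounding delimiters."""
    # Strip the outer delimiters, then un-double the inner quotes.
    inner = field[1:-1].replace('""', '"')
    # Collect attribute=value pairs (values may use ' or " quotes).
    return dict(re.findall(r"(\w+)=['\"]([^'\"]*)['\"]", inner))

row1 = parse_xds_field('"<XDS FIL=""file1.xml"" />"')
row2 = parse_xds_field('"<XDS FIL=\'file2.xml\' OFF=\'23\' LEN=\'45\' />"')
```

Running this on the two sample rows yields `{'FIL': 'file1.xml'}` for the first and `{'FIL': 'file2.xml', 'OFF': '23', 'LEN': '45'}` for the second.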

Example - Loading XML data from CURSOR

Loading data from a cursor is the same as with a regular relational column type. This example has two tables, T1 and T2. Each table consists of a single XML column named C1. To LOAD from T1 into T2, first declare a cursor:

DECLARE X1 CURSOR FOR SELECT C1 FROM T1;

Next, you can issue a LOAD by using the cursor type:

LOAD FROM X1 OF CURSOR INSERT INTO T2

Applying the XML specific LOAD options to the cursor type is the same as loading from a file.

Examples of using the XMLVALIDATE clause

XMLVALIDATE USING XDS
  • The user would like to validate according to the schema indicated in the XDS. If XDS values exist without SCH attributes, these documents are validated against the schema that is used in this clause.

    XMLVALIDATE USING XDS DEFAULT S1.SCHEMA_B
  • The user would like to validate but IGNORE certain schemas that are mentioned in the XDS.

    XMLVALIDATE USING XDS IGNORE S1.SCHEMA_C
  • The user would like to validate but remap some of the schemas in the XDS.
    XMLVALIDATE USING XDS MAP(  (S1.SCHEMA_A,
    S2.SCHEMA_B ), (S3.SCHEMA_C, S5.SCHEMA_E) )

    Based on the specified XDS, any document with an SCH attribute of S1.SCHEMA_A is validated against S2.SCHEMA_B. Also, any document with an SCH attribute of S3.SCHEMA_C is validated against S5.SCHEMA_E.

  • The user would like to use a combination of the DEFAULT, IGNORE, and MAP options:
    XMLVALIDATE USING XDS
      DEFAULT S8.SCHEMA_H
      IGNORE (S9.SCHEMA_I, S10.SCHEMA_J)
      MAP ((S1.SCHEMA_A, S2.SCHEMA_B), (S3.SCHEMA_C, S5.SCHEMA_E),
        (S6.SCHEMA_F, S3.SCHEMA_C), (S4.SCHEMA_D, S7.SCHEMA_G))
    • The following example shows an XML column that contains an XDS:
      <XDS FIL='xmlfile.001.xml' />
      The XML schema with SQL identifier S8.SCHEMA_H is used to validate the document in file xmlfile.001.xml, since S8.SCHEMA_H was specified as the default schema to use.
    • The following example shows an XML column that contains an XDS:
      <XDS FIL='xmlfile.002.xml' OFF='10' LEN='500' SCH='S10.SCHEMA_J' />
      No schema validation occurs for the document in file xmlfile.002.xml: although the XDS specifies S10.SCHEMA_J as the schema to use, that schema is part of the IGNORE clause. The document content starts at byte offset 10 in the file (that is, at the 11th byte) and is 500 bytes long.
    • The following example shows an XML column that contains an XDS:
      <XDS FIL='xmlfile.003.xml' SCH='S6.SCHEMA_F' />

      The XML schema with SQL identifier S3.SCHEMA_C is used to validate the document in file xmlfile.003.xml. This validation is used because the MAP clause specifies that schema S6.SCHEMA_F maps to schema S3.SCHEMA_C. Further mapping does not take place. Therefore, the mapping of schema S3.SCHEMA_C to schema S5.SCHEMA_E does not apply in this case.

    • The following example shows an XML column that contains an XDS:
      <XDS FIL='xmlfile.004.xml' SCH='S11.SCHEMA_K' />

      The XML schema with SQL identifier S11.SCHEMA_K is used to validate the document in file xmlfile.004.xml. None of the DEFAULT, IGNORE, or MAP specifications apply in this case.
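The resolution order that these four examples walk through can be sketched as follows. `resolve_schema` is an invented helper, not a Db2 API; it assumes DEFAULT applies only when no SCH attribute is present, IGNORE suppresses validation, and MAP is applied at most once:

```python
# Sketch of XMLVALIDATE USING XDS schema resolution. Invented helper,
# not a Db2 API.

def resolve_schema(sch, default=None, ignore=(), schema_map=()):
    """Return the schema to validate against, or None for no validation."""
    if sch is None:
        return default             # no SCH attribute: use DEFAULT
    if sch in ignore:
        return None                # IGNORE suppresses validation
    for source, target in schema_map:
        if sch == source:
            return target          # mapping is applied only once
    return sch                     # otherwise use the SCH attribute as-is

DEFAULT = 'S8.SCHEMA_H'
IGNORE = ('S9.SCHEMA_I', 'S10.SCHEMA_J')
MAP = (('S1.SCHEMA_A', 'S2.SCHEMA_B'), ('S3.SCHEMA_C', 'S5.SCHEMA_E'),
       ('S6.SCHEMA_F', 'S3.SCHEMA_C'), ('S4.SCHEMA_D', 'S7.SCHEMA_G'))
```

With these settings, a missing SCH attribute resolves to S8.SCHEMA_H, S10.SCHEMA_J resolves to no validation, S6.SCHEMA_F maps once to S3.SCHEMA_C (not onward to S5.SCHEMA_E), and S11.SCHEMA_K passes through unchanged, matching the four examples above.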

XMLVALIDATE USING SCHEMA
The user wants to validate all XML documents according to a single SCHEMA. These examples are based on the following XMLVALIDATE clause:
XMLVALIDATE USING SCHEMA S2.SCHEMA_B
  • The following example shows an XML column that contains an XDS:
    <XDS FIL='xmlfile.001.xml' />

    The document in file xmlfile.001.xml is validated by using the XML schema with SQL identifier S2.SCHEMA_B.

  • The following example shows an XML column that contains an XDS:
    <XDS FIL='xmlfile.002.xml' SCH='S1.SCHEMA_A' />

    The document in file xmlfile.002.xml is validated by using the XML schema with SQL identifier S2.SCHEMA_B. The SCH attribute is ignored, since validation is done by using a schema that is specified in the USING SCHEMA clause.

XMLVALIDATE USING SCHEMALOCATION HINTS

The user would like to validate against schema information located within the document itself. These examples are based on the following XMLVALIDATE clause:

XMLVALIDATE USING SCHEMALOCATION HINTS
  • The following example shows an XML column that contains an XDS:
    <XDS FIL='xmlfile.001.xml' />
    The XML schema that is used is determined by the schemaLocation attribute in the document contents, and no validation would occur if one is not present.
  • The following example shows an XML column that contains an XDS:
    <XDS FIL='xmlfile.002.xml' SCH='S1.SCHEMA_A' />
    The XML schema that is used is determined by the schemaLocation attribute in the document contents, and no validation would occur if one is not present. The SCH attribute is ignored, since validation is done by using SCHEMALOCATION HINTS.

Usage notes

  • Data is loaded in the sequence that appears in the input file. If a particular sequence is preferred, sort the data before a load is attempted. If preservation of the source data order is not required, you can use the ANYORDER file type modifier, described in the following File type modifiers for the load utility section.
  • The load utility builds indexes based on existing definitions. The exception tables are used to handle duplicates on unique keys. The utility does not enforce referential integrity, check for constraints, or update materialized query tables that depend on the tables that are part of the load operation. Tables that include referential or check constraints are placed in Set Integrity Pending state. Summary tables that are defined with REFRESH IMMEDIATE, and that depend on tables that are part of the load operation, are also placed in Set Integrity Pending state. Issue the SET INTEGRITY statement to take the tables out of Set Integrity Pending state. Load operations cannot be carried out on replicated materialized query tables.
  • If a clustering index exists on the table, sort the data on the clustering index before loading. However, you do not need to sort data before you load it into a multidimensional clustering (MDC) table.
  • If you specify an exception table when you load data into a protected table, any rows that are protected by invalid security labels are sent to that table. This operation might allow users that have access to the exception table to access data that they are otherwise unauthorized to access. Follow these guidelines for better security:
    • Be careful who you grant exception table access to.
    • Delete each row as soon as it is repaired and copied to the table that you are loading.
    • Drop the exception table as soon as you are done with it.
  • Security labels in their internal format might contain newline characters. If you load the file by using the DEL file format, those newline characters can be mistaken for delimiters. If you have this problem, use the older default priority for delimiters by specifying the delprioritychar file type modifier in the LOAD command.
  • To run LOAD with the CURSOR file type where the DATABASE keyword was specified during the DECLARE CURSOR statement, the same user ID and password are used for both of the following authentications:
    • The user ID and password that were used to authenticate against the currently connected database for the load operation.
    • The user ID and password that were used to authenticate against the source database, which was specified in the DATABASE option of the DECLARE CURSOR statement.

    If no user ID or password was specified for the connection to the loading database, a user ID and password for the source database must be specified during the DECLARE CURSOR statement.

  • Loading a multiple-part PC/IXF file whose individual parts are copied from a Windows system to an AIX® system is supported. The names of all the files must be specified in the LOAD command. For example, LOAD FROM DATA.IXF, DATA.002 OF IXF INSERT INTO TABLE1. Loading to the Windows operating system from logically split PC/IXF files is not supported.
  • When a failed LOAD is restarted, the behavior follows the existing behavior in that the BUILD phase is forced to use the REBUILD mode for indexes.
  • The load utility might generate a large copy image file when the COPY YES option is used. This behavior is expected because the LOAD command writes out an entire buffer of data to the copy image for every LOB/LF column value that is loaded. The buffer is an internal object, and its size is determined by several internal and external factors. Typically, the buffer size is between 68 KB and a few hundred KB.
  • Loading XML documents between databases is not supported and returns error message SQL1407N.
  • The LOAD utility does not support loading into tables that contain columns that reference fenced procedures. If you issue the LOAD command on such a table, an error message is returned (SQL1376N). To work around this restriction, you can redefine the routine to be unfenced, or use the import utility.
  • If a table contains a generated column expression in which the user-defined function is a compiled compound SQL, you can use the LOAD utility only with the generatedoverride file type modifier to insert data into the table. You can also use the import utility to insert data into these tables.
  • If the database table contains implicitly hidden columns, you must specify whether data for the hidden columns is included in the load operation.
  • The load utility does not match the number of columns in a table and the number of fields in a data file. The utility checks for sufficient data in the data file. If a row in the data file does not contain sufficient columns of data, one of the following actions is taken:
    • The row is rejected with a warning message if the corresponding table columns without data are defined as NOT NULL.
    • The row is inserted successfully without a warning message if the corresponding table columns are defined as NULL.

    Conversely, if a row contains a higher number of columns than required, the sufficient number of columns are processed while the remaining columns of data are omitted. When this scenario occurs, no warning message is given.
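The row-width rules above can be sketched as a small function. `fit_row` and its inputs are hypothetical, not a Db2 API; each table column is paired with a flag that indicates whether it is nullable:

```python
# Sketch of the row-width rules: too few fields either rejects the row
# (NOT NULL column left empty) or pads with NULLs; too many fields are
# silently truncated. Hypothetical helper, not a Db2 API.

def fit_row(columns, row):
    """columns: list of (name, nullable) pairs; row: field values.

    Return ('reject', reason) or ('insert', values)."""
    if len(row) < len(columns):
        missing = columns[len(row):]
        if any(not nullable for _name, nullable in missing):
            # A NOT NULL column would be left without data.
            return ('reject', 'warning: missing data for NOT NULL column')
        # Nullable columns are filled with NULLs, no warning.
        return ('insert', row + [None] * len(missing))
    # Extra fields beyond the table's columns are omitted, no warning.
    return ('insert', row[:len(columns)])
```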

  • The STATISTICS options work only for the LOAD REPLACE option and do not work for other LOAD command options.
  • When the LOAD utility is used with the COPY YES option, and the table contains LOB columns, LOAD always enforces COMPACT behavior even when the LOB column is defined with NOT COMPACT.

Summary of LOAD TERMINATE and LOAD RESTART dictionary management

The following chart summarizes the compression dictionary management behavior for LOAD processing under the TERMINATE directive.

Table 3. LOAD TERMINATE dictionary management
Table COMPRESS attribute | Table row data dictionary exists before LOAD? | XML storage object dictionary exists before LOAD (note 1)? | TERMINATE: LOAD REPLACE KEEPDICTIONARY or LOAD INSERT | TERMINATE: LOAD REPLACE RESETDICTIONARY
YES | YES | YES | Keep existing dictionaries. | Neither dictionary is kept (note 2).
YES | YES | NO | Keep existing dictionary. | Nothing is kept (note 2).
YES | NO | YES | Keep existing dictionary. | Nothing is kept.
YES | NO | NO | Nothing is kept. | Nothing is kept.
NO | YES | YES | Keep existing dictionaries. | Nothing is kept.
NO | YES | NO | Keep existing dictionary. | Nothing is kept.
NO | NO | YES | Keep existing dictionary. | Nothing is kept.
NO | NO | NO | Do nothing. | Do nothing.
Note:
  1. A compression dictionary can be created for the XML storage object of a table only when either of the following conditions are met:
    • The XML columns are added to the table in Db2 Version 9.7 or later.
    • The table is migrated by using an online table move.
  2. In the special case that the table has data capture enabled, the table row data dictionary is kept.

LOAD RESTART truncates a table up to the last consistency point reached. If a compression dictionary was present in the table at the time the last LOAD consistency point was taken, the dictionary still exists in the table, and LOAD RESTART does not create a new one. For a summary of the possible conditions, see Table 4.

Table 4. LOAD RESTART dictionary management
Table COMPRESS attribute | Table row data dictionary exists before LOAD consistency point (note 1)? | XML storage object dictionary exists before last LOAD (note 2)? | RESTART: LOAD REPLACE KEEPDICTIONARY or LOAD INSERT | RESTART: LOAD REPLACE RESETDICTIONARY
YES | YES | YES | Keep existing dictionaries. | Keep existing dictionaries.
YES | YES | NO | Keep existing table row data dictionary and build XML dictionary subject to ADC. | Keep existing table row data dictionary and build XML dictionary.
YES | NO | YES | Build table row data dictionary subject to ADC. Keep existing XML dictionary. | Build table row data dictionary. Keep existing XML dictionary.
YES | NO | NO | Build table row data and XML dictionaries subject to ADC. | Build table row data and XML dictionaries.
NO | YES | YES | Keep existing dictionaries. | Remove existing dictionaries.
NO | YES | NO | Keep existing table row data dictionary. | Remove existing table row data dictionary.
NO | NO | YES | Keep existing XML dictionary. | Remove existing XML dictionary.
NO | NO | NO | Do nothing. | Do nothing.
Notes:
  1. The SAVECOUNT option is not allowed when you load XML data; load operations that fail during the load phase restart from the beginning of the operation.
  2. A compression dictionary can be created for the XML storage object of a table only when either of the following conditions are met:
    • The XML columns are added to the table in Db2 Version 9.7 or later.
    • The table is migrated by using an online table move.

File type modifiers for the load utility

Table 5. Valid file type modifiers for all file formats
Modifier Description
anyorder This modifier specifies that the preservation of the source data order is not required. Using this modifier yields significant performance benefits on SMP systems. Use this modifier with the cpu_parallelism parameter. If the value of the cpu_parallelism parameter is 1, this modifier is ignored. This modifier is not supported if the value of the SAVECOUNT parameter is greater than 0, because crash recovery after a consistency point requires that data is loaded in sequence. This modifier is implicitly turned on for all load operations for column-organized tables, multidimensional clustering (MDC) tables, and range-partitioned tables.
cdeanalyzefrequency=x x is an integer between 0 and 99 inclusive. This value controls how much data is sampled in the ANALYZE phase to produce a compression dictionary. In a massively parallel processing (MPP) environment, the sampling size is not aggregated across members. The ANALYZE phase is stopped when the first member reaches the maximum.
generatedignore This modifier informs the load utility to ignore data for all generated columns, even though it is present in the data file. As a result, the utility generates the values of all generated columns. This modifier cannot be used with either the generatedmissing or the generatedoverride modifier.
generatedmissing If this modifier is specified, the utility assumes that the input data file contains no data for the generated columns (not even NULLs). As a result, the utility generates the values of all generated columns. This modifier cannot be used with either the generatedignore or the generatedoverride modifier.
generatedoverride This modifier instructs the load utility to accept user-supplied data for all generated columns in the table (contrary to the normal rules for these types of columns). This modifier is useful when migrating data from another database system, or when you load a table from data that was recovered by using the RECOVER DROPPED TABLE option on the ROLLFORWARD DATABASE command. When this modifier is used, any rows with no data or NULL data for a non-nullable generated column are rejected (SQL3116W). When this modifier is used, the table is placed in Set Integrity Pending state. To take the table out of Set Integrity Pending state without verifying the user-supplied values, issue the following command after the load operation:
SET INTEGRITY FOR table-name GENERATED COLUMN
  IMMEDIATE UNCHECKED
To take the table out of Set Integrity Pending state and force verification of the user-supplied values, issue the following command after the load operation:
SET INTEGRITY FOR table-name IMMEDIATE CHECKED

When this modifier is specified and a generated column exists in any of the partitioning keys, dimension keys, or distribution keys, then the LOAD command automatically converts the modifier to generatedignore. The LOAD command then proceeds with the load operation. The net effect is to regenerate all the generated column values.

This modifier cannot be used with either the generatedmissing or the generatedignore modifier.

This modifier cannot be used for column-organized tables (SQLSTATE 42858).

Random distribution tables that use the random by generation method have an internally generated column that is called RANDOM_DISTRIBUTION_KEY. This modifier does not apply to that column, only to other generated columns in the table. Values for the RANDOM_DISTRIBUTION_KEY are always regenerated.

identityignore This modifier informs the load utility to ignore data for the identity column, even though it is present in the data file. The result is that all identity values are generated by the utility. The behavior is the same for both GENERATED ALWAYS and GENERATED BY DEFAULT identity columns. For GENERATED ALWAYS columns, no rows are rejected. This modifier cannot be used with either the identitymissing or the identityoverride modifier.
identitymissing If this modifier is specified, the utility assumes that the input data file contains no data for the identity column (not even NULLs). Therefore, it generates a value for each row. The behavior is the same for both GENERATED ALWAYS and GENERATED BY DEFAULT identity columns. This modifier cannot be used with either the identityignore or the identityoverride modifier.
identityoverride Use this modifier only when an identity column defined as GENERATED ALWAYS is present in the table to be loaded. It instructs the utility to accept explicit, non-NULL data for such a column (contrary to the normal rules for these types of identity columns). This modifier is useful in the following scenarios:
  • When you migrate data from another database system and the table must be defined as GENERATED ALWAYS.
  • When you load a table from data that was recovered by using the DROPPED TABLE RECOVERY option on the ROLLFORWARD DATABASE command.

When this modifier is used, any rows with no data or NULL data for the identity column are rejected (SQL3116W). This modifier cannot be used with either the identitymissing or the identityignore modifier. The load utility does not attempt to maintain or verify the uniqueness of values in the table's identity column when this option is used.

implicitlyhiddeninclude If this modifier is specified, the utility assumes that the input data file contains data for the implicitly hidden columns and this data is also loaded. This modifier cannot be used with the implicitlyhiddenmissing modifier. For more information about the precedence when multiple modifiers are specified, see Notes.

Random distribution tables that use the random by generation method have a hidden distribution column called RANDOM_DISTRIBUTION_KEY. This modifier does not apply to that column, only to the other hidden columns in the table. The RANDOM_DISTRIBUTION_KEY column is treated as if it were missing.

implicitlyhiddenmissing If this modifier is specified, the utility assumes that the input data file does not contain data for the implicitly hidden columns and the utility generates values for those hidden columns. This modifier cannot be used with the implicitlyhiddeninclude modifier. For more information about the precedence when multiple modifiers are specified, see Notes.
indexfreespace=x x is an integer between 0 and 99 inclusive. The value is interpreted as the percentage of each index page that is to be left as free space when load rebuilds the index. Load with INDEXING MODE INCREMENTAL ignores this option. The first entry in a page is added without restriction; subsequent entries are added to maintain the percent free space threshold. The default value is the one used at CREATE INDEX time.

This value takes precedence over the PCTFREE value that was specified in the CREATE INDEX statement. The indexfreespace option affects index leaf pages only.

lobsinfile lob-path specifies the path to the files that contain LOB data. The ASC, DEL, or IXF load input files contain the names of the files that have LOB data in the LOB column.

This option is not supported when specified along with the CURSOR file type.

The LOBS FROM clause specifies where the LOB files are located when the lobsinfile modifier is used. The LOBS FROM clause implicitly activates the lobsinfile behavior. The LOBS FROM clause conveys to the LOAD utility the list of paths to search for the LOB files while data is loaded.

Each path contains at least one file that contains at least one LOB pointed to by a Lob Location Specifier (LLS) in the data file. The LLS is a string representation of the location of a LOB in a file that is stored in the LOB file path. The format of an LLS is filename.ext.nnn.mmm/, where:
  • filename.ext is the name of the file that contains the LOB.
  • nnn is the offset in bytes of the LOB within the file.
  • mmm is the length of the LOB in bytes.

For example, if the string db2exp.001.123.456/ is stored in the data file, the LOB is located at offset 123 in the file db2exp.001, and is 456 bytes long.

To indicate a null LOB, enter the size as -1. If the size is specified as 0, it is treated as a 0 length LOB. For null LOBS with length of -1, the offset and the file name are ignored. For example, the LLS of a null LOB might be db2exp.001.7.-1/.
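The LLS format lends itself to a short worked example. The following Python sketch decodes an LLS string; `parse_lls` is an invented helper for illustration, not a Db2 API:

```python
# Sketch of decoding a Lob Location Specifier (LLS) of the form
# filename.ext.nnn.mmm/ as described above. A length of -1 denotes
# a null LOB. Invented helper, not a Db2 API.

def parse_lls(lls):
    """Return (filename, offset, length), or (None, None, None) for a null LOB."""
    body = lls.rstrip('/')
    # The last two dot-separated components are the offset and length.
    filename, offset, length = body.rsplit('.', 2)
    if int(length) == -1:
        # Null LOB: the file name and offset are ignored.
        return (None, None, None)
    return (filename, int(offset), int(length))
```

For example, `parse_lls('db2exp.001.123.456/')` yields `('db2exp.001', 123, 456)`: the LOB is at offset 123 in file db2exp.001 and is 456 bytes long.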

maxanalyzesize=x x is a size value of the form <number><megabytes|gigabytes>. The default size is 128 GB. maxanalyzesize controls how much data is sampled in the ANALYZE phase to produce a compression dictionary. In a massively parallel processing (MPP) environment, the sampling size is not aggregated across members. The ANALYZE phase is stopped when the first member reaches the maximum.
Note: A value of 0 means unlimited (full size).
For example:
modified by maxanalyzesize=1G
modified by maxanalyzesize=100M
noheader Skips the header verification code (applicable only to load operations into tables that exist in a single-partition database partition group).

If the default MPP load (mode PARTITION_AND_LOAD) is used against a table in a single-partition database partition group, the file is not expected to have a header. Thus the noheader modifier is not needed. If the LOAD_ONLY mode is used, the file is expected to have a header. The only circumstance in which you need to use the noheader modifier is if you want to run a LOAD_ONLY operation with a file that does not have a header.

norowwarnings Suppresses all warnings about rejected rows.
pagefreespace=x x is an integer between 0 and 100 inclusive. The value is interpreted as the percentage of each data page that is to be left as free space. If the specified value is invalid because of the minimum row size, the row is placed on a new page. An example of an invalid value is a row that is at least 3,000 bytes long with an x value of 50. If a value of 100 is specified, each row is placed on a new page. The PCTFREE value of a table determines the amount of free space that is designated per page. If neither a pagefreespace value on the load operation nor a PCTFREE value on the table is set, the utility fills up as much space as possible on each page. The value set by pagefreespace overrides the PCTFREE value that was specified for the table.
periodignore This modifier informs the load utility to ignore data for the period columns, even though it is present in the data. When this modifier is specified, all period column values are generated by the utility. This modifier cannot be used with the periodmissing modifier or the periodoverride modifier.
periodmissing If this modifier is specified, the utility assumes that the input data file contains no data for the period columns. When this modifier is specified, all period column values are generated by the utility. This modifier cannot be used with the periodignore modifier or the periodoverride modifier.
periodoverride This modifier instructs the load utility to accept user-supplied data for GENERATED ALWAYS AS ROW BEGIN and GENERATED ALWAYS AS ROW END columns in a system-period temporal table. This behavior is contrary to the normal rules for these types of columns. The modifier can be useful when you want to maintain history data and load data that includes timestamps into a system-period temporal table. When this modifier is used, any rows with no data or NULL data in a ROW BEGIN or ROW END column are rejected.
rowchangetimestampignore This modifier informs the load utility to ignore data for the row change timestamp column, even though it is present in the data file. The result is that all row change timestamp values are generated by the utility. The behavior is the same for both GENERATED ALWAYS and GENERATED BY DEFAULT columns. For GENERATED ALWAYS columns, no rows are rejected. This modifier cannot be used with either the rowchangetimestampmissing or the rowchangetimestampoverride modifier.
rowchangetimestampmissing If this modifier is specified, the utility assumes that the input data file contains no data for the row change timestamp column (not even NULLs). Therefore, a value is generated for each row. The behavior is the same for both GENERATED ALWAYS and GENERATED BY DEFAULT columns. This modifier cannot be used with either the rowchangetimestampignore or the rowchangetimestampoverride modifier.
rowchangetimestampoverride Use this modifier only when a row change timestamp column defined as GENERATED ALWAYS is present in the table to be loaded. It instructs the utility to accept explicit, non-NULL data for such a column (contrary to the normal rules for these types of row change timestamp columns). This modifier is useful in the following scenarios:
  • When you migrate data from another database system and the table must be defined as GENERATED ALWAYS.
  • When you load a table from data that was recovered by using the DROPPED TABLE RECOVERY option on the ROLLFORWARD DATABASE command.

When this modifier is used, any rows with no data or NULL data for the ROW CHANGE TIMESTAMP column are rejected (SQL3116W). This modifier cannot be used with either the rowchangetimestampmissing or the rowchangetimestampignore modifier. The load utility does not attempt to maintain or verify the uniqueness of values in the table's row change timestamp column when this option is used.

seclabelchar Indicates that security labels in the input source file are in the string format for security label values rather than in the default encoded numeric format. LOAD converts each security label into the internal format as it is loaded. If a string is not in the proper format the row is not loaded and a warning (SQLSTATE 01H53, SQLCODE SQL3242W) is returned. If the string does not represent a valid security label that is part of the security policy protecting the table, then the row is not loaded and a warning (SQLSTATE 01H53, SQLCODE SQL3243W) is returned.

This modifier cannot be specified when the seclabelname modifier is specified. Otherwise, the load fails and an error (SQLCODE SQL3525N) is returned.

If you have a table that consists of a single DB2SECURITYLABEL column, the data file might look like this:
"CONFIDENTIAL:ALPHA:G2"
"CONFIDENTIAL:SIGMA:G2"
"TOP SECRET:ALPHA:G2"
To load or import this data, the seclabelchar file type modifier must be used:
LOAD FROM input.del OF DEL MODIFIED BY SECLABELCHAR INSERT INTO t1
seclabelname Indicates that security labels in the input source file are indicated by their name rather than the default encoded numeric format. LOAD converts the name to the appropriate security label if it exists. If no security label exists with the indicated name for the security policy that protects the table, the row is not loaded and a warning (SQLSTATE 01H53, SQLCODE SQL3244W) is returned.

This modifier cannot be specified when the seclabelchar modifier is specified. Otherwise, the load fails and an error (SQLCODE SQL3525N) is returned.

If you have a table that consists of a single DB2SECURITYLABEL column, the data file might consist of the following sample security label names:
"LABEL1"
"LABEL1"
"LABEL2"
To load or import this data, the seclabelname file type modifier must be used:
   LOAD FROM input.del OF DEL MODIFIED BY SECLABELNAME INSERT INTO t1
Note: If the file type is ASC, any spaces that follow the name of the security label are interpreted as being part of the name. To avoid this issue, use the striptblanks file type modifier to make sure the spaces are removed.
totalfreespace=x x is an integer greater than or equal to 0. The value is interpreted as the percentage of the total pages in the table that is to be appended to the end of the table as free space. For example, if x is 20 and the table has 100 data pages after the data is loaded, 20 additional empty pages are appended, for a total of 120 data pages. The total counts data pages only and does not include index pages; this option does not affect the index object. If two loads are done with this option specified, the second load does not reuse the extra space appended to the end by the first load.
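For example, to append free space equal to 20 percent of the loaded table's data pages (the file and table names are illustrative):

```
db2 load from data.del of del
   modified by totalfreespace=20
   insert into t1
```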
transactionidignore This modifier informs the load utility to ignore data for the TRANSACTION START ID column, even though it is present in the data file. When this modifier is specified, the value for the TRANSACTION START ID column is generated by the utility. This modifier cannot be used with the transactionidmissing modifier or the transactionidoverride modifier.
transactionidmissing If this modifier is specified, the utility assumes that the input data file contains no data for the TRANSACTION START ID columns. When this modifier is specified, the value for the TRANSACTION START ID column is generated by the utility. This modifier cannot be used with the transactionidignore modifier or the transactionidoverride modifier.
transactionidoverride This modifier instructs the load utility to accept user-supplied data for the GENERATED ALWAYS AS TRANSACTION START ID column in a system-period temporal table. This behavior is contrary to the normal rules for this type of column. When this modifier is used, any rows with no data or NULL data in a TRANSACTION START ID column are rejected.
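For example, to preserve existing TRANSACTION START ID values when loading recovered history data into a system-period temporal table (the file and table names are illustrative):

```
db2 load from history.del of del
   modified by transactionidoverride
   insert into t1
```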
usedefaults If a source column for a target table column has been specified, but it contains no data for one or more row instances, default values are loaded. Examples of missing data are:
  • For DEL files, two adjacent column delimiters (",,") or two adjacent column delimiters separated by an arbitrary number of spaces (", ,") are specified for a column value.
  • For DEL/ASC files, a row that does not have enough columns, or is not long enough for the original specification. For ASC files, NULL column values are not considered explicitly missing, and a default value is not substituted for NULL column values. NULL column values are represented by all space characters for numeric, date, time, and timestamp columns, or by using the NULL INDICATOR for a column of any type to indicate the column is NULL.
Without this option, if a source column contains no data for a row instance, the following occurs:
  • For DEL/ASC files, if the column is nullable, a NULL is loaded. If the column is not nullable, the utility rejects the row.
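For example, to substitute defined default values for missing column data instead of rejecting rows or loading NULLs (the file and table names are illustrative):

```
db2 load from data.del of del
   modified by usedefaults
   insert into t1
```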
Table 6. Valid file type modifiers for ASCII file formats (ASC/DEL)
Modifier Description
codepage=x x is an ASCII character string. The value is interpreted as the code page of the data in the input data set. Converts character data (and numeric data specified in characters) from this code page to the database code page during the load operation.
The following rules apply:
  • For pure DBCS (graphic), mixed DBCS, and EUC, delimiters are restricted to the range of x00 to x3F, inclusive.
  • For DEL data specified in an EBCDIC code page, the delimiters might not coincide with the shift-in and shift-out DBCS characters.
  • nullindchar must specify symbols included in the standard ASCII set between code points x20 and x7F, inclusive. This restriction applies to ASCII symbols and code points; EBCDIC data can use the corresponding symbols, even though the code points are different.

This option is not supported when specified along with the CURSOR file type.
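As a sketch, loading a file whose data is encoded in code page 850 might look like the following command (the code page value, file name, and table name are illustrative):

```
db2 load from data.del of del
   modified by codepage=850
   insert into t1
```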

dateformat="x" x is the format of the date in the source file.1
The following date elements are valid:
YYYY - Year (four digits ranging from 0000 - 9999)
M    - Month (one or two digits ranging from 1 - 12)
MM   - Month (two digits ranging from 01 - 12;
         mutually exclusive with M)
D    - Day (one or two digits ranging from 1 - 31)
DD   - Day (two digits ranging from 01 - 31;
         mutually exclusive with D)
DDD  - Day of the year (three digits ranging
         from 001 - 366; mutually exclusive
         with other day or month elements)
A default value of 1 is assigned for each element that is not specified. The following examples are date formats:
"D-M-YYYY"
"MM.DD.YYYY"
"YYYYDDD"
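For example, to load a file whose date columns use the day-of-year format shown above (the file and table names are illustrative):

```
db2 load from data.del of del
   modified by dateformat="YYYYDDD"
   insert into t1
```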
dumpfile = x x is the fully qualified (according to the server database partition) name of an exception file to which rejected rows are written. A maximum of 32 KB of data is written per record. The following example shows how to specify a dump file:
db2 load from data of del
   modified by dumpfile = /u/user/filename
   insert into table_name

The file is created and owned by the instance owner. To override the default file permissions, use the dumpfileaccessall file type modifier.

Note:
  1. In a partitioned database environment, the path must be local to the loading database partition, so that concurrently running load operations do not attempt to write to the same file.
  2. The contents of the file are written to disk in an asynchronous buffered mode. If a load operation fails or is interrupted, the number of records that are committed to disk cannot be known with certainty, and might lack consistency after a LOAD RESTART. You can assume that the file is complete only after a load operation starts and completes in a single pass.
  3. If the specified file exists, it is not re-created, but it is truncated.
dumpfileaccessall Grants read access to 'OTHERS' when a dump file is created.
This file type modifier is only valid when:
  1. It is used along with dumpfile file type modifier.
  2. The user has SELECT privilege on the load target table.
  3. It is issued on a Db2 server database partition that exists on a UNIX operating system.

If the specified file exists, its permissions are not changed.

fastparse Reduces syntax checking on user-supplied column values, and enhances performance. Tables are guaranteed to be architecturally correct (the utility performs sufficient data checking to prevent a segmentation violation or trap), however, the coherence of the data is not validated. Do not use the fastparse option unless it is certain that all of the input data is valid. If invalid data, such as an incorrectly formatted timestamp like :1>0-00-20-07.11.12.000000, is submitted with the fastparse option, some SQL operations can propagate the invalid data to other parts of the database without detection. When the invalid data is later detected, it might be difficult to track its origin, or how many other locations the data was copied to.
implieddecimal The location of an implied decimal point is determined by the column definition; it is no longer assumed to be at the end of the value. For example, the value 12345 is loaded into a DECIMAL(8,2) column as 123.45, not 12345.00.

This modifier cannot be used with the packeddecimal modifier.
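For example, with the following command (the file and table names are illustrative), a source value of 12345 is stored in a DECIMAL(8,2) column as 123.45:

```
db2 load from data.del of del
   modified by implieddecimal
   insert into t1
```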

timeformat="x" x is the format of the time in the source file.1 Valid time elements are:
H     - Hour (one or two digits ranging from 0 - 12
           for a 12 hour system, and 0 - 24
           for a 24 hour system)
HH    - Hour (two digits ranging from 00 - 12
           for a 12 hour system, and 00 - 24
           for a 24 hour system; mutually exclusive
             with H)
M     - Minute (one or two digits ranging
           from 0 - 59)
MM    - Minute (two digits ranging from 00 - 59;
           mutually exclusive with M)
S     - Second (one or two digits ranging
           from 0 - 59)
SS    - Second (two digits ranging from 00 - 59;
           mutually exclusive with S)
SSSSS - Second of the day after midnight (5 digits
           ranging from 00000 - 86400; mutually
           exclusive with other time elements)
TT    - Meridian indicator (AM or PM)
A default value of 0 is assigned for each element that is not specified. Some examples of time formats are:
"HH:MM:SS"
"HH.MM TT"
"SSSSS"
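For example, to load a file whose time columns use a 12 hour format with a meridian indicator (the file and table names are illustrative):

```
db2 load from data.del of del
   modified by timeformat="HH.MM TT"
   insert into t1
```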
timestampformat="x" x is the format of the timestamp in the source file.1 The following list contains valid timestamp elements:
YYYY   - Year (four digits ranging from 0000 - 9999)
M      - Month (one or two digits ranging from 1 - 12)
MM     - Month (two digits ranging from 01 - 12;
            mutually exclusive with M and MMM)
MMM    - Month (three-letter case-insensitive abbreviation for 
            the month name; mutually exclusive with M and MM)              
D      - Day (one or two digits ranging from 1 - 31)
DD     - Day (two digits ranging from 01 - 31; mutually exclusive with D)
DDD    - Day of the year (three digits ranging from 001 - 366; 
            mutually exclusive with other day or month elements)
H      - Hour (one or two digits ranging from 0 - 12
            for a 12 hour system, and 0 - 24 for a 24 hour system)
HH     - Hour (two digits ranging from 00 - 12 
            for a 12 hour system, and 00 - 24 for a 24 hour system; 
            mutually exclusive with H)
M      - Minute (one or two digits ranging from 0 - 59)
MM     - Minute (two digits ranging from 00 - 59;
            mutually exclusive with M, minute)
S      - Second (one or two digits ranging from 0 - 59)
SS     - Second (two digits ranging from 00 - 59;
            mutually exclusive with S)
SSSSS  - Second of the day after midnight (5 digits
            ranging from 00000 - 86400; mutually
            exclusive with other time elements)
U (1 to 12 times)
         - Fractional seconds (the number of occurrences of U
              represents the number of digits, with each digit
              ranging from 0 - 9)

TT     - Meridian indicator (AM or PM)
A default value of 1 is assigned for unspecified YYYY, M, MM, D, DD, or DDD elements. A default value of 'Jan' is assigned to an unspecified MMM element. A default value of 0 is assigned for all other unspecified elements. The following example shows a timestamp format:
   "YYYY/MM/DD HH:MM:SS.UUUUUU"

The valid values for the MMM element include 'jan', 'feb', 'mar', 'apr', 'may', 'jun', 'jul', 'aug', 'sep', 'oct', 'nov', and 'dec'. These values are case-insensitive.

If the timestampformat modifier is not specified, the load utility formats the timestamp field by using one of two possible formats:
YYYY-MM-DD-HH.MM.SS 
YYYY-MM-DD HH:MM:SS

The load utility chooses the format by looking at the separator between the DD and HH. If it is a dash '-', the load utility uses the regular dashes and dots format (YYYY-MM-DD-HH.MM.SS). If it is a blank space, then the load utility expects a colon ':' to separate the HH, MM, and SS.

In either format, if you include the microseconds field (UUUUUU), the load utility expects the dot '.' as the separator. Either YYYY-MM-DD-HH.MM.SS.UUUUUU or YYYY-MM-DD HH:MM:SS.UUUUUU are acceptable.

The following example illustrates how to load data that contains user-defined date and time formats into a table called schedule:
db2 load from delfile2 of del
    modified by timestampformat="yyyy.mm.dd hh:mm tt"
    insert into schedule
usegraphiccodepage If usegraphiccodepage is specified, it is assumed that data loaded into graphic or double-byte character large object (DBCLOB) data fields is in the graphic code page. The rest of the data is assumed to be in the character code page. The graphic code page is associated with the character code page. The LOAD command determines the character code page through either the codepage modifier, if it is specified, or through the code page of the database if the codepage modifier is not specified.

Use this modifier along with the delimited data file that was generated by drop table recovery only if the table that you recovered has graphic data.

Restrictions

The usegraphiccodepage modifier must not be specified with DEL files that are created by the EXPORT utility, as these files contain data encoded in only one code page. The usegraphiccodepage modifier is also ignored by the double-byte character large objects (DBCLOBs) in files.

xmlchar Specifies that XML documents are encoded in the character code page.

This option is useful for processing XML documents that are encoded in the specified character code page but do not contain an encoding declaration.

For each document, if a declaration tag exists and contains an encoding attribute, the encoding must match the character code page, otherwise the row that contains the document is rejected. The character code page is the value that was specified by the codepage file type modifier, or the application code page if it is not specified. By default, either the documents are encoded in Unicode, or they contain a declaration tag with an encoding attribute.

xmlgraphic Specifies that XML documents are encoded in the specified graphic code page.

This option is useful for processing XML documents that are encoded in a specific graphic code page but do not contain an encoding declaration.

For each document, if a declaration tag exists and contains an encoding attribute, the encoding must match the graphic code page, otherwise the row that contains the document is rejected. The graphic code page is the graphic component of the value that was specified by the codepage file type modifier. If that modifier is not specified, the graphic component of the application code page is used. By default, documents are either encoded in Unicode, or they contain a declaration tag with an encoding attribute.

Table 7. Valid file type modifiers for ASC file formats (Non-delimited ASCII)
Modifier Description
binarynumerics Numeric (but not DECIMAL) data must be in binary form, not the character representation. This format avoids costly conversions.

This option is supported only with positional ASC, by using fixed-length records specified by the reclen option.

The following rules apply:
  • Conversion between data types is not done, except for BIGINT, INTEGER, and SMALLINT.
  • Data lengths must match their target column definitions.
  • FLOATs must be in IEEE Floating Point format.
  • Binary data in the load source file is assumed to be big-endian, regardless of the platform on which the load operation is running.

NULLs cannot be present in the data for columns that are affected by this modifier. Blanks (normally interpreted as NULL) are interpreted as a binary value when this modifier is used.
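As a sketch, a positional ASC load of two 4-byte binary INTEGER columns might look like the following command (the record layout, file name, and table name are illustrative):

```
db2 load from data.asc of asc
   modified by binarynumerics reclen=8
   method l (1 4, 5 8)
   insert into t1
```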

nochecklengths If nochecklengths is specified, an attempt is made to load each row, even if the source data has a column definition that exceeds the size of the target table column. Such rows can be successfully loaded if code page conversion causes the source data to shrink. For example, 4-byte EUC data in the source can shrink to 2-byte DBCS data in the target, and require half the space. This option is useful if it is known that the source data fits in all cases despite mismatched column definitions.
nullindchar=x x is a single character. This modifier changes the character that denotes a NULL value to x. The default value of x is Y.2

This modifier is case-sensitive for EBCDIC data files, except when the character is an English letter. For example, if the NULL indicator character is specified to be the letter N, then n is also recognized as a NULL indicator.

packeddecimal Loads packed-decimal data directly, since the binarynumerics modifier does not include the DECIMAL field type.

This option is supported only with positional ASC, by using fixed-length records specified by the reclen option.

The following values are supported for the sign nibble:
+ = 0xC 0xA 0xE 0xF
- = 0xD 0xB

NULLs cannot be present in the data for columns that are affected by this modifier. Blanks (normally interpreted as NULL) are interpreted as a binary value when this modifier is used.

Regardless of the server platform, the byte order of binary data in the load source file is assumed to be big-endian. When you use this modifier on Windows operating systems, the byte order must not be reversed.

This modifier cannot be used with the implieddecimal modifier.

reclen=x x is an integer with a maximum value of 32 767. x characters are read for each row, and a newline character is not used to indicate the end of the row.
striptblanks Truncates any trailing blank spaces when you load data into a variable-length field. If this option is not specified, blank spaces are kept.

This option cannot be specified together with striptnulls. These options are mutually exclusive. This option replaces the obsolete t option, which is supported for earlier compatibility only.
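For example, to remove trailing blanks from a fixed-width field loaded into a VARCHAR column (the column positions, file name, and table name are illustrative):

```
db2 load from data.asc of asc
   modified by striptblanks
   method l (1 20)
   insert into t1
```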

striptnulls Truncates any trailing NULLs (0x00 characters) when you load data into a variable-length field. If this option is not specified, NULLs are kept.

This option cannot be specified together with striptblanks. These options are mutually exclusive. This option replaces the obsolete padwithzero option, which is supported for earlier compatibility only.

zoneddecimal Loads zoned decimal data, since the binarynumerics modifier does not include the DECIMAL field type. This option is supported only with positional ASC, by using fixed-length records specified by the reclen option.
Half-byte sign values can be one of the following values:
+ = 0xC 0xA 0xE 0xF 0x3
- = 0xD 0xB 0x7

Supported values for digits are 0x0 to 0x9.

Supported values for zones are 0x3 and 0xF.
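As a sketch, a positional ASC load of a single zoned decimal field might look like the following command (the record layout, file name, and table name are illustrative):

```
db2 load from data.asc of asc
   modified by zoneddecimal reclen=8
   method l (1 8)
   insert into t1
```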

Table 8. Valid file type modifiers for DEL file formats (Delimited ASCII)
Modifier Description
chardelx x is a single character string delimiter. The default value is a double quotation mark ("). The specified character is used in place of double quotation marks to enclose a character string.2, 3
You can explicitly specify the double quotation mark (") as the character string delimiter by using the following syntax:
modified by chardel""
You can also specify the single quotation mark (') as a character string delimiter by using the following syntax:
modified by chardel''
coldelx x is a single character column delimiter. The default value is a comma (,). The specified character is used in place of a comma to signal the end of a column.2, 3
decplusblank Plus sign character. Causes positive decimal values to be prefixed with a blank space instead of a plus sign (+). The default action is to prefix positive decimal values with a plus sign.
decptx x is a single character substitute for the period as a decimal point character. The default value is a period (.). The specified character is used in place of a period as a decimal point character.2, 3
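The delimiter overrides can be combined. For example, the following command (the file and table names are illustrative) loads a file that uses the single quotation mark as the character string delimiter, the semicolon as the column delimiter, and the comma as the decimal point:

```
db2 load from data.del of del
   modified by chardel'' coldel; decpt,
   insert into t1
```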
delprioritychar The current default priority for delimiters is: record delimiter, character delimiter, column delimiter. This modifier protects existing applications that depend on the older priority by reverting the delimiter priorities to: character delimiter, record delimiter, column delimiter. Syntax:
db2 load ... modified by delprioritychar ...
The following example is based on this DEL data file:
"Smith, Joshua",4000,34.98<row delimiter>
"Vincent,<row delimiter>, is a manager", ...
... 4005,44.37<row delimiter>

With the delprioritychar modifier specified, this data file contains only two rows. The second <row delimiter> is interpreted as part of the first data column of the second row, while the first and the third <row delimiter> are interpreted as actual record delimiters. If this modifier is not specified, this data file contains three rows, each delimited by a <row delimiter>.

keepblanks Preserves the leading and trailing blanks in each field of type CHAR, VARCHAR, LONG VARCHAR, or CLOB. Without this option, all leading and trailing blanks that are not inside character delimiters are removed, and a NULL is inserted into the table for all blank fields.
The following example illustrates how to load data into table TABLE1 and preserve all leading and trailing spaces in the data file:
db2 load from delfile3 of del
   modified by keepblanks
   insert into table1
nochardel The load utility assumes that all bytes found between the column delimiters are part of the column's data. Character delimiters are parsed as part of column data. Do not specify this option if the data was exported from a Db2 database system (unless nochardel was specified at export time). It is provided to support vendor data files that do not have character delimiters. Improper usage might result in data loss or corruption.

This option cannot be specified with chardelx, delprioritychar or nodoubledel. These options are mutually exclusive.

nodoubledel Suppresses recognition of double character delimiters.
Table 9. Valid file type modifiers for IXF file format
Modifier Description
forcein Directs the utility to accept data despite code page mismatches, and to suppress conversion between code pages.

Fixed-length target fields are checked to verify that they are large enough for the data. If nochecklengths is specified, no checking is done, and an attempt is made to load each row.

nochecklengths If nochecklengths is specified, an attempt is made to load each row, even if the source data has a column definition that exceeds the size of the target table column. Such rows can be successfully loaded if code page conversion causes the source data to shrink. For example, 4-byte EUC data in the source can shrink to 2-byte DBCS data in the target, and require half the space. This option is useful if it is known that the source data fits in all cases despite mismatched column definitions.

Notes

  1. Double quotation marks around the date format string are mandatory. Field separators cannot contain any of the following characters: a-z, A-Z, and 0-9. The field separator must not be the same as the character delimiter or field delimiter in the DEL file format. A field separator is optional if the start and end positions of an element are unambiguous. Ambiguity can exist if (depending on the modifier) elements such as D, H, M, or S are used, because of the variable length of the entries.
    For timestamp formats, care must be taken to avoid ambiguity between the month and the minute descriptors, since they both use the letter M. A month field must be next to other date fields. A minute field must be next to other time fields. Following are some ambiguous timestamp formats:
    "M" (could be a month, or a minute)
    "M:M" (Which is which?)
    "M:YYYY:M" (Both are interpreted as month.)
    "S:M:YYYY" (adjacent to both a time value and a date value)
    In ambiguous cases, the utility reports an error message, and the operation fails.
    Following are some unambiguous timestamp formats:
    "M:YYYY" (Month)
    "S:M" (Minute)
    "M:YYYY:S:M" (Month....Minute)
    "M:H:YYYY:M:D" (Minute....Month)

    Some characters, such as double quotation marks and backslashes, must be preceded by an escape character (for example, \).

  2. Character values provided for the chardel, coldel, or decpt file type modifiers must be specified in the code page of the source data.
    The character code point (instead of the character symbol) can be specified by using the syntax xJJ or 0xJJ, where JJ is the hexadecimal representation of the code point. For example, to specify the # character as a column delimiter, use one of the following statements:
    ... modified by coldel# ...
    ... modified by coldel0x23 ...
    ... modified by coldelX23 ...
  3. Delimiter considerations for moving data lists restrictions that apply to the characters that can be used as delimiter overrides.
  4. The load utility does not issue a warning if an attempt is made to use unsupported file types with the MODIFIED BY option. If you attempt to use an unsupported file type with the MODIFIED BY option, the load operation fails, and an error code is returned.
  5. When multiple modifiers suffixed with ignore, include, missing, and override are specified, they are applied in the order that they are listed. In the following statement, data for implicitly hidden columns that are not identity columns is included in the input data, while data for all identity columns, regardless of their implicitly hidden status, is not.
    db2 load from delfile1 of del modified by 
       implicitlyhiddeninclude identitymissing insert into table1
    However, changing the order of the file type modifiers in the following statement means that data for all implicitly hidden columns (including hidden identity columns) is included in the input data, while data for identity columns that are not implicitly hidden is not.
    db2 load from delfile1 of del modified by 
       identitymissing implicitlyhiddeninclude insert into table1
Table 10. LOAD behavior when codepage and usegraphiccodepage are used
codepage=N usegraphiccodepage LOAD behavior
Absent Absent All data in the file is assumed to be in the database code page, not the application code page, even if the CLIENT option is specified.
Present Absent All data in the file is assumed to be in code page N.
Important: Graphic data becomes corrupted when loaded into the database if N is a single-byte code page.
Absent Present Character data in the file is assumed to be in the database code page, even if the CLIENT option is specified. Graphic data is assumed to be in the code page of the database graphic data, even if the CLIENT option is specified.

If the database code page is single-byte, then all data is assumed to be in the database code page.

Important: Graphic data becomes corrupted when loaded into a single-byte database.
Present Present Character data is assumed to be in code page N. Graphic data is assumed to be in the graphic code page of N.

If N is a single-byte or double-byte code page, then all data is assumed to be in code page N.

Important: Graphic data becomes corrupted when loaded into the database if N is a single-byte code page.