
Parallel INSERT



There are multiple ways to populate a table. The LOAD utility, for example, is efficient, but it is not logged and does not allow concurrent modifications to the table. The INSERT statement, on the other hand, is logged and better suited when concurrent access is needed, but it is generally not as fast as LOAD. Parallel INSERT, which is supported for a column table as a target, can help speed up a bulk INSERT.

INSERT statements may be parallelized in a system with database partitioning. However, in a non-partitioned database, or when there are many more cores than database partitions, parallelism is not exploited to the extent that it could be. When intra-query parallelism is enabled, even though the access and processing of the source may be adequately parallelized, the operation of inserting the data into the target table is serialized.

In V11.1.1.1, parallel INSERT can be enabled for experimentation in a test environment with: db2set DB2_EXTENDED_OPTIMIZATION=CDE_PAR_IUD -im. This will be enabled in a subsequent fix pack. Depending on the number of cores in a non-partitioned database, or the number of cores per database partition, the performance of inserting data into column oriented tables in parallel can be significantly improved.
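As a sketch, the registry variable can be set and later cleared from the command line as follows (the -im option requests that the change take effect immediately; clearing a variable by assigning an empty value is standard db2set behavior):

```
# Enable parallel INSERT for experimentation (immediate effect)
db2set DB2_EXTENDED_OPTIMIZATION=CDE_PAR_IUD -im

# Verify the current registry settings
db2set -all

# Clear the variable to return to the default behavior
db2set DB2_EXTENDED_OPTIMIZATION= -im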


Parallel INSERT Overview

The parallel INSERT feature significantly improves the performance of individual INSERT statements adding to the innovative and powerful technology of BLU Acceleration. In a partitioned database system (DPF), there is already some parallelization at the database partition level even with the feature disabled. The parallel INSERT feature additionally parallelizes the INSERT within each database partition when Intra-Query parallelism is configured.

In prior versions of DB2, and when the parallel INSERT feature is disabled, the query portion of the INSERT statement is parallelized but the INSERT itself is executed serially. With this feature enabled, the INSERT portion of the query uses multiple threads to insert the data, reducing the elapsed time of the statement. This parallel processing is extended to associated operations including compression, locking, and logging. The degree of parallelism is automatically chosen as appropriate by the optimizer.

INSERT parallelization applies to both regular column oriented tables and NOT LOGGED columnar Declared Global Temporary Tables (DGTTs). The source of the INSERT may be row oriented or column oriented tables. Only column oriented tables as targets are parallelized.
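As an illustrative sketch (the table, schema, and column names here are hypothetical, and the exact DGTT clauses should be confirmed against the DECLARE GLOBAL TEMPORARY TABLE syntax), a NOT LOGGED columnar DGTT target might look like:

```sql
-- Hypothetical NOT LOGGED, column-organized DGTT
DECLARE GLOBAL TEMPORARY TABLE SESSION.STAGE_SALES
  (SALE_ID BIGINT, AMOUNT DECIMAL(12,2))
  ON COMMIT PRESERVE ROWS
  NOT LOGGED
  ORGANIZE BY COLUMN;

-- A bulk INSERT into the columnar DGTT is eligible for parallelization
INSERT INTO SESSION.STAGE_SALES
  SELECT SALE_ID, AMOUNT FROM MYSCHEMA.SALES;
```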

The degree of parallelism chosen for the INSERT by the optimizer is recorded as the DEGREE argument of the RETURN operator in the EXPLAIN_ARGUMENT table.

DEGREE (INTEGER): If the RETURN operator represents the return from column-organized data processing of the insertion, update, or deletion of rows, the DEGREE argument indicates the number of column-organized processing subagents that are used to process the insert, update, or delete operations in parallel.
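After explaining the INSERT statement, the DEGREE argument can be read with a query along these lines (a simplified sketch: it assumes the default explain tables exist in the current schema, and the join is reduced to the requester, timestamp, and operator ID columns):

```sql
-- Read the DEGREE argument of the RETURN operator
-- for the most recently explained statement.
SELECT O.OPERATOR_ID,
       A.ARGUMENT_TYPE,
       A.ARGUMENT_VALUE
FROM   EXPLAIN_OPERATOR O
JOIN   EXPLAIN_ARGUMENT A
  ON   A.EXPLAIN_REQUESTER = O.EXPLAIN_REQUESTER
 AND   A.EXPLAIN_TIME      = O.EXPLAIN_TIME
 AND   A.OPERATOR_ID       = O.OPERATOR_ID
WHERE  O.OPERATOR_TYPE = 'RETURN'
  AND  A.ARGUMENT_TYPE = 'DEGREE'
ORDER BY O.EXPLAIN_TIME DESC;
```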

Scenarios where parallel INSERT may not happen

Parallelism is exploited for large bulk INSERTs. The reason is that, in this implementation, each parallel insert agent puts data into its own pages. Small amounts of data going into separate pages would fragment the table, so as a compromise between performance and wasted disk space, parallelism is not enabled for INSERT statements that insert only a few rows.

INSERT from a VALUES clause is a common scenario that falls into this category. Given that the VALUES clause of a single statement typically contains a small number of rows, INSERT from VALUES will almost never be parallelized. The INGEST command is similar: under the covers it is sent as multiple individual INSERT ... VALUES statements, so it is highly unlikely to be parallelized within a database partition.

Even for a bulk INSERT from regular source tables, DB2 may not know how many rows the source subquery will produce. The DB2 optimizer decides whether the INSERT should be parallelized and to what degree. If it estimates that the number of rows to be inserted is not large enough, it will not parallelize the INSERT portion of the statement.

This is particularly relevant when the source tables do not have statistics collected, or with complex subqueries where the optimizer may underestimate the number of rows in the result of the subquery. It is therefore important to perform RUNSTATS on the source tables if statistics have not been collected, or if the tables have changed significantly since the last RUNSTATS. It may also be necessary to collect column group statistics as part of RUNSTATS if there are multiple predicates on a table (multiple local predicates, or multiple join predicates between two tables on columns that may be statistically correlated).
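For illustration (schema, table, and column names are hypothetical), basic statistics and column group statistics could be collected as follows:

```sql
-- Basic statistics with distributions on a hypothetical source table
RUNSTATS ON TABLE MYSCHEMA.SALES WITH DISTRIBUTION;

-- Column group statistics for columns that may be statistically
-- correlated (e.g. multiple predicates on CITY and COUNTRY)
RUNSTATS ON TABLE MYSCHEMA.SALES
  ON ALL COLUMNS AND COLUMNS ((CITY, COUNTRY))
  WITH DISTRIBUTION;
```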

Scenarios where parallel INSERT is NOT supported

1. Parallel INSERT into row oriented tables is not supported. The source of the INSERT can be either row or column oriented tables.
2. IMPORT is not parallelized as this uses one INSERT statement per row.
3. Compound statements that have nested INSERTs are not parallelized. The MERGE statement and other statements using the data-change-table-reference clause are also not parallelized.
4. A transaction that does INSERT after an uncommitted DELETE or UPDATE requires a COMMIT before the INSERT in order to be parallelized.
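Item 4 above can be sketched as follows (table names are hypothetical):

```sql
DELETE FROM MYSCHEMA.TARGET WHERE STALE = 1;

-- Without this COMMIT, the following INSERT in the same
-- transaction would not be parallelized.
COMMIT;

INSERT INTO MYSCHEMA.TARGET
  SELECT * FROM MYSCHEMA.SOURCE;   -- eligible for parallel INSERT
```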


A sequence of operations with LOCK TABLE, where the existence of a modifying statement (a DELETE) prior to the INSERT prevents parallelism:

1) LOCK TABLE X IN EXCLUSIVE MODE
2) DELETE FROM X
3) INSERT INTO X SELECT * FROM Y [Not Parallelized]


Document Information

Modified date:
17 June 2018