DB2 for OS/390 and z/OS V7 Load Resume and SQL INSERT Performance

The rise of e-business applications has made the performance of mass inserts even more critical. This report documents the results of different ways of adding data to a table in DB2 for OS/390 and z/OS: SQL INSERT, online load resume, and offline load resume. The costs and benefits of each method are presented, along with recommendations for improving performance with whichever method you use.


Mai Nguyen, DB2 Performance, IBM

Mai Nguyen works in the DB2 Performance department at IBM's Silicon Valley Lab.



Akira Shibamiya, DB2 Performance, IBM

Akira Shibamiya works in the DB2 Performance department at IBM's Silicon Valley Lab.



01 March 2002

Introduction

The performance of insert operations is of increasing concern for DB2® customers that require high volumes of data insertions, such as telecommunication companies, brokerage and credit card companies, and enterprise resource planning package users. The Internet is driving these insert rates even higher. Often, the question of how many rows can be inserted per unit of time becomes a critical factor in deciding which database management system product to use. The information provided in this report is intended to make it easier to understand how insert (used generically as a term to mean adding rows to a table) operations perform under different conditions in DB2 for OS/390® and z/OS.

For this report, we measured the performance of load resume and SQL INSERT on a partitioned table, both serially and in parallel by partition. For the load cases, we measured both offline load and the new online load facility introduced in Version 7. We also measured the impact of indexes by measuring the following cases:

  • One partitioning index.
  • One partitioning and one nonpartitioning index.
  • One partitioning and two nonpartitioning indexes.

Overview of online LOAD

The LOAD utility is used to insert bulk data into DB2 tables. In releases before Version 7, the LOAD utility drained all access to the data, allowing no concurrent transactions to run. Version 7 introduced an online version of load resume, which allows user transactions to run while data is being loaded. Online load resume acts much like SQL INSERT, and includes logging, page or row locking, index building, duplicate-key checking, trigger processing, and referential constraint checking.

To activate an online load, use the SHRLEVEL CHANGE option.
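As a sketch, an online load resume invocation might look like the following utility control statement; the database and table names here are hypothetical, and the input data set is identified by the SYSREC DD in the utility JCL:

```
LOAD DATA RESUME YES SHRLEVEL CHANGE
  INTO TABLE MYDB.MYTABLE
```

With SHRLEVEL NONE (the default), LOAD behaves as the traditional offline utility and drains access to the target objects.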

Compatibility with other processes

With online load, read and write access is allowed. However, online load cannot run on the same target object as Online REORG SHRLEVEL CHANGE. But it can run concurrently with the following utilities:

  • COPY SHRLEVEL CHANGE
  • MERGECOPY
  • RECOVER ERROR RANGE
  • CONCURRENT COPY SHRLEVEL REFERENCE
  • RUNSTATS SHRLEVEL CHANGE

Online load resume cannot run against a table space that is in COPY-pending, CHECK-pending, or RECOVER-pending status. It cannot run against an index space that is in CHECK-pending or RECOVER-pending status.
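Before starting an online load resume, you can check the target objects for restrictive states with the DISPLAY DATABASE command; the database name below is hypothetical:

```
-DISPLAY DATABASE(MYDB) SPACENAM(*) RESTRICT
```

If the table space or index space appears in the output with a pending state such as COPY, CHECK, or RECOVER, resolve that state before running the utility.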


Measurement cases

Details of the measurement environment can be found in Appendix A, and the DDL and JCL we used are in the ddljcl.txt file available in the Download section. We measured the results of adding rows to a 10-partition table that had already been populated with 8 million rows of data. Each 118-byte row had 26 columns of CHAR, INTEGER, and DECIMAL data; the initial rows were loaded into this table using LOAD REPLACE LOG(NO).

We used different methods to add 2 million additional rows of data to the table, including SQL INSERT, offline load resume, and online load resume. (See Appendix B for details of data and index sizes before and after we added the additional rows.) We varied the cases by adding additional indexes and by using both sequential and parallel processing for the loads or inserts.



Sequential results

The variables in the sequential cases were the number of indexes and the use of SORTKEYS for offline load.

Sequential measurements: one partitioning index

The results shown in Table 1 indicate that online load takes 3.4 times longer in CPU time and 2.4 times longer in elapsed time than offline load. The difference comes from the "page-level" processing used by offline load, in which the number of log records written and the number of page updates correspond almost one to one with the number of pages processed.

Table 1. CPU and elapsed time summary with one index

                   Sequential INSERT   Online LOAD RESUME   Offline LOAD RESUME
                   2M rows             2M rows              2M rows
  CPU              134 sec             85 sec               25 sec
  ELAPSED TIME     162 sec             107 sec              45 sec

Figure 1 shows these results graphically.

Figure 1. Graphical results of summary data with one index

In contrast, both INSERT and online load use "row-level" processing, in which each data record has one corresponding log record and each index entry has a corresponding log record. Add about 10% for occasional extra log records produced for space map page updates and similar activity. For example, if you have 2 million rows to insert, the number of log records can be estimated as follows:

2 million rows*(1 for data + 1 for index)*1.1 = 4.4 million log records

The number of page updates also corresponds to one per row or index entry rather than one per page as in the case of offline load. Table 2 shows the detailed statistics in which this activity is recorded.

Table 2. Detailed statistics for one-index measurement case

                            Sequential INSERT   Online LOAD RESUME   Offline LOAD RESUME
                            2M rows             2M rows              2M rows
  NUMBER OF COMMITS         200                 637                  65
  LOG RECS WRITTEN          4,457,660           4,460,697            59,310
  BP0 - DB2 catalog
    GET PAGE REQUESTS       7                   2,589                265
    PAGES UPDATED           0                   1,275                135
  BP1 - data
    GET PAGE REQUESTS       59,270              60,126               58,997
    PAGES UPDATED           2,176,470           2,176,461            58,904
  BP2 - partitioning index
    GET PAGE REQUESTS       129,520             155,752              40,079
    PAGES UPDATED           2,078,010           2,078,010            52,989

However, Table 1 and Figure 1 both show that online load resume is 36% faster in CPU time and 34% faster in elapsed time than SQL INSERT. This is because online load avoids application program interface overhead by invoking the insert function directly and internally, without issuing SQL calls.

In SQL INSERT, the commit interval is 10,000 records. In online load resume, the commit interval is dynamic. There are more getpage requests and page updates to the catalog with online load resume because of the updates to the SYSUTIL table.

Sequential measurements: one partitioning index and one nonpartitioning index

The results shown in Table 3 indicate that online load takes 3 times longer in CPU time and 4.6 times longer in elapsed time than offline load with SORTKEYS.

Table 3. CPU and elapsed time summary with two indexes

                   INSERT    Online LOAD RESUME   Offline LOAD RESUME   Offline LOAD RESUME
                   2M rows   2M rows              2M rows               2M rows with SORTKEYS
  CPU              233 sec   172 sec              58 sec                57 sec
  ELAPSED TIME     317 sec   252 sec              84 sec                55 sec

Figure 2 shows these results graphically.

Figure 2. Graphical results of summary data with two indexes and with SORTKEYS

As in the previous single-index case, the difference comes from the "page-level" processing used by offline load in contrast to the "row-level" processing used by both online load and SQL INSERT, as clearly indicated by the huge difference in the number of page updates and in the number of log records written, as shown in Table 4. Approximately 6.6 million log records are written for online load and INSERT, which can be estimated as follows:

2 million * (1 for data + 1 for partitioning index + 1 for nonpartitioning index)*1.1.

In addition, there is another big difference when a nonpartitioning index is present: the nonpartitioning index getpage count is about 40 times higher for INSERT and online load because of a random index probe for each row inserted. In offline load, index entries are first sorted in index key sequence before the index build, resulting in far fewer index getpages, as well as fewer synchronous reads.

Table 4. Detailed statistics for two-index measurement case

                            INSERT      Online LOAD   Offline LOAD   Offline LOAD RESUME
                            2M rows     RESUME        RESUME         2M rows
                                        2M rows       2M rows        with SORTKEYS
  NUMBER OF COMMITS         200         637           72             42
  LOG RECS WRITTEN          6,529,874   6,532,314     59,362         59,280
  BP0 - DB2 catalog
    GET PAGE REQUESTS       7           3,226         311            306
    PAGES UPDATED           0           658           119            122
  BP1 - data
    GET PAGE REQUESTS       59,270      60,126        58,997         58,997
    PAGES UPDATED           2,176,470   2,176,461     58,904         58,904
  BP2 - partitioning index
    GET PAGE REQUESTS       142,420     142,852       40,079         40,079
    PAGES UPDATED           2,078,010   2,078,010     52,989         52,989
  BP3 - nonpartitioning indexes
    GET PAGE REQUESTS       6,038,823   6,041,995     154,759        154,759
    PAGES UPDATED           2,056,180   2,056,180     2,064,216      2,064,216
    SYNCHRONOUS READS       8,043       8,043         48             39
    PAGES WRITTEN           81,766      81,336        16,149         16,141

Compared to SQL INSERT, online load resume with two indexes present is 26% faster in CPU time and 20% faster in elapsed time. The percentage of improvement is smaller than in the one-index case (which had improvements of 36% and 34%, respectively). This lower percentage of improvement is because the additional index requires more CPU and elapsed time, which partly offsets the improvement gained by avoiding the application programming interface required by INSERT.

Let's look at offline load with and without the SORTKEYS option. Using SORTKEYS results in a 35% reduction in elapsed time, because SORTKEYS sorts and builds the indexes in parallel and avoids DASD I/Os to the SYSUT1 and SORTOUT work files. If you are using offline load, always use SORTKEYS when there is more than one index.
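As a hedged sketch, an offline load resume with SORTKEYS might be coded as follows; the table name is hypothetical, and the SORTKEYS value is an estimate of the number of index keys to be sorted (here, 2 million rows times two indexes):

```
LOAD DATA RESUME YES LOG YES SORTKEYS 4000000
  INTO TABLE MYDB.MYTABLE
```

The key-count estimate guides sort work allocation; it does not have to be exact.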

Now let's compare the getpage and update requests to the catalog for online load resume and SQL INSERT. There are significantly more getpage requests and page updates with online load resume, because the utility updates the SYSUTIL catalog table.

Sequential results: one partitioning index and two nonpartitioning indexes

The results shown in Table 5 indicate that online load takes 2.8 times longer in CPU time and 6.2 times longer in elapsed time than offline load with SORTKEYS.

Table 5. CPU and elapsed time summary with three indexes

                   INSERT    Online LOAD RESUME   Offline LOAD RESUME   Offline LOAD RESUME
                   2M rows   2M rows              2M rows               2M rows with SORTKEYS
  CPU              309 sec   249 sec              86 sec                86 sec
  ELAPSED TIME     420 sec   356 sec              113 sec               57 sec

Figure 3 shows these results graphically.

Figure 3. Graphical results of summary data with three indexes and with SORTKEYS

As in the previous cases, the difference comes from the "page-level" processing used by offline load in contrast to the "row-level" processing used by both online load and SQL INSERT, as clearly indicated by the huge difference in the number of page updates and in the number of log records written, as shown in Table 6. Approximately 8.8 million log records are written for online load and INSERT, which can be estimated as follows:

2 million * (1 for data + 1 for partitioning index + 2 for nonpartitioning indexes)*1.1

In addition, there is another big difference when a nonpartitioning index is present: the nonpartitioning index getpage count is about 40 times higher for INSERT and online load because of a random index probe for each row inserted. In offline load, index entries are first sorted in index key sequence before the index build, resulting in far fewer index getpages, as well as fewer synchronous reads.

Compared to SQL INSERT, online load resume with three indexes present is 19% faster in CPU time and 15% faster in elapsed time. The percentage of improvement is smaller than in the one-index case (36% and 34%) and the two-index case (26% and 20%). This lower percentage of improvement is because each additional index requires more CPU and elapsed time, which partly offsets the improvement gained by avoiding the application programming interface required by INSERT.

Table 6. Detailed statistics for three-index measurement case

                            INSERT      Online LOAD   Offline LOAD   Offline LOAD RESUME
                            2M rows     RESUME        RESUME         2M rows
                                        2M rows       2M rows        with SORTKEYS
  NUMBER OF COMMITS         200         637           77             42
  LOG RECS WRITTEN          8,602,100   8,604,562     59,432         59,313
  BP0 - DB2 catalog
    GET PAGE REQUESTS       7           3,226         311            306
    PAGES UPDATED           0           660           121            125
  BP1 - data
    GET PAGE REQUESTS       59,270      60,126        58,997         58,997
    PAGES UPDATED           2,176,470   2,176,461     58,904         58,904
  BP2 - partitioning index
    GET PAGE REQUESTS       142,420     142,852       40,079         40,079
    PAGES UPDATED           2,078,010   2,078,010     52,989         52,989
  BP3 - nonpartitioning indexes
    GET PAGE REQUESTS       12,077,625  12,084,257    324,250        324,251
    PAGES UPDATED           4,112,370   4,112,370     4,128,003      4,128,444
    SYNCHRONOUS READS       16,089      16,089        90             67
    PAGES WRITTEN           1,138.7K    1,461.4K      1,191.8K       1,244.1K

Comparing offline load resume with and without the SORTKEYS option: SORTKEYS, which sorts and builds the indexes in parallel and avoids DASD I/Os to the SYSUT1 and SORTOUT work files, leads to a 50% reduction in elapsed time, larger than the 35% improvement in the two-index case. The SORTKEYS option is always recommended when there is more than one index, and the percentage improvement is expected to grow as more indexes are involved.

In SQL INSERT, the commit interval is 10,000 records. In online load resume, the commit interval is dynamic. There are more getpage requests and page updates to the catalog with online load resume because of the updates to the SYSUTIL catalog table.


Parallel results

For the parallel cases, we used one nonpartitioning index and one partitioning index. The variable was increasing the amount of buffer pool resources for the buffer pool used by the nonpartitioning index (BP3).

Parallel results: One partitioning index and one nonpartitioning index (BP3=5000)

The results shown in Table 7 indicate that parallel processing for insert or load resulted in higher elapsed time.

Table 7. CPU and elapsed time summary with parallel processing and 5000 buffers for nonpartitioning index

                        Sequential   Online LOAD   INSERT 2M rows   Parallel partition online
                        INSERT       RESUME        in 10 parallel   LOAD RESUME 2M rows
                        2M rows      2M rows       jobs             into 10 partitions
  CPU                   241 sec      171 sec       268 sec          196 sec
  ELAPSED TIME          326 sec      256 sec       548 sec          916 sec
  TOTAL CLASS 3 TIME    63 sec       81 sec        4,511 sec        8,118 sec

Figure 4 shows the results graphically.

Figure 4. Graphical results of summary data for parallel processing with two indexes and BP3=5000

When we looked at the detailed statistics in Table 8, it became obvious that the elapsed time increase occurred because of I/O contention as well as internal latch contention, especially for the nonpartitioning index.

Table 8. Detailed statistics for parallel processing with 5000 buffers for nonpartitioning index

                                INSERT      Online LOAD   INSERT 2M rows   Parallel partition online
                                2M rows     RESUME        in 10 parallel   LOAD RESUME 2M rows
                                            2M rows       jobs             into 10 partitions
  NUMBER OF COMMITS             200         637           n/a              n/a
  BP1 - data
    GET PAGE REQUESTS           59,270      60,126        n/a              n/a
    PAGES UPDATED               2,176,470   2,176,461     n/a              n/a
    PAGES WRITTEN               1,966.7K    1,671.9K      n/a              n/a
    VERT.DEF.WRITE THRESHOLD    25,023      21,494        n/a              n/a
  BP2 - partitioning index
    GET PAGE REQUESTS           142,420     142,852       n/a              n/a
    PAGES UPDATED               2,078,010   2,078,010     n/a              n/a
    ASYNCHRONOUS WRITES         15,481      13,745        n/a              n/a
    PAGES WRITTEN               445.1K      379.3K        n/a              n/a
    VERT.DEF.WRITE THRESHOLD    5,800       5,000         n/a              n/a
  BP3 - nonpartitioning index
    GET PAGE REQUESTS           6,038,823   6,041,995     n/a              n/a
    PAGES UPDATED               2,056,180   2,056,180     n/a              n/a
    SYNCHRONOUS READS           8,043       8,043         n/a              n/a
    ASYNCHRONOUS WRITES         10,581      10,634        n/a              n/a
    PAGES WRITTEN               82,799      82,008        n/a              n/a
    HORIZ.DEF.WRITE THRESHOLD   0           0             n/a              n/a
    VERT.DEF.WRITE THRESHOLD    221         220           n/a              n/a

Parallel results: One partitioning index and one nonpartitioning index (BP3=40000)

Because of the contention on the nonpartitioning index, we increased the buffer resources devoted to that index to 40000 buffers. Table 9 shows that increasing the buffer pool size to 40000 resulted in a significant drop in elapsed time, bringing it below the elapsed time for sequential SQL INSERT or online load resume.

Table 9. CPU and elapsed time summary for parallel processing with two indexes and BP3=40000

                        Sequential   Online LOAD   INSERT 2M rows   Parallel partition online
                        INSERT       RESUME        in 10 parallel   LOAD RESUME 2M rows
                        2M rows      2M rows       jobs             into 10 partitions
  CPU                   241 sec      171 sec       261 sec          217 sec
  ELAPSED TIME          326 sec      256 sec       225 sec          229 sec
  TOTAL CLASS 3 TIME    63 sec       81 sec        4,511 sec        1,680 sec

Figure 5 shows the results graphically.

Figure 5. Parallel results when nonpartitioning index buffers are increased to 40000

Summary and recommendations

We have described the performance characteristics of three methods of inserting rows. Which method you should use is determined by weighing the advantages and disadvantages of each against your application requirements.

The use of SQL INSERT has the following major advantages:

  • Higher data availability and concurrency through the use of page or row locking instead of table space or partition locking.
  • More efficient processing for a small number of rows (for example, fewer than 1000) by avoiding the initialization and termination overhead associated with the LOAD utility.
  • Free space exploitation, taking advantage of available space nearby to achieve better space utilization and better clustering index key sequence.
  • More functional capability, such as trigger support.

The use of the LOAD utility has the following major advantages:

  • Reduced CPU, I/O, and elapsed time by processing a set of records instead of one record at a time.
  • LOG NO option to avoid log write I/O and log latch contention problems.
  • Optional statistics collection and image copy while loading.
  • Easier operation.

Online load resume, which was introduced in V7 of DB2 for OS/390 and z/OS, can provide a significant performance advantage over INSERT without sacrificing concurrency. It does this by internally invoking the DB2 insert function, avoiding the overhead of the application program interface associated with each INSERT while preserving other features of INSERT, such as row or page locking. In our measurements, online load reduced both CPU time and elapsed time compared with INSERT. The fewer the indexes, the greater the percentage of improvement, because each additional index adds to the base cost of processing.

Offline load resume is still faster than online load resume, by 2 to 4 times in the cases measured here, because of the use of more efficient "page-level" processing rather than "row-level" processing. However, one big advantage of online load resume is that a target table space or partition is still available for read/write access by other concurrently running threads.

In order to achieve the best performance out of any of the three methods discussed in this paper, consider the following recommendations:

  • For offline load resume, use the SORTKEYS option for up to 50% faster elapsed time when there is more than one index.
  • For INSERT or online load resume:
    • Provide sufficient buffers for nonpartitioning indexes or nonclustering indexes to minimize random DASD I/Os. This is especially important for a large table with large indexes.
    • Use fast DASD devices, such as ESS Shark, and I/O striping, if necessary, to avoid a log I/O bottleneck.
    • To minimize data I/O, use zero FREESPACE to force INSERTs to the end of the data set. This can eliminate read I/Os and ensure optimal deferred write I/Os. However, this can result in additional overhead in clustering index access and DASD space usage.
    • For variable length records, consider the use of segmented tablespace for more efficient insert operations.
  • For optimal performance in parallel operations, whether parallel insert or parallel load, sufficient resources, such as database buffer pools, ESS DASD with Parallel Access Volumes, nonpartitioning index pieces, and multiple processors with sufficient power, need to be allocated. If there is a shortage of such resources, parallelism can result in longer elapsed time than sequential processing.
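For the zero-FREESPACE recommendation above, free space can be removed at the table space level with DDL like the following sketch (the database and table space names are hypothetical; changed free-space attributes take full effect when the data is next reorganized or reloaded):

```
ALTER TABLESPACE MYDB.MYTS
  PCTFREE 0
  FREEPAGE 0;
```

As noted, this forces inserts toward the end of the data set, trading some clustering and DASD space for fewer read I/Os.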

Appendix A. The measurement environment

The measurements were performed under the following hardware and software environment:

  • 3-way G6 processor model 9672-ZZ7 with 1792 megabytes of real storage and 6080 megabytes of expanded storage.
  • ESS E20 Shark DASDs.
  • DB2 V7 and OS/390 release 2.7.
  • Four buffer pools:
    • BP0 has 2000 buffers for DB2 catalogs and directories.
    • BP1 has 5000 buffers for data.
    • BP2 has 5000 buffers for partitioning indexes.
    • BP3 has 5000 buffers for nonpartitioning indexes.

Measurement cases:

  • Eight million rows of data (each 118-byte row had 26 columns of CHAR, INTEGER, and DECIMAL) were inserted into these tables using LOAD REPLACE LOG(NO):
    • Partitioned table with 10 partitions and one partitioning index.
    • Partitioned table with 10 partitions and two indexes, one partitioning index and one nonpartitioning index.
    • Partitioned table with 10 partitions and three indexes, one partitioning index and two nonpartitioning indexes.

Data configuration:

  • DB2 catalogs and directories, all partitioned data, partitioning indexes, nonpartitioning indexes, and physical work files resided on separate volumes:
    • Ten separate volumes for the ten data partitions.
    • Ten separate volumes for the ten partitions of the partitioning index, and two volumes for the two nonpartitioning indexes.
    • All partitioned data, all indexes, DB2 catalogs, DB2 directories, and physical work files for DFSORT, SYSUT1, SORTOUT, and INFILE shared eight channel paths.
    • For parallel cases, ten SYSREC files resided on ten separate volumes sharing the same eight channel paths as DB2, work files, partitioned data, and indexes.
    • The original input data was a BSAM file of 10 million rows sorted in ascending order. The file was divided into two data files of 8 million rows and 2 million rows.
  • The first data file of 8 million rows contained the first 800,000 rows of each partition. This file was used as input for LOAD REPLACE.
    • Partition 1 contains keys from 1 to 799,999
    • Partition 2 contains keys from 1,000,001 to 1,799,999
    • Partition 3 contains keys from 2,000,001 to 2,799,999
    • Partition 4 contains keys from 3,000,001 to 3,799,999
    • Partition 5 contains keys from 4,000,001 to 4,799,999
    • Partition 6 contains keys from 5,000,001 to 5,799,999
    • Partition 7 contains keys from 6,000,001 to 6,799,999
    • Partition 8 contains keys from 7,000,001 to 7,799,999
    • Partition 9 contains keys from 8,000,001 to 8,799,999
    • Partition 10 contains keys from 9,000,001 to 9,799,999
  • The second file of 2 million rows contained the last 200,000 rows of each partition. This input file was used for LOAD RESUME(YES) or SQL insert program.
    • Partition 1 contains keys from 800,000 to 1,000,000
    • Partition 2 contains keys from 1,800,000 to 2,000,000
    • Partition 3 contains keys from 2,800,000 to 3,000,000
    • Partition 4 contains keys from 3,800,000 to 4,000,000
    • Partition 5 contains keys from 4,800,000 to 5,000,000
    • Partition 6 contains keys from 5,800,000 to 6,000,000
    • Partition 7 contains keys from 6,800,000 to 7,000,000
    • Partition 8 contains keys from 7,800,000 to 8,000,000
    • Partition 9 contains keys from 8,800,000 to 9,000,000
    • Partition 10 contains keys from 9,800,000 to 10,000,000

Reports:

  • JES output
  • Accounting class 1,2,3 trace level long
  • Statistics class 1,2,3 details

Appendix B: Data and index sizes before data is inserted or loaded

Before insert or load:

Data:
Number of records: 8,000,000 records
Number of pages: 235,340 pages
Partitioning index
Number of records: 8,000,000 records
Number of pages: 51,620 pages
Cluster ratio: 100
Nonpartitioning index 1:
Number of records: 8,000,000 records
Number of pages: 9,928 pages
Cluster ratio: 80
Nonpartitioning index 2:
Number of records: 8,000,000 records
Number of pages: 9,940 pages
Cluster ratio: 80

After insert:

Data:
Number of records: 10,000,000 records
Number of pages: 298,800 pages
Partitioning index:
Number of records: 10,000,000 records
Number of pages: 64,520 pages
Cluster ratio: 100
Nonpartitioning index 1:
Number of records: 10,000,000 records
Number of pages: 17,927 pages
Cluster ratio: 80
Nonpartitioning index 2:
Number of records: 10,000,000 records
Number of pages: 17,939 pages
Cluster ratio: 80

After online load resume:

Data:
Number of records: 10,000,000 records
Number of pages: 298,800 pages
Partitioning index
Number of records: 10,000,000 records
Number of pages: 64,520 pages
Cluster ratio: 100
Nonpartitioning index 1:
Number of records: 10,000,000 records
Number of pages: 17,927 pages
Cluster ratio: 80
Nonpartitioning index 2:
Number of records: 10,000,000 records
Number of pages: 17,939 pages
Cluster ratio: 80

After offline load resume

Data:
Number of records: 10,000,000 records
Number of pages: 294,120 pages
Partitioning index
Number of records: 10,000,000 records
Number of pages: 64,520 pages
Cluster ratio: 100
Nonpartitioning index 1:
Number of records: 10,000,000 records
Number of pages: 17,927 pages
Cluster ratio: 80
Nonpartitioning index 2:
Number of records: 10,000,000 records
Number of pages: 17,939 pages
Cluster ratio: 80

Acknowledgments

The authors acknowledge Jerry Heglar for his management support in producing this report.


Download

  Description   Name         Size
  DDL and JCL   ddljcl.txt   40KB
