DB2 for z/OS: DB2 database design

Principles of peak performance

Many design decisions made early during the development process can have a significant influence on the performance of your DB2® applications and databases. This article provides some general guidelines and recommendations for achieving better DB2 performance in z/OS® environments.

Fred Whitlark (fwhitlar@us.ibm.com), DB2 Consultant, IBM

Fred has worked in the I/T industry for over 27 years, and just completed his 25th anniversary with IBM last November. He has held various technical and marketing positions during his IBM career, including Systems Engineer, Large Systems Specialist, and Software Sales. For the past eight years, he has focused on DB2, and has been performing services contracts for customers in both the public and private sectors during the last six years.



19 August 2004

Introduction

The purpose of this article is to provide IBM® business partners with important information about DB2® database performance in a DB2 Universal Database™ (UDB) for z/OS® (hereafter simply referred to as DB2) environment. This article attempts to consolidate material from multiple sources, and then present that information as efficiently as possible. It is not intended to be comprehensive in scope or exhaustive in detail.

My intent is to discuss the things that most frequently affect the performance of DB2 databases. Neither every possible situation nor every potential consideration can be anticipated, let alone covered, within the intended scope. I hope that this article provides general guidance to assist DB2 customers in achieving optimum performance for DB2 databases in their environments. This article is intended to be a good starting point for addressing database performance at any given installation.

The scope of this article is database design performance. DB2 performance consists of much more than that, and is certainly affected by many factors other than the design of the database. For instance, the coding logic of programs and the actual SQL statements used therein could be classified as application design. DB2 systems performance would include such factors as installation options, sizing buffer pools, dispatching priorities for DB2-related address spaces, and so on.

The focus of this article is the design of the DB2 databases. But sometimes the lines between these categories of performance factors become somewhat blurred. For instance, the sizing and choice of how many buffer pools to configure in an installation would generally be considered a systems performance factor. But the assignment of specific table spaces and indexes to those buffer pools could be viewed as a database design consideration.

I assume that the reader has a basic understanding of DB2 in a z/OS environment. The first few pages of this article discuss some basic concepts and principles of performance management in order to "level set." My recommendations tend to be somewhat general in nature, and I do not always spell out in great detail the technical descriptions or syntax. For more detailed information on these matters, you should check the most recent IBM documentation for the release of DB2 that is installed at the customer location.

The general design point for this document is DB2 for z/OS V7. Although DB2 for z/OS V8 has been announced and is generally available (GA), it will most likely be some months before the majority of IBM customers will implement DB2 V8 NFM (New Function Mode) for their production systems. There is another factor to consider here as well. Although every new release of DB2 undergoes extensive testing in both IBM and customer environments before being made generally available, customers will most likely have more confidence in general recommendations and Rules-of-Thumb based on earlier versions of DB2 (because of the length and breadth of actual experience to validate the conclusions) than for a version that is not yet in broad, general usage. I will mention a few features of DB2 V8 that can influence database design from a performance standpoint.

DISCLAIMER: The information contained in this document has not been submitted to any formal IBM test and is distributed AS IS. The use of this information or the implementation of any of these techniques is a user responsibility and depends on the user's ability to evaluate and integrate them into the customer's unique operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Users attempting to adapt these techniques to their own environments do so at their own risk.


Performance principles and methodology

When to consider performance

The time to consider the performance characteristics of applications and databases is during the initial design of those applications and databases, at the beginning of the development process. A reasonable estimate of the resources required for your DB2 applications and databases will help guide the user to make the appropriate design and implementation decisions early during the development process. If the performance of the applications accessing your databases is not addressed until later, then it will be more difficult and time-consuming to make the necessary modifications to obtain adequate response times and meet your batch windows.

What to focus upon

When designing for performance, it is wise to concentrate most of the effort on your important DB2 data and programs. When defining what applications or transactions constitute this important workload, one or more of the following characteristics will apply:

  • They represent a large percentage of the total business workload
  • They have a critical response time requirement
  • They include complex logic and/or data access requirements
  • They access a large amount of data
  • They consume large amounts of resources
  • They directly interface with the customer (via the Web, ATM, and so on), in contrast with applications that are internal to the company

Database design

The design of a database occurs in two stages:

  • Logical database design
  • Physical database design

The logical model of the database is simply a representation of all the data requirements of the user, put into a normalized form. This model is usually the output or the end result of data modeling sessions. This model is seldom actually implemented as is. Rather, it is an idealized view of the data, before considering how to actually structure and store the data in the DBMS.

The logical model is then converted into a physical model, with consideration given to the architecture of the database objects. It is at this stage of design that data access requirements and performance factors need to be considered in some detail. During this process of producing a physical design, the two key elements are table design and index design. Both of these topics are discussed in some detail below.

Approach to DB2 performance management

In order to ensure adequate performance for your DB2 applications, it only makes sense to be proactive rather than reactive. It is important to include performance factors at an early stage of designing DB2 databases. Then try to establish a performance baseline measurement that meets your service level agreement (SLA) as soon as possible during the project, so that you can track the performance characteristics and trends during staged rollouts and application changes. Continually monitor your DB2 system and applications in order to anticipate major problems before they fully develop.

Traditionally many customers do not worry about performance until the final stages of an application development project. But there is usually no good reason to wait. It is better to consider the performance features of database design as soon as the user interface and the processing logic have been specified. For example, the predicates of the SQL statements in your important DB2 work (see discussion above) should be a primary guide in creating the best indexes.

Even when you are able to develop an efficient initial design, it may be necessary to make modifications to your applications and/or databases as volumes increase, or as system resources become constrained. If an application is not performing adequately, it is generally more desirable to add more columns to an existing index, or to add new indexes to a table, as your first options. Changing the table design, modifying user requirements, or de-normalizing the tables are usually less attractive options.


Understanding DB2 performance

Rules-of-thumb

Rules of thumb (also known as ROTs) can be useful when planning, monitoring, and optimizing DB2 performance. ROTs are typically based upon previous experience (such as observed averages over time), or on simplifications of more complex formulas.

It is important to keep in mind that ROTs are useful for rough estimating, but not for detailed analysis. There is a danger in accepting ROTs as exact predictors of performance simply because they appear in some document. At best they are estimates, and at worst they may not be valid at all for your particular DB2 environment.

ROTs should be developed for your environment (or adjusted to suit the characteristics of your environment). They should be related to your actual experience, rather than being blindly accepted, so that you have more confidence in them. It may be useful to start with ROTs that have been used or developed outside your particular environment. But these should be verified or modified, after collecting, analyzing and documenting the appropriate data from your DB2 system. IBM Redbooks are a good source for ROTs, as are the recommendations that are often included within performance monitoring tools.

Another consideration is that ROTs may evolve over time. Advances in hardware technology, improved software coding, system architectural changes and the like may render ROTs less reliable or even invalid. The biggest factor in an ROT changing over time is probably new releases of DB2 itself.

DB2 workload

Disk I/Os are often the biggest factor influencing response time, but the underlying performance problems are easier to understand by looking at the GETPAGE (GP) requests. When monitoring DB2 activity and analyzing reports, the number of GETPAGEs is probably the best indicator of total DB2 work performed.

Much of an installation's DB2 work falls into fairly recognizable categories:

  • Transactions: These are programs running under the control of transaction managers such as CICS and IMS/TM. The SQL is usually simple, but the transaction volume is heavy. Transactions must offer the user excellent response time, so these applications should not be made to wait long for the resources they need. Often it is the first user to invoke a transaction that incurs the cost of I/Os for reading in the indexes and data pages. Subsequent users may often find some of these resources in the buffer pool.
  • Queries: These are programs that are normally executed for decision support. The SQL may be very complex, but the volumes are usually much less than for transactions. Query users can often wait for minutes or even hours, depending upon how much data must be searched in order to produce the result set requested by the user. Queries often involve scans of entire tables, and sorting the results is another common characteristic of this type of workload.
  • Batch and Utilities: Batch and utility programs typically deal with large amounts of data, often processing the data in a sequential manner. It is important that these programs finish their processing within given windows. The use of well-placed, frequent COMMITs is an important feature to include in these applications. Batch and utilities tend to be large consumers of all kinds of system resources, and when windows are tight, they can put considerable pressure on the rest of the system.

Normalization

Normalization is the formal process of analyzing the data entities needed for an application, and then converting them into a set of well-designed structures. The general design objectives of the logical data model are correctness, consistency, non-redundancy and simplicity. Furthermore, the tenets of relational theory require the database to be normalized.

There are a number of consecutively numbered rules, called normal forms, that define normalized data in considerable detail. It is not the purpose here to discuss these rules in detail. Most experts would suggest that designers strive to follow the first three rules, so that the data is said to be in third normal form.

To de-normalize a table means that you modify the normalized design by violating one or more of the normal forms to which the table had previously conformed. This de-normalization process is sometimes done for performance reasons. Further detailed information on normalization can be found in most books whose main subject is relational databases.

Types of DB2 table spaces

Within a defined DB2 database, the actual tables must be created within DB2 objects called table spaces. The user can define four different kinds of table spaces in DB2 (a short DDL sketch follows this list):

  • Simple: A simple table space may contain more than one DB2 table. The space is composed of pages, each of which may contain rows from any table defined in the table space.
  • Segmented: A segmented table space may contain more than one DB2 table. The space is composed of groups of pages, known as segments. Each segment contains rows from only one table defined in the table space.
  • Partitioned: A partitioned table space may contain only a single table. The space is divided into partitions based on the key range of a partitioning index. Each partition may be treated as an independent entity, allowing concurrent processing by SQL and DB2 utilities.
  • LOB: A LOB table space is for LOB (large object) data only. LOBs include three data types: BLOBs (binary large objects), CLOBs (character large objects) and DBCLOBs (double-byte character large objects).
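To make these distinctions concrete, here is a minimal DDL sketch. The database, table space, and table names are hypothetical, and the SEGSIZE and NUMPARTS values are illustrative only:

    -- Segmented: pages are grouped into 32-page segments,
    -- and each segment holds rows from only one table.
    CREATE TABLESPACE TSSEG
        IN DBTEST
        SEGSIZE 32;

    -- Partitioned: a single table, divided into 4 partitions.
    -- Under DB2 V7, the key ranges are supplied by the partitioning
    -- index (see the Partitioning indexes section below).
    CREATE TABLESPACE TSPART
        IN DBTEST
        NUMPARTS 4;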

Table space and table design considerations

Record size and page size

Fixed-length records are preferred over varying length records, because the DB2 code is optimized for processing fixed-length records. If a record is fixed length, it never needs to be moved from the page on which it is originally stored. However, a record of variable length can possibly be updated to a length that no longer fits on the original page, and so it must be moved to another page. Whenever that record is subsequently accessed, an additional page reference must occur. A new feature in DB2 UDB V8 allows you to ALTER the size of a column when required, so you no longer need to create varying length records when you are unsure of future data length growth.
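For example, under V8 a column can be widened in place when the data outgrows it. A minimal sketch, assuming a hypothetical ORDERS table whose STATUS column needs to grow from CHAR(10) to CHAR(20):

    -- DB2 V8 and later: widen the column in place, rather than
    -- having designed it as VARCHAR just in case (names hypothetical).
    ALTER TABLE ORDERS
        ALTER COLUMN STATUS
        SET DATA TYPE CHAR(20);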

The number of records that can fit on a page also merits some consideration. DB2 provides a number of options for page size (4 KB, 8 KB, 16 KB, and 32 KB). A good starting point is to select the default (4 KB), especially if row sizes are fairly small or when access to the data is primarily random. However, there are situations in which a larger page size needs to be considered. If the length of individual rows in a table is longer than 4 KB, then you are required to use a larger page size, since DB2 does not support spanned records.

In other situations the total length of a fixed record may be just slightly longer than one-half a 4 KB page, so then only one record can fit on a page. A similar situation applies to records that are just over one-third of a page, one-fourth of a page, and so on. Such designs not only waste DASD space, but also require higher resource consumption for many DB2 operations. So for records that fit this description, you should consider using a larger page size, so that relatively less space is wasted.

The other possible page sizes are 8 KB, 16 KB, and 32 KB. The page size is not directly specified on the CREATE TABLE statement. Rather, the page size for tables is determined by the page size of the buffer pool to which the table space containing the table is assigned. For more details, refer to the CREATE TABLESPACE statement in the DB2 SQL Reference manual.
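As a sketch (object names hypothetical), choosing an 8 KB page size is simply a matter of assigning the table space to an 8 KB buffer pool:

    -- The table space inherits its 8 KB page size from BP8K0;
    -- any table created in TSWIDE then uses 8 KB pages.
    CREATE TABLESPACE TSWIDE
        IN DBTEST
        SEGSIZE 32
        BUFFERPOOL BP8K0;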

De-normalization considerations

The logical data model is an ideal picture of the data. The physical data model is the real-world implementation of your data. Normalization focuses only on the meaning of the data, without consideration given to the possible performance requirements of the applications accessing the data. Fully normalized database designs may cause performance challenges.

The most common example of this performance issue is the join operation. Often, the result of the normalization process places related information into separate tables. Applications then may need to access data from these multiple tables. Relational databases provide the capability for the SQL statements to access information from more than one table through a joining of multiple tables. Depending upon the number of tables and their respective sizes, a join operation can be very resource intensive and time consuming.

As with so many things in I/T, there is a trade-off to consider. Is duplicating frequently requested columns in multiple tables more or less expensive than performing the table joins? After normalizing your data model as far as you intended during the logical database design process, consider some degree of de-normalization as one of the potential performance tuning options. If you do decide to de-normalize, be sure to document it thoroughly: describe, in some detail, the reasoning behind the de-normalization steps that you took.
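To illustrate the trade-off (all table and column names are hypothetical), compare the normalized join with a de-normalized design that carries the customer name redundantly in the order table:

    -- Normalized: CUST_NAME lives only in CUSTOMER, so every
    -- order inquiry pays for a two-table join.
    SELECT O.ORDER_NO, O.ORDER_DATE, C.CUST_NAME
        FROM ORDERS O
        INNER JOIN CUSTOMER C
            ON C.CUST_NO = O.CUST_NO
        WHERE O.ORDER_NO = ?;

    -- De-normalized: CUST_NAME is duplicated into ORDERS. The join
    -- disappears, at the cost of extra space and of keeping the
    -- duplicated column synchronized whenever a customer is renamed.
    SELECT ORDER_NO, ORDER_DATE, CUST_NAME
        FROM ORDERS
        WHERE ORDER_NO = ?;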

Designing large tables

Accessing very large DB2 tables can consume a correspondingly large amount of resources: CPU, memory, I/O. The two most important things that the user can do to improve performance when designing large tables are:

  • To implement partitioning
  • To create useful indexes

Both of these topics are discussed in more detail below.

Use segmented or partitioned table spaces

If the data includes LOBs, then the user must create LOB table spaces. For non-LOB data, the choice would normally be between segmented or partitioned table spaces, depending largely upon the amount of data to be stored, and to a lesser degree, the kinds of data access required by the associated application programs. Simple table spaces are rarely recommended any more.

Listed below are some of the performance advantages of segmented table spaces over simple table spaces:

  • For table spaces containing more than one table, when DB2 acquires a lock on a table, that lock does not affect access to the segments of other tables.
  • When DB2 scans a table, only the segments associated with that table are accessed. Furthermore, pages of empty segments are not fetched.
  • If a table is dropped, its segments become immediately available for reuse at the COMMIT point, without requiring the REORG utility to be executed.
  • If all rows in a table are deleted (known as a mass delete), all segments become immediately available for reuse at the COMMIT point, without requiring the REORG utility to be executed.
  • A mass delete operates much more efficiently and writes considerably less log information.
  • The COPY utility does not copy those pages that are empty due to a mass delete operation or a table being DROPped.

When tables reach a certain size, manageability and performance can be improved by implementing partitioned table spaces. If you anticipate this kind of growth, it is wise to define the table space as being partitioned when it is designed and created. Some of the potential advantages that partitioned table spaces offer are listed below:

  • Parallelism: You can take advantage of the three types of parallelism that are now being used by DB2 UDB. Query parallelism (multiple I/O paths) was introduced with DB2 V3. CP parallelism (multiple tasks on multiple CPs) was implemented in DB2 V4. Sysplex query parallelism (multiple tasks on multiple members of a DB2 data sharing group) was introduced in DB2 UDB V5. DB2 has evolved to significantly enhance the parallel processing capabilities of DB2 applications processing partitioned table spaces. For an incremental increase in CPU time, the elapsed time for these queries can be significantly reduced.
  • Working on part of the data: Partitioned table spaces allow a DB2 utility to process one partition of the data at a time, thus enabling concurrent access for other jobs or applications on the other partitions. In a similar manner, you can break up mass UPDATE, DELETE or INSERT operations into separate jobs. In addition to increasing availability, this technique provides the potential for decreased elapsed time for accomplishing this kind of DB2 work.
  • Faster access to frequently accessed data: If the partitioning index can separate the more frequently accessed rows from the rest of the table, then that data can be placed into a partition of its own and use a higher-speed DASD device.

In general, the larger the table, the more reason to create it as a partitioned table. But there are also some instances when it may be advantageous to create a partitioned table space for small tables. When lookup tables are used in joins with other large partitioned table spaces, you can maximize the parallelism in the join by partitioning the lookup tables as well.

A final consideration applies when the partitioning key is used in the join predicate: tables that will be joined on the partitioning key should have the same number of partitions and should break on the same values.

Data compression

DB2 provides the capability to compress the data in a table space or partition. This is accomplished by specifying the COMPRESS YES option in the CREATE TABLESPACE statement, and then executing either the LOAD or REORG utility on the table space. Data is compressed by replacing frequently occurring bit strings with shorter strings. A dictionary is built that contains information mapping the original bit strings and their replacements.
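A minimal sketch (names illustrative); note that the dictionary is not built, and no rows are compressed, until LOAD or REORG is run against the table space:

    CREATE TABLESPACE TSHIST
        IN DBTEST
        SEGSIZE 32
        COMPRESS YES;

    -- Utility step that builds the compression dictionary
    -- and compresses the existing rows:
    --   REORG TABLESPACE DBTEST.TSHIST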

A certain amount of CPU resource is used to compress the data before it is stored, and then to decompress the data when it is read from the external storage device. However, data compression can also provide performance benefits, since more data is stored into less space (both on DASD and in the buffer pools), which can result in fewer synchronous reads, less I/O, and so on, than for uncompressed data.

Here are some things to consider when trying to decide whether or not to compress a table space or partition:

  • Row Length: The larger the row is (especially as it approaches the size of a page), the less efficient compression might be. DB2 rows cannot span pages, and you may not achieve enough compression to fit more than one row on a page.
  • Table size: Compression tends to work better for larger table spaces. For very small tables, the size of the compression dictionary (8 KB to 64 KB) may offset any space savings that compression provides.
  • Patterns in the data: The frequency of recurring patterns in the data determines how effective compression will be for a particular table space or partition. Data with a high number of repeating strings can result in significant compression.
  • Estimating Compression: DB2 provides a stand-alone utility, DSN1COMP, which can be executed in order to determine how effective compressing the data will be. For additional information about running this utility, refer to the DB2 Utilities Guide and Reference manual.
  • Processing Costs: There is some amount of CPU resource consumed in compressing and decompressing DB2 data. The amount used is significantly lower if you use IBM's synchronous data compression hardware feature than if you use the DB2 software simulation. (When DB2 is started, it determines whether the hardware compression feature is available.)
  • Better Dictionary: When using the LOAD utility to build the compression dictionary, DB2 uses the first n rows loaded (where n is dependent upon how much your data can be compressed) to determine the contents of the dictionary. The REORG utility instead uses a sampling technique: it not only uses the first n rows, but also continues to sample rows during the rest of the UNLOAD phase of the utility execution. Therefore, REORG usually produces a dictionary that is more representative of the data in the entire table space or partition.

It is normally recommended to compress those DB2 table spaces and partitions in your environment that would benefit from compression, since the performance advantages of more data in less space almost invariably outweigh the CPU resource needed to compress and decompress the data.

Loading large tables

When dealing with large amounts of data, the initial loading of the data into the table can present some performance challenges. To achieve parallelism during the load, you can manually create multiple LOAD jobs, one for each partition; alternatively, you can load multiple partitions in parallel within a single LOAD utility job. Spreading the partitions across the I/O subsystem makes it easier to achieve optimal parallelism.

It is also important to specify the SORTKEYS parameter in the LOAD statements for maximum performance. This instructs DB2 to pass the index keys to the sort program in memory, rather than having the keys written to and read once again from sort work files on DASD. SORTKEYS also enables overlap between the loading and the sorting, since the sort is running as a separate task.

Additional recommendations for loading large tables are as follows (a combined sketch appears after this list):

  • LOAD one table at a time.
  • If possible, give a higher priority to the jobs that you anticipate will have the highest elapsed time.
  • Distribute the work across the sysplex.
  • Break up the secondary indexes into pieces in order to achieve parallelism (see discussion below entitled PIECESIZE).
  • Specify LOG NO (to prevent logging, and the considerable amount of resources that logging consumes) during the initial loading of the data, and then run an image COPY after successfully loading the data.
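Pulling several of these recommendations together, here is a hedged sketch of a LOAD control statement; the table name, DD names, and the SORTKEYS estimate are all hypothetical:

    -- LOG NO avoids logging each loaded row (run an image COPY
    -- afterwards); SORTKEYS passes an estimated 5,000,000 index
    -- keys to the sort in memory; the two partitions are loaded
    -- in parallel from separate input data sets.
    LOAD DATA LOG NO SORTKEYS 5000000
        INTO TABLE DBUSER.HIST PART 1 INDDN REC1
        INTO TABLE DBUSER.HIST PART 2 INDDN REC2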

Free space considerations

The main purpose of allocating free space is to keep the data rows in the same physical sequence as the clustering index, thus reducing the need to frequently reorganize the data. In addition, the better clustering of rows will result in faster read access and faster insertion of rows. However, over-allocating free space may result in wasted DASD space, less data transferred per I/O, less efficient use of buffer pools and more pages to scan.

The allocation of free space in table spaces and indexes is determined by the PCTFREE and FREEPAGE options of the CREATE or ALTER TABLESPACE and CREATE or ALTER INDEX statements.

PCTFREE indicates to DB2 what percentage of each page in the table space or index to leave free when either loading or reorganizing the data. DB2 will then use that free space when inserting new rows and index entries. If there is not enough free space to write the row or index entry on the right (that is, in clustering sequence) page, then DB2 must place the additional data on another page instead. As more and more records are located out of physical sequence, the performance can suffer.

FREEPAGE indicates to DB2 how often to leave a full page of free space when either loading or reorganizing the data. For example, if you specify FREEPAGE 5, DB2 will allocate a page of free space after filling five pages with data. FREEPAGE should be used if your table rows are larger than half a page, since you cannot INSERT a second row on a page in such circumstances.
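For example (values illustrative only), the following table space definition leaves 10 percent of each page free and a fully empty page after every five data pages:

    CREATE TABLESPACE TSORD
        IN DBTEST
        SEGSIZE 32
        PCTFREE 10
        FREEPAGE 5;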

Whether or not to define your table spaces with any free space, and how much to allocate, depends very much upon the INSERT characteristics (and, to a lesser degree, the DELETE activity) of the tables in the table space. In other words, how frequently are rows added to the table, and where in the table are they added? There are four broad categories:

  • Read-only tables: If there is never any update activity on a table, then it can be defined with no free space. Also, there should not be any need to run the REORG utility.
  • Random inserts: For tables with a relatively large number of existing rows and a relatively light level of INSERT activity, using the default PCTFREE (5 for table spaces and 10 for indexes) is a good starting point. Then use RUNSTATS to monitor the level of disorganization of the data, and in combination with your desired frequency of running REORG, adjust the PCTFREE up or down as necessary. For tables with heavy INSERT activity, you will probably need to use a PCTFREE that is somewhat higher than the defaults. For a table that is initially empty or contains very few rows (such as during the deployment of a new database), you may need to specify a very high PCTFREE and run REORG quite frequently for a while, until the table is well populated.
  • Inserts at the end of the table: If the rows for a table do not grow in length, then there is no need to allocate free space, since they are added to the end of the table. And since they are written in physical clustering sequence, there should be no need to REORG. But if a table contains updatable VARCHAR columns, or if the table is compressed, it is possible that the row length could grow, which could cause a row to be moved to another page. You can determine this by executing RUNSTATS on the table space and then checking the NEARINDREF and FARINDREF columns of the DB2 catalog table SYSIBM.SYSTABLEPART (a sample catalog query follows this list). If your table becomes disorganized, then specify a PCTFREE for the table space and continue to monitor the number of mislocated rows with RUNSTATS. Based on the data and trends you observe, adjust your REORG frequency and PCTFREE number accordingly. By specifying the INDREFLIMIT and REPORTONLY options in REORG TABLESPACE, you can monitor the amount and pace of disorganization in your updated DB2 tables.
  • Inserts in a hot spot: This is the situation in which tables have heavy insert activity that is concentrated in a location (or locations) other than at the very end of the table. This is probably the most difficult category to handle. Try increasing the PCTFREE value. If the inserts stay within the home segment and the rows are not very long, several rows can be stored in the same page. FREEPAGE may be another alternative to consider in this situation. It will be necessary to closely monitor how quickly the table becomes disorganized, so that REORGs can be run before the performance degrades significantly.
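To support the kind of monitoring described above, a simple catalog query (the database name is hypothetical) retrieves the mislocated-row counters after RUNSTATS has been run:

    -- NEARINDREF and FARINDREF count rows that have been relocated
    -- near to, or far from, their original (home) pages.
    SELECT DBNAME, TSNAME, PARTITION,
           NEARINDREF, FARINDREF
        FROM SYSIBM.SYSTABLEPART
        WHERE DBNAME = 'DBTEST';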

Index design considerations

An index is a DB2 object (a separate VSAM data set) composed of an ordered set of keys extracted from one or more columns of the corresponding table. Many DB2 experts claim that building the right indexes for a table may be the single most effective means of optimizing the performance of applications accessing that DB2 data. Years ago in I/T, the cost and space of DASD were more important considerations. As the technology has advanced, the trade-off of reducing I/Os by adding more indexes (or adding columns to existing indexes) at the expense of extra disk space has become more and more attractive. The primary performance benefits of indexes are:

  • To provide direct pointers to the requested data rows in the table
  • To eliminate a sort, if the required order of the result set matches the index
  • To avoid having to read the data rows, if the requested columns are all contained within the index entries

Partitioning indexes

When creating a partitioned table space in DB2 UDB V7, DB2 divides the data among the partitions according to the PART clause of the CREATE INDEX statement. That index becomes known as the partitioning index, and this method of partitioning is known as index-controlled partitioning. For partitioning indexes, it is recommended that you choose key columns that are unlikely to be changed. Updates to those columns may cause a row to move from one partition to another, resulting in degraded performance.

An important feature of DB2 V8 is table-controlled partitioning. Now, when creating partitioned tables, the partition boundaries are determined by the CREATE TABLE statement instead of CREATE INDEX. Under index-controlled partitioning, the concepts of partitioned table, partitioning index and clustering were all intertwined. With table-controlled partitioning, these three concepts are separate. This added flexibility allows you to consider more potential design alternatives, and therefore increases the likelihood of improving the performance of a DB2 database and its applications.
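A hedged sketch of V8 table-controlled partitioning; the table, table space, and boundary values are hypothetical:

    -- The partition boundaries now belong to the table itself;
    -- no partitioning index is needed to define them.
    CREATE TABLE DBUSER.TRANS
           (ACCT_NO    CHAR(8)        NOT NULL,
            TRAN_DATE  DATE           NOT NULL,
            AMOUNT     DECIMAL(11,2)  NOT NULL)
        IN DBTEST.TSPART
        PARTITION BY (ACCT_NO)
           (PARTITION 1 ENDING AT ('19999999'),
            PARTITION 2 ENDING AT ('39999999'),
            PARTITION 3 ENDING AT ('59999999'),
            PARTITION 4 ENDING AT ('99999999'));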

When to build indexes

The CREATE INDEX statement gives the user the ability to build the index immediately, or to defer it until a more convenient time. If you build the index immediately, the table space is scanned, which can take a considerable amount of time. By specifying DEFER, you postpone the creation of the index.

Whenever possible, create all indexes on a table before initially loading it, since the LOAD utility builds indexes more efficiently than the CREATE INDEX process. If you need to create an index on an already existing (and populated) table, use the DEFER clause. You can then later use the REBUILD INDEX utility, which like the LOAD utility, is a more efficient means of populating the index.
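A sketch of this approach for an already populated table (index and table names hypothetical):

    -- DEFER YES records the index definition without scanning the
    -- table; the index is left in a rebuild-pending state.
    CREATE INDEX DBUSER.XHIST2
        ON DBUSER.HIST (ACCT_NO, TRAN_DATE)
        DEFER YES;

    -- Utility step that then populates the index efficiently:
    --   REBUILD INDEX (DBUSER.XHIST2)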

PIECESIZE

A new feature was introduced with DB2 UDB V5 that gives you the flexibility to break up a non-partitioning index (NPI) into pieces, and to control the size of the multiple data sets which will comprise the index space. This use of pieces enables the index pages of an NPI to be spread out into multiple data sets.

The size of the pieces is determined by specifying the keyword PIECESIZE in the CREATE or ALTER INDEX statement. The value for PIECESIZE must be a power of two, and can range from a minimum of 256 KB to a maximum of 64 GB. The default value is 2 GB for regular table spaces and 4 GB for LARGE table spaces. If your NPI is likely to grow significantly, then choose a relatively larger value. Also keep the PIECESIZE value in mind when determining values for primary and secondary space allocations (the PRIQTY and SECQTY options of the CREATE INDEX statement).

Using this option improves the performance of scans of the NPI by facilitating parallelism. Another advantage is the reduction of I/O contention on read or update processing. By specifying a smaller PIECESIZE, you create more pieces and therefore have more control over the placement of the pieces. Placing the pieces on separate I/O paths reduces the contention of SQL operations that need to access the NPI.
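For example (names and size illustrative only), an NPI limited to 512 KB pieces, so that its pieces can be spread across separate I/O paths:

    CREATE INDEX DBUSER.XHIST3
        ON DBUSER.HIST (STATUS)
        PIECESIZE 512 K;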

The ideal index

By examining the SQL statements in an application, you can build a hypothetically perfect index; an example follows this list.

  1. First, include in the index all the columns in the WHERE clause, so that index screening can be used to reject unqualified rows from the result set. Place these columns at the beginning of the index. This would result in the maximum value for MATCHCOLS when doing an EXPLAIN on the SQL statement.
  2. Next, ensure that the index has these columns in the proper sequence (according to the ORDER BY clause), so that a sort can be avoided. This can be verified by checking all of the various SORT* columns of the PLAN_TABLE when doing an EXPLAIN.
  3. Finally, if possible, include all of the columns in the SELECT in the index, so that there is no need to access the table rows. The index entries can provide all of the requested data. This would show up in EXPLAIN as INDEXONLY = Y.
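As a hedged illustration (using the EMP sample table; the predicate choices are hypothetical), consider this query and an index that satisfies all three steps:

    SELECT LASTNAME, PHONENO
        FROM EMP
        WHERE WORKDEPT = ?
          AND JOB = ?
        ORDER BY LASTNAME;

    -- WORKDEPT and JOB come first (MATCHCOLS = 2), LASTNAME next
    -- (the sort is avoided), and PHONENO is carried along so that
    -- EXPLAIN shows INDEXONLY = Y.
    CREATE INDEX XEMP9
        ON EMP (WORKDEPT, JOB, LASTNAME, PHONENO);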

In many cases it will be too expensive, or impractical, or even impossible to implement such an ideal, because of the number of columns involved. There are architectural limits to the number of columns that can comprise an index, and for the total length of an index entry (although these limits allow for considerable index entry size and flexibility). Furthermore, there is the cost of index maintenance to consider. Building the ideal index may give a dramatic performance boost to query performance, but there will be a negative impact whenever there are SQL writes to the DB2 database (INSERT, UPDATE or DELETE). Therefore, you may often choose to implement indexes that include only the columns referred to in the WHERE and ORDER BY clauses.


Parallel processing considerations

Over the years DB2 has enabled improved performance in accessing data through the implementation of various methods of parallel processing. DB2 V3 introduced query I/O parallelism in order to improve the performance of data-intensive read-only queries. In this type of parallelism, DB2 fully utilizes the available I/O bandwidth made possible with partitioned table spaces. With this method DB2 initiates multiple concurrent I/O requests for a single query and performs parallel I/O processing on multiple data partitions. This typically results in a significant decrease in elapsed time for I/O bound queries, while incurring a minor increase in CPU time.

DB2 V4 introduced another parallelism technique, called query CP parallelism. This method extended parallel processing to process-intensive queries. With this method a single query causes DB2 to generate several tasks, which are executed in parallel to access the data. Partitioned table spaces show the greatest performance improvements for this type of parallelism.

DB2 UDB V5 extended parallel processing even further by introducing sysplex query parallelism. While query CP parallelism used multi-tasking for a query within a DB2 subsystem, this method enables all the members of a DB2 data-sharing group to process a single query. Both I/O-intensive and processor-intensive queries that are primarily read-only can benefit from this type of parallelism.

Enabling parallel access

Parallelism can be enabled at several levels in the DB2 environment. First of all, at the DB2 subsystem level, parallel access is controlled on installation panel DSNTIP4. The MAX DEGREE option on DSNTIP4 determines the maximum degree of parallelism (the maximum number of parallel tasks). The default is 0, which means that there is no upper limit on the degree of parallelism that DB2 may invoke. I recommend that you evaluate the virtual storage capacity and constraints in your z/OS environment, and adjust this parameter as needed, so that DB2 will not create more parallel tasks than your virtual storage can handle.

You can control whether or not DB2 exploits parallel processing by the DEGREE option of the BIND PLAN and BIND PACKAGE commands. Specifying DEGREE(1) prohibits parallel processing, while DEGREE(ANY) enables parallel processing. For further flexibility, dynamic SQL allows for changing this option within a plan or package by the SET CURRENT DEGREE statement, which controls the value in a special register.
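A brief sketch of the two control points (collection, package, and program names hypothetical):

    -- Static SQL: enable parallelism at bind time.
    --   BIND PACKAGE(COLL1) MEMBER(PROG1) DEGREE(ANY)

    -- Dynamic SQL: enable parallelism for subsequent statements.
    SET CURRENT DEGREE = 'ANY';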

When a plan or package is bound with DEGREE(ANY), or the CURRENT DEGREE register is set to ANY, the DB2 optimizer considers whether parallelism is possible for the most efficient sequential plan. If that plan does not lend itself to parallelism, then the next best sequential plan that does allow for parallelism is chosen.

Limited partition scan

Limited partition scan is a method that allows DB2 to limit the scan of data in a partitioned table space. DB2 can determine, from the values in the SQL predicate, the lowest and highest partition that can possibly contain the table rows that are requested by the SQL statement, and then limits the data scan to that range of partitions. In order to use this technique, the SQL must provide a predicate on the first key column of the partitioning index.
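For instance, given the hypothetical TRANS table partitioned on ACCT_NO (as sketched earlier), a predicate on that leading key column lets DB2 confine the scan to the qualifying partitions:

    -- Only the partitions whose key ranges overlap
    -- '20000000' through '39999999' are scanned.
    SELECT TRAN_DATE, AMOUNT
        FROM DBUSER.TRANS
        WHERE ACCT_NO BETWEEN '20000000' AND '39999999';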

Parallelism recommendations

Here is a list of things to consider in order to optimize the performance boost that parallel processing can deliver:

  • Partition the table space as evenly as possible, because the degree of parallelism is influenced by skews in the data. Normally DB2 divides the table space into logical pieces based on the size of the largest physical partition.
  • Allocate as many central processors (CPs) as possible, and as many I/O devices and paths as possible, for DB2 application processing.
  • For I/O intensive queries, ensure that the number of partitions is the same as the number of I/O paths that can access the table space.
  • For processor-intensive queries, ensure that the number of partitions is equal to the number of CPs that will be allocated for processing queries across the data-sharing group.
  • Place the partitions for the table space and indexes on separate DASD volumes and (if possible) separate control units in order to minimize I/O contention.
  • Execute the RUNSTATS utility on a regular basis in order to obtain partition-level statistics.
  • Monitor the virtual buffer pool thresholds and usage, making sure that you provide sufficient buffer pool space to maximize the degree of parallelism invoked.

Buffer pool considerations

The importance of the buffer pool

Most experts consider the database buffer pools to be the most critical resource in the DB2 environment affecting performance. Much of DB2's architecture and design is based on the concept of avoiding physical I/O as much as possible or practical.

The DB2 buffer pools consist of slots of contiguous memory. After being read in from DASD, the data and index pages go into these slots and remain there until the DB2 buffer manager determines that those slots should be used for some other data. The more often that the data requested by applications is in memory rather than out on DASD, the better overall performance will be. In essence, the data is reused, thus minimizing the I/O needed by the applications.

The decision to free up a buffer pool slot is based on the least recently used (LRU) principle. DB2 maintains two LRU lists, one for randomly accessed pages and another for sequentially accessed pages. This prevents a large table scan from completely dominating the buffer pool and adversely impacting random operations. Through the use of various thresholds, DB2 provides you additional flexibility for improving the performance of buffer pools. These thresholds are discussed in some detail in the DB2 Administration Guide.

Proper size for the buffer pool

The designation of buffer pool sizes depends upon the amount of storage (both central and expanded) that is available. I recommend that you analyze the buffer pool allocations and increase their size until either no additional throughput results from the increased allocation, or the MVS paging rate approaches unacceptable levels. This is accomplished by increasing VPSIZE for as long as the number of DASD I/Os keeps decreasing, stopping when the cost of paging outweighs the benefit of the reduced I/O.
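The tuning loop described above is driven with the ALTER BUFFERPOOL and DISPLAY BUFFERPOOL commands; the pool and size below are illustrative only:

    -ALTER BUFFERPOOL(BP1) VPSIZE(40000)
    -DISPLAY BUFFERPOOL(BP1) DETAIL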

Earlier it was stated that the number of GETPAGEs is perhaps the best measure of the amount of work that DB2 is doing, and the purpose of the buffer pools is to minimize I/O. (Asynchronous reads usually indicate prefetch, which is generally desirable from a performance standpoint. Synchronous reads, on the other hand, usually indicate random I/O from DASD, because the requested page was not found in the buffer pool.) The accounting reports show the number of GETPAGEs and synchronous reads per buffer pool. A generally accepted ROT says that if the ratio of GETPAGEs to synchronous reads is less than 10:1, then you should evaluate the need for a larger buffer pool.

Multiple buffer pool configuration

If your operating environment allows you to allocate reasonably large amounts of memory to DB2 buffers, then a configuration of multiple buffer pools can most likely provide improved performance for specific applications or databases. Be aware, however, that with multiple buffer pools, monitoring their effectiveness becomes all the more important.

The general recommendations for allocating multiple buffer pools include the following (a brief DDL sketch follows this list):

  • Separate the table spaces from their associated indexes into different pools in order to minimize index I/O.
  • Group data with different data access patterns into separate pools. Batch and query applications typically do a lot of sequential processing, while data access for OLTP tends to be more random in nature. This provides a means to exploit the various thresholds to favor certain types of data access within a buffer pool.
  • Provide a separate buffer pool to isolate applications. This provides a means for closely monitoring an application with performance problems, or testing new applications.
  • If the performance of sorts is important in your environment, then create a separate buffer pool for work files.
  • For relatively small but heavily updated tables, a sufficiently large separate buffer pool may be able to eliminate both read and write I/Os.
  • A separate pool for read-only tables (small, reference tables) may improve performance.
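A minimal sketch of the first recommendation, separating data from indexes (object names hypothetical):

    CREATE TABLESPACE TSORD
        IN DBTEST
        SEGSIZE 32
        BUFFERPOOL BP1;

    CREATE INDEX DBUSER.XORD1
        ON DBUSER.ORDERS (ORDER_NO)
        BUFFERPOOL BP2;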

Summary

Thoughtful database design can offer significant performance benefits, but it must begin early in the application development process. Many of the principles mentioned above have been used by wise developers since the early days of DB2, and still hold true today. But it is also crucial to be aware of DB2 functional enhancements, and changes in other areas of hardware and software technology that will influence your current and future applications. When database performance is an important focus in the development process, your database design will result in a greater probability of delivering optimum performance for your DB2 applications.
