In data warehouses, the size of fact tables or history tables poses challenges to designers and administrators. These tables routinely contain hundreds of millions of rows and can sometimes contain hundreds of billions of rows. Key concerns with tables of this size are:
- Query performance
- Getting large volumes of new data into these tables
- Removing large volumes of obsolete data every month or quarter
Over time, DB2 has continued to add and enhance features to address these needs. An important addition in DB2 9 for Linux, UNIX, and Windows is the table partitioning feature. This leads to the questions:
- What are these features?
- How does each one contribute to addressing the concerns above?
- Which features should I use?
- How can I use these features in combination to achieve synergies?
These are the questions addressed by this article. After reading this article, readers will be able to:
- Understand the distinctive contribution of each feature to addressing their key concerns.
- See how to use these features effectively in combination.
With this background, readers will be well equipped to further investigate the details of the features they are interested in.
The CREATE TABLE statement now provides three ways to group data in a database table.
Table 1. DB2 features
| Clause in the CREATE TABLE statement | DB2 feature name |
| --- | --- |
| DISTRIBUTE BY HASH | DPF - Database Partitioning Feature |
| ORGANIZE BY DIMENSION | MDC - Multidimensional Clustering |
| PARTITION BY RANGE | TP - Table partitioning |
You can use these clauses in any combination to achieve the desired effect. Table 2 summarizes the terminology associated with these features as used in this article.
Table 2. DB2 feature terminology
| DB2 feature name | Name of a part | Column(s) used to partition the data | Other terms |
| --- | --- | --- | --- |
| Database Partitioning Feature (DPF) | database partition | distribution key | In prior versions, the distribution key was called the partitioning key |
| Multidimensional Clustering (MDC) | cells, which are comprised of blocks | dimension | block indexes |
| Table partitioning (TP) | data partition | table partitioning key | |
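To make the three clauses concrete, here is a sketch of a single CREATE TABLE statement that uses all of them together. The table and column names are hypothetical, and the statement assumes a DB2 9 environment with multiple database partitions; note that the syntax keyword for MDC is ORGANIZE BY DIMENSIONS.

```sql
-- Hypothetical fact table combining DPF, TP, and MDC
CREATE TABLE sales_fact (
    trans_id   BIGINT        NOT NULL,
    sale_date  DATE          NOT NULL,
    region     CHAR(4)       NOT NULL,
    amount     DECIMAL(12,2)
)
DISTRIBUTE BY HASH (trans_id)                          -- DPF: spread rows across database partitions
PARTITION BY RANGE (sale_date)
    (STARTING '2009-01-01' ENDING '2009-12-31'
     EVERY 1 MONTH)                                    -- TP: one data partition per month
ORGANIZE BY DIMENSIONS (region);                       -- MDC: cluster rows by region
```

Each clause operates independently: removing any one of the three leaves a valid table definition using the remaining features.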
Each feature provides a distinct approach to grouping data in a table and provides a unique contribution to addressing needs related to large fact or history tables.
DPF, the oldest feature, enables you to divide a database into database partitions. Each database partition has its own set of computing resources, including CPU and storage. In a DPF environment, each table row is distributed to a database partition according to the distribution key specified in the CREATE TABLE statement. When a query is processed, the request is divided so each database partition processes the rows that it is responsible for. Essentially, DPF is a scalability feature. DPF can maintain consistent query performance as the table grows by providing the capability to add more processing power in the form of additional database partitions. This capability is often referred to as providing linear scalability using DB2's shared nothing architecture.
DPF is much more than a table design consideration. It is an approach to sizing and configuring an entire database system. Recommended practices have been developed for configuring such systems for optimal performance, reliability, and capacity growth. Customers can purchase the recommended hardware and software bundle and configuration in an offering known as the BCU (Balanced Configuration Unit).
MDC, introduced in DB2 Version 8, enables rows with similar values across multiple dimensions to be physically clustered together on disk. This clustering allows for efficient I/O for typical analytical queries. For example, all rows where Product=car, Region=East, and SaleMonthYear=Jan09 can be stored in the same storage location, known as a block. The dimensions are defined in the CREATE TABLE statement, and storage is reserved for each combination of dimension values. Essentially, MDC is a feature that maximizes the performance of queries, especially those commonly found in data warehouses. This includes queries that select rows based on combinations of values in several columns, for example: DATE BETWEEN 'Jan-01-2004' AND 'Feb-01-2005' AND Country <> 'United States' AND Product = 'Cell Phones'.
TP, introduced in DB2 9, is similar to MDC in that it enables rows with similar values to be stored together. However, the following TP characteristics distinguish it from MDC:
- TP supports partitioning a table into data partitions along a single dimension. A common design would be to create a data partition for each month's data. MDC supports defining multiple dimensions.
- With TP, the user can manually define each data partition, including the range of values to include in that data partition. MDC automatically defines a cell (and creates blocks to store data for that cell) for each unique combination of MDC dimension values.
- Each TP partition is a separate database object (unlike other tables, which are a single database object). Consequently, TP supports attaching and detaching a data partition from the TP table. A detached partition becomes a regular table. As well, each data partition can be placed in its own table space, if desired.
Essentially, TP's distinct benefit is related to adding or removing large numbers of rows from a table, that is, roll-in and roll-out. For readers familiar with the use of Union All View (UAV) to partition history tables on date, TP can be considered an analogous, but superior, solution.
Table 3 summarizes the comparison of these features:
Table 3. DB2 feature comparison summary
| Feature | How the feature organizes data | Benefits |
| --- | --- | --- |
| DPF | Evenly distributes rows across database partitions | Scalability - add computing resources (that is, database partitions) as the database grows |
| MDC | Groups all rows with similar values on multiple dimensions in the same physical location in the table, called a block | Query performance - organizes data for faster retrieval, especially for queries that involve range predicates on multiple columns |
| TP | Groups all rows in a specified range of a single dimension in the same data partition | Data movement - add and remove large volumes of data by adding and removing entire data partitions |
This section expands on the earlier point that these "three amigos" are independent and complementary.
When designing a table, the use of each feature can be considered independently. For example,
- Determining the distribution key for DPF is not affected by whether or not MDC and TP are also being used.
- Whether a column should be used as the table partitioning key for TP is not affected by whether that column is also used as an MDC dimension, and vice versa. Each decision can be made independently.
The way each feature works, say with regard to indexing, does not change when a new partitioning feature is introduced. For example, when MDC was introduced, its index aspects did not change the DPF aspects of indexes. Likewise, when TP was introduced, it did not change indexing related to DPF or MDC. Keeping this in mind when learning about these features can help avoid confusion.
For example, suppose you are learning about TP and you come across the statement that "TP has global indexes." You should not infer that there is some change in how indexes are handled in DPF. In statements such as this, the term "global" merely means that indexes are global across TP data partitions.
Generally speaking, one feature cannot be used to address shortfalls or problems in database design related to another feature. Notable examples include:
- TP cannot address DPF-related problems, whether the problem is data skew or the difficulty of administration activities in DPF. The same DPF remedies apply regardless of whether TP is also used.
- TP cannot be used to rectify a poor MDC design.
In summary, if there is a problem with DPF, MDC, or TP, then remedies applicable to that feature need to be attempted.
Fact (or history) tables in data warehouses make excellent candidates for use with each partitioning feature, as you see in Table 4 below.
Table 4. Fact tables have characteristics suited for use with DB2 partitioning features
| Feature | Suitable table characteristics | Characteristics of fact tables |
| --- | --- | --- |
| DPF | Large tables - larger than can be handled by a single set of CPUs and I/O channels | Fact tables are the largest database tables. They often contain hundreds of millions of rows and sometimes hundreds of billions of rows. |
| MDC | Queries whose result sets return rows with similar values along multiple dimensions | Fact tables (and data warehouses in general) are designed to support these types of queries |
| TP | Tables where large volumes of rows are added periodically, then later removed after an expiry date | In fact tables, new data is often added daily; obsolete data is usually removed monthly or quarterly |
This section provides a feel (and only a feel) for the nature of design decisions and rules of thumb. References in the resources list provide more comprehensive design guidance.
For DPF, the top priority when selecting a distribution key is to find one that will distribute the rows evenly across the database partitions. When this situation is not achieved, the result is data skew. This means that one or some database partitions are assigned a disproportionate number of table rows which can create a performance bottleneck. Columns with many distinct values are good candidates. Other considerations include choosing a column that maximizes the performance of joins.
Another DPF design decision is the number of database partitions. The number of database partitions is not primarily a table design consideration. Rather it is an overall system design consideration based on the anticipated raw data size of the entire database and capacity of the server hardware. Many systems need fewer than 20 database partitions. However, the largest systems have many more. The common range can be expected to increase due to the trend toward larger data warehouses.
For MDC, a key decision is which column(s) will serve as MDC dimensions. The design challenge is to find the set of dimensions and granularity that maximizes grouping while minimizing storage requirements. This requires knowledge of the pattern of queries that will be run. Good candidates are columns that have any or all of the following characteristics:
- Used for range, equality, or IN-list predicates
- Used for roll-in, roll-out, or other large-scale deletes of rows
- Referenced in GROUP BY or ORDER BY clauses
- Foreign key columns
- Columns in the join clauses of the fact table in a star schema database
- Coarse granularity, that is, few distinct values
A typical design is an MDC dimension for a column representing date plus dimensions on 0 to 3 other columns such as region and product_type.
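That typical design might be sketched as follows. The table and column names are hypothetical; the generated column derives a yyyymm value from the date so that the date dimension is coarse enough to keep the number of cells (and hence reserved storage) manageable:

```sql
-- Hypothetical MDC design: date at month granularity plus two other dimensions
CREATE TABLE sales (
    sale_date    DATE          NOT NULL,
    sale_month   INTEGER GENERATED ALWAYS AS (INTEGER(sale_date) / 100),
    region       CHAR(4)       NOT NULL,
    product_type CHAR(8)       NOT NULL,
    amount       DECIMAL(12,2)
)
ORGANIZE BY DIMENSIONS (sale_month, region, product_type);
```

Clustering on the generated sale_month column rather than sale_date itself avoids creating a separate cell for every distinct day.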
For TP, design decisions include selecting the column to use as the table partitioning key and number of partitions. Usually the table partitioning key will be a time-based column. Each partition will correspond to a time range of data that is rolled out at the same time. For example, a table that has monthly rollout would have a partition for each month of data. One design consideration unique to TP is the need to handle rows that fall outside the ranges of values defined in the CREATE table statement.
A typical design for a database with monthly roll-out based on sale_date, would be to use sale_date as the table partitioning key and create a separate partition for each month.
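Such a design might look like the following sketch (table and column names are hypothetical). The EVERY clause creates one data partition per month across the specified range:

```sql
-- Hypothetical TP design: one data partition per month of sales data
CREATE TABLE transactions (
    trans_id   BIGINT        NOT NULL,
    sale_date  DATE          NOT NULL,
    amount     DECIMAL(12,2)
)
PARTITION BY RANGE (sale_date)
    (STARTING '2009-01-01' ENDING '2011-12-31' EVERY 1 MONTH);
```

Rows whose sale_date falls outside the defined ranges are rejected, which is the out-of-range design consideration mentioned above.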
Generally, one of the MDC dimensions will be a time-based column so the same column can be used for both MDC and TP. The MDC granularity can be finer-grain than the TP data partition size.
These points are summarized in the table below.
Table 5. Design rules of thumb summary
| Partitioning feature design decision | Rule of thumb |
| --- | --- |
| DPF - column to use as the distribution key | The top priority is choosing a column with many distinct values. |
| MDC - columns to use as dimensions | A typical design is to choose a column that represents date plus 0 to 3 other columns such as region and product_type. |
| TP - column used as the table partitioning key and number of partitions | Choose a time-based column. Define partitions that correspond to data that is rolled out at the same time. |
Table 6 shows examples of typical table designs. The Transactions history table represents a typical table in a traditional data warehouse. The Recent transactions table represents a table in an operational data store, essentially a data warehouse with only recent data.
Table 6. Examples of table designs
| Partitioning attribute | Transactions history table | Recent transactions table |
| --- | --- | --- |
| DPF - column used as the distribution key | Transaction ID | Transaction ID |
| DPF - number of database partitions | 20 | 4 |
| MDC - columns used as dimensions | Transaction date (Year+Month) = 36 values (see Note 1); Account type = 5 values; State = 51 values | Transaction date (days) = 90 values; Account type = 5 values; State = 51 values |
| TP - column used as the table partitioning key and number of partitions | Transaction date (Year+Month) = 1 partition per month | Transaction date (Year+Month) = 1 partition per month |
| Other table attributes | - | - |
| # of rows (1 million per day) | 1 billion | 90 million |
| # of columns | 30 | 30 |
| # of indexes | 4 | 15 |
Note 1: An alternative design for the MDC dimension on the Transactions history table is to define Transaction date at a smaller granularity, such as by week or by day. The trade-off between better query performance versus increased storage requirements would depend on characteristics of the data and the queries.
Having clarified the distinct and complementary nature of the three partitioning features, it is worthwhile broadening the discussion to include MQTs (materialized query tables). MQTs are targeted for use in similar situations as the partitioning features - namely data warehouse fact or history tables. So, a full accounting of how to use partitioning features, presented in this article, needs to address any special considerations where MQTs are also involved.
An MQT is a table whose definition is based on the result of a query. Another way of thinking about MQTs is that they are like a view whose result set is stored in a table. MQTs improve response time for complex queries that involve any of the following:
- Aggregations or computations based on data from base table(s),
- Joins of base tables, or
- A commonly accessed subset of data from one or more larger base tables.
As the data for the base table(s) changes, the MQT needs to be updated accordingly. The MQT feature provides various options for doing this to suit various operational needs.
The table below augments Table 3 with a row for MQTs.
Table 7. DB2 feature comparison summary, including MQTs
| Feature | How the feature organizes data | Benefits |
| --- | --- | --- |
| DPF | Evenly distributes rows across database partitions | Scalability - add computing resources (that is, database partitions) as the database grows |
| MDC | Groups all rows with similar values on multiple dimensions in the same physical location in the table, called a block | Query performance - organizes data for faster retrieval, especially for queries that involve range predicates on multiple columns |
| TP | Groups all rows in a specified range of a single dimension in the same data partition | Data movement - add and remove large volumes of data by adding and removing entire data partitions |
| MQT | Stores the results of a query in a table | Query performance - pre-compute and store (materialize) the result set of queries that involve expensive operations such as complex joins and table scans |
The considerations for designing MQTs in conjunction with partitioning features can be summarized by the following points:
- MQTs can be created based on a table that uses any combination of partitioning features. For example, an MQT can be created on one or more tables that use MDC and TP. The use of partitioning features on the base table does not need to take into account whether MQTs will be built upon this table. However, the design of an MQT can be affected by which partitioning features are used on the base table. For example, if the base table is partitioned with DPF, the design of the MQT should consider whether to replicate the MQT across the database partitions.
- An MQT can also make use of partitioning features. For example, an MQT can be partitioned using MDC or TP.
To complete the Transactions history table example introduced earlier, here are some MQTs defined on these tables:
- MQT 1 - transaction totals per day per account type
- MQT 2 - Year-to-Date totals per state
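As an illustration, MQT 1 might be defined along the following lines. This is a sketch with hypothetical table and column names; REFRESH DEFERRED means the MQT is updated only when explicitly requested, which matches the refresh policies discussed later:

```sql
-- Hypothetical MQT: transaction totals per day per account type
CREATE TABLE mqt_daily_totals AS (
    SELECT trans_date,
           account_type,
           SUM(amount) AS total_amount,
           COUNT(*)    AS total_count
    FROM   transactions
    GROUP BY trans_date, account_type
)
DATA INITIALLY DEFERRED REFRESH DEFERRED;
```

After creation, the MQT contains no data until it is populated with a REFRESH TABLE statement.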
We now turn to the first key consideration in evaluating and using these features: performance, specifically, performance of typical warehouse business user queries. These queries tend to:
- Select rows from a fact table that meet criteria for several dimensions. This implies joins between several dimension tables and the fact table.
- Use grouping or aggregate functions, such as COUNT, GROUP BY, ORDER BY.
- Return result sets that include many rows, from several thousand to several million rows.
- Are generated by users or their BI tools. This means queries are more ad hoc and not subject to the same performance testing and tuning found in transaction processing systems.
While people tend to think of faster performance, it's better to think of better performance. This encompasses things like:
- Peak query execution performance,
- Query execution performance stability,
- Performance across the wide range of workload characteristics found in data warehouses,
- Ease of designing the database to achieve performance goals, and
- Cost associated with achieving performance goals.
One emerging consideration when designing for performance is hardware trends. More powerful CPUs coupled with larger storage devices are making cost-effective I/O bandwidth a potential performance bottleneck. In this environment, I/O efficiency is a key design consideration.
This section illustrates how each partitioning feature contributes to the performance of query execution. We'll cover performance of roll-in and roll-out operations in later sections.
DPF contributes to query performance by putting into play more computing resources than would be possible otherwise. When the DB2 optimizer forms a query access plan for a query, it divides the work among the database partitions which work in parallel. Their individual results are then compiled and returned to the query submitter.
MDC contributes by making data retrieval efficient. Data with similar values on multiple dimensions is stored in the same location, making I/O operations, a traditional bottleneck in data warehouses, efficient. As well, the MDC feature includes block indexes that contain an entry for each block of data rather than for individual rows. This makes index operations more efficient. To deliver their potential performance benefits, MDC tables must be designed with an optimal (or at least adequate) set of dimensions, and MDC will only benefit those queries that include dimension columns. MDC tables are completely transparent to queries. Finally, MDC has a couple of noteworthy benefits for administration:
- MDC block indexes mean that fewer RID indexes are required, which reduces the storage space needed for indexes.
- Because new rows are inserted in the part of the table that contains rows with similar values, data remains clustered without the need to run the REORG utility.
TP contributes to query performance via partition elimination. To illustrate, consider our Transactions history table with 36 partitions, one per month. To process a query that SELECTs data from the last 12 months, the optimizer knows to eliminate scanning data for partitions not in the last 12 months. This partition elimination applies to both index scans and table scans. TP will only benefit those queries that include the table partitioning key column.
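Partition elimination can be illustrated with a query like the following sketch (the table and column names are hypothetical, matching the monthly-partitioned design above). Because the predicate is on the table partitioning key, the optimizer can skip the roughly 24 partitions that cannot contain qualifying rows:

```sql
-- Only the partitions covering the last 12 months are scanned;
-- the remaining partitions are eliminated at compile time.
SELECT account_type, SUM(amount) AS total_amount
FROM   transactions
WHERE  trans_date BETWEEN CURRENT DATE - 12 MONTHS AND CURRENT DATE
GROUP BY account_type;
```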
Getting new data from operational systems into a warehouse fact table is a process known by terms like ETL, ingest, populating the warehouse, and roll-in. The examples below demonstrate the range of situations that can be encountered.
Example 1 - Daily ingest using Load
- After the end of the business day, half a million to 2 million records arrive from operational systems in multiple flat files.
- A customer script uses the DB2 Load utility to load each file into the fact table. This is done while the table is off-line during a nightly batch window.
- At the beginning of the next business day, the previous day's data is visible to users querying the table.
Example 2 - Near real-time ingest using insert
- Every 30 minutes, a file arrives with 10-100 thousand records.
- Upon arrival, a customer-written program adds the records to a staging table then uses insert statements to add the records to the fact table.
Example 3 - MQT refresh
When MQTs are present, they are treated as part of the overall process of adding new data to the warehouse. Of particular interest here are the available MQT refresh policies. Generally, ETL processes explicitly specify when to update MQTs rather than allowing this to happen automatically.
- For Example 1, MQTs would likely be updated once all the nightly updates to the fact table are complete.
- For Example 2, MQTs would also commonly be updated once a day. This way, queries accessing the MQT will return the same result throughout the day even though the underlying data is being updated periodically.
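In both examples, the nightly update step for a deferred-refresh MQT amounts to a single statement per MQT, along these lines (the MQT name is hypothetical):

```sql
-- Recompute the materialized result set after the fact table is updated
REFRESH TABLE sales_summary_mqt;
```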
DB2 partitioning features help with roll-in but sometimes introduce additional considerations that customers need to accommodate in their roll-in process.
DPF enables data to be added more quickly because each database partition can process the work in parallel. On the other hand, DPF introduces the additional consideration of sending each row to the proper database partition.
MDC improves the roll-in process, compared to a table that does not use MDC. Benefits include:
- Less physical I/O: MDC tables have fewer RID indexes so less physical I/O is required to update indexes during roll-in.
- Faster inserts: MDC tables reduce page contention and locking, which makes it practical to perform inserts using multiple, parallel streams.
- Better performance for concurrent business queries: Because MDC tables reduce page contention and locking, this also reduces disruption to the performance of concurrent business queries.
On the other hand, with MDC it is advisable to pre-sort the data according to the MDC dimensions.
In some situations TP can aid roll-in. TP provides the capability to add rows to a partition then to attach that partition to the table when it's ready. However, in the examples here, that option does not come into play. Recall our sample table (Transactions history table) has a separate partition for each month, yet we are adding data one or more times per day. In this common situation, an empty partition needs to be added to the table before the start of a month and data is added each day of the month.
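In this situation, the monthly step is simply to add an empty partition ahead of time. A sketch, using the hypothetical monthly-partitioned table from earlier:

```sql
-- Before July begins, add an empty partition to receive July's data
ALTER TABLE transactions
    ADD PARTITION jul2009
        STARTING '2009-07-01' ENDING '2009-07-31';
```

Daily inserts or loads during July then land in this partition automatically, based on each row's date value.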
Finally, MQTs also add considerations to the roll-in process. In particular, there is a need to decide when to update the MQTs.
After data has resided in the data warehouse for some period, it is no longer useful to business users and needs to be removed to make room for new data. This process is known by terms such as roll-out, purging, and archiving. The examples below demonstrate the range of situations that can be encountered.
Generally, roll-out is subject to one of the following business rules, which has implications for how DB2 partitioning features are used:
- Remove rows that reach a certain age: This is the simplest and most common business requirement. In a traditional history table, 36 months is a common expiry age. For recent history tables, 60 to 180 days are common expiry ages.
- Remove rows that reach a certain age subject to business rules: In this case, certain rows may have reached the normal retirement age but still must be retained. For example, a historical transaction might need to be retained due to being used as evidence in a dispute or investigation.
- Keep the data accessible but free up the storage: This situation is sometimes referred to as archiving rather than roll-out. In this case, the rows must still be visible to user queries but there is a desire to move these seldom accessed rows to cheaper, lower performance storage. This business need can be addressed by tools such as Tivoli Hierarchical Storage Manager (HSM).
As usual, MQTs need to be factored into the equation. Generally, MQTs are updated to remove the corresponding summary data. For example, if data from March 2003 is removed from the fact table, then summary data for that month is removed from MQTs.
For supporting roll-out, DB2 partitioning features have some noteworthy features geared to different business needs.
Starting with DPF, this feature does not contribute much to roll-out. Roll-out in the DPF case is not much different than in the non-DPF case.
MDC and TP both offer benefits for the roll-out operation. In any given DB2 version, one feature may offer better roll-out performance than the other, but over time both features should approach parity. What is more important when comparing these features is that each has unique advantages for certain roll-out situations.
TP is the obvious choice for the basic, common roll-out situation (that is, simply remove rows that have reached a certain age). Roll-out can be accomplished by a simple DETACH operation. As well, TP is uniquely suited for the following situations:
- Moving rolled out data to another location (that is, to a separate table or database) rather than simply deleting it.
- Working with tools such as Tivoli HSM to move these older seldom accessed rows to cheaper storage but still keep them visible to user queries.
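For the basic monthly roll-out, the DETACH operation is a single statement. A sketch, again using hypothetical names consistent with the monthly-partitioned design:

```sql
-- Roll out the oldest month; the detached partition becomes
-- the stand-alone table transactions_jan2006
ALTER TABLE transactions
    DETACH PARTITION jan2006 INTO transactions_jan2006;
```

The resulting regular table can then be archived, exported to another database, or simply dropped, depending on the business requirement.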
MDC is uniquely suited to situations which are more demanding, but less common, than the basic roll-out situation:
- A customer wants to perform roll-out during periods of concurrent query activity and cannot accept the impact of the Z lock (exclusive lock) that TP briefly requires during a DETACH operation.
- A customer does not want to modify their application or scripts. That is, they don't want to replace their DELETE statements with TP's DETACH statement.
- A customer wants to delete large amounts of data on dimensions other than time (the column on which the table is partitioned).
- A customer wants to remove rows that reach a certain age subject to business rules. In this case, they can issue a SELECT statement to identify rows that qualify then DELETE those rows.
With MQTs, when they are updated to remove the corresponding summary data, it is advisable to use table partitioning on the MQT and to define the same data partitions as on the base table.
To keep things in perspective, while roll-out is the most common way to remove obsolete data, we should note that customers sometimes use other means to remove data that don't require the services of partitioning features. These are:
- Refresh table: In some data warehouses, the entire table is dropped once per year and a replacement table is loaded with all the data except for the data that is no longer needed.
- Purge: In some recent history tables, every few days the entire table is dropped and recreated because data is now available in another table with a longer history. The empty table is then ready to ingest new data until its next purge date.
This article introduced the following DB2 table design features: table partitioning, MDC, DPF, and MQT. These features work as a team to address customer concerns related to query performance, getting new data in and getting old data out. The table below summarizes how these DB2 features address each customer need.
Table 8. How DB2 features address customer needs
| Customer need | How DB2 features address it |
| --- | --- |
| Query performance | Each feature contributes in its own way to performance improvement, and the features can be combined to compound these benefits. |
| Roll-in | In most customer scenarios, MDC provides the most benefit. TP can provide benefits in some less common situations. |
| Roll-out | For simple, common roll-out scenarios, TP provides the most benefit. MDC is suitable for handling other roll-out situations where TP is not as well suited. |
- The IBM Redbook Up and Running with DB2 UDB ESE: Partitioning for Performance in an e-Business Intelligence World provides more in-depth coverage of DPF, MDC, and MQTs along with many other aspects of optimizing DB2 for business intelligence.
- The IBM Redbook DB2 UDB's High-Function Business Intelligence in e-business provides good coverage of MQTs in Chapter 2.
- The home page for the MDC Research and Development team provides links to various publications and presentations on MDC.
- Visit the developerWorks Business Intelligence zone to read articles and tutorials and connect to other resources to expand your business intelligence skills.
- Visit the developerWorks resource page for DB2 for Linux, UNIX, and Windows to read articles and tutorials and connect to other resources to expand your DB2 skills.