DB2 Version 9.7 for Linux, UNIX, and Windows

Buffer pool management of data pages

Pages in the buffer pool can be either in use or not, and either dirty or clean. A dirty page contains data that has been changed in memory but has not yet been written to disk.

Pages remain in the buffer pool until the database shuts down, the space occupied by a page is required for another page, or the page is explicitly purged from the buffer pool, for example, as part of dropping an object. The following criteria determine which page is removed when another page requires its space:
  • How recently the page was referenced
  • The probability that the page will be referenced again
  • The type of data on the page
  • Whether the page was changed in memory but not yet written out to disk

Changed pages are always written out to disk before being overwritten. Changed pages that are written out to disk are not automatically removed from the buffer pool unless the space is needed.
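The eviction rule can be sketched in a few lines of Python. This is an illustrative model only, not DB2's actual victim-selection algorithm; all class and method names here (BufferPool, flush_to_disk, and so on) are invented for the sketch:

```python
# Sketch of the eviction rule: a dirty victim page is flushed to disk
# before its frame is reused; a clean victim is simply overwritten.
# Illustrative only; not DB2's actual buffer pool implementation.

class Page:
    def __init__(self, page_id):
        self.page_id = page_id
        self.dirty = False      # changed in memory but not yet on disk

class BufferPool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = {}        # page_id -> Page
        self.disk_writes = 0    # pages flushed to disk so far

    def flush_to_disk(self, page):
        # Stand-in for the real I/O; writing the page makes it clean.
        self.disk_writes += 1
        page.dirty = False

    def fetch(self, page_id, modify=False):
        page = self.frames.get(page_id)
        if page is None:
            if len(self.frames) >= self.capacity:
                # Evict the oldest resident page (a real pool would weigh
                # recency of reference, likelihood of reuse, and data type);
                # a dirty victim must be written out first.
                victim_id, victim = next(iter(self.frames.items()))
                if victim.dirty:
                    self.flush_to_disk(victim)
                del self.frames[victim_id]
            page = Page(page_id)
            self.frames[page_id] = page
        if modify:
            page.dirty = True
        return page

pool = BufferPool(capacity=2)
pool.fetch(1, modify=True)   # page 1 becomes dirty
pool.fetch(2)                # page 2 stays clean
pool.fetch(3)                # pool is full: page 1 is flushed, then evicted
```

Note that the flush happens only because the frame is needed; until then, a written-out page would simply remain in the pool as a clean page.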

Page-cleaner agents

In a well-tuned system, it is usually the page-cleaner agents that write changed or dirty pages to disk. Page-cleaner agents perform I/O as background processes and allow applications to run faster, because the database agents serving those applications can devote themselves to actual transaction work. Page-cleaner agents are sometimes referred to as asynchronous page cleaners or asynchronous buffer writers, because they are not coordinated with the work of other agents and work only when required.

To improve performance for update-intensive workloads, you might want to enable proactive page cleaning, whereby page cleaners behave more proactively in choosing which dirty pages get written out at any given point in time. This is particularly true if snapshots reveal that there are a significant number of synchronous data-page or index-page writes in relation to the number of asynchronous data-page or index-page writes.
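As a rough illustration, the snapshot comparison described above can be expressed as a simple check. The counter names mirror the data-page write monitor elements (pool_data_writes counts all data-page writes, pool_async_data_writes the asynchronous ones performed by page cleaners), but the 25% threshold is an arbitrary assumption for the sketch, not a DB2 recommendation:

```python
def synchronous_write_fraction(pool_data_writes, pool_async_data_writes):
    """Fraction of data-page writes performed synchronously by database
    agents (while applications wait), rather than by page cleaners.

    pool_data_writes counts all data-page writes; pool_async_data_writes
    counts those done asynchronously, so the difference is the
    synchronous portion.
    """
    if pool_data_writes == 0:
        return 0.0
    sync_writes = pool_data_writes - pool_async_data_writes
    return sync_writes / pool_data_writes

# Example snapshot counters (made-up numbers for illustration):
fraction = synchronous_write_fraction(10_000, 6_500)

# If a significant share of writes is synchronous, the cleaners are
# lagging behind; proactive page cleaning may help. The 0.25 threshold
# is an illustrative choice, not a documented tuning rule.
needs_tuning = fraction > 0.25
```

The same comparison can be repeated with the index-page write counters.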

Figure 1 illustrates how the work of managing the buffer pool can be shared between page-cleaner agents and database agents.

Figure 1. Asynchronous page cleaning. Dirty pages are written out to disk.

Page cleaning and fast recovery

Database recovery after a system crash is faster if more pages were already written to disk at the time of the crash, because the database manager can read more of the needed pages directly from disk instead of redoing changes from the database log files.

The size of the log that must be read during recovery is the difference between the location of the following records in the log:
  • The most recently written log record
  • The log record that describes the oldest change to data in the buffer pool
Page cleaners write dirty pages to disk in such a manner that the size of the log that would need to be replayed during recovery never exceeds the following:
   logfilsiz * softmax / 100 (in 4-KB pages)
where:
  • logfilsiz represents the size of the log files
  • softmax represents the percentage of log files that are to be recovered following a database crash; for example, if the value of softmax is 250, then 2.5 log files will contain the changes that need to be recovered if a crash occurs
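The limit above can be computed directly. The following sketch works the softmax = 250 example through the formula; the function name is invented for illustration:

```python
def recovery_log_limit_pages(logfilsiz, softmax):
    """Upper bound, in 4-KB pages, on the log that must be replayed
    during crash recovery: logfilsiz * softmax / 100.

    logfilsiz is the log file size in 4-KB pages; softmax is the
    percentage of one log file to be recovered after a crash.
    """
    return logfilsiz * softmax / 100

# With log files of 1000 4-KB pages each and softmax = 250, at most
# 2.5 log files' worth of changes need to be recovered:
limit = recovery_log_limit_pages(1000, 250)   # 2500 4-KB pages
limit_bytes = limit * 4096                    # 10,240,000 bytes
```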

To minimize log read time during recovery, use the database system monitor to track the number of times that page cleaning is performed. The pool_lsn_gap_clns (buffer pool log space cleaners triggered) monitor element provides this information if you have not enabled proactive page cleaning for your database. If you have enabled proactive page cleaning, this condition should not occur, and the value of pool_lsn_gap_clns is 0.

The log_held_by_dirty_pages monitor element can be used to determine whether the page cleaners are not cleaning enough pages to meet the recovery criteria set by the user. If log_held_by_dirty_pages is consistently and significantly greater than logfilsiz * softmax / 100, then either more page cleaners are required, or softmax needs to be adjusted.
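This tuning check can be sketched as follows. The helper name is invented, and the unit conversion is an assumption made for the sketch: it treats log_held_by_dirty_pages as a byte count and logfilsiz as a count of 4-KB pages, so both sides are converted to bytes before comparing:

```python
def cleaners_keeping_up(log_held_by_dirty_pages_bytes, logfilsiz, softmax):
    """Rough check (a sketch, not a DB2 utility): is the log space held
    by dirty pages within the recovery target logfilsiz * softmax / 100?

    Assumes log_held_by_dirty_pages is reported in bytes and logfilsiz
    in 4-KB pages, so the target is converted to bytes for comparison.
    """
    target_bytes = logfilsiz * 4096 * softmax / 100
    return log_held_by_dirty_pages_bytes <= target_bytes

# Example: the target is 1000 * 4096 * 250 / 100 = 10,240,000 bytes.
ok = cleaners_keeping_up(8_000_000, logfilsiz=1000, softmax=250)
lagging = not cleaners_keeping_up(15_000_000, logfilsiz=1000, softmax=250)
```

If the check fails consistently over time, the remedies are those named above: add page cleaners or adjust softmax.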