softmax - Recovery range and soft checkpoint interval configuration parameter
This parameter determines the frequency of soft checkpoints and the recovery range, both of which influence the crash recovery process.
The softmax parameter is replaced with the new page_age_trgt_mcr and page_age_trgt_gcr parameters, which are both configured as a number of seconds.
Databases that were upgraded from earlier releases continue to use the softmax parameter. To check whether softmax is in use, query the database configuration and examine the value of this parameter; a nonzero value means that softmax is still in effect. To switch from softmax to the new parameters, set the value of softmax to 0.
New databases are created with the value of softmax set to 0 by default.
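For example, the following Db2 command line processor (CLP) commands sketch one way to check the softmax setting and switch to the new parameters. The database name SAMPLE is a placeholder, the value 240 is purely illustrative, and the grep filter assumes a Linux or UNIX shell:
db2 get db cfg for SAMPLE | grep -i softmax                 # a nonzero value means softmax is still in effect
db2 update db cfg for SAMPLE using SOFTMAX 0                # switch to the new page age target parameters
db2 update db cfg for SAMPLE using PAGE_AGE_TRGT_MCR 240    # target page age in seconds (illustrative value)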
- Configuration Type: Database
- Parameter Type: Configurable
- Default [range]:
  - Db2® pureScale® environment: 0 [ 1 - 65 535 ]
  - Outside of a Db2 pureScale environment: 0 [ 1 - 100 * logprimary ]
- Unit of Measure: Percentage of the size of one primary log file
This parameter is used to:
- Influence the number of log files that need to be recovered following a crash (such as a power failure). For example, if a value of 100 is used, the database manager tries to keep the number of log files that need to be recovered to 1. If you specify 300 as the value of this parameter, the database manager tries to keep the number of log files that need to be recovered to 3 (see the example command below).
- Determine the frequency of soft checkpoints. A soft checkpoint is the process of writing information to the log control file; this information is used to determine the starting point in the log if a database restart is required.

To influence the number of log files required for crash recovery, the database manager uses this parameter to trigger the page cleaners to ensure that pages older than the specified recovery window are already written to disk.
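For example, the following command (a sketch for a placeholder database named SAMPLE) sets a recovery range of roughly three log files, matching the example in the list above:
db2 update db cfg for SAMPLE using SOFTMAX 300   # 300 percent of one primary log file, roughly 3 log files to recover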
At the time of a database failure (such as a power failure), there might have been changes to the database that:
- Have not been committed, but updated the data in the buffer pool
- Have been committed, but have not been written from the buffer pool to the disk
- Have been committed and written from the buffer pool to the disk.
To determine which records from the log file need to be applied to the database, the database manager uses information recorded in a log control file. (The database manager actually maintains two copies of the log control file, SQLOGCTL.LFH.1 and SQLOGCTL.LFH.2, so that if one copy is damaged, it can still use the other.) These log control files are periodically written to disk and, depending on the frequency of this event, the database manager might apply log records of committed transactions, or log records that describe changes that have already been written from the buffer pool to disk, during crash recovery. These log records have no impact on the database, but applying them adds processing time to the database restart.
The log control files are always written to disk when a log file is full, and during soft checkpoints. You can use this configuration parameter to control the frequency of soft checkpoints.
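Because the soft checkpoint calculation described next is expressed relative to logfilsiz, it can help to view softmax together with the log configuration. The following is a sketch for a placeholder database named SAMPLE, assuming a Linux or UNIX shell:
db2 get db cfg for SAMPLE | grep -iE 'softmax|logfilsiz|logprimary'   # show softmax alongside the log file size and count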
The frequency of soft checkpoints is based on the difference between the current state and the recorded state, given as a percentage of the logfilsiz. The recorded state is determined by the oldest valid log record indicated in the log control files on disk, while the current state is determined by the log control information in memory. (The oldest valid log record is the first log record that the recovery process would read.) A soft checkpoint is taken if the value calculated by the following formula is greater than or equal to the value of this parameter:
( (space between recorded and current states) / logfilsiz ) * 100
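As a worked example with illustrative numbers: suppose the gap between the recorded state and the current state has grown to the equivalent of three full log files and softmax is set to 300. Then:
( (3 * logfilsiz) / logfilsiz ) * 100 = 300
Because 300 is greater than or equal to the value of this parameter, a soft checkpoint is taken. With a gap of only two log files, the result would be 200 and no soft checkpoint would be triggered.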
Recommendation: You might want to increase or reduce the value of this parameter, depending on whether your acceptable recovery window is greater than or less than one log file. Lowering the value of this parameter will cause the database manager both to trigger the page cleaners more often and to take more frequent soft checkpoints. These actions can reduce both the number of log records that need to be processed and the number of redundant log records that are processed during crash recovery.
However, more frequent soft checkpoints might not reduce the time required to restart the database if you have:
- Very long transactions with few commit points.
- A very large buffer pool and the pages containing the committed transactions are not written back to disk very frequently. (Note that the use of asynchronous page cleaners can help avoid this situation.)
In both of these cases, the log control information kept in memory does not change frequently and there is no advantage in writing the log control information to disk, unless it has changed.
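As a sketch of the tuning described in the recommendation above, the following command lowers the recovery window to about half of one primary log file for a placeholder database named SAMPLE, causing the page cleaners to be triggered and soft checkpoints to be taken more often:
db2 update db cfg for SAMPLE using SOFTMAX 50   # recovery window of about half of one primary log file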