Improving log write performance
By following certain recommendations, you can reduce the performance impact of writing data to the log data sets.
Procedure
To improve log write performance, use any of the following approaches:
- If you replicate your logs to remote sites, choose a storage system that provides the best possible performance for remote replication.
- Choose the largest size that your system can tolerate for the log output buffer. Because the pages for the log output buffer are permanently fixed in real storage, choose the largest size that you can dedicate in real storage. A larger log output buffer might decrease the number of forced I/O operations that occur because additional buffers are unavailable, and it can also reduce the number of wait conditions. Use the OUTBUFF subsystem parameter to specify the size of the output buffer that is used for writing active log data sets; a sketch of how the parameter might be specified appears after this list. The maximum size of the log output buffer is 400,000 KB.
To validate the OUTBUFF setting, you can collect IFCID 1 (system services statistics) trace records. The QJSTWTB field indicates the number of times the buffer was full and caused a log record to wait for I/O to complete. A non-zero count for QJSTWTB might indicate that the log output buffer is too small.
- Choose fast devices for log data sets. The devices that are assigned to the active log data sets must be fast. In environments with high levels of write activity, high-capacity storage systems, such as the IBM® Storage DS8000® series, are recommended to avoid logging bottlenecks.
- Avoid device contention. Place the copy of the bootstrap data set and, if you use dual active logging, the copy of the active log data sets on volumes that are accessible on a path different from that of their primary counterparts.
- Preformat new active log data sets. Whenever you allocate new active log data sets, preformat them by using the DSNJLOGF utility; a sample job appears after this list. This action avoids the overhead of preformatting the log, which otherwise occurs at unpredictable times.
- In most cases, do not stripe active log data sets. Important: Do not use striped active logs for disaster recovery.
You can use DFSMS to stripe the logs, but striping is generally unnecessary with the latest devices. Striping increases the number of I/O operations, which can increase CPU time and lengthen Db2 commit times. Striping might improve the performance of batch insert jobs, but it can also harm the performance of online transaction processing. Striping is especially risky for performance if you replicate the logs over long distances.
- Consider striping and compressing archive log data sets by using DFSMS. Doing so might reduce the time that is needed to offload the logs and the time to recover by using archive logs. However, the performance of DFSMS striping and compression depends on the z/OS® release and the types of hardware that you use.
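Examples
The OUTBUFF subsystem parameter is typically specified on the DSN6LOGP macro in the DSNTIJUZ installation job. The following fragment is only a sketch: the parameter values other than OUTBUFF, the continuation-column alignment, and the overall layout of DSNTIJUZ are site-specific and are shown here for illustration.

```
*  Fragment of the system parameter assembly (DSNTIJUZ) - illustrative only
*  OUTBUFF is specified in KB; 400000 KB is the maximum size
         DSN6LOGP OUTBUFF=400000,    SIZE OF LOG OUTPUT BUFFER (KB)    X
               TWOACTV=YES,          DUAL ACTIVE LOGGING               X
               TWOARCH=YES           DUAL ARCHIVE LOGGING
```

To check whether the buffer is large enough, you can run a statistics trace that includes IFCID 1 (for example, -START TRACE(STAT) CLASS(1)) and review the QJSTWTB counter in your statistics reports.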
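The following JCL is a minimal sketch of a DSNJLOGF job that preformats one copy of a newly allocated active log data set. The job card, the prefix.SDSNLOAD library name, and the vcatname.LOGCOPY1.DS04 data set name are placeholders; run one step for each new log data set, including the second copy if you use dual active logging.

```
//PREFORM  JOB (ACCT),'PREFORMAT LOG',CLASS=A,MSGCLASS=X
//JOBLIB   DD  DISP=SHR,DSN=prefix.SDSNLOAD
//*
//* Preformat one new active log data set with DSNJLOGF
//STEP1    EXEC PGM=DSNJLOGF
//SYSPRINT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSUT1   DD  DISP=SHR,DSN=vcatname.LOGCOPY1.DS04
```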