The basic rules of DB2 archival logging.
adamsand
The basic rules:
1. When a transaction log is full or closed, DB2 will archive the log (assuming an archiving method is defined in LOGARCHMETH1).
2. When a transaction log no longer contains an open unit of work (i.e. its extent number is less than the First Active Log) and it has been archived successfully, it can be renamed and reused.
3. As a general rule, the minimum number of logs allocated at database activation is LOGPRIMARY. The maximum number of logs that will be allocated to handle a large transaction is LOGPRIMARY + LOGSECOND. Nothing in the information written to a particular log, or in its header, distinguishes a primary log from a secondary log; LOGPRIMARY and LOGPRIMARY + LOGSECOND simply act as the minimum and maximum counts of logs that may be allocated in the active log path.
4. If DB2 allocates additional logs beyond LOGPRIMARY (up to the limit of LOGPRIMARY + LOGSECOND) to handle a larger transaction, those logs will remain in the active log path until the database is deactivated.
5. Infinite logging (LOGSECOND = -1) changes the rules a bit. As soon as a log has been archived, it is a candidate for renaming and reuse. We generally try to maintain LOGPRIMARY number of logs + some overhead in the active log directory at any time. This means that your First Active Log (and subsequent logs) may be stored in the log archive location and will need to be retrieved for a rollback or crash recovery operation. This allows for much larger transactions than the limit imposed by the sum of LOGPRIMARY and LOGSECOND, but it can dramatically increase the time needed to do a rollback or crash recovery due to the overhead of retrieving the logs from the archive method (Caveat Utilitor - Let the user beware).
6. The ARCHIVE LOG command does not actually archive a log; it simply truncates it at the last new log page written and switches writing to the next log. Now that the log is closed, the normal archiving of closed logs kicks in (see rule 1).
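Rules 1 and 6 map directly onto a couple of CLI commands. As a sketch (the database name SAMPLE and the disk paths here are placeholders, not recommendations):

```shell
# Rule 1: define an archiving method so closed logs get archived.
# Note: changing LOGARCHMETH1 from OFF puts the database in backup-pending
# state, so a full backup is required before it can be used again.
db2 update db cfg for SAMPLE using LOGARCHMETH1 DISK:/db2/archlogs/
db2 backup db SAMPLE to /db2/backups

# Rule 6: ARCHIVE LOG truncates the current log and switches writing to the
# next one; the now-closed log is then archived by the normal mechanism.
db2 archive log for db SAMPLE
```

ARCHIVE LOG is commonly used before taking the database offline, to push the tail of the log stream out to the archive location.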
A few examples:
If you have an open transaction that spans multiple logs, you may see a copy of a particular log (or logs) in both the active and archive paths. That is a result of a combination of rules 1 and 2: the log was archived when closed, but not renamed in the active path due to that open unit of work. As soon as the unit of work completes, you may see a clump of those previously active logs renamed with almost the same timestamp. A directory listing in timestamp order will likely not be in log extent number order, but rest assured, DB2 has a handle on which logs are next to be used.
If the log archiving method stops working, DB2 can continue to function by allocating additional logs in the active log path until it hits DISK FULL. It is not restricted by the sum of LOGPRIMARY and LOGSECOND, because the First Active Log is still moving forward, so the span from the First Active Log to the current log never exceeds that sum. DB2 also cannot rename and reuse the closed logs, because they have not been archived, even if they no longer contain an open unit of work (rule 2: both conditions must be true to rename/reuse a log).
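If you suspect archiving has stalled and logs are piling up, a few commands show where things stand (SAMPLE and the active log path are placeholders):

```shell
# First active log file and the archiving method, from the database config
db2 get db cfg for SAMPLE | grep -i -E "first active|LOGARCHMETH1"

# Log state, including the current log being written, from db2pd
db2pd -db SAMPLE -logs

# Count the log files accumulating in the active log path
ls /db2/activelogs/S*.LOG | wc -l
```

If the count keeps climbing well past LOGPRIMARY while no large transaction is running, the archive method is the first thing to check.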
An online backup will need the logs that span the entire backup, from the First Active Log when the backup starts through the last log that is truncated when the backup completes. That is why you may see archived logs being retrieved at the end of an online backup for inclusion in the online backup image. They were likely archived and renamed during the backup if it takes a while (rule 2 again).
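A hedged sketch of an online backup that captures its spanning logs in the image, and of pulling those logs back out at restore time (SAMPLE and the paths are placeholders):

```shell
# Online backup; INCLUDE LOGS stores the spanning logs inside the image
db2 backup db SAMPLE online to /db2/backups include logs

# At restore time, LOGTARGET extracts those logs from the image so they can
# be used for the subsequent rollforward
db2 restore db SAMPLE from /db2/backups logtarget /db2/extractedlogs
```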
If you have a large job that needs many logs to complete its processing, you may see a number of logs greater than LOGPRIMARY in the active log path. This number of logs will remain the same and will not decrease until you deactivate and reactivate the database. Assuming that all the logs were successfully archived, the number of logs in the active path will return to LOGPRIMARY (rule 3) after deactivation/reactivation.
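Per rule 3, the only way to shrink the active log path back to LOGPRIMARY logs is across a deactivation. Assuming a maintenance window with no active connections (SAMPLE and the path are placeholders):

```shell
# Extra logs beyond LOGPRIMARY persist until the database is deactivated
db2 deactivate db SAMPLE
db2 activate db SAMPLE

# The active log path should now hold roughly LOGPRIMARY logs again
ls /db2/activelogs/S*.LOG | wc -l
```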
If you are using infinite logging (LOGSECOND = -1) and you allow an open unit of work to remain uncommitted, you can end up with hundreds or even thousands of logs between your First Active Log (now in the archive location) and the current log being written to (you can see the Current Log in db2pd -db <dbname> -logs output). It is recommended that you use the NUM_LOG_SPAN parameter to define an upper boundary that is reasonable for the largest workload you anticipate, to prevent an errant open unit of work from leaving you in an untenable state.
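NUM_LOG_SPAN caps how many active log files a single transaction may span; a transaction that exceeds it is rolled back. A sketch, where the value 50 and the database name SAMPLE are illustrative placeholders to be sized for your own workload:

```shell
# Roll back any single transaction that spans more than 50 log files
db2 update db cfg for SAMPLE using NUM_LOG_SPAN 50

# Watch the gap between the First Active Log and the Current Log
db2pd -db SAMPLE -logs
```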
Hopefully, these rules and examples provide a clear picture of how logging is expected to behave and help you understand any behavior you may observe.
Please leave a comment if you have any questions or feedback.