We have received many questions about database logging size settings. These usually come up while the product is in the mid-development phase. Below are some tips.
LOGPRIMARY: all primary log files are allocated when the database is activated. If you set this value too high and your system cannot hold the files (memory and disk space), you will not be able to activate your database.
LOGSECOND: secondary log files are created on the fly, one at a time, when the primary logs run out.
LOGFILSIZ: determines the size of each transaction log file on disk, expressed in units of the 4 KB page size (the on-disk size is LOGFILSIZ × 4 KB). For best performance, consider the overhead of creating, archiving, retrieving, and deleting log files. Also weigh the value against your hardware, e.g. your memory and your network bandwidth (when shipping logs online in an HADR or TSM setup); LOGFILSIZ should generally be no more than a few hundred MB.
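To see where these three parameters currently stand, you can query the database configuration from the DB2 command line. A minimal sketch, assuming a DB2 CLP environment and a database named SAMPLE (substitute your own database name):

```shell
# Show the current logging parameters for database SAMPLE.
# LOGFILSIZ is reported in 4 KB pages, so 4096 pages = 16 MB per log file.
db2 connect to SAMPLE
db2 get db cfg for SAMPLE | grep -iE 'LOGFILSIZ|LOGPRIMARY|LOGSECOND'
db2 connect reset
```

This is read-only, so it is safe to run on a production system at any time.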
You must balance the size of the log files with the number of primary log files:
The value of logfilsiz should be increased if the database has a large number of update, delete, or insert transactions running against it, which will cause the log files to fill very quickly. A log file that is too small can affect system performance because of the overhead of archiving old log files, allocating new log files, and waiting for a usable log file.
Note: the upper limit of log file size, combined with the upper limit on the number of log files (logprimary + logsecond), gives an upper limit of 1024 GB of active log space.
The value of logfilsiz should be reduced if disk space is scarce, since primary logs are preallocated at this size.
A log file that is too large can reduce your flexibility when managing archived log files and copies of log files, since some media might not be able to hold an entire log file.
The log files should be sized so that log switches do not occur more frequently than every few minutes at their fastest. The rate of change of active log numbers can be obtained by querying MON_GET_TRANSACTION_LOG. If a larger logfilsiz causes an unacceptably long time between log switches during quieter times, you may choose to schedule some manual switches to trigger archiving.
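One way to estimate the switch rate is to sample the active log number periodically and see how fast it advances. A sketch using MON_GET_TRANSACTION_LOG, assuming a connectable database named SAMPLE (the column names are my assumption of the relevant fields; check your DB2 version's documentation):

```shell
# Sample the transaction log state; rerun periodically (e.g. every minute)
# and watch how quickly the active log number advances under load.
db2 connect to SAMPLE
db2 "SELECT MEMBER, CURRENT_ACTIVE_LOG, TOTAL_LOG_USED, TOTAL_LOG_AVAILABLE
     FROM TABLE(MON_GET_TRANSACTION_LOG(-2))"
db2 connect reset
```

If the active log number advances more than about once every few minutes at peak, that is a sign LOGFILSIZ may be too small.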
Generally, your total active log space is 4 KB × LOGFILSIZ × (LOGPRIMARY + LOGSECOND); make sure your workload does not run out of log space. Use your application workload's log generation rate to pick a starting value. For further tuning, set LOGSECOND to -1, which means db2 will automatically create more secondary logs as needed; then monitor the number of logs in use during peak and quiet periods, set LOGPRIMARY to the average number of logs in use, and LOGPRIMARY + LOGSECOND to a little more than the peak.
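As a quick sanity check, the formula above can be computed directly. The parameter values below are illustrative only, not recommendations:

```shell
# Total active log space = 4 KB * LOGFILSIZ * (LOGPRIMARY + LOGSECOND).
# Illustrative values: 4096-page (16 MB) log files, 13 primary, 12 secondary.
LOGFILSIZ=4096      # in 4 KB pages
LOGPRIMARY=13
LOGSECOND=12
TOTAL_MB=$(( 4 * LOGFILSIZ * (LOGPRIMARY + LOGSECOND) / 1024 ))
echo "Total active log space: ${TOTAL_MB} MB"   # prints 400 MB for these values
```

Plug in your own values before and after a change to confirm the new configuration still fits your disk budget.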
How to make the change take effect on a production system without interrupting other users:
Force all connections to the database, then deactivate it. To make sure the database is really deactivated, run db2 list active databases and verify that your database is no longer listed.
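The procedure above can be sketched as a sequence of DB2 CLP commands. The database name SAMPLE and the new parameter values are assumptions for illustration; schedule this in a maintenance window, since forcing connections disconnects all users:

```shell
# Disconnect all applications, then take the database down.
db2 force applications all
db2 deactivate db SAMPLE
db2 list active databases        # verify SAMPLE no longer appears

# Apply the new logging configuration (values are illustrative).
db2 update db cfg for SAMPLE using LOGFILSIZ 4096 LOGPRIMARY 13 LOGSECOND -1

# Reactivate; primary log files are reallocated at the new size here.
db2 activate db SAMPLE
```

Activation is the point where the primary logs are preallocated, so if activation fails, recheck that the new total log space fits your disk.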