Monitor and estimate logical log usage in Informix Dynamic Server

The logical log is an important aspect of database administration. If not properly managed, it can cause database administrators a lot of headaches. Some of the most common issues are:

  • Long transactions. A transaction is considered long if it is not committed or rolled back by the time it reaches the exclusive long-transaction high water mark (the LTXEHWM configuration parameter), which stipulates what percentage of the logical log space a single transaction can span. As soon as a transaction hits this mark, Informix® Dynamic Server aborts it and begins to roll it back. During the rollback the server suspends further database operations, and front-end users experience a hang in their applications.
  • Disaster recovery. If any logical logs are lost, restore options in case of disaster are limited: only a cold restore is possible. A warm restore is impossible, because Informix Dynamic Server needs to examine all logical logs to reapply the database transactions that occurred after the last backup. The consequence is data loss.
  • Slow performance. If logical logs are placed in a poor location, for example in the root dbspace or on other hot spots on disk, the overall performance of the system and the server will suffer.
  • Server hangs. If logical logs are not backed up on a regular basis and all logical logs in the system fill up, Informix Dynamic Server will hang and suspend other database connections and operations.

This article discusses in detail how to configure and manage logical logs, and demonstrates with a real-life example how to estimate and predict logical log usage.

What is the logical log?

The logical log is actually a set of files that keeps a history of database transactions and database server changes since the last level-0 backup. Logical log files are not ordinary operating system files: they are created automatically by Informix Dynamic Server during its first initialization and are managed afterward with the onparams utility. Logical log files are circular; in other words, they can be used again and again after they fill up. But there are certain conditions for reuse:

  • The logical log file must be backed up. By default, Informix Dynamic Server automatically backs up logical log files as they fill. This is stipulated by the ALARMPROGRAM parameter in the configuration file, which should be set to log_full.sh, a system script shipped with the Informix Dynamic Server package that backs up filled logical log files.
  • All records in the logical log must be associated with closed transactions, that is, transactions that are either committed or rolled back. If a transaction is pending and a record associated with it is in the logical log, that logical log cannot be reused.
  • The logical log must not contain the last checkpoint record. The onstat -l output shows which logical log contains the last checkpoint record: if the last position of the flags field is "L", that logical log contains the last checkpoint record and cannot be reused.
  • The logical log must not contain any transactions whose changes have not yet been flushed to disk. This ensures that no transactions are lost: when a transaction completes, its records stay in the logical log until a checkpoint flushes the changes to disk. If the logical log were reused before the checkpoint, those transactions would be lost.

When configuring logical logs, pay attention to the following issues:

  • Size. Logical log files can be between 200 KB (minimum) and 2,097,152 KB (maximum). This is a wide range, and there is no single hard rule to follow. Basically, smaller logical log files give finer granularity of recovery. Whatever the size, there is a potential risk of losing the last, not-yet-backed-up logical log file if the disk that contains the logical log files goes down.
  • Number. There must always be at least three logical log files; the maximum number is 32,767. This again is a wide range, so decide according to the production environment. The number of logical log files must not exceed the value of the LOGSMAX parameter in the configuration file; this restriction is lifted in Informix Dynamic Server 9.40.
  • Location. The location of logical log files is important. When Informix Dynamic Server first initializes, it automatically creates the logical logs and places them, together with the physical log, in the root dbspace. Logical logs cause a lot of disk writes, and leaving them in the root dbspace, where important system information is stored, can create disk I/O contention. To minimize disk contention for the root dbspace, and in turn improve overall system performance, move the logical logs out of the root dbspace and spread them across other disk devices.
  • Backup. After adding new logical logs, perform a backup, either real (to a real backup device) or fake (to /dev/null). Otherwise, Informix Dynamic Server cannot use the newly added logical logs. This restriction is lifted in Informix Dynamic Server 9.4x, which can use logical logs right after they are added.
  • Maintain a minimum of three logical logs at all times; otherwise Informix Dynamic Server won't start up.
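The "fake" backup mentioned above can be sketched as follows; this assumes ontape is the backup tool in use, and the exact procedure varies by site, so treat it as illustrative rather than authoritative:

```
# In the ONCONFIG file, point the logical log tape device at /dev/null:
LTAPEDEV /dev/null

# Then request an automatic logical log backup; with LTAPEDEV set to
# /dev/null the logs are marked backed up without being written anywhere:
ontape -a
```

Remember to restore LTAPEDEV to a real device afterward, or subsequent logical log backups will be discarded.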

In addition, there are some performance considerations:

  • Logical log backups. When logical logs fill up, they must be backed up. Backing up logical logs consumes system resources such as CPU and memory, and it can slow database transactions that involve data located on the same disks as the logical logs.
  • Checkpoints. Checkpoints block user transaction processing. If logical logs are backed up and freed frequently, checkpoints also occur frequently; as a consequence, database transactions may be blocked repeatedly and take longer to finish.
  • Type of transaction logging. Databases that use unbuffered logging fill logical logs faster than databases that use buffered logging.

For details on how to effectively configure logical logs, refer to Chapter 13 of IBM Informix Dynamic Server Administrator's Guide, Version 9.4 (referred to as Administrator's Guide in this article).

Logical logs are managed with the onparams utility, which allows adding, dropping, and moving of logical logs. As mentioned above, during its first initialization Informix Dynamic Server automatically creates some logical logs; how many is stipulated by the LOGFILES parameter in the configuration file. After that, use the onparams utility to add, drop, or move logical logs. onparams has the following options:

onparams -a -d <DBspace> [-s <size>] [-i] |
         -d -l <log file number> [-y]     |
         -p -s <size> [-d <DBspace>] [-y]

         -a  - Add a logical log file
         -i  - Insert after current log
         -d  - Drop a logical log file
         -p  - Change physical log size and location
         -y  - Automatically respond "yes" to all prompts

There are two prerequisites for using the onparams utility:

  • The utility is restricted to user informix. No other user on the system, including root or users with DBA privilege, can run it.
  • Informix Dynamic Server must be in quiescent (single-user) mode. This means that while adding, dropping, or moving logical logs, no other user connections or activities are allowed.

Refer to the Administrator's Guide for details on how to use this utility.
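As an illustration only (the dbspace name, size, and log number below are hypothetical), typical onparams invocations matching the synopsis above look like this:

```
# Add a 30,000 KB logical log file in dbspace "logdbs":
onparams -a -d logdbs -s 30000

# Add a log and insert it right after the current log:
onparams -a -d logdbs -s 30000 -i

# Drop logical log file number 7, answering "yes" to all prompts:
onparams -d -l 7 -y
```

Remember that the server must be quiescent and the commands must be run as user informix.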

How might one read and interpret the contents of the logical log, and what useful information do logical log records provide? Logical log records cannot be read directly with normal operating system editors, such as vi; they can only be read with the onlog utility. This utility gives the ability to:

  • Display maximum information about each log record
  • Suppress the program header
  • Display information about logged BLOB pages (-d option only)
  • Read from a tape device
  • Display only the specified log(s)
  • Display only records for the specified user(s)
  • Display only records for the specified TBLspace(s)
  • Display only records for the specified transaction(s)

There are two onlog options for displaying logical log records: onlog -l and onlog -n. Assume, for example, that a database transaction containing a single update on a table has just completed. Run onlog -l to see how the logical log recorded this transaction:

addr     len  type     xid      id link
18       48   BEGIN    20       155 0        07/01/2003 15:34:08 737      eric
         00000030 009b0001 00000000 00009af8 ...0.... ........
         00000014 00000000 0427bcdf 00000000 ........ .'......
         00000000 3f01f040 0000006e 000002e1 ....?..@ ...n....
48       60   HUPDAT   20       0  18       3000c1   1707     0        108 108 1
         0000003c 00000049 00008010 00006d91 ...<...I ......m.
         00000014 00000018 0427bcdf 00000000 ........ .'......
         003000c1 00001707 00000000 006c006c .0...... .....l.l
         00010003 00020100 00000000          ........ ....
84       48   COMMIT   20       0  48       07/01/2003 15:34:08
         00000030 00000002 00000010 00006d91 ...0.... ......m.
         00000014 00000048 0427bcdf 00000000 .......H .'......
         3f01f040 00001707 00000000 3f01f040 ?..@.... ....?..@

The output shows that the transaction generated three logical log records, each containing two parts: a record header and an ASCII dump. The ASCII dump is the raw content of the data change (here, the update) and is what the server replays during a database restore. The header is the more interesting part: it contains several fields that provide useful information about transactions. The following table explains each field in the header:

Header field   Description                                  Format
addr           Log position                                 Hexadecimal
len            Record length in bytes                       Decimal
type           Record type name                             ASCII
xid            Transaction number                           Decimal
id             Logical log number                           Decimal
link           Link to the previous record in transaction   Decimal

In addition, the output records the transaction time: the transaction began at 07/01/2003 15:34:08 and finished at 07/01/2003 15:34:08. The first record marks the beginning of the transaction (begin work) and the third record marks its end (commit work). The second record shows that the transaction contains a single database operation, an update. The len field shows the logical log space used by each record; in this case the transaction consumed a total of 156 bytes (48 + 60 + 48) of logical log space. This number is very useful for estimating logical log usage; there is more discussion of this later in the article.
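Totaling the len column by hand gets tedious for larger transactions, so a small awk filter helps. The sketch below sums the len field of an onlog listing; the sample input is embedded as a here-document, but in practice you would pipe onlog -n <logid> output straight into the awk command:

```shell
# Sum the "len" column (field 2) of onlog record headers. The column-header
# line ("addr len type ...") and any hex-dump continuation lines are skipped
# by requiring a decimal field 2 and an uppercase record type in field 3.
total=$(awk '$2 ~ /^[0-9]+$/ && $3 ~ /^[A-Z]/ { sum += $2 } END { print sum }' <<'EOF'
addr     len  type     xid      id link
18       48   BEGIN    20       155 0        07/01/2003 15:34:08 737   eric
48       60   HUPDAT   20       0  18       3000c1   1707     0        108 108 1
84       48   COMMIT   20       0  48       07/01/2003 15:34:08
EOF
)
echo "transaction used $total bytes of logical log"
```

For the single-update transaction above this prints 156, matching the 48 + 60 + 48 hand sum.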

Now run onlog -n and see how the output differs. Since this option displays the logical log records of one specific logical log, a logical log id is needed. Knowing that all the records are in logical log number 155, issue onlog -n 155, and here is the output:

addr     len  type     xid      id link
18       48   BEGIN    20       155 0        07/01/2003 15:34:08 737   eric
48       60   HUPDAT   20       0  18       3000c1   1707     0        108 108 1
84       48   COMMIT   20       0  48       07/01/2003 15:34:08

The output is very similar to that of onlog -l; the only difference is that the ASCII dump part of each record is left out. The onlog -n option reports only the logical log record headers.

Now look at a more complicated example. Assume again that a transaction has just completed, but this time it is much more complicated than the previous one and includes a whole set of database operations: it creates a table, inserts a few rows, updates a number of rows, deletes one row, and then drops the table. The logical log used for recording this transaction is number 234. Using onlog -n 234, we get the following output:

addr     len  type     xid      id link
2ea81fc  36   CKPOINT  1        0  2ea8018  0
2ea9018  48   BEGIN    38       234 0        07/04/2003 11:28:11 1035     eric
2ea9048  40   UNIQID   38       0  2ea9018  20005c   129
2ea9070  380  BLDCL    38       0  2ea9048  2000d0 8 8 4 0 fan
2ea91ec  44   CHALLOC  38       0  2ea9070  00002:0000032326 8
2ea9218  48   PTEXTEND 38       0  2ea91ec  2000d0   7        00002:0000032326
2ea9248  140  HINSERT  38       0  2ea9218  20005c   a0e      90
2ea92d4  88   ADDITEM  38       0  2ea9248  20005c   a0e      7     1     36
2ea932c  56   ADDITEM  38       0  2ea92d4  20005c   a0e      2     2     4
2ea9364  72   HINSERT  38       0  2ea932c  20005d   1328     24
2ea93ac  60   ADDITEM  38       0  2ea9364  20005d   1328     16    1     6
2ea93e8  56   ADDITEM  38       0  2ea93ac  20005d   1328     17    -32766 4
2ea9420  128  HINSERT  38       0  2ea93e8  20005f   909      77
2ea94a0  120  ADDITEM  38       0  2ea9420  20005f   909      10    1     68
2ea9518  88   ADDITEM  38       0  2ea94a0  20005f   909      7     2     36
2ea9570  52   HINSERT  38       0  2ea9518  2000d0   101      4
2ea95a4  52   HINSERT  38       0  2ea9570  2000d0   102      4
2ea95d8  56   HUPDAT   38       0  2ea95a4  2000d0   101      0        4   4   1
2ea9610  52   HDELETE  38       0  2ea95d8  2000d0   102      4
2ea9644  140  HDELETE  38       0  2ea9610  20005c   a0e      90
2ea96d0  88   DELITEM  38       0  2ea9644  20005c   a0e      7     1     36
2ea9728  56   DELITEM  38       0  2ea96d0  20005c   a0e      2     2     4
2ea9760  72   HDELETE  38       0  2ea9728  20005d   1328     24
2ea97a8  60   DELITEM  38       0  2ea9760  20005d   1328     16    1     6
2ea97e4  56   DELITEM  38       0  2ea97a8  20005d   1328     17    2     4
2eaa038  128  HDELETE  38       0  2ea97e4  20005f   909      77
2eaa0b8  120  DELITEM  38       0  2eaa038  20005f   909      10    1     68
2eaa130  88   DELITEM  38       0  2eaa0b8  20005f   909      7     2     36
2eaa188  36   PERASE   38       0  2eaa130  2000d0
2eaa1ac  48   BEGCOM   38       0  2eaa188
2eaa1dc  44   CHFREE   38       0  2eaa1ac  00002:0000032326 8
2eaa208  36   ERASE    38       0  2eaa1dc  2000d0
2eaa22c  48   COMMIT   38       0  2eaa208  07/04/2003 11:28:11
2eab018  484  DPT      1        234 0        37
2eab1fc  36   CKPOINT  1        0  2eab018  0

This transaction is much larger than the first one and generates many more logical log records; in other words, the logical logs use much more space to record it. The total space used can be calculated by summing all the len fields in the output; in this case the records total 3,156 bytes. For a detailed explanation of each record type in the above output, refer to the Informix Dynamic Server Administrator's Reference.
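When a log also contains checkpoint and DPT records, as in the listing above, filtering on the xid column isolates a single transaction's usage. A sketch over a short excerpt of the listing (the same approach applies to the full onlog -n 234 output):

```shell
# Sum len (field 2) only for records belonging to transaction xid 38;
# CKPOINT and DPT records carry xid 1 and are excluded from the total.
txn_total=$(awk -v want=38 '$4 == want { sum += $2 } END { print sum }' <<'EOF'
2ea81fc  36   CKPOINT  1        0  2ea8018  0
2ea9018  48   BEGIN    38       234 0        07/04/2003 11:28:11 1035     eric
2ea9048  40   UNIQID   38       0  2ea9018  20005c   129
2eaa22c  48   COMMIT   38       0  2eaa208  07/04/2003 11:28:11
EOF
)
echo "transaction 38 used $txn_total bytes in this excerpt"
```

Here the excerpt yields 136 bytes (48 + 40 + 48); the CKPOINT record's 36 bytes are left out.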

Monitor logical log usage

After configuring logical logs, monitor logical log usage constantly to make sure there are enough logical logs for Informix Dynamic Server to avoid the long-transaction situation discussed above. To monitor logical log usage, use the onstat utility: the command onstat -l displays current logical log usage. Here is sample output:

Informix Dynamic Server Version 9.40.FC1     -- On-Line -- Up 1 days 19:37:46 -- 1818624 Kbytes

Physical Logging
Buffer bufused  bufsize  numpages numwrits pages/io
  P-2  0        16       178155   11983    14.87
      phybegin         physize    phypos     phyused    %used
      1:263            8750       6413       0          0.00

Logical Logging
Buffer bufused  bufsize  numrecs  numpages numwrits recs/pages pages/io
  L-2  0        16       1695756  319940   227242   5.3        1.4
	Subsystem    numrecs  Log Space used
	OLDRSAM      1695756  241316428

address          number   flags    uniqid   begin                size     used    %used
16f4a7e90        4        U-B----  244      7:53                15000    15000   100.00
16f4a7ee8        5        U---C-L  245      7:15053             15000     7558    50.39
16f4a7f40        6        U-B----  166      7:30053             15000    15000   100.00
16f4a7f98        7        U-B----  167      7:45053             15000    15000   100.00
16f181218        8        U-B----  168      7:60053             15000    15000   100.00
16f181270        9        U-B----  169      7:75053             15000    15000   100.00
16f1812c8        10       U-B----  170      7:90053             15000    15000   100.00
 7 active, 7 total

Look at the bottom part of the output. The number field is the logical log number; these numbers may not form a continuous series, since a DBA may insert and delete logical logs according to system needs. The output above, for example, shows that the first three logical logs (numbers 1 to 3) were removed for some reason. The flags field indicates the logical log status: U means the log has been used, B that it has been backed up, C that it is currently in use, and L that it contains the most recent checkpoint record. The output above shows that logical log number 5 has been used, is currently in use, and holds the last checkpoint record. The uniqid field is the unique log id; since logical logs are circular, this number changes whenever a log is reused. The size field indicates how large each logical log is, in pages (one page is about 2 KB). The used field indicates the space used for transaction records, also in pages, and the %used field indicates logical log usage as a percentage. For a detailed description of all fields in the output, refer to Chapter 3 of the Administrator's Reference.
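The flags column can also be inspected mechanically. The snippet below runs against a three-line sample taken from the output above and reports which log is current (flag C) and which holds the last checkpoint record (flag L); against a live server you would feed it real onstat -l output instead:

```shell
# Sample per-log lines from onstat -l (address, number, flags, uniqid, ...).
logs='16f4a7e90        4        U-B----  244      7:53                15000    15000   100.00
16f4a7ee8        5        U---C-L  245      7:15053             15000     7558    50.39
16f4a7f40        6        U-B----  166      7:30053             15000    15000   100.00'

# Field 3 is the flags string: C marks the current log, L the log that
# contains the last checkpoint record (such a log cannot be reused yet).
current=$(printf '%s\n' "$logs" | awk '$3 ~ /C/ { print $2 }')
ckpt=$(printf '%s\n' "$logs" | awk '$3 ~ /L/ { print $2 }')
echo "current log: $current, last checkpoint in log: $ckpt"
```

In this sample both answers are log 5, matching the U---C-L flags shown above.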

To monitor logical log usage for current transactions, here is a small shell script. It runs the onstat -l command, computes statistics from the output, and prints the percentage of logical log space used by current transactions.

#!/bin/sh
# Print the percentage of logical log space currently held by transactions,
# i.e. space in logs that have not yet been backed up (flag position 3 != "B").

$INFORMIXDIR/bin/onstat -l | awk 'BEGIN { t_flag=0 }
{
if ($1 == "address")                   # start of the per-log section
   { t_flag=1 }

if ( t_flag == 1 )
   {
   if ( $1 == "address")
      { t_logsize=0
        t_logused=0
      }
   else
      {
        if ( substr($3,3,1) != "B" )   # log not backed up yet
           { t_logused=t_logused+$7 }  # $7 = used (pages)
        t_logsize=t_logsize+$6         # $6 = size (pages)
      }
   }
}
END {
t_logperct=t_logused/t_logsize*100
printf("Logical log space used %-d%%\n",t_logperct)
}'

This script is quite useful for identifying how much logical log space current transactions consume. But is there a way to be more proactive? If the transaction to be performed is known, can its logical log space be calculated beforehand? Can the transaction's success or failure be predicted? If the answer is yes, then actions such as increasing logical log space or splitting the transaction into smaller pieces can be taken proactively to avoid a long transaction.

Estimate logical log usage: An example

Estimating logical log usage is not a simple task; it involves a great deal of detailed analysis, testing, and calculation. Here is a good real-life example.

Our customer had long-transaction problems while running a program called Move BSC. A BSC is a network element in telecommunications. When a BSC is moved from one location to another, the program executes many stored procedures, and each of those stored procedures in turn performs many database operations, such as inserts, deletes, and updates, on different database tables. Since our database uses transaction logging, all those operations are logged in the logical logs. In a few cases the customer moved too many BSCs at one time, used up all available logical logs on the system, and hit the long-transaction high water mark. The whole transaction was then rolled back, which frustrated the customer: the rollback took a long time and also froze the Informix Dynamic Server engine. So the customer asked us for a mathematical formula to estimate logical log usage before starting a Move BSC, and for the maximum number of BSCs that could be moved at one time.

Since logical log usage is closely related to transactions, we first had to identify how many transactions are involved in the Move BSC program. We assume each execution of a stored procedure is one transaction, since all database operations in a stored procedure are executed as a whole. The following table lists all stored procedures called by the Move BSC program and the number of executions of each:

Stored procedure name   Number of executions
bsc_I                   2
bsc_UA                  2
cage_I                  4
cage_UA                 4
slot_I                  64
gp_func_I               12
bsc_xcdr_assoc_I        1
te_I                    52
te_timeslot_I           637
gp_func_xcdr_I          6
nail_I (xcdr)           2
bsc_D                   2
cage_D                  4
slot_D                  64
gp_func_D               12
bsc_xcdr_assoc_D        1
te_D                    52
te_timeslot_D           637
gp_func_xcdr_D          6
nail_D                  3
Total                   1567

Now, with the help of the above table, we were able to further identify what and how many database operations are involved in each transaction. After careful study and analysis, we produced a spreadsheet listing all database operations involved in each stored procedure (that is, each transaction). The full list is very long; the following extract shows all the database tables and corresponding operations involved in the bsc_D stored procedure:

Stored procedure name   Table name           Operation   Number of operations
bsc_D                   Neline               Update      200
                        Pvc                  Delete      600
                        Tdm                  Delete      100
                        BSC                  Delete      100
                        Ne                   Delete      100
                        Workspace_oplog      Insert      400
                        Workspace_paramlog   Insert      300

Then we had to estimate logical log usage for each stored procedure. This involved a lot of testing and calculation: tests to collect logical log usage statistics for each database operation on each table involved in Move BSC, then calculations using those statistics to estimate usage per stored procedure. Of all database operations, insert uses the most logical log space, so we used insert as the basis of our estimate. First we identified the tables involved in a stored procedure and inserted a few rows into each of them; then we used the onlog utility to see how many logical log records each insert generated and summed the lengths of those records. For example, to estimate logical log usage for the NE table, we inserted a row into the NE table in our test database and ran onlog -n 155, where 155 is the log id, getting the following output:

addr     len  type     xid      id link
18       48   BEGIN    20       155 0        07/01/2003 15:34:08 737   omcadmin
2ea9248  140  HINSERT  38       0  2ea9218  20005c   a0e      90
2ea92d4  88   ADDITEM  38       0  2ea9248  20005c   a0e      7     1     36
2ea932c  56   ADDITEM  38       0  2ea92d4  20005c   a0e      2     2     4
84       48   COMMIT   20       0  48       07/01/2003 15:34:08

The above output shows that inserting one record into the NE table generates three logical log records, and the total logical log space used for this database operation is 284 bytes, the sum of the record lengths (140 + 88 + 56). In addition, there is a transaction overhead of 96 bytes, the sum of the BEGIN and COMMIT record lengths. The transaction overhead is constant and must be added once per database transaction, in our case once per stored procedure execution. As a result, we came up with the following table, which lists the logical log usage for inserting one record into each table involved in the Move BSC operation:

Table name           Logical log usage (bytes)
bsc                  448
bsccage              300
bsccard              324
bsclink              378
bscnail              424
ccb                  308
cpxcdr               304
ds                   248
gpfunc               248
msispan              300
ne                   652
neline               284
pvc                  248
scib                 356
srate                300
tdm                  276
workspace_oplog      392
workspace_paramlog   236
xblconn              290

With the above tables, it is much easier to figure out how much logical log space each stored procedure will use. For example, to calculate logical log usage for the bsc_D stored procedure, derive the following formula from the tables above:

Total logical log usage = 200 * 284 + 600 * 248 + 100 * 276 +
                          100 * 448 + 100 * 652 + 400 * 392 +
                          300 * 236 = 618000 (bytes)

In addition, we need to add the overheads: the transaction overhead of 96 bytes, a checkpoint record of 36 bytes, and DPT overhead of at most 300 bytes. These overheads are added once per stored procedure execution. So the real total logical log usage for this stored procedure is 618,000 + 96 + 36 + 300 = 618,432 bytes, roughly 620 KB. For a detailed description of checkpoints and DPT, refer to Chapter 4 of the Administrator's Guide.

The following table summarizes logical log usage for each stored procedure involved in Move BSC:

Logical log Usage for Move BSC

Operation name     Number of operations   Log usage per operation (bytes)   Total log usage (bytes)
bsc_I              2                      415880                            831760
bsc_UA             2                      165960                            331920
cage_I             4                      18720                             74880
cage_UA            4                      18720                             74880
slot_I             64                     29680                             1899520
gp_func_I          12                     15840                             190080
bsc_xcdr_assoc_I   1                      3718340                           3718340
te_I               52                     41400                             2152800
te_timeslot_I      637                    30080                             19160960
gp_func_xcdr_I     6                      39500                             237000
nail_I (xcdr)      2                      22320                             44640
bsc_D              2                      618400                            1236800
cage_D             4                      21720                             86880
slot_D             64                     27320                             1748480
gp_func_D          12                     15840                             190080
bsc_xcdr_assoc_D   1                      3713620                           3713620
te_D               52                     41400                             2152800
te_timeslot_D      637                    24640                             15695680
gp_func_xcdr_D     6                      42200                             253200
nail_D             3                      22320                             66960
Total              1567                                                     53861280

So the total logical log usage for moving one BSC is about 54 MB. This is a very conservative estimate: since Move BSC contains a great many transactions, there is a good chance that during the move some logical log space is released and made ready for reuse as transactions commit. We configured 80 logical logs of 30 MB each, and the long-transaction high water mark is set to 50, so the available logical log space on our system is 80 * 30 * 50% = 1200 MB. Finally, we put all these calculations together and came up with the following formula for our customer:

  1200 MB - (number of BSCs to move * 54 MB) >= 30 MB

30 MB is the size of one logical log. According to the Informix Dynamic Server documentation, an empty logical log should always be kept in reserve, so if the remaining logical log space is larger than 30 MB, we are safe and can confidently say that Move BSC will succeed. Thus the maximum number of BSCs to move at one time is:

  Maximum BSCs to move <= (1200 MB - 30 MB) / 54 MB

The maximum number of BSCs to move is therefore 21. In other words, if the customer tries to move more than 21 BSCs at one time, Informix Dynamic Server may run out of logical logs and hit a long transaction, which in turn will abort the Move BSC operation and freeze the Informix Dynamic Server engine.
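The capacity arithmetic above is easy to script, so the limit can be recomputed whenever the configuration changes. The values below are the article's own (80 logs of 30 MB, a 50% long-transaction high water mark, 54 MB per BSC); shell integer division conveniently floors the result:

```shell
# Recompute the maximum number of BSCs that can be moved in one transaction.
logs=80          # number of logical logs
log_mb=30        # size of each logical log, MB
hwm_pct=50       # long-transaction high water mark, percent
per_bsc_mb=54    # estimated logical log usage per BSC moved, MB

usable_mb=$(( logs * log_mb * hwm_pct / 100 ))      # space usable before the HWM
max_bsc=$(( (usable_mb - log_mb) / per_bsc_mb ))    # reserve one empty logical log
echo "usable: ${usable_mb} MB, max BSCs per move: ${max_bsc}"
```

With the article's configuration this yields 1200 MB of usable space and a limit of 21 BSCs, matching the hand calculation.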

Conclusion

How accurate is our formula, and how much confidence do we have in our estimate? Let's put it to the test. In our lab, we ran three tests on different machines, simulating the customer's Move BSC situation. Before each test, we used onstat -l to record the log space used (the Log Space used field of the output); after each test, we recorded it again and calculated the difference between the two numbers, which is the actual logical log space used. The following table shows our test results, in bytes:

Move BSC Test Results

Test   Description    Before        After         Actual       Predicted    Accuracy (%)
1      Move 10 BSCs   336707980     851707980     515000000    540000000    95.37
2      Move 15 BSCs   2198356812    2978356812    780000000    810000000    96.30
3      Move 20 BSCs   7035159124    8105159124    1070000000   1080000000   99.07

The test results above show that on average our estimate is over 95% accurate. Notice also that the numbers in the Predicted column are always larger than those in the Actual column; this means our estimate of logical log usage is sufficient and solid.

