DB2 problem determination using AIX commands and utilities

There are many scenarios where troubleshooting a DB2 issue involves, and benefits from, gathering operating system level data and analyzing it to understand the issue further.

This article discusses a number of problems you may face with your database, including high CPU usage, orphan processes, database corruption, memory leaks, hangs and unresponsive applications. Learn to use AIX utilities and commands to help you understand and resolve each of these troublesome issues.

The data you collect from running these commands can be sent to the IBM Technical Support Team when opening a problem management request (PMR) in order to expedite the PMR support process. The end of each section of this article discusses the documents you should gather to send to the Technical Support Team. While this article gives troubleshooting tips to use as a guideline, you should contact the IBM Technical Support Team for official advice about these problems.

Monitor CPU usage

In working with your database, you might notice a certain DB2 process consuming a high amount of CPU time. This section describes some AIX utilities and commands that you can use either to analyze the issue yourself or to gather data before submitting a PMR to IBM Technical Support:

ps

The ps command reveals the current status of active processes. You can use ps auxw | sort -r +3 | head -10 to sort the process list and get the top 10 CPU-consuming processes. Listing 1 shows the ps output:

Listing 1. Sample ps output
root@bagpuss $ ps auxw|sort -r +3|head -10
USER      PID   %CPU  %MEM   SZ  RSS    TTY STAT  STIME  TIME   COMMAND
sharon   1658958  0.1  9.0 218016 214804 - A      Sep 13 38:16 db2agent (idle) 0
dpf      1036486  0.0  1.0 14376 14068   - A      Sep 17  3:10 db2hmon 0
sharon   1822932  0.0  1.0 12196 11608   - A      Sep 12  6:41 db2hmon 0
dpf      1011760  0.0  0.0 9264  9060    - A      Sep 17  3:03 db2hmon 3
dpf      1532116  0.0  0.0 9264  9020    - A      Sep 17  3:04 db2hmon 2
dpf       786672  0.0  0.0 9264  8984    - A      Sep 17  3:02 db2hmon 5
dpf      1077470  0.0  0.0 9264  8968    - A      Sep 17  3:03 db2hmon 1
dpf      1269798  0.0  0.0 9248  9044    - A      Sep 17  2:50 db2hmon 4
db2inst1  454756  0.0  0.0 9012  7120    - A      Jul 19  0:52 db2sysc 0

topas

When executing a ps -ef command, you see the CPU usage of a certain process. You can also use the topas command to get further details. Similar to the ps command, a topas command retrieves selected statistics about the activity on the local system. Listing 2 is a sample topas output that shows a DB2 process consuming 33.3% CPU. You can use the topas output to get specific information such as the process id, the CPU usage and the instance owner who started the process. It is normal to see several db2sysc processes for a single instance owner. DB2 processes are renamed depending on the utility being used to list process information:

Listing 2. Sample topas output
Name          PID    CPU%  PgSp Owner
db2sysc      105428  33.3  11.7 udbtest
db2sysc       38994  14.0  11.9 udbtest
test          14480   1.4   0.0 root
db2sysc       36348   0.8   1.6 udbtest
db2sysc      116978   0.5   1.6 udbtest
db2sysc      120548   0.5   1.5 udbtest
sharon        30318   0.3   0.5 root
lrud           9030   0.3   0.0 root
db2sysc      130252   0.3   1.6 udbtest
db2sysc      130936   0.3   1.6 udbtest
topas        120598   0.3   3.0 udbtest
db2sysc       62248   0.2   1.6 udbtest
db2sysc       83970   0.2   1.6 udbtest
db2sysc      113870   0.2   1.7 root
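
Unlike ps, topas is an interactive, full-screen monitor rather than a one-shot report. On most AIX levels you can start it directly in a process-focused view like the one above (verify the flag on your AIX level; press q to quit):

topas -P    # full-screen process display, sorted by resource usage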

vmstat

The vmstat command can be used to monitor CPU utilization; it reports both user CPU time and system CPU time. Listing 3 shows the output from a vmstat command:

Listing 3. Sample vmstat output
kthr 	memory 	page 		faults		cpu 
----- ----------- ------------------------ ------------ ----------- 
r  b avm     fre   re pi po  fr sr  cy   in     sy   cs   us  sy id   wa 
32 3 1673185 44373 0  0  0    0 0    0  4009  60051 9744  62  38    0   0 
24 0 1673442 44296 0  0  0    0 0    0  4237  63775 9214  67  33    0   0 
30 3 1678417 39478 0  0  0    0 0    0  3955  70833 8457  69  31    0   0 
33 1 1677126 40816 0  0  0    0 0    0  4101  68745 8336  68  31    0   0 
28 0 1678606 39183 0  0  0    0 0    0  4525  75183 8708  63  37    0   0 
35 1 1676959 40793 0  0  0    0 0    0   4085 70195 9271  72  28    0   0 
23 0 1671318 46504 0  0  0    0 0    0  4780  68416 9360  64  36    0   0 
30 0 1677740 40178 0  0  0    0 0    0  4326  58747 9201  66  34    0   0 
30 1 1683402 34425 0  0  0    0 0    0  4419  76528 10042 60  40    0   0 
0 0 1684160 33808  0  0  0    0 0    0  4186  72187 9661  73  27    0   0

When reading a vmstat output, as above, you can ignore the first line. The important columns to look at are us, sy, id and wa:

  • us: Time spent running non-kernel code (user time).
  • sy: Time spent running kernel code (system time).
  • id: Time spent idle.
  • wa: Time spent waiting for I/O.

In Listing 3, the system is hitting an average of roughly 65% user CPU usage and 35% system CPU usage. The pi and po values are 0, so there are no paging issues. The wa column shows there do not appear to be any I/O issues.
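
To capture samples like those in Listing 3 yourself, vmstat is normally run with an interval and a count; the values below are only an example:

vmstat 5 12 > vmstat.out    # one report every 5 seconds, 12 reports in total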

In Listing 4, the wa (waiting on I/O) values are unusually high, which indicates there might be I/O bottlenecks on the system, which in turn makes the CPU usage inefficient. You can check the errpt -a output to see if there are any reported issues with the media or I/O on the system.

Listing 4. Sample vmstat output showing I/O issues
kthr memory page faults cpu 
----- ----------- ------------------------ ------------ ----------- 
r b  avm    fre re pi po  fr      sr     cy   in    sy   cs   us  sy id wa 
2 8  495803 3344 0 0   0   929    1689    0   998  6066  1832 4   3  76 16 
0 30 495807 3340 0 0   0   0        0     0   1093 4697  1326 0   2  0  98 
0 30 495807 3340 0 0   0   0        0     0   1055 2291  1289 0   1  0  99 
0 30 495807 3676 0 2   0   376    656     0   1128 6803  2210 1   2  0  97 
0 29 495807 3292 0 1   3   2266   3219    0   1921 8089  2528 14  4  0  82 
1 29 495810 3226 0 1   0   5427   7572    0   3175 16788 4257 37  11 0  52 
4 24 495810 3247 0 3   0   6830   10018   0   2483 10691 2498 40  7  0  53 
4 25 495810 3247 0 0   0   3969   6752    0   1900 14037 1960 33  5  1  61 
2 26 495810 3262 0 2   0   5558   9587    0   2162 10629 2695 50  8  0  42 
3 22 495810 3245 0 1   0   4084   7547    0   1894 10866 1970 53  17 0  30

iostat

An iostat command quickly tells you if your system has a disk I/O-bound performance problem. Listing 5 is an example of an iostat command output:

Listing 5. Sample iostat output
System configuration: lcpu=4 disk=331 
tty: 	tin 	tout    avg-cpu:   % user    % sys    % idle    % iowait 
0.0 	724.0                17.9     12.3      0.0       69.7

Disks: 		% tm_act   Kbps 		tps        Kb_read  Kb_wrtn 
hdisk119 		100.0      5159.2 		394.4       1560    24236 
hdisk115 		100.0      5129.6 		393.0       1656    23992 
hdiskpower26   	100.0      10288.8		790.8       3216    48228

  • %tm_act: The percentage of time that the physical disk was active (busy servicing disk requests).
  • Kbps: The amount of data transferred to or from the drive, in kilobytes per second.
  • tps: The number of transfers per second issued to the physical disk.
  • Kb_read: The total data (kilobytes) read from the physical volume during the measured interval.
  • Kb_wrtn: The total data (kilobytes) written to the physical volume during the measured interval.

To check if you are experiencing resource contention, you can focus on the %tm_act value in the above output. An increase in this value, especially a sustained value above 40%, implies that processes are waiting for I/O to complete, and you have an I/O issue on your hands. Checking which hard disk has the higher disk activity percentage, and whether DB2 uses those hard disks, gives you a better idea of whether the two are related.
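
As with vmstat, iostat is usually run with an interval and a count, and the first report (which covers statistics since boot) is discarded; the values below are only an example:

iostat 5 6 > iostat.out    # one report every 5 seconds, 6 reports in total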

What to collect

You should collect the following information before opening a PMR with IBM Technical Support:

  1. db2support.zip
  2. truss -f -o truss.out -p <pid> output for the high-CPU process
  3. db2pd -stack <pid> output for the high-CPU process


Technical support might also send you the db2service.perf1 script, which collects data repeatedly over a period of time. The output of the script needs to be bundled and sent back to the support team for further analysis.
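
If you want to capture something similar yourself while waiting for the script, a minimal sketch along the following lines can be used. The file name, interval and iteration count are assumptions for illustration, not the behaviour of db2service.perf1:

i=1
while [ $i -le 10 ]                                   # 10 iterations, one per minute
do
    date                               >> cpu_data.out
    ps auxw | sort -r +3 | head -10    >> cpu_data.out   # top 10 CPU consumers
    vmstat 5 3                         >> cpu_data.out
    i=`expr $i + 1`
    sleep 60
done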

Troubleshoot orphan processes

There are scenarios when, even after issuing db2stop, you notice (by running ps -ef | grep db2) certain DB2 processes, such as the db2fmp process, still running and consuming resources. If there was an abnormal shutdown, it is advisable to run ipclean after the instance has been stopped. A db2stop should inherently shut down all DB2-related processes; however, if an application using those processes was abnormally terminated, the related DB2 processes can become orphan processes.

Orphan DB2 processes are those that are not attached or linked to any other DB2 process. Abnormal termination of an application includes shutting it down with Ctrl+C, closing the ksh session, or killing it with the -9 option.

One way of confirming that a process is orphaned is to try to match the process ID (PID) from the ps -ef output against the Coordinator pid/thread column of the db2 list applications show detail output. If the PID cannot be found in the db2 list applications output, then it is an orphan process. For example, if you issue a db2 list applications show detail command, you get this output:

Listing 6. Sample list applications output
CONNECT Auth Id Application Name Appl. Application Id Seq# Number of Coordinating DB 
Coordinator Status Status Change Time DB Name DB Path 
Handle Agents partition number pid/thread 

JDE test.exe 2079 AC1C5C38.G80D.011F44162421 0001 1 0 2068646
UOW Waiting 04/04/2006 09:25:17.036230 PTPROD 
/db2pd/otprod/ptprod/otprod/NODE0000/SQL00001/ 

--NOTICE PID 2068646. This is the PID on the local server. 

Part of the ps -ef output from the server: 

ps -ef |grep 2068646 
otprod 2068646 483566 0 09:06:28 - 0:59 db2agent (PTPROD) 0

This output shows that the process with PID 2068646 appears in the list applications output as a coordinator agent, so it is not an orphaned process and is still attached to a DB2 application.
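
A quick way to repeat this check for any suspect PID taken from ps -ef is shown below; <pid> is a placeholder:

db2 list applications show detail | grep <pid>    # no output here...
ps -ef | grep <pid> | grep -v grep                # ...while this still shows the process, suggests an orphan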

In order to avoid orphan processes, you may want to do the following: make normal, clean exits on the client side so that DB2 is aware of the disconnect and can clean up resources on the server; lower the TCP keepidle time (the tcp_keepidle network option) to a value less than the default; and tune the DB2CHECKCLIENTINTERVAL and KEEPALIVE values.

What to collect

If you do notice orphan processes and wish to investigate this issue, you should collect the following information before opening a PMR with IBM Technical Support:

  1. ps -ef | grep db2 output
  2. db2support.zip collected with the -c option
  3. A call stack of the process, collected using dbx, db2pd -stack or kill -36 <pid>. The dbx command is a popular command-line debugger available on both Solaris and AIX systems, and its output is helpful. It can be run as follows:
    Listing 7. The dbx command
    dbx -a <PID of orphan process>
    At the dbx prompt type:
    th            --- Displays all threads for the process
    th info       --- Displays additional info about the threads
    where         --- Displays the stack for the current thread
    th current 1  --- Makes thread 1 current
    where         --- Displays the stack for thread 1
    th current 2  --- Makes thread 2 current
    where         --- Displays the stack for thread 2
    ... continue for all threads of the process
    detach        --- Detaches from the process without killing it

Detect database corruption

You can start to investigate whether the database is corrupted if a user complains of not being able to access certain database objects or is unable to connect to a specific database partition. The following section highlights some of the errors that are logged by DB2 and how you can ensure that there are no operating system (OS) level issues affecting or causing DB2 database corruption. You might notice errors similar to the one in Listing 8 being logged in the db2diag.log:

Listing 8. Corruption errors
RETCODE : ZRC=0x87040001=-2029780991=SQLD_BADPAGE "Bad Data Page" 
DIA8500C A data file error has occurred, record id is "". 

Or

RETCODE: ZRC=0x86020019=-2046689255=SQLB_CSUM "Bad Page, Checksum Error"
DIA8426C A invalid page checksum was found for page "".

Or

2007-07-09-11.29.45.696176+120 I16992C16377        LEVEL: Severe
PID     : 68098                TID  : 1            PROC : db2agent (sample)
INSTANCE: instest              NODE : 000          DB   : sample
APPHDL  : 0-635                APPID: *LOCAL.instest.070709082609
FUNCTION: DB2 UDB, buffer pool services, sqlbcres, probe:20
MESSAGE : Important: CBIT Error
DATA #1 : Hexdump, 4096 bytes

These errors are logged when DB2 tries to access data in a container and encounters some form of corruption. In such an instance, when DB2 cannot access the data, the database might be marked as bad. You can narrow down where the corruption might be: in the db2diag.log, look for messages similar to the following:

Listing 9. Corruption errors showing database object details
2006-04-15-03.15.37.271601-360 I235258C487 LEVEL: Error 
PID : 152482 TID : 1 PROC : db2reorg (SAMPLE) 0 
INSTANCE: instest NODE : 000 DB : SAMPLE 
APPHDL : 0-68 APPID: *LOCAL.SAMPLE.060415091532 
FUNCTION: DB2 UDB, buffer pool services, sqlbrdpg, probe:1146 
DATA #1 : String, 124 bytes 
Obj={pool:5;obj:517;type:0} State=x27 Parent={5;517}, EM=55456, 
PP0=55488 Page=55520 Cont=0 Offset=55552 BlkSize=12 
BadPage

The above errors indicate that corruption has occurred in the object with table space ID 5 and table ID 517. To check which table this refers to, execute the following SQL query:

Listing 10. Query to find a table with corruption
db2 "select tabname, tbspace from syscat.tables where tbspaceid = 5 and tableid = 517"

At the operating system (OS) level, the most common causes of corruption are hardware issues or file system corruption. For example, you might see the database being marked damaged in the db2diag.log with an ECORRUPT (89) error, as follows:

Listing 11. Sample file system-related corruption errors
2007-05-22-13.45.52.268785-240 E20501C453 LEVEL: Error (OS) 
PID : 1646696 TID : 1 PROC : db2agent (SAMPLE) 0 
INSTANCE: tprod NODE : 000 DB : SAMPLE 
APPHDL : 0-32 APPID: GA260B45.M505.012BC2174219 
FUNCTION: DB2 UDB, oper system services, sqloopenp, probe:80 
CALLED : OS, -, unspecified_system_function 
OSERR : ECORRUPT (89) "Invalid file system control data detected."

You can check the following:

  1. Review the errpt -a output and look for hardware, I/O or disk-related messages. Listing 12 is an example of an errpt -a output that shows file system corruption:
    Listing 12. Sample errpt output
    LABEL: J2_FSCK_REQUIRED 
    IDENTIFIER: B6DB68E0 
    Date/Time: Thu Jun 7 20:59:49 DFT 2007 
    Sequence Number: 139206 
    Machine Id: 000BA256D600 
    Node Id: cmab 
    Class: O 
    Type: INFO 
    Resource Name: SYSJ2 
    Description 
    FILE SYSTEM RECOVERY REQUIRED 
    Probable Causes 
    INVALID FILE SYSTEM CONTROL DATA DETECTED 
    Recommended Actions 
    PERFORM FULL FILE SYSTEM RECOVERY USING FSCK UTILITY 
    OBTAIN DUMP 
    CHECK ERROR LOG FOR ADDITIONAL RELATED ENTRIES 
    Detail Data 
    ERROR CODE 
    0000 0005 
    JFS2 MAJOR/MINOR DEVICE NUMBER 
    0032 0004 
    CALLER 
    0028 8EC8 
    CALLER 
    0025 D5E4 
    CALLER 
    002B 4AC8
  2. Run the fsck command on the file system where the container resides to be sure that it is sound. fsck interactively checks and repairs file system inconsistencies. The pSeries and AIX Information Center gives the following examples of using the fsck command:
    Listing 13. The fsck command
    To check all the default file systems, enter:
    fsck
    This form of the fsck command asks you for permission before making any
    changes to a file system.
    To check the file system /dev/hd1, enter:
    fsck /dev/hd1
    This checks the unmounted file system located on the /dev/hd1 device.

What to collect

You should collect the following information before opening a PMR with IBM Technical Support:

  1. errpt -a
  2. db2support.zip
  3. fsck results

Debug memory leaks

It is important to distinguish, if possible, between a memory leak and a system-wide performance degradation due to increased demand for memory. So, initially, it is pertinent to check that nothing has changed in the environment that could explain the increased memory usage. The rest of this section discusses how to use AIX operating system techniques to spot, track and debug those leaks. The article does not discuss DB2 tools and techniques in detail, although they are mentioned where necessary.

What is a memory leak?

Wikipedia describes a memory leak as

a particular kind of unintentional memory consumption by a computer program where the program fails to release memory when no longer needed. This condition is normally the result of a bug in a program that prevents it from freeing up memory that it no longer needs. The term is meant as a humorous misnomer, since memory is not physically lost from the computer. Rather, memory is allocated to a program, and that program subsequently loses the ability to access it due to program logic flaws.

Specifically, a memory leak is a bug in the code whereby malloc() memory allocations are not matched by corresponding free() calls, leaving blocks of memory allocated but no longer usable. Typically this is a slow process that occurs over days or weeks, particularly if the process is left active, as is often the case. Some leaks are not even detectable, particularly if the application terminates and its processes are destroyed.

Listing 14 is an example of a C code snippet that demonstrates a memory leak. In this instance, memory was allocated and pointed to by the variable s, but the pointer was not saved. After the function returns, the pointer is destroyed and the allocated memory becomes unreachable, but it remains allocated.

Listing 14. Sample c code
#include <stdio.h>
#include <stdlib.h>
void f(void)
{
     void* s;
     s = malloc(50); /* get memory */
     return;         /* memory leak - see note below */ 
     /* 
      * Memory was available and pointed to by s, but not saved.
      * After this function returns, the pointer is destroyed, 
      * and the allocated memory becomes unreachable.
      *
      * To "fix" this code, either the f() function itself
      * needs to add "free(s)" somewhere or the s needs
      * to be returned from the f() and the caller of f() needs
      * to do the free().
      */
}
int main(void)
{
     /* this is an infinite loop calling the above function */
     while (1) f(); /* Malloc will return NULL sooner or later, due to lack of memory */
     return 0;
}

How to spot, track and debug memory leaks

To begin with, you should call IBM if you suspect a DB2 process is leaking memory. But how do you know that you are experiencing this situation? This section discusses some of the options.

The first option is the ps utility, which can be used to quickly and simply determine whether a process is growing. Listing 15 demonstrates a particular process growing in size across several iterations:

Listing 15. Sample 'ps aux' output showing the process growing in size
ps aux:

1st iteration:      
USER         PID %CPU %MEM   SZ  RSS    TTY STAT    STIME  TIME
COMMAND                                                 
db2inst1  225284  0.2  0.0 19468 18280     - A    11:26:06 10:34    
db2logmgr                                                       
                                                                     
2nd iteration:                                                       
db2inst1  225284  0.1  0.0 19696 18512      - A    11:26:06 10:34    
db2logmgr                                                       
                                                                     
3rd iteration:                                                       
db2inst1  225284  0.1  0.0 19908 18724      - A    11:26:06 10:36    
db2logmgr                                                       
                                                                     
4th iteration:                                                       
db2inst1  225284  0.1  0.0 20116 18932      - A    11:26:06 10:36    
db2logmgr                                                      
                                                                     
5th iteration:                                                       
db2inst1  225284  0.1  0.0 20312 19128      - A    11:26:06 10:37    
db2logmgr                                                    
                                                                     
ps -kelf:                                                            
                                                                     
1st iteration:        
F S      UID     PID    PPID   C PRI NI ADDR    SZ    WCHAN    
STIME    TTY  TIME CMD                                               
40001 A db2inst1  225284  254158   0  60 20 580e59400 18466         
11:26:06      - 10:34 db2logmgr (***) 0                           
                                                                     
2nd iteration:                                                       
40001 A db2inst1  225284  254158   1  60 20 580e59400 18696          
11:26:06      - 10:34 db2logmgr (***) 0                           
                                                                     
3rd iteration:                                                       
40001 A db2inst1  225284  254158   0  60 20 580e59400 18900          
11:26:06      - 10:36 db2logmgr (***) 0                           
                                                                     
4th iteration:                                                       
40001 A db2inst1  225284  254158   0  60 20 580e59400 20106          
11:26:06      - 10:36 db2logmgr (***) 0                           
                                                                     
5th iteration:                                                       
40001 A db2inst1  225284  254158   0  60 20 580e59400 20312          
11:26:06      - 10:37 db2logmgr (***) 0

The SZ and RSS values in the ps aux output are the two key columns to focus on when trying to spot a potential memory leak. As you can see, those values increase with each iteration. This alone is not sufficient to determine the root cause, however, and more debugging is certainly required. Again, please raise this issue with IBM Technical Support, but what follows are some likely problem determination steps IBM will take.
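
A simple way to capture the iterations shown above is to sample ps for the suspect PID at a fixed interval; the PID, file name, interval and count below are placeholders:

i=1
while [ $i -le 6 ]                       # 6 samples, 10 minutes apart
do
    date                                 >> ps_growth.out
    ps aux | head -1                     >> ps_growth.out   # column headings
    ps aux | grep <pid> | grep -v grep   >> ps_growth.out
    i=`expr $i + 1`
    sleep 600
done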

Debug using procmap and gencore

As root:

  1. procmap <db2logmgr pid> > procmap.1
  2. ps aux > ps_aux.1
  3. ps -kelf > ps_kelf.1
  4. gencore <db2logmgr pid> <file>

Sleep for a period of time, then:

  1. procmap <db2logmgr pid> > procmap.2
  2. ps aux > ps_aux.2
  3. ps -kelf > ps_kelf.2
  4. gencore <db2logmgr pid> <file>

Then repeat these steps for another two or three iterations. Please note that on 64-bit AIX, gencore creates very large files. Regardless of the word size, fullcore needs to be enabled. The following command can be used to check that the environment is set up correctly:

Listing 16. The lsattr command
lsattr -El sys0| grep -i core
fullcore     true                  Enable full CORE dump                             True
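
If fullcore is set to false, it can be enabled as root with chdev; this is a standard AIX command, but verify it against your AIX documentation before changing system attributes:

chdev -l sys0 -a fullcore=true    # enable full core dumps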

The limits (such as the core file size ulimit) for the instance owner need to be set appropriately too. You may well be asked to enable malloc debugging (MALLOCDEBUG) and export it to the DB2 environment. What follows is an example of this:

To start DB2 memory debugging the next time the instance is started, run:
db2set DB2MEMDBG=FFDC

To start malloc debugging the next time the instance is started, run:
export MALLOCDEBUG=log:extended,stack_depth:12

And append MALLOCDEBUG to the DB2 registry variable DB2ENVLIST:
db2set DB2ENVLIST="MALLOCDEBUG"

Then stop and restart DB2.

Once the core files have been created, you can use snapcore to bundle a core file and the libraries it references into a pax file. An example of snapcore is as follows:

Listing 17. Sample snapcore
snapcore /home/db2inst1/sqllib/db2dump/c123456/core
/home/db2inst1/sqllib/adm/db2sysc

This creates a file with a *.pax extension in /tmp/snapcore by default. The core file is useless without the executable that produced it; in this case that executable is db2sysc, not db2logmgr, because db2logmgr (the name seen growing in the ps output) is a renamed process, not an executable on disk. DB2 support is then able to interrogate the core to track DB2's malloc() allocations against free() calls.

Recover from hangs

What is a hang?

A hang occurs when a process has not moved forward or changed after a period of time. This can happen if a thread or process reaches a point in its execution where it can go no further and is waiting for a response. It also occurs when the process is in a very tight loop and never completes the function.

The first step is to identify whether what you are experiencing is a hang or a severe degradation. Then you need to understand the scope: what is affected. Some simple questions can help a lot:

  • Why do you think it has hung?
  • Are all DB2 commands hanging?
  • How long has the command been running for?
  • How long does it normally run for?

Then, to assess the scope:

  • Are OS commands hanging too? If the answer to this is yes, then you need to get assistance from the AIX support team.
  • Are db2 connect statements affected?
  • Can SQL be issued over existing connections?
  • If in a DPF environment, can you issue commands against other partitions?
  • Can you issue commands against other databases?

Recovery

Remember, please collect the stacks before you recover. Once you have the stacks, the only choice you have is to issue db2_kill. Then check for any processes and IPC resources (shared memory segments, message queues and semaphores) left lying around after the kill. You may have to remove any you find manually. You can also try ipclean to remove these resources. If the IPC resources are not cleared out by ipclean or ipcrm, or the processes cannot be removed even by kill -9, then the process is most likely hung in the kernel and you need to call AIX support.
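
The leftover IPC resources can be listed with ipcs and removed with ipcrm; the identifiers below are placeholders for the values that ipcs actually reports for the instance owner:

ipcs -a | grep <instance_owner>    # list shared memory, semaphores and message queues
ipcrm -m <shmid>                   # remove a shared memory segment
ipcrm -s <semid>                   # remove a semaphore
ipcrm -q <msqid>                   # remove a message queue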

Once the instance has come down, restart it with db2start and then issue a restart database command (db2 restart db <dbname>).

What to collect

The single most important piece of information to collect is a stack trace of the process that is believed to be hung. IBM DB2 support cannot debug a hang without this, and the stack trace must be collected prior to recovering DB2. If this is not done, you may have another outage in the future.

There will be pressure to restart DB2, but you must resist. The system must be in a hung state in order to diagnose the root cause of the problem and do the necessary debugging. A restart clears the situation and you have lost the window of opportunity to make the necessary changes. More seriously, you cannot provide any confidence that it won't recur. Thus, you need to resist the pressure to restart DB2 until you have collected all the diagnostics.

The following lists describe good problem determination (PD) and data capture versus bad PD and data capture. Note that the best PD and data capture requires the fewest steps and has a better chance of success in determining root cause.

Poor PD and data capture:

  • Occurrence (outage #1)
  • Detection
  • Recovery
  • FFDC on (requires restart)
  • Restart (outage #2): schedule the outage and hope the problem does not recur beforehand
  • Occurrence (outage #3)
  • Detection
  • Data Collection
  • Recovery
  • Diagnosis (clock ticking)

Better PD and data capture:

  • Occurrence (outage #1)
  • Detection
  • Recovery
  • FFDC on
  • Occurrence (outage #2)
  • Detection
  • Data Collection
  • Recovery
  • Diagnosis (clock ticking)

Good PD and data capture:

  • Occurrence (outage #1)
  • Detection
  • Data Collection
  • Recovery
  • Diagnosis (clock ticking)

Stack traces

A stack trace is a snapshot of the function calls at a particular point in time, so multiple stack traces, a few minutes apart, provide a sense of motion. There are a variety of ways to collect stack traces; the following are, in my opinion, the most reliable:

procstack <pid of hung process> >> pid.pstack.out

This is an AIX utility that simply dumps the stack to a file. In this instance, I am appending to the file because the command is run again later and I do not want to overwrite the earlier output.

kill -36 <pid>

This command does not kill the process; it sends a signal that causes it to dump its stack. This actually creates a fully formatted trap file in the DIAGPATH area of DB2. Because it gives more information than procstack, and because of the way it works internally, it is generally more expensive, particularly if there are hundreds of processes, which is often the case.

The main focus of this article is AIX operating system tools for debugging DB2, but no discussion of hang problem determination is complete without mentioning db2pd. The following invocations can be used to generate stack traces:

db2pd -stacks (generates stack dumps for all PIDs)
db2pd -stack <pid> (generates a stack dump for the specified PID)

The trap files are created in the DIAGPATH area. Listing 18 shows an example of their usage:

Listing 18. db2pd -stacks usage
1. -stacks
$ db2pd -stacks
Attempting to dump all stack traces for instance.
See current DIAGPATH for trapfiles.
2. -stack <pid>
$ db2pd -stack 1454326
Attempting to dump stack trace for pid 1454326.
See current DIAGPATH for trapfile.

DB2 support will ask you to tar and compress the DIAGPATH area. Most commonly they will ask you to run the db2support command, which does this for you, provided the correct flags are used. However, if you use the OS method of procstack, you have to submit the output files as well.
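
A typical db2support invocation looks like the following; the output directory and database name are placeholders, and you should use whatever flags support requests for your PMR:

db2support . -d <dbname> -c    # writes db2support.zip to the current directory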

Truss

The truss command can be used, but it is not as effective as a stack dump and is only likely to reveal anything if the process is looping and the problem can be reproduced. If the process is hung, only a stack dump can reveal how it got there.
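
If you do want truss output for a process that appears to be looping, attach to it for a short period and then interrupt it; the file name is only an example:

truss -f -o truss.<pid>.out -p <pid>    # follow children (-f), write to a file; stop with Ctrl+C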

ps

It is also a good idea to collect ps listings for all partitions, if applicable, before and after the stack dumps. If you collect the data manually, the steps look like this:

Listing 19. procstack
# repeat for at least 3 iterations; run procstack once per PID if there is more than one
for i in 1 2 3
do
    procstack <pid>   >> procstack.out
    ps -eafl          >> pseafl.out
    ps aux            >> psaux.out
    sleep 120
done

Or:

# repeat for at least 3 iterations
for i in 1 2 3
do
    kill -36 <pid>
    ps -eafl          >> pseafl.out
    ps aux            >> psaux.out
    sleep 120
done


NB: IBM DB2 support can provide a data collection script that automates this process.

Investigate unresponsive applications

Sometimes an application is merely unresponsive, and you have to figure out why it is unresponsive and how to get it to respond. If you issue a force application command and the application does not go away, you may be left wondering what you can do. First of all, it is important to know that force makes no guarantee that the application will actually be forced off; it is simply a wrapper around an OS kill command.

Without going into the architectural details of DB2, there are some situations in which it is dangerous to force an application. In those situations the db2agent sets its priority level to be higher than that of the force, so the force does not work, and this is by design.

The bottom line is, not every unresponsive application is caused by a bug. It is possible that the application is just doing something important and not responding to any additional commands until it completes its current task.

Recovery

Recovery almost certainly requires a db2stop and db2start, as DB2 does not take kindly to key engine processes being killed; it tends to panic and bring the instance down. I would assess the impact the rogue application is having and, if possible, leave it in situ until you can recycle the instance. It may be holding locks that are contending with other users, for example, and adversely affecting the application, in which case you may have to take an outage to remove it.

What to collect

Debugging an unresponsive application is treated in the same way as a hang, but clearly the scope is narrower. You need to collect the following elements to send to IBM Technical Support:

  1. Iterative stack traces of the db2agent or DB2 process that is unresponsive.
  2. ps listings and other items, such as db2level output, the dbm cfg, the db cfg, the db2diag.log and possibly an application snapshot (a minimal collection sketch follows this list).
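
A minimal sketch of collecting those items is shown below; the database name and application handle are placeholders, and the exact list should be confirmed with IBM support:

db2level                                                  > db2level.out
db2 get dbm cfg                                           > dbmcfg.out
db2 get db cfg for <dbname>                               > dbcfg.out
db2 get snapshot for application agentid <appl-handle>    > appsnap.out
cp <diagpath>/db2diag.log .    # DIAGPATH defaults to ~/sqllib/db2dump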

Conclusion

Problem determination in DB2 is made simpler by the tools and utilities available in AIX. Often it is necessary to use both AIX and DB2 tools and commands to figure out what the problem is. This article discussed some of the problems associated with troubleshooting DB2 and has hopefully given you the tools you need to fix your database.

