IBM Support

== DEBUGGING CORE FILES [02] == HOW TO PREPARE A CORE FILE FOR ANALYSIS?

Technical Blog Post


As mentioned before, the core file contains almost all of the regions that
the running process had in its address space, but not all of them. Among the
missing ones are the 'text' regions, which contain the instructions that the
process executes. The symbol tables, which allow the debugger to print
function names, are not included in the core file either.

Since viewing the assembly code will most likely be a required step during
core file analysis (unless you compiled with -g), you will need all the
objects (executable, shared libraries, and so on) that the process was using
when the core file was generated, because it is the objects that contain the
instructions. When you ask the debugger to print the assembly instructions
of a function, it actually uses information from the core file to locate the
objects and reads the instructions from them. Those same objects contain the
'symbol tables' that allow the debugger to print function names, in a stack
trace for example.

If you analyse the core file on the same machine it was generated on, there
is no problem: all the objects will most likely be the same and at the same
location as what the core file indicates. However, if you analyse the core
file on a different machine, a few things might happen:

  - The objects don't exist on the machine.
  - The objects are located in a different path on the machine.
  - The objects on the machine are from a different version.
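Before starting the analysis on another machine, it can help to check which of
the referenced objects are actually present locally. A minimal sketch (the
object paths here are made-up examples, not taken from a real core file):

```shell
# Check each object a core file references; the paths are hypothetical
# examples standing in for the list you would get from the core file.
printf '%s\n' /bin/sh /no/such/object.so |
while read -r obj; do
    if [ -e "$obj" ]; then
        echo "present: $obj"
    else
        echo "MISSING: $obj"
    fi
done
```

Any object reported missing, or present but from a different build, has to
come from the original machine.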

For that reason it is always good practice to package the core file in such
a way that it contains all the objects from the original machine. How to do
this once again depends on the platform: Solaris, AIX, Linux and so on all
have more or less convenient ways to do it. The sections below show possible
ways to package a core file on a few platforms; you can of course proceed
differently, but they will give you an idea of how to do it. The general
idea is to locate all the objects and create a tarfile. Note that some
software provides utilities that do this for you, such as 'db2snapcore' for
the DB2 product.


- AIX -

  For AIX you can use 'snapcore', a utility that collects the core file, the
  executable and all required objects, and creates a compressed 'pax'
  archive. Example:

    # snapcore -d /home3/dalla/tmp/snapcore ./core ./addrtofname

    # ls -l /home3/dalla/tmp/snapcore
    total 35736
    -rw-r--r--  1 dalla  build  18290813 May 12 01:02 snapcore_17498276.pax.Z


- HP -

  On HP-UX the simplest way to package the core file, libraries and executable
  together is to use the HP version of gdb located in '/opt/langtools/bin'. To
  do that, use the 'packcore' command in gdb:

    # /opt/langtools/bin/gdb /u/dalla/t /u/dalla/core
    (gdb) packcore core /u/dalla
    Core was generated by `t'.
    Program terminated with signal 11, Segmentation fault.
    #0  0x4000000000001ebc in main+0x2c ()
    (gdb) q

    Then you can see the resulting file:

    # tar tvf /u/dalla/core.tar
    rwxrwxrwx 9711/1240  24752 Apr  5 18:52 2017 t
    r-xr-xr-x   2/2 390152 Jan 26 23:21 2005 dld.sl
    r-xr-xr-x   2/2 1860696 Jun 27 15:05 2003 libc.2
    r-xr-xr-x   2/2  15048 Jan 26 23:21 2005 libdl.1


- LINUX -

  There is no 'simple' tool to package a core file on Linux, so what you have
  to do is find out the list of objects that were loaded by the process when
  the core file was dumped. Example:

    # file core
    core: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from 't2'

  So the executable is 't2'. You can then use 'gdb' to print the modules that
  were loaded at the time the core was generated:

    # gdb t2 core
    (gdb) info sharedlibrary
    From                To                  Syms Read   Shared Object Library
    0x00002aaaaabe5000  0x00002aaaaacbf3c0  Yes (*)     /lib64/libc.so.6
    0x00002aaaaaaaba80  0x00002aaaaaac0d94  Yes (*)     /lib64/ld-linux-x86-64.so.2
    ...

  Once you have the list, create a file 'filelist' that lists everything
  with relative paths:

    # cat filelist
    ./home/dalla/t2
    ./lib64/libc.so.6
    ./lib64/ld-linux-x86-64.so.2
    ...
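  Typing this list by hand is error-prone. If you save the 'info sharedlibrary'
  output to a file, a small awk filter can extract the paths and prepend the
  '.' for you. A sketch, using two sample lines like the ones above as input:

```shell
# Turn 'info sharedlibrary' output into relative paths for tar.
# Library lines start with a load address; the path is the last field.
awk '/^0x/ { print "." $NF }' <<'EOF'
0x00002aaaaabe5000  0x00002aaaaacbf3c0  Yes (*)     /lib64/libc.so.6
0x00002aaaaaaaba80  0x00002aaaaaac0d94  Yes (*)     /lib64/ld-linux-x86-64.so.2
EOF
```

  Remember to add the executable itself to the resulting list; gdb does not
  show it in 'info sharedlibrary'.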

  Then from '/' you create the final tarfile:

    # cd /
    # tar cvfh ./tmp/corefile.tar ./home/dalla/core `cat ./home/dalla/filelist`
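  Before shipping the archive it is worth listing its contents to confirm the
  relative paths were preserved. A self-contained sketch using dummy files
  (the real paths will of course differ):

```shell
# Build a throwaway tree, archive it with relative paths, then list it.
cd "$(mktemp -d)"
mkdir -p home/dalla lib64
: > home/dalla/core        # stand-ins for the real core and library
: > lib64/libc.so.6
tar cf corefile.tar ./home/dalla/core ./lib64/libc.so.6
tar tf corefile.tar        # every entry should start with './'
```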


- SOLARIS -

  There is no 'simple' tool to package a core file and the required executable
  and libraries on Solaris (unless you have DB2 installed, in which case you
  can use 'db2snapcore'), so you have to find out the list of objects that
  were loaded by the process when the core file was dumped. To do that you
  can use 'pmap' and keep the unique names:

    # pmap core | grep "r-x" | awk '{print $4}' | sort -u > filelist
    # cat filelist
    /home2/dalla/tmp/t1
    /lib/sparcv9/ld.so.1
    /lib/sparcv9/libc.so.1
    /platform/sun4v/lib/sparcv9/libc_psr.so.1

  Then edit 'filelist' to add a '.' in front of each file so that you end up
  with relative paths. Something like this:

    # cat filelist
    ./home2/dalla/tmp/t1
    ./lib/sparcv9/ld.so.1
    ./lib/sparcv9/libc.so.1
    ./platform/sun4v/lib/sparcv9/libc_psr.so.1
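  Instead of editing the file by hand, a sed substitution can prepend the '.'
  for you. A sketch, with the input shown inline for illustration:

```shell
# Prepend '.' to each absolute path so tar stores relative paths.
sed 's|^/|./|' <<'EOF'
/home2/dalla/tmp/t1
/lib/sparcv9/ld.so.1
EOF
```

  In practice you would run it against the file itself, for example
  'sed 's|^/|./|' filelist > filelist.rel'.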

  Finally, produce a tarfile with the core file and the required objects.
  The 'tar' has to be run from the '/' directory so that the relative
  paths stay valid:

    # cd /
    # tar cvfh /tmp/corefile.tar ./home2/dalla/core `cat ./home2/dalla/filelist`

  You can then 'gzip' the '/tmp/corefile.tar' and ship it as required.
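  On the analysis machine, such an archive is typically extracted into a
  scratch directory rather than into '/', and the debugger is then pointed at
  that tree. A hedged sketch with gdb, whose 'set sysroot' command makes it
  look for shared libraries under the given prefix (the directory and file
  names here are examples):

```
# mkdir /tmp/caseroot && cd /tmp/caseroot
# gunzip -c corefile.tar.gz | tar xf -
# gdb
(gdb) set sysroot /tmp/caseroot
(gdb) file /tmp/caseroot/home2/dalla/tmp/t1
(gdb) core-file /tmp/caseroot/home2/dalla/core
```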

