Archive and maintain DB2 message logs and diagnostic data with db2dback
A downloadable script for maintaining your diagnostic data
With the increasing use of autonomic technologies, DB2 servers may produce large message logging files, admin notification log files, and event log files. This is especially true in large warehouse environments with many logical and physical partitions. Also, if a problem occurs, DB2 tends to produce vast amounts of diagnostic data for first failure data capture purposes.
This increase in logging activity can consume significant file system space and create manageability issues. Simply deleting the log files is not a viable option, because DB2 support often requests historical diagnostic data from users, especially during an ongoing issue investigation or after an instance migration.
This article introduces a new script that you can use to perform maintenance tasks on diagnostic logs and data for DB2 instances. The script is called db2dback.ksh and is available in the zip file contained in the Download section below. The script works in single partition and multi-partition environments, taking into account different user setups, with a shared diagnostic data path or split between different physical partitions.
The db2dback.ksh shell script lets you archive diagnostic data from the configured diagnostic data path (DIAGPATH) of a DB2 instance. It also allows you to maintain the data that has already been archived at the destination (archive) directory.
The DB2 instance owner should run the script on a regular basis. You can run the script manually, or through a scheduling tool (for example, as a cron job).
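For example, a cron entry like the following could archive and compress the data nightly and prune archives older than 90 days. The schedule, options, and path here are illustrative assumptions; adjust them for your environment:

```shell
# Illustrative crontab entry for the DB2 instance owner (assumed schedule
# and retention): archive and compress at 01:30, prune after 90 days.
30 1 * * * $HOME/sqllib/bin/db2dback.ksh -a -z -r 90 >/dev/null 2>&1
```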
The script currently works with DB2 instances on AIX and Linux operating systems. In either environment, it works with both single partition instances or multi-partition instances that you have created with the Data Partitioning Feature (DPF). This also includes Balanced Warehouse setups. In a DPF environment, the script supports different instance configurations:
- Sharing a single DIAGPATH between all partitions
- A separate DIAGPATH for each physical partition
DIAGPATH is a DB2 database manager configuration parameter. If this parameter is not set in the instance configuration, DB2 uses the default value of $HOME/sqllib/db2dump for the DB2 instance owner. Refer to the DB2 Information Center for more information on database manager configuration parameters.
Installing the script
The DB2 instance owner can follow these steps to install the script:
- Retrieve the db2dback.zip file from the Download section below.
- Extract the db2dback.ksh script from the zip file.
- Copy db2dback.ksh into the sqllib/bin directory of the DB2 instance.
The script must have execute permission so that it can be run remotely in DPF setups. The following commands copy the script into place and set the proper execute permission:
cp db2dback.ksh ~/sqllib/bin
chmod 755 ~/sqllib/bin/db2dback.ksh
Getting help for using the script
You can run the db2dback.ksh script with the -h command line option to display help for the different script options:
$ db2dback.ksh -h
04-01-2009 13:13:25: DIAGPATH is set to /home3/agrankin/sqllib/db2dump
Usage: db2dback.ksh [-ahzvptl] [-o <path> ] [-r <days> ]
Options:
  -h         Print help message
  -a         Archive diagnostic data
  -r <days>  Remove diagnostic archives that are > then <days> old. Can be combined with -a
  -o <dir>   Specify output directory
  -z         Compress diagnostic data tar archive
  -v         Verbose output.
  -p         Run diag data archiving in parallel (default is sequential).
  -l         Local execution. This is used in cases when db2dump is shared by all partitions.
             It also can be used if archive runs on just single physical partition.
  -t         Suboption for -a, archives data to a tar archive at destination.
The following sections provide some more details about the different options.
Specifying the destination (archive) directory
If you do not specify a destination directory on the command line, the script uses the DIAGPATH/db2dump_archive directory as the default destination. If this directory does not exist, the script creates it.
You can create a DIAGPATH/db2dump_archive link that points to another local or NFS mounted file system that has enough space allocated to hold archive data. In a DPF setup with multiple physical partitions that do not share a diagnostic path directory, you must create this link on every physical partition.
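As a sketch, the link could be created like this. The helper name and example paths are assumptions for illustration, not part of db2dback.ksh:

```shell
# A minimal sketch; link_archive_dir and the example paths are assumptions.
link_archive_dir() {
    diagpath=$1      # the instance DIAGPATH directory
    archive_fs=$2    # local or NFS-mounted file system with enough free space
    mkdir -p "$archive_fs"
    ln -s "$archive_fs" "$diagpath/db2dump_archive"
}

# Example: link_archive_dir "$HOME/sqllib/db2dump" /archfs/db2dback
```

In a DPF setup without a shared diagnostic path, you would run the equivalent commands once on every physical partition.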
Use the script with the -a (archive) command line option to archive diagnostic data from DIAGPATH:
db2dback.ksh -a [-o <destination_path> ]
By default, on a DPF system, the script attempts to use the rah command to kick off local versions of itself on each physical partition.
If DIAGPATH is shared by all physical partitions (a setup not recommended for BCU environments), you can use the -l suboption to invoke a local copy of the script.
The script renames db2diag.log and admin log files to db2diag.log.<timestamp> and <instancename>.log.<timestamp> and then starts new ones for the instance.
Then the script uses the UNIX mv command to move all files and directories from DIAGPATH, with the following exceptions:
- Newly created db2diag.log and admin notification log files.
- Self tuning memory manager (STMM) log files, located in the stmmlog directory. The STMM facility manages the space used by its log files automatically and usually does not allow them to grow greater than 50MB total.
- Any diagnostic data files or first occurrence data capture (FODC) directories that were created within the last 15 minutes. This ensures that files are not split between different archives or destinations if archiving starts in the middle of a diagnostic data dump.
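One way such an age filter can be expressed in shell is shown below. This is a sketch only, not the script's exact logic; the helper name and instance-name argument are assumptions:

```shell
# A sketch, not db2dback.ksh's exact logic: list files in a diagnostic path
# older than 15 minutes, skipping the active db2diag.log, the admin
# notification log (<instance>.log), and the stmmlog directory.
list_archivable() {
    diagpath=$1
    instance=$2
    find "$diagpath" -name stmmlog -prune -o \
         -type f -mmin +15 \
         ! -name db2diag.log ! -name "$instance.log" \
         -print
}
```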
The directory hierarchy for all files moved from DIAGPATH is preserved at the new destination. All files are moved to a subdirectory with the following naming convention:
Use the -t command line suboption to create a tar archive with all diagnostic data files in the destination directory:
db2dback.ksh -a -t [-o <destination_path> ]
The files copied to the tar archive are removed from the source directory. The same file exception rules as above apply to tar archives. The tar files are named using the following convention:
Use the -z command line suboption to compress files at the destination.
By default, the script uses the gzip tool to compress files.
If the script cannot find the gzip command on your system, it attempts to use the compress utility.
You can use this option with or without the -t suboption:
db2dback.ksh -a -z [-o <destination_path> ]
db2dback.ksh -a -t -z [-o <destination_path> ]
When data is sent to a tar archive, the tool compresses the archive at the end. If the data is being moved (no -t option), each moved file is compressed separately at the destination.
Only files with a size greater than 200KB are compressed.
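The gzip-or-compress fallback with the size threshold can be sketched as follows. The helper name is an assumption, not part of the script:

```shell
# A sketch of the fallback logic (compress_file is an assumed helper name):
# use gzip if available, otherwise compress, and only for files over 200KB.
compress_file() {
    f=$1
    size=$(wc -c < "$f")
    [ "$size" -gt 204800 ] || return 0   # skip files of 200KB or less
    if command -v gzip >/dev/null 2>&1; then
        gzip "$f"
    else
        compress "$f"
    fi
}
```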
By default, diagnostic data archiving on a DPF system is sequential, which means that the tool archives data for one physical partition at a time.
Use the -p suboption to kick off archiving on all physical partitions together, in parallel. This inserts the ||& prefix on the DB2 rah command in the script.
Refer to the DB2 Information Center for more information on the rah command.
Maintaining diagnostic data as an archive
Use the script with the -r command line option to perform basic diagnostic data archive maintenance.
You can use this option with or without the -a archiving option. The format of the command without the -a option is as follows:
db2dback.ksh -r <number_of_days>
When using this option, you must provide an argument that specifies the number of days you want to keep files before they are removed from the archive.
When you use the -r option with the -a archiving option, the tool archives the diagnostic data first and then attempts to remove older files. The format of the command with the -a option is as follows:
db2dback.ksh -a -r 180
You can specify 0 (zero) for the number of days argument to remove all archive files except the db2dback.ksh utility log file.
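Age-based pruning of this kind can be sketched with find, as below. The helper name is an assumption, and the real script preserves its own log file, assumed here to end in .log:

```shell
# A sketch of age-based archive pruning (prune_archives is an assumed
# helper name; entries ending in .log stand in for the script's log file).
prune_archives() {
    archive_dir=$1
    days=$2
    find "$archive_dir" -mindepth 1 -maxdepth 1 -mtime +"$days" \
         ! -name "*.log" -exec rm -rf {} +
}
```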
The script log file
The db2dback.ksh script writes messages to its log file. These messages report progress and log any errors. The script creates a separate log file for each physical partition, and the log file name includes the machine's hostname.
The script creates the log file in the archive destination directory. The file contains information only about the last invocation of the script to ensure that this file itself does not grow too big. Following is an example of the log file:
db2dback.ksh
02-05-2009 19:00:38: Option -r specified
02-05-2009 19:00:38: Removing all archives older than 0 days
02-05-2009 19:00:38: Removing archive db2dback.p6db2serv.2009-02-05-190017
That's all you need to get going with managing your diagnostic logs for DB2 on AIX or Linux. Try it out, and see how much easier it can be to manage your diagnostic data.
- DB2 Information Center: Learn more about the diagnostic logs and about configuration parameters for DB2.
- Now you can use DB2 for free. Download DB2 Express-C, a no-charge version of DB2 Express Edition for the community that offers the same core data features as DB2 Express Edition and provides a solid base to build and deploy applications.
- Visit the developerWorks resource page for DB2 for Linux, UNIX, and Windows to read articles and tutorials and connect to other resources to expand your DB2 skills.
- Download IBM product evaluation versions or explore the online trials in the IBM SOA Sandbox and get your hands on application development tools and middleware products from DB2, Lotus®, Rational®, Tivoli®, and WebSphere®.