Log maintenance in a WebSphere Commerce environment


During the operation of a WebSphere Commerce site, multiple files are written with logging and tracing information. As time goes by, these logs continue to grow. Without a policy for archiving and purging, they can come to occupy multiple gigabytes of storage.

Detecting the different logs that are written in a WebSphere Commerce environment can be challenging because running a WebSphere Commerce site involves multiple products, such as the IBM® HTTP Server, WebSphere Application Server, and a database, such as DB2® or Oracle®, to name a few. All these products define their own set of logs.

This article helps you understand the importance of implementing policies for archiving and purging old logs, and shows you the log files that are found in typical WebSphere Commerce installations.

Large log files and their impact on operations

Large logs affect the operation of your site in multiple ways by:

  • Requiring additional storage. If the proper alarms are not in place, the logs can exhaust the file system and lead to an outage.
  • Impacting performance, as manipulating large files consumes more system resources.
  • Increasing backup and restore time.
  • Increasing the time required to transmit the logs to other machines for problem determination, and making it more difficult and time consuming for the technician to find the relevant information inside the log.

Archiving and purging logs

There are several considerations to be made when managing logs:

Log rotation: When a log is rotated, the existing contents are purged or moved to a different file and the log is re-initialized. Rotation is generally done following one of these criteria:

  • Size: Logs are rotated when they reach a particular size. This technique is useful when logs grow rapidly.
  • Time: Logs are rotated at particular intervals or times. For example, every 6 hours or every day at midnight. This technique is useful when the logs, such as Web server logs, are used to feed other applications.
  • Execution: The application might create unique logs for each execution by adding a unique identifier to the log name, such as a timestamp or an incremental number.

Archiving and purging: When the current log is re-initialized, the existing contents are archived. At this point, you need to decide the policies for purging old logs. Purging logs can be done using size and time as variables:

  • Size: Only the newest n logs are kept. You can discard the rest.
  • Time: You can decide what logs are archived based on time. For example, "keep the last 48 hours worth of logs".
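The time-based policy above can be sketched as a small shell script. The directory name and two-day retention period are assumptions for illustration; the script creates its own sample files so that it is self-contained.

```shell
#!/bin/sh
# Minimal sketch of a time-based purge ("keep the last 48 hours of logs").
# LOGDIR and RETENTION_DAYS are assumptions; adapt them to your site.
LOGDIR=/tmp/demo_logs
RETENTION_DAYS=2

mkdir -p "$LOGDIR"
# Simulate one current log and one stale archive (backdated to 2020).
touch "$LOGDIR/current.log"
touch -t 202001010000 "$LOGDIR/old.log"

# Delete archived logs older than the retention period.
find "$LOGDIR" -type f -name '*.log' -mtime +"$RETENTION_DAYS" -delete
```

A script like this is typically scheduled from cron so the purge runs without operator intervention.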

In some cases, the product that writes the log already provides options for rotating and purging. For example, Figure 1 shows the options that are available in the WebSphere Administrative console for the trace file.

Figure 1. File rotation options for WebSphere Application Server trace files

Not all the applications offer automatic archiving and purging of log files. In some cases, you will need to implement your own scripts and policies.

Administrators can write scripts and configure them to run at regular intervals using the UNIX cron or the Windows® task scheduler. For example, "logrotate" is a popular Linux command for rotating application logs. Solaris administrators can use the "logadm" command.
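As a sketch of what a logrotate policy might look like, the snippet below writes a minimal configuration file. The log path, retention count, and output location are assumptions for illustration, not values taken from a real installation.

```shell
# Sketch of a logrotate policy for application logs; the glob path and
# the retention values are assumptions, so adapt them to your site.
cat > /tmp/commerce-logrotate.conf <<'EOF'
# Rotate daily, keep 7 compressed generations, purge the rest.
/opt/WebSphere/logs/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
EOF
```

Placing such a file in /etc/logrotate.d lets the distribution's daily cron job pick it up automatically; copytruncate is useful when the writing process keeps its log file handle open.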

Before starting from scratch, we recommend visiting sites that list multiple examples, including some written in Perl that you can use on most platforms.

Logging and tracing options

Most products provide different log levels or tracing options that determine the amount of information written to the logs.

If you find large logs, work with the application owner to find out whether a less verbose option can be used instead. If tracing is enabled, ask the application owner to verify that the tracing is required and is actively being used for troubleshooting.

Keep in mind that application administrators can choose to enable tracing or verbose logging at any time, so we recommend that you define file purging and rotation scripts even if the files are not currently being written to.

Products discussed in this article

Multiple software products work together in a WebSphere Commerce site. This article lists the most commonly used products. We recommend that you do an inventory of all the additional products you use within your site and identify what additional logs are created.

Note that the products discussed are for the Linux, UNIX, and Windows platforms only:

WebSphere Commerce utilities

Table 1 summarizes the logs that are created by using the different WebSphere Commerce utilities.

Table 1. WebSphere Commerce utilities logs

Log                          | Directory                                         | File                | Rotation
Loading utilities            | WC_installdir/logs/                               | trace.txt           | Yes
DBClean utility script       | WC_installdir/instances/instanceName/logs/DBClean |                     | Yes
Staging server utilities     | WC_installdir/logs/                               |                     | Yes
MigrateEncryptedInfo utility | WC_installdir/logs/                               | CCInfoMigration.log | No

Loading utilities

The WebSphere Commerce loader utilities are used to add and maintain data in the WebSphere Commerce database. Each utility may log to the messages log, trace log, and utility log.

Further debugging can be enabled and is logged into the utility log file. Each utility log file is named Utility.dbtype.log, where Utility is the loader utility (Massload, idresolver, and so on) and dbtype is the database type (db2 or oracle). This log is only generated when the debug option is enabled.

The following log files are generated by default into the WC_installdir/logs/ directory:

  • messages.txt
  • trace.txt
  • Utility.dbtype.log

The messages.txt and trace.txt log files rotate at a size of 1024 MB by default. You can configure the location of the log files as well as the file size for log rotation.

Refer to the Loading utilities log and Configuring tracing and logging for the loading utilities topics in the WebSphere Commerce Information Center for further details.

DBClean utility script

The DBClean utility script is used to purge old data from the WebSphere Commerce database. On each run of this utility, by default, a log is created in the WC_installdir/instances/instanceName/logs/DBClean directory.

You can specify a new location for the log by using the -log parameter when invoking the script.

Logs are automatically rotated by the tools by appending a timestamp to the file name for each execution. Administrators are still responsible for purging old files when they are no longer required.
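A size-based purge that keeps only the newest n files, as described earlier, can be sketched as follows. The directory, file names, and the value of KEEP are assumptions for illustration; the script fabricates sample timestamped logs so it runs standalone.

```shell
#!/bin/sh
# Sketch: keep only the KEEP newest timestamped logs, delete the rest.
# LOGDIR, the file names, and KEEP are assumptions for illustration.
LOGDIR=/tmp/demo_dbclean
KEEP=3

mkdir -p "$LOGDIR"
# Simulate five timestamped logs with distinct modification times.
for day in 01 02 03 04 05; do
    touch -t "202001${day}0000" "$LOGDIR/dbclean_202001${day}.log"
done

# List newest first, skip the first $KEEP entries, delete the remainder.
ls -1t "$LOGDIR" | tail -n +$((KEEP + 1)) | while read -r f; do
    rm "$LOGDIR/$f"
done
```

Run from cron, this implements the "only n logs are kept" policy for any tool that timestamps its log names.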

Staging server utilities

The staging server utilities are used to synchronize the staging and production data. Each utility creates a new log file on each run.

The following files are generated for the stagingcopy, stagingprop, and fileprop utility, respectively, in the WC_installdir/logs directory:


Logs are automatically rotated by the tools by appending a timestamp to the file name for each execution. Administrators are still responsible for purging old files when they are no longer required.

MigrateEncryptedInfo utility

MigrateEncryptedInfo is used to change the merchant key and to re-encrypt all encrypted data in the database. If the merchant key is changed frequently, monitor these logs to ensure that they do not build up excessively.

Upon each run of the utility, the following log files are generated in the WC_installdir/logs directory:

  • CCInfoMigration.log
  • MKChangeUserAndCCInfoMigration.log
  • MigrateEncryptedInfoError.log

The size of these files depends on the amount of data that is encrypted, and the amount of logging that has been enabled. There is no purge or rotation available for the log files generated by this utility. Upon each execution of the utility, the log file will be re-created.

WebSphere Application Server

Table 2 summarizes the most important logs created and maintained by WebSphere Application Server (hereafter called Application Server):

Table 2. WebSphere Application Server logs

Log                                | Directory                                                                       | File              | Rotation
Java Virtual Machine logs          | WAS_installdir/profiles/profileName/logs/serverName                             | SystemOut.log     | Supported based on time and size
Trace logs                         | WAS_installdir/profiles/profileName/logs/serverName                             | trace.log         | Supported based on file size
Native and garbage collection logs | WAS_installdir/profiles/profileName/logs/serverName                             | native_stderr.log | No
Javacore and heapdumps             | WAS_installdir/profiles/profileName or the IBM_HEAPDUMPDIR environment variable |                   |
First Failure Data Capture (FFDC)  | WAS_installdir/profiles/profileName/logs/ffdc                                   | *.log             | Supported based on file size
Service log                        | WAS_installdir/profiles/profileName/logs/                                       | activity.log      | No
HTTP error and NCSA access logging | WAS_installdir/profiles/profileName/logs/serverName                             | http_access.log   | Supported based on file size

Java Virtual Machine logs

The Java™ Virtual Machine (JVM) logs, by default, are created under the WAS_installdir/profiles/profileName/logs/serverName directory with the following names:

  • SystemOut.log
  • SystemErr.log

You can use the administrative console to configure the default logging directory and log rotation options by following the information described in Configuring the JVM logs, as shown in Figure 2.

Figure 2. File rotation options for WebSphere Application Server JVM logs

These log files are managed automatically by the Application Server. Log rotation occurs when the log file reaches the maximum file size or at a scheduled time. Before the current log is emptied, the contents are moved to a file that is named following this naming convention:


Trace log

Although the Application Server does not write to the trace log by default, WebSphere administrators can choose to enable tracing at any time.

If tracing is enabled, the Application Server writes the trace log into the WAS_installdir/profiles/profileName/logs/serverName directory using the following name, trace.log.

You can change this log name and location using the WebSphere Administrative Console.

The trace file can only be rotated based on size. You can specify the maximum number of historical files, as shown in Figure 3. When rotation occurs, the current log file is renamed using this naming convention,

Figure 3. File rotation options for WebSphere Application Server trace logs

Native and garbage collection logs

The Application Server writes native process information to the following logs under the WAS_installdir/profiles/profileName/logs/serverName directory:

  • native_stderr.log
  • native_stdout.log

If garbage collection (GC) is enabled on your system, GC information is written to native_stdout.log on Solaris systems and to native_stderr.log on AIX, Windows, and Linux.

There is no configuration to handle log rollover for these logs. If you remove a log file, it is not re-created automatically until the server is restarted. To reduce the file size, you can manually empty the file instead.
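The manual emptying step can be done with the shell's truncation operator, which resets the file to zero bytes while the writing process keeps its open file handle. The file name below is an assumption used only for illustration.

```shell
#!/bin/sh
# Sketch: truncate a log in place so the writer keeps its open handle.
# The file name is an assumption for illustration.
LOG=/tmp/demo_native_stderr.log
echo "old gc data" > "$LOG"

# Truncate without removing the file.
> "$LOG"
```

This is safer than deleting the file, which would leave the server without a log until the next restart.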

Javacores and heapdumps

Under abnormal conditions, such as crashes or out-of-memory problems, the JVM may create javacores and heapdumps. Heapdumps can be very large in size (more than 500 MB) so it is important that you archive them when they are no longer required for problem determination.

By default, javacores and heapdumps are created under the WAS_installdir/profiles/profileName directory with the following names:


You can overwrite this location by setting the IBM_HEAPDUMPDIR environment variable.

If the JVM is configured to generate a heapdump when an out-of-memory condition occurs, it is possible that multiple heapdumps are generated in succession, which, given their large size, can fill up the file system quickly.

Starting with Java 1.4.2 SR5, you can limit the number of heapdumps and javacores that are generated by using the JAVA_DUMP_OPTS environment variable:



  • n is the maximum number of javacores that can be generated
  • m is the maximum number of heapdumps that can be generated.
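A hedged sketch of setting this variable in a server startup script is shown below. The ONANYSIGNAL condition and the counts are assumptions; verify the exact syntax supported by your Java level against the IBM JVM diagnostics guide before relying on it.

```shell
# Sketch (assumption): cap javacores at 3 and heapdumps at 2 before
# starting the server. Verify the condition name and syntax for your
# Java level in the IBM JVM diagnostics documentation.
JAVA_DUMP_OPTS="ONANYSIGNAL(JAVADUMP[3],HEAPDUMP[2])"
export JAVA_DUMP_OPTS
```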

First Failure Data Capture

First Failure Data Capture (FFDC) runs in the background and registers runtime events and errors. The FFDC logs are created under the WAS_installdir/profiles/profileName/logs/ffdc directory. These logs are automatically rotated and maintained by the Application Server. You do not need to worry about these logs, but if you notice that they are using a considerable amount of space, review the contents and contact WebSphere Application Server support.

Service log (activity.log)

Application Server also includes a service log. By default, this log is created in the WAS_installdir/profiles/profileName/logs/ directory using this name, activity.log.

This binary file is used with the Log Analyzer tool to display events and to analyze problems based on a symptom database. You can specify the maximum file size (the default is 2 MB), as shown in Figure 4. When this size is reached, the service log wraps in place; it does not roll over to a new log file like the JVM logs.

Figure 4. File size options for WebSphere Application Server service log

HTTP error and NCSA access logging

Application Server writes the following logs under the WAS_installdir/profiles/profileName/logs/serverName directory with access and error information for the HTTP Transport channel:

  • http_access.log
  • http_error.log

Although these logs are not enabled by default, they can grow rapidly if enabled. Application Server offers basic rotation based on size, which can be configured in the WebSphere Administrative Console.

WebSphere Application Server plug-in

The Application Server plug-in generally logs a minimal amount of data at the default trace level of "Error", which records only error messages resulting from abnormal request processing. Increasing the log level can produce a large amount of logging, so this file must be monitored.

Table 3. WebSphere Application Server plug-in logs

Log         | Directory                             | File            | Rotation
Plug-in log | WAS_installdir/Plugins/logs/webserver | http_plugin.log | No

The log contains the log messages that are written by the plug-in, based on the log level configured. The name and location of the file is defined in the plugin-cfg.xml.

For example, you might specify the following:

<Log LogLevel="Error" Name="/opt/WebSphere/Plugins/logs/webserver1/http_plugin.log"/>

If the file does not exist, then it is created. If the file already exists, then it is opened in append mode and the previous plug-in log messages will remain.

If the Web server and Web server plug-in are running on an AIX, Linux, or Solaris system, and you change the log level in the plugin-cfg.xml file, this change is not picked up dynamically. You must restart the Web server to pick up the change.

There is no support for log rotation and purging. If the file is removed while the server is running, a new log file will not be created. You must restart the server to generate a new log file.


IBM HTTP Server

The IBM HTTP Server (IHS) can generate a large amount of logging because every request made to the site is logged in the Web server access.log. In a typical WebSphere Commerce site, this log can grow to several gigabytes in size in just one day.

Table 4 summarizes the logs that are created by the IBM HTTP Server:

Table 4. IBM HTTP Server logs

Log    | Directory                            | File       | Rotation
Access | WC_installdir/instances/instanceName | access.log | Supported. Not enabled by default.
Error  | WC_installdir/instances/instanceName | error.log  | Supported. Not enabled by default.

Access and error logs

Although IHS offers a utility to implement log rotation, it is not enabled by default.

Rotatelogs is a utility that is included with the Web server. It allows you to rotate based on time or size. Visit the rotatelogs Questions and Answers page for more details.

# Rotates access log every 24 hours.
CustomLog "|/usr/IBMIHS/bin/rotatelogs /usr/IHS/logs/access.log 86400" common

# Rotates error log when it reaches 50M
ErrorLog "|/usr/IBMIHS/bin/rotatelogs /usr/IHS/logs/error.log 50M"

Cronolog is a popular third-party utility that you can also use for log rotation:

# Uses cronolog to rotate access logs daily
CustomLog "|/usr/cronolog /usr/IHS/logs/%Y/%m/%d/access_log" common

Keep in mind that although you can use these utilities to implement log rotation, they do not purge old logs. You still need to implement your own script to archive and delete old logs.
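A purge step for the rotated Web server logs can be scheduled from cron. The crontab entry below is a hypothetical sketch; the paths, retention period, and schedule are assumptions to adapt to your site.

```shell
# Hypothetical crontab entry: each night at 01:15, purge rotated access
# logs older than seven days. Paths and retention are assumptions.
15 1 * * * find /usr/IHS/logs -name 'access.log.*' -mtime +7 -delete
```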

WebSphere MQ

Table 5 lists the logs that are generated by WebSphere MQ.

Table 5. WebSphere MQ logs

Log                 | Directory                    | File         | Rotation
Queue Manager error | /var/mqm/qmgrs/qmname/errors | AMQERR01.LOG | Yes, based on file size. Predefined purging.
Error logs          | /var/mqm/errors              | AMQERR01.LOG | Yes, based on file size. Predefined purging.
Trace logs          | /var/mqm/trace               |              | No

Error logs

WebSphere MQ writes to different error log files. If the queue manager name is known, then the error message is written to an error log file in the queue manager's errors directory, /var/mqm/qmgrs/qmname/errors.

If the queue manager name is not known, then the error message is written to an error log file in the errors subdirectory, /var/mqm/errors.

The error log files are written using these names: AMQERR01.LOG, AMQERR02.LOG, and AMQERR03.LOG.

Each log file has a default capacity of 256 KB. You can follow the information described in queue manager logs to change the size.

When an error occurs, WebSphere MQ begins by writing to AMQERR01. If AMQERR01 grows beyond 256 KB, it is copied to AMQERR02. If AMQERR02 exists, WebSphere MQ copies AMQERR02 to become AMQERR03, and any data in AMQERR03 is overwritten.

Trace log

WebSphere MQ writes detailed tracing information under the /var/mqm/trace directory using a file name that includes "pid", the process ID, and "id", an incremental unique identifier.

DB2 database

Except for the STMM log, DB2 does not provide out-of-the-box log rotation or pruning.

Table 6 summarizes the most important logs created and maintained by the DB2 database.

Table 6. DB2 database logs

Log                        | Location                                                                     | File                                           | Rotation
Notification               | DIAGPATH                                                                     | instance.nfy on UNIX; the Event Log on Windows | No
db2diag.log                | DIAGPATH                                                                     | db2diag.log                                    | No
db2dasdiag.log             | UNIX: DASHOME/das/dump; Windows: the "dump" folder in the DAS home directory | db2dasdiag.log                                 | No
Dump, trap, and core files | DIAGPATH                                                                     | pid.partition, core, and so on                 | No
STMM logs                  | DIAGPATH/stmmlog                                                             | stmm.#.log                                     | Yes

The DIAGPATH configuration parameter specifies the directory in which most of the log and trace files are created. This directory is usually referred to as the db2dump directory.

The First failure data capture information and First failure data capture locations topics in the DB2 Information Center provide more details about the logs generated during the operation of a DB2 database.

Instance notification log

DB2, the Health Monitor, the Capture and Apply programs, and user applications all write to the notification log. On UNIX platforms, the administration notification log is created under the directory specified by the DIAGPATH parameter with the following name, instance.nfy.

On Windows, all administration notification messages are written to the Event log. You can delete the notification log while DB2 is running. When this happens, DB2 creates a new log and continues writing to it.


db2diag.log

The db2diag.log file contains diagnostic information that is useful for problem determination by DB2 customer support. It is located under the DIAGPATH directory with the name db2diag.log.

The "Archiving the diagnostic log" section in the An Introduction to DB2 UDB Scripting on Windows developerWorks article provides an example of how to archive the db2diag.log. You can delete db2diag.log while DB2 is running. When this happens, DB2 creates a new db2diag.log and continues writing to it.
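As a sketch of such an archiving step, the script below moves the diagnostic log aside with a timestamp and compresses it, letting DB2 create a fresh file on its next write. The DIAGPATH and archive locations are assumptions for illustration, and the script fabricates a sample log so it runs standalone.

```shell
#!/bin/sh
# Sketch: archive db2diag.log with a timestamp and compress it.
# DIAGPATH and ARCHIVE are assumptions for illustration.
DIAGPATH=/tmp/demo_db2dump
ARCHIVE=/tmp/demo_db2archive
mkdir -p "$DIAGPATH" "$ARCHIVE"
echo "diagnostic data" > "$DIAGPATH/db2diag.log"

STAMP=$(date +%Y%m%d%H%M%S)
mv "$DIAGPATH/db2diag.log" "$ARCHIVE/db2diag.log.$STAMP"
gzip "$ARCHIVE/db2diag.log.$STAMP"
```

Pair this with a find-based purge of the archive directory so compressed copies do not accumulate indefinitely.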


db2dasdiag.log

The db2dasdiag.log file contains diagnostic information about errors encountered specifically in the DB2 Administration Server (DAS). It is located in the dump directory under the DAS home directory.

You can delete the db2dasdiag.log while DB2 is running. When this happens, DB2 creates a new db2dasdiag.log and continues writing to it.

Dump files, trap files, and core files

Dump files are generated whenever DB2 encounters an error. These binary files contain additional information that is useful for problem determination.

Trap files are generated by the database manager when it cannot continue processing because of a trap, a segmentation violation, or an exception.

When DB2 terminates abnormally, the operating system generates a core file. This binary file is similar to a DB2 trap file, but it contains the entire memory image of the terminated process. There are no mechanisms to control the number of dump, trap, and core files generated by DB2.

Self-tuning memory manager log

New in version 9, the self-tuning memory manager (STMM) automatically configures several critical memory configuration parameters. Each change made by STMM is logged in both db2diag.log and the STMM log files. You can find the STMM logs in the DIAGPATH/stmmlog directory with the following name, stmm.#.log.

The STMM log is split into a maximum of five files, each with a maximum size of 10 MB. These log files are maintained in a circular fashion, always removing the oldest one before creating a new file. Neither the maximum size nor the maximum number of files is configurable.

Oracle database 10g

The Oracle database does not automatically rotate or purge logs. This task is left for administrators. Database and system administrators need to implement cron scripts to regularly monitor the directories where logs are written and clean them as required.

Table 7 lists just a few of the logs that are created on an Oracle system. The Names server, Enterprise Manager, RAC, Dataguard, the Import and Export utilities, Data Pump, and so on are examples of other Oracle components for which you may also need to implement log maintenance.

Table 7. Oracle database logs

Log             | Location                                                                                          | Rotation
Background logs | show parameter background_dump_dest                                                               | No
User logs       | show parameter user_dump_dest                                                                     | No
Core logs       | show parameter core_dump_dest                                                                     | No
Audit logs      | show parameter audit_file_dest                                                                    | No
Listener logs   | lsnrctl show log_directory; lsnrctl show log_file; lsnrctl show trc_file; lsnrctl show trc_level  |

Background and user logs

The background_dump_dest parameter specifies the location where trace files for the background processes (LGWR, DBWn, and so on) are written. Each background process logs to its own file, named SID_bgprocess_PID.trc.

Maintaining these files can be tricky. If the file is in use and you delete it, Oracle stops writing to the file (it will not be re-created). When implementing a cleanup script, delete only logs whose timestamps are older than the last database restart, or use commands such as ps on UNIX to ensure the process is not running.

If the process is running, you can still empty the file and Oracle continues writing to it. To empty the file on UNIX, you can use the ">" operator as in the following example:

$ > o10g_p003_42314.trc

The Metalink note 394891.1 explains how to recreate background trace files that may have been accidentally deleted.

Trace for user processes, such as SQL trace, is written to the user_dump_dest directory. The administration is similar to that for the background processes.

If you are concerned about the logs growing too quickly, you can also set max_dump_file_size, which limits the maximum size of all the trace logs, except for the alert log.

Alert log

The alert log is a special trace file that logs messages and errors. It is created in the background_dump_dest directory under the name of alert_sid.log. As this log is not rotated by Oracle, you need to ensure that it does not grow beyond a certain size. In contrast with the other trace files, you can safely delete the alert log while the instance is running and Oracle will re-create it for you.
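An archiving step for the alert log can be sketched as follows. The SID (orcl) and directory are assumptions for illustration, and the script fabricates a sample alert log so it runs standalone; on a real system Oracle re-creates the file on its next write after the move.

```shell
#!/bin/sh
# Sketch: archive the alert log with a date suffix; Oracle re-creates
# the file on the next write. SID and BDUMP are assumptions.
BDUMP=/tmp/demo_bdump
mkdir -p "$BDUMP"
echo "ORA-00000: sample message" > "$BDUMP/alert_orcl.log"

STAMP=$(date +%Y%m%d)
mv "$BDUMP/alert_orcl.log" "$BDUMP/alert_orcl.log.$STAMP"
```

Combine this with a time-based purge of the dated copies so the archive itself does not grow unbounded.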

Listener logs

The listener log is another important file to maintain periodically. You can use the following commands to list the location of the listener log:

lsnrctl show log_directory
lsnrctl show log_file

To delete the listener log, the listener needs to be stopped first or logging needs to be redirected to a different file.

If tracing is enabled, these commands show the directory and the file name to which the trace is written:

lsnrctl show trc_directory
lsnrctl show trc_file

SQL*Net log

SQL*Net logging is controlled in sqlnet.ora. The following parameters determine the location of the logs:

  • log_directory_server
  • log_file_server

The default log_directory_server is $ORACLE_HOME/network/log and the default value for log_file_server is sqlnet.log. You can delete the sqlnet.log file while the process is running.


Conclusion

This article explained how, without the proper controls, log files can grow to unmanageable sizes that can affect the operation of the site and possibly lead to outages. In this article, you learned about typical logs that are found in a WebSphere Commerce environment, existing facilities to maintain them, and techniques for implementing your own policies and scripts.


Acknowledgements

The authors would like to thank John Edelstein for his help with the Oracle section of this article.

