A poorly designed system can run into performance problems soon after it goes live, and even a well-tuned system can encounter performance issues after a long period of operation or a major functional change. Tuning a system is an inevitable task for system administrators. Because the database is a major component of most application systems, database performance tuning is an important aspect of this task. Statistics show that database tuning can yield a 20% performance improvement over a system that has never been tuned. However, tuning a production system can be risky if not performed properly. This article demonstrates a real case study of database performance tuning in an IBM DB2 for Linux, UNIX, and Windows environment.
The system being tuned in this case study is a workflow application based on a JIRA Enterprise package that uses DB2 as the backend database. The application operates in two modes: nightly batch mode and daily OLTP mode. During the nightly batch hours, a series of shell scripts executes to transfer external data (in the form of plain text files) into the database. During the daily OLTP hours, operators process this business data following workflows defined in JIRA.
After the application had been live for almost a year, the customer did not see a decrease in the rate of problem incidents. Investigation indicated that some of these incidents were caused by performance problems, such as database deadlocks and JIRA file lock time-outs. According to the contract, there was to be a yearly 5% workload increase. If system performance could not be improved, more performance-related incidents could be expected in the future. Performance tuning became a must.
Discovering the problems
The prerequisite task for system performance tuning is to find out where the performance problems are originating. Nigel's performance MONitor for Linux (NMON) is a great tool for collecting key performance data, such as CPU utilization, memory utilization, disk busy rate, top processes, and so on. NMON was used to gather performance information for each server within the system.
After reviewing the collected NMON data, two performance problems were identified:
- During the nightly batch hours, CPU utilization of the database server stayed at 80% for almost one hour.
- Some disks of the database server became 100% busy periodically throughout the day.
A typical database performance tuning effort consists of the following phases:
- Collect database server information
- Collect database usage information
- Analyze database information
- Design tuning activities
- Implement and evaluate tuning results
The following sections discuss each of these phases in detail.
Collect database server information
During this phase, you collect the hardware and software information for the database server and the configuration of the database. Following is some of the information that you need to collect:
- Type of the database server
- Number and type of CPUs
- Amount of memory
- Number and manufacturer of the disk drives
- Type and manufacturer of the storage subsystem
- Configuration of the storage subsystem
- Operating system and database information
- Output from the db2look tool for each instance running on the server
(db2look -d dbname -e -o outputfile)
- Description of each tablespace and its container
(db2 list tablespaces show detail and db2 list tablespace containers for tablespace-id show detail)
Remember that the more information you collect here, the more it can help with your later analysis. For example, in the case study, the db2look and tablespace information explained the disk busy issue: all user data tables had been created in the same tablespace, which was located on the same disk.
Collect database usage information
There are two ways to collect database usage information: taking snapshots and monitoring events. Both methods collect real-time database usage information, such as buffer pool activity, locking status, SQL statement information, and so on. However, they have different monitoring mechanisms, which means that they can be used in different situations.
A snapshot, as the name suggests, captures information about the database at a specific point in time. Taken at regular intervals, snapshots can be used to observe trends or to foresee potential problems. They are useful for troubleshooting problems that occur during a known time period or for ad hoc database health checks. Taking snapshots also consumes fewer resources than running event monitors.
On the other hand, event monitors are event-driven: within the predefined time period, they create a record whenever a specified event occurs. Compared with snapshots, event monitors can provide more database object-based statistics (for example, statistics for databases, tables, tablespaces, and so on). Because monitoring is continuous, it captures overall database usage during the monitored period; for the same reason, however, the resources consumed can be very large when the target system is a busy one. When a production system is under investigation, take care to prevent the monitoring itself from crippling the system.
Before exploring how to lower the performance impact of monitoring, take a look at what the options are when setting up event monitors: table event monitors, file event monitors, and pipe event monitors. As the names suggest, the event monitors are distinguished from each other by where the event records are to be created: to SQL tables, to files, or through named pipes. Because the pipe event monitors are not often used in practice (a program is required to read the data from the named pipe), this article focuses on the table and file event monitors.
Table 1. Tips to lower the performance impact of event monitors
Table and file event monitor tips:

- The STATEMENTS monitor is the biggest performance threat. If performance is a concern, separate the STATEMENTS monitor from the other monitors into a batch of its own.
- To reduce the overhead of table inserts or file writes, event records are first written to a buffer space; when the buffer space is full, the records are moved to the event tables or files. For performance reasons, highly active event monitors should have larger buffers than relatively inactive ones. BUFFERSIZE indicates the capacity of the buffer space in 4K pages. Because the space is allocated from the database monitor heap, the combined capacity of all event monitors must not exceed the heap's maximum size (use the MON_HEAP_SZ configuration parameter to check or adjust it).
- If BLOCKED is set, each agent that generates an event waits while records are moved from the full buffer space to the tables or files. This guarantees that no event data is lost, but it also degrades database performance. When performance is a concern, set event monitors to NONBLOCKED: some event data can be lost, but the impact on database performance is minimized.

Table-specific tips:

- Each event monitor uses multiple database tables to store collected data. For example, the STATEMENTS event monitor stores its data in the CONNHEADER, STMT, SUBSECTION, and CONTROL tables. Excluding unnecessary event tables and fields from collection minimizes the performance impact.
- When disk busy is the performance bottleneck, place the event tables in a separate tablespace on a separate disk so that disk writes are distributed more evenly.
- The PCTDEACTIVATE option controls the storage usage of event monitors. It is defined as a percentage: for example, if PCTDEACTIVATE is set to 90, the event monitors are automatically deactivated when the tablespace holding the event tables reaches 90% of its capacity. This option can only be used for database managed (DMS) tablespaces.

File-specific tips:

- Similar to the PCTDEACTIVATE option, MAXFILES and MAXFILESIZE can be used together to control how much storage the event monitors are entitled to. MAXFILESIZE defines the maximum number of 4K pages that a single event monitor file can contain. When the maximum is reached, a new file is created to store incoming event data. This continues until the number of files reaches the predefined MAXFILES value, at which point the event monitor is deactivated automatically.
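Several of the options discussed above appear together in the CREATE EVENT MONITOR statement itself. The sketch below composes such a statement for a dedicated STATEMENTS monitor; the monitor, table, and tablespace names are hypothetical, and the exact clause syntax should be checked against the CREATE EVENT MONITOR documentation for your DB2 version. The sketch only builds the SQL text, which you would then execute through the db2 command line or a database driver.

```python
def statements_monitor_ddl(name, table, tablespace,
                           buffersize_pages=2048, blocked=False,
                           pctdeactivate=90):
    """Compose a CREATE EVENT MONITOR statement that applies the
    tips above: a dedicated STATEMENTS monitor, a larger buffer,
    NONBLOCKED writes, and a storage cap through PCTDEACTIVATE."""
    return (
        f"CREATE EVENT MONITOR {name} FOR STATEMENTS "
        f"WRITE TO TABLE STMT (TABLE {table} IN {tablespace} "
        f"PCTDEACTIVATE {pctdeactivate}) "
        f"BUFFERSIZE {buffersize_pages} "
        f"{'BLOCKED' if blocked else 'NONBLOCKED'}"
    )

# Hypothetical names for illustration only.
ddl = statements_monitor_ddl("STMTMON", "MON.STMT_EVENTS", "MONSPACE")
print(ddl)
```

Keeping the options in one helper makes it easy to generate a consistent family of monitors, for example with a smaller BUFFERSIZE for the relatively inactive ones.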
By applying the following two approaches, you can further lower the risk of impacting production performance:
- Before performing event monitoring in the production environment, perform a test run in the test environment or even a short-period trial run in the production environment to assess the actual performance impact.
- Set a threshold for each performance indicator (for example, CPU utilization: 90%) and closely monitor these indicators during the monitoring period. Once the threshold is exceeded, stop the monitoring immediately.
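The second safeguard can be sketched as a small watchdog: sample each indicator at an interval and stop the monitors as soon as any threshold is exceeded. The helper below is an illustration only, with hypothetical indicator names; how you obtain real CPU or disk figures, and how you deactivate the monitors (for example, with SET EVENT MONITOR ... STATE 0), depends on your environment.

```python
def watch_indicators(samples, thresholds):
    """Return the index of the first sample that breaches any
    threshold, or None if the whole run stays within limits.
    `samples` is a list of dicts such as {"cpu": 85.0, "disk_busy": 40.0};
    `thresholds` maps the same keys to their limits."""
    for i, sample in enumerate(samples):
        for key, limit in thresholds.items():
            if sample.get(key, 0.0) > limit:
                return i  # stop monitoring at this point
    return None

# Example: CPU utilization crosses the 90% threshold on the third sample.
samples = [{"cpu": 82.0}, {"cpu": 88.5}, {"cpu": 93.1}]
print(watch_indicators(samples, {"cpu": 90.0}))  # → 2
```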
Analyze database information
With all the information collected, you can perform the various analyses described in this section.
SQL statement analysis
The major information resource of SQL statement analysis is the statements event monitor. If the events are monitored using event files, db2evmon can be used to format the output, as shown in Listing 1.
Listing 1. db2evmon command
db2evmon -path event_files_directory > output_filename
The resulting entry is shown in Listing 2.
Listing 2. Sample statement event entry
1) Statement Event ...
  Appl Handle: 53793
  Appl Id: *LOCAL.db2inst1.101126060601
  Appl Seq number: 00003
  Record is the result of a flush: FALSE
  -------------------------------------------
  Type     : Dynamic
  Operation: Describe
  Section  : 201
  Creator  : NULLID
  Package  : SQLC2G15
  Consistency Token  : AAAAALIY
  Package Version ID :
  Cursor   : SQLCUR201
  Cursor was blocking: TRUE
  Text     : select * from schema.table
  -------------------------------------------
  Start Time: 11/26/2010 15:06:35.641755
  Stop Time:  11/26/2010 15:06:35.665380
  Elapsed Execution Time: 0.023625 seconds
  Number of Agents created: 1
  User CPU: 0.003768 seconds
  System CPU: 0.000000 seconds
  Statistic fabrication time (milliseconds): 0
  Synchronous runstats time (milliseconds): 0
  Fetch Count: 62
  Sorts: 0
  Total sort time: 0
  Sort overflows: 0
  Rows read: 62
  Rows written: 0
  Internal rows deleted: 0
  Internal rows updated: 0
  Internal rows inserted: 0
  Bufferpool data logical reads: 1
  Bufferpool data physical reads: 0
  Bufferpool temporary data logical reads: 0
  Bufferpool temporary data physical reads: 0
  Bufferpool index logical reads: 0
  Bufferpool index physical reads: 0
  Bufferpool temporary index logical reads: 0
  Bufferpool temporary index physical reads: 0
  Bufferpool xda logical page reads: 0
  Bufferpool xda physical page reads: 0
  Bufferpool temporary xda logical page reads: 0
  Bufferpool temporary xda physical page reads: 0
  SQLCA:
    sqlcode: 0
    sqlstate: 00000
The Text line shows the SQL statement that was executed, and Elapsed Execution Time indicates how long that execution took. The accumulated execution time for each SQL statement can be calculated by summing the elapsed execution times of all executions of the same statement. The statements with the highest accumulated execution time are then the candidates for SQL statement analysis.
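This accumulation is easy to script against db2evmon output. The sketch below is a deliberately simplified parser: it assumes the "Text :" line precedes the "Elapsed Execution Time:" line within each entry, as in Listing 2, and a production script would also need to handle multi-line statement text.

```python
import re
from collections import defaultdict

def accumulate_times(evmon_text):
    """Sum Elapsed Execution Time per statement text from db2evmon
    output; return (statement, total seconds) pairs, highest first."""
    totals = defaultdict(float)
    current_stmt = None
    for line in evmon_text.splitlines():
        line = line.strip()
        if line.startswith("Text"):
            # "Text : select * from ..." -> keep everything after the colon
            current_stmt = line.split(":", 1)[1].strip()
        m = re.match(r"Elapsed Execution Time:\s*([\d.]+) seconds", line)
        if m and current_stmt:
            totals[current_stmt] += float(m.group(1))
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Toy fragment in the shape of Listing 2 (statements are illustrative).
sample = """
Text     : select * from schema.table
Elapsed Execution Time: 0.023625 seconds
Text     : select * from schema.table
Elapsed Execution Time: 0.030000 seconds
Text     : update schema.table set c = 1
Elapsed Execution Time: 0.010000 seconds
"""
print(accumulate_times(sample))
```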
IBM provides a series of tools to analyze SQL statements. Visual Explain, db2exfmt, and db2expln are helpful in reviewing each statement's access plan. The db2advis tool provides recommendations on whether new indexes need to be created to optimize the execution performance.
Deadlock analysis
The deadlock event monitor provides detailed information on how deadlocks happen and the history of each occurrence. Listing 3 shows a sample deadlock event entry.
Listing 3. Sample deadlock event entry
3382) Deadlocked Connection ...
  Deadlock ID: 1
  Participant no.: 2
  Participant no. holding the lock: 1
  Appl Id: 10.207.4.51.40897.100826202041
  Appl Seq number: 03988
  Tpmon Client Workstation: server01
  Appl Id of connection holding the lock: 10.207.4.51.39361.100826202035
  Seq. no. of connection holding the lock: 00001
  Lock wait start time: 08/27/2010 10:38:13.168058
  Lock Name       : 0x020012032900E9161100000052
  Lock Attributes : 0x00000000
  Release Flags   : 0x20000000
  Lock Count      : 1
  Hold Count      : 0
  Current Mode    : none
  Deadlock detection time: 08/27/2010 10:38:22.765817
  Table of lock waited on      : table
  Schema of lock waited on     : schema
  Data partition id for table  : 0
  Tablespace of lock waited on : USERSPACE1
  Type of lock: Row
  Mode of lock: X - Exclusive
  Mode application requested on lock: U - Update
  Node lock occured on: 0
  Lock object name: 73398812713
  Application Handle: 957
  Deadlocked Statement:
    Type     : Dynamic
    Operation: Fetch
    Section  : 1
    Creator  : NULLID
    Package  : SYSSH200
    Cursor   : SQL_CURSH200C1
    Cursor was blocking: FALSE
    Text     : SELECT value1, value2 FROM schema.table WHERE value1 = ? for update with rs
  List of Locks:
    ……
    Lock Name        : 0x020012032900EC161100000052
    Lock Attributes  : 0x00000000
    Release Flags    : 0x00000080
    Lock Count       : 1
    Hold Count       : 0
    Lock Object Name : 73399009321
    Object Type      : Row
    Tablespace Name  : table
    Table Schema     : schema
    Table Name       : EXCLUSION
    Data partition id: 0
    Mode             : U - Update
    ……

13384) Deadlocked Connection ...
  Deadlock ID: 1
  Participant no.: 1
  Participant no. holding the lock: 2
  Appl Id: 10.207.4.51.39361.100826202035
  Appl Seq number: 09195
  Tpmon Client Workstation: server01
  Appl Id of connection holding the lock: 10.207.4.51.40897.100826202041
  Seq. no. of connection holding the lock: 00001
  Lock wait start time: 08/27/2010 10:38:13.166513
  Lock Name       : 0x020012032900EC161100000052
  Lock Attributes : 0x00000000
  Release Flags   : 0x40000000
  Lock Count      : 1
  Hold Count      : 0
  Current Mode    : none
  Deadlock detection time: 08/27/2010 10:38:22.787777
  Table of lock waited on      : table
  Schema of lock waited on     : schema
  Data partition id for table  : 0
  Tablespace of lock waited on : USERSPACE1
  Type of lock: Row
  Mode of lock: U - Update
  Mode application requested on lock: U - Update
  Node lock occured on: 0
  Lock object name: 73399009321
  Application Handle: 951
  Deadlocked Statement:
    Type     : Dynamic
    Operation: Execute
    Section  : 1
    Creator  : NULLID
    Package  : SYSSH200
    Cursor   : SQL_CURSH200C1
    Cursor was blocking: FALSE
    Text     : UPDATE schema.table SET value2 = ?, value3 = ? WHERE value1 IN (?,?)
  List of Locks:
    Lock Name        : 0x020012032900E9161100000052
    Lock Attributes  : 0x00000000
    Release Flags    : 0x40000000
    Lock Count       : 1
    Hold Count       : 0
    Lock Object Name : 73398812713
    Object Type      : Row
    Tablespace Name  : USERSPACE1
    Table Schema     : schema
    Table Name       : table
    ……
Listing 3 shows which two locks are involved in the deadlock, the type and mode of each lock, and the corresponding SQL statements. By modifying the related statements, the occurrence of deadlocks can be reduced.
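The deadlock in Listing 3 is the classic pattern: two transactions acquire the same two row locks in opposite orders. One common statement-level fix is to make every transaction touch shared rows in a single agreed order (for example, by sorting the key values used in the UPDATE's IN list). The sketch below illustrates the principle with plain Python locks standing in for row locks; nothing in it is DB2-specific.

```python
import threading

lock_a = threading.Lock()  # stand-in for the first row lock
lock_b = threading.Lock()  # stand-in for the second row lock

def transfer(first, second, log, name):
    """Acquire both locks in the caller-supplied order, then do work."""
    with first:
        with second:
            log.append(name)

def ordered(*locks):
    """Return the locks in one global order (here: by id) so that every
    transaction acquires them the same way and deadlock is impossible."""
    return sorted(locks, key=id)

log = []
# The two "transactions" name the locks in opposite orders, but
# ordered() normalizes the acquisition order for both of them.
t1 = threading.Thread(target=transfer, args=(*ordered(lock_a, lock_b), log, "txn1"))
t2 = threading.Thread(target=transfer, args=(*ordered(lock_b, lock_a), log, "txn2"))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(log))  # both transactions complete: ['txn1', 'txn2']
```

Without the `ordered()` normalization, the two threads could each hold one lock while waiting for the other, which is exactly what happened between the SELECT ... FOR UPDATE and the UPDATE in Listing 3.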
Bufferpool analysis
You can perform bufferpool analysis by using the information that the bufferpools event monitor provides, as shown in Listing 4.
Listing 4. Sample bufferpool event entry
3) Bufferpool Event ...
  Bufferpool Name: IBMDEFAULTBP
  Database Name: database
  Database Path: /shared/dbg/db2inst3/db2inst3/NODE0000/SQL00001/
  Buffer Pool Statistics:
    Buffer pool data logical page reads: 14871152
    Buffer pool data physical page reads: 1699818
    Buffer pool data page writes: 53823
    Buffer pool index logical page reads: 8606405
    Buffer pool index physical page reads: 290822
    Buffer pool index page writes: 272282
    Buffer pool xda logical page reads: 0
    Buffer pool xda physical page reads: 0
    Buffer pool xda page writes: 0
    Buffer pool read time (milliseconds): 1536574
    Buffer pool write time (milliseconds): 353641
    Files closed: 0
    Buffer pool asynch data page reads: 1694131
    Buffer pool asynch data page read reqs: 59110
    Buffer pool asynch data page writes: 53371
    Buffer pool asynch index page reads: 227455
    Buffer pool asynch index page read reqs: 8527
    Buffer pool asynch index page writes: 270292
    Buffer pool asynch xda page reads: 0
    Buffer pool asynch xda page read reqs: 0
    Buffer pool asynch xda writes: 0
    Buffer pool asynch read time: 1327887
    Buffer pool asynch write time: 347809
    No victim buffers available: 1509238
    Unread prefetch pages: 2995
  Direct I/O Statistics:
    Sectors read directly: 13610
    Sectors written directly: 1695616
    Direct read requests: 1382
    Direct write requests: 3763
    Direct read time: 3758
    Direct write time: 22236
    Vectored IOs: 67407
    Pages from vectored IOs: 1921234
    Block IOs: 0
    Pages from block IOs: 0
The efficiency of the bufferpool can be roughly calculated using the formula in Listing 5.
Listing 5. Formula for efficiency of a bufferpool
1 - ((Bufferpool data physical page reads + Bufferpool index physical page reads) divided by (Bufferpool data logical page reads + Bufferpool index logical page reads))
If the calculated number is less than 90%, increasing the size of the bufferpool is a reasonable tuning option.
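Applied to the counters in Listing 4, the calculation looks like this (plain arithmetic in a hypothetical helper):

```python
def bufferpool_hit_ratio(data_logical, index_logical,
                         data_physical, index_physical):
    """Fraction of page requests satisfied from the buffer pool
    rather than read physically from disk."""
    logical = data_logical + index_logical
    physical = data_physical + index_physical
    return 1 - physical / logical

# Counters taken from Listing 4:
ratio = bufferpool_hit_ratio(14871152, 8606405, 1699818, 290822)
print(f"{ratio:.1%}")  # → 91.5%, above the 90% guideline
```

Here the bufferpool is already above the 90% guideline, so growing it would not be the first tuning option to reach for.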
Memory analysis
Information from the database event monitor can be used for memory analysis, as shown in Listing 6.
Listing 6. Sample memory event entry
3) Database Event
  Record is the result of a flush: FALSE
  Lock Statistics:
    Lock Waits: 0
    Total time waited on locks (milliseconds): 0
    Deadlocks: 0
    Lock escalations: 0
    X lock escalations: 0
    Lock timeouts: 0
  Sort Statistics:
    Sorts: 844
    Total sort time (milliseconds): 160043
    Sort overflows: 80
    Sort share heap high water mark: 9851
    Post Shared Memory Threshold Sorts: 20
  Hash Statistics:
    Hash Joins: 25
    Hash Loops: 0
    Hash Join Small Overflows: 0
    Hash Join Overflows: 0
    Post Shared Memory Threshold Hash Joins: 0
  ……
  Node Number: 0
  Memory Pool Type: Backup/Restore/Util Heap
    Current size (bytes): 65536
    High water mark (bytes): 196608
    Configured size (bytes): 319815680
If the output contains many lock escalations or X lock escalations, the LOCKLIST memory area may be under-allocated. A high sort overflow rate (sort overflows divided by sorts) or a high hash join overflow rate ((hash join small overflows + hash join overflows) divided by hash joins) means that SORTHEAP has not been allocated enough memory. If a memory pool's high water mark is close to its configured size, the allocated memory is too small.
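With the figures from Listing 6, the two overflow rates work out as follows (a hypothetical helper doing simple arithmetic):

```python
def overflow_rates(sorts, sort_overflows, hash_joins,
                   small_overflows, overflows):
    """Return (sort overflow rate, hash join overflow rate);
    None where the denominator is zero."""
    sort_rate = sort_overflows / sorts if sorts else None
    hash_rate = (small_overflows + overflows) / hash_joins if hash_joins else None
    return sort_rate, hash_rate

# Figures from Listing 6: 844 sorts with 80 overflows,
# 25 hash joins with no overflows.
sort_rate, hash_rate = overflow_rates(844, 80, 25, 0, 0)
print(f"sort overflow rate: {sort_rate:.1%}")       # → 9.5%
print(f"hash join overflow rate: {hash_rate:.1%}")  # → 0.0%
```

A sort overflow rate near 10% suggests taking a closer look at SORTHEAP, while the hash join figures raise no concern.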
Table space and table analysis
Table space and table event monitor information can be used to identify which table space or table is the most frequently accessed, as shown in Listing 7.
Listing 7. Sample table space/table event entry
5) Tablespace Event ...
  Tablespace Name: USERSPACE1
  Record is the result of a flush: FALSE
  File System Caching: Yes
  Buffer Pool Statistics:
    Buffer pool data logical page reads: 14846454
    Buffer pool data physical page reads: 1699227
    Buffer pool data page writes: 31111
    Buffer pool index logical page reads: 8593610
    Buffer pool index physical page reads: 290381
    Buffer pool index page writes: 272125
    Buffer pool xda logical page reads: 0
    Buffer pool xda physical page reads: 0
    Buffer pool xda page writes: 0
    Buffer pool read time (milliseconds): 1529939
    Buffer pool write time (milliseconds): 350770
    Files closed: 0
    Buffer pool asynch data page reads: 1693042
    Buffer pool asynch data page read reqs: 58409
    Buffer pool asynch data page writes: 30761
    Buffer pool asynch index page reads: 227412
    Buffer pool asynch index page read reqs: 8489
    Buffer pool asynch index page writes: 270137
    Buffer pool asynch xda page reads: 0
    Buffer pool asynch xda page read reqs: 0
    Buffer pool asynch xda writes: 0
    Buffer pool asynch read time: 1325077
    Buffer pool asynch write time: 345169
    No victim buffers available: 1435565
    Unread prefetch pages: 2982
  Direct I/O Statistics:
    Sectors read directly: 3488
    Sectors written directly: 1695176
    Direct read requests: 436
    Direct write requests: 3752
    Direct read time: 476
    Direct write time: 22217

4) Table Event ...
  Table schema: SCHEMA
  Table name: TEMP (00001,00002)
  Data partition id: 0
  Record is the result of a flush: FALSE
  Table type: Temporary
  Data object pages: 1
  Index object pages: 0
  Lob object pages: 0
  Long object pages: 0
  Rows read: 3
  Rows written: 1
  Overflow Accesses: 0
  Page reorgs: 0
  Tablespace id: 1
The read and write numbers indicate how busy a table space or table is. If the most frequently accessed tables are located on the same disk as other tables, it is a good idea to relocate them to separate disks to distribute the disk reads and writes more evenly. Another solution is to distribute the data within the table over multiple physical disks.
Design tuning activities
Based on all the information you collected, you can design actual tuning activities. However, each tuning activity has its associated risk and cost. Careful risk and ROI analysis must be performed before deciding to implement a solution. The analysis might result in categorizing tuning activities into: immediate implementation, conditional implementation, or no implementation. For the case study, Table 2 was created to help make tuning decisions.
Table 2. Tuning decision table
| Tuning activities | Performance improvement | Risk | ROI | Decision | Condition |
| --- | --- | --- | --- | --- | --- |
| Add new indexes | Low | Low | Low | Immediate | N/A |
| Upgrade CPU | Medium | Low | Medium | Conditional | Peak CPU utilization reaches 90% |
| Relocate database tables | High | High | Medium | Conditional | High CPU wait I/O is observed |
Implement and evaluate the tuning result
After tuning activities are tested, you can deploy them to the production environment. To evaluate the tuning results, you can use NMON again to assess how much performance improvement was gained from the tuning.
For the case study, after the tuning options were presented to the stakeholders, only the "Add new indexes" option was selected. The other options did not offer a reasonable ROI: the stakeholders judged them either too costly or too risky, and wanted those actions taken only when absolutely necessary.
The case study focused on tuning only the most valuable SQL statements rather than all of them; the target was set at statements whose accumulated execution time exceeded 60 seconds. Running db2advis against those targeted SQL statements gave the results shown in Listing 8.
Listing 8. Sample db2advis output
Your SQL Statement execution started at timestamp 2011-04-06-11.02.28.049293
Recommending indexes...
total disk space needed for initial set [   0.134] MB
total disk space constrained to         [  67.464] MB
Trying variations of the solution set.
Optimization finished.
  3 indexes in current solution
 [  16.9089] timerons (without recommendations)
 [   7.5935] timerons (with current solution)
 [55.09%] improvement
--
-- LIST OF RECOMMENDED INDEXES
-- ===========================
-- index, 0.134MB
   CREATE INDEX "SCHEMA  "."IDX107060204130000" ON "SCHEMA"."TABLE1"
   ("FIELD1" ASC, "FIELD2" ASC, "FIELD3" ASC) ALLOW REVERSE SCANS
   COLLECT SAMPLED DETAILED STATISTICS;
   COMMIT WORK ;
--
-- RECOMMENDED EXISTING INDEXES
-- ============================
-- RUNSTATS ON TABLE "SCHEMA"."TABLE1" FOR SAMPLED DETAILED INDEX INDEX1 ;
-- COMMIT WORK ;
--
-- UNUSED EXISTING INDEXES
-- ============================
-- DROP INDEX INDEX2;
-- ===========================
13 solutions were evaluated by the advisor
DB2 Workload Performance Advisor tool is finished.
Table 3 compares the initial db2advis results with those achieved after tuning.
Table 3. db2advis results
| Database | SQL statement | Execution time (seconds) | Improvement (%) | Saved time (seconds) |
| --- | --- | --- | --- | --- |
Based on the db2advis results, the four SQL tuning activities with the highest saved time were implemented. A second NMON analysis was then performed to evaluate the tuning result. As expected, there was no significant decrease in peak CPU utilization, but the peak period shrank from about 55 minutes to no more than 50 minutes. The result was quite satisfactory to the stakeholders.
Of course, the prudent thing to do is to continue monitoring the CPU utilization and CPU wait I/O data. Further actions will be taken in the case study once these numbers reach the predefined thresholds.
Conclusion
This article demonstrated a methodology for investigating performance issues in a DB2 for Linux, UNIX, and Windows database and for making the improvements that are likely to yield the most results with the least risk to a production system.
Resources
- Visit the developerWorks resource page for DB2 for Linux, UNIX, and Windows to read articles and tutorials and connect to other resources to expand your DB2 skills.
- Learn more about Information Management at the developerWorks Information Management zone. Find technical documentation, how-to articles, education, downloads, product information, and more.
Get products and technologies
- Download a free trial version of DB2 for Linux, UNIX, and Windows.
- Now you can use DB2 for free. Download DB2 Express-C, a no-charge version of DB2 Express Edition for the community that offers the same core data features as DB2 Express Edition and provides a solid base to build and deploy applications.
- Build your next development project with IBM trial software, available for download directly from developerWorks.
- Participate in the discussion forum.
- Check out the developerWorks blogs and get involved in the developerWorks community.