The top 6 tips for a successful DB2 Performance Expert deployment for DB2 for Linux, UNIX, and Windows, Part 2: Taking performance baselines, using DB2 Workload Manager, and monitoring in a partitioned environment

Performance monitoring and tuning are critical tasks for a database administrator (DBA). DB2® Performance Expert helps DBAs tackle immediate performance problems, as well as avoid problems in the future. This article is the second of two in a series that describes the top 6 things you can do to get the most out of DB2 Performance Expert. Learn how to take performance baselines, take advantage of DB2 Workload Manager, and configure monitoring in a large DPF environment.


Alice Ma, Architect IM Solutions, Warehousing Tools, IBM

Alice Ma has over 15 years of experience in the IT industry working with customers and IBM Business Partners to architect solutions, develop, test, and provide advanced support. Currently she works on IBM InfoSphere Warehouse, building solutions specializing in performance and DBA tools at IBM Silicon Valley Lab. Her areas of expertise include problem determination, performance tuning, and recovery for DB2.



Ute Baumbach (bmb@de.ibm.com), Data Studio Software Developer and Customer Advocate, IBM

Ute Baumbach has been a software developer at the IBM lab in Germany for 18 years, where she has worked in various software development projects and roles. Most of her projects were based on DB2. For five years, she has been a member of the DB2 Performance Expert development team, now the Data Studio Performance Management development team. Ute is an IBM Certified Database Administrator and a Certified Application Developer for DB2 for Linux, UNIX, and Windows.



07 May 2009


Introduction

As a database administrator (DBA), you may find that performance monitoring and tuning are your most challenging business-critical tasks. You need comprehensive and proactive performance monitoring and tuning tools to help you resolve immediate performance issues and avoid future ones. DB2 Performance Expert for Linux®, UNIX®, and Windows® helps you optimize your company’s DB2 servers and DB2 applications, whether for data warehousing, for OLTP, or for mixed environments.

This series offers the top six things you can do to get the most out of DB2 Performance Expert, based on hands-on experience working with customers.

Part 1 explored the first three. This Part 2 describes the remaining three activities that can help you make DB2 Performance Expert work best for you.

Taking performance baselines

A performance baseline is a report about the performance of a DB2 instance during a specified timeframe. Baselines let you compare the performance of a DB2 instance across different timeframes. This article describes how to take baselines with Performance Expert using both short-term and long-term history data.

Taking baselines using short-term history data

Short-term history data can be accessed using the history slider on the monitoring panels, such as Statistic Details.

For example, assume you have a test running for 2 hours on the 23rd of August between 2:00 p.m. and 4:00 p.m. After the test, you want to save a performance baseline for the two hours of the test. To accomplish this, complete the following steps:

  1. Open Statistic Details by clicking on the icon.
  2. From the snapshot toolbar, click Data > History, and use the slider to move to the start time of the test, which is 2:00 p.m.
  3. Select Processing > Delta, and use the slider to move to the end time of the test, which is 4:00 p.m. Figure 20 shows the toolbar.
Figure 20. Choose baseline time range
Screen cap: Toolbar showing 2:00 and 4:00 start and end times


Performance Expert displays the performance metrics for exactly the two hours you requested. This is your baseline. You can tell, for example, how many rows were read during the two hours and how many sorts occurred.

Note: Only performance metrics that are of type snapshot counter have deltas calculated. Performance metrics that are of type snapshot gauge or snapshot information do not have their deltas calculated. To see the type of a counter, click on the value and press F1. A help window containing the description of the counter opens.
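To make the note above concrete, the following is a minimal SQL sketch, independent of Performance Expert, of what delta processing computes for snapshot counters. It uses the DB2 SYSIBMADM.SNAPDB administrative view; the baseline table name is illustrative.

    -- Capture the counter values at the start of the test (2:00 p.m.)
    create table baseline_start as
      (select db_name, rows_read, total_sorts from sysibmadm.snapdb)
      with no data;
    insert into baseline_start
      select db_name, rows_read, total_sorts from sysibmadm.snapdb;

    -- At the end of the test (4:00 p.m.), subtract the start values.
    -- This difference is the delta that Performance Expert shows for
    -- snapshot counters; gauges and information fields are not subtracted.
    select e.db_name,
           e.rows_read   - s.rows_read   as rows_read_delta,
           e.total_sorts - s.total_sorts as sorts_delta
      from sysibmadm.snapdb e
      join baseline_start  s on s.db_name = e.db_name;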

The timeframe that is shown in any charts on the panels is independent of the timeframe you specified for the delta processing in the snapshot toolbar. The timeframe of the charts can be specified using the + and – buttons at the right of a chart. To look at the data in the tree, complete the following steps:

  1. Click on Databases, Bufferpools, Tables, Table Spaces, or whichever you are interested in.
  2. If you selected Tables or Dynamic SQL Statements, you need to additionally check the box for Receive statement cache information to get the data displayed. The table snapshot and the dynamic SQL snapshot are more expensive snapshots, and in online mode it can take time to get the information from the monitored instance. Therefore you have to explicitly request this data.
  3. To make this baseline persistent, choose Statistic Details > Print/Export. To print or export the complete data, select All for Print range, All for Columns, and All for Rows, as shown in Figure 21.
  4. Click OK.
Figure 21. Print/Export options
Screen cap: Print/Export window with All selected for Print range, Columns, and Rows

An HTML report opens, which you can save. For the file name, include the date and timeframe so you know later what range this report represents. You can play around with Print range options to see how the output varies.

Look at some of the baseline reports in more detail. For example, Figure 22 shows the SQL Activity statistics of your database during the test.

Figure 22. Baseline of SQL Activity statistics
Screen cap: Example of SQL Activity statistics

Figures 23 and 24 show the tables used and the statements executed during the test. For each table, you see how it was used in the test, such as how many rows were read or written.

Figure 23. Used tables baseline
Screen cap: Table showing statistics of used tables
Figure 24. Executed statements baseline
Screen cap: Table showing statistics of executed statements

For each statement, you also see the execution details, such as how often a statement was executed during the test or the elapsed and average execution time. If you want the statements, tables, or bufferpools sorted by a particular metric, such as execution time, sort the information before you request the Print/Export. Sort by clicking on a column header.

Before you request the Print/Export, you can also use the filter options or customize the columns by clicking the Filter options and Customize columns icons, respectively.

If you want to Print/Export more details about a particular object, such as a bufferpool, double-click it, and select the Print/Export menu, which applies to the currently active tab.

Taking baselines using long-term history data

You can access long-term history data using Performance Warehouse by clicking the Performance Warehouse icon. Or you can access long-term history data in graphical format using Statistic Details, including trends and trend prediction. Use the Print/Export functions of these graphs as follows:

  1. Right-click a performance metric of a bufferpool, such as Bufferpool Hit Ratio.
  2. Select Analyze Performance Warehouse Data. A tab with a graphical view, trend, and trend prediction opens.
  3. Use the buttons to indicate the timeframe.
  4. Select Print/Export.

Using the Performance Warehouse, you can create scheduled reports on the long-term data using the Process function, or you can use queries or rules of thumb to analyze the data. For taking baselines, focus on report generation.

Report generation is called a process because:

  • You have the capability of scheduling report generation. For example, each Monday morning you can generate a database report from the previous week.
  • For most of the reports, the long-term history data is automatically available in the Performance Warehouse (PWH). The data is aggregated in the background into the PWH tables from the short-term history (database, bufferpool, OS data). These reports are called reports on aggregated data. However, for reports on SQL, the data must first be collected and loaded into the PWH. This collection and load step also belongs to the process. For the collection of SQL data, the statement event monitor is used on your monitored instance. Because this event monitor causes considerable overhead on the monitored instance, start processes that create SQL reports with care. These reports are called reports on SQL event monitor data.

Taking baselines using reports on aggregated data

To take a baseline using a report on aggregated data, complete the following steps. The example assumes you have a test running for two hours on the 23rd of August between 2:00 p.m. and 4:00 p.m. for which you want a baseline report.

  1. Any time after your test, open Performance Warehouse, expand Process Groups for the monitored instance, right-click Process Groups, and select Create to create your own process group and give it a name.
  2. From the Public view, right-click a report process template, such as DB Activity Trace Report, and copy it to your own process group. Other available report templates for reports on aggregated data include the Bufferpool Activity Trace Reports and the OS reports for collecting operating system data.
  3. After copying the template, edit the report process by expanding the copied DB activity trace report in your process group.
  4. Click Steps. On the right pane, the Report step appears.
  5. Double-click the Report step.
  6. Click the Options tab, and specify the interval of your test, including some time before and after the test.
  7. Select the Database name, the type Trace, and the partitions in which you are interested, as shown in Figure 25.
Figure 25. Specify report options
Screen cap: Report Step Properties window with database name PEDEMO, type Trace, and Partition Number 0 selected
  8. Click OK.
  9. Right-click your DB Activity Report, and click Execute, as shown in Figure 26.
Figure 26. Execute process to create report
Screen cap: Performance Warehouse window with DB Activity report and Execute selected
  10. Select Execute once, immediately on the scheduling window, and click Finish.
  11. In the tree, select Process Executions. In the right pane, all process executions are listed.
  12. Use the refresh button to see when your process has finished.
  13. When you see a Stop timestamp, double-click this process execution, expand the REPORT Step Execution, select Output File, and open the HTML file. The HTML file shows the DB activity report for the selected timeframe. This is your baseline.

In the DB activity report, you see detailed information about the connections to the database, sort activity, locking problems, logging activity, I/O, and so on. There is also a section in the report showing key performance indicators, as shown in Figure 27. This section gives you a good overview of how your test ran.

Figure 27. Key performance indicators in DB activity report
Screen cap: Window showing several key performance indicators in DB activity report

Click here for a larger view of Figure 27.

To reuse the same process to generate another report, change the status from Active to In definition by right-clicking the process, choosing Properties, and changing the Status. Then you can edit the process and execute it again.

Scheduling report generation on aggregated data

To execute a process to generate a report regularly, you can schedule it. For example, you might want to have a DB Activity Report from the previous week each Monday morning.

  1. Follow Steps 1-5 above.
  2. Click the Options tab, and specify Date relative to process execution. You want the process to execute on Monday mornings, and the report should contain data from the beginning of the previous week (Monday at 10:00 a.m.) until the end of the previous week (Friday at 5:00 p.m.). Specify -7 (days) as the start offset and -3 (days) as the end offset: Monday minus 7 days is the previous Monday, and Monday minus 3 days is the previous Friday.
  3. Enter 10:00 a.m. for the start time, and enter 5:00 p.m. for the end time. The filled-out panel is shown in Figure 28.
Figure 28. Specify relative process execution options
Screen cap: Report Step Properties window showing 10:00am and 5:00pm
  4. When the scheduling window opens, select the option Execute repeatedly according to a schedule.
  5. On the Schedule tab, select Execute every Monday at 09:00.
  6. On the next Monday after 9:00 a.m., go to Process Executions.
  7. Double-click the process execution, expand the REPORT Step Execution, select Output File, and open the HTML file.

Taking baselines using reports on SQL event monitor data

Using SQL event monitor data for baselines is useful only if you are running exactly the same workload during multiple tests. Otherwise, comparing the executed SQL statements to the baseline is not possible. Create this report only if you can accept the performance impact that the DB2 statement event monitor puts on your monitored database.

In the same example, you have a test running for two hours on the 23rd of August between 2:00 p.m. and 4:00 p.m. for which you want to take an SQL baseline. The SQL baseline must be captured during the test and not after, as done for the reports on aggregated data, because a DB2 statement event monitor is started to capture the data.

Follow these steps to take the baseline:

  1. During your test, open Performance Warehouse, and expand Process Groups for the monitored instance.
  2. If you don't already have your own process group, right-click Process Groups, and select Create to create one and give it a name.
  3. From the Public view, right-click a report process template, such as SQL Activity Summary Report, and copy it to your own process group. Other available report templates for reports on SQL event monitor data include SQL Activity Trace Report and, if you have stored procedures written in SQL PL running in your monitored database, SQL PL Profiler.
  4. After copying the template, edit the report process, and specify your desired report options using the following steps. The SQL Activity CRD for Queries process is also based on event monitor data, but at the end no report is created; you can analyze the collected data only by using Queries. Use the Queries process for a multi-partition monitored instance when you want to collect SQL event monitor data for multiple partitions simultaneously.
  5. In your own process group, expand the copied SQL activity summary report, and click Steps. On the right pane, the CRD, Load and Report steps appear.
  6. Double-click the CRD (Collect report data) step.
  7. Click the Options tab, and specify the database, the partition you are interested in, and the elapsed time the DB2 statement event monitor should run to collect data. Keep this time to just a few minutes to avoid collecting too much monitor data and to reduce the impact on the monitored database.
Figure 29. Specify event monitor settings
Screen cap: CRD Step Properties window with 1:00 minute of elapsed time
  8. If you leave the Event condition for event type statement field empty, the DB2 statement event monitor is started for the whole monitored database. If you want to start the event monitor only for specific applications, specify an event condition in the CREATE EVENT MONITOR statement syntax shown in Figure 30 (see the sketch after these steps).
Figure 30. Specify event monitor filters
Screen cap: Flow diagram for Event Condition Block
  9. Click OK.
  10. Double-click the Load step, and select the Options tab. During the Load step, the collected event monitor data is loaded into the performance database of the Performance Expert server.
  11. If static SQL is executed in your monitored database and you want to collect the data into the report, check the check box Retrieve the text of static SQL statements.
  12. Click OK.
  13. Right-click your SQL Activity Summary Report in the tree, and select Execute.
  14. Select Execute once, immediately on the scheduling window, and click Finish.
  15. In the tree, select Process Executions. In the right pane, all process executions are listed.
  16. Use the refresh button to see when your process has finished.
  17. When you see a Stop timestamp, double-click this process execution, expand the REPORT Step Execution, select Output File, and open the HTML file. The HTML file shows the SQL activity summary report for the selected timeframe. This is your baseline.
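The following hedged sketch shows where the event condition from Step 8 ends up. Performance Expert builds and starts the DB2 statement event monitor itself; you supply only the condition that follows WHERE. The monitor name and condition values here are illustrative.

    -- Illustrative only: Performance Expert generates a statement similar
    -- to this on the monitored database. The WHERE clause is the event
    -- condition you typed; everything else is managed by the tool.
    create event monitor pe_stmt_baseline
      for statements
      where appl_name = 'order_app' and auth_id = 'TESTUSER'
      write to table;
    set event monitor pe_stmt_baseline state 1;  -- start collecting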

The SQL Activity Summary Report gives you an overview of the executed statements sorted by total execution time. You can drill down into each statement using links to get detailed information about each statement execution.

To reuse the same process to generate another report, change the status from Active to In definition by right-clicking the process, choosing Properties, and changing the Status. Then you can edit the process and execute it again.

Scheduling report generation on SQL event monitor data

If the test you want to take a baseline on is running at night, you can schedule the report generation and the collection of SQL event monitor data by the DB2 statement event monitor.

  1. Right-click your SQL Activity Summary Report in the tree, and select Execute.
  2. Select Execute once, on a certain date or Execute repeatedly according to a schedule on the scheduling window, depending on how often you want to create a report based on your specifications in the CRD and Load step.
  3. Click Finish.
  4. On the Date & Time tab or on the Schedule tab, indicate when the data collection and report generation should be done, which should occur while the test is running.

Shortcut to report generation on SQL event monitor data

If, while your test is running, you want to capture a baseline for the SQL that is executed by a specific application, you can invoke the report generation on SQL event monitor data using Application Summary.

  1. Open Application Summary from the System Overview by clicking the Application Summary icon.
  2. Right-click the application for which you want to collect SQL event monitor data, and select Create SQL Activity Trace Report.
  3. Specify the amount of time the DB2 statement event monitor should run to collect data. Again, keep this to just a few minutes. If you want simply to generate a report on the fly and save it to disk without analyzing the collected data using queries, check the check box Delete data after report generation in the window shown in Figure 31.
Figure 31. SQL Activity Trace Settings
Screen cap: DB2-SQL Activity Tracing Report Generation showing Delete Data After Report Generation box available to check
  4. Click OK. A progress window opens, and then the report is displayed. If you want to keep the report as a baseline, save it to disk.

Additional recommendations for taking baselines on SQL performance

This article described how to take baselines on both short-term and long-term history data, including how to take a baseline on SQL performance. If you use the short-term history in delta mode, you can see in the Dynamic SQL Statement view which SQL statements were executed during a specific timeframe. If you use the long-term history, a DB2 statement event monitor is started to collect SQL event monitor data, and the collected data is stored in the performance database of the Performance Expert server.

Because the DB2 statement event monitor generates some performance overhead on the monitored database, the following are a few recommendations on when to use which SQL performance baseline method.

  • Use the Dynamic SQL Statement view in Statistic Details in history and delta mode in the following cases:
    • Your workload mainly consists of dynamic SQL
    • For the baseline, it is not important to know from which applications the SQL statements were executed
    • You take the baseline when the short-term history is still available and has not yet been deleted automatically
  • Use the Reports on SQL event monitor data based on the DB2 statement event monitor in the following cases:
    • Your workload consists of substantial static SQL
    • For your baseline, it is important to know from which applications the SQL statements were executed
    • For the long-term history, it is not enough to have the baselines just as an HTML report, and you want to keep the collected data in the performance database of the Performance Expert server for long-term analysis.

Taking advantage of DB2 Workload Manager

As of DB2 9.5, you can define and manage workload performance criteria with DB2 Workload Manager (WLM). You can use WLM directly by writing your own scripts that call its table functions, stored procedures, and event monitors to gather information about the workload. An easier alternative is to let Performance Expert gather this information for you.
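As a point of comparison, here is a sketch of the do-it-yourself route that Performance Expert saves you from scripting: querying a DB2 9.5 WLM statistics table function. The column choice is illustrative; check the DB2 Information Center for the full signature and result set.

    -- Service subclass statistics across all database partitions (-2).
    -- Empty strings match all superclasses and subclasses.
    select service_superclass_name,
           service_subclass_name,
           coord_act_completed_total,
           coord_act_lifetime_avg
      from table(wlm_get_service_subclass_stats('', '', -2)) as t;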

Performance Expert provides a convenient user interface to help you set up, configure, and display Workload Manager key performance indicators (KPIs). These KPIs, which are displayed in the Performance Expert System Overview window as shown in Figure 32, are the most common performance counters and help you gain a better understanding of your workload and how it affects your business needs.

Figure 32. Workload Manager Key Performance Indicators
Screen cap: DB2 Performance Expert window with KPI tab selected and indicators highlighted


If you are new to DB2 or to DB2 Workload Manager, Performance Expert can help you understand these counters. If you hover over a KPI, Performance Expert displays a pop-up message with a brief description of the counter, including hints and tips about how the KPIs and performance counters interact. For more information about these counters, use the Performance Expert help or refer to the DB2 Information Center (see Resources).

Additional recommendations

  • From the Performance Expert System Overview, you can drill down into the workload management definitions to gain a better understanding of your system workload. You can even create customized detailed views by using filters to see only what you want to see.
  • Using filters, you can drill down into a workload based on a particular database to examine the service class for the workload. When you select a service class, Performance Expert automatically highlights the corresponding service subclass.
  • The Performance Expert dashboard shows performance counters for your defined workloads. The dashboard can highlight what work needs to be run on your DB2 server and how this correlates to your business commitments.
  • You might want to use Performance Expert with Workload Manager to compare how your workload runs during off-peak times against how it runs during peak times. You can use Performance Expert to examine the number of connections for a workload by high watermark or during peak times.

Performance Expert Workload Manager configuration

By default, Performance Expert automatically detects DB2 Workload Manager and automatically collects Workload Manager statistics. Performance Expert stores the Workload Manager event monitor data in tables on the monitored database under the DB2 user ID assigned to the Performance Expert server. Performance Expert collects the DB2 Workload Manager counters at different intervals. To set the Workload Manager collection intervals, complete the following steps.

  1. From the Performance Expert System Overview window, right-click the monitored instance.
  2. Select Properties from the drop-down box, select the History tab, and change the intervals.
Figure 33. WLM short-term history setting
Screen cap: Performance Expert Server Properties window with History tab selected showing 60 second recording interval and 50 hour timeline
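For background, the event monitors mentioned above are ordinary DB2 objects. The following is a rough sketch, with an illustrative name, of the kind of WLM statistics event monitor involved; Performance Expert creates and manages such monitors for you, so you normally never issue this yourself.

    -- Illustrative only: a WLM statistics event monitor that writes to
    -- tables in the monitored database (DB2 9.5 syntax).
    create event monitor pe_wlm_stats for statistics write to table;
    set event monitor pe_wlm_stats state 1;  -- activate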

Configuring monitoring in a large DPF environment

Monitoring a partitioned environment can be one of a database administrator's greatest challenges. It can seem overwhelming to try to gather DB2 performance data on each partition, store the data, and correlate it. Performance Expert makes it easier to monitor a partitioned (DPF) environment by providing a nice user interface to show performance counters globally, by individual partitions, by partition groups, or by your own customized views.

Performance Expert provides high-level and detailed-level views for performance counters.

  • The individual partition view displays one partition at a time.
  • The group view displays partition groups, including your own customized groups.
  • The global view displays aggregated data from all partitions.

Figure 34 shows the different partitioned views.

Figure 34. View by each partition, by partition groups, or by global view
Screen cap: Statistics Details window showing partition information

Click here for a larger view of Figure 34.

Additional recommendations

Limit the number of monitored partitions

When using Performance Expert to monitor large partitioned environments, limit the number of partitions that Performance Expert is monitoring by defining and using partition sets. If your DB2 partitioned environment is balanced, use Performance Expert to monitor one partition on each physical machine. Collecting data on all partitions rarely makes sense, especially if you have many of them.

To define a partition set, complete the following steps:

  1. Log in to the monitored DB2 instance.
  2. Right-click on the monitored instance, and select Properties from the drop-down box.
  3. Select the Partition Sets tab.
  4. Define a new partition set by specifying a name, assigning partitions, and assigning roles to the partitions, as shown in Figure 35.
  5. Click the History tab, and select the partition set for which you want to collect historical data, as shown in Figure 35.
Figure 35. Create customized partition views
Screen cap: Performance Expert Server Properties window with Partition Sets tab selected showing partition sets

Avoid using global views

If you decide not to limit the number of partitions, avoid using global views. In a large partitioned environment, it can take a long time for the DB2 snapshot to be returned from all partitions, whether there are 20 or 90 of them. If you do choose to get a global view, understand that it might take a while for Performance Expert to return data.
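The cost difference is visible even outside Performance Expert. As a hedged sketch using the DB2 9.x snapshot table function (the database name PEDEMO is taken from the earlier figures and is illustrative), -2 aggregates over every partition, while a concrete partition number touches only that partition.

    -- Global view: the snapshot is gathered from all partitions.
    select rows_read, total_sorts
      from table(snap_get_db_v91('PEDEMO', -2)) as t;

    -- Per-partition view: only partition 3 is asked for data.
    select rows_read, total_sorts
      from table(snap_get_db_v91('PEDEMO', 3)) as t;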

Set default login mode to history only

Performance Expert enables you to view data that is current or historical (short-term). In a large partitioned environment, set the default Performance Expert login mode to history-only mode, because too many users logging in using current mode can impact the monitored DB2 instance. Set this default in the Performance Expert server properties file or through the user interface.

To configure the default login to history-only mode through the user interface, complete the following steps:

  1. Log in to the monitored DB2 instance.
  2. Right-click the monitored instance, and select Properties from the drop-down box.
  3. Select the Server Connection tab.
  4. Click the check box next to Default is history-only mode.
Figure 36. Login by history-only mode
Screen cap: Performance Expert Server Properties window with Server Connection tab selected and Logon mode selected for Default Is History-Only Mode

To enable the Default is history-only mode for all Performance Expert clients, use the db2pesrv.prop file. The db2pesrv.prop file is in the bin subdirectory of the Performance Expert installation directory, for example /opt/IBM/db2pesrv/v3/bin.

For all instances, set the following parameter in db2pesrv.prop: client.enforce_history_only_mode = yes

For a single instance, set the following parameter in db2pesrv.prop: client.enforce_history_only_mode.<instance_id> = yes
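Put together, a db2pesrv.prop fragment might look like the following. The instance ID 2 is hypothetical; use the Performance Expert instance ID of your monitored instance.

    # Enforce history-only logon for all monitored instances ...
    client.enforce_history_only_mode = yes
    # ... or only for the instance with Performance Expert instance ID 2
    client.enforce_history_only_mode.2 = yes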

Selectively disable data collection

If the monitored instance has many containers that rarely change, disable the Performance Expert server from collecting container information. If you are using DB2 automatic storage, you need to decide whether you want Performance Expert to collect counters for tablespaces. Because DB2 automatically manages your containers and space allocation for your tablespace, you might not want Performance Expert to collect these counters. By default, Performance Expert captures containers and tablespace counters so you can see when DB2 automatic storage makes changes. You can always disable Performance Expert from collecting container and tablespace data. See Resources for more information.

To disable Performance Expert Server from collecting data, connect to the performance database and issue one of the following commands:

  • To disable tablespace and container collection: update db2pm_<instance ID>.parameter set PA_FLAGVALUE='Y' where PA_KEY='DISABLE_TABLESPACE_SNAPSHOT'
  • To disable container collection only: update db2pm_<instance ID>.parameter set PA_FLAGVALUE='Y' where PA_KEY='DISABLE_LCONTAINERS_SNAPSHOT'
  • To disable global snapshot data in a partitioned environment (all snapshot categories): update db2pm_<instance ID>.parameter set PA_FLAGVALUE='Y' where PA_KEY='DISABLE_GLOBAL_SNAPSHOT'
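To check which flags are currently set, you can query the same parameter table. This query is an assumption based on the commands above rather than a documented product interface; replace <instance ID> with your Performance Expert instance ID.

    -- List the snapshot-collection flags and their current values.
    select PA_KEY, PA_FLAGVALUE
      from db2pm_<instance ID>.parameter
     where PA_KEY like 'DISABLE%SNAPSHOT';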

Conclusion

DB2 Performance Expert for Linux, UNIX, and Windows is a rich and powerful tool to help you monitor and tune database performance. This article described how to take performance baselines, how to take advantage of DB2 Workload Manager, and how to configure monitoring in a large DPF environment.

Resources


