Optim Performance Manager (OPM), formerly known as DB2 Performance Expert, helps organizations resolve database and database application problems before they impact the business. Optim Performance Manager, Version 4.1, was announced on 06 April 2010 after extensive work to make the product easier to install, get up and running, and use.
Optim Performance Manager supports monitoring of DB2 for Linux, UNIX, and Windows V9 databases, including single partition, multi-partition, and pureScale databases.
Significant new capability has also been added to a new product offering known as Optim Performance Manager Extended Edition. This new offering includes the base Optim Performance Manager capabilities augmented with the Extended Insight capabilities for end-to-end database monitoring, integration with Tivoli enterprise monitoring solutions, and support for configuring the DB2 Workload Manager capabilities.
In this article, you'll take a tour of the new and enhanced capabilities in Optim Performance Manager 4.1, including:
- Guided analysis using new web-based health summary and diagnostic drilldowns
- Trend analysis using interactive reporting
- Rapid deployment for rapid return
- Flexible administrative control with monitoring privileges
- Enhanced integration to enable end-to-end diagnosis and tuning
- Problem prevention using proactive workload monitoring. (In Version 4.1.0.1, the basic workload management configuration is available in Optim Performance Manager and no longer requires the Extended Edition. Capabilities for configuring autonomic management and workload manager reporting are available only in Extended Edition.)
- Extended insights into more application environments (only in Extended Edition)
- Integration with Tivoli enterprise monitoring (only in Extended Edition)
On 22 October 2010, IBM made available significant enhancements to both Optim Performance Manager and its Extended Edition product. This article highlights those enhancements.
A new web-based user interface makes it easier to get performance information by removing reliance on a workstation client. (The DB2 Performance Expert Client is still available as well.) A health summary view provides instant visual indicators of the health of all monitored databases based on key performance indicators. The health summary view also provides visual alerts for problematic areas, such as I/O, memory, logging, workload, sorting, and locking.
From any alert, you can display more details about the alert and then drill down to detailed diagnostic dashboards for each of these areas. These dashboards provide important performance metrics and running SQL statements for immediate problem detection. Figure 1 shows an example of a health summary view in which applications are waiting too long for locks. (Note: Lock wait alerts require DB2 9.7 Fix Pack 1, or later.)
Figure 1. Health summary in Optim Performance Manager
The Actions tab in Figure 1 provides the drill down link to the Locking dashboard. Figure 2 shows how you can drill down to the details to see which applications are involved in the locking problem and which statements those applications are executing.
Figure 2. Details of locking problem showing executing SQL statement, locked objects, and executing application
You might also receive an alert for a database in an e-mail. You can start directly from the Overview dashboard for that database to see key performance indicators for multiple problem areas. From there you can drill down to the more detailed diagnostic dashboards for each area. For example, Figure 3 shows the Overview dashboard for the pedemo database, which indicates problems in the I/O, locking, and system areas, as indicated by red alert icons.
Figure 3. Overview dashboard for a database in Optim Performance Manager
Graphical views, which you can open for most performance indicators on the Overview dashboard, show you how your system behaves over time so that you can identify bottlenecks or peaks. Links lead you to the appropriate detailed dashboard for further analysis of the problem. For example, Figure 4 shows the graph of the buffer pool hit ratio opened from the Overview dashboard. The link to the detailed Buffer Pool and I/O dashboard is highlighted.
Figure 4. Graph of buffer pool hit ratio over time
If you click the link to the Buffer Pool and I/O dashboard from the Overview dashboard, you see a screen like the one shown in Figure 5.
Figure 5. Buffer Pool and I/O diagnostic dashboard
This dashboard shows you how efficiently your buffer pools are working. From this dashboard, you can choose a particular buffer pool and drill down to the tablespaces and tables using that buffer pool.
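The hit ratio itself is a simple derived metric. As a rough sketch (the counter names here are illustrative, not OPM's exact monitor element names), the calculation looks like this:

```python
def bufferpool_hit_ratio(logical_reads, physical_reads):
    """Fraction of page requests satisfied from the buffer pool.

    A logical read that did not require a physical read was served
    from memory. Returns None when no pages were requested at all.
    """
    if logical_reads == 0:
        return None
    return (logical_reads - physical_reads) / logical_reads

# 95,000 of 100,000 page requests served from memory
ratio = bufferpool_hit_ratio(100_000, 5_000)
```

A ratio close to 1.0 means most page requests were served from memory; a falling ratio is a cue to drill down to the table spaces and tables using that buffer pool.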
There are several ways you can be guided to detailed diagnostic dashboards that help you analyze particular performance problems. To give you a sense of the detailed information available to you, Table 1 summarizes the diagnostic dashboards in Optim Performance Manager:
Table 1. Optim Performance Manager diagnostic dashboards
| Dashboard | Description |
| --- | --- |
| Active SQL | Identifies and analyzes long-running queries in a given time frame. You can stop a query. If Optim Query Tuner is installed, you can launch it in context to do more tuning. |
| Buffer pool and I/O | Checks and tunes database I/O at the buffer pool, table space, and table level. |
| Extended Insight (only in Extended Edition) | Checks transaction response times of your database applications and determines where and why the time was spent. If Optim Query Tuner is installed, you can launch it in context to do further tuning. See Figure 10. |
| Logging | Checks and tunes log performance. |
| Locking | Identifies and analyzes deadlocks, timeouts, and locking conflicts. If Optim Query Tuner is installed, you can launch it in context to do further tuning. |
| Memory | Checks the DB2 instance and database memory consumption. Determines whether memory should be increased or decreased. |
| System | Checks system resources. If you have Optim Performance Manager Extended Edition, you can launch into Tivoli Monitoring (if installed) to get more detailed information about system resources. |
| Utility | Plans execution of utilities and identifies failures. |
| Workload | Gives an overview of workload utilization. |
Behind the web-based user interface, Optim Performance Manager uses a powerful repository server that collects performance metrics from the monitored database at one-minute intervals (a configurable value) and stores them in a DB2 database. This enables post-mortem problem detection and resolution, answering questions such as "What happened over the weekend?" Optim Performance Manager also helps you detect trends over time so you can plan for future growth. (Predefined reports can help with this analysis. See the section Trend analysis using interactive reporting for more details.) Each diagnostic dashboard has an intuitive time slider that enables you to browse through the collected performance data and analyze what happened during the timeframe in which a problem occurred, as shown in Figure 6.
Figure 6. Slider bar lets you review current or historical performance data
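Conceptually, the time slider just restricts the view to the repository samples that fall inside a chosen window. A minimal sketch, with plain Python structures standing in for the DB2 repository tables:

```python
from datetime import datetime, timedelta

# Illustrative samples: (timestamp, lock waits) collected at one-minute intervals.
samples = [
    (datetime(2010, 10, 23, 14, 0) + timedelta(minutes=i), waits)
    for i, waits in enumerate([0, 1, 0, 7, 9, 2, 0])
]

def window(samples, start, end):
    """Return the samples whose timestamps fall inside [start, end)."""
    return [(ts, v) for ts, v in samples if start <= ts < end]

# Zoom the "slider" in on 14:03-14:05 to inspect the lock-wait spike.
peak = window(samples,
              datetime(2010, 10, 23, 14, 3),
              datetime(2010, 10, 23, 14, 5))
```

Because every sample is retained for the configured retention period, the same query works equally well for live monitoring and for post-mortem analysis days later.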
This section describes the predefined, interactive reports that are available to help you get started with trend analysis. These reports are interactive in that you can drill down from the generated report to get more detailed information. The following reports are available:
- Top n dynamic SQL statements
- Connected applications
- Database and database manager configurations
- Disk space consumption for table spaces, including growth rate
Figure 7 shows an example of a top n dynamic SQL statement report.
Figure 7. Example of a top 10 dynamic SQL report
In the top n SQL report, you can click on a statement to drill down into more statement details. Figure 8 shows an example of a report that shows disk space consumption for table spaces, including growth rate.
Figure 8. Example of table space disk space consumption report
In the table space disk consumption report, you can click on a table space to drill down into more details.
This section describes enhancements to get you up and running faster, including an integrated installer and predefined monitoring profiles.
Optim Performance Manager uses an integrated installer to set up all of its components, including the application server and the DB2 repository. After installation, you can directly launch the web UI and add the databases that you want to monitor. You configure monitoring settings for a monitored database by choosing one of the predefined system templates. Templates exist for OLTP, business intelligence, and SAP databases, as shown in Figure 9.
Figure 9. Configure monitoring wizard lists predefined templates for monitoring
Click Finish to start monitoring, or you can adapt the chosen template.
The capability to control who is allowed to monitor and who is allowed to configure is new in Optim Performance Manager 4.1. Users without privileges can see only the cross-database Health Summary and the Alert Overview information. Users with the new canMonitor privilege on a monitored database are also allowed to look at the detailed dashboards for any monitored database for which they have that privilege. And users with the canManageAlerts privilege on a monitored database are allowed to change alert settings, such as alert thresholds. This privilege system makes it easier to enable more people to monitor the database while restricting configuration control to a more select set of people.
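The privilege model can be pictured as a per-database check. The sketch below mirrors the canMonitor and canManageAlerts names from the text, but the data model and function are purely illustrative, not OPM's API:

```python
# user -> database -> set of privileges; an illustrative model, not OPM's API.
grants = {
    "alice": {"pedemo": {"canMonitor", "canManageAlerts"}},
    "bob":   {"pedemo": {"canMonitor"}},
}

def can(user, database, privilege):
    """Check whether a user holds a privilege on a monitored database."""
    return privilege in grants.get(user, {}).get(database, set())

# Everyone sees the cross-database health summary; detailed dashboards
# and alert configuration each require the corresponding privilege.
bob_can_drill = can("bob", "pedemo", "canMonitor")           # True
bob_can_configure = can("bob", "pedemo", "canManageAlerts")  # False
```

Because the check is per database, the same user can be a full administrator of one monitored database while having read-only, summary-level visibility of all the others.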
One of the biggest pain points in performance is poorly performing SQL. After installation, Optim Query Tuner can be launched from Optim Performance Manager from any dashboard that analyzes SQL activity, and the specified SQL statement is carried into the Query Tuner context. The following dashboards support the capability to launch Query Tuner:
- Active SQL Dashboard to identify long-running SQL statements
- Locking Dashboard to identify SQL statements that cause locking problems
- Extended Insight Dashboard (only with Extended Edition) to identify SQL statements that are part of transactions with high response times, as shown in Figure 10.
Figure 10. Launch point to Optim Query Tuner from Extended Insight Dashboard
A key capability for managing workload prioritization and resource utilization in DB2 for Linux, UNIX, and Windows is the DB2 workload manager (WLM). Available as part of DB2 Advanced Enterprise Server Edition, DB2 workload manager helps automatically manage workloads according to your priorities. This helps to manage resource utilization, especially in cases where workloads vary widely. For example, you can prioritize operational workloads that are in and out quickly over ad-hoc activities such as auditing or ad-hoc reporting. You can assign workloads directly to service subclasses (such as the CEO work and price-lookup workloads), or you can have DB2 WLM assign workloads to a subclass based on the estimated cost of the workload, as shown in Figure 11. The cost of a workload is based on the optimizer cost, which includes CPU and elapsed time in the calculation. If you have OPM EE, you can also set response time objectives for your subclasses; a service then continually adjusts the configuration so that the performance objectives are met.
Figure 11. Map workloads to service subclasses either directly or by estimated cost
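Mapping by estimated cost amounts to bucketing each activity's optimizer cost (in timerons) against configured band boundaries. The following sketch shows the idea; the boundaries and subclass names are invented for illustration:

```python
# Cost band boundaries (timerons) mapped to service subclasses.
# The boundaries and subclass names here are invented for illustration.
COST_BANDS = [
    (10_000, "DS_HIGH_CONC_SUBCLASS"),       # cheap, short queries
    (100_000, "DS_MED_CONC_SUBCLASS"),       # mid-range queries
    (float("inf"), "DS_LOW_CONC_SUBCLASS"),  # expensive, long-running queries
]

def subclass_for(estimated_cost):
    """Assign an activity to the first band that covers its optimizer cost."""
    for upper_bound, subclass in COST_BANDS:
        if estimated_cost < upper_bound:
            return subclass

price_lookup = subclass_for(350)        # cheap OLTP query
audit_report = subclass_for(2_500_000)  # heavyweight ad-hoc report
```

Cost-based mapping is useful precisely when you cannot predict which user or application will submit the expensive work: the optimizer estimate routes the activity regardless of who issued it.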
An obvious benefit of using DB2 workload manager is to prevent problems caused by low-priority work or rogue queries consuming system resources such that higher-priority work cannot get the resources needed to meet service level agreements. (See Resources for more information about DB2 WLM capabilities.) The tooling solution for configuring DB2 workload manager is part of Optim Performance Manager, which is also included in the DB2 Advanced Enterprise Server Edition.
Service superclasses enable you to take a first cut at dividing resources at a higher level than that represented by workloads or users. You can also see information you need in context while making configuration decisions. The following sections go into more detail about these capabilities and then show some scenarios using the WLM configuration tool from within OPM.
In OPM, you can create service superclasses, into which you can group users or applications. (In 4.1, these were known as business processes.) You can use concurrency limits as a way to roughly divide system resources, as shown in Figure 12. You can also apply different policies to different groups or applications.
Figure 12. Setting concurrency limits for a service superclass
The WLM tooling in OPM highlights the most useful metrics for configuration and presents them in the context in which you need them. Connection attributes, such as application name, user name, group name, or other configuration tags, are used to define workloads. Previously, administrators had to leave the configuration tool to find these connection attributes. In 4.1.0.1, you can see the connection attributes for all currently running activities, and the workloads to which they are currently assigned, directly from the configuration interface, as shown in Figure 13.
Figure 13. The correct information, in context, for configuring WLM
Support for autonomic performance objectives is new in 4.1.0.1. If you want to ensure that the activities of a workload meet a required response time, you can enable autonomic performance objectives. The autonomic performance objective service monitors the response times of recent requests and adjusts the concurrency limit of the activity's subclass as needed.
The next sections explore three example configuration scenarios:
- Reduce the impact of long-running queries
- Ensure more consistent response times for high-priority applications
- Enforce target response times with autonomic performance objectives
This scenario separates large queries into their own service classes and then limits the number of those allowed to run concurrently. The example also adds a threshold limit that enables the automatic cancellation of rogue queries that exceed the threshold.
OPM automatically creates a template configuration for you, which includes a service subclass intended for long-running queries (called DS_LOW_CONC_SUBCLASS). All you need to specify is the estimated cost of what is defined as a large query for your enterprise. You can then specify a concurrency limit for this service subclass to restrict how many large queries are allowed to run concurrently. Figure 14 shows that for the example enterprise, the minimum cost used to define a large query is 100000 timerons, and the upper limit is unbounded. The concurrency limit for this service subclass is 8 concurrent activities.
Figure 14. Template configuration for the low-priority service subclass
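With a concurrency limit of 8, the ninth large query queues instead of running. The admission logic can be sketched as follows (an illustration of the concept, not DB2's actual scheduler):

```python
from collections import deque

class SubclassQueue:
    """Admit at most `limit` concurrent activities; queue the rest."""

    def __init__(self, limit):
        self.limit = limit
        self.running = 0
        self.queued = deque()

    def submit(self, activity):
        if self.running < self.limit:
            self.running += 1
            return "running"
        self.queued.append(activity)
        return "queued"

    def finish(self):
        """An activity completed; promote the oldest queued one, if any."""
        self.running -= 1
        if self.queued:
            self.queued.popleft()
            self.running += 1

low = SubclassQueue(limit=8)
states = [low.submit(f"query-{i}") for i in range(10)]
```

The ninth and tenth submissions wait in the queue and are promoted as running activities complete, so large queries still finish; they just can no longer monopolize the system.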
You can create more service subclasses for finer-grained control and monitoring.
Once you have separated long-running queries in their own service subclass, you can further reduce their impact by imposing thresholds. You can enable thresholds to monitor activities that exceed a limit, stop the activity, or both. Figure 15 shows that the example imposes a 60-minute limit on queries in that service subclass.
Figure 15. Defining a threshold limit on queries
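Conceptually, a time threshold is a sweep over running activities that either reports or stops any activity past the limit, depending on how you configure it. A hedged sketch of that decision:

```python
def enforce_threshold(activities, limit_minutes=60, action="stop"):
    """Return (violations, survivors) for activities exceeding the limit.

    Each activity is (name, elapsed_minutes). With action="stop",
    violators are removed from the running set; with action="monitor",
    they are only reported. Purely illustrative of the concept.
    """
    violations = [a for a in activities if a[1] > limit_minutes]
    if action == "stop":
        survivors = [a for a in activities if a[1] <= limit_minutes]
    else:
        survivors = list(activities)
    return violations, survivors

running = [("audit_q", 75), ("lookup_q", 1), ("report_q", 59)]
stopped, still_running = enforce_threshold(running)
```

Starting with `action="monitor"` lets you see what a threshold would have cancelled before you turn on automatic stopping.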
Limiting the concurrency for service subclasses that have lower business priority can make the response times more consistent for critical applications. You can use reports to help you evaluate how consistent the response times are for a particular service subclass during a particular time period, which you can specify using a slider bar. Figure 16 and Figure 17 show the view of the activity's elapsed time in the price_lookup application before and after using WLM to limit concurrency on competing, less-important service subclasses. Although the majority of activities took 0.0025 seconds to complete, a few activities ran for as long as 0.305 seconds, as shown in Figure 16.
Figure 16. Graph view of elapsed time before activating concurrency settings
Figure 17 shows the same application after limiting concurrency on competing activities.
Figure 17. Graph view of elapsed time after activating concurrency settings
After limiting concurrency, no activity took longer than 0.0855 seconds, a more than three-fold improvement in worst-case performance. In both cases, the most common response time was 0.004 seconds. However, before the change, nearly 10% of activities took 0.0155 seconds or longer, while after limiting concurrency, less than 5% of activities took that long. In other words, overall response time is lower, and activity is smoother and less erratic.
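This kind of before/after comparison is essentially a tail-percentile comparison over two samples of elapsed times. The sketch below uses invented samples that roughly match the distributions described above:

```python
def fraction_at_or_above(times, cutoff):
    """Fraction of response times that are >= cutoff seconds."""
    return sum(1 for t in times if t >= cutoff) / len(times)

# Invented samples shaped like the before/after distributions in the text.
before = [0.004] * 90 + [0.0155] * 8 + [0.10, 0.305]
after  = [0.004] * 96 + [0.0155] * 2 + [0.05, 0.0855]

cutoff = 0.0155
worse_before = fraction_at_or_above(before, cutoff)  # about 10% of activities
worse_after  = fraction_at_or_above(after, cutoff)   # under 5%
```

Comparing tail fractions at a fixed cutoff, rather than only averages, is what reveals the "smoothing" effect: the typical response time barely moves while the slow outliers shrink.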
Histograms are also available. For easy visual comparison, you can juxtapose matching histograms from before and after you make a change.
You can further improve the average response time of the application by enabling autonomic performance objectives. In Scenario 2, after limiting concurrency, 88.6% of activities finished in 1ms or less. If you change the enforcement type of the application's workload from fixed to performance objective, as shown in Figure 18, you can automatically set the concurrency limit such that 1ms response times are achieved more often.
Figure 18. Setting a performance objective for a service subclass
In the example in Figure 18, if fewer than 95% of activities complete within 1ms, the autonomic performance objective service increases the concurrency limit until the objective is met. However, the application will never be allowed more than 40 concurrent activities.
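The autonomic service behaves like a simple feedback loop: measure how often the objective is met, and raise the subclass concurrency limit, up to the cap, while it is missed. The step size and policy below are invented for illustration; the real service's algorithm is internal to OPM EE:

```python
def adjust_limit(current_limit, met_fraction, target=0.95, max_limit=40):
    """One control step: raise the limit while the objective is missed.

    Step size and policy are invented; the real service's algorithm
    is internal to OPM EE.
    """
    if met_fraction < target and current_limit < max_limit:
        return current_limit + 1
    return current_limit

limit = 8
# Simulated measurements of the fraction of activities finishing within 1 ms.
for met_fraction in [0.886, 0.91, 0.94, 0.96]:
    limit = adjust_limit(limit, met_fraction)
```

The cap (40 concurrent activities in the Figure 18 example) keeps the controller from starving every other subclass in pursuit of one workload's objective.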
The capability of working with the details of a workload manager configuration is new in 4.1.0.1. The WLM tooling in OPM makes configuration easy by guiding you through the process and presenting you with the most relevant information and settings. However, you can work directly with the WLM settings when necessary by using the detailed configuration editor, as shown in Figure 19.
Figure 19. The details of a WLM configuration
The following sections describe features that are available only in the Extended Edition of Optim Performance Manager (OPM EE).
Extended insight is the capability to monitor and report on the end-to-end database response time of an application. With Extended Insight, the transaction and each individual SQL statement are measured at each step of the journey as they traverse the software stack. This helps immediately pinpoint where a response-time issue is occurring: in the application server (such as WebSphere), the network, or the database. You can set thresholds on the desired response-time SLAs, and you can view alerts if the response times of your transactions exceed a threshold.
- Extended insight to CLI, .NET, and static applications
- Application metadata to pinpoint problems in the application source
- Statement execution details about the database (new in 4.1.0.1)
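End-to-end monitoring decomposes a transaction's total response time into per-layer contributions and raises an alert when the total exceeds the SLA threshold. A toy decomposition (all numbers invented):

```python
def breakdown(app_ms, driver_ms, network_ms, server_ms, sla_ms):
    """Split total response time into per-layer shares and check the SLA."""
    total = app_ms + driver_ms + network_ms + server_ms
    shares = {
        "application": app_ms / total,
        "driver": driver_ms / total,
        "network": network_ms / total,
        "data server": server_ms / total,
    }
    return total, shares, total > sla_ms

# A transaction that misses its 100 ms SLA, mostly inside the data server.
total, shares, violated = breakdown(10, 5, 15, 120, sla_ms=100)
```

The per-layer shares are what make the alert actionable: a violation dominated by the data server sends you to the database dashboards, while one dominated by network time sends you elsewhere entirely.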
With Extended Insight, you can monitor applications that use Java, CLI, or .NET to access the database. This includes many important business applications, such as Cognos®, DataStage®, or SAP. For these applications, you will see information about the application, including driver time, network time, and application time, as shown in Figure 20.
Figure 20. CLI workloads now available for extended insight
The following environments are supported in 4.1.0.1:
- .NET applications
  - Requires the DB2 Data Server Client Package, Version 9.7, Fix Pack 3 or later. Configuration is the same as for CLI applications.
- Type 2 connections (CLI or Java Common Client)
  - Requires the DB2 Data Server Client Package, Version 9.7, Fix Pack 2 or later.
- Static SQL in CLI applications
  - Requires the DB2 Data Server Client Package, Version 9.7, Fix Pack 3 or later. With this support, you can take advantage of the capability to convert dynamic SQL to static SQL in CLI applications using the client optimization capabilities in the separately available Optim pureQuery Runtime. You can also monitor those static workloads in Optim Performance Manager Extended Edition.
To make it even easier to get started with Extended Insight, there are predefined workload views for WebSphere Application Server, SAP, Cognos, DataStage, and InfoSphere™ SQL Warehouse tasks. The views enable you to distinguish response times from different users, applications, hostnames, WebSphere Application Servers, and SAP systems.
Another important enhancement in the Extended Insight capability in OPM EE 4.1 is support for static applications. IBM has touted the benefits of static SQL for a long time, including:
- Improved manageability (with use of distinguishing package names)
- Improved and more consistent performance
- Better security (less risk of dynamic SQL injection)
With OPM EE 4.1, you can use the Extended Insight capability to get detailed information about applications using static SQL, including package, section, and collection information. You can also get detailed information about monitoring data, as shown in Figure 21.
Figure 21. Static SQL supported with Extended Insight
Application metadata to pinpoint problems in the application source is available in 4.1.0.1. Determining the source of the SQL so it can be modified is an important step in tuning SQL. This can be like finding a needle in a haystack, especially if the SQL was generated by a third party, such as Hibernate or JPA. To help identify the source code, the Extended Insight dashboard can display pureQuery metadata, such as Java class, package, application name, method name, and source line number, as shown in Figure 22.
Figure 22. Java source metadata displayed in the Extended Insight dashboard
Pinpointing problems enables the database administrator and the developer to collaborate by quickly identifying the source SQL. This feature requires a separate license for pureQuery Runtime, Version 2.2.1 or later.
Statement execution details about the database are new in 4.1.0.1. With DB2 V9.7, Fix Pack 1 or later, the time a transaction spends on the database server is further broken down to show in more detail where the SQL transaction spends its time in the database, such as in I/O operations, sorting, or waiting for locks. Figure 23 shows the SQL transaction breakdown.
Figure 23. More details on database server time are available in 4.1.0.1
In Version 4.1.0.1, you can gather detailed database breakdown statistics, not just for the transaction but for individual SQL statements. This helps you determine, for example, which SQL statement is causing high I/O times. Figure 24 shows the response time chart for the transactions executed for an application. The right side shows the top SQL statements that the dtrader application executed. Selecting a statement shows, in the lower section, the execution details of that statement. The General Information tab shows how much time the statement spent in the application, network, or data server. The Statement Server Execution Details tab shows the time breakdown and further execution details of the statement on the data server itself.
Figure 24. Server statement execution details available with Extended Insight dashboard
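Once per-statement server-time breakdowns are available, finding the statement responsible for, say, high I/O time is a ranking over one component of the breakdown. A sketch with invented statements and timings:

```python
# Invented per-statement server-time components, in milliseconds.
statements = [
    {"sql": "SELECT * FROM orders",               "io": 420, "sort": 10, "lock_wait": 5},
    {"sql": "UPDATE inventory SET qty = qty - 1", "io": 30,  "sort": 0,  "lock_wait": 260},
    {"sql": "SELECT price FROM prices",           "io": 12,  "sort": 2,  "lock_wait": 1},
]

def top_by(statements, metric, n=1):
    """Rank statements by one component of their server time."""
    return sorted(statements, key=lambda s: s[metric], reverse=True)[:n]

worst_io = top_by(statements, "io")[0]["sql"]           # the high-I/O statement
worst_lock = top_by(statements, "lock_wait")[0]["sql"]  # the lock-bound statement
```

Ranking by component rather than by total time matters: the statement with the worst I/O and the statement with the worst lock waits are often different statements needing different fixes.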
Optim Performance Manager Extended Edition integrates the deep database performance insight of Optim with the broad, enterprise-wide insights of IBM Tivoli monitoring products. This combination extends transaction response-time monitoring from the database to the end-to-end transaction path.
Database application environments can be complex, often including several middleware components through which transactions can flow, including web servers, application servers, message servers, transaction servers, and database servers, as shown in Figure 25.
Figure 25. Complex application environments with specialized tools for diagnosis and correction
The product IBM Tivoli Composite Application Manager (ITCAM) for Transactions can keep a watch over the entire end-to-end transaction path that touches many of these components. When ITCAM for Transactions detects a transaction execution problem, it can isolate the problem to individual components in the end-to-end transaction path. It can then provide a launch point for investigation into the components. For any transaction problems in the DB2 database component, ITCAM for Transactions can launch the Extended Insight capability in Optim Performance Manager Extended Edition in the context of the problematic database transactions. This capability enables you to use the database insights Optim provides to further isolate the problem and drive it to resolution. Furthermore, Tivoli monitoring provides deeper, more extensive operating system, network, and storage information that you can access from within the system dashboard of OPM EE.
The following three scenarios further illustrate the benefits of the Tivoli and Optim integration:
- Locate and isolate database transaction problems within an end-to-end transaction path
- Drill down on a database transaction problem
- Use Tivoli monitoring to drill up from Optim Performance Manager to investigate a potential system problem
In all three scenarios, the Tivoli Enterprise Portal (TEP) console is used as the user interface through which all activities take place.
Figure 26 shows an ITCAM for Transactions view of a topology of a single (simplified) transaction path that flows through three components: a WebSphere application server instance, a JDBC driver, and a DB2 database. Arrows that connect component to component show the transaction elapsed times.
Figure 26. Tivoli topology view
This visualization of the transaction path makes it much easier for operations personnel to explore the end-to-end transaction path to identify and isolate any problems. Note that although Figure 26 shows a very simple transaction topology, you can explore more complex topologies using helpful topological navigation aids within TEP. You can also use ITCAM alerts, notifications, and situations to find potential problems in an automated fashion.
If you discover a database transaction problem within the end-to-end transaction topology, you can easily drill down in context using the Extended Insight capability of OPM EE to further isolate the problem in the database component and find a fix quickly. Figure 27 illustrates this drill-down capability.
Figure 27. Tivoli topology view with alert raised on database
In this scenario, there are several JSPs running on a WebSphere Application Server that are each executing SQL through the JDBC driver to the DB2 for Linux, UNIX, and Windows database (called GSDB). Note that the average elapsed execution time of the transactions as they go from the JDBC driver to the database is 40 milliseconds, which has raised an alert on the database (indicated by a red arrow in the lower right corner of the database icon). This alert tells operations personnel that they need to drill down on this database to investigate this alert. To drill down, click the database icon, and click Database Diagnostics, as shown in Figure 28.
Figure 28. Launching into Optim Performance Manager
The Extended Insight dashboard of Optim Performance Manager Extended Edition launches in a new view within TEP, in the same transactional context, which enables you to investigate this database's transactions, as shown in Figure 29.
Figure 29. Optim Extended Insight dashboard in the Tivoli Enterprise Portal (TEP)
At this point, a database administrator can further investigate these transactions to isolate the problem using the domain-expert capabilities that OPM EE offers. All capabilities of the OPM EE interface are available from the TEP.
This third scenario describes a contextual launch in the opposite direction of Scenario 2. In this case, you are still in TEP, but you are investigating a potential database issue within OPM EE. During your investigation, you notice on the OPM EE system dashboard that there is a potential system problem. To get more details, click the icon to launch into detailed Tivoli system information views (still within TEP) for the system that is under investigation within the OPM system dashboard, as shown in Figure 30.
Figure 30. Launching into Tivoli system information from OPM
Although the integration points were described as three separate scenarios, a typical problem-isolation scenario would seamlessly blend all these scenarios together in various combinations.
The previous sections described key capabilities and the scenarios in which those capabilities are used. Table 2 shows how these capabilities are delivered. A useful set of base capabilities, including the web-based user interface, reporting capabilities, and basic WLM configuration, is delivered in Optim Performance Manager. These base capabilities are available at no extra charge with DB2 Advanced Enterprise Server Edition. All of the base capabilities are included and extended in the Extended Edition package, which adds the Extended Insight capability, configuration of WLM to achieve autonomic performance objectives, and Tivoli integration.
Table 2. Optim Performance Manager packages
| Feature | Optim Performance Manager (included in DB2 Advanced Enterprise Server Edition) | Optim Performance Manager Extended Edition |
| --- | --- | --- |
| Alerts and notifications | X | X |
| Overview health summary | X | X |
| DB2 WLM administration tooling | X | X |
| Configuring workloads with autonomic performance objectives | | X |
| Extended Insight (database end-to-end response time monitoring for Java and CLI) | | X |
| Tivoli ITCAM integration | | X |
This article described the key enhancements in DB2 performance monitoring provided by Optim Performance Manager, including the following:
- Proactive performance management
- DB2 WLM (workload management) solution to assign resources to high-priority applications. DB2 WLM is available in DB2 Advanced Enterprise Server Edition.
- Configuring workloads for autonomic performance objectives. Configuring workloads requires an upgrade to Optim Performance Manager Extended Edition.
- Trending reports that help to proactively plan for future capacity.
- Alert and system overview dashboards to quickly identify problems.
- Guided problem-solving approach
- From problem identification to diagnostics to resolution.
- Integration with Tivoli for overall health with database drilldown.
- Integration with Optim solutions for SQL resolution.
- Overall health
- Engine monitoring with Optim Performance Manager.
- Application monitoring with Optim Performance Manager Extended Edition.
- Out-of-the-box application monitoring for SAP, Cognos, DataStage, Java (WebSphere), CLI, and .NET applications.
- Reduced time-to-value
- Simplify installation and configuration.
- Simplify problem resolution.
- Provide reporting capabilities for health reporting and trend analysis.
Optim Performance Manager Extended Edition plays a central role in a performance management solution that incorporates the lifecycle approach of prevent, identify, diagnose, and solve. The integrations provided with the solution are illustrated in Figure 31, which shows how the various pieces of the solution work together.
Figure 31. Optim performance management solution to prevent, identify, diagnose, and resolve database performance problems
DB2 workload manager, with configuration assistance provided by Optim Performance Manager Extended Edition, helps prevent problems caused by runaway queries or misallocation of resources to workloads that are not business critical. Alerting in Tivoli ITCAM and in the OPM Health Summary helps you identify issues quickly. Detailed dashboards help you diagnose the problem by guiding you through problem areas to the source of the problem. For problem queries, Optim integrations can help you resolve problems by tuning queries in context and by providing advice on whether database changes are required or whether queries need to be rewritten and replaced. If queries need to be replaced, Optim integrations provide the exact location in the source application where the change is required.
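To make the prevention step concrete, the following DB2 WLM DDL is a minimal sketch of the kind of configuration that the tooling helps you set up. The service class, workload, application name, and cost limit shown here are hypothetical; consult the DB2 workload management documentation linked in Resources for the full syntax.

```sql
-- Hypothetical sketch: isolate a business-critical application in its own
-- service class and stop runaway queries before they drain shared resources.

-- Service class for the high-priority application
CREATE SERVICE CLASS payroll_sc;

-- Route connections from the application into that service class
CREATE WORKLOAD payroll_wl
  APPLNAME('payroll.exe')
  SERVICE CLASS payroll_sc;
GRANT USAGE ON WORKLOAD payroll_wl TO PUBLIC;

-- Stop any activity whose optimizer-estimated cost exceeds the limit
CREATE THRESHOLD stop_runaways
  FOR DATABASE ACTIVITIES
  ENFORCEMENT DATABASE
  WHEN ESTIMATEDSQLCOST > 1000000
  STOP EXECUTION;
```

Optim Performance Manager Extended Edition generates and validates statements like these for you, so you can define workloads, objectives, and thresholds from the dashboards rather than writing DDL by hand.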
This product release offers features that support performance management from a lifecycle perspective. See the Resources section for links to more information.
The authors would like to thank Kevin Cheung for his help updating this article with the enhancements for workload management configuration.
- Review the demo "Optim Performance Management solution" (developerWorks, April 2010) to see how one fictitious company uses Optim solutions to resolve problems, to accelerate performance of existing applications using pureQuery client optimization, and to build performance into applications right from the start.
- Watch the "DB2 Workload Management demo" (developerWorks, June 2010) to see how one fictional company uses DB2 Workload Management and Optim Performance Manager to allocate database system resources to help a high-priority application achieve its business objectives in a data warehousing environment. These capabilities are conveniently packaged together in DB2 Advanced Enterprise Server Edition.
- Refer to the IBM Redbook® "Optim Performance Manager" for information on planning,
deployment, and usage of the product.
- Look up the DB2 Advanced Enterprise Server Edition product web page for more
information on the capabilities included in this edition of DB2.
- Peruse the Optim Performance Manager Extended Edition product web page for
more information, including how to purchase the product.
- Find the IBM Tivoli OMEGAMON XE for DB2 Performance Expert on z/OS
features and benefits page for more information about monitoring DB2 for
z/OS databases, including using Extended Insight capabilities to pinpoint
bottlenecks in database application environments.
- See the information center topic on SAP configuration for more
information about SAP support.
- Look through the DB2 workload management concepts in the DB2 Information Center.
- Read the topic about configuring web
console security in the information center.
- Study the DB2 workload management best practices.
- Learn more about configuring autonomic performance objectives.
- Navigate through the Optim Performance Manager 4.1 Information Roadmap.
- Explore the developerWorks Optim family page to learn more about Optim.
- Check out "Using the IBM Optim Performance Manager Extended Insight dashboard" to see a demo that provides tips and techniques for using the Extended Insight dashboard.
- See a schedule of virtual technical briefings around the Optim
integrated data management portfolio.
- Learn more about Information Management at
Information Management zone. Find technical documentation, how-to
articles, education, downloads, product information, and more.
- Stay current with developerWorks technical events and webcasts.
Get products and technologies
- For a look at the health monitoring capability included in Optim Performance Manager, download the no-charge Data Studio Health Monitor.
- Build your next development project with IBM trial software, available for download.
- Participate in the discussion forum.
- Follow the Managing the data lifecycle blog.
- Check out the developerWorks blogs and get involved in the developerWorks community.
Ute Baumbach has been a software developer at the IBM lab in Germany for 18 years, where she has worked in various software development projects and roles. Most of her projects were based on DB2. For five years, she has worked as a member of the DB2 Performance Expert development team, now the Data Studio Performance Management development team. Ute is an IBM Certified Database Administrator and a Certified Application Developer for DB2 for Linux, UNIX, and Windows.
Anshul Dawra is a Senior Software Engineer in the IBM Information Management group at Silicon Valley Labs in San Jose, CA. He is an architect in the pureQuery and Extended Insight team. Before joining the pureQuery team, he worked on the design and development of IBM Data Server Driver for JDBC and SQLJ.
Kevin Beck is the architect for tooling to support workload management features in DB2 for Linux, UNIX, and Windows. His interests include business intelligence, data mining, and data warehouses. He has been a member of the DB2 development team at IBM since 2001, and before that, he was a member of the Informix data server development team. He has contributed to benchmarks and performance work, and he has deep knowledge of how the Informix and IBM data servers operate at the query-processing level. Kevin has many years of experience delivering education and presentations about data server topics.
Randy Horman is a Senior Technical Staff Member on the Optim database administration tools development team at the IBM Toronto Lab. He received a B.A. degree in mathematics, computer science, and economics, as well as an M.Math degree in computer science from the University of Waterloo in 1994 and 1995, respectively. He subsequently joined IBM at the Toronto Lab, where he began working on the parallel database system, DB2 Parallel Edition. Recently, Randy has focused on database manageability, specifically the scalability and automation of administration as well as the applicability of autonomic technology. Randy is a member of the Association for Computing Machinery and of the Computer Society of the Institute of Electrical and Electronics Engineers.
Kathy Zeidenstein has worked at IBM for a bazillion years. She currently manages the IBM Optim/Data Studio and Warehouse Information Development team, which is responsible for providing product documentation and online help for these products. Previously, she worked on the IBM Optim Solutions technical enablement team and was responsible for community development and communications. She also has experience in product marketing and management in text search and text analytics technologies.