End-to-end database monitoring with Optim Performance Manager Extended Insight

Identify, analyze, and resolve problems from application layer to database using the Extended Insight feature

Learn how you can use IBM® Optim™ Performance Manager Extended Insight to monitor end-to-end database response time for Java™ and CLI applications. Extended Insight extends database monitoring across the database client, the application server, and the network, giving DBAs immediate insight into where workloads, transactions, and SQL requests are spending their time. With OPM EI, you can quickly detect trends such as degrading application response times or network congestion.

Sujan Ghosh (sghosh.raj@in.ibm.com), Software Engineer, IBM

Sujan Ghosh has been a software engineer at the IBM lab in India for more than four years as a member of the DB2 Java common connectivity and pureQuery client optimization team. Most of his projects have been based on DB2 and Java technology. He is an IBM Certified Database Administrator for DB2 for Linux, UNIX, and Windows, and an IBM Certified Solution Developer for XML.



Rajesh Sambandhan, Advisory Software Engineer, IBM

Rajesh Sambandhan is an advisory software engineer on the IBM Information Management team, India Software Labs. He has been in the industry for more than seven years, working primarily in J2EE and Test Driven Development.



Asim Singh, Staff Software Engineer, IBM

Asim Singh works for IBM Data Studio in Silicon Valley Lab, San Jose, Calif. He is well versed in the Java language and Java technologies around database application development. He has professional and academic experience with Java persistence technologies, including IBM Data Studio pureQuery, JPA, Hibernate, and Ibatis. He also has a strong background in relational database technologies, a bachelor's degree in computer science from the University of Wisconsin, Madison, and a master's degree in software engineering from San Jose State University.



Anshul Dawra (adawra@us.ibm.com), Senior Software Engineer, IBM

Anshul Dawra is a Senior Software Engineer in the IBM Information Management group at Silicon Valley Labs in San Jose, CA. He is an architect in the pureQuery and Extended Insight team. Before joining the pureQuery team, he worked on the design and development of IBM Data Server Driver for JDBC and SQLJ.



David Chang (dwchang@us.ibm.com), Software Engineer, IBM

David Chang is a software engineer in the IBM Information Management group at the Silicon Valley lab in San Jose, Calif. He was a core developer for pureQuery and the OPM Extended Insight team. Before joining the pureQuery team, he worked as a developer for the IBM Data Server Driver for JDBC and SQLJ.



30 June 2011


Introduction

A typical enterprise application environment, illustrated in Figure 1, uses a layered architecture to isolate the logic of various aspects of the application. For example, a Java developer can focus on the business logic, while frameworks and additional layers of specialized technology can manage the details of database access, connection management, resource management, transaction management, object mapping, etc. This division of responsibilities increases productivity and reliability and allows the developer to focus on solving business problems without the distraction of managing all the details across the entire environment.

Figure 1. Layered application stack
A layered architecture isolates the logic of the various aspects of the application

The diverse technologies in this layered architecture are an important part of any modern IT environment and are a great asset to developers, but they can create challenges for DBAs and those responsible for application and database performance. When there is a performance problem in this layered architecture, it often becomes difficult to get to the root cause. This is particularly true for DBAs. Consider an example where a DBA knows the problem is a poor-performing query, but that query was generated by a framework using an object-relational mapping technology. The resulting application is deployed on an application server and uses a shared pool of database connections. Tracking back from a database query to the application can be difficult. We have heard stories about DBAs walking the halls, SQL printout in hand, asking each developer, "Is this your query?" Once you find the right developers, persuading them that the query should be fixed can be frustrating and time-consuming. Making the fix can be next to impossible. Through this whole process, there's plenty of opportunity for finger-pointing and wasted time, the last thing you need when you are on the line to get a critical application back into production.

To make it easier to diagnose and solve performance problems in this environment, you need visibility into every layer of the stack. You need to be able to see a transaction or SQL statement as it works its way from application to client and all the way to the database server and back, so you know where the SQL came from and where it spent its time.

IBM Optim Performance Manager Extended Insight provides end-to-end database performance monitoring for Java and CLI applications, giving you the ability to quickly understand where your database applications are spending their time. With it, DBAs can quickly see and understand where the database applications are using resources across the IT stack: the application, the application server, the database client, the database server, and the network. This degree of insight helps you quickly isolate and address problems. When the problem isn't in the database itself, Extended Insight makes it easier for you to work with the people managing other parts of the infrastructure or the developers to solve those problems.

In this article, we will outline the features of Optim Performance Manager Extended Insight V4.1.0.1.


Prerequisites

Before installing IBM Optim Performance Manager Extended Insight V4.1, ensure that your system meets the hardware and software requirements. See Resources for the support documentation that lists the specific requirements. As that document mentions, Extended Insight supports JDBC and CLI-based applications. WebSphere®-based applications are also supported on the latest WebSphere Application Server versions, and on older versions with the required patches applied.


Install

Optim Performance Manager Extended Insight is installed as an additional component after you install Optim Performance Manager. The Optim Performance Manager Extended Insight software component includes the Extended Insight license activation kit and the Extended Insight client software. Running the Extended Insight license activation kit on the Optim Performance Manager server configures the settings that allow Optim Performance Manager to communicate with the Extended Insight client. Running the client software configures the settings that allow the Extended Insight client to start sending monitoring data to Optim Performance Manager.

Figure 2 illustrates the basic architecture of the product.

Figure 2. Optim Performance Manager Extended Insight architecture
Architecture of Optim Performance Manager Extended Insight allows enabling monitoring for applications as well as working with the monitored statistics


As illustrated, the Extended Insight client software must be installed on each machine you wish to monitor, while the license activation kit is installed on the Optim Performance Manager machine.

For instructions on installing Optim Performance Manager and the Extended Insight feature, refer to the Information Center.


Extended Insight Analysis dashboard

Users interact with Optim Performance Manager through a web-based UI. With it, you can enable the monitoring of applications and work with monitored statistics. The Extended Insight Analysis dashboard is a web interface for working with the monitoring data available with the Extended Insight feature. The Extended Insight Analysis dashboard provides two main displays for working with the data:

  • Workloads view— This is the initial screen that displays when you open the dashboard. It displays data from all workloads and serves as the starting point for analyzing Extended Insight metrics. This dashboard provides a top-level view of all the workloads running on the monitored database system.
    Figure 3. Extended Insight analysis dashboard
    Extended Insight Analysis Dashboard workloads view


  • Details view— This view provides detailed information when you drill down on a specific workload.

The following sections introduce the main areas of this dashboard.


Viewing of workloads organized by workload cluster groups

It can be difficult to determine where and when performance problems and bottlenecks occur. To isolate the source of performance problems, you can group and monitor transactions that come from specific components by using workload cluster groups.

Extended Insight monitoring is based on workload clusters that are predefined or created by users. Connection attributes that are set for each monitored database determine how workloads are grouped into workload cluster groups. Figure 4 shows an example of the default workload cluster group organization.

Figure 4. Default workload cluster group
Default workload cluster group organization in a tree structure

The database is at the top level. The default workload cluster groups are under the database. This organization provides flexibility by allowing a user to view system metrics from different perspectives and levels of granularity. For example, you can view the metrics at a high level, such as the database as a whole, or at a more granular level, like by the following workload cluster groups:

  • Client application names— Each application may set its own name to identify the application responsible for the workload. Client application names are commonly used to distinguish the different applications running in the system. A payroll application, for example, may be identified as "payroll," and a reports application may be identified as "reports." The monitoring statistics of each application can then be viewed independently. (A minimal sketch after this list shows one way an application can set this name.)
  • Client user IDs— Workloads can be viewed for each client user. Various workloads may execute under different user IDs; for example, there may be different client user IDs for administrative users and normal users. The ability to view performance data at the user level allows the workload of each client user ID to be viewed independently.
  • Host names and IP addresses— This allows workloads to be viewed for each data client machine. For example, a monitored database may be accessed from multiple application servers running on different hosts. The ability to distinguish performance metrics between hosts can be valuable in finding and resolving a performance issue.
  • Application types— Optim Performance Manager Extended Insight supports monitoring of CLI, WebSphere, and stand-alone JDBC applications. An enterprise environment likely contains a mixture of application types. This cluster group provides the ability to distinguish one application type from another.
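
How these workload cluster groups get populated depends on the client information each application sends, which is covered in detail later in this article. As a quick illustration, here is a minimal sketch of how a stand-alone JDBC application might identify itself as the "payroll" application using the standard JDBC client-info names; the connection URL and credentials are placeholders, and you should verify which property names your driver honors.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class PayrollApp {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL and credentials; replace with your own database
        Connection conn = DriverManager.getConnection(
                "jdbc:db2://dbhost.example.com:50000/DEMO", "user", "passwd");

        // Standard JDBC client-info names; they map to the DB2 client
        // information fields that the workload cluster groups are built from
        conn.setClientInfo("ApplicationName", "payroll"); // client application name
        conn.setClientInfo("ClientUser", "ted");          // client user ID

        // ... run the payroll workload on this connection ...
        conn.close();
    }
}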

Custom workload cluster groups

Optim Performance Manager Extended Insight supports custom workload cluster groups. A set of pre-built workload cluster groups is provided for various standard workloads, such as SAP, Cognos®, InfoSphere® Warehouse, and others. Additionally, a custom group may be defined. For instance, a workload cluster group could be defined for a user named "ted" running on IP address 9.30.236.105. Figure 5 shows where you specify the connection attributes and filter criteria for a custom workload cluster group.

Figure 5. Connection attributes and filter criteria
Checkboxes let you select application type, client application name, client accounting string, host name or IP address, authentication ID, client workstation, or client user ID

Figure 6 shows the attributes that have been selected for the customized group.

Figure 6. Customized group
Group filtered for user running on IP address 9.30.236.105

DB2 client information fields

DB2® enables applications to send detailed information with each SQL operation. This information can then be used by the Extended Insight dashboard to filter the monitoring view. Table 1 shows the information that can be sent.

Table 1. Client information fields
  • Client user ID— Identifies the user of an application. This user ID is for identification purposes only and is not used for authorization. Length (LUW, z/OS®): 255, 16.
  • Client workstation name— The workstation name of the client system. Some applications also use this field to identify the business transaction executed within an application. Length (LUW, z/OS): 255, 18.
  • Client application name— Can be used to identify the application hosted in an application server or to identify the business transaction within an application. Length (LUW, z/OS): 255, 32.
  • Program name— Identifies the application running on the client. It is only supported for a connected DB2 on z/OS database. Length (LUW, z/OS): -, 80.
  • Accounting string— Can be used to specify charge-back information or to add additional monitoring details about the database workload. Length (LUW, z/OS): 200, 200.

Ways to set DB2 client information fields in your application

The IBM DB2 Universal JDBC driver (JCC) lets you set client information programmatically, for example through the standard JDBC method public void setClientInfo(String name, String value) throws SQLClientInfoException on the connection, or through the corresponding client information properties of class com.ibm.db2.jcc.DB2BaseDataSource.

Listing 1 shows an example of setting client information from a JDBC application.

Listing 1. Setting client information in a JDBC application
public static void main(String[] args)
        throws ClassNotFoundException, SQLException {
    String url = "jdbc:db2://lap1.boeblingen.de.ibm.com:50000/DEMO";
    String user = "user";
    String password = "passwd";
    Class.forName("com.ibm.db2.jcc.DB2Driver");
    java.sql.Connection conn =
        java.sql.DriverManager.getConnection(url, user, password);

    // Setting client information below
    conn.setClientInfo("ClientUser", "xyz");
    conn.setClientInfo("ClientHostname", "my Laptop");

    // Note the leading space in " WHERE" so the concatenated SQL stays valid
    conn.prepareStatement(
        "Select * from SYSIBM.SYSDUMMY1" + " WHERE 0=1").executeQuery();
}

Similarly, for CLI applications you can use the sqleseti() API shown in Listing 2 to set the client information.

Listing 2. CLI offers the sqleseti() interface
SQL_API_RC SQL_API_FN sqleseti (
     unsigned short DbAliasLen,
     char * pDbAlias,
     unsigned short NumItems,
     struct sqle_client_info * pClient_Info,
     struct sqlca * pSqlca);

SQL_STRUCTURE sqle_client_info {
     unsigned short type;
     unsigned short length;
     char *pValue;
};

What if you can't change the application code? In that case, you can set the client information in the db2cli.ini file for CLI applications, or in the connection URL for JDBC applications. Listing 3 shows how to set the information in db2cli.ini for CLI applications.

Listing 3. For applications using the CLI driver
; Specify the connection properties in file db2cli.ini
; Settings for the data source <data source name>
[<data source name>]
ClientUserID = <my user id>
ClientApplName = <my app name>
ClientWrkStnName = <my workstation name>
ClientAcctStr = <accounting string>

Listing 4 shows how you would set the client information in the connection URL for JDBC applications.

Listing 4. For applications using the JDBC driver
// Specify the JDBC properties in the connection URL, for example:
jdbc:db2://lap1.boeblingen.de.ibm.com:50000/DEMO:ClientWrkStation=my Laptop;
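
Because the client application name and client workstation name can also identify the business transaction executed within an application (see Table 1), an application that runs in an application server can re-tag the connection it borrows from the pool before each unit of work. The following is a minimal, hypothetical sketch of a data-access method that does this with the standard JDBC setClientInfo() call; the DataSource, table, and property names are illustrative, so verify the names that your driver and application server expect.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderHistoryDao {
    private final DataSource ds; // injected or looked up from the application server

    public OrderHistoryDao(DataSource ds) {
        this.ds = ds;
    }

    public int countOrders(String customerId) throws SQLException {
        try (Connection conn = ds.getConnection()) {
            // Tag the pooled connection so Extended Insight can attribute this
            // unit of work to a business transaction and an end user
            conn.setClientInfo("ApplicationName", "OrderHistory");
            conn.setClientInfo("ClientHostname", "viewOrderHistory"); // business transaction
            conn.setClientInfo("ClientUser", customerId);

            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT COUNT(*) FROM ORDERS WHERE CUSTOMER_ID = ?")) {
                ps.setString(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    return rs.getInt(1);
                }
            }
        }
    }
}

Tagging each unit of work in this way lets the Extended Insight dashboard filter and group the monitoring data down to individual business transactions, even though all of them share the same pooled connections.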

Performance statistics

Performance statistics are available for each workload cluster in the system for a given time interval. The time interval can be set from as little as two minutes up to the whole data set. You can view the most recent data (with automatic refresh) or view historical data. The time interval is set using the time slider, as shown in Figure 7.

Figure 7. Extended Insight time slider
Extended Insight Time Slider interval can be set using the time slider

Figure 8 illustrates time selection.

Figure 8. Time selection
shows setting the time for one week

Once the desired time interval is selected, performance statistics representing that time are displayed. For each of the workload cluster groups, metrics are displayed for average end-to-end response time, maximum in-flight elapsed time, maximum end-to-end response time, average network time, and average client time, as Figure 9 shows.

Figure 9. Metrics for each workload cluster group
Metrics shown in seconds

The screen also displays transactions per minute, statement failure rate, warning percentage, and critical percentage.

Figure 10. Transaction and statement failure metrics
Example shows 5,969.722 transactions per minute

Here is a description of the performance statistics returned; a small sketch after this list illustrates how a few of these values can be computed:

  • Average end-to-end response time — Measures the average time transactions take to complete for the displayed time interval. The transaction time spans from the time an application's statement activity begins until a commit or rollback implicitly or explicitly ends the transaction. Viewing the transaction times in this format gives a good high-level overview of how the system is performing.
  • Maximum in-flight elapsed time— Shows the maximum transaction elapsed time for all currently active transactions within the displayed time interval. Client monitoring data is reported to the Optim Performance Manager server at one-minute intervals, and a transaction is considered "in-flight" if it is still executing at the end of a reporting interval. A large value here indicates that a long-running transaction is still live in the system.
  • Maximum end-to-end response time— Shows the maximum transaction time for completed transactions within the displayed time interval. This value compared to the average provides a sense of how much deviation there is between the completed transaction times.
  • Average data server time— Displays the average data server time for completed transactions within the selected time interval. The data server time represents the time the data server spent processing requests from the client.
  • Average network time— Displays the average time spent waiting on the network for completed transactions within the selected time interval. The network time is calculated on the client and represents the time spent blocking on network I/O. This field can be used to detect an increased network volume or latency in the system.
  • Average client time — Displays the average amount of time the data client application spent working with an open cursor or result set for completed transactions within the selected time interval. This measurement represents the application processing time from when an application opens a cursor/result set until it is closed. The time indicated here represents the application time only and does not include the time spent in the JDBC driver, network, or data server.
  • Warning (%)— Shows what percentage of completed transactions were identified with an elapsed time within the warning threshold set by the user-defined thresholds.
  • Critical (%)— Shows what percentage of completed transactions were identified with an elapsed time within the critical threshold set by the user-defined thresholds.
  • Transactions (/min)— Measures the volume of transactions by showing how many transactions were completed per minute.
  • Statement failure rate (%)— Indicates the percentage of statement executions that resulted in errors.
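
To make these definitions concrete, the following sketch computes a few of the workload-level metrics from a hypothetical list of completed transaction times. It is purely illustrative arithmetic, not how Optim Performance Manager itself aggregates the data, and it assumes a transaction counts as a warning when it meets the warning threshold but not the critical one.

import java.util.Arrays;
import java.util.List;

public class WorkloadMetricsSketch {
    public static void main(String[] args) {
        // Hypothetical end-to-end times (ms) for transactions completed in a one-minute interval
        List<Long> completedMs = Arrays.asList(120L, 250L, 3100L, 90L, 140L, 7800L);
        long warningThresholdMs = 3000;   // user-defined warning threshold
        long criticalThresholdMs = 5000;  // user-defined critical threshold

        double avg = completedMs.stream().mapToLong(Long::longValue).average().orElse(0);
        long max = completedMs.stream().mapToLong(Long::longValue).max().orElse(0);
        long warnings = completedMs.stream()
                .filter(t -> t >= warningThresholdMs && t < criticalThresholdMs).count();
        long criticals = completedMs.stream().filter(t -> t >= criticalThresholdMs).count();

        System.out.printf("Average end-to-end response time: %.1f ms%n", avg);
        System.out.printf("Maximum end-to-end response time: %d ms%n", max);
        System.out.printf("Warning (%%): %.1f%n", 100.0 * warnings / completedMs.size());
        System.out.printf("Critical (%%): %.1f%n", 100.0 * criticals / completedMs.size());
        System.out.printf("Transactions (/min): %d%n", completedMs.size());
    }
}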

Additional statistics in DB2 V9.7

When running against DB2 V9.7 and later, the following fields are also available.

Figure 11. Additional statistics for DB2 v9.7
Additional Statistics fields are available in DB2 v9.7 like Rows Read Rate, Rows Modified Rate
  • Rows read rate— Indicates the average number of rows read by transactions performed on the monitored database.
  • Rows modified rate— Indicates the number of rows affected by insert/update/delete operations. (This may be >0 for queries if there are temporary tables that are modified.)
  • Rows returned rate— Indicates the number of rows returned to the application. A rows read rate that is high relative to the rows returned rate can indicate excessive table scans, which can often be prevented by creating indexes on the predicate columns.

User-defined thresholds

In traditional database monitoring, DBAs can set thresholds on database-specific areas, such as buffer pools or lock waits. With the Extended Insight feature of Optim Performance Manager, DBAs can set thresholds that directly relate to the user experience. For example, you can set a threshold on the end-to-end response time of the database application.

DBAs can specify response-time thresholds for the entire workload cluster group or for individual workload clusters. When the workload cluster group is activated for monitoring, you are informed if thresholds were exceeded. The thresholds that you set for workload clusters on the Extended Insight dashboard do not apply to other dashboards.

Figure 12. Edit workload cluster group client user IDs
DBAs can specify response-time thresholds for the entire workload cluster group or for individual workload clusters

Response-time details

Now you've seen how the Extended Insight Analysis Dashboard's workloads view provides an overview of the system as a whole and how the workloads are organized and customized into workload cluster groups. The next step is to view the response-time details panel for a particular workload to get more detailed insight into what's going on. Figure 13 shows what the panel looks like.

Figure 13. Response-time details
Screen shows graph on upper left, SQL statement history on right, and listing of average response time at the bottom


There are three key items in the Response-Time Details panel that give you insight into a workload:

  • End-to-end details— Each layer of the graph can be viewed to provide a complete breakdown of where time was spent.
  • SQL statements— From here, you can see exactly which SQL statements were executed by transactions under a specific workload cluster.
  • Clients— Shows a list of database client hosts involved in the workload.

1. End-to-end details

The goal of the end-to-end details display is to show exactly how time is spent throughout all the layers of the system. At a high level, this means statistics are available for the total time spent on the client, over the network, on to the database server, and back again. Within certain layers there is an even more detailed breakdown of time spent as shown in Figure 14.

Figure 14. Average compilation processing time
Average compilation processing time that shows time spent on the client, over the network, on to the database server, etc.

The following sections dive into the major layers of the end-to-end details and show the kind of insight you can get from each one.

Figure 15. End-to-end response time
Average end-to-end response time represents the average transaction time for all transactions completed in the interval

The outermost layer is the average end-to-end response time. This represents the average transaction time for all transactions completed in the interval. The average transaction time consists of the total time it took for a transaction to complete from start to finish. This is inclusive of client processing, network time, and server time.

Figure 16. Detail area for average end-to-end response time
Detail area for average end-to-end response time, including overall average response time per transaction, maximum response time etc

In this panel, you can find information like:

  • Overall average response time per transaction shows the average times for completed transactions.
  • Maximum response time shows the longest completed transaction time.
  • Maximum time of running transactions shows the maximum time of transactions that are still running.
  • Number of executions shows the number of transaction executions completed.
  • Statements shows the number of statements executed in completed transactions.
  • Statement failure rate shows the rate of statement executions resulting in an exception.

Charts are also displayed for an overall time distribution breakdown, as well as transaction and statement throughput. These allow you to quickly see where time is spent, as well as correlate transaction throughput and statement throughput over the time interval.

Figure 17. Client layer
Client layer represents the total time spent on a client that includes application time, database driver time, etc

The client time layer represents the total time spent on a client. This layer comprises three parts:

  1. Application Time
  2. WebSphere Connection Pool Wait Time
  3. Database Driver Time
Figure 18. Detail area for average client time
Detail area for average client time

The pie chart gives a breakdown of where in these three parts the time is spent. In this case, a significant amount of time is spent in the application layer.

Figure 19. Client application layer
Application Processing Time is the time spent within a transaction processing only the application logic

The application processing time is one of the three pieces that make up the average client time — the other two being the WebSphere connection wait time and the driver processing time. An application is defined as the software driving calls to the database driver to access the database. The application processing time is the time spent within a transaction processing only the application logic. This excludes all the time an application spends waiting for the driver to process requests and return results.

Figure 20. Client WebSphere layer
Client WebSphere layer shows time spent waiting for a connection from the WebSphere connection pool

This area provides insight into the time spent waiting for a connection from the WebSphere connection pool. A high value here would indicate a pool usage issue.

Figure 21. Client driver layer
Client driver layer shows times spent performing database driver processing logic

The driver layer shows the time spent performing database driver processing logic. This is only time spent performing driver logic, not time spent blocking on I/O or waiting on the data server.

Figure 22. Network layer
Network layer displays times spent waiting on network I/O

The network layer displays times spent waiting on network I/O. It does not include time from the data server processing or data client processing.

Figure 23. Detail area for average network time
Detail area for network time shows average network time per transaction, number of executions, etc.

The detail area for network time shows the following network statistics:

  • Average network time per transaction is the amount of time spent waiting on network I/O.
  • Number of executions is the number of transaction executions.

There are also graphs for bandwidth usage and corresponding transaction throughput. If there is an issue with transaction throughput, this information is valuable in showing the network activity and how it relates to the transaction activity.

Figure 24. Data server layer
Data server details gives insight into what's happening on the data server side in relation to the workload

Finally, the data server details provide insight into what's happening on the data server side in relation to the workload.

Figure 25. Data server time overall properties
Data server time properties displays average data server time per transaction, rows returned, number of executions, statements, etc.

This panel shows information relevant to the performance of the data server including the following:

  • Average data server time per transaction— The average time spent on the data server for completed transactions.
  • Rows returned— The number of rows retrieved from the data server.
  • Number of executions— The total number of transactions executed.
  • Statements— The total number of statements executed.

There are also graphs for rows returned, transaction throughput, and statement throughput. These provide a quick overview of how the transaction throughput relates to the statement and cursor activity, as seen in the statement and row throughput graphs.

Although not covered here, an even greater breakdown of the data server time is also available in Optim Performance Manager's Extended Insight that includes a wealth of database statistics, such as lock wait time, I/O time, workload manager queue time, compilation time, and more.

2. SQL statements

The next major functional area of the response-time details view is the ability to show the SQL executed in the workload. The SQL statements are displayed in a table.

Figure 26. SQL statements
Display of the SQL statements that were executed

The SQL statements can be sorted by the following criteria:

  • Average data server time
  • End-to-end response time
  • Average end-to-end response time
  • Network time
  • Average network time
  • Client time
  • Average client time
  • Failed statement executions
  • Average number of round trips
  • Average number of bytes transferred remotely
  • Average number of bytes transferred locally
  • Average number of selected rows

Selecting a SQL statement from the list brings up additional information for the statement as shown in the detail area for SQL statements.

Figure 27. Detail area for SQL statements
Additional information for the SQL statement


Statement performance

Looking more closely, the statement performance panel displays metrics specific to the statement.

Figure 28. Statement performance panel
Detail area for SQL statements where time is spent for this statement

From this display, the breakdown of where time is spent for this statement's executions is shown both graphically, as a pie chart of percentages, and as time values in a table.

Statement outcome

The statement outcome panel displays any failures or exceptions encountered during an execution of the selected statement.

Figure 29. Statement outcome
Statement Outcome displays any failures or exceptions encountered during an execution of the selected statement

The failure rate represents the percentage of executions that encountered errors. Additionally, the first SQL code is provided to give insight into the error encountered. Figure 30 shows an example of a statement where a -601 SQL code was encountered.

Figure 30. Statement outcome example
Statement outcome with failure rate represents the percentage of the executions which encountered an error

Statement information

The statement information panel provides additional meta information about the statement and provides a key ability to trace a statement execution back to an application's source code.

Figure 31. Statement information panel
Statement Information panel provides additional meta information about the statement and provides a key ability to trace a statement execution back to an application's source code


The transfer volume panel displays metrics that relate to communication between the client and the data server.

Figure 32. Transfer volume
Transfer volume displays metrics that relate to communication between the client and the data server

All average values represent the average among statement executions. Bytes transferred locally refers to the number of bytes transferred between the client and a local database instance. Bytes transferred remotely indicates bytes transferred between the client and a remote database instance. Rows returned is the number of rows that were actually read/processed on the client. Number of round trips is the number of requests issued to the data server during the execution of the statement.

3. Clients

The final major functional area of the response-time details view is the clients display, which provides a way to see which clients were involved in the workload cluster.

Figure 33. Clients display
Clients displays provide a way to see which clients were involved in the workload cluster

The table displays a list of all client instances participating in this workload. Notice that there is an awareness of each client instance/process participating in the workload. The table contains the following details about the clients:

  • Client hostname or IP address— The hostname/IP address of the client executing the workload.
  • Transaction executions— The number of transactions executed from this client.
  • Time of first connection— Indicates the time of the first connection request for the given client instance.
  • Average response time— Indicates the average response time of transaction executions. This is also known as average end-to-end response time.

The clients table may be sorted by the following criteria:

  • Average response time
  • Warning
  • Problem
  • Transaction throughput
  • Average data server response time
  • Average client time
  • Average driver processing time
  • Average application time
  • Average WebSphere application connection wait time

Details pertaining to a client can be shown in the detail area for clients by selecting a client entry from the table.

Figure 34. Detail area for clients
Details pertaining to a client can be shown in the detail area for clients by selecting a client entry from the table


Client information

Looking closer, the client information panel shows details of the client host and environment.

Figure 35. Client information panel
Client information panel shows details of the client host and environment

Some key attributes here:

  • Host name or IP address— The hostname/IP address of the client host.
  • First connection start time— This is the time of the first connection request made to the data server.
  • Operating system— The operating system of the client host.
  • Database driver level— The database driver level in use.
  • Connection properties— The properties that represent the client data source.
  • Extended Insight client level— The active level of the Extended Insight client.
  • JRE vendor— The vendor of the Java runtime environment in use.
  • Runtime properties:
    • WebSphere Application Server data source name— This indicates the JNDI name of the WebSphere Application Server data source.
    • WebSphere Application Server data server name— This indicates the node of the WebSphere Application Server.
    • WebSphere Application Server version— This indicates the version of WebSphere Application Server.

Client performance

The client performance panel shows performance metrics associated with this client.

Figure 36. Client performance panel
Client Performance panel shows performance metrics associated with this client

The values displayed in the table:

  • Average client time indicates the average amount of time spent on the client (excludes any time waiting on network or server) per transaction.
  • Number of executions indicates the number of transactions executed by this client.
  • Response-time warnings indicates the number of transactions whose response time exceeded the user-specified warning threshold.
  • Response-time problems indicates the number of transactions whose response time exceeded the user-specified problem threshold.

The following values are displayed in pie charts:

  • Overall time distribution indicates a high-level overview of how the time is spent as a percentage across the client, network, and the server.
  • Client time distribution shows a detailed breakdown of the client time component from the previous pie chart.
  • Application time represents the time an application spent working with a cursor (excludes driver, network, server time).
  • Connection pool wait time represents the amount of time an application is blocked waiting for a connection from the connection pool.
  • Database driver time shows time spent in the database driver (excludes network and server time).

WebSphere Application Server connection pool

If a monitored database application is deployed in WebSphere Application Server, the connection pool panel provides the information pertaining to the WebSphere Application Server.

Figure 37. WebSphere Application Server connection pool panel
WebSphere Application Server connection pool panel provides the information pertaining to the WebSphere Application Server


The values displayed in the table:

  • Connection pool size— Displays the current actual connection pool size
  • Average connections in use— Shows the average number of connections used across the time interval displayed
  • Maximum connection wait time— Shows the maximum time that an application was blocked waiting for a connection from the pool.

From the pool usage graph:

  • Configured maximum pool size displays the configured maximum pool size.
  • Currently used connections displays the number of connections currently in use.

Extended Insight use case

Let's look at a scenario where Extended Insight can help diagnose and potentially alleviate a database application's response-time problem.

Customers are experiencing slow response time when trying to view the history of their online orders. The call-center agent takes the customer call and forwards the concern to the responsible DBA. The DBA then launches the Extended Insight dashboard to see how the various database applications are performing.

Figure 38. Extended Insight dashboard
Extended Insight dashboard helps DBA see how various database applications are performing


The Extended Insight dashboard shows that the OrderHistory application is getting an end-to-end response-time alert, indicated by the red square next to it. After looking at the end-to-end response-time breakdown, the DBA notices that the OrderHistory application is spending more time in the database server than in the other layers (network time, client time, etc.). The DBA then double-clicks the OrderHistory application to further isolate the performance slowdown and see the details of the Extended Insight dashboard.

This view helps diagnose the problem in specific layers of the application stack and provides detailed information when drilling down on a specific workload, such as the average compilation processing time in Figure 14, the SQL statements in Figure 26, and the clients display in Figure 33.

Figure 39. Extended Insight dashboard details
Extended Insight dashboard displays detailed information when drilling down on a specific workload


After analyzing the transactions on the left side of the dashboard, the DBA not only confirms that the transactions are spending more than 14 seconds in the database but also sees from the graph that every transaction is consistently slow.

On the right side of the dashboard, the SQL statements in the transactions can be viewed and sorted by the time they took to execute. Furthermore, with the pie chart, the DBA can confirm where each SQL statement is spending most of its time.

SQL statements can also be traced back to the line of code that issued them, if pureQuery metadata is available on the dashboard.

After identifying the application, the transactions, the root-cause SQL, and optionally the issuing line of code, the DBA can contact the responsible developer to fix the performance problem.


Conclusion

In this article, we have shown how you can use Optim Performance Manager Extended Insight to monitor end-to-end database response time for Java and CLI applications. Optim Performance Manager Extended Insight extends database monitoring across the database client, application server, and network, giving DBAs immediate insight into where database workloads, transactions, and SQL requests are spending their time. With Optim Performance Manager Extended Insight, you can quickly detect negative trends, such as eroding application response times and network congestion between the applications and the database, helping you to sustain service-level agreements.

Optim Performance Manager allows you to manage applications to meet service-level agreements more easily and effectively. Users can see the response time for single SQL requests or complete database transactions from the application's point of view, enabling the creation of performance indicators that more directly relate to the user's experience. Optim Performance Manager Extended Insight helps reduce the time needed to figure out the causes of performance issues from several days to a few minutes.

Resources

