Profiling SQL procedures
An introduction to the SQL PL Profiler
Complex applications or batch processing can execute hundreds of SQL procedures and potentially thousands of SQL statements. Tuning such an environment by analyzing optimizer plans alone is not feasible because of the sheer quantity of plans produced. Equally important, even the best-tuned SQL can cause a performance drag if it is called too often. This article introduces a Java-based GUI tool that discovers the SQL procedures invoked by an application in flight. The tool profiles the procedures and presents the collected data side-by-side with each procedure's source code. All this happens with a minimal number of mouse clicks, allowing the application developer or DBA to find and tune resource-consuming statements or algorithms efficiently.
The SQL Procedure Profiler (or simply SQL PL Profiler) tool can be set up as an applet accessible through a Web browser, or it can be set up as a standalone application.
To set up the tool as an applet you need:
- A Web server (such as Apache from www.apache.org)
- DB2 UDB V8.2 with Java support
- A Java swing plug-in enabled browser
- A .java.policy file under the login profile for Windows® users (for example, C:\Documents and Settings\<username>\.java.policy); a copy is included under the policy directory
- DB2 JDBC installed
When these prerequisites are fulfilled, download the SQL Procedure Profiler for Applet. Once downloaded, unzip the file and move the folder into the main HTML subdirectory of the Web server (for Apache that would be into htdocs).
To start the applet on the local machine, simply type the URL http://localhost/profiler into the browser. If the applet is served from a different machine, replace localhost with that machine's host name.
To set up the tool as a standalone Java application you need:
- Java 1.4 or higher
- DB2 UDB V8.2 with Java support
- DB2 JDBC installed
Download the SQL Procedure Profiler for Application and unzip it. Add the download directory to your CLASSPATH. Then start the SQL Procedure Profiler application from a command line window.
Figure 1. Profiler GUI
After starting the tool, you need to provide connection parameters that include:
- The name of the database on which the tool operates
(Unless the tool operates in a disconnected mode, which will be described later, this database is the database that you want to profile.)
- The name or URL of the server where the database resides
- The port number on which the DB2 instance listens
- A user name and password for a user who has sufficient privileges to start and operate a statement event monitor as well as to create a schema where the SQL Profiler holds its local data.
After providing the requested information, click on the Connect button.
Figure 2. Success message
Once the connection is successful, you will get a message indicating success as shown in Figure 2. Click OK, and you will see the buttons (Start, Clear) and radio buttons enabled.
Specify the application
Once the tool is connected to the database, you have the choice to either profile every application running on the database, or to profile a specific connection.
To get the application ID from an interactive connection such as the DB2 Command Center or CLP, simply execute "VALUES APPLICATION_ID()" under that connection. Then click the Yes radio button and choose the application ID from the drop-down list, as shown in Figure 3.
Figure 3. Specify application section
The application ID for the local connection to a machine where DB2 resides is *LOCAL.DB2.<#######>. For remote connections, *LOCAL is replaced with a hexadecimal representation of the IP address of the client machine.
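As a quick sketch, the application ID can be retrieved like this from the CLP or Command Center; the value shown in the comment is illustrative only, yours will differ:

```sql
-- Run this under the connection you want to profile.
VALUES APPLICATION_ID();
-- Local connections return an ID of the form *LOCAL.DB2.<#######>,
-- for example *LOCAL.DB2.010203040506 (illustrative value).
```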
Use the event monitor with the profiler
Before turning on the event monitor, you can set up the profiler to capture optional event monitor elements. The available optional elements are shown in Figure 4.
Figure 4. Optional event monitor elements
Now it's time to start profiling by turning on the event monitor. Click the Start button; a dialog box confirms that the event monitor has started, as shown in Figure 5. From this point on, the event monitor logs information about each executed SQL statement. The statements may come from an application already in progress, such as a batch job, or from a procedure you invoke at this time.
Figure 5. Event monitor message
Note that the event monitor information is very detailed, and it grows quickly in size if many short SQL statements are executed in a short time. Also note that turning on the event monitor has a definite performance impact on DB2. Once the procedure to be profiled interactively has finished, or you believe you have captured a sufficient slice of the batch job, click the Stop button.
The Clear button clears and sets up the environment for a new profiler session. Do not press it before you have generated the report.
Select procedures and schemas
The first thing the SQL Procedure Profiler does once the event monitor has been stopped is to find all the SQL Procedures that were captured. You now have the opportunity to select which procedures you are interested in, either on a schema level or individual basis.
Figure 6. Schema selection drop down combo box
Pick "All" schemas to see all procedures, or pick a desired subset. Then move the procedures you want the profiler to report on into the "Selected procedures" box.
Figure 7. List box listing procedures
Generate the report
Having selected the procedures, you can assign a name for the report by typing in the text field labeled "Report". This report name must be unique compared with previous reports you have run and archived. Clicking Generate Report generates a report that is displayed in a separate tab pane. The report can be saved to the database for future reference by checking the "Save Report" box, which is checked by default.
Generating the report may take quite a while. Be patient. The results will be worth your while.
Search the report
The archived reports can be viewed by switching to the Search Report tabbed pane. Click Refresh to view the available reports.
Figure 8. Tabbed Pane for Viewing Report
If reports are available, the View Report button is enabled.
Figure 9. List box displaying available reports
Select a report and click on View Report to view it.
View the reports
A sample report is shown in Figure 10.
Figure 10. Sample report
Deselecting a column in the Display Item Menu hides it in the report.
Figure 11. Display Item Menu to hide or display column
You can sort on a specific column by simply clicking on the column name. The sorting alternates between ascending, descending, and no sorting with each mouse click. To add additional sort columns, keep the Ctrl key pressed while clicking additional columns. You can hide or display specific columns by deselecting or selecting them in the "Display Item Menu" list, as shown in Figure 11.
Export and close the report
The report can be exported and saved to a file by clicking on Export. Fields in the file are separated by a '|' character followed by a tab. The exported report can then be processed further with, for example, Microsoft Excel (Data -> Get External Data -> Import Text File).
The Close button will close the current report.
Run in disconnected mode

The tool does not require you to be connected to the database being profiled. Instead, all relevant data can be transferred from the profiled database, and the tool can run connected to a different database, such as a development environment, using the imported data.

This mode of operation is useful when there are concerns about running the tool in a production environment. To collect profile information, perform the following steps from a shell in UNIX®/Linux or a DB2 command window on Windows.
1. Connect to the database being profiled using a user ID that has the authority to create event monitors.
2. Run db2 -tvf startEventMon.sql (see Appendix A.1) to create and turn on the statement event monitor. The event monitor now collects data on the entire database. To customize the event monitor, for example to collect data only for a specific application, refer to the SQL Reference manual.
3. To stop the statement event monitor, run db2 -tvf stopEventMon.sql (see Appendix A.2).
4. Run the export script db2 -tvf export.sql to export the statistics data. The script generates several files, including the event monitor data and a dump of some DB2 catalog views.
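As a sketch of the customization mentioned above, a statement event monitor can be restricted to a single application by adding an event condition. The application ID below is a placeholder; replace it with the value returned by VALUES APPLICATION_ID(), and see the SQL Reference for the full syntax:

```sql
-- Sketch: capture statements for one application only (ID is a placeholder).
CREATE EVENT MONITOR STMTMON FOR STATEMENTS
  WHERE APPL_ID = '*LOCAL.DB2.010203040506'
  WRITE TO TABLE STMT (TABLE STMTS)
  BLOCKED;

SET EVENT MONITOR STMTMON STATE = 1;
```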
At the client machine where you run the profiler applet or the profiler application, perform the following steps:

1. Connect to any database using a user ID that has sufficient privileges to create a schema. The chosen database is where the SQL Procedure Profiler stores its local data.
2. Download the files generated by the export script (in step 4 above) from the server, and change the current working directory to the location of the files. Then run doImport.bat (on Windows) or doImport.sh (on UNIX) to import the data into tables.
3. Start the SQL Procedure Profiler.
4. Connect using the same user ID chosen in step 1.
5. Check the Import checkbox. This enables the Load button, as shown in Figure 12.

Figure 12. Load button is enabled when Import is checked

6. Click the Load button to list the procedures and schemas (as shown in Figure 7) profiled on the production database.
Analyze the report
A report provides the following information:
- ROWNUM: A sequential numbering of all the rows in the report, for reference purposes.
- ROUTINESCHEMA: The schema in which the routine was created.
- LINE: Line number for a statement within a procedure.
- NUM_ITERATION: Number of times a SQL statement was executed. Note that in the case of a cursor declaration, the value is the number of times the cursor was opened; it does not include the number of fetches made by the cursor.
If the number of iterations is zero, the statement was never executed. If, however, the number of iterations field is empty, it means one of four things:
- The line is not the beginning of the SQL statement.
- The statement is OPEN, FETCH, or CLOSE, and the data has been provided in the DECLARE CURSOR statement.
- The statement was merged with another statement for performance reasons and is accounted there. You will see this most often for clusters of SET statements which get collapsed into a VALUES INTO.
- The statement was executed within the procedure virtual machine (PVM). This means execution of the statement was comparably trivial and hence fast enough to not worry about.
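To illustrate the SET-statement merging described above, consider this hypothetical SQL PL fragment:

```sql
-- Hypothetical procedure body fragment:
SET v_total = 0;
SET v_count = 0;
-- DB2 may internally collapse the two SET statements into a single
-- statement along the lines of:
--   VALUES (0, 0) INTO v_total, v_count;
-- In the report, only the merged statement then carries an iteration
-- count; the individual SET lines show an empty NUM_ITERATION field.
```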
- ELAPSED_TIME: The elapsed time spent running the statement (in seconds). For a cursor, this is the elapsed time between open and close. Note that the sum of all elapsed times within a routine does not equal the elapsed time of the calls invoking the routine, mostly because the event monitoring itself has significant overhead. Elapsed time is nevertheless a good indicator of relative performance between statements and routines.
- CPU (microseconds): The total CPU time (in microseconds) consumed to execute a SQL statement; the higher the value, the more CPU intensive the statement. CPU time comprises user as well as system CPU. Unlike elapsed time, for a DECLARE CURSOR statement the CPU time is the sum of the open, all fetches, and the close, which makes it, along with other fields, a much better indicator of the cost of a cursor.
- TEXT: Text of the SQL statement profiled.
- TOTAL_SORT_TIME: The total elapsed time (in milliseconds) for all sorts executed by the statement (for example, a query using an ORDER BY clause).
- TOTAL_SORT: Total number of sorts executed by the statement. This element helps to identify statements which are performing large numbers of sorts. Such statements may benefit from additional tuning to reduce the number of sorts, such as adding an index.
- FETCH_COUNTS: Total number of rows delivered by the cursor execution. Note that blocking cursors pre-fetch rows, so a cursor that does not read through to the end of its result set may show more rows in FETCH_COUNTS than were actually fetched with the FETCH statement.
- ROWS_READ: Total number of rows read during the execution of the statement.
- ROWS_WRITTEN: Total number of rows changed in the table by the execution of the statement through inserts, deletes, or updates.
Other column elements are:
INT_ROWS_DELETED, INT_ROWS_INSERTED, INT_ROWS_UPDATED, POOL_DATA_L_READS, POOL_DATA_P_READS, POOL_INDEX_L_READS, POOL_INDEX_P_READS, POOL_TEMP_DATA_L_READS, POOL_TEMP_DATA_P_READS and POOL_TEMP_INDEX_L_READS.
For details on these statement-monitor elements please refer to the DB2 UDB System Monitor Guide and Reference.
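If you prefer to dig into the raw event monitor data yourself, a query along the following lines ranks captured statements by CPU consumption. This is a sketch against the STMTS table created by startEventMon.sql; the column names come from that script, but their exact types and units may vary by DB2 release:

```sql
-- Sketch: rank captured statements by total CPU time.
SELECT package_name,
       section_number,
       SUM(user_cpu_time + system_cpu_time) AS total_cpu,
       SUM(rows_read)                       AS total_rows_read,
       COUNT(*)                             AS executions
FROM stmts
GROUP BY package_name, section_number
ORDER BY total_cpu DESC
FETCH FIRST 10 ROWS ONLY;
```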
Conclusion

In this article we showed you how to deploy and use a Java-based SQL profiling tool. The tool gathers statistics such as number of iterations, elapsed time, CPU time, and sort time from applications in flight, without prior knowledge of the call structure of the invoked procedures. The resulting data can be highly valuable in helping you pinpoint bottlenecks in SQL procedures.
Appendix A. Scripts for importing and exporting profiler data
A.1 Script for startEventMon.sql
CREATE EVENT MONITOR STMTMON FOR STATEMENTS
  WRITE TO TABLE STMT
  ( TABLE STMTS,
    INCLUDES (section_number, package_name, stop_time, start_time,
              system_cpu_time, user_cpu_time, appl_id, creator,
              blocking_cursor, cursor_name, stmt_type, stmt_operation,
              fetch_count, rows_read, rows_written, total_sort_time,
              total_sorts, INT_ROWS_DELETED, INT_ROWS_INSERTED,
              INT_ROWS_UPDATED, POOL_DATA_L_READS, POOL_DATA_P_READS,
              POOL_INDEX_L_READS, POOL_INDEX_P_READS,
              POOL_TEMP_DATA_L_READS, POOL_TEMP_DATA_P_READS,
              POOL_TEMP_INDEX_L_READS)
  )
  BLOCKED;

SET EVENT MONITOR STMTMON STATE = 1;
A.2 Script for stopEventMon.sql
SET EVENT MONITOR STMTMON STATE = 0;
A.3 Script for export.sql
export to stmt.ixf of ixf messages msgs.txt
  select * from stmts;

export to routine.ixf of ixf messages msgs.txt
  select R.ROUTINENAME, R.SPECIFICNAME, R.TEXT, R.ROUTINESCHEMA, R.PARM_COUNT
  from SYSCAT.ROUTINES R;

export to routinedep.ixf of ixf messages msgs.txt
  select D.ROUTINENAME, D.ROUTINESCHEMA, D.BTYPE, D.BNAME
  from SYSCAT.ROUTINEDEP D;

export to statements.ixf of ixf messages msgs.txt
  select ST.PKGNAME, ST.SECTNO, ST.STMTNO
  from SYSCAT.STATEMENTS ST;
A.4 Script for import.sql
import from stmt.ixf of ixf messages impmsgs.txt REPLACE_CREATE into REMOTE_STMTS;
import from routine.ixf of ixf messages impmsgs.txt REPLACE_CREATE into REMOTE_ROUTINES;
import from routinedep.ixf of ixf messages impmsgs.txt REPLACE_CREATE into REMOTE_ROUTINEDEP;
import from statements.ixf of ixf messages impmsgs.txt REPLACE_CREATE into REMOTE_STATEMENTS;
A.5 Script for doImport.bat and doImport.sh
db2 -tvf dropRemoteTbl.sql
echo "Drop completed"
db2 -tvf import.sql
echo "Import completed"
A.6 Script for dropRemoteTbl.sql
DROP TABLE REMOTE_STMTS;
DROP TABLE REMOTE_STATEMENTS;
DROP TABLE REMOTE_ROUTINES;
DROP TABLE REMOTE_ROUTINEDEP;