Load testing Web applications using IBM Rational Performance Tester: Part 4. Test results analysis reports

IBM® Rational® Performance Tester provides a variety of performance analysis reports that enhance your visual experience of test results and enable you to identify performance bottlenecks easily. They enable you to run as many performance tests as possible during the test phase, with as little burden as possible incurred from using the tool. In addition, because testing may occur across many iterations, you can use the performance analysis reports obtained from each run to determine the health of an application for a particular iteration.

Foong Lee, IT Specialist, IBM

Foong Yen Lee is a Technical Consultant at the IBM Innovation Center for Business Partners in Kuala Lumpur, Malaysia. She works with ISVs and other Business Partners to help them integrate their solutions with IBM technologies and products. She also provides detailed technical consultancy, including porting assistance, product validation, testing, performance tuning, technical education, and proofs of concept. You can contact Foong Yen at leefy@my.ibm.com.



Allan Tham, IT Specialist, IBM

Allan W. Tham provides presales technical support to IBM Business Partners for IBM DB2 data warehouse, information management, and Rational software. He is also an extended member of the IBM Innovation Center, helping ISVs migrate various databases to IBM DB2 and perform load tests using the Rational testing suite. Allan earned a Specialized Honours degree in Computer Science from York University, Canada, and he is a certified DB2 Content Manager designer and DB2 Universal Database administrator. Before joining IBM, he worked for three years as an Oracle database administrator on a nationwide government procurement system.



14 September 2007

Before you start

Learn what to expect from this tutorial and how to get the most out of it.

About this series

IBM® Rational® Performance Tester is a performance testing tool that emulates various user loads to mimic real-life conditions. With proper planning coupled with realistic simulation, this tool uses current loads to estimate future loads. For example, a customer's application may potentially serve a total of 5000 users. With Rational Performance Tester, you can easily emulate user loads of 1000, 2000, 3000, 4000, 5000, and beyond to project user growth, so that you can also project server sizing, such as optimal CPU and memory requirements, more accurately. You can identify and diagnose performance bottlenecks, whether such problems occur in the network, the database, the application server, or even the user application. The root cause analysis capability further analyzes application tiers, which may include page components such as Enterprise JavaBeans™ (EJBs), servlets, the Java™ Database Connectivity (JDBC) API, Web services, and so forth. This functionality enables you to pinpoint the performance culprit easily and efficiently by analyzing the online or extracted reports.

Rational Performance Tester also helps you create, run, and analyze performance tests and validate the scalability and reliability of your Web-based applications before deployment. The default supported protocols, such as HTTP and HTTPS, allow you to run load tests on Web applications. Several extensions are also available:

  • IBM® Rational® Performance Tester Extension for Citrix Presentation Server
  • IBM® Rational® Performance Tester Extension for SOA Quality
  • IBM® Rational® Performance Tester Extension for Siebel Test Automation
  • IBM® Rational® Performance Tester Extension for SAP Solutions

Here's a quick summary of this series of five articles:

  • Part 1 gives you an overview of IBM Rational Performance Tester Version 7.0.
  • Part 2 walks you through the basics of using Rational Performance Tester by creating, running, and evaluating a simple test.
  • Part 3 covers testing as user loads grow (see the next section for more).
  • Part 4 (this part) is all about reports, because a load test is only as good as the reports of the results.
  • Part 5 shows you additional reports, as well as how you can customize and export the reports to suit your needs.

The goal of this series is to help you understand the features, topological considerations, and constraints so that you can create and test Web applications and analyze the performance reports. With this knowledge and the ease of use of Rational Performance Tester, load testing a Web application will no longer be a burdensome chore, and you can include it for each iteration of your software.

Prerequisites

Be sure to work through Parts 1 through 3 before you start this article, because you use the same sample applications. It's important that you have learned the basics of using Rational Performance Tester for load testing from the other articles in this series, so that you can proceed to the more complex activities in this one.

Note:
The workbench machine should be used only for workbench activity, such as creating tests and distributing the performance load to run on remote machines.

Please ensure that your system meets these prerequisites:

Table 1. Required resources
  • Hardware
    • Workbench machine: Minimum 1 GB of memory; more if the workbench also runs tests
    • Remote machines: Minimum 1 GB of memory
  • Software
    • Workbench machine: IBM Rational Performance Tester (includes IBM Rational Agent Controller), IBM Rational License Server
    • Remote machines: IBM Rational Agent Controller
  • Licenses
    • Workbench machine: Activation kit for Rational Performance Tester to enable permanent use; floating license key imported into Rational License Server (Note: the floating license key must cover a number of virtual users greater than or equal to the number that will be tested in Rational Performance Tester*)
    • Remote machines: Pointing to the floating license key served by the workbench machine
  • Network
    • Workbench machine: Able to ping all remote machines
    • Remote machines: Able to ping the workbench machine

*The trial version of Rational Performance Tester allows only five concurrent virtual users. To test with more than that, you need to purchase a license. The IBM® Rational® Software Delivery Platform V7.0 - Desktop Product Activation site has information about how to get licenses and about the activation process. You can download both IBM® Rational® Agent Controller and IBM® Rational® License Server from the IBM Software Access Catalog. See Resources for links.

IBM Rational License Server manages floating and named-user license keys for Rational products. The floating license key is required if you want to run more than five virtual user tests. In this example, the license key is imported into the license server, which resides on the workbench machine and serves the key to all remote machines. The remote machines point to the license server.

The IBM Rational Agent Controller needs to be installed on all remote machines to enable distributed testing. The workbench machine already has Rational Agent Controller, because it is installed along with Rational Performance Tester.

Figure 1 shows the setup that you need for the exercises in this article.

Figure 1. Topology of the setup for remote testing
Topology of the setup for remote testing

Reporting in Rational Performance Tester

The saying "a picture is worth a thousand words" is applicable to the various analysis reports that come with IBM® Rational® Performance Tester. These reports, complete with easy-to-use features, not only enhance your visual experience of test results, but also enable you to identify application bottlenecks rather easily. The purpose of these reports is to enable you to run as many performance tests as possible during the test phase, with as little burden as possible incurred from using the tool.

Day-to-day tasks for you (as a tester) may include, but not be limited to, the following:

  • Test Case Generation. The bulk of the work happens here. You need to gather the requirements and come up with test cases, often involving different permutations, in order to emulate the actual workload.
  • Server Assignment. Assign servers for test purposes. Servers should be sized appropriately to cater to the requirements.
  • Test Recording and Configuration. Start recording and configure the test environment.
  • Workload Emulation and Playback. This is the stage where the actual load is pumped in, based on the test cases generated.
  • Report Generation. Reports are generated for analysis purposes. Any application- or system-level bottleneck can be identified easily.

Because testing may occur across many iterations, you can use the performance analysis reports obtained from each run to determine the health of an application for a particular iteration. If you do not have comprehensive analysis reports, you need to perform manual steps to obtain test reports. Many testers resort to writing their own custom test scripts, which results in a higher maintenance burden as requirements change over time. Fortunately, Rational Performance Tester comes with a variety of easy-to-use, customizable performance analysis reports. This capability can assist you in the test phase in two ways:

  • Finding and documenting bottlenecks in early phases. You can provide feedback to developers on an application bottleneck as early as possible based on the performance analysis reports that you generate.
  • Finding and documenting the breaking point for an application in later phases. Discover a component that breaks under a particular load, even though it performed well under the initial load. A use case for this scenario may be a workload increase after the initial deployment.

This tutorial explores the Performance and Page Element reports provided by Rational Performance Tester.

Overview

The emphasis of this tutorial is to showcase the feature-rich reporting capabilities built into Rational Performance Tester. The examples use the standard test application DayTrade as the application under test. Some of the reports captured here are purely for illustrative purposes.

For real-life engagements, you can only generate meaningful reports if you have a robust understanding of the application under test. Without this comprehension, finding the potential bottleneck is like finding a needle in a haystack. This is especially true when the application is composed of numerous complex modules.

This tutorial discusses the following reports included in Rational Performance Tester. Part 5 of this series discusses additional reports, as well as customization and export capabilities.

  • Performance report
  • Page Element report

Citrix and SAP reports are outside the scope of this tutorial, and therefore will not be discussed.

Analysis Reports: Introduction

Report Component: Counters

Rational Performance Tester performance test results appear in hierarchical order under the project name, as shown in Figure 1. Because reports are for the most part made up of counters, it is natural to consider some of the available counters provided in Rational Performance Tester. The generic counter is listed as the first category in the Performance Test Runs view.

Figure 1. Hierarchical listing of generic counters and reports
tree view of project reports

However, simply navigating to and double-clicking your test run result counters under All Hosts will not bring up a report (this tutorial shows you how to get to the reports in the section that follows). These counters in the Performance Test Runs view merely serve as containers. The generic counters can be divided into HTTP, Citrix, and SAP counters. HTTP counters, for example, are composed of the following categories, shown in Figure 1 (Listing 1, after this list, sketches how a few of these counters could be tallied):

  • Page Counter

    • Page/element Attempts. The attempt rate and count, within a run or an interval, for all pages and all elements.
    • Page/element Attempts Completed. The count of attempts completed.
    • Page/element Hit Rate. The hit rate per page or element, and the total hits within a run or an interval.
    • Response Time. Average and maximum response time per page or element.
    • Status Code Successes. Percent and total page/element success rates.
    • Verification Points. Percent and total page/element verifications that indicate a pass, fail, or inconclusive result.
  • Run Counter

    • Total User. Total users involved in the run.
    • Active User. Count of the users who are currently active.
    • Completed User. Count of total users who completed the test run.
    • Status Codes. Count of the HTTP status code (that is, 100, 200, 300, 400, and 500 status codes) within a run or an interval.
    • Run Duration. The duration of the run.
    • And so on.
  • Test Counter. Average, minimum, maximum, and standard deviation of execution time for a run, provided in either scalar or aggregate form.
  • Transaction Counter

    • Attempts. Total transactions that were attempted for a run or an interval.
    • Completed. Total transactions that were completed for a run or an interval.
    • Execution Time. Average, minimum, maximum, and standard deviation for an individual transaction, or all transactions, within a run or an interval.
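
To make these counter categories concrete, Listing 1 is a minimal sketch in plain Java, written for this tutorial. All class and variable names are hypothetical, and this is not the Rational Performance Tester API; it simply shows how status-code and hit-rate counters could be tallied for a run or an interval.

Listing 1. Tallying status-code and hit-rate counters (hypothetical sketch)

    import java.util.Map;
    import java.util.TreeMap;

    /** Hypothetical sketch of tallying a few HTTP counters; not the RPT API. */
    public class CounterSketch {
        public static void main(String[] args) {
            // Status codes observed for one page and its elements in one interval
            int[] statusCodes = {200, 200, 304, 404, 200, 500, 200};

            // Run counter: count of status codes by category (100, 200, ... 500)
            Map<Integer, Integer> byCategory = new TreeMap<>();
            for (int code : statusCodes) {
                byCategory.merge((code / 100) * 100, 1, Integer::sum);
            }
            System.out.println("Status codes by category: " + byCategory);

            // Page counters: attempts (requests sent) vs. hits (responses received)
            int attempts = statusCodes.length;
            int hits = attempts;              // every request got a response here
            double intervalSeconds = 5.0;     // the statistics sample interval
            System.out.printf("Hit rate: %.1f hits/second%n", hits / intervalSeconds);
        }
    }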

Types of Reports

Rational Performance Tester provides application- and resource-level analysis reports. Application analysis reports cover application performance-related issues, such as page or page element response time, page hits, page or page element throughput, and so on. Resource analysis reports, on the other hand, cover system resource utilization, such as CPU, memory, disk, and network throughput. Resource monitoring data is currently collected from three sources:

  • IBM® Tivoli® Monitoring (ITM)
  • Microsoft® Windows® Performance Monitor (PerfMon)
  • Linux®/UNIX® rstatd

Rational Performance Tester provides HTTP, SAP, and Citrix performance test reports, of which the latter two require extensions (SAP and Citrix reports are not discussed in this tutorial). HTTP performance reports are discussed in greater detail in the following sections, and are summarized in Table 1.

Table 1. Summary of HTTP Analysis reports in Rational Performance Tester
Performance Report
  • Overall. An overview of how test pages are doing overall; it includes the success rate, in percent, for Page Status Code, Page Element Status Code, and Verification Point (VP) Status.
  • Summary. A quick summary in three categories (Run Summary, Page Summary, and Page Element Summary) that gives you high-level insight into the test run.
  • Page Performance. A performance summary, rendered as a bar chart, detailing the response time for the 10 slowest pages.
  • Response vs. Time Summary. A summary of page performance presented as a line chart, averaging response time for all pages and page elements for a given interval.
  • Response vs. Time Detail. A trend analysis report graphed with one line per page, in which each point represents the response time per interval.
  • Page Throughput. A report representing page hit rate and count per interval, with user load given alongside.
  • Server Health Summary. A summary of system health, with the focus on page and page element health.
  • Server Health Detail. A health detail report featuring the 10 pages with the lowest success rates. This includes counters such as the attempt and hit count per page.
  • Resources. A report showing the resource counters monitored.

Page Element Report
  • Overall. A report showing the average response time for all page elements for a given interval.
  • Response vs. Time Summary. A report that graphs the average response time per page element for the 10 slowest page elements for a specified interval.
  • Response vs. Time Detail. A report with a table that lists the average response time for each page element, in detail.
  • Page Element Throughput. A two-graph trend analysis report graphed over a given interval for page element hit rate and user load, respectively.
  • Server Health Detail. A report showing the success rate, in percent, for the 10 page elements with the lowest success rates during the test run.

Various performance-related reports are generated and saved automatically. One quick way to gain access to these reports is via the Performance Test Runs view. By default, this view is available in the bottom left panel (if you have closed it, go to the menu bar and select Window > Show View > Performance Test Runs). During performance test runs, the test results are automatically populated in the test administration server for further analysis. Each performance test run results in a line item in the Performance Test Runs view, including the timestamp of the actual run, as shown in Figure 2.

Figure 2. Automated report capture after test run
time-stamped reports

The simplest way to navigate to a record is to expand the desired performance test run result, and then right-click the All Hosts icon for a list of reporting options, as shown in Figure 3. Remember that reports are divided into categories, and each category contains tabs with unique names. For example, the first option (Display Default Report) displays the default report, which is the HTTP Performance Report. Alternatively, the Performance Report can be obtained by selecting HTTP Reports > Display Performance Report. Likewise, page element, percentile, and verification point reports can be obtained in a similar manner.

Figure 3. Report Display Options
menu command

There are also the Display Report and Display Transaction Report options. For example, to display the Transaction Report, you can either select Display Transaction Report directly or select Display Report, which lists Transaction Report as one of the report options. Display Report allows you to select from all of the available reports, including your custom reports, as shown in Figure 4.

Figure 4. Choosing a report from the Display Report list
select a report to open

Analysis reports: demystifying performance reports

This section explains performance reports (although many of them are self-explanatory) by focusing on the content of each report. To pinpoint performance bottlenecks, analyzing the correct reports is essential. You will be viewing the default report presentations in this section; customization of these reports is discussed in Part 5 of this series.

Formal Definitions

The following terms are used in this tutorial:

Interval

This is the interval for statistical sampling. This value depends on the Statistics sample interval value that you set for the schedule.

Attempt

An attempt is a request sent to the server.

Hit

A hit occurs when the server receives a request and returns a response to it.

Standard Deviation

This is the deviation of data from the mean.

Status Code Success

A success means that the response code verification point for a request passed. In the absence of a verification point (not enabled), a status code success means that the server received a request and returned a response with a status code in the 200 or 300 category, or returned an expected response in the 400 or 500 category.

Response time

The time between when the first character of the request is sent and when the last character of the response is received.
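
To see how these definitions fit together, Listing 2 is a minimal sketch in plain Java, written for this tutorial. All names and data are hypothetical, and this is not Rational Performance Tester code; the 200/300 success rule and the standard deviation calculation follow the definitions above.

Listing 2. The formal definitions in code (hypothetical sketch)

    /** Hypothetical sketch of the formal definitions above; not RPT code. */
    public class DefinitionSketch {
        // Status code success: with no verification point, a 200- or
        // 300-category response passes; with one, the expected code must match.
        static boolean statusCodeSuccess(int code, Integer expectedCode) {
            if (expectedCode != null) {
                return code == expectedCode;
            }
            return code >= 200 && code < 400;
        }

        public static void main(String[] args) {
            // Response times (ms): first request character sent to last
            // response character received.
            double[] responseTimesMs = {120, 135, 110, 480, 125};
            double sum = 0;
            for (double t : responseTimesMs) {
                sum += t;
            }
            double mean = sum / responseTimesMs.length;

            // Standard deviation: the deviation of the data from the mean.
            double squares = 0;
            for (double t : responseTimesMs) {
                squares += (t - mean) * (t - mean);
            }
            double stdDev = Math.sqrt(squares / responseTimesMs.length);

            System.out.printf("Average: %.1f ms, standard deviation: %.1f ms%n", mean, stdDev);
            System.out.println("404, no VP enabled:  " + statusCodeSuccess(404, null));
            System.out.println("404, expected by VP: " + statusCodeSuccess(404, 404));
        }
    }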

Performance Report

The Performance Report is the default report displayed after a test run. It gives a summary of important information (such as the run, page, page element, and transaction summaries), together with details on page health (such as response time breakdown and workload trends). As you saw in Table 1, nine analysis reports (each with its own tab) fall under this category.

Overall

This is the report shown during a test run, and it includes a progress bar that indicates the stage of the test run. Without the progress indicator (which shows Initializing Computer(s), Running, Performing Test Log data transfer, and Complete), it would be difficult to tell that a long-running test is still in progress, and you could easily mistake it for a hung process.

This report, shown in Figure 5, includes the percent page status code success, percent page element status code success, and percent page verification points passed rates, as well as response code or response size verification. The first two bars are always present; the third and fourth bars appear only when verification points are enabled.

Under a normal test run, factors such as user think time, delay time, and the server resources under test can affect the success rate of page and page element status codes. Percent Page Status (for the primary request) is usually lower than Percent Page Element Status. This is because a page consists of multiple elements, and a failure on any element constitutes a failure in the primary request.

In order to qualify as a pass for Page Status Code, the primary request return code for a page has to fall in the 200 or 300 category (with verification disabled on the main page). With verification enabled on the main page, however, the main request inherits an expected status code of 200 (HTTP 1.1) by default, which you can modify to suit your test environment by navigating down to the primary request.

Page Verification Point success rate, on the other hand, depends on page title, content, response code, and response size specified. You will explore verification reports in greater detail in Part 5 of this series.

Figure 5. HTTP Performance Report: Overall
color-coded bar graph
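
Listing 3 sketches the page pass/fail rules just described, as a simplified plain-Java illustration written for this tutorial. It is not the actual Rational Performance Tester implementation, and it assumes the verification point applies only to the primary request.

Listing 3. Page pass/fail logic (hypothetical sketch)

    import java.util.List;

    /** Hypothetical sketch of the page pass/fail rules above; not the RPT implementation. */
    public class PageStatusSketch {
        // Without a verification point, any 200- or 300-category code passes.
        static boolean codeOk(int code) {
            return code >= 200 && code < 400;
        }

        static boolean pagePasses(int primaryCode, List<Integer> elementCodes,
                                  boolean vpOnMainPage, int expectedCode) {
            // With a VP on the main page, the primary request must return the
            // expected code (200 by default, HTTP 1.1).
            boolean primaryOk = vpOnMainPage ? primaryCode == expectedCode
                                             : codeOk(primaryCode);
            if (!primaryOk) {
                return false;
            }
            // A failure on any element constitutes a failure for the page.
            for (int code : elementCodes) {
                if (!codeOk(code)) {
                    return false;
                }
            }
            return true;
        }

        public static void main(String[] args) {
            // One broken image (404) fails the whole page.
            System.out.println(pagePasses(200, List.of(200, 304, 404), false, 200));
        }
    }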

Summary tables

The summary tables give you a quick overview of the performance test run. By default, without transaction results added, you can find three summaries under this tab. In this test run, because transaction reports were added, four summaries are available, as shown in Figure 6:

  • Run Summary

    • Active Users. The number of users currently active in the test run. This number is meaningful while the test run is in progress. When a test run is completed, this number drops to 0.
    • Completed Users. Under normal circumstances, this number increases in proportion to the decrease in active users. Upon test run completion, this number should equal the Total Users under test.
    • Elapsed Time [H:M:S]. Total run duration displayed in Hour:Minute:Second format. This time indicates the duration from the start to the end of a test run.
    • Executed Test. The test executed in this run. This is a quick indicator of which test was run, because many tests can coexist under a project.
    • Display Results for Computer. By default, this value is set to All Hosts. You can drill down to the performance report for each individual machine.
    • Run Status. Usually Complete upon test run completion. This is particularly useful during a test run (apart from the progress bar) to indicate the progress of the run. Valid values are Initializing Computers, Running, Transferring data to test log, Stopped, and Complete.
    • Total Users. This is the total user load. Under normal circumstances, this figure will tally with Completed Users.
  • Page Summary

    • Average Response Time for All Pages [ms][for Run]. A page response time is the sum total of the time taken for each element within a page to respond to a request. This value indicates the average response time in milliseconds for all the pages under test (Listing 4, after this list, illustrates this arithmetic).
    • Maximum Response Time for All Pages [ms][for Run]. This indicates the maximum response time for all pages.
    • Minimum Response Time for All Pages [ms][for Run]. This indicates the minimum response time for all pages.
    • Percent Page VPs Passed [for Run]. This is the same figure provided by the verification point bar in the Overall tab. Once enabled, this figure shows the total verification point success rate.
    • Response Time Standard Deviation for All Pages [for Run]. This figure shows the deviation from the mean of the average response time for all pages.
    • Total Page Attempts [for Run]. The sum total of page attempts made. A page attempt is a request to the server from a primary page, excluding the page elements. This value counts requests sent, without waiting for the server's responses.
    • Total Page Hits [for Run]. The sum total of page hits. A page hit implies a round-trip of a request (originated from primary pages) and response (from the servers). This value should tally with Total Page Attempts in normal circumstances.
    • Total Page VPs Failed [for Run]. If set, this value indicates the total number of verification points that failed for primary pages.
    • Total Page VPs Passed [for Run]. If set, this value indicates the total number of verification points that passed for primary pages.
  • Page Element Summary. The same information as for Page Summary is available here, except that it is for page elements in total.
  • Transaction Summary. A transaction is a collection of elements (both primary pages and page elements) that can be gathered for better performance analysis.

    • Average Execution Time for All Transactions [for Run]. This is the average execution time for all of the transactions defined in a test run.
    • Execution Time Standard Deviation for All Transactions [for Run]. This is the deviation from the mean for all transactions.
    • Maximum Execution Time for All Transactions [ms][for Run]. This is the maximum execution time for all transactions.
    • Minimum Execution Time for All Transactions [ms][for Run]. This is the minimum execution time for all transactions.
    • Total Transactions Completed [for Run]. This is the total number of transactions completed during a test run. This value should tally with the Total Transactions Started under normal circumstances.
    • Total Transactions Started [for Run]. This is the total number of transactions started during a test run.
Figure 6. HTTP Performance Report: Summary
performance data in tables
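
Listing 4 illustrates the Page Summary arithmetic described in the list above, using invented data: each page's response time is taken as the sum of its element times, per the definition given earlier, and the average, maximum, and minimum are then computed across pages. This is a plain-Java sketch for this tutorial, not Rational Performance Tester code.

Listing 4. Page Summary arithmetic (hypothetical sketch)

    import java.util.Arrays;
    import java.util.Map;

    /** Hypothetical sketch of the Page Summary arithmetic; invented data. */
    public class PageSummarySketch {
        public static void main(String[] args) {
            // Element response times (ms) per page. Per the definition above, a
            // page's response time is the sum of its elements' response times.
            Map<String, double[]> pages = Map.of(
                    "Welcome to DayTrade", new double[] {40, 25, 90},
                    "Trade Portfolio",     new double[] {120, 300, 80});

            double total = 0;
            double max = Double.NEGATIVE_INFINITY;
            double min = Double.POSITIVE_INFINITY;
            for (double[] elementTimes : pages.values()) {
                double pageTime = Arrays.stream(elementTimes).sum();
                total += pageTime;
                max = Math.max(max, pageTime);
                min = Math.min(min, pageTime);
            }
            System.out.printf("Average: %.1f ms, Maximum: %.1f ms, Minimum: %.1f ms%n",
                    total / pages.size(), max, min);
        }
    }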

Page performance

Because it shows the 10 slowest pages (for an application under test that spans more than 10 pages), this report is best used to identify the slowest primary pages. At a glance, you can immediately identify from the bar chart the pages that take the longest to respond during a test run. This report provides both a bar chart and page response time counters in tabular format. The response time counters (such as minimum, maximum, average, and standard deviation) are provided. This is one easy way to identify a performance bottleneck. For example, in this scenario, the third page (Trade Portfolio) has the highest response time in milliseconds, as shown in Figure 7.

Figure 7. Performance report: Page Performance
performance data in tables

The default response time breakdown statistics displayed (shown in Figure 8) are:

  • Method. Method being invoked.
  • Class. Class being invoked.
  • Package. Packages of the classes involved.
  • Base Time. Total time spent inside this object, not counting the time this object spent invoking other objects.
  • Average Base Time. Base time divided by total calls.
  • Cumulative Time. Total time spent inside this object, plus the time this object spent invoking other objects (a worked example, Listing 5, follows Figure 8).
  • Call. Total number of times the object is invoked by other objects.
Figure 8. Page Performance: Display Response Time Breakdown Statistics
report detail displayed in tab
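
The relationship between base time, average base time, and cumulative time is easiest to see with a small worked example. Listing 5 uses invented numbers for illustration; this is not Rational Performance Tester output.

Listing 5. Base time vs. cumulative time (worked example)

    /** Hypothetical worked example of base vs. cumulative time; not RPT output. */
    public class BreakdownSketch {
        public static void main(String[] args) {
            // Suppose a servlet method is called 4 times and spends a total of
            // 50 ms inside itself and its callees, 30 ms of which is inside a
            // JDBC call that it invokes.
            double cumulativeTime = 50.0;   // time in the object plus its callees
            double timeInCallees  = 30.0;   // time spent inside invoked objects
            int    calls          = 4;      // times the object was invoked

            double baseTime = cumulativeTime - timeInCallees;  // 20 ms
            double averageBaseTime = baseTime / calls;         // 5 ms per call

            System.out.printf("Base: %.0f ms, Average base: %.0f ms, Cumulative: %.0f ms%n",
                    baseTime, averageBaseTime, cumulativeTime);
        }
    }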

The Page Performance report allows you to drill down further to the page element level. It also allows you to drill down from the page element to individual host, application, component, package, class, and method response times for a particular page element.

  1. For example, right-click a bar and select Display Response Time Breakdown Statistics. This brings you to a response time breakdown for page elements.
  2. To see the response time breakdown for a particular page element, select the element by highlighting it in the Page Element Selection wizard, and then click Finish to obtain the breakdown, as shown in Figure 9.
Figure 9. Display Response Time Breakdown Statistics: tree layout
TestServer1 response time highlighted

There is more than one way to drill down to individual host, application, component, package, class, and method response times.

  1. Another way is to right-click to select Display Host Response Time Breakdown, as shown in Figure 10.
  2. To navigate to the method level, continue to right-click and use the menu options until you get to method level.
Figure 10. Display Host Response Time Breakdown
pop-up menu command

Response vs. Time

The Response vs. Time Summary report, shown in Figure 11, is a summary of average response time and response time standard deviation for all pages and page elements. On the left is the average response time for all pages. Each point represents an average response time for a given interval. The default interval is 5 seconds. You can set this interval in the test schedule, under Schedule Element Details > Statistics > Statistics sample interval.

The right graph shows the average response time for all page elements for a given interval. Usually, these two graphs are similar, because a slow page element response will inevitably create a slow page response. To identify anomalies, look for spikes in the graphs.

Figure 11. Performance Report: Response vs. Time Summary
pop-up menu command
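
Listing 6 sketches the per-interval averaging behind the graphs in Figure 11, with invented samples; each plotted point corresponds to the average of the samples that fall within one statistics sample interval. This is a plain-Java illustration for this tutorial, not Rational Performance Tester code.

Listing 6. Averaging response times per sample interval (hypothetical sketch)

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    /** Hypothetical sketch of per-interval averaging; invented samples. */
    public class IntervalSketch {
        public static void main(String[] args) {
            double intervalMs = 5000;   // the default statistics sample interval
            // Each sample: {timestamp in ms, response time in ms}
            double[][] samples = {{500, 120}, {1200, 140}, {5600, 300},
                                  {7100, 280}, {11000, 130}};

            // Bucket each sample by the interval in which it was recorded.
            Map<Long, List<Double>> buckets = new TreeMap<>();
            for (double[] sample : samples) {
                long bucket = (long) (sample[0] / intervalMs);
                buckets.computeIfAbsent(bucket, k -> new ArrayList<>()).add(sample[1]);
            }

            // Each plotted point is the average response time for one interval.
            buckets.forEach((bucket, times) -> {
                double avg = times.stream().mapToDouble(Double::doubleValue)
                                  .average().orElse(0);
                System.out.printf("Interval %d: %.1f ms%n", bucket, avg);
            });
        }
    }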

Response vs. Time Details is a line graph that shows the average response time for each page. Each page is represented by a symbol. For example, in this scenario, the primary page Trade Order Information is represented by a red cross, as shown in Figure 12. In addition, each point represents an interval within the duration of the test run.

The table at the bottom of the screen, on the other hand, gives page response information (such as minimum, maximum, average, and standard deviation response times, and so on) for all pages.

Figure 12. Performance Report: Response vs. Time Details
various measurements in different colors

Page Throughput

This report, shown in Figure 13, consists of two line graphs that show page hit rate and user load, respectively. The left graph shows the page attempt and page hit rates for a given interval. An attempt is a request sent by a primary page, and a hit is the server's response to a request. Naturally, these two lines should stay close to one another, because an attempt will usually correspond to a hit.

If the workload increases without proper server sizing, however, the number of attempts increases while the number of page hits decreases. When this happens, either scale up the server under test or re-allocate user workload to another server.

The right graph displays the active and completed users for a test run, represented in intervals. In this scenario, the active user count ramped up to around 150 before any user reached completion at 75 seconds. The active users stabilized at around 150, while the completed user line shows a healthy linear increase. You can easily identify an overloaded situation (a combination of high user workload and insufficient server capacity) from this graph.

Figure 13. Performance Report: Page Throughput
page hit rate and user load graphs
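
A simple way to reason about the left graph is to compare the two rates interval by interval. Listing 7 is a hypothetical plain-Java sketch (the data and the 20 percent threshold are invented for illustration, and this is not Rational Performance Tester logic) that flags intervals where attempts outpace hits.

Listing 7. Comparing attempt and hit rates per interval (hypothetical sketch)

    /** Hypothetical sketch of spotting an attempt/hit gap; invented data and threshold. */
    public class ThroughputSketch {
        public static void main(String[] args) {
            // Attempt and hit rates (per second) observed per interval
            double[] attemptRate = {10, 12, 15, 20, 26};
            double[] hitRate     = {10, 12, 14, 15, 13};

            for (int i = 0; i < attemptRate.length; i++) {
                // These two lines should stay close; a widening gap suggests the
                // server can no longer keep up with the offered load.
                if (attemptRate[i] > hitRate[i] * 1.2) {
                    System.out.println("Interval " + i + ": attempts outpacing hits");
                }
            }
        }
    }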

Server health

The Server Health Summary report, shown in Figure 14, provides a summary of the health of the server under test by displaying page and page element related information. Poor results in these graphs indicate either that the workload is too high (parameters such as user think time and start delay time can affect the workload) or that the server lacks the capacity to handle the workload.

The default page counters available in this report are page/page element attempts, hits, and status code successes. The left bar graph shows the page information, and the right graph shows page element information.

Figure 14. Performance Report: Server Health Summary
two bar graphs

The Server Health Detail report, shown in Figure 15, is a graph of the 10 pages with the lowest success rates, detailing each page's percent status code success for the test run. A successful status code refers to the HTTP response code passing the verification point, if one is set for that particular page. In the absence of a verification point, a successful status code is one in the 200 or 300 category.

The information included in the tabular format includes attempts, hits, status code successes count, percent status code success, and so on.

In this scenario, the page Welcome to DayTrade has a percent status code success of 96.4. After observing this, you can investigate further by reviewing the test log for this test run.

Figure 15. Performance Report: Server Health Details
welcome to day trade bar

Resources

This report will only be meaningful if you turn on IBM Tivoli Monitoring, Windows Performance Monitor (PerfMon), or Linux/UNIX rstatd resource monitoring. Otherwise, this report will show a blank page.

Page Element Report

Overall

This line graph, shown in Figure 16, displays the response time for all page elements versus time. Page elements are, for instance, the images, sidebars, buttons, and application scripts that make up a Web page. The following information is included in the Performance Summary table:

  • Average Response Time for All Page Elements. The average response time, in milliseconds, across all page elements in the run.
  • Page Element Attempt Rate. The number of page elements sent to the server for processing each second.
  • Total Page Element Attempts. This is the total number of page elements that were sent to the server for processing in the entire run.

To view each element's results in detail, look at the Response vs. Time Summary, Response vs. Time Detail, Page Element Throughput, and Server Health Detail tabs.

Figure 16. Page Element Report: Overall
one line graph with data points

Response time

The Response vs. Time Summary report, shown in Figure 17, is a line graph that shows average response time versus time for each page element. Each page element is represented by a symbol that is described in the legend.

You can apply a filter to the result by count, value, or label. When you apply a filter, the result shows just the information that you need. For example, setting the filter by count to the 10 highest response times shows you the 10 slowest page elements. Knowing the slowest elements, you can take action to lower the response time (for example, use a smaller picture that loads faster).
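
Conceptually, a filter by count is a sort-and-truncate operation. Listing 8 is a small hypothetical plain-Java sketch of a "highest response times" count filter, written for this tutorial; the element names and times are invented, and this is not the Rational Performance Tester filter implementation.

Listing 8. A count filter as sort-and-truncate (hypothetical sketch)

    import java.util.HashMap;
    import java.util.Map;

    /** Hypothetical sketch of a "highest response times" count filter; invented data. */
    public class CountFilterSketch {
        public static void main(String[] args) {
            Map<String, Double> avgResponseMs = new HashMap<>();
            avgResponseMs.put("logo.gif", 35.0);
            avgResponseMs.put("quote.jsp", 410.0);
            avgResponseMs.put("style.css", 12.0);
            avgResponseMs.put("portfolio.jsp", 620.0);

            int n = 2;  // the report's filter default would be 10
            avgResponseMs.entrySet().stream()
                    .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                    .limit(n)
                    .forEach(e -> System.out.println(e.getKey() + ": " + e.getValue() + " ms"));
        }
    }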

The following information is included in the Performance Summary table in Figure 17, with a count filter of 10 highest (meaning the 10 page elements that took the longest to respond):

  • The two left-most columns show the parent page and the element name that was measured.
  • Response Time - Average for Run. Each page element can be sent to the server and returned to the Web page more than one time. Each time, the response time is recorded and then averaged to provide this figure.
  • Response Time - Standard Deviation for Run. Imagine a bell curve with a vertical line down the middle marking the mean of each element's average response time. This figure shows the deviation from that mean. A bigger deviation means less consistency in the response time; a smaller deviation means more consistency.
  • Attempts - Rate for Interval. Interval means the time span for which Rational Performance Tester records results on the graph. This figure shows the rate, in attempts per second, at which requests were sent to the server during the sample interval.
  • Attempts - Count for Interval. The number of times the page element was sent to the server during the sample interval.
Figure 17. Page Element Report: Response vs. Time Summary
one line graph with data points

The Response vs. Time Detail graph, shown in Figure 18, displays all of the page element readings in the run. The following list explains the report table attributes:

  • The two left-most columns show the parent page and the element name that was measured.
  • Response Time - Average for Run. Each page element can be sent to the server and returned to the Web page more than one time. Each time, the response time is recorded and then averaged to provide this figure.
  • Response Time - Standard Deviation for Run. Imagine a bell curve with a vertical line down the middle marking the mean of each element's average response time. This figure shows the deviation from that mean. A bigger deviation means less consistency in the response time; a smaller deviation means more consistency.
  • Attempts - Rate for Run. The rate, in attempts per second, at which the page element was sent to the server over the entire run.
  • Attempts - Count for Run. The total number of times the page element was sent to the server during the entire run.
Figure 18. Page Element Report: Response vs. Time Detail
data table displayed on tab

Page Element Throughput

In this report, shown in Figure 19, the left line graph shows the Page Element Hit Rate, and the right line graph shows the User Load. The following are the metrics being used:

  • Page Element Hit Rate. This graph shows counters versus time. Page element throughput means how many page elements were sent to the server, processed, and responded to in a particular time interval. Interval means the time span for which Rational Performance Tester records results on the graph.

    • The two lines on the graph show the page element hit rate and attempt rate. An attempt here means that the page element was sent to the server but might not have received a response. A hit means that the page element was sent to the server and received a response.
    • The Performance Summary table shows the page element hit rate per second in the entire run.
  • User Load. This graph shows the user counters versus time. At the beginning of the run, you should see zero completed users and some active users. At some point, the completed user count starts to increase while the active user count decreases until it reaches zero. Normally, you get a graph that looks like an X. The Performance Summary table shows the active users, completed users, and total users.
Figure 19. Page Element Report: Page Element Throughput
two line graphs side by side

Server Health

This bar graph, shown in Figure 20, displays the percentage of successes for the page elements in the run. You can apply a filter to the result by count, value, or label. If you apply a filter, the result shows just the information that you need. For example, setting the filter by count to the 10 lowest shows the 10 least successful page elements. These are the Performance Summary table attributes:

  • The two left-most columns show the parent page and the element name.
  • Attempts - Count for Run. The number of times the page element was sent to the server.
  • Hit - Count for Run. The number of successful responses from the server.
  • Status Code Successes - Count for Run. When the server responds with the page element, this is the number of success codes returned. It should match the Hit Count.
  • Status Code Successes - Percent Status Code Success for Run. The success rate as a percentage. The number should match the bar graph.
  • Attempts - Rate for Run. The number of page element requests sent to the server each second.
Figure 20. Page Element Report: Server Health Detail
bar graph and table

Other reports and customizations

This part of the series (Part 4) looked at the various Performance and Page Element reporting capabilities that IBM Rational Performance Tester provides to generate default performance analysis reports. The next part of the series (Part 5) will show you additional reports, as well as how you can customize and export the reports to suit your needs.

Resources

Learn

Get products and technologies

Discuss
