IBM Rational Tester for SOA Quality test execution and performance reports

Tracking service performance is a snap

Service-oriented architectures are at the base of many modern computing infrastructures. If that's the case for your shop, you need an easy and consistent way to track the quality of the Web services you've deployed. Learn how Rational Tester for SOA Quality can help you understand how your services work and eliminate bottlenecks.

Michael Kelly (Mike@MichaelDKelly.com), Consultant, www.MichaelDKelly.com

Mike Kelly is an independent consultant located in the Midwest of the United States. Mike also writes and speaks about topics in software testing. You can find most of his articles and his blog on his Web site, www.MichaelDKelly.com.



27 March 2007

Also available in Chinese

IBM® Rational® Tester for SOA Quality automates the creation, execution, and analysis of functional and regression tests for service-oriented architecture (SOA) applications. In this article, you'll take a detailed look at reporting in IBM Rational Tester for SOA Quality: specifically, at the Test Log viewer and at the various options available in Web service performance reports.

The Rational Tester for SOA Quality product is an extension of the Rational Performance Tester application. If you are not familiar with Rational Tester for SOA Quality or Rational Performance Tester, you should take some time to read some of the introductory articles included in the Resources section below.

Test setup

This article was written using IBM Rational Performance Tester version 7.0.0, IBM Rational Tester for SOA Quality version 7.0.0 Open Beta, Microsoft Windows® 2000 Professional SP2, and the current (as of the initial publication date of this article) Google Web API.

This article uses Web service tests for the Google Web API. You can find a link to the WSDL for this service in the Resources section below. We have three tests in our test suite for this article, one for each operation in the API; the contents of these tests are shown in Figure 1 below.

Figure 1. Test contents for the GoogleAPI test suite

Test one: doGetCachedPage()

The first test makes a call to the doGetCachedPage() operation. It passes in the API key and the URL for IBM developerWorks (http://www.ibm.com/developerworks/). For this test case, we have an equal verification point that looks for an exact match for the response message. I have this test case set up to fail its verification point.

Test two: doGoogleSearch()

The second test makes a call to the doGoogleSearch() operation. It passes in the API key, the q value of "developerWorks," a start value of 0, a maxResults value of 10, and default values for everything else. For this test case, we have a contain verification point that looks only for this snippet on developerWorks: "An online collection of tutorials, sample code, standards, and other resources <br> provided experts at IBM to assist software developers using open standards <b>...</b>". I have this test case set up to pass its verification point.

Test three: doSpellingSuggestion()

The third test makes a call to the doSpellingSuggestion() operation. It passes in the API key and the phrase "IBM Rationla Performance Tester." For this test case, I again have a contain verification point that looks for the resulting correct spelling. I have this test case set up to pass its verification point.
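
For reference, Listing 1 sketches the kind of SOAP call this third test exercises, written with the standard Java SAAJ API rather than generated by Rational Tester for SOA Quality itself. The API key is a placeholder, and the urn:GoogleSearch namespace and api.google.com endpoint reflect the public Google Web API WSDL at the time; treat them as assumptions rather than values the tool requires.

Listing 1. A hand-written equivalent of the doSpellingSuggestion() call

import javax.xml.soap.*;

public class SpellingSuggestionCall {
    public static void main(String[] args) throws Exception {
        // Build a SOAP request equivalent to the test's doSpellingSuggestion() call.
        MessageFactory factory = MessageFactory.newInstance();
        SOAPMessage request = factory.createMessage();
        SOAPEnvelope envelope = request.getSOAPPart().getEnvelope();
        envelope.addNamespaceDeclaration("gs", "urn:GoogleSearch"); // assumed namespace

        SOAPElement operation = envelope.getBody()
                .addChildElement("doSpellingSuggestion", "gs");
        operation.addChildElement("key").addTextNode("YOUR-API-KEY"); // placeholder key
        operation.addChildElement("phrase").addTextNode("IBM Rationla Performance Tester");

        // Send the request and dump the raw response envelope to the console.
        SOAPConnection connection = SOAPConnectionFactory.newInstance().createConnection();
        SOAPMessage response = connection.call(request,
                "http://api.google.com/search/beta2"); // assumed (historical) endpoint
        response.writeTo(System.out);
        connection.close();
    }
}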

With our three test cases set up, we can now look at some different types of test results.

The Test Log viewer

The Test Log viewer is the best source for the details around a test. It's the first place to go for the results of a functional Web service test, and is the place to go for the details of a performance Web service test. The viewer has two views, Overview and Events. In the Overview view, you can find general information and common properties. General information includes the name, description, and relative file path of the test suite. Common properties include the start and stop times, the verdict for the test run, the type of test suite, and so on. This view also includes verdicts, which indicate the overall result of the test run. Table 1 illustrates the types of verdicts you might see.

Table 1. Possible verdicts and verdict icons
  • Error: Indicates that the primary request was not successfully sent to the server, that no response was received from the server, or that the response was incomplete or could not be parsed.
  • Fail: Indicates that the verification point did not match the expected response or that the expected response was not received.
  • Inconclusive: Indicates that custom code that you provided defined a verdict of inconclusive.
  • Pass: Indicates that the verification point matched or received the expected response.

In the Events view, you can find the details for your test run. The Events tree lists all test execution events, such as the script start and end, loop, invocation, message, or verdict. When you select an object in the Events tree, the properties of the selected event or object are displayed under Common Properties and Detailed Properties. Common Properties displays the time and text of a selected event in the Events tree. Clicking the name of the element under Properties opens the test suite, test case, or test behavior of the event that you selected in the Events tree. The Text field displays a message about the execution of the event that you selected in the Events tree. The Defects section is integrated with ClearQuest, allowing you to log or view defects associated with the selected element.

Let's take a closer look at the test suite for the Google Web API. Figure 2 illustrates an example Events tree from an automatically generated schedule.

Figure 2. Events tree for the Google Web API test suite

As you move through the tree (starting at the top and moving down), you can see that there is a verdict roll-up for each level in the tree; the Google API suite element shows the roll-up verdict, and each verification point shows its own verdict. The delay between each Web service call is automatically added; the default value is zero milliseconds. Finally, each call is listed with its corresponding request and verification point.

For the details of a verification point, you'll want to use the Web Service Protocol view. In this view, you can see the details of the returned envelope and the verification point. For some reason, this view isn't shown by default, so you may need to open it by selecting Window > Show View > Other, and then, in the Show View window, selecting Test > WS Protocol Data. Figure 3 offers a look at our verification point for doSpellingSuggestion().

Figure 3. Web Service Protocol view

You can also see the XML for a request or response Web service call in the Web Service Protocol view. Figure 4 illustrates the XML for the doSpellingSuggestion() response.

Figure 4. Response XML for doSpellingSuggestion()

Web service performance reports

Web service performance reports are useful both for summary information about functional Web service tests and for detailed information about performance Web service tests. What's great about these reports is that you can customize the summary information that's shown as soon as the test ends.

Customizing the reports for Web services

The first report you see when your test runs is the Overall report; Figure 5 illustrates an example. This report, by default, shows the percentage of Web service calls that were successful, and the pass percentage for each type of verification point you have in your suite.

Figure 5. Overall Web service performance report

There are all sorts of cool things you can do with this report. To see what your options are, right-click anywhere on the report and select Add/Remove Performance Counters > Web Services Performance Counter... This will open the Add/Remove Web Services Performance Counter wizard, illustrated in Figure 6.

Figure 6. Add/Remove Web Services Performance Counter wizard

If you expand any of the counter categories, you'll see different types of counters, such as:

  • Percent success
  • Average, minimum, maximum, and standard deviation response time or connection time
  • Total verification point counts
  • Total contains and percent contains for verification points

Take a look at a specific example. If you expand Response Time, you can add a counter for the average response time for all Web service returns, as shown in Figure 7.

Figure 7. Adding an Average Response Time For All Returns counter

If you add that counter, the Overall report changes, as you can see in Figure 8.

Figure 8. Overall report with the addition of the Average Response Time For All Returns counter

In the example, you can see that the average response time for the Web service returns was 831.33 ms. To make sure that you are seeing the correct counter/value, you can switch over to the Response Time vs. Time Detail report (another tab along the bottom of the Web Service Performance Report view) and take a look at the response time for each Web service call, as shown in Figure 9.

Figure 9. Performance Summary table from the Response Time vs. Time Detail report

If you average those three numbers, you get 831.33. I often double-check a counter the first time I add it to a report. That's mostly because I don't trust myself, not because I don't believe the tool. That quick check tells me that I'm actually doing what I think I'm doing. I don't like making mistakes in my performance reports; for some reason, management always wants them to be accurate the first time.
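
If you'd rather script that sanity check than do the math by hand, something as small as Listing 2 will do. The three response times are made-up stand-ins for the values in Figure 9; substitute whatever your Performance Summary table shows.

Listing 2. Double-checking the average response time

public class AverageCheck {
    public static void main(String[] args) {
        // Stand-in values; replace with the three response times (in ms) from Figure 9.
        double[] responseTimes = { 812.0, 795.0, 887.0 };

        double sum = 0;
        for (double time : responseTimes) {
            sum += time;
        }
        // With these stand-in values, this prints "Average response time: 831.33 ms".
        System.out.printf("Average response time: %.2f ms%n", sum / responseTimes.length);
    }
}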

You can add and remove counters from all the reports in the same way that you added the Average Response Time For All Returns counter. Open the Add/Remove Web Services Performance Counter wizard and look at the options you have for each report. Play around with adding different counters and see what happens. After you look at the data in a couple of different ways, you'll find out what works for you.

When you close the report you're working with, you'll be asked if you want to save your changes. If you want your changes to become part of the Web Services Performance Report going forward, go ahead and save. If you aren't sure that you always want that information, don't save, but just add it each time you need it (or create a new type of report).

Web Service Verification Point reports

Web Service Verification Point reports cover the details of the verification points in your test. While these reports don't add much in terms of new data, they do present nice summary views of the verification point information. In addition, if you have a large number of verification points in your suite, these reports can be helpful in locating the results quickly.

To show the Web Services Verification Point report, right-click on the log and select Web Services Reports > Web Services Verification Point Report. This will open up the report, starting on the Summary view, as shown in Figure 10.

Figure 10. Summary Web Services Verification Point report

The other views on this report show the details for the various verification point types in the suite. For example, the Return Contain Verification Points view shown in Figure 11 looks at the results for the two contain verification points in the suite.

Figure 11. Return Contain Verification Points view

You can add and remove counters from these reports in the same way as you did above in the Overall Web Service Performance report.

Tips and tricks

The more time you spend analyzing results, the more interest you'll have in learning some faster ways to find the information you need. The tips and tricks in this section may help you get where you need to go faster.

Viewing multiple reports at the same time

Often, you'll want to view two reports side by side. You may want to compare them, or you may just want to take in more information at once. To view two reports at a time (or as many reports as you like), simply click the title of the report, then drag the cursor to the left edge of the viewer area and dock it, as shown in Figure 12.

Figure 12. Dragging the report

The cursor changes to a black arrow. The reports for the two runs are displayed side by side, as shown in Figure 13.

Figure 13. Two reports side by side

Filtering results

By filtering the results that are displayed in a report, you can remove unnecessary data and focus on the data that is significant to you. As far as I can tell, you can filter on any report. Simply right-click on the report (just as you would to add a counter) and select Apply Filter. This will open the Performance Counter Filter dialog, shown in Figure 14.

Figure 14. Performance Counter Filter dialog

Here's a brief summary of the three options you have available:

  • Filter by count: Displays the specified number of items. For example, if you select this option and then type 15, the report will show the 15 items with the highest or lowest values (depending on the radio button you select).
  • Filter by value: Displays items based on a comparison with the specified value. For example, if you select this option and then type 15, the report will show all of the items that are higher or lower than 15 (depending on the radio button you select).
  • Filter by label: Displays items that match the specified label. If you are filtering a table, the label is usually a page, and is listed in the left column. If you are filtering a graph, the label is a legend in the graph.
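
To make those three options concrete, Listing 3 sketches the equivalent logic over a small, made-up set of counter values (label mapped to response time in milliseconds). The labels and numbers are illustrative only; the report applies the same kind of filtering for you.

Listing 3. The three filter types applied to hypothetical counter data

import java.util.*;

public class CounterFilters {
    public static void main(String[] args) {
        // Hypothetical counter data: report label -> response time in ms.
        Map<String, Double> counters = new LinkedHashMap<String, Double>();
        counters.put("doGetCachedPage return", 887.0);
        counters.put("doGoogleSearch return", 812.0);
        counters.put("doSpellingSuggestion return", 795.0);

        // Filter by count: show the two items with the highest values.
        List<Map.Entry<String, Double>> sorted =
                new ArrayList<Map.Entry<String, Double>>(counters.entrySet());
        Collections.sort(sorted, new Comparator<Map.Entry<String, Double>>() {
            public int compare(Map.Entry<String, Double> a, Map.Entry<String, Double> b) {
                return b.getValue().compareTo(a.getValue()); // descending by value
            }
        });
        System.out.println("Top 2 by value: " + sorted.subList(0, 2));

        // Filter by value: show items higher than 800 ms.
        for (Map.Entry<String, Double> entry : counters.entrySet()) {
            if (entry.getValue() > 800.0) {
                System.out.println("Higher than 800 ms: " + entry);
            }
        }

        // Filter by label: show the item whose label matches exactly.
        for (Map.Entry<String, Double> entry : counters.entrySet()) {
            if (entry.getKey().equals("doGoogleSearch return")) {
                System.out.println("Matching label: " + entry);
            }
        }
    }
}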

Evaluating results for a specified time range

Another option, similar to filtering, is to narrow the time range of the test. To recalculate results for particular start and stop times, you can specify a time range for a report. You can enter custom start and stop times to filter out data from the ramp-up or ramp-down phases of test runs. This ability to focus on a specific time range enables you to see, for example, only the results from the period during which the maximum number of virtual users were making Web service calls. Perhaps that's when you really care about the pass/fail percentages of your verification points. The aggregated results are recomputed to take into account only the data collected during the specified time range.

In the performance report you want to change, right-click and select Change Time Range... This opens the Select Time Range dialog, shown in Figure 15.

Figure 15. Select Time Range dialog

Click New Time Range and add the new time range to the list of Available Time Ranges. Click Finish and the report refreshes, zooming the time axis to show data only from the specified time range. Aggregate results are recalculated to reflect only data from the selected time range. Note that the newly specified time range is stored with the report. To return to the complete report, right-click the report, select Change Time Range..., and then select Overall Time Range from the list of Available Time Ranges.
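
Conceptually, all the tool does when you change the time range is recompute its aggregates over only the samples that fall inside the window you picked. Listing 4 is a rough sketch of that idea with made-up sample data; it isn't how the tool is implemented, just the arithmetic behind the recalculation.

Listing 4. Recomputing an average over a narrower time range

import java.util.*;

public class TimeRangeRecalc {
    public static void main(String[] args) {
        // Made-up samples: elapsed run time in seconds -> response time in ms.
        Map<Integer, Double> samples = new LinkedHashMap<Integer, Double>();
        samples.put(5, 1500.0);   // ramp-up noise
        samples.put(30, 820.0);
        samples.put(60, 840.0);
        samples.put(90, 835.0);
        samples.put(120, 1600.0); // ramp-down noise

        // Recompute the average over a 20- to 110-second window only.
        int start = 20, stop = 110;
        double sum = 0;
        int count = 0;
        for (Map.Entry<Integer, Double> sample : samples.entrySet()) {
            if (sample.getKey() >= start && sample.getKey() <= stop) {
                sum += sample.getValue();
                count++;
            }
        }
        System.out.printf("Average from %ds to %ds: %.2f ms%n", start, stop, sum / count);
    }
}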

Exporting results to CSV, XML, or HTML

You can export test results to CSV, XML, or HTML. This can be useful for a number of reasons. Most often, you'll want to do this to aggregate your test data with data collected from another tool, or for archiving and reporting purposes.

You can export the entire results of a run or specific parts of the results to a CSV file for further analysis. To export results of a run:

  1. Choose File > Export.
  2. In the Export window, click Performance test run statistics, and then click Next.
  3. Type the name of a CSV file (with the .CSV extension), and then click Next.
  4. Select the run to export, and then click Next. The runs are listed in chronological order, with the most recent run at the bottom of the list.
  5. At this point, you can select the type of information that you want to export if you wish. Click Finish when you're done.
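
Once the statistics are in a CSV file, anything that reads delimited text can pick them up. Listing 5 is a minimal sketch that reads an exported file and echoes a couple of columns; the file name and the column positions are assumptions, because the exact layout depends on the counters you chose to export.

Listing 5. Reading an exported statistics CSV file

import java.io.*;

public class ReadExportedStats {
    public static void main(String[] args) throws IOException {
        // Assumed file name; use whatever name you typed in the export wizard.
        BufferedReader reader = new BufferedReader(new FileReader("GoogleAPI_run.csv"));
        try {
            String header = reader.readLine();            // first line holds the column names
            System.out.println("Columns: " + header);

            String line;
            while ((line = reader.readLine()) != null) {
                // Column positions are an assumption; check the header row of your own export.
                String[] fields = line.split(",");
                if (fields.length >= 2) {
                    System.out.println(fields[0] + " -> " + fields[1]);
                }
            }
        } finally {
            reader.close();
        }
    }
}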

You can export test logs in XML format for further analysis. To export a test log in XML format:

  1. Choose File > Export.
  2. In the Export window, click Test execution history, and then click Next.
  3. In the Export Test Execution History window, browse to the folder where you want to store the log, and then click Next. If you enter only a file name, the log will be exported to the install folder.
  4. Select the log to export, and then click Finish.

You can export an entire report, or a tab on a report, to HTML format. To export a report to HTML:

  1. In the Performance Test runs view, right-click on the report or tab to export, and then select Export to HTML.
  2. In Specify file path for HTML exported file, select a folder to store the newly created report, and then click Next. Although your current project is the default, you would typically create a folder outside of the project to store exported reports.
  3. Click Finish.
  4. Optionally, you can paste the exported report into a spreadsheet program for further analysis.

Response time breakdowns and resource monitoring data

Without going into the details here (because this is already a long article, and others know more about this part of reporting than I do), it's worth noting that your reports can also include captured or imported response time breakdown data and resource monitoring data. Resource monitoring data consists of a sequence of observations collected at regular intervals. You can collect data in real time, or you can retrieve it from an IBM Tivoli Enterprise™ Monitoring Server. In addition to response time breakdown data, resource monitoring data provides you with a more complete view of a system that can aid in problem determination. Here are some of the kinds of data that you can collect and analyze:

  • CPU usage (total, for individual processors, or even for individual processes)
  • Available memory
  • Disk usage
  • TCP/IP and network throughput

This feature provides a more complete view of your Web service to help isolate problems. You can monitor the system under test (or the agents) using IBM Tivoli Monitoring agents or Rational Performance Tester. To view resource monitoring data, you can use the Profiling and Logging perspective of Eclipse.

Response time breakdown shows you how much time was spent in each part of the code of the system under test as the system was exercised. The response time breakdown view is associated with a Web service call from a particular execution of a test or schedule. You can use response time breakdown to do the following:

  • Identify code problems
  • See which application on which server is a performance bottleneck
  • Drill down further to determine exactly which package, class, or method is causing a problem

To capture response time breakdown data, you must enable response time breakdown in a test or schedule, and configure the amount of data to be captured. The data collection infrastructure (something you'll see as you install the Rational Performance Tester tools) collects response time breakdown data. Each host on which the application runs, and from which you want to collect data, must have the data collection infrastructure installed and running. In addition, you must configure (or instrument) each application server to use the data collection infrastructure.

Next steps

Now that you know how to get into the reports and change them, take some time to play around with the various counters and reports available. Be sure to look at exporting the data to different formats (especially CSV). And for your large tests, practice filtering and changing your time range to see if you can remove some of the noise from your results.

Resources

Learn

Get products and technologies

Discuss
