Comment lines: Alexandre Polozoff: How well does traditional performance testing apply to SOA solutions?

Traditional performance testing has some basic principles that must be followed in order to obtain meaningful, useful, and reliable data. This article takes a look at how well those principles apply to Service Oriented Architecture (SOA) solutions, and what additional considerations are necessary to collect useful performance data in an SOA world. This content is part of the IBM WebSphere Developer Technical Journal.


Alexandre Polozoff (polozoff@us.ibm.com), Senior Certified IT Specialist, EMC

Alexandre Polozoff is a Master Inventor and Senior Certified IT Specialist in the IBM Software Services for WebSphere Performance Technology Practice for the WebSphere suite of products. In this role, he works with IBM customers on various high volume and performance related engagements. Mr. Polozoff has an extensive 20-year background in network and telecommunications management, application development, and troubleshooting. He has also published papers and speaks at various conferences on performance engineering best practices.



27 February 2008

Old school

The other day one of my colleagues asked if I had any recommendations on performance testing a Service Oriented Architecture (SOA) solution. This got me thinking... Performance testing is a science with a few basic principles that must be followed, but is there anything specific to SOA that we haven't done before from a performance testing perspective?

First, the basics.

  • Know what to test

    Identifying and writing effective use cases is the starting point for any exercise in performance testing. Two of the best ways to identify use cases are to:

    • Analyze access logs from a live site and discover what actual use cases are occurring.
    • Have your business analyst provide the use cases they expect the application will process.

    Either way, since any performance testing will only be as valuable as the use cases that are tested, the primary goal here is not to leave any use cases out. Use cases that are not tested will ultimately cause problems in production.

    For example, for the typical e-commerce site there are at least four basic use cases:

    • Hit the home page: There is always a landing page for each user that comes to the site.
    • Browse the catalog: Some of the users that hit the landing page will browse the catalog and look at different items in the catalog.
    • Shop: Some of the users that browse the catalog will put one or more items into their shopping cart. Some of these users might also delete items from their cart.
    • Check out: Some of the users that put items in their shopping cart will purchase those items.
  • Percentage mix of use cases

    With the use cases identified, you next need to understand the frequency of each use case. In the e-commerce example, you might learn that:

    • 100% of the users to the e-commerce site will hit the landing page.
    • Of those users, about 80-85% will browse the catalog.
    • 25% will add or remove items from their shopping cart.
    • 2-3% will actually check out and purchase the items in their cart.

    A corresponding mix of use cases must be represented in your testing.
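
    To make the mix concrete, here is a minimal sketch in Java of how a load driver might pick the next simulated use case so that the cumulative percentages above are respected. The class and the exact weights are hypothetical illustrations, not part of any load testing product; a commercial tool such as Rational Performance Tester expresses the same idea through its schedule and user group settings.

    import java.util.concurrent.ThreadLocalRandom;

    // Minimal sketch: selecting the next simulated use case according to a
    // hypothetical percentage mix. Every virtual user hits the home page;
    // the branches below decide how far into the flow that user goes, so that
    // cumulatively ~3% check out, ~25% touch the cart, and ~85% browse.
    public class UseCaseMix {

        public enum UseCase { HOME_PAGE_ONLY, BROWSE_CATALOG, SHOP, CHECK_OUT }

        public static UseCase nextUseCase() {
            int roll = ThreadLocalRandom.current().nextInt(100); // 0..99
            if (roll < 3)  return UseCase.CHECK_OUT;      // ~3% purchase
            if (roll < 25) return UseCase.SHOP;           // next ~22% stop at the cart
            if (roll < 85) return UseCase.BROWSE_CATALOG; // next ~60% only browse
            return UseCase.HOME_PAGE_ONLY;                // the rest leave after the landing page
        }

        public static void main(String[] args) {
            for (int i = 0; i < 10; i++) {
                System.out.println(nextUseCase());
            }
        }
    }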

  • Build the test cases

    Using a load test tool, such as IBM® Rational® Performance Tester, the test team will take the identified use cases and build test scripts designed to test each case. Remember that the test cases will only be as effective as the use cases on which they are based. For example, since you know that some shoppers will remove items from their shopping cart, you need a use case that tests the vital function of adding and removing items from a shopping cart and what effect that has on the application.
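
    As a rough illustration of such a test case, the sketch below adds an item to a shopping cart and then removes it again using plain HTTP calls. The host name, paths, and payload are hypothetical placeholders; a real script would normally be recorded and parameterized in the load test tool rather than hand-coded.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Sketch of an add/remove shopping cart test case against a hypothetical
    // REST-style cart endpoint. Session handling, think times, and response
    // validation are omitted for brevity.
    public class CartTestCase {

        private static final HttpClient client = HttpClient.newHttpClient();
        private static final String BASE = "http://test.example.com/shop";

        public static void main(String[] args) throws Exception {
            // Add an item to the cart...
            HttpRequest add = HttpRequest.newBuilder(URI.create(BASE + "/cart/items"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"sku\":\"12345\",\"qty\":1}"))
                    .build();
            HttpResponse<String> addResponse =
                    client.send(add, HttpResponse.BodyHandlers.ofString());

            // ...then remove it again, exercising the delete path as well.
            HttpRequest remove = HttpRequest.newBuilder(URI.create(BASE + "/cart/items/12345"))
                    .DELETE()
                    .build();
            HttpResponse<String> removeResponse =
                    client.send(remove, HttpResponse.BodyHandlers.ofString());

            System.out.printf("add=%d remove=%d%n",
                    addResponse.statusCode(), removeResponse.statusCode());
        }
    }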

  • Load and stress testing

    With test cases and the percentage mix, the test team can now execute the scripts. To achieve reasonable test results, a test should include:

    • The application version that is a release candidate for production. This does not necessarily mean that this particular version of the application will be promoted to production, only that it is a candidate for production. A release candidate can fail to be promoted (or be denied promotion) for several reasons, one being a performance testing failure. Typically, the release candidate is at minimum a working application that has been functionally tested.
    • Production-like data. For an e-commerce site, you must have a version of the catalog that does or will exist in production. Additionally, any user-related data that has privacy implications (such as telephone number, address, and so on) must be scrubbed in such a way as to still be valid test data, but not identifiable with any particular user (such as a telephone number of 1-222-555-1212); a small scrubbing sketch follows this list.
    • Production-like infrastructure. This element is difficult for many organizations. The rule of thumb is to have at least three logical instances of an operating system, plus firewalls, routers, switches, and so on that are similar to production.
    • Enough virtual user licenses to place a load as close to production levels as possible on each logical instance of the operating system.
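
    The following sketch illustrates the kind of scrubbing mentioned for production-like data: real values are replaced with placeholders that are still syntactically valid test data. The field names, patterns, and replacement values are hypothetical; a real scrubbing job would run against the copied production data before it is loaded into the test environment.

    import java.util.regex.Pattern;

    // Sketch of scrubbing personally identifiable data while keeping it valid
    // for testing. Phone numbers become a non-identifiable 555 number and
    // customer names become stable anonymous placeholders, so that formats,
    // foreign keys, and uniqueness constraints still hold.
    public class TestDataScrubber {

        private static final Pattern PHONE = Pattern.compile("\\d{3}-\\d{3}-\\d{4}");

        public static String scrubPhone(String value) {
            return PHONE.matcher(value).replaceAll("222-555-1212");
        }

        public static String scrubName(long customerId) {
            return "Test Customer " + customerId;
        }

        public static void main(String[] args) {
            System.out.println(scrubPhone("Call 312-555-0147 after 5pm"));
            System.out.println(scrubName(42));
        }
    }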

    The test team can load and stress the environment using the test cases they developed, at the correct percentage mix. Application monitoring and data collection are vitally important during load and stress testing. A tool such as IBM Tivoli® Composite Application Manager (ITCAM) for SOA is an excellent choice for application monitoring AND troubleshooting, should the load or stress tests run into problems.
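
    The sketch below reduces the idea of virtual users to its bare bones: many concurrent threads, each repeatedly running a scripted use case chosen according to the percentage mix, while response times are recorded for later analysis. The user and iteration counts are hypothetical, and a real load tool handles ramp-up, pacing, think times, and virtual user licensing far more carefully than this.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Bare-bones sketch of a load driver: a pool of "virtual users", each of
    // which repeatedly executes a scripted use case and records its timing.
    public class LoadDriver {

        public static void main(String[] args) throws InterruptedException {
            int virtualUsers = 200;     // hypothetical; size toward the production load
            int iterationsPerUser = 50; // hypothetical

            ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);
            for (int u = 0; u < virtualUsers; u++) {
                pool.submit(() -> {
                    for (int i = 0; i < iterationsPerUser; i++) {
                        long start = System.nanoTime();
                        // pick the next use case per the percentage mix (see the
                        // earlier UseCaseMix sketch) and run its scripted requests here
                        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
                        // record elapsedMillis for the analysis phase
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }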

  • Analysis

    Once the data is collected, it must be analyzed by a performance specialist. This person is responsible for:

    • Determining whether the test was successful and the collected data is complete. Any unsuccessful test needs to be rerun.
    • Understanding whether the successful test was the optimal test possible, or whether changes to the application code or environment configuration are needed.

    Appropriate actions (such as code or configuration changes) need to be taken after the analysis is complete. Once the changes have been functionally tested, the application can be load and stress tested once more, followed by another analysis.
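
    As a small illustration of the analysis step, the sketch below computes average and percentile response times from a set of samples. The numbers are hypothetical; in practice the data would come from the load tool and from monitoring such as ITCAM, and the specialist would also look at throughput, CPU, memory, and error rates, not just response times.

    import java.util.Arrays;

    // Sketch of summarizing collected response times (milliseconds). High
    // percentiles often reveal problems that a healthy-looking average hides.
    public class ResponseTimeSummary {

        static long percentile(long[] sortedMillis, double p) {
            int index = (int) Math.ceil(p / 100.0 * sortedMillis.length) - 1;
            return sortedMillis[Math.max(0, index)];
        }

        public static void main(String[] args) {
            long[] samples = {120, 135, 150, 160, 180, 210, 240, 300, 450, 900};
            Arrays.sort(samples);
            double average = Arrays.stream(samples).average().orElse(0);
            System.out.printf("avg=%.0fms p90=%dms p99=%dms%n",
                    average, percentile(samples, 90), percentile(samples, 99));
        }
    }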


SOA scenarios

With the high-level basics of performance testing behind us, are there any testing scenarios specific to SOA?

  • Highly-utilized service

    SOA environments tend to have a core set of services that might be highly utilized, either by external service consumers or by consumers inside the enterprise. These services typically need to be not only highly performant, but also highly available; these services are generally considered critical to the business.

    A good analogy would be a stateless session bean at a facade layer. Stateless session beans usually front some business function and are reused by several other applications. While the business criticality of the EJB components or SOA services might warrant different, more highly available runtime environments, this factor doesn't fundamentally change how these components are performance tested. Regardless of how vital a component is to the business, every component must be performance tested to understand how it will behave in production. Most production environments employ some level of sharing, so introducing a poorly tested component can be a risky venture, especially if there is a possibility that the component could hog all the CPU time or memory.
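
    For readers less familiar with the EJB analogy, here is a minimal sketch of a stateless session bean facade: a thin, stateless front to a business function that several applications reuse. The names and the placeholder body are hypothetical; the point is simply that such a shared component is load tested like any other, however critical it is.

    import javax.ejb.Stateless;

    // Minimal sketch of a stateless session bean facade. In a real facade the
    // method would delegate to backend services or data access components that
    // are shared by several consuming applications.
    @Stateless
    public class CatalogFacadeBean {

        public String lookUpItemDescription(String sku) {
            return "description for " + sku; // placeholder for the backend call
        }
    }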

  • Composite services

    A composite service is a service that calls one or more services behind it. This scenario is quite like the stateless session bean facade mentioned above. Traditional performance testing of a facade attempts to individually test each service behind the facade in order to understand how each one behaves and what tuning or application code changes are needed for optimal throughput. Finally, when all the individual services have been tested, the facade (or front-facing service) is itself performance tested, possibly followed by more tweaks to either the facade or one of the backend services.

    Testing individual services behind the front-facing service might not always be possible, because the services might use interfaces that are not readily testable, such as an asynchronous messaging interface. In these cases, performance testing is typically done either through the front-facing facade or by adding a servlet layer that can initiate the appropriate messaging calls to conduct the test. This might require some additional code or infrastructure that would not be promoted to production, but rather would exist strictly for performance testing purposes.
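
    The sketch below shows what such a throwaway servlet layer might look like: an HTTP endpoint that the load tool can drive, which simply puts a message on the queue behind the asynchronous interface. The JNDI names, queue, and payload are hypothetical, and this code would live only in the test environment, never in production.

    import java.io.IOException;
    import javax.annotation.Resource;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Test-only servlet (mapped in web.xml, for example to /perfTest/sendOrder)
    // that lets an HTTP load tool exercise a service whose real interface is
    // asynchronous messaging.
    public class MessagingTestServlet extends HttpServlet {

        @Resource(name = "jms/TestConnectionFactory")
        private ConnectionFactory connectionFactory;

        @Resource(name = "jms/OrderQueue")
        private Queue orderQueue;

        @Override
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            String payload = request.getParameter("payload"); // test message from the load script
            try {
                Connection connection = connectionFactory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(orderQueue);
                    producer.send(session.createTextMessage(payload));
                } finally {
                    connection.close();
                }
            } catch (JMSException e) {
                throw new ServletException("Failed to send test message", e);
            }
            response.setStatus(HttpServletResponse.SC_ACCEPTED);
        }
    }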

All in all, it seems that traditional performance testing methodologies still hold for SOA-based solutions, although it might be necessary to build out additional applications and infrastructure to support the performance testing effort. Organizations that have not had experience testing asynchronous messaging interfaces might not be aware of the additional support that will be required for performance testing. Still, the fundamental requirements for a successful production environment are the same as they have ever been, and that is to test, test thoroughly, and test often!
