Performance metrics available for WebSphere Commerce
Need for Agile performance preview
The solution architecture is done and the iterations are under way, churning out designs and code. In projects that follow agile methodologies, features are designed and added in increments. While it is not too hard to get the whole picture of the solution's functional capabilities, it is much harder to get a handle on how all this incremental design and code will eventually perform until the last of the iterations is done and dusted. That often means projects shift from iterative mode to waterfall mode, with a performance test phase planned right at the end of the development cycle, leaving little leeway for development teams to correct any major performance bottlenecks identified during performance testing.
Can performance testing be made iterative? I can imagine my performance specialist rolling her eyes. The problem is that we cannot certify the system for performance unless a test can be run in a controlled environment, and that takes time, which is at a premium in the short iterations demanded by agile projects. So, is there hope? If we narrow our scope from "certifying" the system for performance to "flushing out major performance problems up front", we may have a way around this conundrum.
Performance Measurement tool and framework
From fix pack 7 onwards, the performance logger feature supports the trace string "com.ibm.commerce.performance=fine" to trace the response time of key external calls from WebSphere Commerce to the order management system (OMS). With it, a development team can see the response times of checking availability, reserving and cancelling inventory, and getting order details from the OMS. From fix pack 9, the performance measurement logger feature of WebSphere Commerce allows teams to understand the response time, the impact of caching, and the size of the result for each call at various application layers. It replaces the old ServiceLogger tracing with a package of logging that can be analyzed through the performance measurement tool.
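As a rough sketch, a trace string such as the one above can be switched on at runtime through the wsadmin scripting client (Jython). The server name `server1` is an assumption for illustration; follow the knowledge center procedure for your own topology:

```jython
# Sketch only: enable the performance trace on a running server via wsadmin.
# 'server1' is a placeholder for your actual application server name.
ts = AdminControl.completeObjectName('type=TraceService,process=server1,*')
AdminControl.setAttribute(ts, 'traceSpecification',
                          '*=info:com.ibm.commerce.performance=fine')
```

A runtime change like this lasts until the server restarts, which is usually enough for a one-day measurement run.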
How to enable and use the performance measurement capabilities is well documented in the knowledge center. But here are the essentials of how a development team can use these capabilities to get a preview of the solution's performance before the official performance test is due:
1. Plan a day during your test runs in your iterations for collecting performance metrics.
2. Work with your WCS/WAS administrator to enable the traces described above (for example, "com.ibm.commerce.performance=fine").
3. Have your testers execute a full suite of tests that represents the typical browsing behavior of the live site: include search, promotional messages/pricing, navigation, registered and guest flows, and checkout.
4. Collect the files from the WAS_profiledir/logs/ folder of your server.
5. Use your favorite tool to analyze the output; for example, the basic performance reports are in CSV format and can simply be charted in spreadsheets.
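Spreadsheets work fine, but the CSV reports can also be summarized with a short script. The sketch below is a minimal example; the column names (`operation`, `avgDurationMs`, `cacheHit`) are hypothetical, since the actual layout depends on the fix pack and report type:

```python
import csv
import io
from collections import defaultdict

# Hypothetical extract of a performance-report CSV; real column names
# vary by WebSphere Commerce fix pack and report type.
SAMPLE_REPORT = """operation,avgDurationMs,cacheHit
GetCatalogEntry,120,false
GetCatalogEntry,40,true
ProcessOrder,310,false
"""

def summarize(report_text):
    """Compute average duration and cache-hit ratio per operation."""
    durations = defaultdict(list)
    hits = defaultdict(int)
    for row in csv.DictReader(io.StringIO(report_text)):
        op = row["operation"]
        durations[op].append(float(row["avgDurationMs"]))
        hits[op] += row["cacheHit"] == "true"
    return {
        op: {
            "avg_ms": sum(vals) / len(vals),
            "cache_hit_ratio": hits[op] / len(vals),
        }
        for op, vals in durations.items()
    }

summary = summarize(SAMPLE_REPORT)
```

A per-operation summary like this is easy to diff between iterations, which is exactly what the next point needs.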
Once a baseline is formed, further iterations can focus on deviations from the previous measurement.
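Tracking deviations can be as simple as comparing per-operation averages across two runs. This sketch assumes the metrics have already been aggregated from the report CSVs into simple dictionaries; the operation names, numbers, and the 20% threshold are illustrative:

```python
# Sketch of flagging regressions against a stored baseline.
# Inputs map operation name -> average duration in milliseconds.

def find_regressions(baseline, current, threshold=0.20):
    """Return operations whose average duration grew by more than
    `threshold` (20% by default) since the baseline run."""
    regressions = {}
    for op, base_ms in baseline.items():
        cur_ms = current.get(op)
        if cur_ms is not None and base_ms > 0:
            growth = (cur_ms - base_ms) / base_ms
            if growth > threshold:
                regressions[op] = round(growth, 2)
    return regressions

# Illustrative numbers from two hypothetical iterations.
baseline = {"GetCatalogEntry": 80.0, "ProcessOrder": 300.0}
current = {"GetCatalogEntry": 85.0, "ProcessOrder": 420.0}
```

Here ProcessOrder has grown by 40% and would be flagged, while GetCatalogEntry's small change stays under the threshold, filtering out normal environment noise.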
What is measured
The development team gets a preview of where the heavy spots are and where either a caching strategy or a re-design is required. The reports are detailed enough to point out the following:
- Average call duration and execution time (performance reports)
- Size of results (performance reports)
- Cache hits and misses (performance reports)
- The stack of operations, with the time taken by each child in the stack (stack reports)
- Information on a full execution cycle (execution report)
- The source of calls (caller report)
A note of caution
While this testing is not a replacement for a proper performance load test, it can point out where contention is likely to occur, letting teams take corrective design decisions early in the development cycle. If it is so easy, why isn't everyone doing it? Simply because there are likely to be false positives and a few false negatives in the findings. After all, the test environment is unlikely to be modeled to represent the scale of the production or performance environment. The environment itself may have some variability, with multiple testers accessing it and development teams delivering fixes. And test execution is subject to manual variation: think time cannot be controlled, the sequence may vary, and so on. So teams should take the findings of such testing with a grain of salt, and the approach requires the performance architect to weigh the findings to determine which of them represent real problems. But in the hands of an expert, this mechanism can help create a solution that is largely free of performance bottlenecks before formal testing even begins.