Performance testing evaluates how a system or application performs under workloads of various sizes.
Key criteria include speed (how quickly it operates), stability (whether it runs without crashing), scalability (how smoothly it handles increasing loads) and responsiveness (how quickly it reacts to user input).
The concept of software performance underlies all computer use, and poor performance can wreck an organization’s best efforts to deliver a quality user experience. If developers don’t oversee performance testing adequately or run performance tests frequently enough, they can introduce bottlenecks that choke off a system’s ability to handle even its typical traffic loads. The problem becomes even more acute when unexpected periods of peak usage add further demand.
This challenge could jeopardize a company’s entire public-facing operations. Reputations for enduring quality usually take long periods to develop. However, they can be quickly and permanently damaged when the public begins to question whether a system or application can operate with dependable functionality. End-user patience is increasingly becoming a limited commodity. So, given that company reputations are often on the line, there’s a lot at stake when performance issues are the topic of conversation.
Let’s first look at the methodology used in most performance test scenarios. Six multipart steps make up the typical performance testing process.
The first step in the performance testing process involves setting useful parameters, like outlining the performance goals of the application.
Then, establish what constitutes acceptable performance criteria (like response times, throughput, resource utilization and error rates).
This stage is also when personnel identify the key performance indicators (KPIs) that will support performance requirements and business priorities.
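For illustration, those criteria can be pinned down as explicit numbers before any test runs. The thresholds below are hypothetical values for a fictional application, not recommended targets:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PerformanceCriteria:
    """Acceptance thresholds agreed on before testing begins (illustrative values only)."""
    p95_response_ms: float = 500.0     # 95th-percentile response time
    min_throughput_rps: float = 200.0  # requests handled per second
    max_error_rate: float = 0.01       # at most 1% of requests may fail
    max_cpu_percent: float = 80.0      # sustained CPU utilization ceiling

CRITERIA = PerformanceCriteria()
```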
Not all tests should be used in every situation. Developers or other testers must define what the testing is meant to analyze.
They begin by scoping out top usage scenarios and designing test cases that reflect real-world user interactions. The next step is specifying the test data and workloads that will be used during the testing process.
After locking down these variables, testers select the performance testing tools, test scripts and testing techniques to use. This step includes setting up gating, the process whereby code-based quality gates either allow or block progression to later stages of the delivery pipeline.
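A quality gate can be as simple as a script that compares measured results against the agreed criteria and blocks the pipeline when any threshold is breached. Here is a minimal sketch, assuming a results.json file produced by an earlier test run; the file name, field names and threshold values are all hypothetical:

```python
import json
import sys

# Hypothetical thresholds; in practice these come from the planning step.
THRESHOLDS = {"p95_response_ms": 500.0, "error_rate": 0.01, "throughput_rps": 200.0}

def gate(results: dict) -> list[str]:
    """Return a list of human-readable threshold violations."""
    failures = []
    if results["p95_response_ms"] > THRESHOLDS["p95_response_ms"]:
        failures.append(f"p95 latency {results['p95_response_ms']:.0f} ms exceeds the limit")
    if results["error_rate"] > THRESHOLDS["error_rate"]:
        failures.append(f"error rate {results['error_rate']:.2%} exceeds the limit")
    if results["throughput_rps"] < THRESHOLDS["throughput_rps"]:
        failures.append(f"throughput {results['throughput_rps']:.0f} rps is below target")
    return failures

if __name__ == "__main__":
    with open("results.json") as f:
        violations = gate(json.load(f))
    for v in violations:
        print("GATE FAILURE:", v)
    sys.exit(1 if violations else 0)  # a nonzero exit code blocks the next pipeline stage
```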
Performance testing also examines bandwidth to confirm that data transmission rates can sufficiently handle workload traffic.
One last step must be taken before the performance testing process can officially begin. Testers construct a testing environment that accurately mimics the system’s real production environment, then confirm that the software applications under test (AUTs) have been deployed within the testing environment.
The final preparation involves integrating monitoring tools to capture performance metrics generated by the system during testing.
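Monitoring might be handled by a full observability platform, or by something as lightweight as the sampler sketched below, which uses the third-party psutil package to record CPU and memory utilization at a fixed interval while a test runs. The output file name and sampling settings are arbitrary choices:

```python
import csv
import time

import psutil  # third-party: pip install psutil

def sample_resources(duration_s: int = 60, interval_s: float = 1.0,
                     outfile: str = "resource_metrics.csv") -> None:
    """Write one CPU/memory sample per interval for the duration of a test run."""
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_percent", "memory_percent"])
        end = time.time() + duration_s
        while time.time() < end:
            writer.writerow([time.time(),
                             psutil.cpu_percent(interval=interval_s),
                             psutil.virtual_memory().percent])

if __name__ == "__main__":
    sample_resources(duration_s=30)
```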
With testing parameters now clearly defined, it’s time to execute the performance tests. Testers, or automated test suites, run the chosen test scenarios using the selected performance testing tools.
Testers typically monitor system performance in real time so they can track throughput, response times and resource usage. Throughout the test scenarios, they watch for performance bottlenecks or other performance-related anomalies reflected in the test metrics.
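Dedicated load testing tools normally drive this step, but the basic idea can be sketched in a few lines: fire concurrent requests at the system and record latency and errors as they come in. The endpoint below is a hypothetical placeholder, and the script assumes the third-party requests library:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests

TARGET_URL = "https://example.com/api/health"  # hypothetical endpoint under test

def one_request(_: int) -> tuple[float, bool]:
    """Issue one request and return (latency in ms, success flag)."""
    start = time.perf_counter()
    try:
        # Treat any response below HTTP 500 as the server coping with the load.
        ok = requests.get(TARGET_URL, timeout=5).status_code < 500
    except requests.RequestException:
        ok = False
    return (time.perf_counter() - start) * 1000, ok

def run_load(concurrent_users: int = 20, total_requests: int = 200) -> list[tuple[float, bool]]:
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(one_request, range(total_requests)))
    errors = sum(1 for _, ok in results if not ok)
    print(f"requests={len(results)} errors={errors} "
          f"mean_latency_ms={sum(l for l, _ in results) / len(results):.1f}")
    return results

if __name__ == "__main__":
    run_load()
```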
Next, testers evaluate the performance data that’s been collected during the testing process. They pore over the gathered data and search for areas of performance that need improvement.
Then, testers compare these results against the performance benchmarks established in the first step of the testing process. Through this comparison, testers can see where the results deviate from expected performance and where bottlenecks might have occurred.
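Much of that analysis boils down to computing percentiles from the raw samples and lining them up against the agreed benchmarks. A minimal sketch, assuming latencies were collected in milliseconds and that the benchmark is a 500 ms p95 (both are illustrative values):

```python
import statistics

# Hypothetical raw latency samples (ms) collected during a test run.
latencies_ms = [120, 135, 150, 142, 480, 160, 155, 910, 140, 150, 145, 138,
                152, 147, 160, 143, 139, 870, 151, 148]

BENCHMARK_P95_MS = 500.0  # agreed in the planning step

p50 = statistics.median(latencies_ms)
p95 = statistics.quantiles(latencies_ms, n=100)[94]  # 95th percentile

print(f"p50={p50:.0f} ms, p95={p95:.0f} ms")
if p95 > BENCHMARK_P95_MS:
    print(f"Deviation: p95 exceeds the {BENCHMARK_P95_MS:.0f} ms benchmark; "
          "investigate the slowest requests for a bottleneck.")
```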
After identifying performance problems through analysis of the test data, developers update the system’s code to address them. They use code optimizations, resource upgrades or configuration changes to mitigate the identified performance issues.
After implementing changes, developers repeat the software testing sequence to confirm that they applied the changes successfully. Developers repeat the procedures until performance results align with defined benchmarks.
Performance testing goes deep “under the hood” to check system or application output, so it makes sense that software development teams are the most common users of performance testing methods. Included in this first user group are professionals actively engaged in the development process: developers, quality assurance (QA) engineers, and DevOps teams. Each gets something unique from performance testing:
The next group of users isn’t made up of developers, but its members still work at ground level, with system performance management as a major component of their jobs:
But it’s not just company management that conducts performance testing. Organizations and businesses of all sizes also make frequent use of performance testing for various purposes:
Developers perform different types of performance testing to derive specific types of result data and support a certain testing strategy. Here are the most prominent types of tests.
Load testing indicates how the system performs when operating with expected loads. The goal of load testing is to show system behavior when encountering routine-sized workloads under normal working conditions with average numbers of concurrent users.
Example: For an e-commerce website, a tester simulates being a site user and going through the steps of shopping for items, placing products in carts and paying for those purchases.
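That shopping journey might be scripted with a load testing tool such as the open-source, Python-based Locust. The sketch below is illustrative only; the endpoints, item IDs and task weights are hypothetical placeholders for a real storefront:

```python
from locust import HttpUser, task, between  # third-party: pip install locust

class Shopper(HttpUser):
    """Simulates one shopper browsing, adding to a cart and checking out."""
    wait_time = between(1, 3)  # think time between actions, in seconds

    @task(3)
    def browse_products(self):
        self.client.get("/products")  # hypothetical catalog endpoint

    @task(2)
    def add_to_cart(self):
        self.client.post("/cart", json={"item_id": 42, "qty": 1})

    @task(1)
    def checkout(self):
        self.client.post("/checkout", json={"payment": "test-card"})

# Run with, for example:
#   locust -f shopper_load.py --host https://shop.example.com --users 100 --spawn-rate 10
```

The task weights approximate how often real shoppers perform each action relative to the others, which keeps the simulated load close to normal working conditions.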
Load testing shows whether the system can support regular load conditions. Scalability testing puts that system under stress by increasing the data volume or user loads being handled. It shows whether a system can keep up with the increased pace and still deliver.
Example: In vertical scaling, a developer might build up a database server’s CPU and memory so it can accommodate a larger volume of data queries.
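On the testing side, scalability usually means repeating the same workload at growing levels of concurrency and checking whether throughput keeps pace. A rough sketch against a hypothetical endpoint, again assuming the third-party requests library:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests

TARGET_URL = "https://example.com/api/search?q=widgets"  # hypothetical query endpoint

def throughput_at(users: int, requests_per_user: int = 20) -> float:
    """Return requests per second achieved at a given concurrency level."""
    def hit(_: int) -> None:
        try:
            requests.get(TARGET_URL, timeout=10)
        except requests.RequestException:
            pass
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        list(pool.map(hit, range(users * requests_per_user)))
    return users * requests_per_user / (time.perf_counter() - start)

if __name__ == "__main__":
    for users in (10, 20, 40, 80):  # double the load at each step
        print(f"{users:3d} users -> {throughput_at(users):.1f} req/s")
    # Ideally throughput roughly doubles with each step; a flattening curve
    # suggests the current CPU, memory or connection limits have been reached.
```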
Stress testing is analogous to a dive test conducted by a submarine crew. Here, the system is pushed to its understood operational limits—and then even further—to determine exactly how much the system can take before reaching its breaking point.
Example: Failover testing is an extreme form of stress testing that begins with simulating component failures. The goal is to see how long it takes for the system to recover and resume operation.
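A stress test can be sketched as a loop that keeps raising the load until the error rate crosses an agreed limit, which marks the breaking point. The endpoint, error-rate limit and step sizes below are hypothetical:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests

TARGET_URL = "https://example.com/api/orders"  # hypothetical endpoint under stress
ERROR_RATE_LIMIT = 0.05                        # 5% errors marks the breaking point

def error_rate_at(users: int, requests_per_user: int = 10) -> float:
    """Return the fraction of failed requests at a given concurrency level."""
    def hit(_: int) -> bool:
        try:
            return requests.get(TARGET_URL, timeout=5).status_code < 500
        except requests.RequestException:
            return False
    with ThreadPoolExecutor(max_workers=users) as pool:
        outcomes = list(pool.map(hit, range(users * requests_per_user)))
    return 1 - sum(outcomes) / len(outcomes)

if __name__ == "__main__":
    users = 25
    while users <= 1600:                 # hard ceiling so the loop always terminates
        rate = error_rate_at(users)
        print(f"{users:4d} users -> error rate {rate:.1%}")
        if rate > ERROR_RATE_LIMIT:
            print(f"Breaking point reached near {users} concurrent users.")
            break
        users *= 2                       # keep doubling the pressure
        time.sleep(5)                    # brief pause to let the system settle
```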
Here we’re testing a different kind of stress: a sudden, sharp spike in user traffic or in the volume of data being transferred. The system must absorb these abrupt changes while continuing its usual operations.
Example: Companies that run websites need to prepare not only for outages, but also for the surge of users trying to access the site simultaneously once it's back online. They must also assess whether the system can handle that sudden increase in demand. Spike testing can calculate how smoothly it’s likely to go.
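A spike test can be sketched by sandwiching a sudden surge between two periods of normal traffic and comparing tail latency across the three phases. The endpoint and user counts below are hypothetical, and the script assumes the third-party requests library:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests

TARGET_URL = "https://example.com/"  # hypothetical site that just came back online

def latencies_at(users: int, requests_per_user: int = 10) -> list[float]:
    """Collect per-request latencies (ms) at a given concurrency level."""
    def hit(_: int) -> float:
        start = time.perf_counter()
        try:
            requests.get(TARGET_URL, timeout=10)
        except requests.RequestException:
            pass
        return (time.perf_counter() - start) * 1000
    with ThreadPoolExecutor(max_workers=users) as pool:
        return list(pool.map(hit, range(users * requests_per_user)))

if __name__ == "__main__":
    baseline = latencies_at(users=10)    # normal traffic
    spike = latencies_at(users=200)      # sudden 20x surge
    recovery = latencies_at(users=10)    # back to normal traffic
    for label, data in (("baseline", baseline), ("spike", spike), ("recovery", recovery)):
        p95 = statistics.quantiles(data, n=100)[94]
        print(f"{label:9s} p95 = {p95:.0f} ms")
    # A healthy system absorbs the spike with a bounded rise in p95 latency
    # and returns to baseline performance once the surge subsides.
```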
Performance discussions often center on user traffic. Volume testing, in contrast, is concerned with how a system manages large amounts of data. Can the system process the data fully and store it without degradation?
Example: A medical clinic maintains huge volumes of patient information and is legally required to be able to access those patient records and related medical data. That constant influx of data can strain a system. Volume testing lets users know whether their system is up to the challenge of constantly accepting more data.
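The idea can be illustrated on a small scale with Python’s built-in sqlite3 module: bulk-load a large batch of synthetic records, then verify that nothing was lost or corrupted. A real volume test would target the production-grade database with far larger datasets:

```python
import hashlib
import sqlite3

ROWS = 1_000_000  # synthetic record count; real volume tests go much larger

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patient_records (id INTEGER PRIMARY KEY, payload TEXT NOT NULL)")

def payload(i: int) -> str:
    # Deterministic fake record so its integrity can be re-checked later.
    return hashlib.sha256(str(i).encode()).hexdigest()

# Bulk-load the data in batches, the way an import job would.
batch = 10_000
for start in range(0, ROWS, batch):
    conn.executemany(
        "INSERT INTO patient_records (id, payload) VALUES (?, ?)",
        ((i, payload(i)) for i in range(start, start + batch)),
    )
conn.commit()

# Verification: every row present, none degraded.
count = conn.execute("SELECT COUNT(*) FROM patient_records").fetchone()[0]
assert count == ROWS, f"expected {ROWS} rows, found {count}"
sampled = conn.execute("SELECT id, payload FROM patient_records WHERE id % 100000 = 0").fetchall()
assert all(p == payload(i) for i, p in sampled), "data degradation detected"
print(f"{count} rows loaded and spot-checked without loss.")
```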
Think of it as performance testing over the long haul. The real culprits sought by endurance testing (also called soak testing) are the data degradation and issues with memory leaks that often occur over an extended period of time.
Example: Social media platforms operate around the clock, and continuous usage can present problems with platform stability, data storage and user accounts. Endurance testing gives you a picture of the current operation and indicators of future performance.
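A soak test harness can be as simple as sampling the memory footprint of the service under test over many hours and flagging steady growth. The sketch below assumes a locally running process with a hypothetical name and uses the third-party psutil package:

```python
import time

import psutil  # third-party: pip install psutil

PROCESS_NAME = "myservice"   # hypothetical name of the service under test
SAMPLE_EVERY_S = 60          # one sample per minute
DURATION_S = 8 * 60 * 60     # soak for eight hours

def find_process(name: str) -> psutil.Process:
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == name:
            return proc
    raise RuntimeError(f"no running process named {name!r}")

if __name__ == "__main__":
    proc = find_process(PROCESS_NAME)
    samples = []
    end = time.time() + DURATION_S
    while time.time() < end:
        samples.append(proc.memory_info().rss / 1_048_576)  # resident memory in MiB
        print(f"t+{len(samples) * SAMPLE_EVERY_S:>6d}s rss={samples[-1]:.1f} MiB")
        time.sleep(SAMPLE_EVERY_S)
    print(f"Memory change over the soak: {samples[-1] - samples[0]:+.1f} MiB")
    # Steady, unbounded growth under a constant workload is the classic
    # signature of a memory leak that shorter test runs never catch.
```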
Developers and testers can choose from numerous tools designed for performance testing. Here are a few of the most popular:
As with nearly everything related to computing, artificial intelligence (AI) is now pushing software testing to entirely new levels of efficiency. It is making the overall performance testing process faster, more accurate and easier to automate.
Specifically, AI can shorten testing cycles, reducing the time it takes to run tests. With its eagle-eyed accuracy, it can notice subtle performance changes that might elude human testers. And through predictive analytics, AI can evaluate operating trends and historical data to predict where and when bottlenecks might occur next, then adjust test parameters based on that predicted behavior.
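The prediction side doesn’t need to be exotic to be useful. As a toy illustration of the idea rather than an AI system, the sketch below fits a linear trend to hypothetical historical p95 latency figures and estimates when an agreed threshold will be crossed (statistics.linear_regression requires Python 3.10 or later):

```python
import statistics

# Hypothetical history: daily p95 latency in ms over the past two weeks.
days = list(range(14))
p95_ms = [310, 315, 322, 320, 331, 338, 342, 351, 349, 360, 366, 371, 380, 388]

THRESHOLD_MS = 500.0  # agreed latency ceiling

trend = statistics.linear_regression(days, p95_ms)  # Python 3.10+
if trend.slope > 0:
    days_until_breach = (THRESHOLD_MS - trend.intercept) / trend.slope - days[-1]
    print(f"p95 grows ~{trend.slope:.1f} ms/day; "
          f"threshold likely breached in about {days_until_breach:.0f} days.")
else:
    print("No upward latency trend detected.")
```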
But, by far, the most significant thing that AI has done for performance testing (so far) is to enable automation on a grand scale. This automation is striking in that it is capable of running the performance testing process from start to finish.
AI can not only automate how tests are carried out, but also write the test scripts intended for execution. In addition, it can interpret test results on the back end and offer guidance for remediating problems.
One of the most interesting and promising impacts of AI on performance testing is the rising use of human-AI collaboration. This arrangement recognizes that human instinct and knowledge still have a vital role to play. In fact, in some situations, human judgment still takes precedence.
Some experts are convinced that the performance testing of the future relies on this hybrid approach, combining raw processing muscle with a human sense of context and nuance.