What is performance testing?

3 July 2025

8 minutes

Authors

Phill Powell

Staff Writer

Ian Smalley

Senior Editorial Strategist


Performance testing evaluates how a system or application performs under workloads of various sizes.

Key criteria include speed (how quickly the system operates), stability (whether it runs without crashing), scalability (how smoothly it handles increasing loads) and responsiveness (how quickly it responds to user input).

The concept of software performance underlies all computer use, and poor performance can wreck an organization’s best efforts to deliver a quality user experience. If developers don’t adequately oversee performance testing or run performance tests frequently enough, they can introduce performance bottlenecks. This situation can choke off a system’s ability to handle even its typical traffic loads during expected periods. It becomes even more problematic when unexpected times of peak usage create added demand.

This challenge could jeopardize a company’s entire public-facing operations. Reputations for quality usually take a long time to build, but they can be quickly and permanently damaged once the public begins to question whether a system or application operates dependably. End-user patience is an increasingly limited commodity, and with company reputations often on the line, there’s a lot at stake when performance issues arise.


The six steps of performance testing

Let’s first define the methodology used in most performance test scenarios. The typical performance testing process consists of six multipart steps.

1. Define performance criteria and requirements

The first step in the performance testing process is setting useful parameters, beginning with the application’s performance goals.

Then, establish what constitutes acceptable performance criteria (like response times, throughput, resource utilization and error rates).

This stage is also when personnel identify key performance indicators (KPIs) that support the performance requirements and business priorities.
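Teams often make these criteria machine-readable so that later checks can consume them directly. Below is a minimal sketch in Python; the class name and threshold values are illustrative assumptions, not industry-mandated figures.

from dataclasses import dataclass

@dataclass(frozen=True)
class PerformanceCriteria:
    """Acceptance thresholds agreed on before testing begins."""
    p95_response_ms: float     # 95th-percentile response time budget
    min_throughput_rps: float  # requests per second the system must sustain
    max_error_rate: float      # fraction of requests allowed to fail

# Hypothetical targets for a checkout API; real values come from
# business requirements and baseline measurements.
CHECKOUT_CRITERIA = PerformanceCriteria(
    p95_response_ms=500.0,
    min_throughput_rps=200.0,
    max_error_rate=0.01,
)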

2. Design and plan tests

Not all tests should be used in every situation. Developers or other testers must define what the testing is meant to analyze.

They begin by scoping out top usage scenarios and designing test cases that reflect real-world user interactions. The next step is specifying the test data and workloads that will be used during the testing process.

After locking down these variables, testers select the performance testing tools, test scripts and testing techniques to use. This step includes setting up gating, the process whereby code-based quality gates either permit or deny access to later production steps.

Performance testing also examines bandwidth to confirm that data transmission rates can sufficiently handle workload traffic.
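A quality gate can be as simple as a function that compares measured metrics against the criteria from step 1 and blocks promotion when any threshold is missed. A sketch, assuming the hypothetical PerformanceCriteria object shown earlier:

def evaluate_gate(criteria, measured: dict) -> tuple[bool, list[str]]:
    """Return (passed, failures); `measured` uses the same units as the
    criteria, e.g. {"p95_response_ms": 430.0, "throughput_rps": 250.0,
    "error_rate": 0.004}."""
    failures = []
    if measured["p95_response_ms"] > criteria.p95_response_ms:
        failures.append("p95 response time over budget")
    if measured["throughput_rps"] < criteria.min_throughput_rps:
        failures.append("throughput below target")
    if measured["error_rate"] > criteria.max_error_rate:
        failures.append("error rate too high")
    return (not failures, failures)

In a CI/CD pipeline, a failed gate would typically fail the build so the release cannot progress to later production steps.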

3. Establish test environments

One last step must be taken before the performance testing process can officially begin. Testers construct a testing environment that accurately mimics the system’s real production environment, then confirm that the software applications under test (AUTs) have been deployed within the testing environment.

The final preparation involves integrating monitoring tools to capture performance metrics generated by the system during testing.
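In practice, integrating monitoring can be as lightweight as wrapping each operation under test so that its duration and outcome land in a shared metrics store. A minimal sketch; the METRICS list stands in for a real monitoring backend:

import time
from contextlib import contextmanager

METRICS: list[dict] = []  # stand-in for a real metrics backend

@contextmanager
def monitored(operation: str):
    """Record the duration and outcome of one operation under test."""
    start = time.perf_counter()
    ok = False
    try:
        yield
        ok = True
    finally:
        METRICS.append({
            "operation": operation,
            "duration_ms": (time.perf_counter() - start) * 1000,
            "ok": ok,
        })

# Usage: any code inside the block contributes one timing sample, e.g.
# with monitored("checkout"):
#     place_order()   # hypothetical operation under test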

4. Conduct tests

With testing parameters now clearly defined, it’s time to execute the performance tests. Testers, or automated test harnesses, run the chosen test scenarios using the selected performance testing tools.

Testers typically monitor system performance in real time so they can check throughput, response times and resource usage. Throughout the test scenarios, they watch for performance bottlenecks and other performance-related anomalies reflected in the test metrics.
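At its core, a test run is a loop that drives the system with many concurrent virtual users while recording timings. A minimal sketch using only the Python standard library, with a stubbed send_request standing in for a real HTTP call:

import random
import time
from concurrent.futures import ThreadPoolExecutor

def send_request() -> float:
    """Stand-in for a real HTTP call; returns elapsed seconds."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # simulated service latency
    return time.perf_counter() - start

def run_load(concurrent_users: int, requests_per_user: int) -> list[float]:
    """Drive the system with N virtual users; collect response times."""
    def user_session(_):
        return [send_request() for _ in range(requests_per_user)]

    timings: list[float] = []
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for session in pool.map(user_session, range(concurrent_users)):
            timings.extend(session)
    return timings

results = run_load(concurrent_users=20, requests_per_user=10)
print(f"{len(results)} requests, avg {sum(results) / len(results) * 1000:.1f} ms")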

5. Study results

Next, testers evaluate the performance data that’s been collected during the testing process. They pore over the gathered data and search for areas of performance that need improvement.

Then, testers compare these results against the performance benchmarks established in the first step of the testing process. Through this comparison, testers can see where the results deviate from expected performance and where bottlenecks might have occurred.
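Because averages hide tail latency, comparing results against benchmarks usually means reducing the raw timings to percentiles first. A sketch, reusing the kind of timing list produced in the execution step (the 500 ms p95 benchmark is an assumed example):

import statistics

def summarize(timings_s: list[float]) -> dict:
    """Reduce raw response times (seconds) to percentile stats in ms."""
    ms = sorted(t * 1000 for t in timings_s)
    cuts = statistics.quantiles(ms, n=100)  # 99 percentile cut points
    return {"p50_ms": cuts[49], "p95_ms": cuts[94], "p99_ms": cuts[98],
            "max_ms": ms[-1]}

stats = summarize([0.12, 0.15, 0.11, 0.47, 0.52, 0.13, 0.14, 0.16])
print(f"p95 deviates from the 500 ms benchmark by {stats['p95_ms'] - 500.0:+.1f} ms")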

6. Optimize, test, repeat

After identifying performance problems through analysis of the test data, developers address them with code optimizations, resource upgrades or configuration changes.

After implementing changes, developers rerun the testing sequence to confirm that the changes worked, repeating the cycle until performance results align with the defined benchmarks.


Who benefits from performance testing?

Performance testing goes deep “under the hood” to check system or application output, so it makes sense that software development teams are the most common users of performance testing methods. Included in this first user group are professionals actively engaged in the development process: developers, quality assurance (QA) engineers, and DevOps teams. Each gets something unique from performance testing:

  • Developers use performance testing to find performance issues early in the software development lifecycle (SDLC) so they can prioritize development changes and enact them more efficiently.
  • QA teams rely on performance testing to inform their efforts and help them determine whether an application’s operating speed, functional stability and scalability features are working sufficiently at various workloads.
  • DevOps teams build performance tests into continuous integration and delivery pipelines to confirm that each release meets performance benchmarks before it reaches production.

The next group of users aren’t developers, but they still work at ground level, with system performance management a major component of their jobs:

  • Project managers use performance testing to assess risks and gauge system impacts that could result from system changes.
  • Chief technology officers (CTOs) or staffers in similar capacities need performance testing to help them determine how shifting demands can be translated into long-term growth.
  • Company owners are all about protecting revenue streams. Performance testing confirms that load times are sufficiently quick and that slow, unresponsive service doesn’t trip up the revenue process.

But it’s not just company management that conducts performance testing. Organizations and businesses of all sizes make frequent use of it for various purposes:

  • Large businesses that regularly experience periods of high traffic volume depend on performance testing to confirm that their applications can handle heavy workloads. This approach is especially important for companies that are heavily reliant on online platforms with large user bases.
  • Small companies (and startups) need performance testing as part of growth-planning efforts. These organizations use performance testing to alert them about future bottlenecks that could negatively impact their ability to grow successfully.
  • Businesses (of any size) that are preparing to launch new applications are smart to engage in performance testing as a means of predicting problems with real-world operating scenarios. The same holds true for companies making important IT system changes.
  • Financial institutions hold enormous responsibility and face considerable industry regulations. Such businesses routinely use performance testing to help them maintain baseline standards, especially as they pertain to periods of high transaction volume.

Types of performance testing

Developers perform different types of performance testing to derive specific types of result data and support a certain testing strategy. Here are the most prominent types of tests.

Load testing

Load testing indicates how the system performs when operating with expected loads. The goal of load testing is to show system behavior when encountering routine-sized workloads under normal working conditions with average numbers of concurrent users.

Example: For an e-commerce website, a tester simulates being a site user and going through the steps of shopping for items, placing products in carts and paying for those purchases.
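In script form, that journey becomes a sequence of steps executed by each virtual user. A sketch with stubbed actions; a real test would issue HTTP requests against the storefront’s actual endpoints:

import random
import time

def browse_catalog():
    time.sleep(random.uniform(0.02, 0.06))  # stand-in for GET /products

def add_to_cart():
    time.sleep(random.uniform(0.01, 0.03))  # stand-in for POST /cart

def checkout():
    time.sleep(random.uniform(0.05, 0.10))  # stand-in for POST /checkout

def shopper_journey() -> float:
    """One virtual shopper’s end-to-end flow; returns total seconds."""
    start = time.perf_counter()
    browse_catalog()
    add_to_cart()
    checkout()
    return time.perf_counter() - start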

Scalability testing

Load testing shows whether the system can support regular load conditions. Scalability testing puts that system under stress by increasing the data volume or user load it handles, showing whether the system can keep pace at greater scale and still deliver.

Example: In vertical scaling, a developer might add CPU and memory to a database server so it can accommodate a larger volume of data queries.

Stress testing

Stress testing is analogous to a dive test conducted by a submarine crew. Here, the system is pushed to its understood operational limits—and then even further—to determine exactly how much the system can take before reaching its breaking point.

Example: Failover testing is an extreme form of stress testing that begins with simulating component failures. The goal is to see how long it takes for the system to recover and resume operation.
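In code, finding the breaking point is often a stepwise ramp: increase the load until the error rate exceeds an agreed budget. A sketch, where run_step is a hypothetical callback that drives the system with a given number of virtual users and returns the observed error rate:

def find_breaking_point(run_step, start=10, step=10,
                        limit=10_000, max_error_rate=0.05):
    """Ramp the user count until errors exceed the budget."""
    for users in range(start, limit + 1, step):
        if run_step(users) > max_error_rate:
            return users  # smallest load at which the system "breaks"
    return None           # no breaking point found below `limit`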

Spike testing

Here we’re testing a different kind of stress: a sudden, sharp spike in user traffic or data transfer volume. The system must absorb the surge while continuing its usual operations.

Example: Companies that run websites need to prepare not only for outages, but also for the surge of users trying to access the site simultaneously once it’s back online. Spike testing can gauge how smoothly the system is likely to handle that sudden increase in demand.
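Spike tests are usually expressed as a load profile: a steady baseline of virtual users, an abrupt jump, then a return to baseline. A sketch of such a profile generator (all numbers are illustrative):

def spike_profile(baseline: int, peak: int,
                  warm_s: int, spike_s: int, cool_s: int) -> list[int]:
    """Virtual-user count per second: flat, sudden spike, flat again."""
    return [baseline] * warm_s + [peak] * spike_s + [baseline] * cool_s

# 50 users for 60 s, an abrupt jump to 1,000 for 30 s, then back down.
profile = spike_profile(baseline=50, peak=1000, warm_s=60, spike_s=30, cool_s=60)

A test runner then resizes the active virtual-user pool each second to match the profile, watching error rates and recovery time during and after the spike.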

Volume testing

Sometimes performance is about user traffic. Volume testing, in contrast, is concerned with how a system manages large amounts of data: can the system process the data fully and store it without degradation?

Example: A medical clinic maintains huge volumes of patient information and is legally required to be able to access those patient records and related medical data. That constant influx of data can strain a system. Volume testing lets users know whether their system is up to the challenge of constantly accepting more data.
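A volume test often amounts to loading a data store far beyond its routine size and confirming that the data stays intact and queries stay within budget. A minimal sketch using SQLite as a stand-in for the production database:

import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")

# Load a large (illustrative) volume; production tests use far more data.
conn.executemany(
    "INSERT INTO records (payload) VALUES (?)",
    ((f"patient-record-{i}",) for i in range(500_000)),
)
conn.commit()

# Confirm nothing was lost and queries still complete quickly at volume.
start = time.perf_counter()
count = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(f"{count} rows intact; query took {(time.perf_counter() - start) * 1000:.1f} ms")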

Endurance testing

Think of it as performance testing over the long haul. The real culprits sought by endurance testing (also called soak testing) are data degradation and memory leaks, which often surface only over extended periods of time.

Example: Social media platforms operate around the clock, and continuous usage can present problems with platform stability, data storage and user accounts. Endurance testing gives you a picture of the current operation and indicators of future performance.
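Leaks rarely show up in a single run; they appear as steady memory growth across many iterations, which is why soak harnesses sample memory periodically. A sketch using Python’s tracemalloc, with a deliberately leaky workload for illustration:

import tracemalloc

cache = []  # deliberate leak: grows forever, as a real leak might

def workload():
    cache.append(bytearray(1024))  # each call strands another 1 KiB

tracemalloc.start()
samples = []
for i in range(1, 10_001):
    workload()
    if i % 2_000 == 0:  # periodic sampling, as a soak test would do
        current_bytes, _peak = tracemalloc.get_traced_memory()
        samples.append(current_bytes)

# Monotonic growth across samples suggests a leak.
if all(a < b for a, b in zip(samples, samples[1:])):
    print("memory grew every interval; leak suspected:", samples)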

Widely used performance testing tools

Developers and testers can choose from numerous tools designed for performance testing. Here are a few of the most popular:

  • JMeter: Apache JMeter is a popular open source tool for performance testing. It uses thread groups to generate virtual users that mirror real-user traffic against the system under test (SUT), including overload conditions. JMeter reports metrics such as response times (how quickly the SUT responds to requests), error rates (the percentage of failed requests) and throughput (how many requests the SUT can handle within a specific time frame); a sketch of summarizing its results output follows this list.
  • LoadRunner: Like JMeter, LoadRunner creates massive numbers of virtual users, then generates artificial activity for them, such as messages between components or interactions with the user interface. These interactions are stored in the test scripts that LoadRunner generates. As of 2025, more than 2,600 companies reportedly used Micro Focus LoadRunner for performance testing and QA activities.
  • RoadRunner: “PHP” once stood for “personal home page,” but the term now stands for “PHP: hypertext preprocessor,” a server-side scripting language (developers execute code on servers, not on client web browsers). Open source RoadRunner (not to be confused with the similarly named LoadRunner) is a PHP application server and process manager written in Go. It improves application performance by keeping PHP workers resident between requests, eliminating repeated boot time and limiting latency.
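Whichever tool is used, the raw output generally reduces to the same few metrics. As an illustration, here is a sketch that summarizes a JMeter results file, assuming it was saved in JMeter’s default CSV (.jtl) format with timeStamp, elapsed and success columns:

import csv

def summarize_jtl(path: str) -> dict:
    """Summarize a JMeter CSV results file (.jtl).

    Assumes the default CSV output: `timeStamp` (epoch ms),
    `elapsed` (ms) and `success` ("true"/"false") columns.
    """
    elapsed, stamps, failures = [], [], 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            elapsed.append(int(row["elapsed"]))
            stamps.append(int(row["timeStamp"]))
            if row["success"].lower() != "true":
                failures += 1
    duration_s = max(1, (max(stamps) - min(stamps)) // 1000)
    return {
        "requests": len(elapsed),
        "avg_ms": sum(elapsed) / len(elapsed),
        "error_rate": failures / len(elapsed),
        "throughput_rps": len(elapsed) / duration_s,
    }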

AI’s impact on performance testing

As with nearly everything in computing, artificial intelligence (AI) is now pushing software testing to new levels of efficiency, making the overall performance testing process faster, more accurate and easier to automate.

Specifically, AI can shorten testing cycles, reducing the time it takes to run tests. With its eagle-eyed accuracy, it can notice subtle performance changes that might elude human testers. And through predictive analytics, AI can evaluate operating trends and historical data to predict where and when bottlenecks might occur next, even adjusting test parameters based on that predicted system behavior.

But by far the most significant thing AI has done for performance testing (so far) is enabling automation on a grand scale. This automation is striking in that it’s fully capable of running the entire performance testing process from start to finish.

AI can not only automate how tests are carried out, but also write the test scripts intended for execution. In addition, it can interpret test results on the back end and offer guidance for remediating problems.

One of the most interesting and promising impacts of AI on performance testing is the rising use of human-AI collaboration. This arrangement recognizes that human instinct and knowledge still have a vital role to play; in some situations, human judgment still takes precedence.

Some experts are convinced that the performance testing of the future relies on this hybrid approach, combining machine processing muscle with a human sense of context and nuance.
