System testing is the performance-based, end-to-end software testing of an entire system. This end-to-end testing includes aspects of functional testing, non-functional testing, interface testing, stress testing and recovery testing.
Imagine you’re looking at a software system under a microscope, starting at the most extreme level of magnification, with the unit. This is the basic building block of the software system. Then the view expands outward to include the next level of magnification—the modules created by those individual units. Finally, by zooming out fully, you arrive at the system level. At this level of magnification, you can see everything within the system and how all the components created by those modules work together.
In some ways, system testing is kind of like that microscopic view, but with a key difference. System testing is a form of black box testing, which means that testers are less interested in the individual components of the system than in its overall functionality. From this kind of “pass/fail” perspective, system behavior is noteworthy only insofar as it relates to system performance. (White box testing, by contrast, allows more visibility into the nature of the components within a system.)
System testing often involves the analysis of multiple separate subsystems that may or may not work in unison within a larger software system.
Consider the countdown that precedes space launches. While everyone remembers the dramatic 10-step countdown before ignition and lift-off, fewer recall the numerous departmental checks that are asked by the flight chief and answered in the affirmative as a “go.” In a typical space launch, department heads are consulted about planned operations, mission safety, vehicle systems, and expected weather conditions, among numerous other matters. Each department is queried and each department head responds in turn.
Similarly, system testing can be considered the final checklist that precedes the launch of a new software system. The last round of cleaning up any known software bugs has been completed. And just like the historic check-off lists that early space pioneers originated, it all comes down to a final go from each “department” included within the system testing.
Each item on that checklist is framed in terms of the system’s functionality.
When discussing system testing, we naturally encounter the topic of dependencies, which are relationships that exist between test cases. In such situations, the outcome of one test case can depend partially or wholly on the results of another test case. Dependencies can also involve functionality, testing environments or security policies, and they can even impact an organization’s entire testing process.
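To make the idea concrete, here is a minimal sketch of a test case dependency using pytest. The Account class is a hypothetical stand-in for a system under test; the second test depends on the deposit behavior that the shared fixture establishes, so a failure upstream cascades downstream.

```python
# Minimal pytest sketch of a test-case dependency. The Account class is a
# hypothetical stand-in for the system under test: if the funded_account
# setup fails, every test that relies on it fails with it.
import pytest


class Account:
    def __init__(self, balance: float = 0.0):
        self.balance = balance

    def deposit(self, amount: float) -> None:
        self.balance += amount

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


@pytest.fixture
def funded_account() -> Account:
    # Shared setup: both tests below depend on this step succeeding.
    account = Account()
    account.deposit(100.0)
    return account


def test_deposit_recorded(funded_account: Account) -> None:
    assert funded_account.balance == 100.0


def test_withdraw_depends_on_deposit(funded_account: Account) -> None:
    # The outcome here depends partly on the deposit behavior above.
    funded_account.withdraw(40.0)
    assert funded_account.balance == 60.0
```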
System testing methodologies don’t provide a deep look into a system’s inner workings (remember, this is a form of black box testing), but instead let you know whether a particular application works. The idea is for system testing to help locate gaps, errors or missing requirements as it determines the overall functionality of the software application.
System testing is usually performed following integration testing but before acceptance testing, thus ensuring all components function together properly. As we’ll see, it often encompasses both the functional and non-functional aspects of the system. Because it’s based on both the strictly functional and the broadly non-functional areas, it gets into aspects as far afield as usability, security and performance.
One of the main purposes of system testing is that it enables you to verify that the fully integrated source code builds and runs as a usable program. However, the overarching goal of system testing is to make sure that the software being evaluated capably supports the business requirements of the users who will be relying upon it.
The testing process is intended to reflect the same production environment that will be used, to ensure the software functions as needed despite changing, real-world conditions. Similarly, test data is created to mimic real-world data and situations. Once test cases are conducted, defects in the software can be located and fixed.
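As an illustration, here is a small sketch, using only Python’s standard library, of test data shaped to mimic production records. The record shape and value ranges are hypothetical; in practice they would mirror your real schema and distributions.

```python
# Sketch: generating production-like test data with the standard library.
# The record shape (name, age, signup_date) is a hypothetical example.
import random
from datetime import date, timedelta

random.seed(42)  # fixed seed, so defects found with this data are reproducible

FIRST_NAMES = ["Ana", "Bo", "Chen", "Dara", "Eli"]


def make_test_user() -> dict:
    """Build one synthetic record shaped like a real user row."""
    return {
        "name": random.choice(FIRST_NAMES),
        "age": random.randint(18, 90),
        "signup_date": (date(2024, 1, 1)
                        + timedelta(days=random.randint(0, 364))).isoformat(),
    }


test_users = [make_test_user() for _ in range(1_000)]
print(test_users[0])
```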
System testing can be classified according to one of three main groups. The first, functional testing, is concerned with system performance but seeks no deeper answer than whether the software functions as promised.
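A minimal sketch of such a black box functional check might look like the following, assuming a hypothetical REST endpoint for the system under test and the third-party requests library. The check exercises inputs and expected outputs only, never internals.

```python
# Black box functional check: exercise inputs and observe outputs only,
# with no knowledge of the implementation. The URL and response fields
# are hypothetical placeholders for a real system under test.
import requests

BASE_URL = "https://example.com/api"  # placeholder endpoint


def test_order_lookup_returns_expected_fields() -> None:
    response = requests.get(f"{BASE_URL}/orders/12345", timeout=5)
    assert response.status_code == 200       # pass/fail: did the call work?
    body = response.json()
    assert body["order_id"] == "12345"       # expected output, nothing more
    assert "total" in body                   # no assertions about internals
```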
While functional testing provides extremely important information, that data is basically limited to an up or down vote based strictly on whether the system performs as it should. That omits a huge number of pertinent variables—like system security, UX and performance aspects.
Non-functional system testing provides a means for judging how the system operates. The essential difference between functional testing and non-functional testing can be boiled down to a simple comparison: functional testing asks whether the system works, while non-functional testing asks how well it works. With that in mind, key forms of non-functional system testing address qualities such as performance, security and usability.
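For instance, a performance check cares less about the answer than about how quickly it arrives. Here is a minimal sketch; the 200 ms budget and slow_function are illustrative assumptions, not real requirements.

```python
# Non-functional sketch: measure how well an operation performs, not
# whether it returns the right answer. The 200 ms budget is an assumed,
# illustrative service-level target.
import time


def slow_function() -> int:
    return sum(range(1_000_000))  # stand-in for the operation under test


def test_response_time_within_budget() -> None:
    start = time.perf_counter()
    slow_function()
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < 200, f"took {elapsed_ms:.1f} ms against a 200 ms budget"
```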
Still other types of system testing are useful despite not falling neatly into either the functional or non-functional category.
While the system testing process provides the most comprehensive black box testing available, performing system testing is not without potential problems:
The requirements that system testing must satisfy are numerous, be they business requirements endemic to that organization, functional requirements unique to that software, or specified requirements that may apply to its operation. Indeed, there never seems to be a shortage of requirements that software systems need to absorb. Requirements that change frequently can trip up a testing effort and leave it with an incomplete or outdated batch of test cases.
It should come as news to nobody that deadlines can wreak havoc in a business environment. Deadlines are legendary for creating harsh impacts as work schedules collide with calendar-driven expectations. In software development, deadline pressure often means that system testing is short-changed: conducted in an incomplete fashion or skipped altogether. This often requires retesting later in the software development lifecycle.
System testing doesn’t occur in a vacuum or without effort. It requires the skilled work of testing teams, testing tools to assist that labor, and an adequate budget to enable it in the first place. Without these components, it’s easy for shortcuts to be implemented in their place, leading to incomplete testing and reduced test coverage for the software application or product.
Many potential vulnerabilities can be assessed during the testing process, but software engineering staff can only make those assessments if they are supported and in control of the testing environment they’re working in. When testers aren’t working in stable environments, it becomes easier for them to miss potential software defects. And if testers in unstable environments do find software bugs that need repair, they often have a harder time reproducing those bugs later.
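One common mitigation, sketched below under the assumption that the test uses randomized inputs, is to log the random seed that drove a run so a failure seen in one environment can be replayed exactly in another. The helper names and the TEST_SEED variable are illustrative, not from a specific framework.

```python
# Sketch: record the seed behind a randomized test run so failures can be
# reproduced later, even across environments. TEST_SEED is a hypothetical
# environment variable, not a standard one.
import os
import random


def run_randomized_check() -> None:
    seed = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
    random.seed(seed)
    print(f"TEST_SEED={seed}")  # re-run with this value to reproduce the result
    data = list(range(10))
    random.shuffle(data)
    assert sorted(data) == list(range(10))  # property: shuffle loses no items


run_randomized_check()
```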
Reviewing lines of code for software quality assurance is painstaking work that can grow tedious and time-consuming. What makes such work even more difficult is when communication gaps occur between testers, developers and other stakeholders. As with any business endeavor, communication problems engender misunderstandings and create an environment in which defects can escape detection and take root within the software.
Testing techniques vary, and test results come in all stripes, from simply understood test data to vast and complex datasets that require more serious management during and after the testing phase. The level of project complexity is increased further when test environments don’t fully mirror their production counterparts. The testing strategy that’s implemented during the software development process needs to consider how to best navigate these issues.