Exploiting use cases to improve test quality

from The Rational Edge: Read how testing and quality assurance organizations can improve test quality by employing use case techniques. This content is part of The Rational Edge.


Debra Sheldon, User Acceptance Test Manager, Client Process Transformation, IBM

An IBM Certified I/T Specialist with more than fifteen years in software testing, Debra Sheldon began working with multi-geo end users in 1993 and offshore solution teams in 1996. She currently directs a team of user test managers for the Client Process Transformation at IBM. These projects supply IBM's global work force with critical sales and distribution tools designed to operate in a global market environment. She led test teams for Nielsen Household Data and Nielsen Media prior to joining IBM. An active process improvement advocate, Debra is currently working on formalizing user acceptance testing processes for global applications. She has an MBA from the University of South Florida and a BA from San Jose State University.

Sylvia Lenters, Senior IT Specialist, Systems Engineering, Architecture, and Test, IBM

Sylvia Lenters is a Senior IT Specialist in IBM's GBS Systems Engineering, Architecture, and Test (SEA&T) organization. She has eighteen years' experience successfully leading and supporting various projects, with fifteen of those years in test leadership roles. Her experience includes test planning, test execution, technical test leadership, and test management. A member of many committees and competency teams, she was one of four test team leaders involved in helping her organization successfully achieve SEI CMMI Capability Level 5 in 2005. She has a BS in Computer Science from Texas Woman's University in Denton, Texas, and is a senior member of the American Society for Quality.

15 February 2008


Test organizations can realize significant gains in test quality by harnessing the power of use cases. For years, developers and business analysts have employed use case models to capture requirements. Test organizations can greatly benefit by using these same use case techniques. Well-constructed use cases provide value to testing efforts in terms of coverage efficiency, ease of traceability, and accuracy in estimation. They can also mitigate some of the challenges of globally distributed virtual teams. But the greatest benefit, quantitatively, is in test case generation.

Use cases and the supporting artifacts are invaluable drivers for identifying and creating Unit, Function, System Integration, and User Acceptance tests. Each of these test levels has unique objectives and requires different sets of inputs to achieve solid test coverage, but all can draw value from the project's use cases. By implementing use case methods, workshops, and reviews, test organizations can deliver well-defined and appropriate testing with greater precision and greater efficiency.

The argument for employing use cases in testing

Use cases have been a tool for software developers for years. A use case clearly describes how a specific user action initiates a named process to deliver a specific outcome to the user. Correctly specified use cases help users and developers develop a common understanding of the user requirements for a system. They are a proven and powerful software development tool.

Test organizations should also be taking full advantage of the benefits use cases offer. Testers are uniquely positioned to benefit from use cases by converting them into efficient, effective, phase-appropriate testing. Reusing use cases in this way requires no additional effort or expense from the user and development communities, provides a complete and consistent basis for test sizing and test case generation, and results in better quality solutions tested in a more predictable manner.

Testers are able to achieve all of these benefits because use cases, and their supporting artifacts, provide solid foundations in several areas that are typically problematic for testers. Because use cases are delivered early, and are understandable by everyone on the project, test teams can perform more accurate test sizing earlier in the project than with line-item requirements gathering. Further, use cases drive clearer communication and enhance development's understanding of the users' needs. Their simplicity ameliorates some of the communication challenges of global, virtual solution delivery teams. Use cases also offer the opportunity to simplify requirements traceability and to ensure more complete test coverage.

Test teams using use cases are able to use new and improving techniques for enhanced coverage and efficiency, as well as more precise scope definition and test prioritization. These combined benefits result in a more effective, more rigorous test process and more predictable test outcomes.

Because of the many benefits use cases offer, test organizations should be motivated to make sure each use case is built correctly. The project team should conduct a use case workshop, or several if needed, with test writers from all test phases participating. Each use case must undergo static testing to ensure it contains clear and beneficial information from which testers can build appropriate test cases for the designated test phases. By re-using well-constructed use cases and supporting artifacts, test organizations are empowered to deliver targeted, efficient testing.

What is a use case?

Use cases have been around since the mid-1980s, when the concept was first described by Ivar Jacobson.1 Jacobson was an early pioneer of component-based design. He is also credited as a principal developer of both the Unified Modeling Language (UML) and the IBM® Rational® Unified Process®, or RUP®. His interest in software development best practices led him to develop the concept of use cases as a way to better identify and specify software requirements.

Although the concept of use cases has been around for more than twenty years, many software practitioners have never actually used them or even been exposed to them. The growing popularity of business transformation methodology, which relies heavily on process rules, is changing that. Popular tools such as IBM Rational RequisitePro® (used to support RUP) incorporate the generation of use cases as part of the requirements definition process. The introduction of UML, which sets standards for software description, has also helped to further the adoption of use cases.

A use case focuses on what the stakeholder needs the system to deliver, rather than describing how to arrive at that end result. A use case takes a "black box" approach to the system.2 It should state what action is going to occur, but not get into specific details about how that action will be performed. Think of a use case as results-oriented from the actor's point of view. The actor doesn't really care how the action is performed as long as the expected outcome is achieved. Therefore, a use case must represent an outcome that holds significant value to the actor.

UML defines a use case as "A description of a set of sequences of actions, including variants, that a system performs that yields an observable result of value to a particular actor."3 This "value" concept is very important when determining what does and doesn't constitute a use case. If you cannot identify a specific value that will be realized by the actor, the action is probably not a good candidate for a use case.4

A use case can be graphical or textual, but ideally it is both.5 Use cases can be created in a text-only format, and were originally documented in tools such as Microsoft Word. Over time, it has become increasingly common for them to be represented graphically in use case diagrams. Today, use case modeling is most commonly done with UML and with tools that support RUP. Such tools encourage both text descriptions and a variety of supporting diagrams to better illustrate how the system will be used.

Use case diagrams (see Figure 1 for an example) are often grouped to show a collective set of stakeholder needs. In the diagram, any entity that communicates directly with the system is commonly referred to as an "actor." An actor can be a person or a role that represents one or multiple people. It can also be another computer system. An actor is commonly represented by a "stickman," even when the actor is another computer system. Each use case is represented by an oval, and labeled with a declarative statement that describes what action the use case is intended to perform. This statement serves as the use case name. It should be brief, but descriptive of the action to be performed. Communication lines can also be drawn to show a relationship between actors and use cases.

Figure 1: A typical use case diagram

The use case may be supported by a use case description containing key properties, such as pre-conditions, post-conditions, and a flow of events. A use case model consisting of the use case diagram, actor definitions, and a use case description may also be produced.6

A team's initial goal in developing use cases is to identify all of the core functions that the user stakeholders want from the system. Once they've established these functions, the team can begin to expand on the details related to each key function, and they can begin to look at alternative process flows and exceptions that might be related to each of the identified use cases.

In use case development, it is generally assumed the core functions will have a positive or successful outcome. This is often referred to as the "basic path" or "happy path." For example, in a "Search for a Product" use case, the positive outcome would be for the sought product to be found. Numerous alternative flows, including exceptions and errors, may also exist. For example, what if the product is not found? Or the system on which the product catalog is located is not responding? Or a customer keys invalid information into the search field? These are also valid paths and should be captured in the documentation.
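
To make the point concrete, here is a minimal sketch (the flow wording and structure are hypothetical, not from a formal template) of the "Search for a Product" use case with its happy path and alternate flows captured as data:

```python
# Hypothetical sketch of the "Search for a Product" use case: the happy path
# plus the alternate and exception flows described above.
use_case = {
    "name": "Search for a Product",
    "actor": "Customer",
    "basic_flow": "Customer enters search criteria; the sought product is found and displayed",
    "alternate_flows": [
        "Product is not found",
        "The system hosting the product catalog is not responding",
        "Customer keys invalid information into the search field",
    ],
}

# Every flow, happy or otherwise, is a valid path to capture in the documentation.
paths_to_test = [use_case["basic_flow"]] + use_case["alternate_flows"]
print(len(paths_to_test))  # 4
```

Even in this toy form, one use case yields four documented paths, each a candidate for a test case later in the process.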

The level of detail and types of use case documentation vary greatly from one organization or company to another and even from one project to another within the same organization. This can be influenced by many different factors, including the project's budget or scope (particularly if the solution is object-oriented in nature), the skill set of available resources, the use of UML, and the use of RUP or another methodology.

As mentioned above, use cases may be completely text-based with no supporting diagrams, or they may be illustrated using only simple use case diagrams as shown in Figure 1. According to the Object Management Group, UML 2.0 has thirteen different types of diagrams.7 These diagrams are divided into three categories: structure diagrams, behavior diagrams, and interaction diagrams. Structure diagrams, such as a class diagram, represent static application structure. Behavior diagrams, such as use case and activity diagrams, represent general types of behavior. Interaction diagrams, such as a sequence diagram, represent different aspects of interactions. In addition to the variety of diagrams teams can produce, teams can also create other supporting artifacts, such as a glossary or special-requirements document (e.g., one for non-functional requirements).8

The development of use cases for a project is generally seen as a way of facilitating the requirements gathering process, thereby speeding up application development. Most publications seem to focus primarily on this aspect of use cases. While most software testing publications don't reference use cases at all, the reality is that use cases can be extremely valuable to test organizations for driving improvement in many areas.

Benefits of use cases for test organizations

As we'll show in the sections that follow, use cases offer compelling benefits for improving testing quality and efficiency.

Sizing estimation

Use cases provide exciting opportunities in the area of test sizing. John Smith9 details the possibilities for translating a use case into Source Lines of Code (SLOC) using the Constructive Cost Model (COCOMO), and, in turn, using that information to develop a Rough Order of Magnitude (ROM) sizing.10 The success of this approach assumes that lower-level work products, such as architecture and data design, have been generated previously.11

Another sizing technique, Use Case Function Point (UCFP) estimation, is an adaptation of Function Point Counting, and it shares the challenges of COCOMO-type methods. Because of shortcomings in the foundations of these two older estimation methods, bridging the gap between the level of detail in a use case and the decomposition these methods expect may be too problematic to overcome effectively. However, since ROM sizings tend to get locked in and considered final early in the project, any tool that provides solid test sizing information early in the process remains valuable.

The Use Case Points Method (UCPM) appears to offer the highest potential for effective test sizing. This method was developed from more recent design methods, rather than being retrofitted to handle the advent of use cases. Introduced by Gustav Karner in 1992, the estimation is driven by the elements of a use case.12 With some modification, this method can be directly utilized by test organizations.13 The more black box and work-stream driven the test organization is, the more effectively primary use case documents can be applied to sizing. Thus, with UCPM, user acceptance or systems integration teams will be able to generate more accurate estimations much earlier than the other test phases can. Because the value of any sizing tool or method lies in the accuracy of its inputs, historical sizing weights, as they become increasingly available, will provide even higher confidence levels in these estimation techniques.
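
As a worked illustration of Karner's calculation, the sketch below computes Use Case Points from weighted actor and use case counts. The weights are Karner's published values, but the function name, example counts, and factor totals are hypothetical; a test organization would calibrate them and multiply the result by its own hours-per-point rate:

```python
# Sketch of Karner's Use Case Points Method; the example inputs are made up.
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tfactor_total, efactor_total):
    """actors / use_cases map a complexity rating to a count."""
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())         # Unadjusted Actor Weight
    uucw = sum(USE_CASE_WEIGHTS[c] * n for c, n in use_cases.items())  # Unadjusted Use Case Weight
    tcf = 0.6 + 0.01 * tfactor_total   # Technical Complexity Factor
    ecf = 1.4 - 0.03 * efactor_total   # Environmental Complexity Factor
    return (uaw + uucw) * tcf * ecf

# Hypothetical release: 2 simple and 1 complex actor; 3 average and 2 complex use cases.
ucp = use_case_points(
    actors={"simple": 2, "complex": 1},
    use_cases={"average": 3, "complex": 2},
    tfactor_total=30,  # sum of the 13 weighted technical factors
    efactor_total=20,  # sum of the 8 weighted environmental factors
)
print(round(ucp, 1))  # 46.8 use case points
```

The arithmetic is trivial; the value lies in being able to run it as soon as the use case inventory exists, long before line-item requirements are decomposed.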

Development inputs

Use cases contribute to better quality development inputs and code that test organizations rely upon. As the use cases begin to take shape, the requirements are discussed in terms of goals for the user or actor. This process forces the developer to concentrate on the functional requirements outside of the development paradigm to be used.14 This method pulls developers inside the user's thought process (e.g., their goals), and developers can see the requirements in context. This process results in developers who more thoroughly understand what a system is supposed to do and, thus, can deliver a well-built solution to testers.

Although the process requires further skilled analysis, these use cases are then decomposed for input into other work products that solid testing requires. At that point, these contextual or black box scenarios are transformed into white box implementations.15 Analysis and design modeling outputs are linked directly to the use cases driving them. Additionally, use cases drive user-experience modeling, which sets the stage for ensuring the user interface is consistent throughout the solution. In the case of object-oriented programming, these models clearly lay out the classes and methods linked to a use case. The impact of any changes in a lower-level document can be linked back to any of the higher-level use cases.16


Requirements traceability

A critical element in the overall quality of a solution is the ability to track each requirement through all stages of the development and delivery process. The IEEE describes traceability in terms of linking elements from one artifact to another, and such definitions are the core justification for this exercise.17 A use case enables a predecessor-successor relationship to be defined and instantiates the driver for the feature.

The purest economic argument for traceability, however, is that it provides justification to execute a specific test. By knowing that each test case can be traced back, ultimately to a business requirement, test planners are assured that testing is not being driven by anything other than the solution currently being financed. Conversely, the sponsors of the solution are able to directly identify evidence that the solution is providing them with business value.

Use cases can significantly improve a tester's ability to reconcile the development artifact list. Traceability creates a trail from the requestor, through the implementer, to the verifier. Booch et al. conclude there is no established standard for communicating requirements traced through the solution deliverables.18 Critical applications require more stringent methods for communicating how a single requirement links to and connects each artifact.19 However, even a typical non-critical system should require simple identification of dependency relationships. The basic process of turning what customers want into a use case provides this traceability among test deliverables. When use cases drive solution testing, you can readily answer the question of why or what was tested, providing confidence that the solution matches the explicit expectations of the customer.
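
A minimal sketch of what this predecessor-successor trail might look like in practice (all identifiers are hypothetical):

```python
# Hypothetical sketch of use case-driven traceability: each test case links
# back through a use case to a business requirement, so "why was this tested?"
# can always be answered.
trace = {
    "TC-101": {"use_case": "UC-01 Search for a Product", "requirement": "BR-7 Customers can locate products"},
    "TC-102": {"use_case": "UC-01 Search for a Product", "requirement": "BR-7 Customers can locate products"},
    "TC-201": {"use_case": "UC-02 Place an Order",       "requirement": "BR-9 Customers can order online"},
}

def justify(test_case_id):
    # Walk the trail from verifier back to requestor.
    link = trace[test_case_id]
    return f"{test_case_id} verifies {link['use_case']}, driven by {link['requirement']}"

print(justify("TC-101"))
```

In a real project this mapping would live in a requirements management tool rather than a dictionary, but the trail itself is the same.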

Developing the appropriate test for the appropriate phase

Without use cases, developing unique test cases that satisfy the objectives of a specific phase can become a complex and ambiguous exercise, because the various test groups generate tests from the same source documents. This can result in redundant or inappropriate tests and coverage gaps. With use case-driven development, each test level uses a different grouping of artifacts, which helps keep the tests within the appropriate boundaries for the phase. Because use cases are expressed in terms of user goals, additional artifacts created for implementation will be required for solution completion. For example, when use cases and supporting artifacts are created, a unit tester can retrieve the appropriate class to verify. For the function tester, use cases combined with the supporting artifacts create a complete picture of the function's purpose and the valid test inputs. Function testers may use the use case as background, but use the sequence diagrams to drive test case creation; they can find the specific values supported by looking at the design information. The Systems Integration Test phase returns to the main use case and identifies the appropriate number of alternate and variable paths through it. Finally, the User (Customer) Acceptance Test team can execute test cases that are valid and respond directly to the question of solution satisfaction.

Coverage efficiency

Testers can more efficiently plan and execute testing when employing the use case model. The use case model allows testers to visually identify their test cases, especially when the process includes a use case diagram.

While it is variously referred to as branch, path, or variability identification, in all cases the test designer identifies the various ways the user or external system can execute the use case. At this point in the process, use case implementation provides a means to identify invalid tests, remove redundant paths, and plan black box testing with no guesswork as to coverage. By identifying all variable paths through the use case, testers can pinpoint where paths overlap those of other scenarios. Overlapping or redundant paths are candidates for removal from the planned test case execution library. Clay Williams has even identified a method and system for optimizing test case generation by focusing on actions and sequences.20 Called "Method and System for Generating an Optimized Suite of Test Cases," its algorithm arrives at a minimal number of test cases for coverage while eliminating redundancy.
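
As a sketch of the idea (this is a simplified greedy illustration, not Williams's algorithm; the flow graph and step names are hypothetical), paths can be enumerated and overlapping ones dropped once every branch is covered:

```python
# Sketch: enumerate every path through a small, hypothetical use case flow
# graph, then keep only the paths needed to cover each branch once --
# paths that add no new branch coverage are dropped as redundant.
GRAPH = {                                   # step -> possible next steps
    "start":          ["search_by_name", "search_by_id"],
    "search_by_name": ["results"],
    "search_by_id":   ["results"],
    "results":        ["select_item", "refine_search"],
    "select_item":    ["end"],
    "refine_search":  ["end"],
}

def all_paths(node="start", path=()):
    path = path + (node,)
    if node == "end":
        return [path]
    return [p for nxt in GRAPH[node] for p in all_paths(nxt, path)]

def edges(path):
    return set(zip(path, path[1:]))

def minimal_suite(paths):
    """Greedy branch coverage: keep a path only if it covers a new edge."""
    uncovered = set().union(*(edges(p) for p in paths))
    suite = []
    for p in paths:
        if edges(p) & uncovered:
            suite.append(p)
            uncovered -= edges(p)
    return suite

paths = all_paths()
print(len(paths), "candidate paths,", len(minimal_suite(paths)), "needed")  # 4 candidate paths, 3 needed
```

Here the fourth path reuses branches already exercised by the other three, so it drops out of the execution library with no loss of coverage.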

Whether using an established method, or creating a "grass-roots" approach to arrive at less duplication and more complete coverage, use case-based methods have clear advantages over line-item requirements methods.


Test prioritization

The act of ordering test cases for execution is a critical management tool for a test organization. There are several prioritization approaches, and the selection of an appropriate method varies between projects, teams, and development methodologies. Regardless of how test prioritization is done, use cases offer a simpler process than older requirements methods.

Testers may choose to prioritize testing by the criticality of function to the user community. Use cases support this approach by allowing the sponsor to clearly identify outcomes; functionality is then directly traced to these outcomes. This is clearly superior to prioritization via a traditionally decomposed line-item requirements method, which is likely to lose focus or traceability because it requires a multi-step, indirect process.

In the traditional process, testers are forced to link together groups of technical requirements associated with higher-level user outcomes. Determining test priority in this way is time consuming and error-prone because it requires reverse mental mapping and/or re-composition of multiple test cases back to the scenario-based goal the sponsor originally sought to represent. With use cases, these error-prone activities are nearly nonexistent. Because they directly map to the desired outcome of the functionality under test, tests can be readily placed in order of the critical need previously established by the sponsor. With the objective for each test made easy to identify, the most difficult aspect of prioritization becomes getting the business sponsor to commit to the order of importance for new or changed functions.

The simplest approach to test case prioritization is identification of the critical path: the user-facing functionality that must be operational for the solution to be of value to the end user. The critical path is useful in regression testing and other severely time-constrained testing. Identifying the critical path, especially for large releases with many requirements, can be a daunting task, however. This is another area where use cases (and particularly use case diagrams) prove very beneficial. Since use case diagrams start out identifying the positive, or "happy," paths, and then branch off to show alternate flows and exceptions, they provide a means to identify the most direct path through the application. The technique also provides information that allows quick (but informed) decisions about which alternate flows and exceptions are most important to test.

Use case implementation opens the door to even more sophisticated and potentially efficient test-prioritization methods. For example, the ranTest initiative for Software Engineering 2006 employs use cases weighted with specific criteria.21 The ranTest technique takes the universe of use cases for the release and statically prioritizes them. The test organization then executes the test cases in that order. Should the budget or schedule fail to cover the entire test suite, less critical use cases are omitted. In addition, the technique allows for dynamic prioritization. While executing tests identified in the static-based prioritized group, risk areas are identified. Thus, a second layer of test case prioritization is created, which focuses on proving that the most critical functionality is working with the highest amount of reliability. The end product is an efficient test approach that provides a specific level of assurance the solution meets the goals of the customer.

Use cases also improve the accuracy of defect prioritization. When a test failure occurs, use case-based methods allow immediate categorization and prioritization of the problem, because the overall importance of the use case has already been established. Instead of a subjective evaluation of the defect, the tester can rely directly on the declared value the sponsor placed on that specific use case at the time of requirements specification. Defect prioritization becomes nearly instantaneous and non-debatable. And, the elapsed time between defect discovery and the determination of its priority is dramatically reduced.
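
A minimal sketch of this idea, with hypothetical use case identifiers and rankings:

```python
# Hypothetical sketch: a defect inherits its priority from the sponsor's
# pre-established ranking of the use case in which the failure occurred.
USE_CASE_PRIORITY = {
    "UC-01 Place an Order": 1,   # ranked most critical by the sponsor
    "UC-07 Export Report": 3,    # ranked least critical
}

def defect_priority(failed_use_case):
    # No subjective triage debate: the ranking was fixed at requirements time.
    return USE_CASE_PRIORITY[failed_use_case]

print(defect_priority("UC-01 Place an Order"))  # 1
```

Because the lookup is mechanical, triage time shrinks to the time it takes to identify which use case failed.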

Meeting global resource challenges

Business transactions occur within a framework of culture and laws in a given market. Established market behavior shows that the best opportunity for success outside of an immediate culture is with like cultures or within cultural clusters.22 This behavior collides with current software delivery models that recruit development and test resources from around the globe, and it frequently results in misunderstanding between users and developers. The continuing movement toward more complex and process-driven solutions further increases the risk of these "disconnects" between users and developers. These conditions combine to make the challenge of ensuring that the solution fits the target market even more complex.

As Bittner and Spence point out, "Use cases work by allowing us to visualize what the system does and how to use it."23 By employing use cases, global test resources gain back some of the knowledge that may have been lost crossing cultural boundaries. They become better acquainted with business goals driving the requirement for the specific market in which the application will be executed.

Use cases engage users in the requirements process and allow them to communicate clearly with the testers as to whether the solution is on- or off-target.24 The understanding and subsequent implementation of a traditionally stated requirement varies widely among individuals. Because use cases reflect specific outcomes and are delivered early, a global test team has more time to develop a well-grounded understanding of the user goals. This additional upfront time allows testers to fully verify their understanding of the implementation and the desired outcomes. When factoring in the challenges of being unable to communicate face-to-face with the requirement owner,25 different time zones, and a lack of functional capability within the market that the product is intended to serve, the gap between what is tested and delivered and what the business really needs can be dramatic. When using global resource teams to provide solutions, use cases offer the power to reduce this requirement-to-implementation "translation" risk.

Creating test cases

The greatest impact use case implementation can have on a test organization is in test case creation. Use cases provide direct input into some of the higher-level testing. For acceptance testing, when properly implemented, use cases shorten the gap between the tester's conceptualization and the reality of user goals. Although the initial use case deliverable is of marginal value to the unit tester, this lack of detail forces the solution delivery team to subsequently create supporting material that contains the essential inputs for lower-level testing, such as the unit test. Effectively, each test phase has access to input information that is appropriate and complete for the distinct objectives of that phase and no other. Moreover, these inputs, because of their specialized nature, are clear, with direct information easily converted into a test case for the appropriate test phase. Testers are not distracted by sifting through large, all-encompassing documents in search of inputs for their unique test phase. The results are better-scripted, better-targeted tests.

Generally, test case creation from a use case-driven method involves four basic steps:

  1. Identify the scenarios
  2. Identify variants
  3. Adjust coverage for efficiency and completeness
  4. Assign data inputs
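
The four steps above can be sketched end-to-end as follows (the use case content, variant names, and data values are all hypothetical):

```python
# Step 1: identify the scenarios (basic flow plus alternates from the use case).
scenarios = ["product found", "product not found", "catalog unavailable"]

# Step 2: identify the variants of each scenario worth exercising.
variants = {
    "product found": ["exact match", "multiple matches"],
    "product not found": ["misspelled name"],
    "catalog unavailable": ["timeout"],
}

# Step 3: adjust coverage -- drop variants assumed covered elsewhere.
redundant = {("product found", "multiple matches")}  # assumed covered by another use case
selected = [(s, v) for s in scenarios for v in variants[s]
            if (s, v) not in redundant]

# Step 4: assign data inputs to each surviving scenario/variant pair.
DATA = {"exact match": "SKU-1001", "misspelled name": "prodct", "timeout": "SKU-9999"}
test_cases = [{"scenario": s, "variant": v, "input": DATA[v]} for s, v in selected]

print(len(test_cases))  # 3
```

Each test phase would run these same steps against its own grouping of artifacts, as the sections below describe.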

Not all test levels approach the above steps in the same manner. The following sections describe how to create test cases appropriate for the test phase for projects that are driven by a use case model.

Creating test cases for user acceptance tests

Because use cases contain actor and goal information in real language, test cases for user acceptance can be written prior to the creation of tests for the other test phases. A use case is defined from the perspective of the user, not the internal workings of the system, so acceptance tests can be created directly from the use cases.26 Moreover, use cases are uniquely suited for the User Acceptance Test because acceptance test cases are required to resemble the scenarios originally elicited from system stakeholders.27 As Lee Copeland points out, use cases are best suited for end-to-end, black box testing -- exactly the conditions under which user-acceptance tests are executed.28

Test case generation for acceptance tests differs from that of other test phases because the underlying premise of this test level deviates from the others in some distinct ways. Acceptance testing occurs at the point in the cycle where the technical test teams have already concluded the application is complete and ready for delivery to the customer. The individual test cases are driven by use cases, not by supplemental, non-functional requirements specifications. Test cases are a variant subset of upstream testing. Another difference is that the acceptance team's test results have the final say as to the completeness of the solution. Finally, the tester is likely a "non-technical," ordinary stakeholder, not expected to be aware of the underlying architecture, but intimately familiar with the goal of the transaction described within the test.

Since use cases are intended to be transparent and not system dependent, they will be missing some of the critical details essential to lower-level test phases. User Acceptance Test case writers should expect no detailed data in the use case. However, as Copeland clearly identifies, transaction testing, a category under which acceptance testing falls, is distinctly dependent upon data.29 Boris Beizer supports the criticality of data, stating that generating, capturing, and extracting data accounts for thirty to forty percent of the effort.30 This leads to the question of how much detailed data and scripting is required for acceptance testing. Our position is that detailed scripting with detailed data inputs and user interface steps runs counter to the general objective of the User Acceptance Test and is not required for test case scripts. Cem Kaner and James Bach assert that "complete scripting is favored by people who believe that repeatability is everything and who believe that with repeatable scripts, we can delegate to cheap labor."31 With an acceptance team built of high-value power users, top guns, or other subject matter experts, a User Acceptance Test case will not require low-level user interface or class details. Depending on the goal of the use case, some data input will be required in the pre-conditions or setup of the test case.

To begin creating test cases, User Acceptance Test resources will need to look at the flow diagram for the use case. In Figure 2, the basic flow, or happy path, goes straight through the diagram. This will be your first Acceptance Test case. The alternative flows are then marked off. These represent additional test cases.

Figure 2: Flow diagram for a use case, showing the basic and alternate paths

When working from a use case with a small number of paths, you can reasonably expect user-acceptance testing to execute all paths. For use cases that contain a high number of alternative scenarios and variants, or for those that are linked to other use cases, executing a test for each path is nearly impossible. Especially for non-critical applications, such an approach is likely not even supported by a business case.32 Moreover, many of these paths may be redundant or even invalid.

McGregor and Major offer one approach to keeping the test suite size manageable, which is exceptionally appealing to the User Acceptance Test community.33 The authors profile different types of users, creating operational profiles. Selecting the paths typically taken by these operational profiles is an excellent way to prioritize acceptance-level testing. Designed to be executed by users who fit those operational profiles, the resultant subset of test cases will provide assurance to the sponsor that the entire user community can utilize the solution to achieve the stated goals. At the same time, operational profiling is a tool to prioritize or reduce the size of the testing. In keeping with the objective of this test phase, loops and exception paths are not required coverage, unless such a path is commonly executed by one of the profiled communities.
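
A minimal sketch of operational profiling for acceptance-test selection (the profiles, paths, and usage weights are hypothetical): each profile's commonly taken path is tested first, and rarely exercised paths can be deferred or dropped.

```python
# Hypothetical operational profiles with the share of the user community
# each represents; paths are ordered by how much of the community uses them.
profiles = {
    "sales_rep":   {"path": "search -> quote -> submit order", "usage": 0.60},
    "order_admin": {"path": "review order -> approve",         "usage": 0.30},
    "auditor":     {"path": "export order history",            "usage": 0.10},
}

priority = sorted(profiles.items(), key=lambda kv: kv[1]["usage"], reverse=True)
for name, info in priority:
    print(f"{name}: {info['path']} (usage {info['usage']:.0%})")
```

Under a tight schedule, the suite can be cut from the bottom of this list while still assuring the sponsor that the bulk of the user community is covered.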

Creating test cases for the Systems Integration Test

The Systems Integration Test phase verifies the integration of all applications (including interfaces that are internal and external to the organization) with their hardware, software, and infrastructure components in a production-like environment. It is true end-to-end testing.

The approach for turning use cases into test cases is much the same for the Systems Integration Test as it is for the User Acceptance Test; however, the Systems Integration Test is broader and deeper in scope than the User Acceptance Test. While the User Acceptance Test team may be primarily concerned with the happy path and perhaps one or two alternate paths defined for a given use case, as shown in Figure 3, the Systems Integration Test team will likely need to exercise the happy path as well as the majority of alternate paths and exception paths (particularly those related to error conditions).

Shows exception path diverging from basic path

Figure 3: Exception path for the Systems Integration Test

As discussed above, for larger and more complex processes with many paths, testers may need to do careful analysis to ensure that the critical path functions are tested and that test efficiency is maintained.

Systems Integration Testing requires a more granular view of the system than does acceptance testing. In addition to scrutinizing functional aspects of the system at a lower level, the Systems Integration Test also encompasses non-functional tests. Additionally, the Systems Integration Test team is typically much more concerned about data testing than the acceptance team, especially in regard to data being passed to or received from other systems. So, while it is possible the User Acceptance Test team may actually have a one-to-one ratio of use cases to test cases, it is likely the ratio of use cases to test cases will be one-to-many for the Systems Integration Test.

To test all of these areas sufficiently and most effectively, the Systems Integration Test team requires more than use cases. Systems integration benefits from some of the more detailed UML diagrams, such as the activity or flow chart diagram. Dean Leffingwell and Don Widrig note that the activity diagram has "the advantage of reasonable familiarity."34 Easily read by even non-technical resources, these flow charts typically identify the business process and rules. They allow a visual reference to the scenario. Another artifact, a sequence diagram, provides information on interactions between objects in the sequential order that those interactions occur.35 These diagrams, complemented by text descriptions, should provide sufficient information for the Systems Integration Test to begin building functional types of test cases very early in the project lifecycle. Utilizing variant tables and data-table or decision-tree types of documents will then complete these test cases.
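The one-to-many expansion driven by a variant table can be sketched as a simple cross-product over variant dimensions. The payment, customer, and locale dimensions below are hypothetical examples.

```python
# Sketch: expanding one use case path into many Systems Integration test
# cases from a variant table. The variant dimensions are hypothetical.
from itertools import product

variants = {
    "payment method": ["credit card", "invoice"],
    "customer type":  ["new", "existing"],
    "locale":         ["en_US", "de_DE"],
}

# Each combination of variant values becomes one concrete test case
# for the same underlying use case path.
test_cases = [
    dict(zip(variants, combo)) for combo in product(*variants.values())
]
# 2 x 2 x 2 = 8 test cases from a single use case path
```

In practice, pairwise selection or other reduction techniques would prune this cross-product, but the sketch shows why the use-case-to-test-case ratio becomes one-to-many at this level.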

Some organizations may not support UML or use any sort of standard use case modeling tool. If the use cases are solely text-based, they can still be of significant value to the Systems Integration Test team in planning for testing and generating test cases, but the Systems Integration Test team will want to participate in the use case workshop to make sure the use cases are driven down to a sufficient level to support their testing needs. In this scenario, however, it is still likely the team will be dependent on external design documents that may not be generated until later in the project lifecycle.

All use case methods, formal or not, require supplementary documents for complete test case creation. Systems integration testers are able to complete the test case library by extracting information related to non-functional integration test cases. Test cases for security, performance, usability, translation, and the like are driven by these types of documents. Typically, this documentation is not available until after use case creation is well underway. Because these documents arrive late, their delivery carries the risk of delaying environment acquisition, the specifics of which are identified in this article. Test organizations should plan to mitigate this isolated risk.

The ability to generate the majority of test cases and scripts early has many benefits for the Systems Integration Test team. It is also a good argument for using a UML-based use case modeling tool. Early test case development allows the test team adequate time to do peer reviews and hold stakeholder reviews. In fact, the team may even have adequate time to do iterative reviews, which will allow them to perfect the test cases to meet both technical and user requirements.

Creating test cases for the Function Verification Test

During the Function Verification Test phase, discrete pieces of functionality are verified in detail. The Function Verification Test is typically considered to be the testing of functional components against Detailed Requirements as well as External and Internal Design documents. Internal and external interfaces may be tested using stubbed data input and trapped data output.

Unlike the Systems Integration Test, which focuses more on the ability of the application to interact with other applications and systems, the Function Verification Test scrutinizes the application at a field-by-field level and involves testing individual components and processes, bounds and limits, and other technical details internal to the application itself. The ratio of use cases to test cases will be one-to-many for the Function Verification Test. The Function Test team will be interested in the components that make up the basic flow, alternate flows, and exception flows that will later be tested by the Systems Integration and User Acceptance Test teams.

In order to achieve optimum results at this level of detail, the Function Verification Test team can benefit from some of the more detailed UML diagrams, such as the class and sequence diagrams. Copeland defines a class diagram as one that "describes the classes that make up a system and the static relationships between them. Classes are defined in terms of their name, attributes (or data), and behaviors (or methods)."36 Sequence diagrams provide information on interactions between objects in the sequential order that the interactions occur. Each interaction should result in some desired outcome.37 These types of diagrams, complemented by text descriptions, should provide sufficient information for the Function Verification Test team to build test cases very early in the project lifecycle.

Figure 4 shows a Rational UML sequence diagram from IBM DeveloperWorks that illustrates both incoming and outgoing messages (interactions).

Shows a typical sequence diagram

Figure 4: A Rational UML sequence diagram from IBM DeveloperWorks illustrating both incoming and outgoing messages

If a project is not using UML or does not carry use case documents to the level of detail found in the class and sequence diagrams, the Function testers will have to wait until detailed external and internal design documents are completed in order to build their test cases. Also, use cases do not typically cover non-functional requirements, which are common to the Function Verification Test phase.

Creating unit tests

We discuss unit testing last because the inputs required for this level of testing are among the later documents created by the project team. A unit test is the initial testing of new and changed code in a module. Most of this type of testing employs white box test methods. It is the lowest level of software testing and the first line of defense in finding software defects in dynamic testing. It is also the phase in which defects can be identified and corrected most easily and economically. Only a small portion of the actual use case serves as direct input to a unit test case. The bulk of unit tests depend upon lower-level artifacts derived from these use cases.

Since use cases take a black box approach to the system, they cannot be critical drivers for the unit testers. However, they can provide significant value when the unit tester executes the test using a stubbed-out, black box approach. Use cases provide significant insight into how users expect the system to work; and, if fully documented, they identify not only the basic flows of the application, but also related alternate flows, possible sub-flows to the alternate flows, and also exception or error flows. Black box testing is dependent on the tester knowing what the system output is going to be given a particular system input. The process flow information identified in use cases can be very helpful to the developers doing unit testing, because it allows them to easily understand how the users expect information to be input into the system and what output they expect based on a particular input.
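A black box unit test built from this kind of flow information pairs each documented flow with a known input and expected output. The component below, `classify_order`, and its business rules are hypothetical stand-ins for whatever behavior the use case actually documents.

```python
# Sketch: black box unit tests derived from use case flow information.
# The function under test and its rules are hypothetical.
import unittest

def classify_order(total):
    """Component under test: routes an order per (assumed) business rules."""
    if total < 0:
        raise ValueError("total cannot be negative")          # exception flow
    return "auto-approve" if total <= 1000 else "manual review"  # basic / alternate

class ClassifyOrderTest(unittest.TestCase):
    def test_basic_flow(self):      # happy path from the use case
        self.assertEqual(classify_order(250), "auto-approve")

    def test_alternate_flow(self):  # alternate path: large order
        self.assertEqual(classify_order(5000), "manual review")

    def test_exception_flow(self):  # error flow documented in the use case
        with self.assertRaises(ValueError):
            classify_order(-1)
```

Each test method maps to one flow of the use case, which is exactly the traceability the unit tester gets for free when the flows are fully documented.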

If the project organization is using UML for modeling, unit testers will need sequence diagrams, class diagrams, and perhaps activity diagrams. Class diagrams deliver the structural information unit testers need. Sequence diagrams, also of great value in Function Verification, provide the interaction information. They show "what the actor is doing, what components he is interacting with, and what parameters are getting passed,"38 and they are particularly useful to testers developing unit test cases. Thus, the development team responsible for unit testing should make sure that class and sequence diagrams are included as project deliverables.

It is very important the unit tester has access to artifacts that communicate decomposed information, outside of the actual use case. This level of information enables the creation of solid unit test cases and coverage. If the organization is not using UML, the development team will likely have to wait until detailed design documents have been created before they will have the level of information needed to create their test cases.

Closing the test coverage gap

Use cases drive the testing process, but they cannot directly address many types of test cases. Non-functional requirements are not included in use cases, largely due to methodological guidelines, presumably motivated by efficiency. Put more simply, creating an external list, or line item requirement, for such an attribute is far more efficient than creating a use case for it. As such, the use cases themselves are of no use for driving non-functional tests. Thus, this coverage gap raises the likelihood of such a defect going to production undetected.

Testers will need to pull from other artifacts when building test cases for the following types of tests:

  • Performance/load/stress/race conditions
  • Hardware compatibility
  • Installation
  • Internal audit and controls
  • Reliability
  • Security
  • Software compatibility
  • Translation
  • User interface

This exposure does not interfere with unit or acceptance tester objectives. Unit level tests are driven by the lowest level of documentation. Acceptance tests are motivated by the highest form of the use case. It also does not preclude those testers from finding non-functional related defects. We could even make an argument that this limitation is a form of prioritization, because when executed in a production-like environment, non-functional defects found under those limitations are likely to impact a larger number of end users.

To build test cases for non-functional requirements, testers are dependent upon supplemental documentation, also called supplementary specifications.39 Bittner and Spence write that special requirements (requirements that do not apply to the whole system) can be placed into a use case, and could thus be accessible to the high-level test case writer.40 But generally, this type of information, and thus the related test cases, should be generated later in the development process than is done with those tests driven by a use case.

Validating use cases

The process of generating test cases from use cases will sometimes stall. In such instances it is important to identify the drivers. The common drivers are:

  • The test case writer requires too much detail.
  • The use cases are too complex.
  • The use cases are too decomposed.
  • The writer lacks basic understanding of the business problem.
  • The test case is inappropriate for the level of test.

Much of the above can be avoided by being proactive early in the cycle. Because they have so much at stake, test organizations must be positioned to influence the quality of use case construction. As Leffingwell and Widrig point out, "The use case technique builds a set of assets that can directly drive the testing process."41 But correcting a weak or erroneous use case at test development time can be difficult and costly, especially when using a waterfall-based development method. Thus, we recommend a two-pronged approach to ensure the use cases are of functional value to the testers. First, a test representative involved in use case creation should steer the team away from the common pitfalls of use case writing. Second, teams should always perform static testing of the use case.

The first pitfall to be avoided is functional decomposition, a problem that stems from a variety of behaviors. The purest form of decomposition is the breaking of (what should have been) the use case into smaller parts.42 This occurs when lower-level information is substituted for a user or system goal in the use case. One way this occurs is by confusing data flow diagrams with use case diagrams.43 Basically, the use case author begins looking at the goal in terms of design rather than functionality. The results are use cases that cannot stand alone to create a testable scenario.

A related pitfall is the mixing of requirements and design within the use case. In this situation, the use case can be used to create testable scenarios, but the inclusion of hidden information44 impinges on the ability to maintain the test case through any design changes.

Too much or too little information is another common pitfall, and resolving it involves more art than science. Not enough information makes it difficult for testers to create a robust test case suite. Too much information makes it difficult to identify the test paths and prioritize tests. Knowing the right balance at the onset requires experience with use cases and historical teaming of individuals on the project.

A simpler pitfall to avoid is a team's failure to enforce design mechanisms, methods, and processes.45 This includes both writing style and content. Although this is a wide-ranging topic, test groups can easily avoid the pitfall when a consistent process is applied across the functional areas of the project team.

Even with an experienced and well-disciplined team, there is still a risk that a use case may be defective to the point that it impacts the ability to ultimately create a strong test case. The second opportunity for making sure the use case is a solid, usable input for testing is static testing of the use case itself. Teams can even use static testing of the use case as a gating mechanism. When static testing appears as an identified task in the project plan, with downstream dependencies on its successful completion, testers are assured the use cases will be solid inputs for their deliverables. Static testing of the use case can be driven by a checklist, and levels of scrutiny should involve testers.

There are multiple approaches to static testing of a project's use cases. Especially useful for non-domain experts, Copeland suggests breaking use case validation into three parts: syntax, domain expert testing, and traceability.46 In this approach, each use case is tested. NASA recommends performing analysis on only a select set of use cases.47 Analyses such as boundary, element naming, and sponsorship are performed. Alistair Cockburn recommends a list of questions to consider when reading the use case.48 These questions are useful to identify gaps but leave other granular elements unexposed, which could hinder the ability to successfully employ use cases to drive test cases. The approach to use case review may vary, but clearly, teams must give formal attention to the quality of the use cases if they are to be of value in test case creation.
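A checklist-driven static test of a use case can be largely mechanized, in the spirit of Copeland's syntax check. The sketch below is illustrative only: the required fields, the design-leakage keywords, and the sample use case are all hypothetical, and a real checklist would be far richer.

```python
# Sketch: a simple checklist-driven static test of a use case document.
# Field names, keywords, and sample data are hypothetical.

REQUIRED_FIELDS = ["name", "actor", "goal", "preconditions", "basic_flow"]

def static_check(use_case):
    """Return a list of findings; an empty list means the use case passes the gate."""
    findings = [f"missing field: {f}" for f in REQUIRED_FIELDS if not use_case.get(f)]
    # Flag design leakage: steps naming UI widgets or classes don't belong here.
    for step in use_case.get("basic_flow", []):
        if any(word in step.lower() for word in ("button", "class", "table row")):
            findings.append(f"design detail in step: {step!r}")
    return findings

draft = {
    "name": "Place Order",
    "actor": "Sales rep",
    "goal": "Order is recorded and confirmed",
    "preconditions": ["Customer exists"],
    "basic_flow": ["Rep submits the order", "Rep clicks the Submit button"],
}
issues = static_check(draft)  # flags the step that leaks UI design detail
```

Used as a gate, a check like this catches the syntax-level defects automatically, leaving the domain expert and traceability reviews to human reviewers.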

Use case workshop

There are many different theories about when and how use cases should be developed. In some organizations, a specific group, such as Systems Engineering or Architecture, owns responsibility for creation of use cases. They create the use cases in somewhat of a vacuum, at their own discretion, using whatever information they've been able to gather. They may interview stakeholders and developers to gather the needed information, or they may actually create the use cases based on published requirements. Groups that are "downstream" of the development process, such as Test and Documentation, may not be given any consideration. The fact that they are downstream of the design and development processes may indicate to some that they have little value to add to a discussion about requirements. This can degrade the true value of use cases.

Bittner and Spence suggest a very different approach to developing use cases.49 They recommend starting early in the project lifecycle and holding a use case workshop with a diverse group of participants that includes a variety of skills and knowledge. They indicate the best mixture of participants for the workshop includes representation from each of the major stakeholder groups identified for the project. This approach offers many potential benefits. Including user representatives, developers, testers, architects, and/or other major stakeholders lessens the chance key issues will be overlooked, because each group brings its own unique view of the application to the table. It also gives the representatives from each group the opportunity to make sure their group's needs are met. The workshop provides the opportunity for building a good working relationship between members of each stakeholder team; and, hopefully, it also ensures that everyone walks away from the meeting with the same understanding of the requirements.

Depending on the team dynamics, size, and overall effort, a good argument can be made for three separate workshops. The initial workshop is where the use cases are generated. The second workshop should include all test case writers for use case reviews. The third should cover sequence and class diagrams, or similar deliverables.

Some of the stakeholder groups gain an auxiliary benefit from attending such workshops -- they receive the opportunity to accomplish productive work earlier in the project lifecycle than they might otherwise. Test teams are an especially good example of this. Frequently, testers are not brought into the project until late in the development lifecycle. Thus, they are totally dependent on other groups to provide them with requirements information, external and internal design documents, and other relevant information. Once they receive this information, they must try to assimilate it without the benefit of having been involved in any of the earlier stakeholder meetings. By including the test case writers in the use case workshop, the team can develop an early understanding of the stakeholder needs related to the project. The test case writers can gain insight into the scope of the project and its technical complexity, which will allow them to provide more accurate test sizing earlier in the project lifecycle. They will be able to begin work on the test strategy and test cases weeks or even months ahead of when they would normally start these activities. This approach, when tightly coupled with change control, leads to significant improvement in the quality of testing.

To reap the workshop's full benefit, it is critical to include the correct test resources. Teams should include a technical test lead or other test subject matter expert who has hands-on experience with the application, rather than relying on an entry-level tester or a test project manager. Each test team that sends representatives to the meeting should map out its objectives for the meeting in advance. Going into the meeting, team representatives should have a list of the necessary deliverables their team will need from the workshop (e.g., sequence diagrams).


While use cases cannot cover all types of testing, they can provide significant value to several primary test phases, including the User Acceptance Test, Systems Integration Test, Function Verification Test, and Unit Test. To maximize this value, each test owner must be an active participant in ensuring use cases are created with quality and at a level sufficient for the needs of their particular test phase.

If use cases are constructed properly and with quality, they can provide many benefits to the testing organizations. They help to ensure the creation of more accurate test sizing earlier in the project lifecycle. They can aid developers in better understanding what the system is supposed to do, and thus, in building the correct solution. Improvements in the development cycle mean that a better quality product is passed on to the test teams, who ultimately help to deliver it to the stakeholders. Use cases contribute significantly to the ability to prove traceability of requirements to the solution being tested and delivered. The various artifacts created as part of the use case modeling process provide valuable and needed information to the test organizations, which can be utilized in developing test cases. Flow diagrams built during the modeling process can be useful in helping testers to identify the critical path, prioritize test case execution, and provide more efficient test coverage for a given test phase. Finally, use cases even help to alleviate the strains of global resourcing by reducing the requirement-to-implementation "translation" risk when using global solution teams.

To realize the maximum value of employing use cases for testing, the project should accommodate the need for use case workshops. Investing in a use case workshop early in the project lifecycle must include all of the necessary resources and skills needed to carry use case modeling through to the appropriate level for the project. Testers must also be able to clearly identify supplementary documentation and diagrams required to more thoroughly create tests appropriate for each test phase. All of these critical-path deliverables should be tracked at the project level. These steps will enable effective utilization of use cases to improve the solution's overall test quality.


  1. Bittner, K., and Spence, I. (2003). Use case modeling. Boston: Addison Wesley. p. 13.
  2. Black box testing is executed without technical knowledge of the underlying code and performed solely by executing the compiled application.
  3. Booch, G., Jacobson, I., and Rumbaugh, J. (1999) The Unified Modeling Language User Guide. Boston: Addison Wesley. p. 468.
  4. Bittner and Spence, p. 23.
  5. Adams, E. (2001, September). Achieving quality by design, part II: Using UML. The Rational Edge. IBM Corporation. Retrieved from http://www.ibm.com/developerworks/rational/library/3101.html
  6. Bittner and Spence.
  7. Object Management Group: http://www.omg.org
  8. Leffingwell, D., and Widrig, D. (2003). Managing Software Requirements: A Use Case Approach. Boston: Addison Wesley. pp. 320-321.
  9. Smith, J. (2003). "The estimation of effort and size based on use cases." IBM Rational Software Corporation. Retrieved from ftp://ftp.software.ibm.com/software/rational/web/whitepapers/2003/finalTP171.pdf
  10. Boehm, B.W. (1981). Software Engineering Economics. Englewood Cliffs, NJ: Prentice-Hall.
  11. Expecting such information so early in the project lifecycle could easily promote the undesirable effect of use case decomposition.
  12. Karner, G. (1993, September 17). "Resource estimation for objectory projects." Retrieved October 25, 2007 from http://www.bfpug.com.br
  13. Nageswaran, S. (2001). "Test effort estimation using use case points." Retrieved from http://www.cognizant.com/html/content/cogcommunity/Test_Effort_Estimation.pdf
  14. Copeland, L. (2004). Chapter 9. A Test Practitioner's Guide to Software Test Design. Available from http://www.books24x7.com/toc.asp?bkid=7898
  15. White box testing requires knowledge and access to the code of the application under development and is generally executed by development level testers.
  16. The development of both small- and large-scale systems benefits from this approach.
  17. IEEE. (1994). IEEE Standards Collection, Software Engineering. New York: IEEE.
  18. Booch, G., Jacobson, I., and Rumbaugh, J. (1999).
  19. Excessive traceability must be supported by business value.
  20. Williams, C. (2006). Method and system for generating and optimizing Test Suites. Patent 7134113. Retrieved from http://www.patentstorm.us/inventors/Clay_Edwin_Williams-3108806.html
  21. Risk-minimising, requirements-based testing of software systems (ranTEST). (n.d.). University of Duisburg-Essen, Systems Software Systems Engineering. Retrieved October 14, 2007 from http://www.sse.uni-due.de/wms/en/index.php?go=222
  22. Griffin, R.W. and Pustay, M.W. (2005). International Business, 4th Ed. Upper Saddle River, NJ: Prentice-Hall.
  23. Bittner and Spence.
  24. This same critical value applies to global resources coding the solution.
  25. Fromkin, V. & Rodman, J. (1983). An Introduction to Language. New York: CBS College Publishing.
  26. Copeland (2004).
  27. Alexander, I. & Maiden, N. (Eds.). (2004). Scenarios, stories, use cases: Through the systems development life-cycle. Available from http://www.books24x7.com/toc.asp?bkid=11188
  28. Copeland (2004).
  29. Ibid.
  30. Beizer, B. (1995). Black Box Testing: Techniques for Functional Testing of Software and Systems. New York: Wiley.
  31. Kaner, C. & Bach, J. (2005). Black box software testing, Spring 2005, Scripting. In Industry Worst Practise? PowerPoint slides. Retrieved from http://testingeducation.org/k04/documents/bbstScripting2005.ppt
  32. Critical-system applications will require more stringent, redundant testing across phases.
  33. McGregor, J.D. and Major, M.L. (2000, March). "Selecting test cases based on user priorities." Dr. Dobbs Portal: Architecture & Design. Retrieved from http://www.ddj.com/architect/184414580
  34. Leffingwell and Widrig.
  35. Bell, D. (2004). "UML's sequence diagram basics." Retrieved December 19, 2007 from http://www.ibm.com/developerworks/rational/library/3101.html
  36. Copeland (2004).
  37. Bell.
  38. Adams.
  39. Leffingwell and Widrig. p. 265.
  40. Bittner and Spence. p. 237.
  41. Leffingwell and Widrig. p. 306.
  42. Behrens, T. (2004, December 15). "Capturing business requirements using use cases." Retrieved October 15, 2007 from http://www.ibm.com/developerworks/rational/library/dec04/behrens/
  43. Bittner and Spence. p. 139.
  44. Berard, E.V. (n.d.). Be careful with "use cases." The Object Agency. Retrieved from: http://www.toa.com
  45. Alexander and Maiden. Ch. 12.
  46. Copeland, L. (2002). "Use cases and testing." StickyMinds.com. Retrieved from http://www.stickyminds.com/s.asp?F=S3428_ART_2
  47. National Aeronautics and Space Administration. (2003). "Analysis IV & V techniques report (DID06 OOD)." Retrieved from http://www.nasa.gov
  48. Cockburn, A. (n.d.). "Use case fundamentals." Retrieved November 1, 2007 from http://alistair.cockburn.us/index.php/Use_case_fundamentals
  49. Bittner and Spence.

Additional references

Berger, B. (2001). "The dangers of use cases employed as test cases." Article presented at Starwest 2001. StickyMinds.com. Retrieved from http://www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=ART&ObjectId=3096

Wood, D. and Reis, J. (1999). "Use case derived test cases." Paper Presented at StarEast 1999. StickyMinds.com. Retrieved from http://www.stickyminds.com/s.asp?F=S2021_ART_2


