Presenting decision processes in a tabular format goes back to antiquity.1 Decision tables offer a simple, visual aid, and they can be applied in knowledge-based systems to perform verification processes efficiently. In software development, decision tables help test teams manage complex logic in software applications.2
This article introduces a decision-table-based testing technique and describes an implementation using IBM Rational Functional Tester and IBM Rational Software Modeler. The technique is used to develop non-regression test suites that run a collection of reusable test scripts. Each test script described here is generated with Functional Tester using the GUI record/playback technique.
My goal here is a "proof of concept." To that end, I developed a Java class library over a five-day period to implement this decision-table technique with IBM Rational tools. While the technique has not yet been deployed on an actual project, I will demonstrate the potential of this approach using the IBM Rational toolset based on the Eclipse framework. The implementation I propose is based entirely on the standard, documented interfaces that are accessible to any customer.
Non-regression tests based on data-driven testing techniques, in which a single test script is used repeatedly with varying input data, are popular in the test automation community.3 This technique can be implemented with Functional Tester using data pools that are associated with a test script during or after its creation. Unfortunately, when testing applications that involve complex logic, data-driven testing becomes difficult to implement without hard coding. Generally speaking, the behavior of the application is affected by variations in the data input to the different test scripts that compose the test suite. Equivalence partitioning of the input data is required to identify the sets of input data that produce equivalent behavior in the AUT (application under test). Conditions must be hard coded in each test suite to fork toward the right testing path and to address the data-variation issue. This approach, which can be "fun" for developers, is not really appreciated by testers, especially when using a test-automation tool, since hard coding the conditions in the test scripts makes the test scripts more difficult to maintain and extend. Furthermore, it is difficult to optimize the test script decomposition without a clear strategy in mind.
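To make the hard-coding problem concrete, here is a minimal sketch of the kind of conditional forking that ends up buried inside test scripts; all method, path, and data names are invented for illustration:

```java
// Hypothetical example of hard-coded forking logic inside a test script.
// Every new equivalence class of input data forces another branch here,
// in every script that needs the decision.
class HardCodedForkingExample {
    // Chooses the next testing path from raw input data; names are invented.
    static String selectPath(String accountType, double orderAmount) {
        if (accountType.equals("PREMIUM") && orderAmount > 1000.0) {
            return "runDiscountApprovalScript";
        } else if (accountType.equals("PREMIUM")) {
            return "runStandardCheckoutScript";
        } else if (orderAmount > 1000.0) {
            return "runManagerApprovalScript";
        } else {
            return "runStandardCheckoutScript";
        }
    }
}
```

Code like this must be duplicated and kept in sync across scripts, which is exactly the maintenance burden the decision-table approach sets out to remove by moving the logic into data.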
The proposed approach
When a tester manually implements a test procedure, he selects input data according to his testing goal and makes decisions based on the behavior of the AUT. The question is: how can we help the tester in that decision-making process, when using test automation tools, without the hard coding I described above?
This simple question led us to consider the decision table technique and to develop a Java class library to validate the concept. Decision tables are implemented with Functional Tester data pools. Decision scripts that interpret the testing logic provided by the decision tables are incorporated into the test suite architecture, as shown in Figure 1. The other reusable test scripts that compose the test suite are generated with Functional Tester using the GUI record/playback technique. A test segment is defined as a sequence of test scripts between two decision scripts, between the starting script and a decision script, or between a decision script and an ending script. A data-driven table is linked to the test suite to run the test suite repeatedly with several combinations of input data injected into the different test scripts.
Figure 1: The test suite is composed of test scripts and decision scripts.
This technique provides the following benefits:
- A formalized approach to elaborating test suites and test scripts
- The encapsulation of the test suite logic in the decision scripts
- A test suite architecture centered around decision points
- Testing logic that is easy to track and change, because the decision tables can be read and filled out by non-programmers
- A more flexible implementation of the data-driven approach
The decision-table-based testing technique
When a decision point is reached in the testing process, the tester examines the state of the AUT and determines a test action. Each decision point can be specified with a decision table. A decision table has two parts: the conditions part and the actions part. The decision table specifies under what conditions a test action must be performed. Each condition expresses a relationship among variables that must be resolvable as true or false. All the possible combinations of conditions define a set of alternatives. For each alternative, a test action should be considered. The number of alternatives increases exponentially with the number of conditions, which may be expressed as 2^NumberOfConditions. When the decision table becomes too complex, a hierarchy of new decision tables can be constructed.
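The exponential growth of alternatives is easy to see in code. This short sketch (the class and method names are invented for illustration) enumerates every combination of true/false condition entries for a given number of conditions:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: enumerate every alternative (combination of true/false condition
// entries) for N conditions. There are exactly 2^N such rows.
class AlternativeGenerator {
    static List<boolean[]> alternatives(int numberOfConditions) {
        List<boolean[]> rows = new ArrayList<>();
        for (int i = 0; i < (1 << numberOfConditions); i++) {
            boolean[] row = new boolean[numberOfConditions];
            for (int c = 0; c < numberOfConditions; c++) {
                // Bit c of i gives the truth value of condition c in row i.
                row[c] = ((i >> c) & 1) == 1;
            }
            rows.add(row);
        }
        return rows;
    }
}
```

Three conditions already produce eight alternatives; five produce thirty-two, which is why overly complex tables are better split into a hierarchy.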
Because some alternatives specified might be unrealistic, a test strategy should 1) verify that all alternatives can actually be reached and 2) describe how the AUT will behave under all alternative conditions. With a decision table, it is easy to add and remove conditions, depending on the test strategy. It is easy to increase test coverage by adding new test actions from iteration to iteration, according to the test strategy.
As illustrated in Figure 2, decision tables are useful when specifying, analyzing, and testing complex logic. They are efficient for describing situations where varying conditions produce different test actions. They are powerful for finding faults both in implementation and specifications.
Figure 2: Example of a decision table
Working with decision and data-driven tables
At every decision point, a decision table should specify what needs to be verified regarding the AUT (depending on conditions) as well as the next test action. Since the logic is defined in the decision table, the tester does not need to hard code any testing logic. The decision script just performs the verifications during execution, compares the result of the verifications with the alternatives provided by the decision table, and returns the next test script to run if a solution is found.
A test suite script contains several decision scripts and test scripts. All the elements of a test suite are defined in a driver table that specifies an unordered set of test segments. Each test segment consists of a collection of test scripts that are executed sequentially between two decision scripts. For each test segment, the driver table specifies the transition between a source test script and a target test script.
As the decision is computed dynamically by the decision script during execution, a mechanism of notification must be implemented for the test suite script to be notified by the decision script about the next test script to run. When the decision script notifies the test suite script about the next test script to run, the test suite script queries the driver table to find the next test segment to run. The process is illustrated in Figure 3.
Figure 3: The elements of a test suite
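The interplay between the driver table and the decision scripts can be sketched as follows. This is a simplified illustration, not the article's actual library: transitions come from the driver table, and a callback stands in for the notification sent by a decision script at run time. All names are invented.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Sketch of the test suite driver loop. Ordinary scripts follow the
// transitions loaded from the driver table; when the current script is a
// decision script, the callback computes the next script dynamically,
// standing in for the decision-script notification mechanism.
class SuiteDriverSketch {
    private final Map<String, String> transitions = new HashMap<>();

    void addTransition(String source, String target) {
        transitions.put(source, target);
    }

    List<String> run(String start, String end, Function<String, String> decide) {
        List<String> executed = new ArrayList<>();
        String current = start;
        while (!current.equals(end)) {
            executed.add(current);
            if (current.startsWith("decision")) {
                // Decision scripts choose the next script at run time.
                current = decide.apply(current);
            } else {
                current = transitions.get(current);
            }
        }
        executed.add(end);
        return executed;
    }
}
```

Note that the transitions can be loaded in any order, which is why the driver table itself does not need to be ordered.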
Potentially, any test script within a test suite is linked to a data pool for data input. When a data-driven table is linked to the test suite, it is possible to specify several combinations of data records to input to the test scripts and to dynamically change the behavior of the AUT. When the behavior of the AUT changes, the result provided by each decision script changes, and consequently the testing path across the AUT changes. With a single test suite script, it is possible to validate several testing paths. It is easy to consider new combinations of input data and to extend the test suite coverage, adding new conditions in the decision tables.
This approach clearly separates the testing logic encapsulated in the decision scripts from the test actions and verifications performed in the test scripts. The identification of the decision points in the AUT helps formalize and elaborate the test suite decomposition into test scripts.
Implementing the technique with Functional Tester
As part of this proof of concept, I developed a Java library to implement the decision-table-based technique with Functional Tester. To create a test suite script, the tester must do the following:
- Reuse the test suite code template and fill the test suite driver table
- Create the decision scripts with a code template and fill the decision tables
- Fill the data-driven table
The decision-table-based testing library
The decision-table-based testing library consists of Java classes that provide the following services:
- Services to initialize and to iterate through the test suite structure defined in the driver table
- Services to explore the decision tables and to compare alternatives with verifications performed on the AUT
The event-listener mechanism between the decision script and the test suite script is transparent to the tester. It is implemented in the library by the DecisionBuilder and TestSuiteDriver classes shown in Figure 4. To use the services provided by the library, each test suite script and each decision script created by the tester must inherit from the TestSuiteHelper4 class, which provides an interface to the library. For that purpose, the tester selects this super helper class each time she creates a new test suite or decision script.
Figure 4: The main classes of the decision-table-based testing library
Creating a new test suite
The test suites can be used to implement the test automation strategy at the business level or at the system use-case level. At the system use-case level, a test suite implements the use-case scenarios. At the business level, a test suite traces a business workflow across several use cases. For a given testing level, the test suites can also be organized according to the testing goals defined in the iteration test plan. For example, a test suite can focus on business-rules verification, on service delivery, or on data-integrity checking (creation, modification, deletion). While in theory it is possible to work with only one test suite, this is unmanageable in practice. Thus, the tester must design test suites according to the test architecture, and it makes sense to have a test suite that runs other test suites.
To create a new test suite, the tester must do the following:
- Create an empty test script with Functional Tester
- Insert the code template in the test suite script (The code template for a test suite script is shown in Figure 5.)
- Specify the name of the driver table and the data-driven table
Figure 5: Code template for a test suite script
As shown in Figure 6, the structure of each test suite is described in the test suite driver table, a data pool that defines the transitions between the test scripts (i.e., the transition from source script to target script). The order of the rows does not matter, because the TestSuiteDriver class parses the driver data pool and loads the test suite structure into memory. Nevertheless, you must define the starting and ending scripts. The tester can fill out this table to specify the test suite or generate this data pool from the UML definition of the test suite (see "Modeling test suites with IBM Rational Software Modeler" farther below).
Figure 6: Example of test suite driver table
Creating a data-driven test suite
A data-driven table can be linked to the test suite script in order to 1) control the data input to the different test scripts and 2) create different paths across the AUT. The header of the data-driven table contains the names of the data pools used by the test scripts of the test suite. Each row of the data-driven table specifies a different combination of input data records to be used for each test script data pool. As shown in Figure 7, the first column of the data-driven table is a true/false flag used by the test suite to select or skip a row, depending on the test objectives.
Figure 7: Example of test suite data-driven table
Each test script data pool also contains a flag that indicates whether or not the test script must select a record when iterating through the records of the data pool. When the test suite starts a new iteration, the TestSuiteDriver class reads the test suite driver table and sets the test script data pool selection flags; this is repeated for all the test scripts. Thus, only the record specified in the driver table is considered when iterating through the records of a test script data pool. This mechanism is managed by the library and is completely transparent to the tester. The only constraint is the presence of the SelectRecord flag in all the data pools.
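A minimal sketch of the SelectRecord idea, with invented class and field names: each record carries a selection flag, and only the flagged records are visible when the test script iterates through its data pool.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the SelectRecord mechanism: a data pool is modelled as rows of
// values, each carrying a boolean selection flag. Only the flagged rows are
// returned when the test script iterates. All names are invented.
class DataPoolSketch {
    static class Record {
        final boolean selectRecord;  // the mandatory SelectRecord flag
        final String[] values;
        Record(boolean selectRecord, String... values) {
            this.selectRecord = selectRecord;
            this.values = values;
        }
    }

    // Returns only the records whose SelectRecord flag is set.
    static List<Record> selectedRecords(List<Record> pool) {
        List<Record> selected = new ArrayList<>();
        for (Record r : pool) {
            if (r.selectRecord) {
                selected.add(r);
            }
        }
        return selected;
    }
}
```

In the real library the flags are set by the driver before each iteration, so the test script itself never needs to know why a particular record was chosen.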
Creating a decision test script
The tester identifies the decision points in the test suite workflow and creates a decision test script for each decision point. When the test suite workflow is designed via a UML activity diagram, a decision test script is created for each condition (see "Modeling test suites with IBM Rational Software Modeler" farther below).
To implement a decision point, the tester must do the following:
- Create and fill the decision data pool
- Create an empty decision test script and insert the code template
- Register the verification points for each condition of the decision
First, the tester creates a decision table with a Functional Tester data pool. The decision table, an example of which is shown in Figure 8, can also be generated from the UML definition of the test suite (see "Modeling test suites with IBM Rational Software Modeler" farther below). The decision script compares the result of the verifications performed on the AUT with the condition entries in order to identify the test action to perform (i.e., the next test script to run). When a combination is not possible or not yet implemented, the row in the data pool is excluded and the test action is undefined.
Figure 8: Example of decision table
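The runtime behavior of a decision script can be sketched as a simple table lookup. The class and method names below are invented, and the article's library differs in its details, but the principle is the same: compare the verification results against each row's condition entries and return the associated test action, treating excluded rows as undefined.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of what a decision script does at run time: it matches the results
// of its verification points against each row of the decision table and
// returns the test action (the next test script to run). Rows whose action
// is "undefined" model combinations that are impossible or not yet
// implemented. All names are invented.
class DecisionTableSketch {
    private final List<boolean[]> conditionRows = new ArrayList<>();
    private final List<String> actions = new ArrayList<>();

    void addRow(boolean[] conditionEntries, String action) {
        conditionRows.add(conditionEntries);
        actions.add(action);
    }

    // Returns the next test script to run, or null if no row matches or
    // the matching row's action is undefined.
    String decide(boolean[] verificationResults) {
        for (int i = 0; i < conditionRows.size(); i++) {
            if (Arrays.equals(conditionRows.get(i), verificationResults)) {
                String action = actions.get(i);
                return action.equals("undefined") ? null : action;
            }
        }
        return null;
    }
}
```

Because the table is data, a non-programmer can change which action follows which combination without touching any script code.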
Second, the tester creates an empty test script, inserts the code template in the test script (illustrated in Figure 9), and specifies the name of the decision data pool. The tester uses Functional Tester to insert the verification points that capture the information needed by the decision table.
Figure 9: Code template for a decision script
Modeling test suites with IBM Rational Software Modeler
The test suite is designed with a UML activity diagram, in which each action corresponds to a test action (test script) and each decision corresponds to a decision script, as shown in Figure 10. The conditions specified at a decision point are used to generate the corresponding decision table. Activity diagrams are easy to use and to understand by non-developers.
Figure 10: The test suite implementation is generated from the UML specification.
An object node with the stereotype "datastore" can be linked to a test action in order to specify that a data pool is required for this test action. It is also possible to specify the structure of the data pool with a class: each column of the data pool corresponds to an attribute of the class. A UML parser generates all the data pools required to run the test suite with Functional Tester, including the driver table, the decision tables, and the structure of the data-driven table and test script data pools. Both the activity diagram and the class diagram can be organized under a collaboration element, as illustrated in Figure 11. Traceability links can be created between the test suite definition and the use-case model, as illustrated in Figure 12. A more sophisticated approach could be developed with the transformation facilities provided by IBM Rational Software Modeler.
Figure 11: The test suite definition is encapsulated under a collaboration.
Figure 12: Traceability links between the test suite and other model elements
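To illustrate the attribute-to-column mapping, the following sketch uses Java reflection on an ordinary class as a stand-in for walking the UML model; the Customer class and all other names are invented for illustration.

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Illustration of the attribute-to-column mapping: each declared field of
// the model class becomes one column of the generated data pool. Reflection
// on a plain Java class stands in here for the UML parser's traversal of
// the class diagram.
class DataPoolColumnSketch {
    static class Customer {
        String name;
        String accountType;
        double creditLimit;
    }

    static List<String> columnsFor(Class<?> modelClass) {
        List<String> columns = new ArrayList<>();
        for (Field field : modelClass.getDeclaredFields()) {
            columns.add(field.getName());
        }
        return columns;
    }
}
```

The real parser emits the resulting column structure as XML so that the tables can be imported into the Functional Tester project.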
I developed a UML parser that generates the test suite driver table, the decision tables, and the data pools (i.e., the structure for the data-driven table and the test script input data tables) in an XML format. An Eclipse plug-in with a contextual menu is used to generate the test suite tables when selecting the collaboration, as shown in Figure 13.
Figure 13: Class library to generate the test suite tables from the UML definition
All the possible alternatives are automatically generated in the decision table data pool, as shown in Figure 14. Only the test actions specified in the activity diagram are generated. New test actions, which are not specified in the activity diagram, can be added to the decision table before it is imported into the Functional Tester test project.
Figure 14: All the possible combinations are generated in the decision table.
I believe this decision-table-based testing technique greatly improves the tester's ability to manage the decisions that must be made during automated testing. Using IBM Rational Functional Tester and IBM Rational Software Modeler, this technique facilitates building non-regression test suites that run a collection of reusable test scripts.
As I noted in the introduction, this technique has not yet been used in a real-life project, but the early implementation, using the Java class library built for this purpose, suggests that the approach is viable.
Further work is in progress to extend the test modeling approach introduced here. The model transformation services provided by IBM Rational Software Architect will be used to aid the design of test automation.
I welcome your feedback on this approach; I can be reached at email@example.com.
1 The Babylonians, for example, used clay tablets to aid in calculation. See J.J. O'Connor and E.F. Robertson's 2000 piece, "An overview of Babylonian mathematics," at http://www-history.mcs.st-andrews.ac.uk/HistTopics/Babylonian_mathematics.html
2 See Marien de Wilde's PowerPoint presentation entitled "Decision Tables: A useful testing technique and more," at Software Quality NZ Inc., http://www.sqnz.org.nz/documents/Decision Table training session.ppt
3 Three papers are particularly relevant: Michael Kelly, "Using IBM Rational Functional Tester 6.1 to run your first functional regression test," http://www-128.ibm.com/developerworks/rational/library/05/412_kelly/, IBM developerWorks, November 2005; Michael Kelly, "Framework automation with IBM Rational Functional Tester: Data-driven," http://www-128.ibm.com/developerworks/rational/library/05/1108_kelly/, IBM developerWorks, November 2005; and Keith Zambelich, "Totally Data-Driven Automated Testing," a white paper: http://www.geocities.com/model_based_testing/online_papers.htm
4 Cf. Dennis Schultz, "Creating a super helper class in IBM Rational Functional Tester," http://www-128.ibm.com/developerworks/rational/library/1093.html, IBM developerWorks, December 2003.
5 See James Bach, "Test Strategy: What is it? What does it look like?" (1998 presentation), www.satisfice.com; and Cem Kaner, "Improving the Maintainability of Automated Test Suites," paper presented at Quality Week 1997.
6 See Zhen Ru Dai, "Model Driven Testing with UML 2.0": http://www.cs.kent.ac.uk/projects/kmf/mdaworkshop/submissions/Dai.pdf