Why does RQM link test assets into a test plan?
First of all, this is not a novel or ground-breaking approach; it is based on standard industry practice. The test plan is one of the standard documents called out in IEEE 829, the “Standard for Software Test Documentation”. The Wikipedia entry for IEEE 829 lists the following topics to be covered in a test plan:
- How the testing will be done (including SUT (system under test) configurations)
- Who will do it
- What will be tested
- How long it will take (although this may vary, depending upon resource availability)
- What the test coverage will be, i.e. what quality level is required
As you can see, RQM has taken this standard as the basis for some of the sections in the basic test plan artifact. Combine “What will be tested” with the Rational Jazz platform’s practice of keeping information in one artifact and linking to it wherever it should be referenced, and you can see why test cases are linked into the test plan. Likewise, “What the test coverage will be” is clearly the genesis of linking requirements or requirement collections into the test plan.
For a moment, let’s talk about the need for reporting the team’s progress against a test plan. To assess testing progress, you need to see both the work that has been completed and the work that remains. You also want to see what each team member is assigned to and working on, and how the team’s progress tracks against the timeline for the testing project. As noted in the test plan content above, “How long it will take” is part of the plan. In RQM, the test schedule section includes testing intervals, which define WHEN test case execution will be performed. To track the project accurately, you want to report against the work scheduled within those testing intervals. Running test execution reports against a test plan provides an automated way of getting this information in an easily consumable format. Of course, for this information to come out, each test case that should be executed during a given test interval needs to be:
- Linked to the test plan
- Assigned to a test interval of the test plan through generated test execution records (TERs)
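As an illustrative sketch of this planning step (this is not the RQM API; every class and field name here is hypothetical), linking test cases into a plan and then generating one TER per test case for a given interval might look like this:

```python
from dataclasses import dataclass, field


@dataclass
class TestCase:
    name: str


@dataclass
class TestExecutionRecord:
    """Hypothetical TER: maps a test case to an environment and interval."""
    test_case: TestCase
    interval: str
    environment: str
    owner: str


@dataclass
class TestPlan:
    name: str
    intervals: list
    test_cases: list = field(default_factory=list)
    ters: list = field(default_factory=list)

    def link(self, case: TestCase) -> None:
        # Step 1: link the test case into the plan.
        self.test_cases.append(case)

    def generate_ters(self, interval: str, environment: str, owner: str) -> None:
        # Step 2: generate one TER per linked test case, scheduling its
        # execution in the given interval on the given environment.
        for case in self.test_cases:
            self.ters.append(
                TestExecutionRecord(case, interval, environment, owner)
            )


plan = TestPlan("Release 2.0 Test Plan", intervals=["Sprint 1", "Sprint 2"])
plan.link(TestCase("Login succeeds"))
plan.link(TestCase("Password reset"))
plan.generate_ters("Sprint 1", "Windows / Firefox", "alice")
print(len(plan.ters))  # 2 TERs, one per linked test case
```

Until both steps have happened, an execution report against the plan has nothing scheduled to measure against.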
I have found that many of our customers neither understand nor see the value of the test execution record as an integral component of scheduling the testing work. It is often overlooked as part of the test planning function, which is typically performed by test leads or managers rather than by the individual writing the test case or test script. Because it is attached to the test case, it is often viewed as part of test creation, focused on identifying the test environment, rather than as a planning construct. For those who need a refresher: the test execution record maps a test case to a test environment and to a test interval in a test plan. It also identifies the “owner” of the test case execution and which test script is to be run. In effect, the TER is a plan item for executing that test case as part of the test plan. Once TERs have been created against the test intervals of a test plan, the reports provide very clear and detailed testing project status against that plan.
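To see why TERs make progress reporting straightforward, here is a hedged sketch (hypothetical data and status names, not RQM’s reporting engine) that tallies execution status per test interval, which is the essence of running an execution report against a test plan:

```python
from collections import Counter

# Hypothetical TERs for one plan: (test case, interval, status).
# "unattempted" means a TER exists but no result has been recorded yet.
ters = [
    ("TC-1", "Sprint 1", "passed"),
    ("TC-2", "Sprint 1", "failed"),
    ("TC-3", "Sprint 1", "unattempted"),
    ("TC-4", "Sprint 2", "unattempted"),
]

# Group by interval, then count statuses within each interval.
report = {}
for case, interval, status in ters:
    report.setdefault(interval, Counter())[status] += 1

for interval, counts in sorted(report.items()):
    total = sum(counts.values())
    executed = total - counts["unattempted"]
    print(f"{interval}: {executed}/{total} executed, {dict(counts)}")
```

Because every planned execution exists as a TER up front, "not yet run" work is visible in the report rather than silently missing, which is exactly what counting raw execution results cannot give you.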
In summary, the test plan is the central and most important test asset in RQM for test reporting. Without an established and elaborated test plan, there can be no reasonable tracking of project progress against a plan. Simply counting test case executions without test plans is unreliable, because many projects (and releases) are expected to coexist in the same project area so that asset re-use is possible and encouraged. For example, if release N’s functional tests become a subset of release N+1’s regression tests, you definitely want to keep multiple testing projects in the same RQM project area. Separating out test execution results therefore requires associating each test execution (test execution result) with a test plan at the time of execution. This is done as a planning step by generating TERs for all test cases in the test plan; reports are then run against the test plan.