Validating Requirement Coverage by Test Execution
In the previous sections, TestCases have been generated, inspected, analyzed, and visually validated. Since ATG generates TestCases by recording individual runs of symbolically executing and stimulating the model, generated TestCases may fail when executed with TestConductor. There are two main reasons for generated TestCases failing in test execution:
- Abstractions: ATG applies abstractions to basic operating system routines and standard library constructs, such as streams, files, real-time properties, etc. TestCases generated using these abstractions may show a different behavior on the real model execution. For a list of applied abstractions, see ATG User Guide: Limitations.
- Concurrency:
  - Active objects non-determinism: If the model contains concurrency, i.e. multithreading using active objects, a recorded Scenario need not be deterministic with respect to the ordering of observations for independent instances in the model.
  - Timing interleavings: ATG records passing time either in general or in terms of scheduling timeouts, timeout events, and canceled timeouts of the individual instances in the model (cf. ATG User Guide: Test Case Generation Export Options). The recorded time information is represented by time-intervals on either the TestContext instance line or on the individual instance in the TestScenario defining the TestCase. If there is no causal or, in particular, temporal relation among observations in the model, the recorded time behavior may be too specific to the individual run to generalize into a TestCase specification. The TestCase may then fail because the relation of messages is over-specified in the TestScenario, i.e. the specification is too strong, and the model has more freedom to behave differently from the specified timing when executing it with TestConductor1.
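The timing over-specification problem can be illustrated with a small sketch. This is plain Python, not TestConductor code, and all names and values are hypothetical: a recorded run happens to place a message at the end of a 20000 ms period, but the model is free to send it at any time within that period, so a generalized timing constraint derived from the single run is too strong.

```python
# Hypothetical illustration of timing over-specification (not TestConductor code).
# A recorded run pins a message to one concrete time; a TestCase generalized
# from it may demand a time-interval that the model need not respect.

def check_interval_spec(observed_at_ms, interval_min_ms):
    """Over-specified oracle: the message must not arrive before the
    recorded time-interval (e.g. >= 20000 ms) has completed."""
    return observed_at_ms >= interval_min_ms

# The recorded run happened to send the message after the full period ...
assert check_interval_spec(observed_at_ms=20000, interval_min_ms=20000)  # passed

# ... but the model may legally send it at any point during the period,
# so an earlier, equally valid run violates the over-strong specification.
assert not check_interval_spec(observed_at_ms=12345, interval_min_ms=20000)  # failed

# Weakening the specification (deleting the time-interval) accepts both runs.
def check_weakened_spec(observed_at_ms):
    return True  # no timing constraint on this message anymore

assert check_weakened_spec(observed_at_ms=12345)
```

This is exactly the kind of over-specification that is resolved later in this section by deleting the time-interval from the TestCase specification.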
In order to validate the generated TestCases, it is thus recommended to execute them with TestConductor. Validation of ATG TestCases concerns on the one hand the general behavior and on the other hand the postulated coverage of the chosen coverage criteria:
- Overall behavior: After executing a TestCase with TestConductor, the TestCase is assigned a verdict, i.e. a passed or failed result. A TestCase passes if the execution adheres to the specified ordering of messages, the specified operation and event arguments, and the specified timing. If the actual run differs from the TestCase specification in any of these regards, the TestCase is assigned a failed verdict. In addition to the verdict, an HTML report is generated for the TestCase and can be found below the TestCase in the Rhapsody browser.
  When executing an entire TestContext or TestPackage, verdicts and reports are added to each individual TestCase, and a cumulative verdict and report is added to the TestContext and the TestPackage, respectively.
- Model coverage: For test execution, model coverage measurement can be enabled (tag ComputeModelCoverage on the <<TestingConfiguration>> of the TestArchitecture).
  When executing a TestCase with model coverage measurement, a model coverage report is generated for the individual TestCase.
  When executing an entire TestContext or TestPackage, model coverage reports are added to each individual TestCase, and a cumulative model coverage report is added to the TestContext and the TestPackage, respectively.
- Requirement coverage: Based upon model coverage measurement, requirement coverage can also be measured for TestCase execution with TestConductor (tag ComputeRequirementCoverage on the <<TestingConfiguration>> of the TestArchitecture). Requirement coverage can be measured for individual TestCases as well as for an entire TestContext; currently, no Requirement Coverage Results are generated for TestPackages.
  A Requirement Coverage Result is a stereotyped comment with <<fully>> and <<partially>> relations to requirements according to the definition of requirement coverage.
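The fully/partially distinction can be sketched as a set comparison. The following is a minimal Python sketch under the assumption that a requirement counts as fully covered when all model elements traced to it were covered, and as partially covered when only some were; the requirement names and element sets are hypothetical, not TestConductor's implementation.

```python
# Hypothetical sketch: classify requirement coverage from covered model elements.
# Assumption: each requirement is traced to a set of model elements, and a
# TestCase execution yields the set of elements it actually covered.

def classify_coverage(traced_elements, covered_elements):
    """Return 'fully', 'partially', or 'uncovered' for one requirement."""
    hit = traced_elements & covered_elements
    if hit == traced_elements and traced_elements:
        return "fully"
    if hit:
        return "partially"
    return "uncovered"

# Illustrative data only.
requirements = {
    "REQ_ArmSystem": {"SecSysController.evArm", "SecSysController.Armed"},
    "REQ_TimeLimit": {"SecSysController.t_Update", "SecSysController.evTimeout"},
}
covered = {"SecSysController.evArm", "SecSysController.Armed",
           "SecSysController.t_Update"}

for req, elements in sorted(requirements.items()):
    print(req, "->", classify_coverage(elements, covered))
# REQ_ArmSystem -> fully
# REQ_TimeLimit -> partially
```

In the model, these classifications appear as <<fully>> and <<partially>> dependencies from the Requirement Coverage Result comment to the requirements.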
Executing TestContext TCon_SecSysController
Open the features dialog of <<TestingConfiguration>> TPkg_SecSysController_Pkg::TPkg_SecSysController_Comp::DefaultConfig and activate the 'Tags' tab:
- Tag ComputeModelCoverage is already checked; this is the default for SysML/Harmony models. Otherwise, check the tag.
- Check tag ComputeRequirementCoverage to enable requirement coverage measurement.
Validating the TestCases by TestCase execution can be started either by executing TestCase after TestCase or by executing the entire TestContext in one sweep. In this tutorial, we decided to do the latter:
TestConductor recognizes that the TestContext has to be updated, i.e. driver operations and stubs have to be generated from the TestCase specifications, and some instrumentation has to be applied to the TestActors. For execution, the TestArchitecture has to be compiled and built:
Confirming the dialog lets TestConductor automatically perform the necessary steps and, after successful compilation, execute the TestContext. Execution will take some time; the actual progress can be seen in the execution window, which opens automatically for TestCase execution. After all TestCases of the TestContext have been executed, the execution window shows that one TestCase failed:
Right-clicking the SDInstance in the execution result of ATG_TestCase_13 and invoking Show as SD from the context menu in the execution window opens a witness scenario illustrating the reason for the failed result (scroll down the witness diagram until the red message is shown):
The witness shows that a message was observed unexpectedly before the time-interval >= 20000ms had completed (green messages were correctly and completely observed, red messages violate the test specification, and blue messages were not or not completely observed in execution). This is a case of over-specification according to the reasons for failing ATG TestCases explained above: ATG observed a run in which the TestContext invoked SA_set_TimeLimitFlag_in_itsSecSysController at some point during a time period of 20000ms; this corresponds to SecSysController::t_Update, cf. the lower part of the top-level and-state in the statechart of SecSysController:
which enables the transition with the send action. Since the invocation of SA_set_TimeLimitFlag_in_itsSecSysController and the time-interval itself are not temporally or causally related, the TestCase specification is too strong in expecting completion of the time-interval.
To weaken the TestCase specification accordingly, the time-interval is deleted from the TestCase specification.
After weakening the TestCase specification of TestCase ATG_TestCase_14, re-execution of TestContext TCon_SecSysController yields 'passed' results for each individual TestCase as well as for the entire TestContext:
Validating ATG covered model elements using the TestConductor Model Coverage Report and the TestConductor Requirement Coverage Result
If model coverage measurement is enabled, TestConductor will generate a Model Coverage Report for each executed TestCase and a cumulative report for the entire TestContext.
If, in addition to model coverage measurement, requirement coverage measurement is also enabled, TestConductor will generate a Requirement Coverage Result for each executed TestCase and a cumulative result for the entire TestContext.
The model coverage report is generated mainly for purposes other than comparison with the ATG model coverage result provided in the tag covered_modelelements. Model coverage information is also available in the tag covered_modelelements of the Requirement Coverage Result of an individual TestCase and in the Requirement Coverage Result for the entire TestContext, respectively.
To suit in particular the needs of validating the ATG model coverage, and thus the requirement coverage, against the requirement coverage measured during TestConductor TestCase execution, this tag contains the model element coverage in the same syntax and notation as the tag of an individual ATG scenario.
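Since both tags use the same syntax and notation, comparing them amounts to a set difference over their entries. A minimal Python sketch, under the assumption that each tag value lists one model element per line (the real tag format in Rhapsody may differ, and the element names here are illustrative only):

```python
# Minimal sketch: compare two covered_modelelements tag values.
# Assumption: each tag value is a multi-line string with one model element
# per line (the actual tag format in Rhapsody may differ).

def parse_tag(value):
    """Split a tag value into a set of non-empty, trimmed element names."""
    return {line.strip() for line in value.splitlines() if line.strip()}

def compare_coverage(tc_tag, atg_tag):
    """Set comparison of measured (TestConductor) vs. generated (ATG) coverage."""
    tc, atg = parse_tag(tc_tag), parse_tag(atg_tag)
    return {
        "common": tc & atg,
        "only_in_testconductor": tc - atg,
        "only_in_atg": atg - tc,
    }

# Hypothetical tag contents for illustration only.
measured = "SecSysController.Armed\nSecSysController.evArm\n"
generated = "SecSysController.Armed\nSecSysController.t_Update\n"

diff = compare_coverage(measured, generated)
print(sorted(diff["only_in_atg"]))   # elements ATG claims but execution missed
print(sorted(diff["common"]))        # elements both agree on
```

An empty "only_in_atg" set would indicate that the measured coverage validates the coverage postulated by ATG.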
Exercises
- Compare the contents of tag covered_modelelements of the Requirement Coverage Result of ATG_TestCase_13 to the contents of tag covered_modelelements of ATG Scenario ATG_TestCase.13.
  Hints:
  - Follow the TestObjective to ATG_TestCase.13 for navigation.
  - Use the '...' button to the right of the 'Value' field in the features dialog of the tag.
- Compare the <<fully>> and <<partially>> dependencies of the Requirement Coverage Result of ATG_TestCase_13 to the <<fully>> and <<partially>> dependencies of ATG Scenario ATG_TestCase.13.
  Hints:
  - Follow the TestObjective to ATG_TestCase.13 for navigation.
  - Uncheck View->Browser Display Options->Enable Ordering.
- The TestContext TCon_SecSysController also has a Requirement Coverage Result. Compare the <<fully>> and <<partially>> dependencies of that result to the <<fully>> and <<partially>> dependencies of the <<ATGRequirementCoverageSummary>> generated in section Calculating Requirement Coverage Summary.
1 TestConductor cannot deal with TestCase specifications with overlapping time-intervals. Since ATG generates TestCases based on a heuristic approach, it may occur that time-intervals overlap in the generated TestCase specification. In order to execute such TestCases with TestConductor, the user has to weaken the specification by resolving the overlap, e.g. by deleting one of the respective time-intervals.
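The overlap condition mentioned in this footnote can be sketched as a simple pairwise interval check. This is a plain Python illustration of the condition, not ATG's heuristic; the interval values are made up:

```python
# Sketch: detect overlapping time-intervals in a TestCase specification.
# Intervals are (start_ms, end_ms) pairs; values are illustrative only.

def overlaps(a, b):
    """Two closed intervals overlap iff each one starts before the other ends."""
    return a[0] <= b[1] and b[0] <= a[1]

def find_overlaps(intervals):
    """Return index pairs of overlapping intervals; resolving each overlap
    (e.g. deleting one interval of the pair) makes the spec executable."""
    return [(i, j)
            for i in range(len(intervals))
            for j in range(i + 1, len(intervals))
            if overlaps(intervals[i], intervals[j])]

spec = [(0, 20000), (15000, 30000), (40000, 50000)]
print(find_overlaps(spec))  # -> [(0, 1)]
```

Here the first two intervals overlap between 15000 ms and 20000 ms, so one of them would have to be deleted before the TestCase can be executed with TestConductor.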