Top 10 modeling hints for system engineers: #1 The only effective way to ensure quality is with continuous verification

Bruce Douglass explains the importance of continuous verification. Not only does it help you avoid defects, it also improves the quality and vastly reduces the rework in your projects.


Bruce Douglass (Bruce.Douglass@us.ibm.com), Rational Chief Evangelist, Systems Engineering, IBM

Embedded Software Methodologist. Triathlete. Systems engineer. Contributor to UML and SysML specifications. Writer. Black Belt. Neuroscientist. Classical guitarist. High school dropout. Bruce Powel Douglass, who has a doctorate in neurocybernetics from the USD Medical School, has over 35 years of experience developing safety-critical real-time applications in a variety of hard real-time environments. He is the author of more than 5,700 book pages from a number of technical books including Real-Time UML, Real-Time UML Workshop for Embedded Systems, Real-Time Design Patterns, Doing Hard Time, Real-Time Agility, and Design Patterns for Embedded Systems in C. He is the Chief Evangelist at IBM Rational, where he is a thought leader in the systems space. Bruce consults with and mentors IBM customers all over the world. Follow Bruce on Twitter @BruceDouglass.



17 December 2013


I consider a requirements set good if and only if the following condition is met:

The requirements set correctly represents the input-to-output control and data transformations of the system, and their performance properties, for all conditions, environments, and operational uses of the system, without specifying internal design decisions.

Systems engineers hand off specifications to downstream engineering, which designs and implements correct realizations of the requirements. If the requirements meet that condition of goodness before they are handed off, you avoid defects rather than having to fix them later (hint #2).

How to meet the condition of goodness

Hygienic design, explained in hint #2, is the key to meeting the condition of goodness. You must use principles, practices, tools, and methods that avoid disease (defects). Evidence suggests that the single best way to do that is to apply hygienic (that is, verification) techniques during the creation of the work products, continuously, or at least approximately so.

To understand what this means, verification needs to be defined clearly. There are two basic categories of verification: syntactic and semantic. Syntactic verification means that the work product is well-formed and correct in form. For example, your project may have a requirements standard that says:

  • All requirements are uniquely numbered.
  • Each functional requirement uses the word shall to indicate the normative part of the statement.
  • All functional requirements are modified by at least one quality-of-service requirement.
  • Etc.

Syntactic correctness is not the same as saying correct things; it means only that the statements comply with a set of well-formedness rules. Verification of syntactic correctness is usually performed by members of the quality assurance team and may be done either manually or automatically.
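Checks like these lend themselves to automation. The sketch below assumes a hypothetical requirement format of the form "REQ-&lt;number&gt;: text" (not from the article) and flags duplicate identifiers and statements that lack the normative word shall:

```python
import re

# Hypothetical well-formedness rule: each requirement reads "REQ-<number>: <text>".
REQ_PATTERN = re.compile(r"^REQ-(\d+):\s*(.+)$")

def check_syntax(lines):
    """Return a list of rule violations (syntactic verification only)."""
    violations = []
    seen_ids = set()
    for line in lines:
        m = REQ_PATTERN.match(line)
        if not m:
            violations.append(f"not well-formed: {line!r}")
            continue
        req_id, text = m.groups()
        if req_id in seen_ids:
            violations.append(f"duplicate id: REQ-{req_id}")
        seen_ids.add(req_id)
        if "shall" not in text:
            violations.append(f"REQ-{req_id}: missing normative 'shall'")
    return violations

reqs = [
    "REQ-1: The system shall log all access attempts.",
    "REQ-1: The system shall encrypt stored data.",   # duplicate id
    "REQ-2: The system responds quickly.",            # no 'shall'
]
print(check_syntax(reqs))
```

Note that a check like this says nothing about whether the requirements are true, consistent, or complete; it verifies form only.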

Semantic verification means that the statements are correct, consistent, and accurate in content. Assuring semantic correctness is more commonly what is meant by the unqualified term "verification." Subject matter experts, mathematicians, or testers perform semantic verification.

There are three primary methods for semantic verification of system work products:

  • Review
  • Formal (mathematical) analysis
  • Testing

Semantic review

Semantic review is the weakest form of semantic verification. The work product is reviewed by one or more peers or subject matter experts looking for defects and for both internal and external inconsistencies. Human cognition is wonderfully effective in narrow, focused analyses, but it falters when the work product is large or complex. The issue becomes one of vigilance rather than intelligence: it is difficult to remain vigilant while spending tens or hundreds of hours examining the minute interconnected details of complex systems. Another problem with reviews is cost. Having multiple people in a room reviewing a work product means a high engineering overhead for review time. It is not uncommon to have a dozen or more people involved in a review, which means you are paying a dozen times the cost per unit time that you would spend on other means of verification.

Even in the best of times, semantic reviews are weaker than other types of verification. They do add value as a supplement. The issue I see in many companies is that this is the only verification means applied to most system engineering work products.


Formal analysis

Formal analysis has the advantages of eliminating personal opinion and providing rigorous analysis of system aspects such as safety, reliability, and robustness simultaneously. Formal methods can be applied at a variety of levels, from determining that all states of a system are reachable in finite time to formal theorem proving (e.g., "An elevator will arrive in response to a request within 1 minute."). One of the strengths of formal methods is that they can verify conditions regardless of execution paths; they can verify outcomes for all possible circumstances, something that cannot, even in principle, be done with testing. The primary difficulty with formal methods is that the very same degree of mathematical rigor that gives the approach its value also makes it unusable by the vast majority of engineers. Further, as with all analyses, the outcome is only valid in the context of its axiomatic preconditions and class invariants (assumptions). Unless you have someone on staff with a PhD in formal methods, you should be resigned to using the lighter-weight techniques of formal methods.

Having said that, there is tool automation that supports some of the application of formal methods. The Automatic Test Generator (ATG) tool for IBM® Rational® Rhapsody®, for example, can produce a set of test vectors that visit all possible states of a state machine or identify that some states are not reachable.
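The kind of reachability check such tools perform can be illustrated with a small sketch. The transition table below is an assumed toy example, not ATG output; a breadth-first search over it finds every state reachable from the initial state and, by complement, any state that can never be entered:

```python
from collections import deque

# A toy state machine as a transition table (assumed example):
# state -> {event: next_state}
transitions = {
    "Idle":    {"start": "Running"},
    "Running": {"pause": "Paused", "stop": "Idle"},
    "Paused":  {"resume": "Running"},
    "Error":   {"reset": "Idle"},   # no transition leads here: unreachable
}

def reachable_states(initial):
    """Breadth-first search over the transition graph."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for nxt in transitions.get(state, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

unreachable = set(transitions) - reachable_states("Idle")
print(sorted(unreachable))   # the 'Error' state is never entered
```

An unreachable state in a specification usually signals a missing transition or a requirement that can never be exercised.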

It is impractical to expect formal methods to provide the bulk of semantic verification. As with reviews, it remains a useful addition to other verification means.


Testing

Testing remains the most practical means for the semantic verification of work products such as specifications, designs, implementations, and resulting systems. Testing is done by defining a set of test cases. Each test case is composed of a set of events and specific data values, applied in a specific order and timing, with a known-to-be-correct outcome.

An issue with testing, in principle, is that it verifies only the specific paths and data values included in the test cases; other sequences of events and other data values are not verified. Because intelligent systems can achieve an essentially infinite set of states with an essentially infinite set of data values, you cannot exhaustively verify a system through testing. More tests can be added to improve coverage, but no finite test set can cover all possibilities.
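A test case of this kind, an ordered sequence of events with specific data values and a known-to-be-correct expected outcome, can be sketched as follows. The simple counter is an assumed system under test, not from the article:

```python
# A test case as an ordered sequence of (event, data) pairs with a
# known-to-be-correct expected outcome. Counter is an assumed system under test.

class Counter:
    def __init__(self):
        self.value = 0

    def handle(self, event, data=1):
        if event == "increment":
            self.value += data
        elif event == "reset":
            self.value = 0

def run_test_case(steps, expected):
    """Apply events in order; pass iff the final state matches the expectation."""
    sut = Counter()
    for event, data in steps:
        sut.handle(event, data)
    return sut.value == expected

# One specific path with specific data; other sequences remain unverified.
test_case = [("increment", 5), ("increment", 3), ("reset", 0), ("increment", 2)]
print(run_test_case(test_case, expected=2))   # True
```

The limitation discussed above is visible here: this test case says nothing about any other ordering of the same events, which is exactly why coverage criteria matter.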

A number of standards call out minimal levels of software test coverage. These include:

  • Structure coverage: All lines of code are exercised by the test set.
  • Decision coverage: Every Boolean decision is exercised with both its true and false outcomes.
  • Modified condition/decision coverage (MC/DC): Every condition within a decision independently assumes both its true and false values and is shown to independently affect the decision's outcome.
  • Data equivalency set coverage: Data are partitioned into equivalency sets such that the behavior of the system is qualitatively the same for all values within a set; the test set must include data from every equivalency set.
 

For semantic verification, the best approach is a combination of all three methods. You can rely primarily on testing, but make sure you augment testing with reviews and formal analyses.
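The data equivalency set coverage criterion listed above can be illustrated with a short sketch. The classify() function and its partitioning rule are assumed examples, not from the article:

```python
# Data equivalency set coverage: partition the input domain into classes where
# behavior is qualitatively the same, then test at least one value per class.
# The classify() rule is an assumed example.

def classify(temperature_c):
    if temperature_c < 0:
        return "freezing"
    elif temperature_c <= 30:
        return "normal"
    else:
        return "overheated"

# One representative value from each equivalence class.
equivalence_classes = {
    "freezing":   [-10],
    "normal":     [15],
    "overheated": [45],
}

for expected, samples in equivalence_classes.items():
    for value in samples:
        assert classify(value) == expected, (value, expected)
print("all equivalence classes covered")
```

In practice you would also test the boundary values between classes (here 0 and 30), since off-by-one defects cluster at partition edges.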


Continuous verification

The hygienic approach proposed in this article is to apply verification techniques continuously as the work product is developed. Figure 1 shows the workflow for developing requirements models and the places where verification is performed. Notice that the inner loop (from Define the Use Case System Context down to Verify and Validate the Functional Requirements and back) is a nanocycle that runs every 20-60 minutes: you take some small set of requirements, realize them in the model, execute and verify them, and repeat. This continues until all of the requirements bound to the use case are expressed and verified in the model.

Figure 1. Test-driven development-high-fidelity modeling workflow for requirements analysis

As you execute this requirements analysis nanocycle, you apply test verification at the last step of this very short loop (Verify and Validate the Functional Requirements). You may also use formal methods to develop test cases that ensure you have covered the state space expressed in the use case model; you might even define a theorem about the system invariants and apply mathematical analysis to verify it. During some of these cycles, you also validate the model by demonstrating the model execution, along with any created or modified requirements (see the Update and Maintain Requirements task), to the customer to ensure that your understanding and expression of the requirements result in a system that meets their needs. At the end of this (relatively small) work effort of defining the use case, there is a review step (see the Perform Review task) in which the stable and verified use case specification and associated textual requirements are verified using both semantic and syntactic review processes.

Because these verification methods are applied in place during the development activities, you can, for the most part, avoid defects that would otherwise be uncovered only during a late rigorous examination. This improves quality and vastly reduces the rework typical of a systems engineering environment in which verification comes late in the process.


Summary

There you have it: my top ten hints for model-based systems engineering. This is not an exhaustive guide to performing systems engineering, but hopefully it provides some help. As an industry, we certainly can use it.

