In my career I’ve been deeply involved with both modeling and requirements management disciplines and tools, so it always intrigues me when I hear debates over whether largely text-based (sometimes referred to as ‘traditional’ or ‘document-based’) or model-based approaches to defining and managing requirements are the right way to go.
We’ve all heard the argument that a picture paints a thousand words, but I still vividly remember something I heard at a conference some years ago: “I’d have taken 1000 words over this one unreadable diagram.”
My belief is that it is not an either-or decision. You need both. Models can add clarity to requirements specifications and can bring together a more holistic understanding of what’s expressed in the requirements. Models can be walked through with stakeholders and, with the right language and tools (like SysML or UML in IBM Rational Rhapsody), they can even be run to validate that what is captured in the model is correct, consistent and complete. But what if you have contractual requirements to manage, regulations or standards to comply with, or complex performance or availability constraints? You don’t want to clutter your model with so much detail that it becomes unusable.
My preference is for a combination of textual requirements and models, described by the ‘Systems Engineering Club Sandwich’ (references 1 & 2): the textual requirements form the layers of bread (maybe a bit dry on their own) and are supplemented by models that form the layers of filling, richer and more expressive. Together they form a tasty combination to help explore and elaborate requirements, perform decomposition and allocation, and maintain traceability. I recently got together with my colleague Paul Urban to record a 30-minute webcast entitled ‘The Tasty Way to Tackle Complexity - The Systems Engineering Club Sandwich of Requirements & Models’, where we take a look at some engineering challenges, where requirements work goes wrong, how the club sandwich approach works, and how to use requirements and models together effectively. So if this hors d'oeuvre has made you hungry for more, please take a look. Paul and I are really interested to hear what you think.
1. Jeremy Dick and Jonathon Chard, "The Systems Engineering Sandwich: Combining Requirements, Models and Design", INCOSE International Symposium, Toulouse, July 2004.
2. Hull, Jackson & Dick, Requirements Engineering, Springer, 2004.
We have with us today Bruce Powel Douglass. He doesn't need an intro for most of us -- Embedded Software Methodologist. Triathlete. Systems engineer. Contributor to UML and SysML specifications. Writer. Black Belt. Neuroscientist. Classical guitarist. High school dropout. Bruce Powel Douglass, who has a doctorate in neurocybernetics from the USD Medical School, has over 35 years of experience developing safety-critical real-time applications in a variety of hard real-time environments. He is the author of over 5,700 pages across a number of technical books, including Real-Time UML, Real-Time UML Workshop for Embedded Systems, Real-Time Design Patterns, Doing Hard Time, Real-Time Agility, and Design Patterns for Embedded Systems in C. He is the Chief Evangelist at IBM Rational, where he is a thought leader in the systems space and consults with and mentors IBM customers all over the world. He can be followed on Twitter @BruceDouglass. Papers and presentations are available at his Real-Time UML Yahoo technical group and from his IBM thought leader page.
The problems with poor requirements are legion and I don’t want to get into that in this limited space (see Managing Your Requirements 101 – A Refresher Part 1: What is requirements management and why is it important?). What I want to talk about here is verification and validation of the requirements qua requirements, rather than at the end of the project when you’re supposed to be done.
The usual thing is that requirements are reviewed by a bunch of people locked in a room until they are ready to either 1) gnaw off their own arm or 2) approve the requirements. Then the requirements are passed off to a development team – which may consist of many engineering disciplines and lots of engineers – for design and implementation. In parallel, a testing group writes the verification and validation (V&V) plan (including the test cases, test procedures and test fixtures) to ensure that the system conforms to the requirements and that the system meets the need. After implementation, significant problems tracing back to poor requirements require portions of the design and implementation to be thrown away and redone, resulting in projects that are late and over budget. Did I get that about right?
The key problem with this workflow is that the design and implementation are started and perhaps even finished without any real assurance about the quality of the requirements. The actions that determine that the requirements are right are deferred until implementation is complete. That means that if the requirements are not right, the implementation (and corresponding design) must be thrown away and redone. Internal to the development effort, unit/developer and integration testing verify the system is being built properly and meets the requirements. Then at the end, the system verification testing provides a final check to make sure that the requirements are correctly addressed by the implementation.
During this development effort, problems with requirements do emerge – such as requirements that are incomplete, inconsistent, or incorrect. When such problems are identified, this kicks off a change request effort and an update to the requirements specification (at least in any reasonable process), resulting in the modification of the system design and implementation. But wouldn’t it be better not to have these defects in the first place? And even more important, wouldn’t it be useful to know that implementing the requirements will truly result in a system that actually meets the customer’s needs?
There are two concerns I want to address here: ensuring that the requirements are “good” (complete, consistent, accurate, and correct) and that they reflect the customer’s needs. And I want to do this before design and implementation are underway.
It isn’t obvious
Imagine you’re building a house for your family. You contract an architect who comes back to you after 3 months with a 657-page specification with statements like:
- … indented by 7 meters from the west border of the premises, there shall be the left corner of the house
- … The entrance door shall be indented by another 3.57 meters
- … 2.30 meters wide and 2.20 meters high, there shall be a left-hand hinge, opening to the inside
- …As you come in, there shall be two light switches and a socket on your right, at a height of 1.30 meters
My question to you is simple: is this the house you want to live in? How would you know? There might be 6500 requirements describing the house, but it would be almost impossible for any human to understand whether this is the house you want. For example:
- Is the house energy efficient?
- Does the floor plan work for the way your family lives, or must you go through the bathroom to get to the kitchen?
- Is it structurally sound?
- Does it let in light from the southern exposure?
- Is there good visibility to the pond behind the house?
- Does it look nice?
What (real) architects do is they build models of the system that support the reasoning necessary to answer these questions. They don’t rely simply on hundreds or thousands of detailed textual statements about the house. Most systems that I’m involved with developing are considerably more complex than a house and have requirements that are both more technical and abstract.
Nevertheless, I still have the same basic need to be able to understand how the requirements fit together and reason about the emergent properties of the system. The problem of demonstrating that the implementation meets the stated requirements (“building the system right”) is called verification. The problem of showing that the solution meets the needs of the customer is called validation. Verification, in the presence of requirements defects, is an expensive proposition, largely due to the rework it entails. Implementation defects are generally easy and inexpensive to repair but the scope of the rework for requirements defects is usually far greater. Validation is potentially an even more expensive concern because not meeting the customer need is usually not discovered until the system is in their hands. Requirements defects are usually hundreds of times more expensive than implementation defects because the problems are introduced early, identified late in the project, and require you to throw away existing work, redesign and reimplement the solution, then integrate it into the system without breaking anything else.
A proposal: Verifiable and Validatable Requirements
The agile adage of “never be more than minutes away from demonstrating that the work you’re doing is right” applies to all work products, not just software source code. It’s easy to understand how you’d do that with source code (run and test it). But how do you do that with requirements?
The Core Concept
Premise: You can only verify things that run.
Conclusion: Build only things that run.
Solution: Build executable requirements models to support early requirements verification.
If we can build models of the requirements, we can verify and validate them before handing them off to the design team. The way I recommend you do that is to
- Organize requirements into use cases (user stories work too, if you swing that way)
- Use sequence diagrams to represent the required sequences of functionality for the set of requirements allocated to the use case (scenarios)
- Construct a normative (and executable) state machine that is behaviorally equivalent to that set of scenarios
- Add trace links from the requirements statements to elements of the use case model:
  - messages and behaviors in scenarios, and
  - events (for messages from actors), actions (for messages to actors and internal behaviors), and states (conditions of the system) in the state machine
- Verify requirements are consistent, complete, accurate, and correct
- Validate requirements model with the customer
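The trace-link bookkeeping implied by these steps can be sketched in plain Python; the requirement IDs, element names, and the `untraced` helper below are all hypothetical, not any tool's API:

```python
# Illustrative sketch: requirements allocated to a use case, with trace
# links into its scenarios and its normative state machine.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

scenario_traces = {
    "REQ-1": ["scn1/msg:paceAtrium"],
    "REQ-2": ["scn1/msg:senseBeat", "scn2/msg:senseBeat"],
    "REQ-3": ["scn2/msg:disableSensor"],
}
state_machine_traces = {
    "REQ-1": ["state:Pacing", "action:deliverPulse"],
    "REQ-2": ["event:evSense"],
}

def untraced(reqs, traces):
    """Requirements with no trace link into the given model view."""
    return sorted(r for r in reqs if not traces.get(r))

# Completeness check: every requirement must appear in at least one
# scenario AND in the normative state machine.
print(untraced(requirements, scenario_traces))       # REQ-4 has no scenario
print(untraced(requirements, state_machine_traces))  # REQ-3, REQ-4 missing
```

A real tool keeps these links in its database, but the completeness question it answers is exactly this one: which requirements have no representation in the model?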
When problems are identified with the requirements during this functional use case analysis, they can be easily and inexpensively fixed before there is any design or implementation to throw away and redo.
Constructing the Executable Use Case
Some people are confused by the fundamental notion of using a state machine to represent requirements, thinking that state machines are inherently a design tool. But a state machine is just a behavioral specification, and requirements are really just statements of behavior in which we are trying to characterize the required input-output control and data transformations of the system. It’s a natural fit. Consider the set of user stories for a cardiac pacemaker:
- The pacemaker may be Off (not pacing or sensing) or executing the pacing mode of operation.
- The cardiac pacemaker shall pace the Atrium in Inhibit mode; that is, when an intrinsic heart beat is detected at or before the pacing rate, the pacemaker shall not send current into the heart muscle.
- If the heart does not beat by itself fast enough, as determined by the pacing rate, the pacemaker shall send an electrical current through the heart via the leads at the voltage potential specified by the Pulse Amplitude parameter (nominally 20 mV, range [10..100] mV) for the period of time specified by the Pulse Length parameter (nominally 10 ms, range [1..20] ms).
- The sensor shall be turned off before the pacing current is released.
- The sensor shall not be re-enabled following a pace for the period of time it takes the charge to dissipate, to avoid damaging the sensor (nominally 150 ms, setting range [50..250] ms). This is known as the refractory time.
- When the pacing engine begins, it will disable the sensor and current output; the sensor shall not be enabled for the length of the refractory time.
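The article illustrates this with SysML diagrams, but to show what "executable" means here, the same normative state machine can be sketched in plain Python. The state names and the tick-driven timing below are my own simplification of the user stories, not the original model:

```python
# Minimal executable sketch of the pacing-engine state machine.
# Timings follow the stories (refractory nominally 150 ms); a 60 bpm
# pacing rate is assumed for the 1000 ms pacing interval.
OFF, REFRACTORY, WAITING = "Off", "Refractory", "Waiting"

class PacingEngine:
    def __init__(self, pacing_interval_ms=1000, refractory_ms=150):
        self.pacing_interval_ms = pacing_interval_ms
        self.refractory_ms = refractory_ms
        self.state = OFF
        self.sensor_enabled = False
        self.timer_ms = 0
        self.paces_delivered = 0

    def turn_on(self):
        # On start, sensor and output stay disabled for the refractory time.
        self.state, self.sensor_enabled, self.timer_ms = REFRACTORY, False, 0

    def sense_beat(self):
        # Inhibit mode: an intrinsic beat at or before the pacing deadline
        # restarts the cycle without pacing.
        if self.state == WAITING and self.sensor_enabled:
            self.timer_ms = 0

    def tick(self, elapsed_ms):
        if self.state == OFF:
            return
        self.timer_ms += elapsed_ms
        if self.state == REFRACTORY and self.timer_ms >= self.refractory_ms:
            # Refractory elapsed: re-enable the sensor, wait for a beat.
            self.state, self.sensor_enabled, self.timer_ms = WAITING, True, 0
        elif self.state == WAITING and self.timer_ms >= self.pacing_interval_ms:
            # No intrinsic beat in time: sensor off first, then pace.
            self.sensor_enabled = False
            self.paces_delivered += 1
            self.state, self.timer_ms = REFRACTORY, 0

eng = PacingEngine()
eng.turn_on()
eng.tick(150)   # refractory elapses; sensor re-enabled
eng.tick(1000)  # no intrinsic beat arrived: one pace is delivered
```

Because the machine runs, we can drive it with different event sequences (a sensed beat just before the deadline, a beat during refractory, and so on) and observe the outcomes, which is exactly the point of an executable requirements model.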
A scenario of use is shown in the figure below.
A state machine that describes the required behavior is shown below.
Of course, this is a simple model, but it actually runs, which means that we can verify that it is correct and we can use it to support validation with the customer as well. We can examine different sequences of incoming events with different data values and look at the outcomes to confirm that they are what we expect.
Verifying the Requirements
For the verification of consistency of requirements, we must first decide what “inconsistent” means. I believe that inconsistent requirements manifest as incompatible outcomes in the same circumstance, such as when a traffic light would be Red because of one requirement but at the same time must also be Green to meet another. Since the execution of the requirements model has demonstrable outcomes, we can run the scenarios that represent the requirements and show through demonstration that all expected outcomes occur and that no undesired consequences arise. For the verification of completeness, we can first demonstrate – via trace links – that every requirement allocated to the use case is represented in at least one scenario as well as in the normative state machine. Secondly, the precision of thought necessary to construct the model naturally raises questions during its creation. Have we considered what happens if the system is in THIS state and then THAT occurs? What happens if THAT data is out of range? How quickly must THIS action occur? Have we created all of the scenarios and considered all of the operational variants? These questions will naturally occur to you as you construct the model and will result in the addition of new requirements or the correction of existing ones.
By correctness, I mean that the requirement specifies the proper outcome for a given situation. This is usually a combination of preconditions and a series of input-output event sequences resulting in a specified post-condition. With an executable use case model, we can show via test that for each situation we have properly specified the output. We can run the same scenario with different data values to ensure that boundary values and potential singularities are properly addressed. We can change the execution order of incoming events to ensure that the specification properly handles all combinations of incoming events. Accuracy is a specific kind of correctness that has to do with quantitative outcomes rather than qualitative ones. For example, if the outcome is a control signal that maintains an output proportional to an input (within an error range), we can run test cases to ensure that the specification actually achieves that. We can both execute the transformational logic in the requirements and formally (mathematically) analyze it as well, if desired.
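As a concrete instance of boundary-value checking, the parameter ranges quoted in the pacemaker stories earlier can be exercised mechanically. The ranges below come from those requirements; the validation function itself is an illustrative sketch, not part of any specification:

```python
# Ranges taken from the pacemaker user stories above.
PARAMETER_RANGES = {
    "pulse_amplitude_mv": (10, 100),  # nominal 20 mV
    "pulse_length_ms":    (1, 20),    # nominal 10 ms
    "refractory_ms":      (50, 250),  # nominal 150 ms
}

def validate_settings(settings):
    """Return parameters whose values fall outside their specified range."""
    violations = {}
    for name, value in settings.items():
        low, high = PARAMETER_RANGES[name]
        if not (low <= value <= high):
            violations[name] = (value, (low, high))
    return violations

# Exercise the boundaries: the extremes are legal, one step past is not.
print(validate_settings({"pulse_amplitude_mv": 10, "pulse_length_ms": 20}))
print(validate_settings({"refractory_ms": 251}))
```

Running the same check at each range boundary (and just beyond it) is precisely the kind of test the executable specification makes cheap.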
Be aware that this use case model is not the implementation. Even if the system use case model is functionally correct and executes properly, it is not operating on the desired delivery platform (hardware) and has not been optimized for cost, performance, reliability, safety, security, and other kinds of quality of service constraints. In fact, it has not been designed at all. All we’ve done is clearly and unambiguously state what a correctly designed system must do. This kind of model is known as a specification model and does not model the design.
Validating the Requirements
Validation refers to confirming that the system meets the needs of the customer. Systems that meet the requirements may fail to provide value to the customer because
- the customer specified the wrong thing,
- the requirements missed some aspect of correctness, completeness, accuracy or consistency,
- the requirements were captured incorrectly or ambiguously,
- the requirements, though correctly stated, were misunderstood.
The nice thing about the executable requirements model is that you can demonstrate what you’ve specified to the customer, not as a pile of dead trees to be read over a period of weeks but instead as a representation that supports exploration, experimentation, and confirmation. You may have stated what will happen if the physician flips this switch, turns that knob, and then pushes that button, but what if the physician pushes the button first? What has been specified in that case? In a traditional requirements document, you’d have to search page by page looking for some indication of what you specified would happen. With an executable requirements specification, you can simply say “I don’t know. Let’s try it and find out.” This means that the executable specification supports early validation of the requirements, so that you can have a much higher confidence that the customer will be satisfied with the resulting product.
So does it really work?
I’ve consulted to hundreds of projects, almost all of which were in the “systems” space, such as aircraft, spacecraft, medical systems, telecommunications equipment, automobiles and the like. I’ve used this approach extensively with the Rational Rhapsody toolset for modeling and (usually) DOORS for managing the textual requirements. My personal experience is that it results in far higher quality in terms of requirements and a shorter development time with less rework and happier end customers. By way of a public example, I was involved in the development of the Eaton Hybrid Drivetrain project. We did this kind of use case functional analysis, constructing executable use cases, and it identified many key requirements problems before they were discovered by downstream engineering. The resulting requirements specification was far more complete and correct after this work was done than in previous projects, meaning that the development team spent less time overall.
Summary
Building a set of requirements is both daunting and necessary. It is necessary because without it, projects will take longer – sometimes far longer – and cost more. Requirements defects are acknowledged to be the most expensive kind of defects because they are typically discovered late (when you’re supposed to be done) and require significant work to be thrown away and redone. It is a daunting task because text, while expressive, is ambiguous and vague, and it is difficult to demonstrate its quality. However, by building executable requirements models, the quality of the requirements can be greatly improved at minimal cost and effort.
For more detail on the approach and specific techniques, see my books Real-Time UML Workshop or Real-Time Agility.
Last week I was really fortunate to spend a couple of days in London presenting to and talking with clients, business partners and industry analysts. It's always so good to hear what's really going on out there and to get many different perspectives on what's important today and for the future. The first day was at IBM's Innovate UK 2012 event, where I was fortunate to be asked to present on all the really exciting new stuff we've done in the past year to help organizations building today's and the next generation of smarter products and systems, with particular focus on providing solutions for systems engineers and embedded software developers. You can catch the absolute latest news on our recent launch webpages. That session included a whistle-stop tour of the developments in requirements management for complex systems with Rational DOORS 9.4 and our plans for DOORS Next Generation. Whistle-stop because we also had so much news to get through in architecture & design, planning, change & configuration management and quality management, as well as industry-specific solutions for A&D, automotive, medical devices and electronic design. And because on the following day at IBM's Southbank facility we had a whole day dedicated to topics related to DOORS.
At the DOORS customer day we had attendees from across several industry sectors including transportation, aerospace & defense, banking & mail services. The day kicked off with Morgan Brown presenting the latest on IBM's requirements management and DOORS strategy. Morgan told us how the DOORS 9.x series is and will continue to be developed and enhanced to meet the needs of the large install base, in parallel with the introduction of DOORS Next Generation (DOORS NG). DOORS NG is planned to take the best paradigms for managing structured requirements from DOORS 9.x and marry those with the requirements management and team collaboration capabilities that have been developed on the Jazz collaborative lifecycle management platform over the last 4 or so years (and are in use in the form of Rational Requirements Composer). The development of DOORS NG is out in the open on jazz.net
where milestone builds can be downloaded, discussions held, defects/enhancements raised and plans viewed. DOORS NG has gone through four beta releases and is expected to be released in late November. Morgan explained that in its first release, DOORS NG is not intended to replace the DOORS 9.x product line, but it is expected that existing DOORS customers will try out DOORS NG on pilot and new projects, and will use the interoperability capabilities of the ReqIF data exchange and cross-tool traceability linking to exchange and/or link data between DOORS 9.x and DOORS NG. DOORS NG will also appeal to those looking for a requirements management tool that is on an integrated platform with design, test management and task/change management capabilities. Morgan reminded the audience of an IBM statement released earlier this year that existing DOORS customers with active support & subscription would be entitled to use both DOORS 9 and next generation capabilities when they become available. This was well received by the attendees since it means that they can try out DOORS NG when it ships without the need for an additional purchase.
Of course a day of technology insights never goes past without some piece of tech throwing an unexpected spanner in the works. This time it was the projector and the next presenter's Apple Mac that refused to talk to each other, so instead of a flow into a demo of DOORS NG, next up was Neal Middlemore to tell us about the improved integration of requirements and quality management with DOORS 9.4 and Rational Quality Manager
(RQM) 4.0. This release was a significant enhancement that brings the integration in line with IBM's strategy to support OSLC (Open Services for Lifecycle Collaboration). OSLC is a new approach to tool integration that is open and vendor neutral. What's really different about OSLC is that data no longer needs to be copied or synchronized between tools in order to create cross-tool or cross-discipline visibility or relationships. So now quality professionals working in RQM can see requirements in DOORS and create links between test cases (and now, because some organizations require it, test steps) and the requirements they are validating; and requirements professionals in DOORS can see linked test case information, including test results, without the need for either to leave the comfort of their familiar tool or for data to be copied between the two tools. Neal demonstrated the value of the integration to requirements & quality professionals and showed how RQM can be used to manage manual testing or hook up with a number of IBM and partner solutions for various forms of test automation. You can also see a demo of the DOORS - RQM integration on YouTube.
So, technical issue solved, it was back to Jon Walton to give a demo of DOORS Next Generation using the Beta 4 release. Jon spent most of his time in the web client, highlighting the support for key DOORS paradigms such as hierarchical structured requirements documents, and showed off the plethora of new capabilities provided by the Jazz platform such as database-wide requirement reuse, graphical traceability view, requirements definition techniques (use case diagrams, storyboards), cross-discipline dashboards (containing requirements project info mashed up with info from design, quality and task management) and task management. Jon also showed the desktop client of DOORS NG which is very familiar looking to existing DOORS users with some twists (reuse of requirements across documents for one) - the desktop client will primarily be for users who need to do extensive editing of large requirements documents. If you're currently using DOORS 9.x, this YouTube video gives a quick preview intro
of DOORS NG and how it's both similar to and different from DOORS 9.x. Watch this space for more to come on DOORS NG later this month.
Back to the earlier lifecycle integration theme started by Neal, next to present was Steve Rooks on how to use DOORS with IBM's solution for model-based systems engineering and model-driven embedded software development, Rational Rhapsody, to link requirements and design activities. Rational Rhapsody enables elaboration of requirements and construction of systems and software architectures using SysML and UML. Rhapsody Design Manager
provides an additional level of design collaboration capabilities. Models can be published to and/or stored and managed in a central repository, making them more easily accessible to a wider set of stakeholders so that designs can be better communicated and understood by all those involved in specifying, designing, building and validating a product or system. Rhapsody Design Manager uses OSLC to facilitate linking of design elements to other lifecycle artifacts - requirements, test cases, work items, etc. As with the DOORS-RQM scenario described above, a systems engineer or software architect working in Rhapsody can see requirements in DOORS and easily create links between requirements and design model elements. Requirements and requirement links can even be included in model diagrams. And of course a DOORS user can see links to design elements without leaving DOORS and, to participate in design reviews, can navigate into Rhapsody Design Manager. You can read more about linking requirements and design and the DOORS-Rhapsody Design Manager integration in my recent post 'The House That Paul Built'
that talks about a recent webcast
on the topic.
After lunch, IBM business partner Kovair
was invited to present on how their Kovair Omnibus solution provides bridges, synchronization and workflow support across multiple tools from multiple vendors. It's a common situation to find yourself trying to enact processes and workflows when you have a diverse set of tools. Kovair talked about their support for OSLC to be able to widen the number of tools they can help link together, but also highlighted scenarios where you would still want to copy or transform data between tools - it's not a choice of Link or Sync, it's Link and Sync as appropriate.
The next session was presented by Paul Fechtelkotter, market manager for energy & utilities at IBM Rational. Paul gave a really interesting presentation on the challenges of complex systems development for nuclear power plants and how the nuclear industry is now adopting systems engineering best practices, starting with requirements management, to enable them to get better change management, traceability, impact analysis and compliance support. You can learn more about how IBM Rational is helping the nuclear industry on our dedicated web page.
Unfortunately I had to leave after Paul's session and didn't catch the remainder of the afternoon, but as you can see it was a day packed full of information. I hope you find my summary and links for more information useful. If you have any questions or comments on any of the topics I've covered here or indeed anything on IBM's requirements management strategy, Rational DOORS and the lifecycle integrations, please don't be afraid to ask by using the blog comments facility.
You've bought the plot of land for your dream home. You have your list of requirements - 4 bedrooms, 3 bathrooms, spacious kitchen, 2 living rooms, 2 garages, landscaped gardens, etc. Would you be happy to simply hand that list to the builders and let them start work? Unlikely, I think. Typically, you call in an architect, who can take your quantitative requirements and qualitative desires and produce a blueprint, the architectural design that incorporates your wishes where feasible and adds creative flourish based on the architect's knowledge of house design. The blueprint enables you and the builders to have a much clearer picture of the desired end result than that original list of requirements. And it affords you the opportunity to influence the architecture, and for the builders to question and look at feasibility & cost options, before the foundations are dug and the first bricks are laid.
The same applies in product development. Systems engineers who are responsible for the holistic product specification and design don't just use textual requirements lists to capture the problem domain and describe the proposed solution. They analyze the requirements, identifying integrated scenarios, and often depict those using modeling languages such as UML or SysML. These modeled scenarios are easier to discuss and review with all stakeholders, and as the systems engineer evolves the proposed architecture (also in the same modeling language) they can run the scenarios against the architecture in model simulations to find inconsistencies or gaps in the requirements and flaws in the design, long before any software is coded, circuit boards are soldered or metal is welded.
So what value are our textual requirements lists - should we throw them away in favor of models? Well, not everything can be expressed in the model, and not everyone involved in a development effort may be using models. Going back to the house building analogy, there are contracts, numerous standards and regulations to be adhered to, and simply details that would make the blueprints unreadable. The various contractors (and I know from recent experience that sub-contracting is the name of the game in house building these days!) involved in the building process need to ensure that they can meet the contractual and regulatory demands while delivering against the architecture. Again, this is the same in product development, except that in many cases, particularly safety-critical systems, traceability and demonstration of conformance to requirements and compliance with standards & regulations are demanded. This requires the ability to integrate requirements and modeling workflows, easily link requirements and design elements, and report on that linked information.
The need and solutions for this capability are nothing new. Integrations between requirements management and modeling tools have existed for many years (I think I started using such an integration in the early '90s, and I'm sure they preceded that time). But I know from first-hand experience of using, and indeed writing, such integrations that they've not always been optimal in the way integration is performed and in the workflow that is supported. Typically it's meant synchronizing (i.e. copying) data between tools in order to create the traceability links in one of the tools. This brings up all sorts of issues like 'which tool is the master?', 'am I looking at the latest data?', 'what happens when information is deleted?', etc.
With Open Services for Lifecycle Collaboration (OSLC) we now have a much better way to link data across product development and operations tools, even when the tools may be from different vendors, open source or in-house. OSLC has learnt from the principles of the World Wide Web and enables tool data to be shared and linked where it resides (called a ‘Linked Data’ approach). OSLC provides a common vocabulary for ‘resources’ in particular domains, i.e. what a requirement, test case, design element, change request, work item, etc. looks like, so that regardless of tool, technology or vendor, tools implementing OSLC specifications can share and link data.
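The Linked Data idea can be illustrated with a toy sketch: each tool serves its own resources at stable URIs, and the integration is nothing more than a typed link from one URI to another, with no copying or synchronizing. The URIs and property names below are invented for illustration; real OSLC resource shapes are defined in its published domain specifications:

```python
# Two "tools", each holding only its own resources, keyed by URI.
doors = {  # requirements management tool
    "https://doors.example/req/42": {
        "type": "Requirement",
        "title": "The pacemaker shall pace the Atrium in Inhibit mode",
    },
}

rqm = {  # quality management tool
    "https://rqm.example/testcase/7": {
        "type": "TestCase",
        "title": "Verify inhibit-mode pacing",
        # The link IS the integration: it points at the live resource,
        # which stays in the tool that owns it.
        "validatesRequirement": "https://doors.example/req/42",
    },
}

def dereference(uri, *tools):
    """Follow a URI to whichever tool currently serves that resource."""
    for tool in tools:
        if uri in tool:
            return tool[uri]
    raise KeyError(uri)

# A tester's view of the requirement is always the requirement tool's copy.
tc = rqm["https://rqm.example/testcase/7"]
req = dereference(tc["validatesRequirement"], doors, rqm)
```

In the real thing the dereference is an HTTP GET against the owning tool, which is why there is never a stale synchronized copy to worry about.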
With Rational DOORS 9.4 and Rational Rhapsody 8.0 with Design Manager 4.0, IBM is utilizing OSLC to provide a simplified workflow for linking requirements analysis and design. On September 20, Paul Urban (if you've been wondering about this blog post title, now you know the Paul I'm speaking of), Market Manager for IBM Rational Rhapsody, presented this simplified workflow and its benefits on an IEEE Spectrum webcast sponsored by IBM. You can watch and listen to the replay at your leisure here. I hope you enjoy it - please let Paul and me know what you think by leaving feedback on this blog post.