David Norfolk of Bloor Research, a leading analyst and systems engineering expert, recently published a white paper titled Reducing the risk of development failure with cost-effective capture and management of requirements. In this report, David delves into the relevance of requirements management as a discipline and puts forth his views on how the domain is changing with the advent of new development paradigms such as agile, mobile and DevOps.
If, from the CEO’s point of view, enterprise architecture helps to bridge the gap between business strategy and vision and its implementation in technology, then requirements management continues to help bridge the gap between business and technology at a lower level.
The report provides valuable insights, with deep coverage of why requirements management is more relevant today than ever, the issues associated with managing requirements, the challenges facing the discipline, best practices from his experience, and his thoughts on the capabilities of an ideal requirements engineering tool. Some of the topics the white paper discusses are:
Challenges in requirements management
Managing changing requirements
Scope and ideal capabilities of a requirements engineering tool
Real-life examples of benefits from investing in requirements
Inspired by recent experience in a large systems engineering project, Part 1 of this blog post series covered the practice of decomposing requirements, which brings about one of the most important traceability relationships in requirements engineering. Part 2 covered the next most important relationship: that between requirements and validation and verification activities.
Part 3 (probably the final part!) presents some other things about V&V we think we have learnt from undertaking a large project: what do you do about new requirements that arise from the V&V you have planned against your main requirements?
Once the need for V&V activities has been established (see Part 2), this will often give rise to new requirements. Broadly speaking, these requirements fall into two types:
Those that affect the design of the product.
Such requirements may constrain the design of the product, and may even add functions. For instance, should there be a need during commissioning to establish that the temperature of a fuel cell does not exceed a certain level, then it may be necessary to design in a means of measuring it.
Those that define the need to build a secondary system.
These requirements are for the construction of test artefacts, such as models and test equipment. In large programs, such test equipment can represent a huge project in itself – like the construction of a new building to test a new design of jet engine.
Neither of these types of requirement should really be mixed in with the remaining requirements without recording (through traceability) their origin, because if the choice of V&V changes, then we should be able to identify which parts of the design – or which pieces of test equipment – are present only to make the old V&V possible.
If the need to test a product gives rise to the need to build facilities to carry out that testing, then another development life-cycle is spawned for the purpose. Requirements will be collected for the facility, and further V&V may be required against those requirements. We enter a sort of recursive world of development life-cycles. Some call this “fractal” – the main development spawns smaller developments, which in turn spawn yet smaller ones.
Primary versus Secondary V&V
Secondary V&V arises when V&V activities lead to test artefacts that also need validating or calibrating. For example, from requirements about power delivery, there is a direct need to do some analysis using a model of fluid dynamics – what we call primary V&V. Requirements for the model itself are collected, and the need for validation of the model considered. For example, the model may need calibrating against some similar existing systems. This calibration activity is another form of V&V – what we call secondary V&V.
Requirements lead to V&V activities lead to requirements
As noted above, the need for V&V leads to design changes or secondary systems, and thus to further requirements. This relationship should be captured; a requirement, rather than having a parent requirement, may in fact have a parent V&V activity. We could say that a requirement “enables” a V&V activity.
In the extreme, a secondary system may impose requirements on the primary product, for instance through the need to interface with it.
An information model
The diagram below illustrates the idea that requirements enable V&V activities.
In the diagram, three traceability relationships are shown: “satisfies”, “verifies” and “enables”. There is an example of a primary V&V activity leading to an enabling requirement on the primary system itself – a requirement whose parent (through the “enables” relationship) is a V&V activity.
There are also two examples of secondary systems driven by requirements for primary V&V, and one of those gives rise, in turn, to a tertiary system.
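To make the model concrete, here is a minimal sketch of the three relationships, using the fuel-cell example from earlier. The Python classes, attribute names and requirement wording are illustrative assumptions, not the actual DOORS schema we used:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VVActivity:
    name: str
    verifies: list = field(default_factory=list)    # "verifies" links to requirements

@dataclass
class Requirement:
    text: str
    satisfies: list = field(default_factory=list)   # "satisfies" links to parent requirements
    enables: Optional[VVActivity] = None            # "enables" link: a parent V&V activity

# A primary requirement, verified by a commissioning test...
max_temp = Requirement("The fuel cell temperature shall not exceed the specified limit.")
commissioning_test = VVActivity("Commissioning temperature test")
commissioning_test.verifies.append(max_temp)

# ...which in turn gives rise to an enabling requirement on the product itself:
measure_temp = Requirement(
    "The fuel cell shall provide a means of measuring its temperature.",
    enables=commissioning_test,   # its parent is a V&V activity, not a requirement
)
```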
In our current project, we have pushed this model only as far as secondary systems with an “enables” relationship and secondary V&V. An example is the one cited above – a fluid dynamics model that needs itself to be validated through calibration with similar existing systems.
As with all these things, there is probably a law of diminishing returns. How far should you push the requirement to V&V to requirement to V&V to requirement chain? You will have to decide!
A similar model probably exists for other aspects of development – manufacture, for instance. The need to manufacture a product gives rise to the need to construct the factory and plant to do so – a primary development spawning a secondary development. However, the traceability chain goes not from requirement to V&V activity to requirement, but rather directly from primary requirement to secondary requirement. This could still be characterized as enablement.
We never build just “the product”. There are always other things that need designing and building that surround the system for various purposes, including testing components, sub-systems, systems and completed products.
What has been presented here is an attempt to get to grips with the traceability required to track the relationships between the product and its secondary (and possibly tertiary) systems. We implemented all this in a Rational DOORS database, with a small amount of customisation to ease the way.
Thank you for reading these blog entries, and please contact me if you are interested in reusing any of this experience and associated Rational DOORS customization.
About the author - Jeremy Dick works as Principal Analyst for Integrate Systems Engineering Ltd in a consultancy, research and thought leadership capacity. He has extensive experience in implementing practical requirements processes in significant organizations, including tool customization, training and mentoring. At Integrate, he has been developing the concept of Evidence-based Development, an extension of his previous work on “rich traceability”. Prior to this appointment, he worked for 9 years at Telelogic (now part of IBM Rational) in the UK Professional Services group as both an international ambassador for Telelogic in the field of requirements management, and a high-level consultant for Telelogic customers wishing to implement requirements management processes. During this time, he developed considerable expertise in customizing DOORS using DXL to support advanced engineering processes. His roles at Telelogic included a position in the DOORS product division to assist in the transfer of field knowledge to the product team. Co-author of a book entitled “Requirements Engineering” that has recently reached its 3rd edition, he is recognized internationally for his work on traceability. Jeremy can be reached at jeremy.dick[at]integrate.biz.
Inspired by recent experience in a large systems engineering project, Part 1 of this essay covered the practice of decomposing requirements, which brings about one of the most important traceability relationships in requirements engineering. Part 2 here covers the next most important relationship: that between requirements and validation and verification activities. Part 3 will continue the discussion of V&V, and how it itself gives rise to further requirements.
Verification & Validation (V&V)
I don’t care enough about the difference between validation and verification to want to enter into the divisive debate about it here. I am just going to say V&V and be done with it!
Kinds of V&V activity
There are many kinds of V&V activity, and organisations have varied ways of classifying them. In the project I am working on, the classifications are Analysis, Analogy, Inspection, Review, Test and Demonstration.
By their very nature, these types of activity tend to occur at different times of the life-cycle. Analysis, for instance, tends to occur early to predict properties of the proposed design and verify it against requirements. By contrast, demonstration tends to occur late as part of the acceptance tests.
Typically, a whole series of activities will be planned against a single requirement, some early, some late, allowing confidence to accumulate over the life-cycle of the project.
Requests for evidence
Despite the variety of kinds of activity, there is one thing they all have in common: they are requests for evidence of some kind or other. Indeed, I would favour calling V&V activities exactly that: “requests for evidence”.
Intention versus Fulfilment
Those activities that are carried out early in the development process provide evidence that the intended design will meet the requirements – they address design intention. Those activities applied late in the development process collect evidence that what has been built meets the requirements – they address design fulfilment.
Once the need for V&V activities has been established, this will often give rise to new requirements, either on the design of the product itself, or requirements for the construction of test artefacts, such as models and test equipment. (We never build just the product; there are always other things that need designing and building that surround the system for various purposes.)
The management of requirements arising from V&V will be the topic of Part 3.
Requirements decomposition and V&V planning
When planning V&V activities against a parent requirement, you need to take into account the V&V that will be carried out on its child requirements, and their child requirements, and so on.
Take, for instance, the following example where a user requirement is decomposed into a number of system requirements:
The only V&V activity planned against the user requirement is a commissioning test, which will occur late in the life-cycle. However, further V&V activities are defined against the child system requirements. Some of these are design inspections that occur very early, and some are system tests that occur relatively late, but still before commissioning.
There is, of course, a sense in which all these V&V activities provide evidence for the satisfaction of the user requirement, but some of the activities fit more directly against the system requirements. So when planning V&V activities, you need to ask the question: what activities can only be carried out against the parent requirement, and which can be delegated to child requirements? – because those that can be delegated are likely to provide evidence earlier in the life-cycle. And you always want that, if you can get it.
Granularity of V&V results
In the above example, there is one V&V activity that is linked to multiple requirements. In general, the relationship between requirements and V&V activities will be many-to-many.
However, this presents an issue when it comes to collating results of V&V against requirements. The System Test defined above may show positive results for filling, boiling and dispensing, but fail on the time taken to recover (cool down). So it has passed on all requirements except one. In terms of granularity of information, we need to record the result of the V&V activity against each linked requirement.
How is it best to do that? The only place to do that in the information model of the example is on the “verifies” links; there is a link for every requirement-V&V pair.
Another way is shown in the next example:
Here we have separated out the success criteria for each requirement for each test by adding subsidiary objects under the V&V activities (for instance, using the DOORS object hierarchy). Each success criterion has exactly one link to a requirement; a link from a criterion is implicitly a link from the V&V activity. (This link could be made explicit by retaining a link from the activity as well – not shown in the diagram.)
Now we have objects rather than links against which to record the results of the V&V activity (using an attribute of that object). This has the added advantage that it encourages a discipline of identifying precisely what the success criterion is for each requirement against each V&V activity. In addition, the V&V Activity and its list of success criteria can be used as a description/checklist for each particular test.
As results come in, the success/failure status on the success criteria can be rolled up through the “verifies” links to the associated requirements, and then on up through the “satisfies” links to the parent requirements. Both these relationships allow results to be summarised through the eyes of the requirements at every level.
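Here is a minimal sketch of that roll-up, assuming illustrative structures and requirement IDs rather than the actual DOORS object model:

```python
from dataclasses import dataclass, field

@dataclass
class SuccessCriterion:
    text: str
    result: str = "pending"     # "pass", "fail" or "pending"

@dataclass
class Requirement:
    rid: str
    criteria: list = field(default_factory=list)   # success criteria linked via "verifies"
    children: list = field(default_factory=list)   # child requirements linked via "satisfies"

def rollup(req: Requirement) -> str:
    """A requirement passes only when all its criteria and all its children pass."""
    results = [c.result for c in req.criteria] + [rollup(ch) for ch in req.children]
    if "fail" in results:
        return "fail"
    if "pending" in results or not results:
        return "pending"
    return "pass"

# The system test above passes on filling, boiling and dispensing but fails on
# recovery time, so the parent user requirement rolls up as failed.
recovery = Requirement("SYS-4", criteria=[SuccessCriterion("Recovers in time", "fail")])
filling = Requirement("SYS-1", criteria=[SuccessCriterion("Fills correctly", "pass")])
user_req = Requirement("UR-1", children=[filling, recovery])
print(rollup(user_req))   # -> "fail"
```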
V&V planning steps
These are the process steps we teach for planning V&V against requirements. They are numbered so as to continue from the process steps named in Part 1:
5. Determine the V&V activities you will need.
Consider what range of evidence you will need to collect to establish that the requirement has been met, and determine the best V&V activities for that. Aim to collect evidence as early as possible in the life-cycle, considering early proof of design intention as well as later design fulfilment. Capture the V&V activities into the database and (if using explicit links) link them to the associated requirements.
6. Identify the success criteria for each requirement against each V&V activity.
For each requirement/V&V activity pair, determine the success criteria to be applied. Capture each success criterion in a new object under the activity, and link it to the requirement.
7. Record the results of the V&V activity against each success criterion.
When the V&V activity has been completed, record the success or failure of the activity against each success criterion.
So this is what we now teach those engaged in planning and tracing V&V against requirements, in conjunction with requirements decomposition. It is wrong to assume that people will somehow automatically know how to do this kind of thing. By taking this approach, the V&V plan is well organized, defined at the most appropriate layers, with success criteria defined, and ready for the collection and roll-up of results.
It is a bit of a shock to find myself well into the fourth year on the same project! The nature of my work as a consultant means that it is rare for me to stick with a project beyond the initial phases of defining a requirements management process, establishing effective tool support and training the process enactors. But this time we have been able to stick with the requirements team supporting a large project long enough to see theory put into practice, and to see what it really means to apply the tools and techniques. We have gone well beyond just training, and find ourselves mentoring nearly 300 engineers in the application of DOORS for requirements capture, development and management. This has helped us keep our feet firmly on the ground – rubber on the road – and to walk with those who actually have to do the work.
So what have we learnt?
It is one thing to teach people how to write requirements statements that are clear, unambiguous, testable and traceable; it is quite another thing to help people understand how to take a requirement and develop it. None of the engineers we met on the project had previous experience of how to take a system requirement, for instance, and systematically decompose it through the design into sub-system and component requirements. We had to adapt our training and mentoring to address this skill.
Requirements decomposition establishes one of the essential requirements traceability relationships: how each layer of requirements contributes to the satisfaction of the layer above. This is often known as the satisfaction relationship, or as refinement in SysML. It is this relationship that connects the design to the development of requirements, and that lies at the heart of the ability to perform impact analysis.
Whatever layer you are engaged in – customer, system, sub-system or component – the same basic requirements development process can be applied. These are the process steps we teach for requirements development:
1. Collect and agree your requirements.
Here you actively seek out the requirements you are expected to fulfil. In an ideal world, development will be perfectly top-down, and you can wait for the layers above to allocate perfectly expressed requirements to you. In a practical world, you will have to be more proactive, and will have to cooperate with your requirement “customers” to obtain acceptably worded requirements.
2. Design against your requirements.
This stage is the creative bit that you perhaps most enjoy doing, and where the real engineering takes place. You will imagine how to design the system to meet the requirements, and what you need your requirement “suppliers” to do to contribute to your design. In other words, if you are at the system level, you design the system into sub-systems, and work out what each sub-system must do to meet the system requirements.
3. Decompose the requirements to reflect the design.
Now you enter the decomposed requirements and trace them back to the requirements they satisfy, thus making the requirements contents and traceability reflect your design. The wording of the decomposed requirements is important: if the original requirement read “The <system> shall ...”, then it is likely that the decomposed requirements will read “The <sub-system> shall ...” At this stage you can also capture rationale for the decomposition, including references to the design documentation or models, thus tracing also to design information.
4. Allocate the decomposed requirements.
Finally, you can pass on the decomposed requirements to those areas responsible for fulfilling them. These other areas engage in the same process, and together you systematically achieve alignment of requirements through all the layers.
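As a minimal sketch of steps 3 and 4, here is how the decomposition and its traceability might be represented; the requirement wording, rationale and team names are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    text: str
    rationale: str = ""
    satisfies: list = field(default_factory=list)   # parent requirements this one helps satisfy
    owner: str = ""                                 # area allocated to fulfil it

system_req = Requirement("The <system> shall deliver hot water within 2 minutes.")

# Step 3: decompose to reflect the design, rewording for the new target and
# tracing each child back to the requirement it satisfies.
children = [
    Requirement("The <heater sub-system> shall heat 1 litre of water to 95 °C "
                "within 2 minutes.",
                rationale="See functional model: heating function",
                satisfies=[system_req]),
    Requirement("The <tank sub-system> shall hold at least 1 litre of water.",
                rationale="See functional model: storage capacity",
                satisfies=[system_req]),
]

# Step 4: allocate the decomposed requirements to the responsible areas.
children[0].owner = "Heater team"
children[1].owner = "Tank team"

# The satisfaction links now support impact analysis: if system_req changes,
# follow the links back to find every affected child requirement.
affected = [c.text for c in children if system_req in c.satisfies]
```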
The example below illustrates the end result of applying this process to a user requirement decomposed into system requirements. The rounded box contains the design rationale, and refers to a functional model of the product. (If you’ll forgive the shameless plug, such diagrams can be produced using a DOORS extension related to TraceLine. Ask me more if you are interested.)
The example just shown is a classic decomposition pattern. It is actually the decomposition of an overall performance requirement into a combination of capacity and performance attributes of the product. We call this “decomposed”.
Other decomposition patterns are possible. Sometimes no decomposition is necessary, because the system requirement can be satisfied entirely by a single component or sub-system, as in the left-hand example below; or a constraint that will apply universally to all parts, as in the right-hand example. All that changes, perhaps, is the wording of the requirement to indicate the new target. We call this “direct flow”.
You will reach a point in the cascade of decomposed requirements when a requirement is satisfied entirely within the current area without the need to decompose further, as illustrated in the following. In this case, it is important to state the rationale for not flowing the requirement onwards, as otherwise it may be construed as a traceability gap. We call this “not developed further”.
I have seen a number of organisations that capture this flow-down type – Decomposed/Direct Flow/Not developed further – as an attribute of the requirement. We do this because it allows us to cross-check certain things: if you have marked a requirement as “Decomposed” but have not decomposed it, then that does indicate a design gap. However, if you mark the requirement as “Not developed further”, that gives you permission not to trace it further, i.e. not a design gap. (But you would do well to provide rationale!)
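Here is a minimal sketch of that cross-check, assuming a flow-down attribute with the three values above and illustrative field names:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    text: str
    flow_down: str = "Direct flow"   # "Decomposed" | "Direct flow" | "Not developed further"
    rationale: str = ""
    children: list = field(default_factory=list)   # decomposed (child) requirements

def check_flow_down(req: Requirement) -> str:
    """Cross-check the flow-down attribute against the actual traceability."""
    if req.flow_down == "Decomposed" and not req.children:
        return "design gap: marked Decomposed but not decomposed"
    if req.flow_down == "Not developed further":
        if req.children:
            return "inconsistent: marked Not developed further but has children"
        if not req.rationale:
            return "warning: no rationale recorded for stopping the flow-down"
    return "ok"

print(check_flow_down(Requirement("R1", flow_down="Decomposed")))
# -> design gap: marked Decomposed but not decomposed
```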
As requirements flow down through the layers, the complexity of the design becomes evident in the shape of the requirements graph. In general, satisfaction is a many-to-many relationship between requirements, and the figure below shows how this may manifest itself. As the requirements are decomposed, they are refactored through the design.
Some patterns are questionable. Take these for instance:
Why decompose something into three requirements only to reduce them to a single one again? Or why collapse three requirements into one only to re-expand them in the next layer?
These patterns are not necessarily wrong, but they should be targeted for careful review.
So this is what we now teach those engaged in requirements decomposition, flow-down and traceability. It is wrong to assume that people will somehow automatically know how to do this kind of thing. By taking this approach, the flow-down of requirements reflects the design, and a clear satisfaction relationship is expressed in the traceability.
Today we have with us Mia McCroskey of Emerging Health, Montefiore Information Technology, who was recognized as an IBM Champion this year. She shares with us her thoughts about the requirements management domain.
Welcome to the IBM family! It's a great pleasure to have you with us as an IBM Champion. Congratulations! How do you feel?
I am honored to be recognized in this way not just this year but for the past several years that I have been asked to present my team's stories at Innovate. It may seem like our little team's requirements management needs are nothing like those of customers with huge DOORS ecosystems. But really we are an R&D site for the evolution of requirements management techniques and strategies. If a member of my team has an interesting idea for how to capture, structure, track, or trace requirements, we can try it without getting high level approvals or disrupting the work of hundreds of people. I take tremendous satisfaction from sharing our successes in a way that may help others improve their best practices.
Can you tell us something about what you do at Emerging Health, Montefiore Information Technology?
Emerging Health is primarily an IT delivery organization supporting healthcare delivery in the Bronx. Montefiore Medical Center is our parent company. My team, Product Development, is a software development shop tucked away in a corner doing very different work from most everyone else. Our application, Clinical Looking Glass, is a browser-based clinical intelligence tool that gives clinicians access to the enormous wealth of patient data gathered by all the other systems. Our end users can get an answer to a question like "are my clinic's diabetic patients getting the level of follow-up care required by our funding sources?" in a few minutes. In most healthcare environments, getting the data that you need to answer this question takes weeks.
My title is Manager, Product Development Lifecycle because I have my fingers on just about every stage of that lifecycle. Specifically, I lead our Requirements Management, Quality Assurance, Education, Support, and Implementation teams. I also manage outsourced development work, manage client relationships, and do hands-on end-user training and support. We are an extremely team-oriented organization, with two formal development scrum teams and two teams that are at various stages of adopting an agile process. I'm deeply involved in that right now: it's challenging to apply Scrum to a training and support team, and to a team of data analysts and engineers.
Having said all that, my roots are in requirements. On a team of "happy path" stakeholders who get very excited by ideas for new functionality, I love to dig in and find the challenges that nobody wants to think about -- before we start coding. Sometimes I feel like a real buzz kill!
What are your thoughts on managing requirements effectively?
Agile models have forced us to completely reorganize our requirements elicitation, analysis, and management processes. But one thing that has not changed is my belief in a comprehensive, functionally organized requirements model.
But when I say "comprehensive," I don't mean laboriously detailed. The art of requirements analysis and management is in knowing -- or guessing right -- what details are going to be important later. By later I don't mean to the coders and testers in this iteration, I mean when we want to revise or augment the feature in six months. The requirements analyst has to be deeply plugged in to the business goals and vision in order to predict the future and capture the smallest amount of the most important information about what the team is doing.
A few years ago I spent many months constructing an "as built" specification for a market data system at the New York Stock Exchange. I had the original ten-year-old spec and a few dozen incremental release documents. We were getting ready to refactor the system and nobody knew every business rule and function. I vowed that no system I worked on would ever lack a spec that described everything it currently did. Incremental requirements specs that aren't integrated into the overall system are defects waiting to be discovered once the coders get ahold of them.
Having argued that I must add that a requirements model in document form is pretty nearly impossible to maintain in the way I describe. You've got to employ a database tool that supports the granularity of each requirement and allows you to describe each one through attributes. Then you can use filters and queries and views to present an infinite number of customized specifications -- all of the requirements implemented in a specific release, or all of the requirements related to a specific functional area, or the completed requirements related to a specific business objective or corporate mandate.
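To make that concrete, here is a toy sketch (in Python, with hypothetical attribute names) of the kind of attribute-driven views Mia describes; a tool like DOORS provides these through built-in filters, queries and views rather than code:

```python
# Granular requirements, each described through attributes.
requirements = [
    {"id": "R1", "area": "reporting", "release": "4.2", "status": "complete"},
    {"id": "R2", "area": "security",  "release": "4.2", "status": "in progress"},
    {"id": "R3", "area": "reporting", "release": "4.1", "status": "complete"},
]

# "All of the requirements implemented in a specific release":
release_view = [r for r in requirements if r["release"] == "4.2"]

# "The completed requirements related to a specific functional area":
area_view = [r for r in requirements
             if r["area"] == "reporting" and r["status"] == "complete"]
```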
What are your thoughts on the role of requirements management in agile projects?
I recently spoke with a software development professional who was very proud of his organization's highly structured and detailed requirements templates that captured every detail before any work began "to be sure we deliver what's wanted." I felt like I was talking to a tyrannosaurus rex. We all know that the day after you baseline that 400-page spec it's already out of date.
Agile with its short increments and "only write down what you really need to" mentality can seem seductively freeing. When our organization adopted Scrum, I stuck to my requirements model guns, and sure enough a few months later we couldn't remember decisions that we'd made a few sprints back, nor even exactly which sprint we'd done the work in. Since we weren't supposed to be doing "heavy" requirements, we'd been coached to: use the system and see what happens; or sift through dozens of completed user stories and hope the detail we wanted was actually mentioned in the acceptance criteria (which it was not because it was an in-sprint decision); or try to find relevant test cases and check the expected results. Instead, I launched DOORS, went to the functional area related to the question, and checked our documented business rule. Done.
What are some of the challenges you see in Healthcare Informatics projects?
Deriving meaningful information from the electronic medical record is essential to justifying the cost of those systems. We're piloting the use of predictive analytics -- combining statistical methods with the mass of patient data collected every day at our parent medical center -- to predict outcomes at the population level. To do it you need a very wide range of data: blood pressure, height, and weight, smoking patterns, history of heart disease, current blood sugar level, and on and on. Just bringing all this data together is the first challenge. Next is the analytic tool -- that's CLG. Finally you need big iron to process it. Most local and regional healthcare providers don't have the funding for, say, Watson. My team spent last summer optimizing our hardware and software environment, and CLG itself, to handle analysis of larger data sets faster, but within a medical-center friendly infrastructure budget.
Another area of critical concern is patient information. The need to pool patient data for direct care as well as population research is supported by legislation and funding sources. But we are bound, both legally and ethically, to protect patient identity in every circumstance.
Clinical Looking Glass has the capacity to show patient contact information to users who have been granted permission to see it -- usually clinicians who are actively providing care and need to contact the patients. While this is a critical feature of CLG, we expect to have to develop more granular levels of access as new types of clients adopt the product. For example, our Regional Health Information Organization (RHIO) client has data from twenty-two healthcare institutions. Some patients have declined to have their identity shared across the organization. We have to build the capability to mask these patients' identity even to our users who have permission to see it.
We have with us today Bruce Powel Douglass. He doesn't need an intro for most of us -- Embedded Software Methodologist. Triathlete. Systems engineer. Contributor to UML and SysML specifications. Writer. Black Belt. Neuroscientist. Classical guitarist. High school dropout. Bruce Powel Douglass, who has a doctorate in neurocybernetics from the USD Medical School, has over 35 years of experience developing safety-critical real-time applications in a variety of hard real-time environments. He is the author of over 5700 book pages from a number of technical books including Real-Time UML, Real-Time UML Workshop for Embedded Systems, Real-Time Design Patterns, Doing Hard Time, Real-Time Agility, and Design Patterns for Embedded Systems in C. He is the Chief Evangelist at IBM Rational, where he is a thought leader in the systems space and consults with and mentors IBM customers all over the world. He can be followed on Twitter @BruceDouglass. Papers and presentations are available at his Real-Time UML Yahoo technical group and from his IBM thought leader page.
The usual thing is that requirements are reviewed by a bunch of people locked in a room until they are ready to either 1) gnaw off their own arm or 2) approve the requirements. Then the requirements are passed off to a development team – which may consist of many engineering disciplines and lots of engineers – for design and implementation. In parallel, a testing group writes the verification and validation (V&V) plan (including the test cases, test procedures and test fixtures) to ensure that the system conforms to the requirements and that the system meets the need. After implementation, significant problems tracing back to poor requirements require that portions of the design and implementation be thrown away and redone, resulting in projects that are late and over budget. Did I get that about right?
The key problem with this workflow is that the design and implementation are started and perhaps even finished without any real assurance about the quality of the requirements. The actions that determine that the requirements are right are deferred until implementation is complete. That means that if the requirements are not right, the implementation (and corresponding design) must be thrown away and redone. Internal to the development effort, unit/developer and integration testing verify the system is being built properly and meets the requirements. Then at the end, the system verification testing provides a final check to make sure that the requirements are correctly addressed by the implementation.
During this development effort, problems with requirements do emerge – such as requirements that are incomplete, inconsistent, or incorrect. When such problems are identified, this kicks off a change request effort and an update to the requirements specification (at least in any reasonable process), resulting in the modification of the system design and implementation. But wouldn’t it be better not to have these defects in the first place? And even more important, wouldn’t it be useful to know that implementing the requirements will truly result in a system that actually meets the customer’s needs?
There are two concerns I want to address here: ensuring that the requirements are “good” (complete, consistent, accurate, and correct) and that they reflect the customer’s needs. And I want to do this before design and implementation are underway.
It isn’t obvious
Imagine you’re building a house for your family. You contract an architect who comes back to you after 3 months with a 657-page specification containing statements like:
… indented by 7 meters from the west border of the premises, there shall be the left corner of the house
… The entrance door shall be indented by another 3.57 meters
… 2.30 meters wide and 2.20 meters high, there shall be a left-hand hinge, opening to the inside
…As you come in, there shall be two light switches and a socket on your right, at a height of 1.30 meters
My question to you is simple: is this the house you want to live in? How would you know? There might be 6500 requirements describing the house, but it would be almost impossible for any human to understand whether this is the house you want. For example:
Is the house energy efficient?
Does the floor plan work for your family’s uses, or must you go through the bathroom to get to the kitchen?
Is it structurally sound?
Does it let in light from the southern exposure?
Is there good visibility to the pond behind the house?
Does it look nice?
What (real) architects do is they build models of the system that support the reasoning necessary to answer these questions. They don’t rely simply on hundreds or thousands of detailed textual statements about the house. Most systems that I’m involved with developing are considerably more complex than a house and have requirements that are both more technical and abstract.
Nevertheless, I still have the same basic need to be able to understand how the requirements fit together and reason about the emergent properties of the system. The problem of demonstrating that the implementation meets the stated requirements (“building the system right”) is called verification. The problem of showing that the solution meets the needs of the customer is called validation. Verification, in the presence of requirements defects, is an expensive proposition, largely due to the rework it entails. Implementation defects are generally easy and inexpensive to repair but the scope of the rework for requirements defects is usually far greater. Validation is potentially an even more expensive concern because not meeting the customer need is usually not discovered until the system is in their hands. Requirements defects are usually hundreds of times more expensive than implementation defects because the problems are introduced early, identified late in the project, and require you to throw away existing work, redesign and reimplement the solution, then integrate it into the system without breaking anything else.
A proposal: Verifiable and Validatable Requirements
The agile adage of “never be more than minutes away from demonstrating that the work you’re doing is right” applies to all work products, not just software source code. It’s easy to understand how you’d do that with source code (run and test it). But how do you do that with requirements?
The Core Concept
Premise: You can only verify things that run.
Conclusion: Build only things that run.
Solution: Build executable requirements models to support early requirements verification.
If we can build models of the requirements, we can verify and validate them before handing them off to the design team. The way I recommend you do that is to:
Organize requirements into use cases (user stories work too, if you swing that way)
Use sequence diagrams to represent the required sequences of functionality for the set of requirements allocated to the use case (scenarios)
Construct a normative (and executable) state machine that is behaviorally equivalent to that set of scenarios
Add trace links from the requirements statements to elements of the use case model (see the sketch after this list):
messages and behaviors in scenarios, and
events (for messages from actors), actions (for messages to actors and internal behaviors), and states (conditions of the system) in the state machine
Verify requirements are consistent, complete, accurate, and correct
Validate the requirements model with the customer
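Here is that sketch: a toy illustration of the trace-link step, with hypothetical requirement IDs and model-element names, showing how the links make the later completeness check mechanical:

```python
# Hypothetical trace links from requirements to use case model elements.
trace_links = {
    "REQ-01": {"scenario_msgs": ["pace()"],      "state_machine": ["Pacing"]},
    "REQ-02": {"scenario_msgs": ["senseBeat()"], "state_machine": ["evBeat", "Sensing"]},
    "REQ-03": {"scenario_msgs": [],              "state_machine": []},  # not yet traced
}

# Completeness: every requirement allocated to the use case must appear in at
# least one scenario and in the normative state machine.
untraced = [rid for rid, links in trace_links.items()
            if not links["scenario_msgs"] or not links["state_machine"]]
print("Requirements with missing trace links:", untraced)  # -> ['REQ-03']
```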
When problems are identified with the requirements during this functional use case analysis, they can be easily and inexpensively fixed before there is any design or implementation to throw away and redo.
Constructing the Executable Use Case
Some people are confused by the fundamental notion of using a state machine to represent requirements, thinking that state machines are inherently a design tool. State machines are just a behavioral specification, and requirements are really just statements of behavior in which we are trying to characterize the required input-output control and data transformations of the system. It’s a natural fit. Consider the following set of user stories for a cardiac pacemaker:
The pacemaker may be Off (not pacing or sensing) or executing the pacing mode of operation.
The cardiac pacemaker shall pace the Atrium in Inhibit mode; that is, when an intrinsic heart beat is detected at or before the pacing rate, the pacemaker shall not send current into the heart muscle.
If the heart does not beat by itself fast enough, as determined by the pacing rate, the pacemaker shall send an electrical current through the heart via the leads at the voltage potential specified by the Pulse Amplitude parameter (nominally 20mV, range [10..100]mV) for the period of time specified by the Pulse Length (nominally 10ms, range [1..20]ms).
The sensor shall be turned off before the pacing current is released.
The sensor shall not be re-enabled following a pace for the period of time it takes the charge to dissipate to avoid damaging the sensor (nominally 150ms, setting range is [50..250]ms). This is known as the refractory time.
When the pacing engine begins, it will disable the sensor and current output; the sensor shall not be enabled for the length of the refractory time.
A scenario of use is shown in the figure below.
A state machine that describes the required behavior is shown below.
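Since the figures cannot execute on the page, here is a minimal Python sketch of an executable state machine consistent with these user stories. It is an illustrative assumption only – the state names, parameters and time-stepped simulation are mine, and the original model was a UML state machine built in Rational Rhapsody:

```python
class PacemakerAAI:
    """Illustrative pacing-engine states: Off -> Refractory -> Sensing -> Pacing."""

    def __init__(self, pacing_interval_ms=1000, refractory_ms=150,
                 pulse_length_ms=10):
        self.pacing_interval_ms = pacing_interval_ms  # derived from the pacing rate
        self.refractory_ms = refractory_ms
        self.pulse_length_ms = pulse_length_ms
        self.state = "Off"
        self.timer_ms = 0

    def turn_on(self):
        # On start, sensor and current output are disabled for the refractory time.
        self.state = "Refractory"
        self.timer_ms = self.refractory_ms

    def tick(self, ms, intrinsic_beat=False):
        """Advance the model by `ms`; `intrinsic_beat` models a sensed heart beat."""
        if self.state == "Off":
            return
        self.timer_ms -= ms
        if self.state == "Refractory" and self.timer_ms <= 0:
            self.state = "Sensing"                       # sensor re-enabled
            self.timer_ms = self.pacing_interval_ms
        elif self.state == "Sensing":
            if intrinsic_beat:
                self.timer_ms = self.pacing_interval_ms  # inhibited: no pace needed
            elif self.timer_ms <= 0:
                self.state = "Pacing"                    # sensor off while pacing
                self.timer_ms = self.pulse_length_ms
        elif self.state == "Pacing" and self.timer_ms <= 0:
            self.state = "Refractory"                    # let the charge dissipate
            self.timer_ms = self.refractory_ms

# Exercising one scenario: no intrinsic beats, so the engine must pace.
pm = PacemakerAAI()
pm.turn_on()
visited = set()
for _ in range(130):
    pm.tick(10)
    visited.add(pm.state)
assert "Pacing" in visited
```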
Of course, this is a simple model, but it actually runs, which means that we can verify that it is correct, and we can use it to support validation with the customer as well. We can examine different sequences of incoming events with different data values and look at the outcomes to confirm that they are what we expect.
Verifying the Requirements
For the verification of consistency of requirements, we must first decide what “inconsistent” means. I believe that inconsistent requirements manifest as incompatible outcomes in the same circumstance, such as when a traffic light would be Red because of one requirement but at the same time must also be Green to meet another. Since the execution of the requirements model has demonstrable outcomes, we can run the scenarios that represent the requirements and show through demonstration that all expected outcomes occur and that no undesired consequences arise.
For the verification of completeness, we can first demonstrate – via trace links – that every requirement allocated to the use case is represented in at least one scenario as well as in the normative state machine. Secondly, the precision of thought necessary to construct the model naturally raises questions during its creation. Have we considered what happens if the system is in THIS state and then THAT occurs? What happens if THAT data is out of range? How quickly must THIS action occur? Have we created all of the scenarios and considered all of the operational variants? These questions will naturally occur to you as you construct the model and will result in the addition of new requirements or the correction of existing ones.
By correctness, I mean that the requirement specifies the proper outcome for a given situation. This is usually a combination of preconditions and a series of input-output event sequences resulting in a specified post-condition. With an executable use case model, we can show via test that for each situation, we have properly specified the output. We can run the same scenario with different data values to ensure that boundary values and potential singularities are properly addressed. We can change the execution order of incoming events to ensure that the specification properly handles all combinations of incoming events.
Accuracy is a specific kind of correctness that has to do with quantitative outcomes rather than qualitative ones. For example, if the outcome is a control signal that maintains an output proportional to an input (within an error range), we can run test cases to ensure that the specification actually achieves that. We can both execute the transformational logic in the requirements and formally (mathematically) analyze it as well, if desired.
Be aware that this use case model is not the implementation. Even if the system use case model is functionally correct and executes properly, it is not operating on the desired delivery platform (hardware) and has not been optimized for cost, performance, reliability, safety, security, and other kinds of quality of service constraints. In fact, it has not been designed at all. All we’ve done is clearly and unambiguously state what a correctly designed system must do. This kind of model is known as a specification model and does not model the design.
Validating the Requirements
Validation refers to confirming that the system meets the needs of the customer. Systems that meet the requirements may fail to provide value to the customer because:
the customer specified the wrong thing,
the requirements missed some aspect of correctness, completeness, accuracy or consistency,
the requirements were captured incorrectly or ambiguously,
the requirements, though correctly stated, were misunderstood.
The nice thing about the executable requirements model is that you can demonstrate what you’ve specified to the customer, not as a pile of dead trees to be read over a period of weeks but instead as a representation that supports exploration, experimentation, and confirmation. You may have stated what will happen if the physician flips this switch, turns that knob, and then pushes that button, but what if the physician pushes the button first? What has been specified in that case? In a traditional requirements document, you’d have to search page by page looking for some indication as to what you specified would happen. With an executable requirements specification, you can simply say “I don’t know. Let’s try it and find out.” This means that the executable specification supports early validation of the requirements, so that you can have much higher confidence that the customer will be satisfied with the resulting product.
So does it really work?
I’ve consulted on hundreds of projects, almost all of which were in the “systems” space, such as aircraft, spacecraft, medical systems, telecommunications equipment, automobiles and the like. I’ve used this approach extensively with the Rational Rhapsody toolset for modeling and (usually) DOORS for managing the textual requirements. My personal experience is that it results in far higher quality in terms of requirements and a shorter development time with less rework and happier end customers. By way of a public example, I was involved in the development of the Eaton Hybrid Drivetrain project. We did this kind of use case functional analysis, constructing executable use cases, and it identified many key requirements problems before they were discovered by downstream engineering. The resulting requirements specification was far more complete and correct after this work was done than in previous projects, meaning that the development team spent less time overall.
Summary
Building a set of requirements is both daunting and necessary. It is necessary because without it, projects will take longer – sometimes far longer – and cost more. Requirements defects are acknowledged to be the most expensive kind of defects because they are typically discovered late (when you’re supposed to be done) and require significant work to be thrown away and redone. It is a daunting task because text – while expressive – is ambiguous and vague, and it is difficult to demonstrate its quality. However, by building executable requirements models, the quality of the requirements can be greatly improved at minimal cost and effort.
On Thursday 14 March I presented an IBM-sponsored Dr. Dobb’s webcast on the topic of ‘3 Reasons to Throw Away Your Requirements Documents’. If you didn’t catch the webcast, you can view it on-demand and download the slides. If you did attend, I hope you enjoyed it. In this blog post I want to answer some of the questions that I didn’t get time to cover in the webcast, so scroll down to see if I’ve covered your question here. But first, for those of you who didn’t make it, here’s a quick recap of the points I made during the webcast:
What do I mean by ‘Throw Away Your Requirements Documents’? I’m not saying do away with your requirements or your requirements management process. What I am saying is: invest in improving your requirements management process and support those process improvements with the right tooling. In the webcast audience poll, 60% said they are using documents and/or spreadsheets for requirements, so I set out to make the case for why they should consider moving to a requirements management tool.
Reason #1 – Collaboration. Using documents and spreadsheets, you can get into all sorts of problems like working from the wrong version of requirements. An integrated requirements management tool (one that has a collaborative repository that has open interfaces to share requirements data with design, test and development environments) helps you ensure that all engineering / development team members and their stakeholders are working from the right version and view of requirements for their role. I shared information from a case study on MBDA Missile Systems where collaboration was a major challenge.
Reason #2 – Traceability. Traceability helps you get context and an audit trail for decisions made, perform coverage analysis, detect gold plating and perform impact analysis (See blog posts ‘What is Traceability?’ and ‘The uses and value of traceability’ for more explanation). But using documents and spreadsheets to create and manage traceability gives it a bad name – it becomes a tedious, error-prone overhead activity. With an integrated requirements management tool traceability links can be easily created and navigated. Traceability becomes part of the process rather than a bulk catch-up exercise, and through open interfaces it becomes easy to extend traceability from requirements to artifacts in other tools such as designs, work items and test cases. And most importantly there are automated views and reports that enable you to make active use of traceability for the purposes I mentioned above. I shared information from a case study on Invensys Rail Dimetronic where traceability is essential.
Reason #3 – Agility. As many engineering and development organizations look to improve time to market and reduce costs, agile approaches are becoming increasingly popular (in the webcast audience poll, 56% said they were using some sort of hybrid waterfall/iterative approach and 20% were agile), and with that come changes to the way requirements might have been traditionally defined and, most importantly, to the way that changes to requirements are managed. Change is allowed and encouraged, but you must have reviews and impact analysis to make informed decisions about whether to accept the change and implement it in the current iteration or sprint, plan it into a future iteration, or reject the change altogether. Doing that requires more than just a backlog managed in a spreadsheet or an agile planning tool. At IBM Rational we keep a prioritized backlog of user stories and epics as work items in plans managed in Rational Team Concert. User stories and epics are then decomposed and more fully described by requirements and supporting visual and textual artifacts created and managed in Rational DOORS Next Generation.
And I started and concluded by looking at the most compelling reason for improving your requirements management process with the support of the right tooling: Return on Investment. I shared information from a case study on Emerging Health, Montefiore IT, where they achieved, among other benefits, a 69% reduction in the cost of quality (test preparation, testing and rework) within 6 months of deploying an improved requirements management process supported by IBM Rational DOORS. There’s also a follow-up video to this case study that was recorded after the client had adopted agile development practices.
OK, now onto some of the questions that were submitted but that I didn’t have time to answer during the webcast. I’ve divided them into sections based on the type of question, so you can easily scan down to the topics you’re interested in:
Alternative approaches to managing requirements
Q. What's the advantage of a requirements management tool if I could link my change management tickets to fine grained requirements (use cases, storyboards) that are maintained on a wiki or other collaboration tool (e.g. Sharepoint or IBM Connections)? It would seem that the work items would give you the needed traceability and metrics?
A. While you might be able to create the level of traceability you require, how would you report on it? Would you have to build bespoke views and reports? And would the use of a wiki for ‘fine grained requirements’ provide you with a view of requirements in context with one another, as opposed to individual wiki pages? Where would you document additional properties of requirements? Could you easily reuse requirements across projects? A requirements management tool like IBM Rational DOORS Next Generation provides built-in traceability views and reports, enables you to structure requirements in context with one another in document-like views, provides user-defined properties for recording additional information and facilitates reuse of requirements.
Requirements Management and Agile Development
Q. We are mostly using the waterfall model, but are looking into whether Agile works for our customers. I am very interested in the role of a Requirements Analyst in the Agile world. In our world, the analyst is a facilitator of requirements gathering and not an SME on the business application we are gathering requirements for.
A. That facilitation role and the analysis skills of the Requirements or Business Analyst are still essential in Agile development. A common mistake when moving to more agile approaches is appointing a ‘Product Owner’, who is a subject matter expert in the business domain you are building an application or product for, and expecting them to write the requirements or ‘user stories’. While they are experts in the business and you need that expertise on hand for Agile to work, they are not usually skilled in getting to the root goals or needs of the business problems you’re looking to address with the product or application. Without the skills of the Requirements or Business Analyst, it’s all too easy for the user stories to become about automating the way things are done today rather than addressing the real business issues in an optimal way. You can read more about the role of the analyst in Agile development in an interview with Mary Gorman of EBG Consulting.
Q. How does the IBM requirements toolset compare to more "agile" focused toolsets like Atlassian, Rally, etc.?
A. IBM is very committed to supporting agile development, particularly when agile is scaled to large, distributed development teams. At the heart of our agile development capabilities is Rational Team Concert which supports agile planning, task management and change management. As stated in the webcast though, we’ve found in our own continuous delivery process and heard from clients, that a place is needed to capture more details of requirements and their associated properties, in context with one another, together with supporting artifacts like storyboards, workflow scenarios and use cases. The integration of Rational Team Concert with Rational DOORS Next Generation provides those additional capabilities while preserving traceability to user stories and epics managed in the product backlog.
IBM Requirements Management solutions
Q. It seems this discussion is on Rational DOORS. Is Rational RequisitePro still offered? If so, how do these two products compare?
A. IBM continues to support, maintain and respond to enhancement requests for RequisitePro, but our future direction for requirements management for IT application development lies with Rational Requirements Composer, and we provide migration support and a trade-up program. Please contact your IBM representative for more details. If you are using RequisitePro for requirements management for complex products or embedded systems development, then you might also want to look at whether Rational DOORS or Rational DOORS Next Generation is the right move forward for your organization. But don’t worry – we’re not forcing you to migrate today.
Q. You've mentioned DOORS Next Generation in your presentation, what is that? Does it replace DOORS?
A. IBM Rational DOORS Next Generation is a requirements management application on a collaborative lifecycle management platform for systems and software engineering that provides requirements collaboration, planning, reuse and lifecycle traceability. DOORS Next Generation (DOORS NG) was introduced in 2012 to take advantage of the common, collaborative ‘Jazz’ platform shared by Rational Team Concert, Rational Quality Manager, Rational Design Manager and Rational Requirements Composer, and to extend the requirements management capabilities in Requirements Composer to meet the needs of product & systems development organizations developing complex and/or embedded systems. DOORS NG will enable IBM to introduce new capabilities faster than would have been possible with the existing DOORS product.

However, DOORS NG does not replace DOORS today. We have a very large install base of DOORS users working on programs that can last tens of years, and we will continue to support, maintain and enhance the DOORS 9.x series to meet the needs of those users. We encourage existing DOORS users to take a look at DOORS NG, to try it out on pilot projects, and to use the interoperability capabilities to exchange and/or link data with DOORS 9.x. Existing DOORS customers with active support & subscription are entitled to use DOORS NG without an additional purchase – you can use your existing DOORS license entitlements with either DOORS or DOORS NG, or a combination of the two. But we are not telling existing DOORS users to migrate live projects to DOORS NG today.

In its early releases, DOORS NG is attractive to organizations that don’t currently use a requirements management tool and are looking for a web-based solution that also offers common platform integration with change management, agile planning, test management and design management capabilities. You can download a trial and follow development plans for DOORS NG on Jazz.net.
Q. How does Rational DOORS Next Generation compare to Rational Requirements Composer?
A. DOORS Next Generation is based on Rational Requirements Composer, but extended to meet the requirements management needs of product & systems development organizations developing complex and/or embedded systems. In the first releases of DOORS NG, the web client is identical to Requirements Composer, but DOORS NG also features a rich desktop client designed for efficient editing of large requirements specifications. The strategy for the two products is that while Requirements Composer will be focused on the needs of business analysis and IT application development teams, DOORS NG will be focused on the needs of systems engineers and product & systems development teams.
I’d like to wrap up this post by thanking all of you who attended the webcast, participated in the polls, asked some great questions and completed the survey feedback. If you missed it, you can catch a replay or download the slides. I’d welcome any additional comments or questions here.
Part 4 of this series ‘What is Traceability’ looked at the definition of requirements traceability and different types of traceability relationships. In this part, let’s look at what traceability can be used for and where it delivers value to application, system or product development.
Why is traceability necessary and important? Isn’t traceability just an overhead, an onerous documentation task that’s only done in industries where it’s mandated? In my view it’s true that requirements traceability practices originated in industries like aerospace & defense, where one use of traceability is to show that contractual requirements have been addressed, but traceability has much more value to offer when it’s used effectively and you have the right tools to maintain it and report on it. The following lists some of the ways that traceability delivers value:
Context: If you can trace back from a design or test to a user requirement, you have the reason for the existence of that design or test. Through the information in the user requirement (and through any related requirements you can trace to), you also have more supporting context to help create the optimal design to meet the requirement, or the most effective test to verify that the requirement is met.
Audit trail & compliance: Take the example of a new person joining a project: traceability can help them navigate the project and see why particular requirements, designs, tests, etc. exist. This is also of utmost value and importance when you need to demonstrate compliance with a regulation or standard to an auditor – the traceability trail can help you quickly show you are addressing the regulation or standard.
Coverage: How do you know whether you’ve covered all the user requirements in the derived system requirements, designs, tests, etc.? Traceability can help you here – gaps are indicated wherever a higher-level artifact has no relationships to lower-level artifacts. Of course, even if a relationship exists you still need to follow it and examine the lower-level artifact to ensure it does what the traceability relationship says it should, but at least you had a signpost to direct you to the right place to look.
Gold plating: As well as highlighting coverage gaps, traceability can help you investigate possible ‘gold plating’ or over-engineering. If you have a lower level element without any relationships to a higher level then you can ask the question – why does this exist if it’s not apparently satisfying a requirement? There might be a perfectly valid reason but this gives the opportunity to identify and eliminate anything that could bring unnecessary additional time and cost to the project.
Impact analysis: I think this is the most valuable reason for traceability. If you have traceability relationships in place, you can follow those relationships when, say, a user requirement changes, to identify all the related system/sub-system/component requirements, design elements, tests, work items, etc. that are potentially impacted by the change. This enables you to fully scope out the impact of the change before it’s made (if you decide to proceed), giving you far more control over the cost and time impact of change requests. This can of course also work in reverse – if a design change is necessary, say because the original design proves infeasible, you can more easily see what impact that has, if any, on your ability to still meet the requirements. Both of these scenarios have great project management benefits – you can have informed discussions in the development team and with your customer/stakeholders about whether to make the change. (The sketch after this list illustrates the coverage, gold-plating and impact checks.)
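To make these uses concrete, here is a minimal Python sketch of the coverage, gold-plating and impact checks described above. It isn’t tied to any particular tool; the artifact identifiers and links are invented purely for illustration.

```python
from collections import defaultdict, deque

# Illustrative link set: (source, target) meaning "source satisfies/verifies target".
links = [
    ("SYS-001", "USER-001"),
    ("SYS-002", "USER-001"),
    ("TEST-001", "SYS-001"),
]
user_reqs = {"USER-001", "USER-002"}
sys_reqs = {"SYS-001", "SYS-002", "SYS-003"}

# Coverage: higher-level artifacts with no incoming links from below.
covered = {target for _, target in links}
print("Uncovered user requirements:", user_reqs - covered)   # {'USER-002'}

# Gold plating: lower-level artifacts that trace to nothing above.
tracing = {source for source, _ in links}
print("Possible gold plating:", sys_reqs - tracing)          # {'SYS-003'}

# Impact analysis: walk the link graph in both directions from a changed artifact.
neighbours = defaultdict(set)
for source, target in links:
    neighbours[source].add(target)
    neighbours[target].add(source)

def impacted(changed):
    """Collect everything reachable from 'changed' via traceability links."""
    seen, queue = {changed}, deque([changed])
    while queue:
        for nxt in neighbours[queue.popleft()] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen - {changed}

print("Potentially impacted by a change to USER-001:", impacted("USER-001"))
```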
What’s that I hear again – what about agile? Aren’t traceability and its proposed benefits only of value in waterfall development? Well, take another look at that list of benefit scenarios – don’t you always want to be able to do these things, regardless of development methodology? If you’re in a fast-changing, evolving project, don’t you need to be better informed, with the right information at your fingertips, so that you can respond quickly but effectively? I’d argue that if you have the right tools in place to create and utilize traceability, it is even more essential when you’re adopting agile practices.
Speaking of the right tools, tools can automate the various types of traceability reporting and analysis I described above. For impact analysis, tools can quickly display all of the related artifacts connected to a particular requirement. More than that, they can detect changes in a requirement that has links and automatically mark those links as suspect. This is very effective where you have different people working on different parts of the application/system and you need to be aware of changes in other areas that might impact your work. A suspect link indicates that a change has been made to the artifact at either end of the link and that you should check that the link still holds true – i.e. if the user requirement has changed, does the linked system requirement still satisfy it? This proactive impact notification mechanism helps to avoid inconsistencies across your specifications.
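As a sketch of the suspect-link idea (the timestamps and record structure here are my own simplification, not any tool’s actual mechanism): a link remembers when it was last confirmed, and becomes suspect as soon as either endpoint changes after that point.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    identifier: str
    last_modified: int        # e.g. a revision number or epoch timestamp

@dataclass
class Link:
    source: Artifact
    target: Artifact
    last_confirmed: int       # when a human last checked the link still holds

    @property
    def suspect(self) -> bool:
        # A change to either end after the last confirmation makes the link suspect.
        return max(self.source.last_modified,
                   self.target.last_modified) > self.last_confirmed

user_req = Artifact("USER-001", last_modified=5)
sys_req = Artifact("SYS-001", last_modified=3)
link = Link(source=sys_req, target=user_req, last_confirmed=4)
print(link.suspect)   # True: USER-001 changed at revision 5, after confirmation at 4
```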
So you have my views on the uses and value of traceability, but what do you think? Please use the comment function to leave feedback and additional ideas.
I’ll leave you with a couple of links that have additional discussion of the value of traceability and in particular how much traceability is enough:
What is traceability? Or more specifically what is requirements traceability? Well rather than repeat what is already a good collection of definitions, I’ll refer you to http://en.wikipedia.org/wiki/Requirements_traceability. From there I’d summarize three elements to requirements traceability:
Following the life of a requirement – from idea to implementation
How requirements impact each other, and how requirements impact other development lifecycle artifacts (such as designs, tests, tasks, source code, hardware specs, etc.) and vice versa.
The decomposition of requirements – from high level user/customer/market needs to system, sub-system, software or hardware component requirements; and transformation into design specifications and the implementation realization of the requirement.
Traceability in this context is about relationships between requirements at the same or different levels of detail, and between requirements and other lifecycle artifacts as listed above. It also extends to relationships beyond those directly involving requirements – e.g. the relationship of a defect report to a test case – and this is referred to as ‘lifecycle traceability’. Traceability relationships can be of multiple types, for example:
Satisfaction: a system requirement (or more likely a number of system requirements) ‘satisfies’ a user requirement e.g. system requirement ‘The engine shall have at least 200bhp’ satisfies user requirement ‘The car shall be capable of accelerating from 0-60mph in under 8 seconds’.
Verification: a test case ‘verifies’ a requirement e.g. test case ‘0-60mph acceleration test’ (consisting of a number of test steps) verifies user requirement ‘The car shall be capable of accelerating from 0-60mph in under 8 seconds’.
Dependency (often used where interfaces are concerned): a requirement ‘depends’ on another requirement e.g. requirement ‘the power socket shall take 3 pins’ depends on requirement ‘the plug shall have 3 pins’.
Basic traceability establishes a relationship or link between one or more elements. Typed traceability adds the relationship type with its associated semantics (examples above). Rich traceability (ref: Requirements Engineering, Hull, Jackson & Dick, Springer, 2004) adds further information to the traceability relationship, such as the rationale explaining why a group of system requirements satisfies a particular user requirement, or – since you often can’t be 100% certain about specification or design decisions – any assumptions you made in deriving a set of system requirements from a user requirement. The rich traceability approach is particularly valuable in heavily regulated industries and safety-critical systems, where audit trails of the decisions made are vitally important to provide assurance and reduce risk.
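The progression from basic to typed to rich traceability can be pictured as successively richer link records. The sketch below uses Python and invented field names purely for illustration; real tools each define their own schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TraceLink:
    source: str                        # basic traceability: just two endpoints
    target: str
    link_type: str = "trace"           # typed: adds semantics (satisfies, verifies, ...)
    rationale: Optional[str] = None    # rich: why the relationship holds
    assumptions: List[str] = field(default_factory=list)  # rich: what was assumed

link = TraceLink(
    source="SYS-014",    # 'The engine shall have at least 200bhp'
    target="USER-003",   # 'The car shall accelerate from 0-60mph in under 8 seconds'
    link_type="satisfies",
    rationale="Power-to-weight calculation shows 200bhp meets the 0-60mph target",
    assumptions=["Kerb weight does not exceed 1,400kg"],
)
print(f"{link.source} --{link.link_type}--> {link.target}")
```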
Once traceability has been established, there are multiple ways in which it can be viewed and reported on. Perhaps the oldest and most commonly recognized method is the traceability matrix, where you can see the intersection between two sets of requirements and a check or cross shows where a link exists. This method doesn’t scale particularly well, since the matrix can become very large. It’s also sometimes used for creating the links, but it’s not ideal for that either, since you can typically only see a small amount of information about the requirements.
Another way to see traceability is to pick a starting point, e.g. the user requirements and display the related systems requirements alongside the user requirement they are linked to, in a traceability column. You can typically choose how much detail of the linked requirement is displayed, and you can even make it recursive, going down as many levels of requirements as you need/is practical to manage in a single view.
Graphical displays are great for getting a bigger-picture view of traceability rather than immediately focusing in on the details of a particular relationship. You can explore the traceability tree, zooming in/out or collapsing/expanding parts of the tree, or changing the focus (starting point) of the tree.
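For a feel of how the matrix view is derived from the underlying links, here is a minimal sketch that prints a traceability matrix from a set of links. The data is invented, and a real tool would of course render this far more usefully.

```python
user_reqs = ["USER-001", "USER-002", "USER-003"]
sys_reqs = ["SYS-001", "SYS-002", "SYS-003"]
links = {("SYS-001", "USER-001"), ("SYS-002", "USER-001"), ("SYS-003", "USER-003")}

# Print a matrix with an X at each intersection where a link exists.
print(" " * 10 + "".join(f"{u:>10}" for u in user_reqs))
for s in sys_reqs:
    row = "".join(f"{'X' if (s, u) in links else '.':>10}" for u in user_reqs)
    print(f"{s:<10}" + row)
```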
But what about in agile development, I hear you cry? Well, that could be another topic in its own right – watch this space – but relationships still exist between the typical artifacts created in agile approaches (such as between product features and user stories), and I argue that as long as traceability is created ‘as you go’ and automated by tools as much as is practical, it’s even more essential for staying informed when changes are happening rapidly and for ensuring you are looking at the correct versions of related artifacts.
In a follow-on post in this Requirements 101 series, I’ll take a look at what traceability can be used for – highlighting where its application can bring significant value to your projects. But for now I’ll leave you with a few resources below that I’d recommend you take a look at, and ask you to let me know if you think this post was useful (or not!) and provide any feedback or additional information using the comment function.
As we reach the close of a year and move into a new one, it's often time to take stock of what we've been doing and to make plans for the New Year. We look at what we've been doing right and what we could change and improve. So, since this is a requirements management blog, I thought it would be worth posing the question, and giving an opinion, on whether requirements management (in the domain of systems and product development - my focus area) is still relevant today and as we move into 2013.

I'm not really sure when requirements management as a formally recognized discipline can be said to have come into being, but I do believe that it really started to take shape in the early 90's, primarily based on work coming out of the aerospace industry, and that's when commercial specialized tools for requirements management, such as IBM Rational DOORS (then known simply as DOORS, from a company called QSS), first emerged. In 2005, I was leading a team working on a campaign to promote the value of requirements management to a wider audience than a core set of requirements specialists. We declared 2005 the 'Year of Requirements Management' because of its increased recognition as a discipline and the emergence of greater tool capabilities for making requirements more easily accessible to a wider set of stakeholders.

So as we move towards 2013, is requirements management still as relevant? Do we still have further to go in becoming more effective at it? In a recent Aberdeen Group report, ‘Managing Systems Design Complexity: 3 Tips to Save Time’ by Michelle Boucher, which reports on a survey of the effectiveness of the systems engineering capabilities of system and product development organizations, two of the three key recommendations are directly related to requirements management, in the areas of visual requirements definition and requirements traceability. In the other recommendation, on improving change management across engineering disciplines, Michelle says that impact analysis is core to such improvement, and that’s enabled by requirements management and traceability. From the study, a clear link can be drawn between more effective requirements management and traceability and business benefits such as reduced cycle times, improved quality and increased product revenues. I also recently heard from another analyst that one of the key challenges they are hearing about from product development organizations is getting a better handle on the interrelationships between requirements across engineering disciplines, so they can respond more effectively to changes.

So my answer to the question I posed, of whether requirements management is still relevant, is a resounding YES! We’ve made significant progress, but the complexity of the systems we build has also increased, and we need to keep pace with changes in practices and technologies. So I expect effective requirements management to remain a cornerstone of successful product development, and practices and supporting tooling to continue to evolve.

But what do you think? Will requirements management be as important in the future? How will it/should it change?
Last week the UK chapter of INCOSE (International Council on Systems Engineering) held their annual systems engineering conference on the Warwick University campus. I'd like to share some of what I heard during the conference, both on systems engineering in general, and more specifically on requirements management practices in the systems engineering domain.
One of the keynote speakers was Dr Sandy Wilson, President & Managing Director, General Dynamics UK. Dr Wilson spoke about the key challenges in the defense industry - the rate of change in threats and technology, and the need to lower costs. He challenged the V model - it's a nice diagram, he said, but its linearity is an issue: the world is not linear or rigid, but the SE V diagram is. He spoke about the need for the defense industry to become more agile, but noted that today change is cumbersome due to contractual issues and governance constraints. There are two main types of defense procurement in the UK - the longer-term needs are met by EPs (Equipment Programmes) and the urgent tactical needs by UORs (Urgent Operational Requirements). The former is bogged down in top-level scrutiny and check boxes; the latter is helped by the top-level sense of urgency and support. An example of a UOR was the decision to implement the multinational no-fly zone over Libya. Dr Wilson proposed that all defense projects should become more like UORs - more agile. He said that "an 80% solution delivered 1 year earlier is better than 90% delivered 4 years late". I heard that delivering incremental capability needs asset management and tracking, configuration management and a more agile approach to systems engineering - valuing "Product over Process". As well as changes in the way companies deliver capabilities, a change is needed in the way customers (governments) do their acquisition and contracts in order to enable more agility.
Dr Jeremy Dick of Integrate Systems Engineering and co-author of the book 'Requirements Engineering' presented a case study in the aerospace industry on developing the assurance case for a (safety) critical system in parallel with requirements analysis, design, and verification & validation, using an extension of his technique for documenting the rationale for traceability relationships, known as 'rich traceability'. In addition to developing a requirements 'flow-down' (through levels of requirements to design), the 'evidence' supporting the flow-down is documented. In the early stages, the evidence can be how you expect the lower-level requirements or design elements to satisfy the higher level, together with evidence to suggest that your argument is sound. In parallel, your verification & validation strategies should be evolved, including an argument and supporting evidence for how the test(s) will prove the requirement(s) is/are met. Jeremy was asked how the textual requirements, arguments and evidence would fit with an MBSE (Model-Based Systems Engineering) approach. Jeremy answered that he favours the sandwich model, a concept he in fact came up with (ref: "The Systems Engineering Sandwich: Combining Requirements, Models and Design", Jeremy Dick, Jonathon Chard, INCOSE International Symposium, Toulouse, July 2004) - interleaved layers of requirements and modeling used to decompose a system specification and design (you can read more on that concept in the post 'Food for thought: The Systems Engineering Club Sandwich').
Chris Rolison, CEO, Comply Serve, continued the theme of progressive assurance with a focus on the rail industry. Chris highlighted the complexity challenges in major rail infrastructure projects, and the issues presented by paper-based systems, silos in organization structures, and the supply chain. Chris said that "up to 80% of the engineering requirements can change during design & build" - not because the customer changes their mind, but because of all the external factors involved in building a rail system. Chris went on to describe a more collaborative, requirements-driven design approach where systems engineering principles are applied, supported by a collaborative platform (ComplyPro, which is based on IBM Rational DOORS).
Alastair Mavin of Rolls Royce 'lent' us his EARS (Easy Approach to Requirements Syntax) approach (the link is to an IEEE publication - sign-in required), an application of a template with an underlying rule set for describing requirements in natural language but in a more structured, consistent way. He described the latest version of the template, EARS+ (or, as he nicknamed it, 'Big EARS'!), and the benefits of the approach - simplicity and structure combined.
I could go on for pages about all of the great content shared at this excellent event but I'll leave it there with the main requirements related topics, except to quote from the keynote speaker on day 2: "The core of Systems Engineering is defining requirements and delivering against them". I'd put it this way - you can't have successful systems engineering without effective requirements management.
Neal Middlemore has over 14 years of experience in requirements management, encompassing the associated disciplines of requirements change management and validation/verification. Neal comes from an avionics systems engineering background and has been working with the DOORS product for over 10 years.
One of the most fundamental benefits that businesses want to get from using requirements management tools is consistent traceability. It doesn’t matter whether it is an IT system or an aircraft carrier being developed: the levels of complexity involved mean that traceability across multiple levels of requirements, from stakeholder requests to detailed implementation, is not simple to maintain manually.
Further hurdles are put in our way by the need to comply with legislative requirements: many different industries these days have requirements placed upon them by government and international standards bodies, along with internal corporate standards.
To prove compliance with these legislative rules, projects must not only prove that requirements are being managed but also provide the how and where of their management. Which features relate to which stakeholder requests? What aspects of the solution satisfy safety regulations? Has the realization of the requirement been tested, and by whom, on what platform?
To answer these questions it is necessary to utilize the traceability capabilities provided by modern tools. But it’s not enough to let a tool decide how things relate to each other, nor to let individual users decide these relationships on their own. Traceability needs to be considered in the larger context of how you report on it to answer the very questions that led you to consider traceability in the first place. It’s more than ‘does object A relate to object B?’.
An initiative to define the information architecture of your governance solution will assist in defining your process artifacts and their relationships to one another; traceability then becomes an asset rather than an overhead. A good way to begin such an initiative is a workshop attended by all project stakeholders, i.e. those who have a vested interest in ensuring project success. Most likely these attendees will have a good understanding of the process and what the project needs to deliver. Undoubtedly each will also bring differing perspectives and understanding of what information is required. The test manager may want to see traceability from individual test artifacts through to the requirements being tested, or to determine whether regulatory compliance has been achieved and whether verification methods have been agreed.
A subject matter expert (SME) may want to see the design in the context of the system-level requirements and how it relates to stakeholder requirements. Everybody wants to see something that is applicable to their job role, and may even wish to see things outside their typical discipline domain. By asking the questions and documenting the answers, you can start to put together an information architecture that makes sense for your project. It’s likely that many information architectures will exist: not every project is the same, and different projects will have differing sets of required attributes, views and reports of project information, and different agreed structures of inter-artifact relationships.
Modern tools can often create templates for this kind of information, allowing the deployment of additional artifact containers as and when required. These can even enforce traceability to ensure integrity across the project. Above all, it is vital that the needs of the project are documented and communicated, alongside the information architecture, to all project members.
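One lightweight way to capture the outcome of such a workshop is to record the agreed information architecture as data that tooling (or a script) can enforce. The sketch below is a minimal illustration with invented artifact and relationship names, not a description of any specific tool’s template mechanism.

```python
# The agreed information architecture: which relationships are allowed
# between which artifact types, and what each relationship means.
ALLOWED_LINKS = {
    ("system_requirement", "stakeholder_request"): "satisfies",
    ("design_element", "system_requirement"): "realizes",
    ("test_case", "system_requirement"): "verifies",
}

def check_link(source_type: str, target_type: str) -> str:
    """Return the agreed link type, or reject links outside the model."""
    try:
        return ALLOWED_LINKS[(source_type, target_type)]
    except KeyError:
        raise ValueError(
            f"No agreed relationship from {source_type} to {target_type}; "
            "extend the information architecture before creating this link."
        )

print(check_link("test_case", "system_requirement"))   # verifies
```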
Last week I was really fortunate to spend a couple of days in London presenting to and talking to clients, business partners and industry analysts. It's always so good to hear what's really going on out there and to get many different perspectives on what's important today and for the future. The first day was at IBM's Innovate UK 2012 event where I was fortunate to be asked to present on all the really exciting new stuff we've done in the past year to help organizations building today's and the next generation of smarter products and systems, with particular focus on providing solutions for systems engineers and embedded software developers. You can catch the absolute latest news on our recent launch webpages. That session included a whistle-stop tour of the developments in requirements management for complex systems with Rational DOORS 9.4 and our plans for DOORS Next Generation. Whistle-stop because we also had so much news to get through in architecture & design, planning, change & configuration management and quality management, as well as industry specific solutions for A&D, automotive, medical devices and electronic design. And because on the following day at IBM's Southbank facility we had a whole day dedicated to topics related to DOORS.
At the DOORS customer day we had attendees from across several industry sectors including transportation, aerospace & defense, banking & mail services. The day kicked off with Morgan Brown presenting the latest on IBM's requirements management and DOORS strategy. Morgan told us how the DOORS 9.x series is and will continue to be developed and enhanced to meet the needs of the large install base, in parallel with the introduction of DOORS Next Generation (DOORS NG). DOORS NG is planned to take the best paradigms for managing structured requirements from DOORS 9.x and marry those with the requirements management and team collaboration capabilities that have been developed on the Jazz collaborative lifecycle management platform over the last 4 or so years (and are in use in the form of Rational Requirements Composer). The development of DOORS NG is out in the open on jazz.net where milestone builds can be downloaded, discussions held, defects/enhancements raised and plans viewed. DOORS NG has gone through four beta releases and is expected to be released in late November. Morgan explained that in its first release, DOORS NG is not intended to replace the DOORS 9.x product line, but it is expected that existing DOORS customers will try out DOORS NG on pilot and new projects, and will use the interoperability capabilities of the ReqIF data exchange and cross-tool traceability linking to exchange and/or link data between DOORS 9.x and DOORS NG. DOORS NG will also appeal to those looking for a requirements management tool that is on an integrated platform with design, test management and task/change management capabilities. Morgan reminded the audience of an IBM statement released earlier this year that existing DOORS customers with active support & subscription would be entitled to use both DOORS 9 and next generation capabilities when they become available. This was well received by the attendees since it means that they can try out DOORS NG when it ships without the need for an additional purchase.
Of course a day of technology insights never goes past without some piece of tech throwing an unexpected spanner in the works. This time it was the projector and the next presenter's Apple Mac that refused to talk to each other, so instead of a flow into a demo of DOORS NG, next up was Neal Middlemore to tell us about the improved integration of requirements and quality management with DOORS 9.4 and Rational Quality Manager (RQM) 4.0. This release was a significant enhancement that brings the integration in line with IBM's strategy to support OSLC - Open Services for Lifecycle Collaboration. OSLC is a new approach to tool integration that is open and vendor neutral. What's really different about OSLC is that data no longer needs to be copied or synchronized between tools in order to create cross-tool or cross-discipline visibility or relationships. So now quality professionals working in RQM can see requirements in DOORS and create links between test cases (and now because some organizations require it, test steps) and the requirements they are validating; and requirements professionals in DOORS can see linked test case information including test results, without the need for either to leave the comfort of their familiar tool or for data to be copied between the two tools. Neal demonstrated the value of the integration to requirements & quality professionals and showed how RQM can be used to manage manual testing or hook up with a number of IBM and partner solutions for various forms of test automation. You can also see a demo of the DOORS - RQM integration on YouTube.
So, technical issue solved, it was back to Jon Walton to give a demo of DOORS Next Generation using the Beta 4 release. Jon spent most of his time in the web client, highlighting the support for key DOORS paradigms such as hierarchical structured requirements documents, and showed off the plethora of new capabilities provided by the Jazz platform, such as database-wide requirement reuse, a graphical traceability view, requirements definition techniques (use case diagrams, storyboards), cross-discipline dashboards (containing requirements project info mashed up with info from design, quality and task management) and task management. Jon also showed the desktop client of DOORS NG, which is very familiar-looking to existing DOORS users, with some twists (reuse of requirements across documents, for one) - the desktop client will primarily be for users who need to do extensive editing of large requirements documents. If you're currently using DOORS 9.x, this YouTube video gives a quick preview of DOORS NG and how it's both similar to and different from DOORS 9.x. Watch this space for more to come on DOORS NG later this month.
Back to the earlier lifecycle integration theme started by Neal, next to present was Steve Rooks on how to use DOORS with IBM's solution for model-based systems engineering and model-driven embedded software development, Rational Rhapsody, to link requirements and design activities. Rational Rhapsody enables elaboration of requirements and construction of systems and software architectures using SysML and UML. Rhapsody Design Manager provides an additional level of design collaboration capabilities. Models can be published to and/or stored and managed in a central repository, making them more easily accessible to a wider set of stakeholders so that designs can be better communicated and understood by all those involved in specifying, designing, building and validating a product or system. Rhapsody Design Manager uses OSLC to facilitate linking of design elements to other lifecycle artifacts - requirements, test cases, work items, etc. As with the DOORS-RQM scenario described above, a systems engineer or software architect working in Rhapsody can see requirements in DOORS and easily create links between requirements and design model elements. Requirements and requirement links can even be included in model diagrams. And of course a DOORS user can see links to design elements without leaving DOORS, or can navigate into Rhapsody Design Manager to participate in design reviews. You can read more about linking requirements and design and the DOORS-Rhapsody Design Manager integration in my recent post 'The House That Paul Built', which talks about a recent webcast on the topic.
After lunch, an IBM business partner Kovair was invited to present on how their Kovair Omnibus solution provides bridges, synchronization and workflow support across multiple tools from multiple vendors. It's a common situation to find yourself trying to enact processes and workflows when you have a diverse set of tools. Kovair talked about their support for OSLC to be able to widen the number of tools they can help link together, but also highlighted scenarios where you would still want to copy or transform data between tools - it's not a choice of Link or Sync, it's Link and Sync as appropriate.
The next session was presented by Paul Fechtelkotter, market manager for energy & utilities at IBM Rational. Paul gave a really interesting presentation on the challenges of complex systems development for nuclear power plants and how the nuclear industry is now adopting systems engineering best practices starting with requirements management to enable them to get better change management, traceability, impact analysis and compliance support. You can learn more about how IBM Rational is helping the nuclear industry on our dedicated web page.
Unfortunately I had to leave after Paul's session and didn't catch the remainder of the afternoon, but as you can see it was a day packed full of information. I hope you find my summary and links for more information useful. If you have any questions or comments on any of the topics I've covered here or indeed anything on IBM's requirements management strategy, Rational DOORS and the lifecycle integrations, please don't be afraid to ask by using the blog comments facility.
You've bought the plot of land for your dream home. You have your list of requirements - 4 bedrooms, 3 bathrooms, spacious kitchen, 2 living rooms, 2 garages, landscaped gardens, etc. Would you be happy to simply hand that list to the builders and let them start work? Unlikely, I think. Typically, you call in an architect, who can take your quantitative requirements and qualitative desires and produce a blueprint, the architectural design that incorporates your wishes where feasible and adds creative flourish based on the architect's knowledge of house design. The blueprint enables you and the builders to have a much clearer picture of the desired end result than that original list of requirements. And it affords you the opportunity to influence the architecture, and for the builders to question and look at feasibility & cost options, before the foundations are dug and the first bricks are laid.
The same applies in product development. Systems engineers who are responsible for the holistic product specification and design don't just use textual requirements lists to capture the problem domain and describe the proposed solution. They analyze the requirements, identifying integrated scenarios, and often depict those using modeling languages such as UML or SysML. These modeled scenarios are easier to discuss and review with all stakeholders, and as the systems engineer evolves the proposed architecture (also in the same modeling language) they can run the scenarios against the architecture in model simulations to find inconsistencies or gaps in the requirements and flaws in the design, long before any software is coded, circuit boards are soldered or metal is welded.
So what value are our textual requirements lists - should we throw them away in favor of models? Well, not everything can be expressed in the model, and not everyone involved in a development effort may be using models. Going back to the house-building analogy, there are contracts and numerous standards and regulations to be adhered to, and there are simply details that would make the blueprints unreadable. The various contractors (and I know from recent experience that sub-contracting is the name of the game in house building these days!) involved in the building process need to ensure that they can meet the contractual and regulatory demands while delivering against the architecture. Again this is the same in product development, except that in many cases, particularly safety-critical systems, traceability and demonstration of conformance to requirements and compliance with standards & regulations are demanded. This requires the ability to integrate requirements and modeling workflows, easily link requirements and design elements, and report on that linked information.
The need and solutions for this capability are nothing new. Integrations between requirements management and modeling tools have existed for many years (I think I started using such an integration in the early 90's, and I'm sure they preceded that time). But I know from first-hand experience of using and indeed writing such integrations that they've not always been optimal in the way integration is performed and in the workflow that is supported. Typically it's meant synchronizing (i.e. copying) data between tools in order to create the traceability links in one of the tools. This brings up all sorts of issues like 'which tool is the master?', 'am I looking at the latest data?', 'what happens when information is deleted?', etc.
With Open Services for Lifecycle Collaboration (OSLC) we now have a much better way to link data across product development and operations tools, even when the tools may be from different vendors, open source or in-house. OSLC has learnt from the principles of the World Wide Web and enables tool data to be shared and linked where it resides (called a ‘Linked Data’ approach). OSLC provides a common vocabulary for ‘resources’ in particular domains, i.e. what a requirement, test case, design element, change request, work item, etc. looks like, so that regardless of tool, technology or vendor, tools implementing OSLC specifications can share and link data.
With Rational DOORS 9.4 and Rational Rhapsody 8.0 with Design Manager 4.0, IBM is utilizing OSLC to provide a simplified workflow for linking requirements analysis and design. On September 20, Paul Urban (if you've been wondering about this blog post title, now you know the Paul I'm speaking of), Market Manager for IBM Rational Rhapsody, presented this simplified workflow and its benefits on an IEEE Spectrum webcast sponsored by IBM. You can watch and listen to the replay at your own leisure here. I hope you enjoy it - please let Paul and me know what you think by leaving feedback on this blog post.
The importance of communication and collaboration in developing and managing good requirements was discussed in our earlier post on How to enable effective requirements communication and collaboration. In this guest blog post, Melissa Robinson, a Senior Technical Specialist at IBM, writes about how Rational DOORS addresses this aspect with Discussions.
Melissa started her career at Telelogic, enabling Product Management with technical support around requirements management. Melissa spent 3 years at Telelogic supporting clients getting started with requirements management. After IBM acquired Telelogic in 2008, Melissa transitioned roles to support clients with Enterprise Architecture initiatives. She received the Carnegie Mellon certification in Enterprise Architecture in 2008 and is TOGAF certified. Melissa now supports clients getting started with evaluating and implementing both requirements management and enterprise architecture solutions.
Note: Please click on the screenshots for a better view
Why did we make this decision? Who made this decision? Who approved this requirement?
These are some of the questions we can help answer with effective collaboration messaging in DOORS. Collaboration messaging is now enabled in DOORS and DOORS Web Access (DWA) with the addition of DOORS Discussions. Discussions allow users to contribute comments on requirement objects or requirement modules - even on baselined or read-only requirements - offering a way to hold a conversation on any requirement. Discussions can be created in DOORS or DWA and viewed in both. Both the DWA Editor and DWA Reviewer roles can contribute to Discussions. By capturing comments, Discussions let you later review ancillary information about your requirements, breaking down the communication barrier and allowing everyone to contribute to a full understanding of the requirements.
Here is a simple scenario for using DOORS Discussions. A DWA Reviewer user creates a Discussion on a requirement. A DOORS user then reviews this comment and contributes a comment on the requirement. The DWA user reviews the latest comment and closes the Discussion.
A DOORS user, Susan, reviews the current Discussion created by a DWA Reviewer user, Kavita. Susan can open the requirements module with a pre-created Discussion view to review the Discussions. Below, Susan reviews the Discussion on Requirement AMR-STK-66.
Susan can contribute a comment to the open Discussion.
Kavita reviews the new Discussion comment in DWA. Notice that Kavita is a Reviewer in DWA. As a Reviewer, she can create and add comments to Discussions. Kavita can also close Discussions that she started. Later, Kavita can contribute another comment to the open Discussion.
As the person who first opened the Discussion, Kavita can close this Discussion. Later, in DOORS, Susan can review the latest status of the Discussion using the Discussion Thread view. In the Database Manager role, Susan can choose to re-open the closed Discussion at any time.
Discussions open up the communication thread between several different types of DOORS users. Discussions allow requirements reviewers to exchange views and comments about the content of a requirements module or the content of a requirement object in a module.
We hope this post gave you a sneak preview of how DOORS Discussions help various stakeholders collaborate and communicate effectively during requirements management. Feel free to contact melissarobinson[at]us.ibm.com if you have any queries about the topic. Melissa will be discussing the topic in detail in an upcoming webcast on October 5, 2012. Don't miss the opportunity to watch the action live. Register now @ http://bit.ly/DOORS_Discussions
I was lucky enough last week to travel to the INCOSE (International Council on Systems Engineering) International Symposium 2012 near Rome, Italy. An excellent opportunity to meet the systems engineering community and hear about their interests and concerns. We had lots of traffic to the very stylish IBM booth, where we talked about the IBM Rational solutions for systems engineering and the latest from IBM Research on tool interoperability and design optimization & trade-off. I’d like to claim the traffic was due to my presence, but in fact there was lots of excitement and interest in the must-have giveaway of the conference, the IBM Limited Edition of the Systems Engineering for Dummies book (if you weren’t there and don’t have a copy, you can download a PDF version).

Being at the INCOSE event reminded me of the very active and interesting discussion I recently provoked on the INCOSE LinkedIn group by posting the link to my previous blog post ‘Traceability – How Much is Enough?’. It’s a great read, with some very provocative statements from those who question whether traceability is at all useful - even calling it the root cause of failure on projects that overrun and overspend - versus those who say it’s absolutely vital on safety-critical systems or where the project is contract-driven. In the end I think some consensus was reached between these two camps: ‘just enough’ traceability to keep a project on track, provide customer/market need context to engineers, facilitate impact analysis, and (if needed) meet industry standards and regulations, is sufficient. Any more is excessive and wasteful, and likely to bog down progress towards delivering innovative products and systems.

During a quiet time at the IBM booth, I also had the chance to chat with my colleague Brian Nolan (marketing manager for the aerospace & defense industry at IBM Rational) about effective traceability, since Brian is very interested in this topic and has presented on a Dr Dobbs webcast, ‘3 Ways to Improve Traceability and Impact Analysis’. Brian believes in what I would describe as ‘traceability by design’, meaning that traceability is automatically established while you decompose your system design (for example, use case to use case realization to sequence diagram and so on). This discussion also reminded me of what another colleague, Greg Gorman (program director for IBM systems and software engineering solutions and the INCOSE Corporate Advisory Board member from IBM), described several years ago as ‘link while you think’, meaning traceability is created by the tools while you are performing requirements decomposition, design and development, rather than as an overhead activity afterwards.

I think we’ve now moved some way beyond ‘link while you think’. An information model with ‘just enough’ traceability for your project needs is still essential to avoid traceability spiraling out of control. But with new approaches such as Linked Lifecycle Data from the OSLC (Open Services for Lifecycle Collaboration) community, and tools that recognize implicit traceability, provide new ways to visualize lifecycle traceability, and support effective impact analysis, we can make traceability work for us - helping engineering become more agile while staying within cost and schedule, and producing innovative, higher quality products and systems.
I'm writing from Innovate 2012 in Orlando, Florida, where thousands are attending sessions and sharing thoughts about software development and systems engineering. One topic that keeps coming up is traceability. On Sunday at VoiCE (Voice of the Customer Event), we had some great discussions with clients in the industrial sector, building complex and embedded systems such as planes, cars and medical devices, about the traceability scenarios they have. There was a lively discussion around how much traceability is enough. One client, who is working in aerospace, needs to comply with DO-178B and requires traceability all the way from a high-level customer requirement through to individual lines of code. Others asked 'do you really need that fine-grained traceability?' and 'won't that be very difficult to manage?' Another described having 26 teams and 16 applications to manage; in the past they had many (I think I heard 50!) locations where requirements were stored, usually in spreadsheets, making traceability very difficult. Now, with the 'right schema' in place and using IBM Rational Requirements Composer, they have a solution that makes traceability much easier, and an environment that is manageable for the long term as it scales. Having the right schema - the information model of artifacts and the relationships between them - was stressed as a vital ingredient in any recipe for successful traceability.
In a breakout session yesterday, data was shared that on a deep space exploration mission project there are over 80,000 items in the requirements database (IBM Rational DOORS) and over 40,000 links - a mind-blowing complexity of data and relationships, and that's on just one of many projects they have running today.
The right culture, process and tools for your application/system/product/service, organization and industry are necessary to prevent traceability - not only across requirements, but into designs, work items, tests and so on - from spiraling into an uncontrollable, unusable spaghetti of artifacts and links.
So for you and your projects, how much traceability is enough, how are you managing it and what would you like to see in the future to make the creation, maintenance and most importantly utilization of traceability easier to do and more effective?
Many articles, papers and books have talked about the impact of fixing a defect at later stages of the development life cycle. Research has also shown that the majority of software defects found in a project are related to requirements, outnumbering those arising from design or coding issues. According to studies, approximately 60%-70% of IT project failures result from poor requirements gathering, analysis, and management*. The most successful products and applications have been built with a thorough understanding of what they are intended for. Understanding and managing requirements is by far the most important aspect of the development life cycle, irrespective of the technology, development process, industry, or purpose of the software or system under development. The beauty of effective requirements management is that it provides complete insight into how the development of a system is progressing for every stakeholder in the team, yet at a level of abstraction neutral to technology, platform or perspective. It is thus the cornerstone of software or systems development!
In today’s world, developing smarter products with the shortest time to market has become a norm for success. Often the pressure to get started and come up with a prototype, or the product itself, is high, and requirements are sidelined with the least importance. But if we check the statistics, we can see many examples of hefty costs and disasters caused by hasty development or a lack of effort in understanding the real requirements. Read a compilation by Computer World UK here. These two videos, from IBM and IAG Consulting respectively, talk about why you should take requirements seriously.
If you watched the above videos, it is clear that the process by which requirements are managed, and the collaboration and communication between stakeholders, are equally important to effective requirements management. Many also mistakenly believe that requirements management is something that takes place during the definition stages of a project and is then complete. The reality, however, is that requirements exist in some form at virtually every stage of development.
The three main factors to consider when defining requirements management are:
Requirements definition. The requirements should not only be concise and unambiguous, they should also be testable.
Requirements classification. To manage project development effectively, each requirement statement must be classified appropriately for the application to aid in effective decision making and resource allocation.
Requirements traceability. Stakeholders of a project must be able to track requirements through the stages of development. (The sketch after this list shows one way the three factors can come together in a single requirement record.)
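As a closing illustration, here is a minimal sketch of how the three factors might come together in a single requirement record; the field names and values are invented for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    identifier: str
    text: str                  # definition: concise, unambiguous and testable
    category: str              # classification: e.g. functional, performance, safety
    priority: str              # classification: aids decision making and resourcing
    traces_to: List[str] = field(default_factory=list)   # traceability

req = Requirement(
    identifier="REQ-017",
    text="The pump shall deliver at least 30 litres per minute at 2 bar.",
    category="performance",
    priority="must",
    traces_to=["STK-004"],     # the stakeholder need this requirement addresses
)
print(req.identifier, "traces to", ", ".join(req.traces_to))
```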
Stay tuned for more posts on requirements management, analysis and other topics...
Have suggestions, opinions, comments or topic ideas? Send them to us!
* Source: The Meta Group market research firm, a division of Gartner, March 2003