Prof. Neil Maiden is Professor of Systems Engineering at City University London. He has been a principal investigator or co-investigator on numerous EPSRC- and EU-funded research projects, and has published over 150 peer-reviewed papers in academic journals, conference and workshop proceedings. He was Program Chair for the 12th IEEE International Conference on Requirements Engineering in Kyoto in 2004, and was Editor of IEEE Software’s Requirements column from 2005 until earlier this year. He can be reached at N.A.M.Maiden[at]city.ac.uk
Requirements work is still regularly perceived as stenography – an activity in which the analyst listens and documents while the stakeholders dictate what they want. This perception is reinforced by the requirements techniques that we use on most projects – the observations of work that we make, the interviews with stakeholders that we hold, and the questionnaires that we distribute to collect data about problems and requirements. These techniques hardly set the pulse racing. Nor do they help us discover stakeholders’ real requirements.
Stakeholders don’t know what they want
One reason for this is that eliciting requirements relies on stakeholders knowing what they want and need. However, most stakeholders do not know what they want or need. They are limited by their perceptions of what is possible – what new business models can offer and new technologies can enable. Your average stakeholder is neither a business visionary nor a technology watcher. So is it surprising that their answers to your interview questions are so, well, banal?
Indeed, many businesses have come to realize that customers are more often rear-view mirrors rather than guides to the future. A new approach is needed – one that empowers your stakeholders. My advice? If you want to discover your stakeholders’ real requirements, encourage the stakeholders to create them.
Make them up.
Why not? After all, when you interview someone, the requirements that they report to you are the results of their own, often limited, creative thinking about a new system – creative thinking that you are capturing only at the end, too late to influence.
In this blog post, I argue it is more effective to get in earlier – for analysts to facilitate creative thinking about requirements as soon as requirements work starts. Think of requirements as the outcomes of creative work – desirable inventions that your stakeholders are guided to come up with. After all, many of your digital solutions should be giving you some form of business advantage, and establishing this advantage starts with requirements – what the solution will give to your business.
Creativity in Requirements Work
Perhaps to the surprise of many in software engineering, creativity is well understood. Many different definitions, models and theories of creativity are available, from domains ranging from social psychology to artificial intelligence. As a software engineer, I was delighted to discover how well the phenomenon has been studied. And how much software engineers could leverage from it.
I like the definition of creativity from Sternberg and Lubart. I consider it prototypical of many of the definitions out there. Creativity is:
“the ability to produce work that is both novel (i.e. original, unexpected) and appropriate (i.e. useful, adaptive concerning task constraints)”
Creative Problem Solving (CPS) methods have been available since the 1950s. What is striking about many of these methods is their similarity to the software development methods that emerged 25 years later. The CPS method guides people through activities such as problem finding, goal finding and solution acceptance – stages similar to the analysis and testing phases of software development. What is different is the focus on creative thinking at each of these stages – creative thinking to maintain a competitive advantage in business.
My team at City University London has been leveraging creativity methods and techniques in requirements projects of different types for over a decade, with great results. One approach has been to run creativity workshops – risk-free spaces in which stakeholders can discover and explore ideas often not feasible within more traditional requirements work. A workshop is normally divided into half-day segments in which stakeholders work with different creativity techniques such as reasoning analogically from other domains, removing constraints, and combining visual storyboards. We’ve successfully run such workshops in domains from air traffic control and electric vehicle use to policing.
Another approach is to embed creative thinking into the early stages of agile projects – what we refer to as creativity on a shoestring because of the need to provoke creative thinking in less than an hour. We’ve learned to prioritise epics with more creative potential. We’ve identified creativity techniques that deliver new ideas in less than an hour – techniques such as hall of fame, creativity triggers and combining user stories.
Get More Creative
Much requirements work is creative. We need to adapt what we do to reflect this. Fortunately there are many creativity processes and techniques out there to experiment with. Do try – you will be rewarded.
Alex Ivanov is a Senior Software Engineer II with Honors at Raytheon Integrated Defense Systems. Alex has more than 10 years’ experience as a Requirements (DOORS) Database Manager supporting a large-scale distributed requirements database in the aerospace and defense industry, specializing in writing reusable DXL, training, user support and consulting with programs to ensure they get the most out of their use of IBM Rational DOORS. Alex is an IBM Certified Deployment Professional – DOORS v9 and has been recognized as a three-time IBM Champion (2011, 2012, 2013). In 2011 Alex was elected President of the New England Rational User Group.
1. How does it feel to be a returning IBM Champion?
I’m honored to be selected for the 3rd consecutive year. Fittingly enough, it was in 2010 that I first started to share across Raytheon the best practices that my team and I had developed around how to use IBM Rational DOORS effectively. I thought to myself: we have hundreds of programs at Raytheon that use DOORS, and yet not everyone is aware of how to make the best use of the tool – or, moreover, of the customizations that have been developed in custom DXL to make DOORS even easier to use. This was the beginning of my vision to standardize the way requirements are managed at Raytheon, and little did I know it would lead to discovering my true passion.
2. Can you tell us something about what you do at Raytheon?
I lead a team of engineers that maintains our customizations for using DOORS effectively, and I also consult with programs all across the company on how best to architect their DOORS databases and take advantage of the automation we have available. I’m most passionate about reaching out to others across the organization who are eager to improve how systems engineering uses DOORS to maximum advantage. I’m happy to say that over the past 3 years I’ve been able to spread our best practices across numerous Raytheon locations in the US, and have been able to do wonders with social media through wiki pages, blogs, communities and training videos.
3. What are your thoughts on managing requirements effectively?
A tool is just a tool unless you have a sound process and training to go along with it. It is much more important to get people to understand the process and how the tooling helps them do their job than to simply rely on the tool to solve all their problems. I believe it’s very important to have a sound process in place, and certainly if there are best practices to leverage on how to use the tools, people should strive to take advantage of them – without losing sight of their process and what they are responsible for delivering. I can’t stress enough how important it is to determine your project architecture and the relationships between the things you will be managing in DOORS; it might seem time-consuming at first, but it will save you a lot of time down the line.
4. So, how long have you been using Rational DOORS?
I started using DOORS in 2000, which is when I began my career at Raytheon. At the time I was a software developer who had just graduated from Boston University, and DOORS was the tool that held the software requirements I had to develop code against. I believe at the time it was DOORS version 4.0, and I can certainly say that the tool has come a long way since then.
5. That's a long time; in your opinion, what are some of the greatest assets of the product and well, the pain points?
Without a doubt, one of the greatest assets is the ability to customize the product for your process. Having said that, I believe it’s very important to have a solid understanding of the systems engineering process and a clear understanding of how to architect your project for success. Only once you have a reusable architecture can you turn the focus to writing reusable DXL to complement your project template and architecture. As for the pain points, certainly one of them has to be how manual it is to manage the database – whether attributes, views or access, many tasks would be rather time-consuming to carry out without custom DXL scripting.
6. Do you have some tips or tricks to share with the DOORS users out there?
I’m happy to say that there are a ton of free resources online, and I would highly recommend people take the time to watch webcasts, join a local Rational user group, and network with peers who have their own experiences to share. If you aren’t a member already, I encourage everyone to join the Global Rational User Group; its monthly newsletters are a great source of information, and many presentations are archived for viewing right on the site. Another great resource for DOORS and DXL is the developerWorks forums – having asked and answered questions on the Rational DOORS DXL forum myself, I highly recommend it. Numerous webcasts are also available on 321Gang’s website; in “Managing DOORS: The Administrator’s Toolbox” I take you through some examples of how to write reusable DXL to make it much easier to manage attributes and views.
7. What advice do you give to the budding DOORS administrators?
I would encourage everyone to find a mentor – this can be anyone that you look up to – and just ask them questions. I know that I would not be where I am today had it not been for the mentoring and support of numerous people in my life, and I am forever grateful to them. Something I have learned throughout the years is that it’s important to ask questions, and not to assume that the person asking you to do something has the right answers. As you gain experience you’ll be able to tailor the solution for your customers, and they will thank you for it. It’s unbelievable how many resources are available online for learning. I’m a big fan of watching and creating training videos, and YouTube has numerous channels I’d recommend subscribing to, a couple of which are IBMRational and IBMJazz.
I believe it’s very important to always want to improve, whether in your personal or professional life. I encourage everyone to grab a book on any subject that interests them – you’d be amazed how much you will learn. I’ve read dozens of books over the past few years and keep a reading list for those who want to follow along at http://www.shelfari.com/alexivanov/shelf
In this guest blog post, requirements engineering expert Jeremy Dick continues his discussion of practical applications of traceability. Read the first two parts here.
What’s really going on when you decompose a requirement?
What’s really going on when you plan V&V against a requirement?
Inspired by recent experience in a large systems engineering project, Part 1 of this blog post series covered the practice of decomposing requirements, which brings about one of the most important traceability relationships in requirements engineering. Part 2 covered the next most important relationship: that between requirements and validation and verification activities.
Part 3 (probably the final part!) presents some other things about V&V we think we have learnt from undertaking a large project: what do you do about new requirements that arise from the V&V you have planned against your main requirements?
Once the need for V&V activities has been established (see Part 2), this will often give rise to new requirements. Broadly speaking, these requirements fall into two types:
Those that affect the design of the product.
Such requirements may constrain the design of the product, and may even add functions. For instance, should there be a need during commissioning to establish that the temperature of a fuel cell does not exceed a certain level, then it may be necessary to design in a means of measuring it.
Those that define the need to build a secondary system.
These requirements are for the construction of test artefacts, such as models and test equipment. In large programs, this test equipment can represent a huge project in itself – like the construction of a new building to test a new design of jet engine.
Neither of these types of requirement should really be mixed in with the remaining requirements without recording (through traceability) their origin, because if the choice of V&V changes, then we should be able to identify which parts of the design – or which pieces of test equipment – are present only to make the old V&V possible.
If the need to test a product gives rise to the need to build facilities to carry out that testing, then another development life-cycle is spawned for the purpose. Requirements will be collected for the facility, and further V&V may be required against those requirements. We enter a sort of recursive world of development life-cycles. Some call this “fractal” – the main development spawns smaller developments, which in turn spawn yet smaller ones.
Primary versus Secondary V&V
Secondary V&V arises when V&V activities lead to test artefacts that also need validating or calibrating. For example, from requirements about power delivery, there is a direct need to do some analysis using a model of fluid dynamics – what we call primary V&V. Requirements for the model itself are collected, and the need for validation of the model considered. For example, the model may need calibrating against some similar existing systems. This calibration activity is another form of V&V – what we call secondary V&V.
Requirements lead to V&V activities lead to requirements
As noted above, the need for V&V leads to design changes or secondary systems, and thus to further requirements. This relationship should be captured: a requirement, rather than having a parent requirement, may in fact have a parent V&V activity. We could say that a requirement “enables” a V&V activity.
In the extreme, a secondary system may impose requirements on the primary product, for instance through the need to interface with it.
An information model
The diagram below illustrates the idea that requirements enable V&V activities.
In the diagram, three traceability relationships are shown: “satisfies”, “verifies” and “enables”. There is an example of a primary V&V activity leading to an enabling requirement on the primary system itself – a requirement whose parent (through the “enables” relationship) is a V&V activity.
There are also two examples of secondary systems driven by requirements for primary V&V, and one of those gives rise, in turn, to a tertiary system.
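Outside of any particular tool (in practice this lives in a DOORS database), the information model might be sketched in a few lines of Python. All item IDs and links below are invented for illustration; the point is the impact-analysis query that the “enables” relationship makes possible:

```python
# Minimal sketch of the information model: requirements and V&V activities
# connected by "satisfies", "verifies" and "enables" links.
# All item IDs are invented for this example.

links = {
    "satisfies": [("SYS-1", "USR-1")],           # child requirement -> parent requirement
    "verifies":  [("VV-1", "USR-1"),
                  ("VV-2", "SYS-1")],            # V&V activity -> requirement it verifies
    "enables":   [("SYS-2", "VV-2")],            # requirement -> the V&V activity it exists for
}

def enabled_by_vv(links):
    """Requirements whose 'parent' is a V&V activity rather than a requirement."""
    return [req for (req, _) in links["enables"]]

def impact_of_dropping(vv_id, links):
    """If a V&V activity is dropped, which requirements exist only to enable it?"""
    return [req for (req, vv) in links["enables"] if vv == vv_id]

print(enabled_by_vv(links))               # ['SYS-2']
print(impact_of_dropping("VV-2", links))  # ['SYS-2']
```

The second query is exactly the check described earlier: when the choice of V&V changes, it identifies the design elements or test equipment present only to make the old V&V possible.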
In our current project, we have pushed this model only as far as secondary systems with an “enables” relationship and secondary V&V. An example is the one cited above – a fluid dynamics model that needs itself to be validated through calibration with similar existing systems.
As with all these things, there is probably a law of diminishing returns. How far should you push the requirement to V&V to requirement to V&V to requirement chain? You will have to decide!
A similar model probably exists for other aspects of development – manufacture, for instance. The need to manufacture a product gives rise to the need to construct the factory and plant to do so – a primary development spawning a secondary development. However, the traceability train goes not from requirement to V&V activity to requirement, but rather directly from primary requirement to secondary requirement. This could still be characterized as enablement.
We never build just “the product”. There are always other things that need designing and building that surround the system for various purposes, including testing components, sub-systems, systems and completed products.
What has been presented here is an attempt to get to grips with the traceability required to track the relationships between the product and its secondary (and possibly tertiary) systems. We implemented all this in a Rational DOORS database, with a small amount of customisation to ease the way.
Thank you for reading these blog entries, and please contact me if you are interested in reusing any of this experience and associated Rational DOORS customization.
About the author - Jeremy Dick works as Principal Analyst for Integrate Systems Engineering Ltd in a consultancy, research and thought leadership capacity. He has extensive experience in implementing practical requirements processes in significant organizations, including tool customization, training and mentoring. At Integrate, he has been developing the concept of Evidence-based Development, an extension of his previous work on “rich traceability”. Prior to this appointment, he worked for 9 years in Telelogic (now part of IBM Rational) in the UK Professional Services group as both an international ambassador for Telelogic in the field of requirements management, and a high-level consultant for Telelogic customers wishing to implement requirements management processes. During this time, he developed considerable expertise in customizing DOORS using DXL to support advanced engineering processes. His roles in Telelogic included a position in the DOORS product division to assist in the transfer of field knowledge to the product team. Co-author of a book entitled “Requirements Engineering” that has recently reached its 3rd edition, he is recognized internationally for his work on traceability. Jeremy can be reached at jeremy.dick[at]integrate.biz
In this guest blog post, requirements engineering expert Jeremy Dick continues his discussion of practical applications of traceability. Read the first part here -
The practical applications of traceability Part 1: What’s really going on when you decompose a requirement?
Inspired by recent experience in a large systems engineering project, Part 1 of this essay covered the practice of decomposing requirements, which brings about one of the most important traceability relationships in requirements engineering. Part 2 here covers the next most important relationship: that between requirements and validation and verification activities. Part 3 will continue the discussion of V&V, and how it itself gives rise to further requirements.
Verification & Validation (V&V)
I don’t care enough about the difference between validation and verification to want to enter into the divisive debate about it here. I am just going to say V&V and be done with it!
Kinds of V&V activity
There are many kinds of V&V activity, and organisations have varied ways of classifying them. In the project I am working on, the classifications are Analysis, Analogy, Inspection, Review, Test and Demonstration.
By their very nature, these types of activity tend to occur at different times of the life-cycle. Analysis, for instance, tends to occur early to predict properties of the proposed design and verify it against requirements. By contrast, demonstration tends to occur late as part of the acceptance tests.
Typically, a whole series of activities will be planned against a single requirement, some early, some late, allowing confidence to accumulate over the life-cycle of the project.
Requests for evidence
Despite the variety of kinds of activity, there is one thing they all have in common: they are requests for evidence of some kind or other. Indeed, I would favour calling V&V activities exactly that: “requests for evidence”.
Intention versus Fulfilment
Those activities that are carried out early in the development process provide evidence that the intended design will meet the requirements – they address design intention. Those activities applied late in the development process collect evidence that what has been built meets the requirements – they address design fulfilment.
Once the need for V&V activities has been established, this will often give rise to new requirements, either on the design of the product itself, or requirements for the construction of test artefacts, such as models and test equipment. (We never build just the product; there are always other things that need designing and building that surround the system for various purposes.)
The management of requirements arising from V&V will be the topic of Part 3.
Requirements decomposition and V&V planning
When planning V&V activities against a parent requirement, you need to take into account the V&V that will be carried out on its child requirements, and their child requirements, and so on.
Take, for instance, the following example where a user requirement is decomposed into a number of system requirements:
The only V&V activity planned against the user requirement is a commissioning test, which will occur late in the life-cycle. However, further V&V activities are defined against the child system requirements. Some of these are design inspections that occur very early, and some are system tests that occur relatively late, but still before commissioning.
There is, of course, a sense in which all these V&V activities provide evidence for the satisfaction of the user requirement, but some of the activities fit more directly against the system requirements. So when planning V&V activities, you need to ask the question: what activities can only be carried out against the parent requirement, and which can be delegated to child requirements? – because those that can be delegated are likely to provide evidence earlier in the life-cycle. And you always want that, if you can get it.
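To make the delegation point concrete, here is a minimal Python sketch (requirement IDs, activity names and lifecycle phases are all invented for the example) of gathering the evidence for a parent requirement from its own V&V plan plus the plans delegated to its descendants:

```python
# Sketch: when planning V&V against a parent requirement, gather the activities
# planned directly against it plus those delegated to its child requirements.
# Requirement IDs, activities and phases are invented for illustration.

children = {"USR-1": ["SYS-1", "SYS-2"], "SYS-1": [], "SYS-2": []}

vv_plan = {  # requirement -> list of (activity, lifecycle phase)
    "USR-1": [("Commissioning test", "late")],
    "SYS-1": [("Design inspection", "early"), ("System test", "mid")],
    "SYS-2": [("Design inspection", "early")],
}

def all_evidence(req):
    """All V&V activities providing evidence for req, including delegated ones."""
    evidence = list(vv_plan.get(req, []))
    for child in children.get(req, []):
        evidence.extend(all_evidence(child))   # recurse down the decomposition
    return evidence

# The delegated inspections surface the early evidence that the parent's
# own commissioning test alone would not provide.
print(all_evidence("USR-1"))
```

Sorting the result by phase would show exactly how confidence accumulates over the life-cycle, which is the reason for preferring delegation where it is possible.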
Granularity of V&V results
In the above example, there is one V&V activity that is linked to multiple requirements. In general, the relationship between requirements and V&V activities will be many-to-many.
However, this presents an issue when it comes to collating results of V&V against requirements. The System Test defined above may show positive results for filling, boiling and dispensing, but fail on the time taken to recover (cool down). So it has passed on all requirements except one. In terms of granularity of information, we need to record the result of the V&V activity against each linked requirement.
How best to do that? In the information model of the example, the only place to record it is on the “verifies” links, since there is a link for every requirement–V&V pair.
Another way is shown in the next example:
Here we have separated out the success criteria for each requirement for each test by adding subsidiary objects under the V&V activities (for instance, using the DOORS object hierarchy). Each success criterion has exactly one link to a requirement; a link from a criterion is implicitly a link from the V&V activity. (This link could be made explicit by retaining a link from the activity as well – not shown in the diagram.)
Now we have objects rather than links against which to record the results of the V&V activity (using an attribute of that object). This has the added advantage that it encourages a discipline of identifying precisely what the success criterion is for each requirement against each V&V activity. In addition, the V&V Activity and its list of success criteria can be used as a description/checklist for each particular test.
As results come in, the success/failure status on the success criteria can be rolled up through the “verifies” links to the associated requirements, and then on up through the “satisfies” links to the parent requirements. Both these relationships allow results to be summarised through the eyes of the requirements at every level.
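As a rough sketch of this roll-up (in Python rather than DOORS; the criterion and requirement IDs are invented, echoing the kettle example above): each success criterion carries a result as an attribute, a requirement passes only when every criterion verifying it passes, and a parent requirement passes only when all of its children pass.

```python
# Sketch of rolling up V&V results: success criteria carry pass/fail; results
# roll up through "verifies" links to requirements, then through "satisfies"
# links to parent requirements. All IDs are invented for illustration.

criterion_result = {          # success criterion -> result recorded as an attribute
    "ST-fill": "pass", "ST-boil": "pass",
    "ST-dispense": "pass", "ST-recover": "fail",
}
verifies = {                  # success criterion -> requirement it verifies
    "ST-fill": "SYS-1", "ST-boil": "SYS-2",
    "ST-dispense": "SYS-3", "ST-recover": "SYS-4",
}
satisfies = {"SYS-1": "USR-1", "SYS-2": "USR-1",
             "SYS-3": "USR-1", "SYS-4": "USR-1"}   # child -> parent requirement

def requirement_status(req):
    """'pass' only if every linked criterion and every child requirement passes."""
    own = [criterion_result[c] for c, r in verifies.items() if r == req]
    kids = [requirement_status(child) for child, parent in satisfies.items()
            if parent == req]
    results = own + kids
    return "pass" if results and all(s == "pass" for s in results) else "fail"

print(requirement_status("SYS-4"))  # fail: the recovery-time criterion failed
print(requirement_status("USR-1"))  # fail: the failure rolls up via "satisfies"
```

This mirrors the system-test example: passing on filling, boiling and dispensing but failing on recovery time leaves one system requirement, and hence the user requirement, unsatisfied.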
V&V planning steps
These are the process steps we teach for planning V&V against requirements. They are numbered so as to continue from the process steps named in Part 1:
5. Determine the V&V activities you will need.
Consider what range of evidence you will need to collect to establish that the requirement has been met, and determine the best V&V activities for that. Aim to collect evidence as early as possible in the life-cycle, considering early proof of design intention as well as later design fulfilment. Capture the V&V activities into the database and (if using explicit links) link them to the associated requirements.
6. Identify the success criteria for each requirement against each V&V activity.
For each requirement/V&V activity pair, determine the success criteria to be applied. Capture each success criterion in a new object under the activity, and link it to the requirement.
7. Record the results of the V&V activity against each success criterion.
When the V&V activity has been completed, record the success or failure of the activity against each success criterion.
So this is what we now teach those engaged in planning and tracing V&V against requirements, in conjunction with requirements decomposition. It is wrong to assume that people will somehow automatically know how to do this kind of thing. By taking this approach, the V&V plan is well organized, defined at the most appropriate layers, with success criteria defined, and ready for the collection and roll-up of results.
Read the first part here - The practical applications of traceability Part 1: What’s really going on when you decompose a requirement?
Jeremy Dick works as Principal Analyst for Integrate Systems Engineering Ltd in a consultancy, research and thought leadership capacity. He has extensive experience in implementing practical requirements processes in significant organizations, including tool customization, training and mentoring. At Integrate, he has been developing the concept of Evidence-based Development, an extension of his previous work on “rich traceability”. Prior to this appointment, he worked for 9 years in Telelogic (now part of IBM Rational) in the UK Professional Services group as both an international ambassador for Telelogic in the field of requirements management, and a high-level consultant for Telelogic customers wishing to implement requirements management processes. During this time, he developed considerable expertise in customizing DOORS using DXL to support advanced engineering processes. His roles in Telelogic included a position in the DOORS product division to assist in the transfer of field knowledge to the product team. Co-author of a book entitled “Requirements Engineering” that has recently reached its 3rd edition, he is recognized internationally for his work on traceability. Jeremy can be reached at jeremy.dick[at]integrate.biz
It is a bit of a shock to find myself well into the fourth year on the same project! The nature of my work as a consultant means that it is rare for me to stick with a project beyond the initial phases of defining a requirements management process, establishing effective tool support and training the process enactors. But this time we have been able to stick with the requirements team supporting a large project long enough to see theory put into practice, and to see what it really means to apply the tools and techniques. We have gone well beyond just training, and find ourselves mentoring nearly 300 engineers in the application of DOORS for requirements capture, development and management. This has helped us keep our feet firmly on the ground – rubber on the road – and to walk with those who actually have to do the work.
So what have we learnt?
It is one thing to teach people how to write requirements statements that are clear, unambiguous, testable and traceable; it is quite another thing to help people understand how to take a requirement and develop it. None of the engineers we met on the project had previous experience of how to take a system requirement, for instance, and systematically decompose it through the design into sub-system and component requirements. We had to adapt our training and mentoring to address this skill.
Requirements decomposition establishes one of the essential requirements traceability relationships: how each layer of requirements contributes to the satisfaction of the layer above. This is often known as the satisfaction relationship, or as refinement in SysML. It is this relationship that connects the design to the development of requirements, and that lies at the heart of the ability to perform impact analysis.
Whatever layer you are engaged in – customer, system, sub-system or component – the same basic requirements development process can be applied. These are the process steps we teach for requirements development:
1. Collect and agree your requirements.
Here you actively seek out the requirements you are expected to fulfil. In an ideal world, development will be perfectly top-down, and you can wait for the layers above to allocate perfectly expressed requirements to you. In a practical world, you will have to be more proactive, and will have to cooperate with your requirement “customers” to obtain acceptably worded requirements.
2. Design against your requirements.
This stage is the creative bit that you perhaps most enjoy doing, and where the real engineering takes place. You will imagine how to design the system to meet the requirements, and what you need your requirement “suppliers” to do to contribute to your design. In other words, if you are at the system level, you design the system into sub-systems, and work out what each sub-system must do to meet the system requirements.
3. Decompose the requirements to reflect the design.
Now you enter the decomposed requirements and trace them back to the requirements they satisfy, thus making the requirements contents and traceability reflect your design. The wording of the decomposed requirements is important: if the original requirement read “The <system> shall ...”, then it is likely that the decomposed requirements will read “The <sub-system> shall ...” At this stage you can also capture rationale for the decomposition, including references to the design documentation or models, thus tracing also to design information.
4. Allocate the decomposed requirements.
Finally, you can pass on the decomposed requirements to those areas responsible for fulfilling them. These other areas engage in the same process, and together you systematically achieve alignment of requirements through all the layers.
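The traceability created in steps 3 and 4 can be pictured as a small data model. The following Python sketch is purely illustrative (the class names, ids, and attributes are my own, not those of any particular tool), but it shows how satisfaction links make impact analysis possible:

```python
from dataclasses import dataclass, field

# Minimal sketch of a layered requirements model with satisfaction links.
# All names and ids here are invented for illustration.

@dataclass
class Requirement:
    rid: str                                        # e.g. "SYS-12" or "SUB-A-3"
    text: str                                       # "The <system> shall ..."
    satisfies: list = field(default_factory=list)   # upward trace links
    rationale: str = ""                             # why this decomposition was chosen

# Step 3: decompose a system requirement into sub-system requirements and
# trace each one back to the requirement it helps satisfy.
sys_req = Requirement("SYS-12", "The system shall process 100 orders per hour.")
sub_a = Requirement("SUB-A-3",
                    "The order sub-system shall accept 100 orders per hour.",
                    satisfies=[sys_req.rid],
                    rationale="Capacity allocated to the order sub-system.")
sub_b = Requirement("SUB-B-7",
                    "The billing sub-system shall invoice each order within 36 seconds.",
                    satisfies=[sys_req.rid],
                    rationale="Throughput translated into per-order latency.")

# Impact analysis follows the satisfaction links: find every requirement
# that traces to a changed requirement.
def impacted_by(changed_rid, requirements):
    return [r.rid for r in requirements if changed_rid in r.satisfies]

print(impacted_by("SYS-12", [sub_a, sub_b]))  # ['SUB-A-3', 'SUB-B-7']
```

Note how the wording changes from "The system shall" to "The <sub-system> shall" in the decomposed requirements, as described in step 3, and how the rationale travels with each link.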
The example below illustrates the end result of applying this process to a user requirement decomposed into system requirements. The rounded box contains the design rationale, and refers to a functional model of the product. (If you’ll forgive the shameless plug, such diagrams can be produced using a DOORS extension related to TraceLine. Ask me more if you are interested.)
The example just shown is a classic decomposition pattern. It is actually the decomposition of an overall performance requirement into a combination of capacity and performance attributes of the product. We call this “decomposed”.
Other decomposition patterns are possible. Sometimes no decomposition is necessary, because the system requirement can be satisfied entirely by a single component or sub-system, as in the left-hand example below, or because it is a constraint that applies universally to all parts, as in the right-hand example. All that changes, perhaps, is the wording of the requirement to indicate the new target. We call this “direct flow”.
You will reach a point in the cascade of decomposed requirements when a requirement is satisfied entirely within the current area without the need to decompose further, as illustrated in the following. In this case, it is important to state the rationale for not flowing the requirement onwards, as otherwise it may be construed as a traceability gap. We call this “not developed further”.
I have seen a number of organisations that capture this flow-down type – Decomposed/Direct flow/Not developed further – as an attribute of the requirement. This is useful because it allows us to cross-check certain things: if you have marked a requirement as “Decomposed” but have not decomposed it, then that does indicate a design gap. However, if you mark the requirement as “Not developed further”, that gives you permission not to trace it further, i.e. it is not a design gap. (But you would do well to provide rationale!)
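A cross-check of this kind is easy to automate once the flow-down type is an attribute. The sketch below is hypothetical (the attribute names and values are invented for illustration, not taken from any tool), but it captures the rules just described:

```python
# Hypothetical cross-check for the flow-down attribute described above:
# "Decomposed" with no children is a design gap; "Not developed further"
# without rationale deserves a warning; "Direct flow" passes through.

def check_flow_down(requirements, children_of):
    """requirements: dict rid -> {"flow": ..., "rationale": ...}
    children_of: dict rid -> list of decomposed requirement ids."""
    findings = []
    for rid, attrs in requirements.items():
        kids = children_of.get(rid, [])
        if attrs["flow"] == "Decomposed" and not kids:
            findings.append((rid, "design gap: marked Decomposed but has no children"))
        elif attrs["flow"] == "Not developed further":
            if kids:
                findings.append((rid, "inconsistent: has children despite Not developed further"))
            elif not attrs.get("rationale"):
                findings.append((rid, "warning: no rationale for stopping the flow-down"))
    return findings

reqs = {
    "SYS-1": {"flow": "Decomposed", "rationale": ""},
    "SYS-2": {"flow": "Not developed further", "rationale": ""},
    "SYS-3": {"flow": "Direct flow", "rationale": ""},
}
children = {"SYS-3": ["SUB-9"]}
for rid, msg in check_flow_down(reqs, children):
    print(rid, "-", msg)
```

Run as a periodic report, checks like this turn the flow-down attribute from mere documentation into an active gap detector.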
As requirements flow down through the layers, the complexity of the design becomes evident in the shape of the requirements graph. In general, satisfaction is a many-to-many relationship between requirements, and the figure below shows how this may be manifested. As the requirements are decomposed, they are refactored through the design.
Some patterns are questionable. Take these for instance:
Why decompose something into three requirements only to reduce them to a single one again? Or why collapse three requirements into one only to re-expand them in the next layer?
These patterns are not necessarily wrong, but they should be targeted for careful review.
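Such suspicious shapes can even be detected mechanically. As an illustration (the link structure and requirement ids below are made up), a fan-out followed by an immediate fan-in might be flagged for review like this:

```python
# Sketch of flagging the questionable "fan-out then fan-in" pattern:
# a requirement decomposed into several children that all collapse
# back into the same single requirement one layer down.

def fan_out_fan_in(children_of):
    flagged = []
    for rid, kids in children_of.items():
        if len(kids) < 2:
            continue  # no fan-out here
        grandchild_sets = [set(children_of.get(k, [])) for k in kids]
        # every child flows down to exactly the same single requirement
        if all(len(s) == 1 for s in grandchild_sets) \
                and len(set.union(*grandchild_sets)) == 1:
            flagged.append(rid)
    return flagged

links = {
    "SYS-1": ["SUB-1", "SUB-2", "SUB-3"],     # fans out to three ...
    "SUB-1": ["COMP-9"], "SUB-2": ["COMP-9"], "SUB-3": ["COMP-9"],  # ... and back to one
    "SYS-2": ["SUB-4", "SUB-5"],              # a healthy decomposition
    "SUB-4": ["COMP-1"], "SUB-5": ["COMP-2"],
}
print(fan_out_fan_in(links))  # ['SYS-1']
```

A flagged requirement is not automatically wrong, as noted above; the check merely queues it for a careful review of the design rationale.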
So this is what we now teach those engaged in requirements decomposition, flow-down and traceability. It is wrong to assume that people will somehow automatically know how to do this kind of thing. By taking this approach, the flow-down of requirements reflects the design, and a clear satisfaction relationship is expressed in the traceability.
Read the second part here - The practical application of traceability Part 2: What’s really going on when you plan V&V against a requirement?
Just like the famous Mark Twain (mis)quote, rumors of the demise of the requirements management tool IBM Rational DOORS are not only exaggerated but in fact untrue. I’d like to set straight here some of the myths and misunderstandings that I’ve heard and seen perpetuated:
Myth: IBM is discontinuing support and development for the DOORS 9.x series
Truth: IBM continues to develop the DOORS 9.x series with the same level of development resources. We have new functionality to announce in 2013 and plan further releases in the years ahead. In addition, IBM has recently invested additional development resources in creating a new requirements management tool called IBM Rational DOORS Next Generation (DOORS NG).
Myth: IBM is replacing the DOORS 9.x series with DOORS Next Generation (DOORS NG) and expects customers to migrate now
Truth: DOORS 9.x will be developed in parallel with DOORS NG for many years to come. DOORS NG was released for the first time in November 2012. The tool has many functions that DOORS 9.5 does not have (e.g. a fully functional web client, type and data re-use for requirements and attributes, team collaboration, and task management), but it also does not have all of the capabilities relied on by DOORS 9.x users. We encourage users to evaluate the capabilities of DOORS NG and start projects or move projects to DOORS NG when practical and beneficial to the project.
Myth: To move to DOORS Next Generation, I’ll need to buy a whole new tool
Truth: Customers with active support & subscription on their DOORS 9.x licenses are entitled to use DOORS Next Generation - it is included as part of the DOORS 9.x package. Customers can choose to use either DOORS 9.x, DOORS Next Generation or a combination of both. There is no need for additional purchases or even for an exchange of licenses. All licenses are available to customers in the IBM license key center.
Myth: IBM offer no migration path from DOORS 9.x to DOORS NG
Truth: IBM do offer functions to transition into using DOORS NG for existing DOORS 9.x customers:
- Work with DOORS 9.x and DOORS NG alongside each other: both tools support linking of information between the two databases.
- Work with suppliers by exchanging requirements through the standard ReqIF exchange format. This works best between DOORS 9.x and DOORS NG but is also designed to work with other RM tools.
- Where DOORS NG projects are to be initialized with data from DOORS 9.x, we offer re-factoring functions to smooth the transition of data from DOORS 9.x to DOORS NG.
Myth: Everyone knows IBM’s strategy on requirements management and DOORS
Truth: IBM takes great care to regularly communicate our product strategy and roadmap at trade shows, IBM conferences and on webcasts. If in doubt, come and ask IBM! Please submit questions using the comment feature here or contact your IBM representative.
For information about the products, visit -
IBM Rational DOORS
IBM Rational DOORS Next Generation
Today we have with us Mia McCroskey of Emerging Health, Montefiore Information Technology, who was recognized as an IBM Champion this year. She shares with us her thoughts about the requirements management domain.
Welcome to the IBM family! It’s a great pleasure to have you with us as an IBM Champion. Congratulations! How do you feel?
I am honored to be recognized in this way not just this year but for the past several years that I have been asked to present my team's stories at Innovate. It may seem like our little team's requirements management needs are nothing like those of customers with huge DOORS ecosystems. But really we are an R&D site for the evolution of requirements management techniques and strategies. If a member of my team has an interesting idea for how to capture, structure, track, or trace requirements, we can try it without getting high level approvals or disrupting the work of hundreds of people. I take tremendous satisfaction from sharing our successes in a way that may help others improve their best practices.
Can you tell us something about what you do at Emerging Health, Montefiore Information Technology?
Emerging Health is primarily an IT delivery organization supporting healthcare delivery in the Bronx. Montefiore Medical Center is our parent company. My team, Product Development, is a software development shop tucked away in a corner doing very different work from most everyone else. Our application, Clinical Looking Glass, is a browser-based clinical intelligence tool that gives clinicians access to the enormous wealth of patient data gathered by all the other systems. Our end users can get an answer to a question like "are my clinic's diabetic patients getting the level of follow-up care required by our funding sources?" in a few minutes. In most healthcare environments, getting the data that you need to answer this question takes weeks.
My title is Manager, Product Development Lifecycle because I have my fingers on just about every stage of that lifecycle. Specifically, I lead our Requirements Management, Quality Assurance, Education, Support, and Implementation teams. I also manage outsourced development work, manage client relationships, and do hands on end-user training and support. We are an extremely team-oriented organization, with two formal development scrum teams and two teams that are at various stages of adopting an agile process. I'm deeply involved in that right now: it's challenging to apply Scrum to a training and support team, and to a team of data analysts and engineers.
Having said all that, my roots are in requirements. On a team of "happy path" stakeholders who get very excited by ideas for new functionality, I love to dig in and find the challenges that nobody wants to think about -- before we start coding. Sometimes I feel like a real buzz kill!
What are your thoughts on managing requirements effectively?
Agile models have forced us to completely reorganize our requirements elicitation, analysis, and management processes. But one thing that has not changed is my belief in a comprehensive, functionally organized requirements model.
But when I say "comprehensive," I don't mean laboriously detailed. The art of requirements analysis and management is in knowing -- or guessing right -- what details are going to be important later. By later I don't mean to the coders and testers in this iteration, I mean when we want to revise or augment the feature in six months. The requirements analyst has to be deeply plugged in to the business goals and vision in order to predict the future and capture the least, but most important, information about what the team is doing.
A few years ago I spent many months constructing an "as built" specification for a market data system at the New York Stock Exchange. I had the original ten-year-old spec and a few dozen incremental release documents. We were getting ready to refactor the system and nobody knew every business rule and function. I vowed that no system I worked on would ever lack a spec that described everything it currently did. Incremental requirements specs that aren't integrated into the overall system are defects waiting to be discovered once the coders get ahold of them.
Having argued that, I must add that a requirements model in document form is pretty nearly impossible to maintain in the way I describe. You've got to employ a database tool that supports the granularity of each requirement and allows you to describe each one through attributes. Then you can use filters, queries, and views to present an infinite number of customized specifications -- all of the requirements implemented in a specific release, or all of the requirements related to a specific functional area, or the completed requirements related to a specific business objective or corporate mandate.
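To make the idea concrete, here is a minimal illustration with invented requirement attributes; a real database tool such as the one described would of course offer far richer filtering and presentation:

```python
# Illustrative sketch of attribute-based "views" over granular requirements.
# The requirement ids and attribute values are made up for this example.

requirements = [
    {"id": "R1", "area": "Reporting", "release": "4.2", "status": "Complete"},
    {"id": "R2", "area": "Reporting", "release": "4.3", "status": "Draft"},
    {"id": "R3", "area": "Security",  "release": "4.2", "status": "Complete"},
]

def view(reqs, **criteria):
    """A 'customized specification': every requirement matching the attributes."""
    return [r["id"] for r in reqs
            if all(r.get(k) == v for k, v in criteria.items())]

print(view(requirements, release="4.2"))                        # ['R1', 'R3']
print(view(requirements, area="Reporting", status="Complete"))  # ['R1']
```

Each call produces a different "document" from the same underlying model, which is exactly why attributes plus filters beat a monolithic spec for maintenance.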
What are your thoughts on the role of requirements management in agile projects?
I recently spoke with a software development professional who was very proud of his organization's highly structured and detailed requirements templates that captured every detail before any work began "to be sure we deliver what's wanted." I felt like I was talking to a Tyrannosaurus rex. We all know that the day after you baseline that 400-page spec it's already out of date.
Agile with its short increments and "only write down what you really need to" mentality can seem seductively freeing. When our organization adopted Scrum, I stuck to my requirements model guns, and sure enough a few months later we couldn't remember decisions that we'd made a few sprints back, nor even exactly which sprint we'd done the work in. Since we weren't supposed to be doing "heavy" requirements, we'd been coached to: use the system and see what happens; or sift through dozens of completed user stories and hope the detail we wanted was actually mentioned in the acceptance criteria (which it was not because it was an in-sprint decision); or try to find relevant test cases and check the expected results. Instead, I launched DOORS, went to the functional area related to the question, and checked our documented business rule. Done.
What are some of the challenges you see in Healthcare Informatics projects?
Deriving meaningful information from the electronic medical record is essential to justifying the cost of those systems. We're piloting the use of predictive analytics -- combining statistical methods with the mass of patient data collected every day at our parent medical center -- to predict outcomes at the population level. To do it you need a very wide range of data: blood pressure, height, and weight, smoking patterns, history of heart disease, current blood sugar level, and on and on. Just bringing all this data together is the first challenge. Next is the analytic tool -- that's CLG. Finally you need big iron to process it. Most local and regional healthcare providers don't have the funding for, say, Watson. My team spent last summer optimizing our hardware and software environment, and CLG itself, to handle analysis of larger data sets faster, but within a medical-center friendly infrastructure budget.
Another area of critical concern is patient information. The need to pool patient data for direct care as well as population research is supported by legislation and funding sources. But we are bound, both legally and ethically, to protect patient identity in every circumstance.
Clinical Looking Glass has the capacity to show patient contact information to users who have been granted permission to see it -- usually clinicians who are actively providing care and need to contact the patients. While this is a critical feature of CLG, we expect to have to develop more granular levels of access as new types of clients adopt the product. For example, our Regional Health Information Organization (RHIO) client has data from twenty-two healthcare institutions. Some patients have declined to have their identity shared across the organization. We have to build the capability to mask these patients' identity even to our users who have permission to see it.
We have with us today Bruce Powel Douglass. He doesn't need an intro for most of us -- Embedded Software Methodologist. Triathlete. Systems engineer. Contributor to UML and SysML specifications. Writer. Black Belt. Neuroscientist. Classical guitarist. High school dropout. Bruce Powel Douglass, who has a doctorate in neurocybernetics from the USD Medical School, has over 35 years of experience developing safety-critical real-time applications in a variety of hard real-time environments. He is the author of over 5700 book pages from a number of technical books including Real-Time UML, Real-Time UML Workshop for Embedded Systems, Real-Time Design Patterns, Doing Hard Time, Real-Time Agility, and Design Patterns for Embedded Systems in C. He is the Chief Evangelist at IBM Rational, where he is a thought leader in the systems space and consults with and mentors IBM customers all over the world. He can be followed on Twitter @BruceDouglass. Papers and presentations are available at his Real-Time UML Yahoo technical group and from his IBM thought leader page.
The problems with poor requirements are legion and I don’t want to get into that in this limited space (see Managing Your Requirements 101 – A Refresher Part 1: What is requirements management and why is it important?). What I want to talk about here is verification and validation of the requirements qua requirements, rather than at the end of the project when you’re supposed to be done.
The usual thing is that requirements are reviewed by a bunch of people locked in a room until they are ready to either 1) gnaw off their own arm or 2) approve the requirements. Then the requirements are passed off to a development team – which may consist of many engineering disciplines and lots of engineers – for design and implementation. In parallel, a testing group writes the verification and validation (V&V) plan (including the test cases, test procedures and test fixtures) to ensure that the system conforms to the requirements and that the system meets the need. After implementation, significant problems tracing back to poor requirements require that portions of the design and implementation be thrown away and redone, resulting in projects that are late and over budget. Did I get that about right?
The key problem with this workflow is that the design and implementation are started and perhaps even finished without any real assurance about the quality of the requirements. The actions that determine that the requirements are right are deferred until implementation is complete. That means that if the requirements are not right, the implementation (and corresponding design) must be thrown away and redone. Internal to the development effort, unit/developer and integration testing verify the system is being built properly and meets the requirements. Then at the end, the system verification testing provides a final check to make sure that the requirements are correctly addressed by the implementation.
During this development effort, problems with requirements do emerge – such as requirements that are incomplete, inconsistent, or incorrect. When such problems are identified, this kicks off a change request effort and an update to the requirements specification (at least in any reasonable process), resulting in the modification of the system design and implementation. But wouldn’t it be better not to have these defects in the first place? And even more important, wouldn’t it be useful to know that implementing the requirements will truly result in a system that actually meets the customer’s needs?
There are two concerns I want to address here: ensuring that the requirements are “good” (complete, consistent, accurate, and correct) and that they reflect the customer’s needs. And I want to do this before design and implementation are underway.
It isn’t obvious
Imagine you’re building a house for your family. You contract an architect, who comes back to you after three months with a 657-page specification containing statements like:
- … indented by 7 meters from the west border of the premises, there shall be the left corner of the house
- … The entrance door shall be indented by another 3.57 meters
- … 2.30 meters wide and 2.20 meters high, there shall be a left-hand hinge, opening to the inside
- …As you come in, there shall be two light switches and a socket on your right, at a height of 1.30 meters
My question to you is simple: is this the house you want to live in? How would you know? There might be 6500 requirements describing the house, but it would be almost impossible for any human to understand whether this is the house you want. For example:
- Is the house energy efficient?
- Does the floor plan work for your family’s uses, or must you go through the bathroom to get to the kitchen?
- Is it structurally sound?
- Does it let in light from the southern exposure?
- Is there good visibility to the pond behind the house?
- Does it look nice?
What (real) architects do is build models of the system that support the reasoning necessary to answer these questions. They don’t rely simply on hundreds or thousands of detailed textual statements about the house. Most systems that I’m involved in developing are considerably more complex than a house and have requirements that are both more technical and more abstract.
Nevertheless, I still have the same basic need to be able to understand how the requirements fit together and reason about the emergent properties of the system. The problem of demonstrating that the implementation meets the stated requirements (“building the system right”) is called verification. The problem of showing that the solution meets the needs of the customer is called validation. Verification, in the presence of requirements defects, is an expensive proposition, largely due to the rework it entails. Implementation defects are generally easy and inexpensive to repair but the scope of the rework for requirements defects is usually far greater. Validation is potentially an even more expensive concern because not meeting the customer need is usually not discovered until the system is in their hands. Requirements defects are usually hundreds of times more expensive than implementation defects because the problems are introduced early, identified late in the project, and require you to throw away existing work, redesign and reimplement the solution, then integrate it into the system without breaking anything else.
A proposal: Verifiable and Validatable Requirements
The agile adage of “never be more than minutes away from demonstrating that the work you’re doing is right” applies to all work products, not just software source code. It’s easy to understand how you’d do that with source code (run and test it). But how do you do that with requirements?
The Core Concept
Premise: You can only verify things that run.
Conclusion: Build only things that run.
Solution: Build executable requirements models to support early requirements verification.
If we can build models of the requirements, we can verify and validate them before handing them off to the design team. The way I recommend you do that is to
- Organize requirements into use cases (user stories work too, if you swing that way)
- Use sequence diagrams to represent the required sequences of functionality for the set of requirements allocated to the use case (scenarios)
- Construct a normative (and executable) state machine that is behaviorally equivalent to that set of scenarios
- Add trace links from the requirements statements to elements of the use case model:
  - messages and behaviors in scenarios, and
  - events (for messages from actors), actions (for messages to actors and internal behaviors), and states (conditions of the system) in the state machine
- Verify requirements are consistent, complete, accurate, and correct
- Validate requirements model with the customer
When problems are identified with the requirements during this functional use case analysis, they can be easily and inexpensively fixed before there is any design or implementation to throw away and redo.
Constructing the Executable Use Case
Some people are confused by the fundamental notion of using a state machine to represent requirements, thinking that state machines are inherently a design tool. State machines are just a behavioral specification, and requirements are really just statements of behavior in which we are trying to characterize the required input-output control and data transformations of the system. It’s a natural fit. Consider the following set of user stories for a cardiac pacemaker:
- The pacemaker may be Off (not pacing or sensing) or executing the pacing mode of operation.
- The cardiac pacemaker shall pace the Atrium in Inhibit mode; that is, when an intrinsic heart beat is detected at or before the pacing rate, the pacemaker shall not send current into the heart muscle.
- If the heart does not beat by itself fast enough, as determined by the pacing rate, the pacemaker shall send an electrical current through the heart via the leads at the voltage potential specified by the Pulse Amplitude parameter (nominally 20 mV, range [10..100] mV) for the period of time specified by the Pulse Length (nominally 10 ms, range [1..20] ms).
- The sensor shall be turned off before the pacing current is released.
- The sensor shall not be re-enabled following a pace for the period of time it takes the charge to dissipate, to avoid damaging the sensor (nominally 150 ms, setting range is [50..250] ms). This is known as the refractory time.
- When the pacing engine begins, it will disable the sensor and current output; the sensor shall not be enabled for the length of the refractory time.
A scenario of use is shown in the figure below.
A state machine that describes the required behavior is shown below.
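To give a flavour of what “executable” means here, the following Python sketch approximates a normative state machine for the pacemaker user stories. It is a simplification for illustration only: the states, events, and parameter handling are reduced to the essentials, and timing is modeled as explicit events rather than real timers.

```python
# Simplified, illustrative state machine for the pacemaker user stories.
# States: Off -> Refractory -> Sensing -> Pacing -> Refractory -> ...

class PacingEngine:
    def __init__(self, pulse_amplitude_mv=20, pulse_length_ms=10, refractory_ms=150):
        self.pulse_amplitude_mv = pulse_amplitude_mv
        self.pulse_length_ms = pulse_length_ms
        self.refractory_ms = refractory_ms
        self.state = "Off"
        self.sensor_enabled = False
        self.pulses_delivered = 0

    def handle(self, event):
        if self.state == "Off" and event == "start":
            # On entry, sensor and current output are disabled for the refractory time.
            self.sensor_enabled = False
            self.state = "Refractory"
        elif self.state == "Refractory" and event == "refractory_done":
            self.sensor_enabled = True
            self.state = "Sensing"
        elif self.state == "Sensing" and event == "beat_sensed":
            # Intrinsic beat detected at or before the pacing rate: inhibit, do not pace.
            pass
        elif self.state == "Sensing" and event == "pace_timeout":
            # No intrinsic beat in time: turn the sensor off, then release current.
            self.sensor_enabled = False
            self.pulses_delivered += 1
            self.state = "Pacing"
        elif self.state == "Pacing" and event == "pulse_done":
            # Pulse complete; sensor stays off until the charge dissipates.
            self.state = "Refractory"
        return self.state

# Executing one scenario: start, wait out refractory, inhibit once, then pace.
engine = PacingEngine()
for ev in ["start", "refractory_done", "beat_sensed", "pace_timeout", "pulse_done"]:
    print(ev, "->", engine.handle(ev), "| sensor on:", engine.sensor_enabled)
print("pulses delivered:", engine.pulses_delivered)
```

Because the model runs, each user story can be exercised directly: feed in a different event sequence and check that the sensor is never enabled while pacing and that an inhibited cycle delivers no pulse.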
Of course, this is a simple model, but it actually runs, which means that we can verify that it is correct and use it to support validation with the customer as well. We can examine different sequences of incoming events with different data values and look at the outcomes to confirm that they are what we expect.
Verifying the Requirements
For the verification of consistency of requirements, we must first decide what “inconsistent” means. I believe that inconsistent requirements manifest as incompatible outcomes in the same circumstance, such as when a traffic light would be Red because of one requirement but at the same time must also be Green to meet another. Since the execution of the requirements model has demonstrable outcomes, we can run the scenarios that represent the requirements and show through demonstration that all expected outcomes occur and that no undesired consequences arise. For the verification of completeness, we can first demonstrate – via trace links – that every requirement allocated to the use case is represented in at least one scenario as well as in the normative state machine. Secondly, the precision of thought necessary to construct the model naturally raises questions during its creation. Have we considered what happens if the system is in THIS state and then THAT occurs? What happens if THAT data is out of range? How quickly must THIS action occur? Have we created all of the scenarios and considered all of the operational variants? These questions will naturally occur to you as you construct the model and will result in the addition of new requirements or the correction of existing ones.
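The trace-link part of this completeness check amounts to simple set arithmetic. In the sketch below, the requirement ids and trace links are invented for illustration; in practice they would come from the requirements and modeling tools:

```python
# Illustrative completeness check: every requirement allocated to the use
# case must trace to at least one scenario element and to the state machine.

allocated = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Hypothetical trace links from requirements to model elements.
scenario_traces = {
    "REQ-1": ["sc1.msg3"],
    "REQ-2": ["sc1.msg5", "sc2.msg1"],
    "REQ-4": ["sc2.msg4"],
}
state_machine_traces = {
    "REQ-1": ["evPace"],
    "REQ-2": ["stRefractory"],
    "REQ-3": ["actEnableSensor"],
}

missing_in_scenarios = sorted(allocated - scenario_traces.keys())
missing_in_state_machine = sorted(allocated - state_machine_traces.keys())

print("no scenario trace:", missing_in_scenarios)           # ['REQ-3']
print("no state-machine trace:", missing_in_state_machine)  # ['REQ-4']
```

Each requirement in either "missing" list is a completeness gap to investigate before the requirements are handed to the design team.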
By correctness, I mean that the requirement specifies the proper outcome for a given situation. This is usually a combination of preconditions and a series of input-output event sequences resulting in a specified post-condition. With an executable use case model, we can show via test that for each situation, we have properly specified the output. We can run the same scenario with different data values to ensure that boundary values and potential singularities are properly addressed. We can change the execution order of incoming events to ensure that the specification properly handles all combinations of incoming events. Accuracy is a specific kind of correctness that has to do with quantitative outcomes rather than qualitative ones. For example, if the outcome is a control signal that maintains an output proportional to an input (within an error range), we can run test cases to ensure that the specification actually achieves that. We can both execute the transformational logic in the requirements and formally (mathematically) analyze it as well if desired.
Be aware that this use case model is not the implementation. Even if the system use case model is functionally correct and executes properly, it is not operating on the desired delivery platform (hardware) and has not been optimized for cost, performance, reliability, safety, security, and other kinds of quality of service constraints. In fact, it has not been designed at all. All we’ve done is clearly and unambiguously state what a correctly designed system must do. This kind of model is known as a specification model and does not model the design.
Validating the Requirements
Validation refers to confirming that the system meets the needs of the customer. Systems that meet the requirements may fail to provide value to the customer because
- the customer specified the wrong thing,
- the requirements missed some aspect of correctness, completeness, accuracy or consistency,
- the requirements were captured incorrectly or ambiguously,
- the requirements, though correctly stated, were misunderstood.
The nice thing about the executable requirements model is that you can demonstrate what you’ve specified to the customer, not as a pile of dead trees to be read over a period of weeks but instead as a representation that supports exploration, experimentation, and confirmation. You may have stated what will happen if the physician flips this switch, turns that knob, and then pushes that button, but what if the physician pushes the button first? What has been specified in that case? In a traditional requirements document, you’d have to search page by page looking for some indication as to what you specified would happen. With an executable requirements specification, you can simply say “I don’t know. Let’s try it and find out.” This means that the executable specification supports early validation of the requirements so that you can have a much higher confidence that the customer will be satisfied with the resulting product.
So does it really work?
I’ve consulted on hundreds of projects, almost all of which were in the “systems” space, such as aircraft, spacecraft, medical systems, telecommunications equipment, automobiles and the like. I’ve used this approach extensively with the Rational Rhapsody toolset for modeling and (usually) DOORS for managing the textual requirements. My personal experience is that it results in far higher quality in terms of requirements and a shorter development time with less rework and happier end customers. By way of a public example, I was involved in the development of the Eaton Hybrid Drivetrain project. We did this kind of use case functional analysis, constructing executable use cases, and it identified many key requirements problems before they were discovered by downstream engineering. The resulting requirements specification was far more complete and correct after this work was done than in previous projects, meaning that the development team spent less time overall.
Summary
Building a set of requirements is both daunting and necessary. It is necessary because without it, projects will take longer – sometimes far longer – and cost more. Requirements defects are acknowledged to be the most expensive kind of defects because they are typically discovered late (when you’re supposed to be done) and require significant work to be thrown away and redone. It is a daunting task because text, while expressive, is ambiguous and vague, and it is difficult to demonstrate its quality. However, by building executable requirements models, the quality of the requirements can be greatly improved at minimal cost and effort.
For more detail on the approach and specific techniques, you can find more information in my books Real-Time UML Workshop or Real-Time Agility.
Today we have with us Theresa Kratschmer, writing about the importance of metrics in requirements management. Theresa Kratschmer is a senior software engineer who joined IBM T.J. Watson Research in 1996. There she worked on defect analysis, requirements, and Orthogonal Defect Classification deployment. In 2010, Theresa moved to sales, where she is a technical specialist focusing on the Rational Jazz products. Before joining IBM, Theresa developed software for real-time, graphics, and database applications in the medical electronics industry. She can be reached at theresak[at]us.ibm.com
Everybody is talking about metrics today. Numbers have always been important to builders, financial wizards, and politicians. If you were a software developer though, you may have taken a lot of math courses but you didn’t really use that math directly. So, why the sudden interest in software and metrics or specifically in requirements and metrics?
Metrics are important whenever you need accuracy, such as when building a house or sewing a dress. They are important when you need to work efficiently and can’t afford waste. In today’s business environment, every organization out there, whether a finance company, government agency, or healthcare organization, needs to work more effectively and produce more data, more projects, more software with far fewer resources. Numbers, or metrics, are the way to do this. Since requirements are the foundation of software, they are one of the best places to apply metrics. By applying measurements to requirements, we get insight into the organization’s requirements activities. We find out how much progress is being made, whether we have gaps in the downstream deliverables related to requirements, and how big our project risk might be when changes occur to those requirements. Applying measurements to requirements also allows us to make continuous improvements.
So if metrics are so valuable, why aren’t more people collecting them? There are primarily three reasons:
- It often takes too long to collect the metrics. By the time you end up gathering all the information you need, it’s out of date already.
- How do you interpret the information that you collect? Some people feel that if you don’t have access to a person with a PhD, you’ll never understand the numbers.
- Even if you collect the data and interpret it correctly, will the organization actually do anything with it to make improvements?
The best way to collect metrics, of course, is to do it automatically as the team goes through its daily activities. As team members define and refine requirements, process requirement reviews, and perform change management and quality management activities, data can be collected automatically, processed, and made available in real time. This is exactly what an integrated solution like the Jazz platform does. It removes this obstacle and makes the collection, processing, and display of data a natural part of the project team’s daily activity.
Interpretation of Data
Understanding data and the resulting reports does take some skill, but it is not complicated. I always start by articulating exactly what information I want to view or what questions I need answered. Then I look for the data that will help me answer those questions. For example, you might want to know which high-priority requirements have no test cases associated with them. One helpful capability of the Jazz suite is that reports have a section called “What does this report tell me?”, so when you run a report, some interpretation is built in automatically. Of course, I always recommend people start simply. Your skills will grow as you get more comfortable understanding and interpreting the data. Jazz also allows you to set up filters that help you answer specific questions at the click of a button. This makes analysis automatic and very easy.
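The kind of question described above, which high-priority requirements lack test coverage, can be sketched in a few lines of code. This is a hypothetical illustration with made-up data structures, not the Jazz or RRC API:

```python
# Illustrative sketch: finding high-priority requirements with no linked
# test cases. The Requirement class and the data are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    priority: int                       # 1 = highest priority, 5 = lowest
    test_cases: list = field(default_factory=list)

requirements = [
    Requirement("REQ-1", priority=1, test_cases=["TC-10"]),
    Requirement("REQ-2", priority=1),   # high priority, no test coverage
    Requirement("REQ-3", priority=4),
]

# The "filter" a tool would run: priority 1 or 2, and no linked test cases.
uncovered = [r.req_id for r in requirements
             if r.priority <= 2 and not r.test_cases]
print(uncovered)  # ['REQ-2']
```

A tool like Jazz runs this kind of query continuously against live data; the point of the sketch is only that the underlying question is a simple, mechanical filter once the data is collected.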
Actions for Continuous Improvement
Identifying and implementing actions is one of the most important aspects of ensuring continuous improvement. When analyzing data that indicates weaknesses in your process, it is very important to actually identify improvement actions. Not only do you need to identify how you will improve your project and the processes you follow, but you also need to assign owners and an implementation date to make sure the improvements actually happen.
I hope I’ve convinced you that metrics are important – well worth the time and effort it takes to collect and analyze data. By identifying and implementing improvement actions your organization can make considerable progress in working more efficiently and really achieving the ability to do more with less.
Anthony Kesterton is a Technical Consultant in the Financial Services Sector for IBM Rational in the United Kingdom. He has wide experience in IT development, including many years teaching, mentoring and using various IBM Rational tools, both as an end-user and as an IBM employee. He is a regular contributor to the forums on jazz.net. He is co-author of the IBM Redbook “Building SOA Solutions Using the Rational SDP”. As an IBM Community volunteer, he works on programmes dedicated to encouraging children to take an interest in Science, Technology, Engineering and Mathematics (STEM) at school. He also regularly writes in the IBM Technical Field Professionals' blog here. Anthony can be reached at akesterton[at]uk.ibm.com
One of the more interesting discussions I have had about requirements was how to indicate that a requirement must be implemented in the final system. The business wanted to make every requirement “mandatory”, which gives no indication to anyone which requirements are really important. Eventually, the business analysts compromised and had levels of “mandatoriness” (I think I might have invented a new term), ranging from “least mandatory” to “most mandatory”. The business was happy to accept this. To me, this showed the importance of gaining agreement on the requirement attributes. It should not stop at attributes: attribute values, requirement types, traceability and document templates are also very important for a project. All this information should be part of a Requirements Management Plan.
A Requirements Management Plan has nothing to do with time and resources on the project, but everything to do with the structure of the requirements. It is a way to capture the kinds of requirements that will be used on the project, their attributes, and other useful information. A typical Requirements Management Plan should contain the following:
- Types of requirements: For example, business requirements, system requirements, and performance requirements.
- Attributes: Important information associated with each requirement type, for example: priority, source of the requirement, or the stability of the requirement
- Attribute values and the meaning of these values: Each attribute should have a range of possible values, and these must be agreed upon and documented. For example, an attribute for priority may have a value from 1 to 5, where 1 means the highest priority and 5 means the lowest.
- Traceability between requirements: There is usually a hierarchy of requirement types. This hierarchy needs to be captured and explained. For example, a feature of the system may link to a software requirement, with the feature at the top of this hierarchy.
- Document templates: Many organizations have standard ways they present information in document form. They might have a Software Requirements Specification with a specific format and structure.
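The contents listed above can usefully be captured as data rather than prose, so that attribute values and traceability rules can be checked mechanically. The sketch below is a hypothetical example; the field names and values are illustrative, not a standard schema:

```python
# Illustrative sketch of a Requirements Management Plan captured as data.
# All names and values here are hypothetical examples.
rm_plan = {
    "requirement_types": ["business", "system", "performance"],
    "attributes": {
        "priority": {"values": [1, 2, 3, 4, 5],
                     "meaning": "1 = highest priority, 5 = lowest"},
        "source": {"values": "free text"},
        "stability": {"values": ["stable", "volatile"]},
    },
    # Agreed traceability hierarchy: child type -> parent type it links to.
    "traceability": {"system": "business", "performance": "system"},
    "document_templates": ["Software Requirements Specification"],
}

def priority_is_valid(value):
    """Check a proposed priority against the agreed attribute values."""
    return value in rm_plan["attributes"]["priority"]["values"]

print(priority_is_valid(3), priority_is_valid(7))  # True False
```

Writing the plan down this explicitly is exactly what forces the debates the post describes: every value in the dictionary is something the project has to agree on.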
Having the discussion about this plan, and getting agreement on its content, can be enlightening. It can uncover what kind of information is important to the project, how the project plans to manage the requirements (via the attributes), and most importantly, what kinds of requirements were overlooked. While the plan is being discussed, be prepared for heated debates about potentially every aspect of it.
Spend time creating a Requirements Management Plan for your project as soon as possible. Be prepared for changes to that plan over time as the project works out the really useful attributes or even requirement types. Most importantly, get agreement on the Requirements Management Plan – it really does help a project build requirements that clarify rather than obscure the intention of a project.
Neal Middlemore has over 14 years of experience in requirements management, encompassing the associated disciplines of requirements change management and validation/verification. Neal comes from an avionics systems engineering background and has been working with the DOORS product for over 10 years.
One of the most fundamental benefits that businesses want from requirements management tools is consistent traceability. It doesn’t matter whether it is an IT system or an aircraft carrier being developed: the levels of complexity involved mean that traceability across multiple levels of requirements, from stakeholder requests to detailed implementation, is not simple to maintain manually.
Further hurdles are put in our way by the need to comply with legislation; many different industries these days have requirements placed upon them by government and international standards bodies, along with internal corporate standards.
To prove compliance with these legislative rules, projects must not only show that requirements are being managed but also provide the how and where of their management. Which features relate to which stakeholder request? What aspects of the solution satisfy safety regulations? Has the realization of the requirement been tested, and by whom, on what platform?
To answer these questions it is necessary to utilize the traceability capabilities provided by modern tools. But it isn’t enough to let a tool, or individual users, decide how things relate to each other. Traceability needs to be considered in the larger context of how you report on it, to answer the very questions that led you to consider traceability in the first place. It’s more than ‘does object A relate to object B?’.
An initiative to define an information architecture for your governance solution will assist in defining your process artifacts and their relationships to one another. Traceability then becomes an asset rather than an overhead. A good way to begin such an initiative is to hold a workshop attended by all project stakeholders, i.e. those who have a vested interest in ensuring project success. These attendees will most likely have a good understanding of the process and what the project needs to deliver. Undoubtedly, each will also bring a different perspective and understanding of what information is required.
The test manager may want to see traceability from individual test artifacts through to the requirements being tested, or to determine whether regulatory compliance has been achieved and whether verification methods have been agreed.
A subject matter expert (SME) may want to see the design in the context of the system level requirements and how it relates to stakeholder requirements.
Everybody wants to see something that is applicable to their job role, often even things outside their typical discipline domain. By asking these questions and documenting the answers, you can start to put together an information architecture that makes sense for your project. It is likely that many information architectures will exist: not every project is the same, and each will have a different set of required attributes, views and reports of project information, and an agreed structure of inter-artifact relationships.
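One way to make such an information architecture concrete is to write down the agreed relationships as data, so that a proposed link can be checked against the architecture instead of being left to individual judgement. The sketch below is hypothetical; the artifact kinds and link names are illustrative examples, not any tool's vocabulary:

```python
# Illustrative sketch: an information architecture expressed as the set of
# allowed link types between artifact kinds. All names are hypothetical.
ALLOWED_LINKS = {
    ("stakeholder_request", "feature"): "satisfied by",
    ("feature", "software_requirement"): "decomposed into",
    ("software_requirement", "test_case"): "verified by",
}

def check_link(source_kind, target_kind):
    """Return the agreed link type, or None if the relationship is undefined."""
    return ALLOWED_LINKS.get((source_kind, target_kind))

print(check_link("feature", "software_requirement"))  # decomposed into
print(check_link("test_case", "feature"))             # None: not in the architecture
```

The workshop described above is essentially the process of agreeing on the contents of such a table; once it exists, a tool can enforce it.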
Modern tools can often create templates for this kind of information, allowing the deployment of additional artifact containers as and when required. These can even enforce traceability to ensure integrity across the project. Above all, it is vital that the needs of the project are documented and communicated to all project members alongside the information architecture.
Eric has worked in the software development industry for over 20 years and is co-author of UML for Database Design and UML for Mere Mortals, both published by Addison Wesley. Eric is currently responsible for capabilities marketing of Rational’s application lifecycle management solutions, including Agile Software Delivery, Quality & Test Management, Requirements Management and Collaborative Lifecycle Management. He rejoined IBM in 2008 as the team leader for InfoSphere Optim Solutions and later was responsible for Information Governance Solutions. Prior to rejoining IBM, he worked for Ivar Jacobson Consulting as VP of Sales and Marketing. Before joining Ivar Jacobson, he was director of product marketing for CAST Software. During his first stint at IBM, Eric held several roles within the Rational Software group, including program director for business, industry and technical solutions, product manager for Rational Rose, and team market manager for the Rational desktop products. He also spent several years with Logic Works Inc. (acquired by Platinum Technologies and CA) as product manager for ERwin.
As I think about IT today, I see in some ways a rebirth of the importance of architecture and requirements. We are in an era of “ANY”, meaning that applications and data can be accessed from anywhere, by anyone, at any time.
Looking back at the applications of yesteryear (two or three years ago), we didn’t expect much from web or mobile-based applications. We could view, run some reports, or do some basic tasks, but to do the real work, we needed to go to the fat client. Now, in today’s era of any, the user interface may look different, but the capabilities had better be the same, since we expect near-full capabilities no matter the device or interface.
This puts a newfound set of requirements on applications and their development, and is making modeling and requirements (analysis and design) relevant again, but with a new twist: AGILITY. It is no longer a question of “what platform am I developing for”; the question is how quickly we can get it up and running on the latest versions of Apple, Android, HTML5 and whatever other platforms our clients expect the application to run on … and it had better run on all of the latest versions, with no delays, when updated operating systems come out.
And the question I often receive now is, “Can I be agile and meet these needs at the same time?” The plain answer is yes, you can. However, agility doesn’t mean you can ignore requirements and design. I am not talking about write-once, run-anywhere, but rather about understanding the true requirements so that the various development teams can articulate them in code, brought to life as features for the users as they expect to see them. Users are looking for the application to be specific to their hardware/OS (iPad/AppleOS, Droid/Android…), as the hardware has become the platform not just for running the application, but for its expected look, feel and usability, too. This often means different developers for different deployment platforms, certainly at the user-interface level.
Designing applications requires that we are prepared. Architectures must be solidified and communicated. Requirements must be consistent and shared. We must model architectures so that developers can build to the designs and not recreate their own, wasting time and resources, and we must share those designs across the team.
Does this get in the way of agility? No, it speeds agility. By sharing designs and assigning tasks based on architecture needs, we can speed time to market and our ability to deliver high-quality software. In the era of any, we may have multiple teams working on the same front-end capabilities for different platforms even though the back-end is the same. The more those teams can share, the faster they can deploy; and the more they have the right requirements from users, the more satisfied those users will be. We see people changing their preferred platform as employers, vendors and suppliers change requirements, so we need to be prepared for the customer who is using an iPad today to be using an Android device tomorrow, with the same requirements on the application. Just look at how the world of BlackBerry has evolved.
So, as you think about your next project, don’t skimp on requirements and architectures or you may be limiting your agility in the future rather than speeding your time to satisfied clients.
The importance of communication and collaboration in developing and managing good requirements was discussed in our earlier post on How to enable effective requirements communication and collaboration. In this guest blog post, Melissa Robinson, a Senior Technical Specialist at IBM, writes about how Rational DOORS addresses this aspect with Discussions. Melissa started her career at Telelogic, enabling Product Management with technical support around requirements management. Melissa spent 3 years supporting clients getting started with requirements management at Telelogic. After IBM acquired Telelogic in 2008, Melissa transitioned roles to support clients with Enterprise Architecture initiatives. She received the Carnegie Mellon certification in Enterprise Architecture in 2008 and is TOGAF certified. Melissa now supports clients getting started with evaluating and implementing both requirements management and enterprise architecture solutions.
Note: Please click on the screenshots for a better view
Why did we make this decision? Who made this decision? Who approved this requirement?
These are some of the questions we can help answer with effective collaboration messaging in DOORS. Collaboration messaging is now enabled in DOORS and DOORS Web Access (DWA) with the addition of DOORS Discussions. Discussions allow users to contribute comments on requirement objects or requirement modules; users can even add comments to baselined requirements. They offer a way of having a conversation about requirements, breaking the communication barrier by letting users easily comment on, or start a discussion about, any requirement, including read-only requirements. Discussions can be created in DOORS or DWA and viewed in both. Both the DWA Editor and DWA Reviewer roles can contribute to Discussions. Because comments are captured, you can later review ancillary information about your requirements, and everyone can contribute to a full understanding of them.
Here is a simple scenario for using DOORS Discussions. A DWA Reviewer user creates a Discussion on a requirement. A DOORS user then reviews this comment and contributes a comment on the requirement. The DWA user reviews the latest comment and closes the Discussion.
A DOORS user, Susan, reviews the current Discussion created by a DWA Reviewer user, Kavita. Susan can open the requirements module with a pre-created Discussion view to review the Discussions. Below, Susan reviews the Discussion on Requirement AMR-STK-66.
Susan can contribute a comment to the open Discussion.
Kavita reviews the new Discussion comment in DWA. Notice that Kavita is a Reviewer in DWA. As a Reviewer, she can create and add comments to Discussions. Kavita can also close Discussions that she started. Later, Kavita can contribute another comment to the open Discussion.
As the person who first opened the Discussion, Kavita can close it. Later, in DOORS, Susan can review the latest status of the Discussion using the Discussion Thread view. With the Database Manager role, Susan can choose to re-open the closed Discussion at any time.
Discussions open up the communication thread between several different types of DOORS users. Discussions allow requirements reviewers to exchange views and comments about the content of a requirements module or the content of a requirement object in a module.
We hope this post gave you a sneak preview of how DOORS Discussions help various stakeholders collaborate and communicate effectively during requirements management. Feel free to contact melissarobinson[at]us.ibm.com if you have any queries about the topic. Melissa will discuss the topic in detail in an upcoming webcast on October 5, 2012. Don't miss the opportunity to watch the action live.
Register now @ http://bit.ly/DOORS_Discussions
In this post, Jim Hays writes about the various options available in IBM Rational DOORS for testing how requirements are met.
Jim works as a Senior Systems Engineer at IBM. He started his career in 1982 working for software providers. Jim’s career history is an interesting one: he has worked for only a handful of software companies, most of which were eventually acquired. He worked for Applied Data Research (7 years), which was eventually bought by Computer Associates. He then moved to Goal Systems, which was bought by Legent (6 years); Legent in turn was bought by Computer Associates. He then spent almost 10 years at Sterling Commerce, which was recently bought by IBM. After Sterling Commerce he joined Telelogic, where he got into the ALM market; Telelogic was eventually purchased by IBM Rational (7 years).
I’ve been involved with DOORS for over 7 years, and absolutely love the tool. My job at IBM is to technically support our sales team and our customers, not only for DOORS but for many other solutions we offer. I have had a lot of experience working with our DOORS customers and understanding how they use DOORS, and even though DOORS is a requirements management solution, customers put other types of information into DOORS besides requirements. One example is that customers will put test data into a DOORS module, enabling easy linking between the requirements and their related validation/verification results.
Note: Please click on the screenshots for a better view
Provided below is an example showing that. In this DOORS View we see traceability between 4 modules:
User Requirements>Functional Requirements>Functional Test Plan>Functional Test Cases
For years, DOORS has included a capability called the Test Tracking Toolkit. It captures test results in a DOORS module by duplicating attributes: each new test run gets its own set of attributes, so each run’s results are stored uniquely. Over time, this creates a lot of attributes for capturing and storing per-run test results. Both of the usages described for capturing tests and test results enable quite easy linking between DOORS requirements and their related tests and test results. Below are the options available when utilizing the Test Tracking Toolkit.
(Read in clockwise from top left)
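The per-run attribute pattern described above can be sketched in a few lines. This is an illustration of the idea only, with hypothetical names; it is not the DOORS DXL API:

```python
# Illustrative sketch of the per-run attribute pattern: each new test run
# adds a fresh result attribute to the object, so results accumulate over
# time. Names are hypothetical; this is not the DOORS implementation.
class TestObject:
    def __init__(self, name):
        self.name = name
        self.attributes = {}            # attribute name -> value

    def record_run(self, run_number, result):
        # Duplicating attributes per run, e.g. "Result (Run 2)"
        self.attributes[f"Result (Run {run_number})"] = result

tc = TestObject("TC-42")
tc.record_run(1, "fail")
tc.record_run(2, "pass")
print(sorted(tc.attributes))  # ['Result (Run 1)', 'Result (Run 2)']
```

The sketch also shows why the toolkit accumulates "a lot of attributes": every run adds another column to the module.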
So what are the positives and negatives of these two ways of using DOORS modules to capture test, validation, and/or verification information? The positive is the ability to store and easily link requirements to testing results using standard DOORS linking. If requirements that are linked to test-based modules change, then standard DOORS “suspect links” notify the folks maintaining the test-based DOORS modules of the requirement change, so they can see whether they need to update their test plan or retest the test case. Another question is who is maintaining and updating the test-based DOORS modules. Are the actual testers going into DOORS to update the test results? Or are the testers capturing test results in a spreadsheet and giving that to a DOORS user to update in DOORS? In my opinion, either of these scenarios is fine for projects that only do manual testing and don’t have a lot of testing (i.e. test runs) to perform. Another issue is that the actual testing can occur in different environments: for example, if one built a web-based application that will run on different operating systems, then one would need to test all of those configurations. If one is building an embedded software device or a software application, then one might want to not just do manual testing, but instead automate the execution and capture of results via automated testing tools.
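The "suspect links" behavior mentioned above can be sketched simply: when a requirement changes, everything linked to it is flagged for review. The code below is a hypothetical illustration of that idea, not DOORS itself:

```python
# Illustrative sketch of "suspect links": when a requirement changes, every
# linked test artifact is flagged for review. Names and data are hypothetical.
links_to_tests = {"REQ-12": ["TC-1", "TC-2"]}
suspect = set()

def on_requirement_changed(req_id):
    """Flag all test artifacts linked to the changed requirement."""
    for test_id in links_to_tests.get(req_id, []):
        suspect.add(test_id)   # tester must re-check the plan / re-run the test

on_requirement_changed("REQ-12")
print(sorted(suspect))  # ['TC-1', 'TC-2']
```

In DOORS the flag is cleared once the owner of the test-based module has reviewed the change; the sketch only captures the notification half of that workflow.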
In my opinion, a solution like DOORS is great for requirements management; however, I believe the folks who are in charge of quality, or who perform the testing, should have a solution suited to the role they play in a project: test management. So the final option for testing I will discuss is how DOORS (managing requirements) can integrate with IBM’s test management solution, Rational Quality Manager (RQM). RQM provides a nice environment for supporting both manual and automated testing.
Provided below are an example “dashboard”, which users can configure based upon what they would like to see, and an example of an RQM Test Plan.
Provided below is an example of a DOORS view showing the requirements from DOORS that are known to this particular RQM project. Requirements-driven testing enables requirements from DOORS to be used to automatically generate test cases and to build specific links between the DOORS requirements and those test cases. The screenshot on the right shows the results of that automated link creation, with traceability from test cases to DOORS requirements; it could also show development software assets.
The integration between DOORS and RQM utilizes OSLC (Open Services for Lifecycle Collaboration). Below is a “rich hover”, which shows details about linked items without actually having to navigate the link. One can also see the results of the test execution (pass or fail).
As the testers do their work, the integration can map data from RQM into DOORS-based attributes. Below is an example showing the traceability between DOORS requirements and the testing side of things in DOORS; I can see that the latest test case run passed. Also provided below are screenshots showing coverage analysis relating the DOORS requirements to the Test Plan and associated test cases.
Finally, the screenshot provided below shows the test case execution results that were produced via RQM. These are mapped to DOORS attributes via the bi-directional integration, and regular DOORS sorting and filtering can be used, for example to see which test cases failed or passed.
Hope the blog post was useful. Feel free to contact Jim @ haysji[at]us.ibm.com if you have any queries regarding the options for testing in DOORS.
Also, Jim will host a webinar on the same topic, in which he will explore in depth the ideas presented in this post about the options for testing in conjunction with DOORS. Register for the session here - http://bit.ly/DOORS-Testing_Options
Date: September 7, 2012 (Friday)
Time: 1PM EDT
There is no doubt that the evolution of computing has moved into the era of the mobile device and any business that wants to remain competitive in an increasingly difficult economic environment needs to embrace this with the care and attention it deserves.
With over a million devices being activated every single day it is predicted that by 2013 mobile phones will overtake the PC as the most common web access device worldwide!
Companies that simply try to tweak their existing web sites to run in mobile browsers are missing a trick, as users expect very sleek interfaces that make use of their device’s capabilities, such as geo-location and the camera. With this in mind, the perceived quality of the application comes from both its functionality and, perhaps more importantly, its design. The design of the application’s user interface is also critical when trying to improve brand loyalty, and therefore businesses are keen to see their brand image extended to mobile devices, where they can reach a much wider audience.
So where does Requirements Management fit in?
Well, a good requirements management process that is incorporated into the wider development life cycle will allow the business to communicate exactly what is required of the mobile application, enabling the development team to be clear on what needs to be created, and testers to begin writing test cases earlier in the lifecycle. With the need for good user interface design, it is important that requirements are not purely textual, and so with tools like Rational Requirements Composer, Business Analysts can model Use Cases and Business Processes and also visualize the user interface through UI Sketches, Screenflows and Storyboards. This means the business can be very clear on what the expected result should be and remove unnecessary ambiguity that only slows down development and ultimately prevents applications from going to market quickly.
One significant trend in the development of mobile apps is the adoption of Agile methodologies such as Scrum, which fits in well with the short timescales and rapid change and release management that mobile apps have. This makes it even more important for the Requirements Management process to also be agile and allow greater collaboration not only within the Business Analysts team, but also with the other teams involved such as development and testing. A web based requirements management tool like RRC encourages wider engagement with stakeholders and also provides dashboards with live information, collaborative reviews and reporting to promote visibility and allow decisions to be made quickly and effectively.
Traceability is one of the cornerstones of good Requirements Management, because without it you cannot determine whether the resulting product has actually satisfied the original requirements outlined by the business. RRC is part of the wider solution called Collaborative Lifecycle Management, which allows requirements to be linked to the resulting development work items and associated test plans, test cases and defects. This means that from any given requirement you can see exactly which development task is going to implement it, which test case is going to test it, and what defects were found in relation to that requirement.
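The traceability question just described, from a requirement to its work item, test cases and defects, amounts to a simple lookup once the links exist. The sketch below is a hypothetical illustration with made-up identifiers, not the RRC or CLM API:

```python
# Illustrative sketch: answering "what implements, tests, and broke this
# requirement?" from recorded links. All identifiers are hypothetical.
links = {
    "REQ-7": {"work_item": "WI-101",
              "test_cases": ["TC-55"],
              "defects": ["DEF-9"]},
}

def trace(req_id):
    """Return (implementing work item, test cases, defects) for a requirement."""
    info = links.get(req_id, {})
    return (info.get("work_item"),
            info.get("test_cases", []),
            info.get("defects", []))

print(trace("REQ-7"))  # ('WI-101', ['TC-55'], ['DEF-9'])
```

The value of a tool like CLM is that these links are created and kept current as part of normal work, so the lookup is always answerable.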
In summary, mobile application development is an exciting and important part of any business plan and due to its inherent complexity you really need the right tools to make it a success.
There have been interesting discussions about whether and how requirements should be managed in agile projects. Some believe that requirements management is meaningless; some believe it is still a critical factor irrespective of the methodology one follows. Here we bring you an interview with Mary Gorman, an Agile Business Analysis Expert at EBG Consulting, in which she argues that business analysis is essential for agile success.
Mary Gorman, CBAP, CSM, is VP of quality and delivery at EBG Consulting, whose experts help deliver high-value products that delight customers. Mary works with global clients, speaks at industry conferences, and writes on requirements topics for the business analysis community. In addition to serving on the IIBA® Business Analysis Body of Knowledge® Committee for four years, Mary helped create the first certification exam for the Certified Business Analysis Professional™ (CBAP®).
We've heard comments like "you can throw away all that requirements and analysis stuff now that we're going Agile." Have you come across this, and what's your view?
It's a common misconception. In fact, requirements drive agile teams! At EBG Consulting we find when agile teams collaboratively analyze requirements, they can speed development and delivery of high value products. The ability to be focused, nimble and disciplined about your requirements is essential for successful agile delivery.
So how does business analysis change as you adopt Agile practices?
You plan and analyze regularly to support a steady flow of product delivery, a hallmark of successful agile teams. On agile projects, we plan to re-plan. A plan represents your allocation of requirements (really, options for satisfying product needs) to delivery cycles. Rather than trying to acquire all the possible requirements upfront and create one big plan, you plan continually, using feedback from prior deliveries to adjust your plan. This in turn means you are continually analyzing requirements to discover high-value options for the next delivery. Planning and analysis are interdependent and synergistic. See this article for more details.
Agile analysis is done just in time: you want your requirements to be “fresh”. You adjust the precision and granularity of requirements, taking a just-enough approach, and you make use of good analysis tools and techniques. For example, you might sketch a context diagram to quickly visualize the interfaces needed for a release, a minimum marketable feature, a use case or a story; or organically explore requirements using a data model or state diagram. For more ideas on tuning analysis for agile, visit Agile Analysis Challenges
What about the role of the business analyst in Agile?
In our forthcoming book, my co-author, Ellen Gottesdiener (EBG’s founder and president), and I write about a product partnership that collaborates to discover and deliver valued products. The partners include diverse perspectives from the business, customer and technology communities. We have found this partnership is critical for agile product success. (The book’s title is Discover to Deliver: Agile Product Planning and Analysis. Read more about it here.)
Often business analysts ask us where they ‘fit’ in this partnership. Our response is to ask, “What are your skills?” An agile team needs strong skills in analysis, modeling, elicitation, facilitation, risk analysis, prioritization, strategic thinking, verification and validation along with a sound understanding of the product’s domain. The person who possesses a combination of such skills will be a valuable and valued team member.
Who should attend the webcast on How Business Analysis is Essential to Agile Success and what will they gain from it?
The webinar’s content has broad applicability. It may benefit anyone involved with planning, analysis, valuation, validation and verification; teams and organizations transitioning to agile; product champions and product owners who are responsible for deciding which product options to deliver; and anyone on an agile team who recognizes that user stories, user story maps and personas are often not enough to communicate product needs. The webinar provides ideas for holistically exploring and evaluating product options within a framework the agile team can use to reach a shared understanding of high-value product needs.
Mary will join us on Wednesday, June 27, 2012 for a webinar in which she will expand on the discussion above to build a case that business analysis is your key to:
Discovering product options
Collaborating to create Agile plans
Conversing daily about what to deliver
Adapting your product with each delivery cycle to respond to changes in business needs
Maintaining a constant flow of business value
Register for the webinar here
A replay will be posted in case you miss the webinar on Wednesday. We believe this interview gave an insight into why business analysis is critical to agile projects. For a more interactive discussion, join us in the webinar.
What do you think about requirements in agile? What level of requirements management and analysis do you follow in your agile projects?