The IBM Rational DOORS family of solutions offers best practices in requirements management and traceability, saving organizations time and money through improved collaboration with stakeholders that eliminates inaccurate, incomplete, and omitted requirements.
DOORS Next Generation is a requirements management application for optimizing requirements communication, collaboration and verification throughout your organization and supply chain. This scalable solution can help you meet business goals by managing project scope and cost. Rational DOORS lets you capture, trace, analyze and manage changes to information while maintaining compliance with regulations and standards.
Top three reasons your organization needs DOORS Next Generation
- Reduce development costs by up to 57%
- Accelerate time to market by up to 20%
- Reduce cost of quality by up to 69%
Key features:
Centralized location : Managing requirements in a centralized location improves team collaboration and provides access to full editing, configuration, analysis and reporting capabilities through a desktop client. It also supports the Requirements Interchange Format (ReqIF), enabling suppliers and development partners to contribute requirements documents, sections or attributes that can be traced back to central requirements. It records and displays requirements text, graphics, tables, requirements attributes, change bars, traceability links and more.
Link requirements : Establish traceability by linking requirements to design items, test plans, test cases and other requirements. Users can concurrently edit separate product and system requirement documents and link entries between documents. Requirement entries can also be linked to models, text specifications, code files, test procedures and documents created with other applications.
Scalability : Address changing requirements management needs with an explorer-like hierarchy of multiple levels of folders and projects for simple navigation, no matter how large the database grows.
Change management : Integrations help manage changes to requirements with either a simple, predefined change-proposal system or a more thorough, customizable change-control workflow. Rational DOORS integrates with Rational change management software for requirements change control and requirements workflow management. It also integrates with other Rational solutions, including IBM Rational Quality Manager, IBM Rational Rhapsody and IBM Rational Focal Point, and with third-party tools such as HP Quality Center, which gains visibility into requirements so that test cases can be created against them, traced to them, and reported on for requirements coverage. An integration with Microsoft Team Foundation Server (TFS) enables Microsoft Visual Studio development teams to create and maintain traceability between requirements in Rational DOORS and TFS work items in Visual Studio.
If your organization wants to replace outdated and expensive legacy tools and needs better control over multiple versions of documents, IBM appreciates the opportunity to discuss requirements with you. If you would like to see a live demo, please click dngdemo.
Would you like to start benefiting from IBM Rational DOORS Next Generation? Start your free trial today: Free Trial
Yet another edition of Innovate - The IBM Technical Summit is knocking at your door!
The 2014 event promises to be even more exciting with top-notch keynotes, over 450 breakout sessions, labs, certifications and our biggest exhibit hall ever. As in previous events, Requirements Management is one of the key areas of interest at Innovate, attracting speakers and attendees from across the globe and representing a wide range of industries. In 2013, we had two tracks for Requirements Management with sixteen sessions each, one track focusing on IT and the other on Systems Engineering. We had 25 real-life case studies, 2 panel discussions and 4 instructor-led sessions.
Managing requirements has always been a cornerstone of both software and systems development. The importance of the discipline continues to grow, and it is expected to take a leading role in the coming years. This is an opportunity to showcase your thoughts on the discipline and on how requirements management tools like DOORS or Requirements Composer can help manage requirements effectively for project success. Here are some of the topics from last year, along with an expected list of topics:
· Requirements Management in Agile Projects
· Managing requirements in developing Safety Critical Systems
· Requirements engineering and supporting layered requirements and models
· Requirements Reuse: Methods and best practice
· Requirements management for complex systems and teams
· Requirements definition and management case studies
· Best practices in aligning business goals and IT
· Value-based requirements engineering
· DOORS, Requirements Composer and other Rational products best practices
Some session topics from Innovate 2013
· Managing Parallel Streams of Requirements in DOORS at an Automotive OEM
· Successful RRC adoption through a Community of Practice: Case Study from Blue Cross Blue Shield of North Carolina
· Agile Requirements: Maintaining the Model Through Iterations (Emerging Health)
· Cerner Corporation: Migrating from Requisite Pro to RRC
Share your experience, thoughts and best practices on requirements at an event attended by industry experts and IBM core development teams. Here are the top three reasons why you should submit your paper for Innovate 2014.
Submit your papers before February 7, 2014 and stand a chance to present at Innovate 2014! For more details, visit https://www-950.ibm.com/events/tools/innovate/innovate2014ems/screens/intro.xhtml
We had a fantastic DOORS customer webcast panel on September 26, with experts from across industries talking about their experience using IBM Rational DOORS for requirements management. If we had to pick one success metric for the webcast, it was that we overshot the designated one hour by half an hour because of the wonderful discussions going on. Our panelists have graciously agreed to respond to the remaining questions offline, and we are publishing the answers here.
If you missed this golden opportunity and are wondering whether you can get another chance: YES! Or if you would like to listen to it again, we have posted the replay, which you can listen to anytime. Register here
How do you copy objects with their in-links and out-links from one module to another?
Use DXL scripts that capture the link information (which can be edited if necessary), then copy the objects, and then run another DXL script to re-create the links. A minimal sketch of the idea appears below.
Paul Lusardi has shared a couple of samples with us. If you are interested, contact us.
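To make the approach concrete, here is a minimal DXL sketch (not one of Paul's samples; the module paths and link-module name are illustrative assumptions, and the two scripts are collapsed into one pass for brevity). It assumes the copied module preserves absolute numbers, walks the out-links of the original objects, and re-creates each link from the corresponding copied object; in-links would be handled symmetrically from the other modules' side:

```
// Re-create out-links on a copied module. All paths are assumptions.
Module src = read("/Demo/Requirements", true)        // original module
Module dst = edit("/Demo/Requirements Copy", true)   // copied module
Object o
Link lnk
for o in src do {
    int srcNo = o."Absolute Number"
    for lnk in o -> "*" do {
        Object tgt = target lnk        // null if the target module is not loaded
        if (null tgt) continue
        Object newSrc = gotoObject(srcNo, dst)
        if (null newSrc) continue
        // Assumed link-module path; substitute your own link schema here.
        newSrc -> "/Demo/Links/Satisfies" -> tgt
    }
}
```

In practice you would run something like this only after verifying that absolute numbers really do map one-to-one between the original and the copy, which is exactly what the editable intermediate capture file in the two-script approach guards against.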
What factors determine using a new database vs. a new project in a database to differentiate/segregate work?
The biggest criterion for using the same database is linking. If there are projects in a database that you need to link to, the new project should be in that same database. The default should be to use the same database so that you only have to maintain one set of users, groups, standards, and so on. The possible exception is if you have two distinct sets of security measures (HIPAA, DoD or DoE cleared data); then you will want to segregate that out to a different server.
Does anyone on the panel publish documents directly from DOORS, without add-on applications?
[Patrick] I use DXL scripts that export the data to RTF with some typical publishing formatting.
[Mia] I do. Whether you can depends on the demand for documents (do you need to do them once a week? Daily? Quarterly?) and how critical their formatting is to your organization. We have configured views in most modules for output. A simple one has just the object number and the Object Heading/Object Text. For a full spec, we include trace columns for one or more linked modules, with object identifiers and even object headings or text in those columns. In the output you get the main object from the current module, followed by all the objects it’s linked to. I keep in a label that specifies “in link” or “out link.”
I used to painstakingly populate the Paragraph Style attribute in order to map to MS Word, but because our need for documents is very low I have stopped enforcing this. We have a set of MS Word templates with the title page, table of contents, other front matter, and a standard set of styles. Plain vanilla DOORS maps to basic Word styles fairly well. I pick the appropriate view and filters and select Export to Word. I select the appropriate Word template, and let it rip. I then do some minimal post-processing, like getting rid of the object IDs on headings. Obviously, if you’re doing very long documents this would be impractical. In my previous job we had interns create a DXL script that handled the post-processing; you can get to something that’s usable without a huge investment.
How do you deal with sharing your database with the government or other contractors? Do you share read-only partitions?
[Paul] Before coming to NuScale, I co-developed the Unified RM Database, where we allowed outside access using access control and company VPN login credentials to enable multi-company development and reviewer teams to look at the data. The questions would be: do you want them to see work in progress, or just a snapshot of baselined work? Perhaps a separate server with a restored baseline set would be best in that case. I will defer to an expert on baseline sets, though.
How do you stop users from creating links in the default DOORS Links module?
[Patrick] We use a link schema with pairings as necessary, and then create and delete (but do not purge) a copy of the DOORS Links module in every sub-project/folder, leaving it as the default link module. That way, users cannot create an unintended link between any module pair.
Paul Lusardi has shared a presentation with us. If you are interested, contact us.
Is there a "Copy View" script available?
Paul Lusardi has shared a presentation with us. If you are interested, contact us.
Do any of the panelists document FMEA or risk analysis in DOORS? How do they do it?
[Paul] Yes and no. Textbook FMEA? No. Using DOORS to analyze potential risks by looking at risk-informed design? Yes; this is a “developing” activity.
[Patrick] We capture the risk status in DOORS, but not the actual assessment data.
Do any of the panelists integrate DOORS with other tools like Rational Quality Manager?
[Patrick] We are looking into it, but have not actually started using it in production. We are going to deploy DOORS/CQ by year end.
[Paul] Only RPE integrations are used here.
How has your work differed because you use Scrum vs. when you did not?
Our pre-sprint requirements work is at a higher level; that is, not down in the functional weeds. We spend more time exploring new features from the user’s perspective, and more time on storyboarding and similar activities. The features we’ve built since going agile are much more user-friendly and have much better workflows than the older ones, where many of the workflow and usability decisions were left to the engineers.
Detailed functional analysis of requirements happens during the sprint, with the requirements analyst on the team working closely with the engineers and testers. For very complex features, this analysis process will begin before the sprint so that we have enough functional detail for the team to estimate the work. We have a small backlog grooming team, made up of members of the scrum teams, who help the requirements analyst with this.
For Mia: for your agile process, how many full-time admins are needed to maintain DOORS?
[Mia] We’re very small, so we don’t even have one full-time admin. Our system engineer handles the environment, and I handle user administration. It’s a tiny part of each of our jobs. I don’t see that our approach represents a particularly different administrative need than any other. If we had a hundred users and multiple DOORS databases, then we’d dedicate administrative resources just like any other implementation.
@Mia - Are requirements for HW to SW interfaces traced through ICD(s) in DOORS?
We’re software only, so no HW/SW interfaces. However, I strongly support the use of interface specifications between software systems and have used an interface module to sit in between systems that need to interact. The trick is to stay on the requirements side of the requirements/design line. The cost of maintenance on such a model is high – probably too heavy for our agile process to maintain. However, should we need to support integration with an external application (one that our teams are not developing), our API requirements will undergo close scrutiny and possibly be carved out into a separate module so that we can use trace links to do impact analysis on external “clients.”
Does DOORS lend itself to any specific DevOps or ALM methodology, like agile?
We adopted a continuous deployment model this year and it had no impact on our use of DOORS. That is, DOORS is still a critical part of our development tooling. We were already capturing release version information for requirements in DOORS, and this information remains very useful in understanding what feature set exists in a given version of our application.
Try IBM Rational DOORS in a Sandbox
Webcast - Achieving sustainable requirements across the supply chain with IBM Rational DOORS
Prof. Lawrence Chung (firstname.lastname@example.org) is in Computer Science at the University of Texas at Dallas. He has been working in System/Requirements Engineering and System/Software Architecture. He was the principal author of the research monograph “Non-Functional Requirements in Software Engineering", and has been involved in developing “RE-Tools” (a multi-notational tool for RE) with Dr. Sam Supakkul, “HOPE” (a smartphone application for people with disabilities) with Dr. Rutvij Mehta, and “Silverlining” (a cloud forecaster) with Tom Hill and many others. He has been a keynote speaker, invited lecturer, co-editor-in-chief of the Journal of Innovative Software, editorial board member of the Requirements Engineering Journal, editor for the ETRI Journal, and program co-chair for international events. He received his Ph.D. in Computer Science in 1993 from the University of Toronto.
What are non-functional requirements (NFRs)?
NFRs colloquially have been called “-ilities” and “-ities”, since many words referring to NFRs end with “-ility” (e.g., usability, flexibility, reliability, maintainability) or “-ity” (e.g., security, integrity, simplicity, ubiquity). There are of course many other words that do not end with either “-ility” or “-ity”, such as performance, user-friendliness, power consumption, and esthetics, but still refer to NFRs.
Functional requirements (FRs), in contrast, are about functions, activities, tasks, etc. that may accept some input and produce some output.
Consider, for example, “add” (“+”) on a calculator, which adds two numbers given as input and produces another number as output shown on the screen. Now suppose you type “2 + 3 =” now, and the calculator shows “5”, but only one year from now. In this case, the “add” on the calculator is functionally correct but non-functionally terrible, in particular concerning performance.
As even this simple example shows, a system which fulfills only functional requirements is oftentimes not usable, or even not useful.
So, handle NFRs and handle them appropriately. Don’t spend time only on FRs.
The “soft” Characteristics of NFRs and how to deal with them:
NFRs are global, subjective, interacting and graded.
FRs, such as “The calculator shall offer an “add” function”, are local in the sense that they are specific to particular functions and not applicable to other functions, or globally to other systems, such as a “subtract” function or a banking system. However, NFR terms such as “performance” can be applied to many other functions and systems, such as a “subtract” function and a banking system, and also to parts of such functions and systems.
In contrast to FRs, NFRs are subjective in both their definitions and the manner in which they need to be met; some are more subjective than others. Concerning definitions, for example, usability may mean simplicity and the availability of many help facilities to some people, while to others it may mean something different, such as a minimal learning curve and fast response. The manner whereby NFRs are seen as satisfactorily met also depends on the (perception of the) user. For example, the keyboard with tiny keys on a smartphone may be usable to young people but not to older people. Also, large keys may be good enough for some people using a smartphone, while for others a context-sensitive help may additionally be needed for the smartphone to be considered usable.
So, clarify the definitions of NFRs. Don’t assume they have unanimously agreeable definitions.
So, operationalize NFRs. Don’t just state them without specifying how they can be met.
NFRs are also interacting with each other, either synergistically or antagonistically or both. For example, a heavy authentication mechanism, for the purpose of enhanced security, may be hurting usability. If it takes three different passwords, which have to be changed every month and should consist of at least one special character, one digit, one upper case character, one function key, etc., in order to get in the system, the user of the system is unlikely to feel that the system is user-friendly. Hence, a conflict between security and user-friendliness. But a heavy security mechanism may help prevent unauthorized people from entering fake data in the system, hence a synergy between the security and the accuracy of data.
So, identify conflicts among NFRs. Don’t think you can do anything with them individually without any negative consequences.
So, identify synergies among NFRs. This is how we get “the whole becoming bigger than the sum of its parts”.
NFRs are graded, in the sense that they are usually met to different degrees. For example, an “add” function may be seen to be very good, good, bad or very bad, concerning its performance or usability, and different ways to implement the add function may affect the function differently – e.g., fully positively, partially positively, fully negatively, and partially negatively.
So, consider the degree of contributions between NFR-related concepts. Don’t simply think NFR-related concepts affect each other in a binary manner – either a complete satisfaction or dissatisfaction.
In a nutshell, NFRs cannot be defined or met absolutely in a clear-cut sense; in other words, they are soft.
So, satisfice NFRs. Don’t think NFRs can be satisfied absolutely, whatever the term “absolutely” might mean.
Product- vs. process-oriented approaches:
In science, objective measurements are important. But, are we mature enough to do that in system/software engineering? Also, consider:
“Not everything that can be counted counts, and not everything that counts can be counted.” [Albert Einstein]
According to this wisdom, it seems we should measure important NFRs, and only when we can. For example, you wouldn’t say “I love you 8 love units tonight”. It also seems that we need to shift our emphasis from measuring how well NFRs are met by a system/software artifact to how to handle NFRs during the process of developing the artifact, in such a manner that the resulting artifact can be measured well.
So, treat NFRs as (soft)goals to satisfice. Do not repeatedly develop, scrap, and redevelop a system that does not meet the expected NFRs until a good system is finally produced.
Rationalize decisions using NFRs:
A (functional) problem may be solvable in many different ways. For example, break entries may be stopped by having a security guard, a housedog, a fortified gate, a home security software system, etc. Similarly, a (functional) goal may be achievable in many different ways also. Which one do we decide to choose and how? We use NFRs as the criteria in making the decision on selecting among the (functional) alternatives. Furthermore, NFRs treated as softgoals naturally lead to the consideration of such alternatives, among which a selection is made.
So, use NFRs as softgoals in exploring alternatives and also as the criteria in selecting among them.
How many NFRs are out there?
There can be many FRs. How about NFRs? If we go through a reasonably comprehensive dictionary and count how many words end with “-ility”, “-ity”, “-ness”, etc., this might give a hint. It’s not on the order of tens or even hundreds, but potentially thousands or tens of thousands. Alas, we have resource limitations: a limited amount of time and money, limited memory and reasoning capabilities, and so on.
So, prioritize NFRs and their operationalizations throughout the softgoal-oriented process. Don’t simply claim “Our system satisfies all the possible NFRs and absolutely”.
A leading analyst and systems engineering expert, David Norfolk of Bloor Research, recently published a white paper titled Reducing the risk of development failure with cost-effective capture and management of requirements. In this report, David delves into the relevance of requirements management as a discipline and puts forth his views on how the domain is changing with the advent of new development paradigms such as agile, mobile and DevOps.
If enterprise architecture helps to bridge the gap between business strategy and vision and their implementation in technology from the CEO’s point of view, requirements management continues to help bridge the gap between business and technology at a lower level.
The report provides valuable insights, with deep coverage of why requirements management is even more relevant today, the issues associated with managing requirements, the challenges faced by the discipline, best practices from his experience, and his thoughts on the capabilities of an ideal requirements engineering tool. Some of the topics the whitepaper discusses are:
Challenges in requirements management
Managing changing requirements
Scope and ideal capabilities of a requirements engineering tool
Real life examples of benefits from investing in requirements
Read the whitepaper here - Reducing the risk of development failure with cost-effective capture and management of requirements
Today, we are starting a new series of interview blog posts: Coffee Time with Requirements Experts. Through this series, we bring you the thoughts, career experience and advice of experts from the industry. Enjoy!
We have with us Jared Pulham, a Senior Product Manager at IBM. He focuses specifically on requirements management tools and capabilities for Jazz. Jared is responsible for one of IBM's requirements management products, Rational Requirements Composer. He has over 15 years of industry experience in software testing and development, with a background in many companies across industries. He joined IBM through the Telelogic acquisition, where he was a Director of Product Management. He regularly writes at the jazz.net blog. He can be reached at jared.pulham[at]uk.ibm.com
Q. Throughout your career in the Software development industry how have things in the field changed?
I have watched development processes improve and change, from waterfall models to the adoption of the faster development models proposed under the agile manifesto. Likewise, the tools, and their emphasis on features and concepts to support different roles in the development process, have continued to revolutionize and change the way organizations work.
Q. You have been mostly associated with the discipline of requirements management in your career; what attracted you to it?
I believe that requirements are the driver for shaping businesses and are the mechanism for deciding what should be developed to meet the customer's need. As a business and tool leader myself, I feel this is the right place to help my own and other organizations achieve similarly successful results.
Q. How do you see tools and techniques helping professionals in a requirements domain?
Tools help bring development teams together to collaborate and out of silos. They allow teams to organize development requirements in structures that make them easy for everyone to understand, to see and spot gaps where development is missing, and to easily recognize changes in the project.
Q. What do you think are some of the challenges faced by Business Analysts or Requirements Engineers today?
Understanding how they can work in faster projects, adjusting to agile and iterative processes. Knowing how to help meet business goals and objectives through joined-up thinking and development.
Q. What interests you outside your job?
Sports, sailing, and technology improvements in mobile and Web devices for lifestyle.
Q. How do you keep yourself current in this fast changing technology field?
Working with many customers across industries and markets. Writing blogs and papers that help challenge thinking about requirements in development, as well as speaking at conferences, where I get a chance to meet other thought leaders in development processes and tool development.
Q. What's your advice to budding analysts/engineers considering focusing on requirements processes or tools?
Focus on understanding the market drivers for your specific industry, because customer demand (requirements) will always drive a project, and understanding how to translate that demand into use cases, business cases and actual development content will help you better support your organization. Once you understand those requirements, look at the other members of your team and understand who will best benefit from them to improve the business (through development).
Prof. Neil Maiden is Professor of Systems Engineering at City University London. He is and has been a principal and co-investigator on numerous EPSRC- and EU-funded research projects. He has published over 150 peer-reviewed papers in academic journals, conference and workshop proceedings. He was Program Chair for the 12th IEEE International Conference on Requirements Engineering in Kyoto in 2004, and was Editor of IEEE Software’s Requirements column from 2005 until earlier this year. He can be reached at N.A.M.Maiden[at]city.ac.uk
Requirements work is still regularly perceived as stenography: the analyst listens and documents while the stakeholders say what they want. This perception is reinforced by the requirements techniques that we use on most projects, such as the observations of work that we make, the interviews with stakeholders that we hold, and the questionnaires that we distribute to collect data about problems and requirements. These techniques hardly set the pulse racing. Nor do they help us discover stakeholders’ real requirements.
Stakeholders don’t know what they want
One reason for this is that eliciting requirements relies on stakeholders knowing what they want and need. However, most stakeholders do not know what they want or need. They are limited by their perceptions of what is possible – what new business models can offer and new technologies can enable. Your average stakeholder is neither a business visionary nor a technology watcher. So is it surprising that their answers to your interview questions are so, well, banal?
Indeed, many businesses have come to realize that customers are more often rear-view mirrors rather than guides to the future. A new approach is needed – one that empowers your stakeholders. My advice? If you want to discover your stakeholders’ real requirements, encourage the stakeholders to create them.
Make them up.
Why not? After all, when you interview someone, the requirements that they report to you are the results of their own, often limited, creative thinking about a new system, creative thinking that you are capturing only at the end, too late to influence.
In this blog post, I argue it is more effective to get in earlier – for analysts to facilitate creative thinking about requirements as soon as requirements work starts. Think of requirements as the outcomes of creative work – desirable inventions that your stakeholders are guided to come up with. After all, many of your digital solutions should be giving you some form of business advantage, and establishing this advantage starts with requirements – what the solution will give to your business.
Creativity in Requirements Work
Perhaps to the surprise of many in software engineering, creativity is well understood. Many different definitions, models and theories of creativity are available, from domains ranging from social psychology to artificial intelligence. As a software engineer, I was delighted to discover how well the phenomenon has been studied. And how much software engineers could leverage from it.
I like the definition of creativity from Sternberg and Lubart. I consider it prototypical of many of the definitions out there. Creativity is:
“the ability to produce work that is both novel (i.e. original, unexpected) and appropriate (i.e. useful, adaptive concerning task constraints)”
Creative problem solving methods have been available since the 1950s. What is striking about many of these methods is their similarity to the software development methods that emerged 25 years later. The Creative Problem Solving (CPS) method guides people through activities such as problem finding, goal finding and solution acceptance, stages similar to the analysis and testing phases of software development. What is different is the focus on creative thinking at each of these stages: creative thinking to maintain a critical advantage in business.
My team at City University London has been leveraging creativity methods and techniques in requirements projects of different types for over a decade, with great results. One approach has been to run creativity workshops: risk-free spaces in which stakeholders can discover and explore ideas often not feasible within more traditional requirements work. A workshop is normally divided into half-day segments in which stakeholders work with different creativity techniques such as reasoning analogically from other domains, removing constraints, and combining visual storyboards. We’ve successfully run such workshops in domains from air traffic control and electric vehicle use to policing.
Another approach is to embed creative thinking into the early stages of agile projects – what we refer to as creativity on a shoestring because of the need to provoke creative thinking in less than an hour. We’ve learned to prioritise epics with more creative potential. We’ve identified creativity techniques that deliver new ideas in less than an hour – techniques such as hall of fame, creativity triggers and combining user stories.
Get More Creative
Much requirements work is creative. We need to adapt what we do to reflect this. Fortunately there are many creativity processes and techniques out there to experiment with. Do try – you will be rewarded.
Here is coverage of the Requirements Management for Systems Engineering track keynote, based on the presentation that Bill Shaw (Systems Program Director) and Richard Watson (Senior Product Manager for Requirements Management tools) delivered at Innovate 2013 today.
For those new to the space, IBM Rational DOORS is a widely recognized product in the requirements management area, and here is how we see our products:
Rational DOORS is the trusted, de facto standard requirements management tool for employing systematic engineering methodologies to build complex and embedded systems.
Rational DOORS Web Access, an add-on for DOORS, gives globally distributed stakeholders visibility into requirements and traceability relationships (managed in Rational DOORS), along with the ability to communicate via online requirements discussions. Using a Web browser, DOORS Web Access provides access to view and discuss requirements, with no additional software installed on your desktop.
And finally, the latest addition to the family: Rational DOORS Next Generation is the next-generation requirements management solution built on the IBM Rational Jazz platform.
With such a plethora of offerings, we believe we have the right requirements solution for you. As we mentioned earlier, the introduction of DOORS Next Generation DOES NOT mean we are moving away from DOORS. We are continuing our investment in DOORS and will keep releasing better and improved versions of DOORS in the future. We believe DOORS Next Generation takes the requirements management capabilities we offer to the next level, especially with the foundation of an open, collaborative platform. Designed from the ground up to accommodate an ever-growing and complex ecosystem, a greater need for collaboration, and usability for a broader community of stakeholders, DOORS NG plans to extend our capabilities for requirements change management and Product Line Engineering. Packaging DOORS NG with DOORS helps our customers try both products without purchasing two licenses.
New in Rational DOORS
We have now included four more pre-configured templates from Systems and Software Engineering, enabling our customers to kick-start their projects.
Systems Engineering Template
A simple, pre-configured information schema for using DOORS to support systems engineering
Aerospace & Defense
Developing requirements against the DO-178B safety standard
Supporting the ISO 26262 functional safety standard
FDA Design Control practices defined in 21 CFR Part 820
We are continuing our investment in removing the requirement to install the Rational Publishing Engine (RPE) client for report generation. In DOORS 9.5.1, we have included support for parameterized RPE templates. Advanced styling and configuration options are also now included.
We have been making significant improvements on the Open Services for Lifecycle Collaboration (OSLC) front, including an enhanced Rational DOORS-Design Manager integration. For this integration, we have used a link-discovery technique rather than back-linking, which makes for a better integration: links are stored only within the creating application, and discovery is done in the background in real time. This investment also helps improve our third-party integrations.
Starting with this version of DOORS, we use ETL (Extract, Transform and Load) to integrate with Rational Insight. Since this is the method used for integrating RRC and DOORS NG with Insight, the metrics capabilities remain the same across the products. This enables specific metrics defined in DOORS to be reused in DOORS Next Generation. Thus, one can deploy Insight over DOORS 9.x data and, while piloting DOORS NG, all the metrics data will be available in it automatically. Check this article for more details: Improve the value of your CLM reports by using metrics.
Based on feedback from customers, we have made a good number of usability enhancements in 9.5.1. Some of them include:
Link preview with OSLC-style rich hover on links, for better traceability navigation
Better navigation into baselines and improved baseline management
Improved support for DOORS table formatting
We have also made some significant improvements to DOORS Web Access. Some of them include:
Simplified configuration and deployment
Improvements to the Database Explorer
Support for the DOORS project view in the Database Explorer
New in Rational DOORS Next Generation
We have been continuing to make improvements in the product since its first release in November 2012. Our priorities are product quality and usability and, from a long-term perspective, requirements configuration management. Some of the major updates to DOORS Next Generation are:
Unobtrusive locking of data to avoid save conflicts
Improved graphical markup to help in change management
Improved multi-user and offline edits of non-native data
Improved data support for product evaluation
Note: The roadmap and strategies mentioned in this post are subject to change; please stay in touch with your IBM representatives for the latest roadmaps.
Alex Ivanov is a Senior Software Engineer II with Honors at Raytheon Integrated Defense Systems. Alex has more than 10 years of experience as a Requirements (DOORS) Database Manager supporting a large-scale distributed requirements database in the aerospace and defense industry, specializing in writing reusable DXL, training, user support and consulting with programs to ensure they get the most out of their use of IBM Rational DOORS. Alex is an IBM Certified Deployment Professional - DOORS v9 and has been recognized as a three-time IBM Champion (2011, 2012, 2013). In 2011 Alex was elected President of the New England Rational User Group.
1. How does it feel to be a returning IBM Champion?
I’m honored to be selected for the 3rd consecutive year. Fittingly enough, it was in 2010 that I first started to share across Raytheon the best practices that my team and I had developed around how to effectively use IBM Rational DOORS. I thought to myself: we have hundreds of programs at Raytheon that use DOORS, and yet not everyone is aware of how to make the best use of the tool, much less of the customizations that have been developed through custom DXL to make DOORS even easier to use. This was the beginning of my vision to standardize the way requirements are managed at Raytheon, and little did I know it would lead to discovering my true passion.
2. Can you tell us something about what you do at Raytheon?
I lead a team of engineers that maintains our customizations for how to effectively use DOORS, and moreover I consult with programs all across the company on how best to architect their DOORS databases and take advantage of the automation we have available. I’m most passionate about reaching out to others across the organization who are eager to improve how systems engineering uses DOORS to its maximum advantage. I’m happy to say that over the past 3 years I’ve been able to spread our best practices across numerous Raytheon locations in the US, and have been able to do wonders with social media through the use of wiki pages, blogs, communities and training videos.
3. What are your thoughts on managing requirements effectively?
A tool is just a tool unless you have a sound process and training to go along with it. It is much more important to get people to understand the process and how the tooling helps them do their job than to simply rely on the tool to solve all their problems. I believe it’s very important to have a sound process in place, and certainly, if there are best practices to leverage on how to use tools, people should strive to take advantage of them without losing sight of their process and what they are responsible for delivering. I can’t stress how important it is to determine your project architecture and the relationships of what you will be managing in DOORS; it might seem time-consuming at first, but it will save you a lot of time down the line.
4. So, how long have you been using Rational DOORS?
I started using DOORS in 2000, which is when I began my career at Raytheon. At the time I was a software developer who had just graduated from Boston University, and DOORS was the tool that held the software requirements I had to develop code against. I believe at the time it was DOORS version 4.0, and I can certainly say that the tool has come a long way since then.
5. That's a long time; in your opinion, what are some of the greatest assets of the product and well, the pain points?
Without a doubt, one of the greatest assets is the ability to customize the product for your process. Having said that, I believe it’s very important to have a solid understanding of the systems engineering process and a clear understanding of how to architect your project for success. Only once you have a reusable architecture can you turn the focus to writing reusable DXL to complement your project template and architecture. As for the pain points, certainly one of them has to be how manual it is to manage the database, whether it be attributes, views or access; without custom DXL scripting, it would be rather time-consuming to carry out many tasks.
6. Do you have some tips or tricks to share with the DOORS users out there?
I’m happy to say that there are a ton of free resources online, and I would highly recommend people take the time to watch webcasts, join a local Rational user group and network with peers who have their own experiences to share. If you aren’t a member already, I encourage everyone to join the Global Rational User Group; their monthly newsletters are a great source of information, and many presentations are archived for viewing right on the site. Another great resource for DOORS and DXL is the developerWorks forums; having asked and answered questions on the Rational DOORS DXL forum, I highly recommend it. Numerous webcasts are available on 321Gang's website; Managing DOORS: The Administrator’s Toolbox is one where I take you through some examples of how to write reusable DXL to make it much easier to manage attributes and views.
7. What advice do you give to the budding DOORS administrators?
I would encourage everyone to find a mentor. This can be anyone that you look up to; just ask them questions. I know that I would not be where I am today had it not been for the mentoring and support of numerous people in my life, and I am forever grateful to them. Something I have learned throughout the years is that it’s important to ask questions, and not to assume that the person who is asking you to do something has the right answers. As you gain experience, you’ll be able to tailor the solution for your customers, and they will thank you for it. It’s unbelievable how many resources are available online for learning. I’m a big fan of watching and creating training videos, and YouTube has numerous channels I’d recommend subscribing to, a couple of which are IBMRational and IBMJazz.
I believe it’s very important to always want to improve, whether in your personal or professional life. I encourage everyone to grab a book on any subject that interests them; you’d be amazed how much you will learn. I’ve read dozens of books over the past few years and keep a reading list, for those who want to follow along, at http://www.shelfari.com/alexivanov/shelf
In this guest blog post, Requirements Engineering Expert, Jeremy Dick continues with his discussion on practical applications of traceability. Read the first two parts here.
What’s really going on when you decompose a requirement?
What’s really going on when you plan V&V against a requirement?
Inspired by recent experience in a large systems engineering project, Part 1 of this blog post series covered the practice of decomposing requirements, which brings about one of the most important traceability relationships in requirements engineering. Part 2 covered the next most important relationship: that between requirements and validation and verification activities.
Part 3 (probably the final part!) presents some other things about V&V we think we have learnt from undertaking a large project: what do you do about new requirements that arise from the V&V you have planned against your main requirements?
Once the need for V&V activities has been established (see Part 2), this will often give rise to new requirements. Broadly speaking, these requirements fall into two types:
Those that affect the design of the product.
Such requirements may constrain the design of the product, and may even add functions. For instance, should there be a need during commissioning to establish that the temperature of a fuel cell does not exceed a certain level, then it may be necessary to design in a means of measuring it.
Those that define the need to build a secondary system.
These requirements are for the construction of test artefacts, such as models and test equipment. In large programs, this test equipment can represent a huge project in itself, like the construction of a new building to test a new design of jet engine.
Neither of these types of requirement should really be mixed in with the remaining requirements without recording (through traceability) their origin, because if the choice of V&V changes, then we should be able to identify which parts of the design, or which pieces of test equipment, are present only to make the old V&V possible.
If the need to test a product gives rise to the need to build facilities to carry out that testing, then another development life-cycle is spawned for the purpose. Requirements will be collected for the facility, and further V&V may be required against those requirements. We enter a sort of recursive world of development life-cycles. Some call this “fractal” – the main development spawns smaller developments, which in turn spawn yet smaller ones.
Primary versus Secondary V&V
Secondary V&V arises when V&V activities lead to test artefacts that also need validating or calibrating. For example, from requirements about power delivery, there is a direct need to do some analysis using a model of fluid dynamics – what we call primary V&V. Requirements for the model itself are collected, and the need for validation of the model considered. For example, the model may need calibrating against some similar existing systems. This calibration activity is another form of V&V – what we call secondary V&V.
Requirements lead to V&V activities lead to requirements
As noted above, the need for V&V leads to design changes or secondary systems, and thus to further requirements. This relationship should be captured: a requirement, rather than having a parent requirement, may in fact have a parent V&V activity. We could say that a requirement “enables” a V&V activity.
In the extreme, a secondary system may impose requirements on the primary product, for instance through the need to interface with it.
An information model
The diagram below illustrates the idea that requirements enable V&V activities.
In the diagram, three traceability relationships are shown: “satisfies”, “verifies” and “enables”. There is an example of a primary V&V activity leading to an enabling requirement on the primary system itself – a requirement whose parent (through the “enables” relationship) is a V&V activity.
There are also two examples of secondary systems driven by requirements for primary V&V, and one of those gives rise, in turn, to a tertiary system.
In our current project, we have pushed this model only as far as secondary systems with an “enables” relationship and secondary V&V. An example is the one cited above – a fluid dynamics model that needs itself to be validated through calibration with similar existing systems.
As with all these things, there is probably a law of diminishing returns. How far should you push the requirement to V&V to requirement to V&V to requirement chain? You will have to decide!
A similar model probably exists for other aspects of development, such as manufacture. The need to manufacture a product gives rise to the need to construct the factory and plant to do so: a primary development spawning a secondary development. However, the traceability chain goes not from requirement to V&V activity to requirement, but rather directly from primary requirement to secondary requirement. This could still be characterized as enablement.
We never build just “the product”. There are always other things that need designing and building that surround the system for various purposes, including testing components, sub-systems, systems and completed products.
What has been presented here is an attempt to get to grips with the traceability required to track the relationships between the product and its secondary (and possibly tertiary) systems. We implemented all this in a Rational DOORS database, with a small amount of customisation to ease the way.
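As a flavour of what such customisation might look like, here is a minimal, hypothetical DXL sketch of my own (the module and link-module paths are illustrative, not the project's actual schema). It creates an “enables” link from a requirement to the V&V activity that is its reason for existing:

```
// Record that a requirement exists only to enable a V&V activity.
// All paths here are assumptions; substitute your own schema.
Module reqs = edit("/Project/System Requirements", true)
Module vv   = read("/Project/VV Activities", true)
Object req = gotoObject(7, reqs)   // e.g. "provide a fuel-cell temperature tap"
Object act = gotoObject(3, vv)     // e.g. "commissioning temperature check"
// Create the typed link in the assumed "enables" link module.
req -> "/Project/Links/enables" -> act
```

With the relationship captured this way, impact analysis on a change of V&V strategy reduces to walking the “enables” links to find the design features and secondary systems that the old strategy left behind.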
Thank you for reading these blog entries, and please contact me if you are interested in reusing any of this experience and associated Rational DOORS customization.
Read the first two parts here -
What’s really going on when you decompose a requirement?
What’s really going on when you plan V&V against a requirement?
About the author - Jeremy Dick works as Principal Analyst for Integrate Systems Engineering Ltd in a consultancy, research and thought leadership capacity. He has extensive experience in implementing practical requirements processes in significant organizations, including tool customization, training and mentoring. At Integrate, he has been developing the concept of Evidence-based Development, an extension of his previous work on “rich traceability”. Prior to this appointment, he worked for 9 years in Telelogic (now part of IBM Rational) in the UK Professional Services group as both an international ambassador for Telelogic in the field of requirements management, and a high-level consultant for Telelogic customers wishing to implement requirements management processes. During this time, he developed considerable expertise in customizing DOORS using DXL to support advanced engineering processes. His roles in Telelogic included a position in the DOORS product division to assist in the transfer of field knowledge to the product team. Co-author of a book entitled “Requirements Engineering” that has recently reached its 3rd edition, he is recognized internationally for his work on traceability. Jeremy can be reached at jeremy.dick[at]integrate.biz
In this guest blog post, Requirements Engineering Expert, Jeremy Dick continues with his discussion on practical applications of traceability. Read the first part here -
The practical applications of traceability Part 1: What’s really going on when you decompose a requirement?
Inspired by recent experience in a large systems engineering project, Part 1 of this essay covered the practice of decomposing requirements, which brings about one of the most important traceability relationships in requirements engineering. Part 2 here covers the next most important relationship: that between requirements and validation and verification activities. Part 3 will continue the discussion of V&V, and how it itself gives rise to further requirements.
Verification & Validation (V&V)
I don’t care enough about the difference between validation and verification to want to enter into the divisive debate about it here. I am just going to say V&V and be done with it!
Kinds of V&V activity
There are many kinds of V&V activity, and organisations have varied ways of classifying them. In the project I am working on, the classifications are Analysis, Analogy, Inspection, Review, Test and Demonstration.
By their very nature, these types of activity tend to occur at different times of the life-cycle. Analysis, for instance, tends to occur early to predict properties of the proposed design and verify it against requirements. By contrast, demonstration tends to occur late as part of the acceptance tests.
Typically, a whole series of activities will be planned against a single requirement, some early, some late, allowing confidence to accumulate over the life-cycle of the project.
Requests for evidence
Despite the variety of kinds of activity, there is one thing they all have in common: they are requests for evidence of some kind or other. Indeed, I would favour calling V&V activities exactly that: “requests for evidence”.
Intention versus Fulfilment
Those activities that are carried out early in the development process provide evidence that the intended design will meet the requirements – they address design intention. Those activities applied late in the development process collect evidence that what has been built meets the requirements – they address design fulfilment.
Once the need for V&V activities has been established, this will often give rise to new requirements, either on the design of the product itself, or requirements for the construction of test artefacts, such as models and test equipment. (We never build just the product; there are always other things that need designing and building that surround the system for various purposes.)
The management of requirements arising from V&V will be the topic of Part 3.
Requirements decomposition and V&V planning
When planning V&V activities against a parent requirement, you need to take into account the V&V that will be carried out on its child requirements, and their child requirements, and so on.
Take, for instance, the following example where a user requirement is decomposed into a number of system requirements:
The only V&V activity planned against the user requirement is a commissioning test, which will occur late in the life-cycle. However, further V&V activities are defined against the child system requirements. Some of these are design inspections that occur very early, and some are system tests that occur relatively late, but still before commissioning.
There is, of course, a sense in which all these V&V activities provide evidence for the satisfaction of the user requirement, but some of the activities fit more directly against the system requirements. So when planning V&V activities, you need to ask: which activities can only be carried out against the parent requirement, and which can be delegated to child requirements? Those that can be delegated are likely to provide evidence earlier in the life-cycle, and you always want that, if you can get it. A quick way to spot gaps in such a plan is sketched below.
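For instance, a small DXL script can list the requirements that have no V&V planned against them at their own level. This is a minimal sketch under assumptions of mine (the “verifies” link-module path, and that the relevant link and source modules are already loaded), not the project's actual tooling:

```
// Flag requirements with no incoming "verifies" links, i.e. no planned V&V.
Module m = current   // run with a requirements module open
Object o
Link l
for o in m do {
    bool covered = false
    // Assumed link-module path; relevant link modules must be loaded.
    for l in all(o <- "/Project/Links/verifies") do { covered = true }
    if (!covered) print identifier(o) " has no V&V planned at this level\n"
}
```

Run against parents and children in turn, a report like this shows where evidence will arrive late because nothing has been delegated downwards.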
Granularity of V&V results
In the above example, there is one V&V activity that is linked to multiple requirements. In general, the relationship between requirements and V&V activities will be many-to-many.
However, this presents an issue when it comes to collating results of V&V against requirements. The System Test defined above may show positive results for filling, boiling and dispensing, but fail on the time taken to recover (cool down). So it has passed on all requirements except one. In terms of granularity of information, we need to record the result of the V&V activity against each linked requirement.
How is it best to do that? The only place to do that in the information model of the example is on the “verifies” links; there is a link for every requirement-V&V pair.
Another way is shown in the next example:
Here we have separated out the success criteria for each requirement for each test by adding subsidiary objects under the V&V activities (for instance, using the DOORS object hierarchy). Each success criterion has exactly one link to a requirement; a link from a criterion is implicitly a link from the V&V activity. (This link could be made explicit by retaining a link from the activity as well – not shown in the diagram.)
Now we have objects rather than links against which to record the results of the V&V activity (using an attribute of that object). This has the added advantage that it encourages a discipline of identifying precisely what the success criterion is for each requirement against each V&V activity. In addition, the V&V Activity and its list of success criteria can be used as a description/checklist for each particular test.
As results come in, the success/failure status on the success criteria can be rolled up through the “verifies” links to the associated requirements, and then on up through the “satisfies” links to the parent requirements. Both these relationships allow results to be summarised through the eyes of the requirements at every level.
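To make the roll-up concrete, here is a minimal DXL sketch (attribute names such as “Result” and “Rolled-up Result”, and the link-module path, are assumptions of mine rather than the schema described above). It marks a requirement as failed if any linked success criterion failed, and as passed only once every criterion has a passing result:

```
// Roll V&V results up from success criteria to their requirements.
Module reqs = edit("/Project/System Requirements", true)
Object req
Link l
for req in reqs do {
    string rolled = "Pass"
    bool anyResult = false
    // Assumed "verifies" link-module path; source modules must be loaded.
    for l in all(req <- "/Project/Links/verifies") do {
        Object crit = source l        // null if its module is not loaded
        if (null crit) continue
        string r = crit."Result" ""   // "Pass", "Fail" or empty
        if (r == "Fail") rolled = "Fail"
        if (r == "" && rolled != "Fail") rolled = "Incomplete"
        anyResult = true
    }
    if (!anyResult) rolled = "No results yet"
    req."Rolled-up Result" = rolled   // attribute assumed to exist
}
```

The same loop, pointed at the “satisfies” link module, then rolls the requirement-level status on up to parent requirements, which is what lets results be summarised through the eyes of the requirements at every level.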
V&V planning steps
These are the process steps we teach for planning V&V against requirements. They follow on from the process steps named in Part 1:
Determine the V&V activities you will need.
Consider what range of evidence you will need to collect to establish that the requirement has been met, and determine the best V&V activities for that. Aim to collect evidence as early as possible in the life-cycle, considering early proof of design intention as well as later design fulfilment. Capture the V&V activities into the database and (if using explicit links) link them to the associated requirements.
Identify the success criteria for each requirement against each V&V activity.
For each requirement/V&V activity pair, determine the success criteria to be applied. Capture each success criterion in a new object under the activity, and link it to the requirement (a sketch of this step in DXL follows these steps).
Record the results of the V&V activity against each success criterion.
When the V&V activity has been completed, record the success or failure of the activity against each success criterion.
So this is what we now teach those engaged in planning and tracing V&V against requirements, in conjunction with requirements decomposition. It is wrong to assume that people will somehow automatically know how to do this kind of thing. By taking this approach, the V&V plan is well organized and defined at the most appropriate layers, with explicit success criteria, ready for the collection and roll-up of results.
Read the first part here - The practical applications of traceability Part 1: What’s really going on when you decompose a requirement?
About the author - Jeremy Dick works as Principal Analyst for Integrate Systems Engineering Ltd in a consultancy, research and thought leadership capacity. He has extensive experience in implementing practical requirements processes in significant organizations, including tool customization, training and mentoring. At Integrate, he has been developing the concept of Evidence-based Development, an extension of his previous work on “rich traceability”. Prior to this appointment, he worked for 9 years at Telelogic (now part of IBM Rational) in the UK Professional Services group, as both an international ambassador for Telelogic in the field of requirements management and a high-level consultant for Telelogic customers wishing to implement requirements management processes. During this time, he developed considerable expertise in customizing DOORS using DXL to support advanced engineering processes. His roles in Telelogic included a position in the DOORS product division to assist in the transfer of field knowledge to the product team. Co-author of a book entitled “Requirements Engineering” that has recently reached its 3rd edition, he is recognized internationally for his work on traceability. Jeremy can be reached at jeremy.dick[at]integrate.biz
This year, Innovate - The IBM Technical Summit will be held from June 2 to 6 in Orlando, Florida. As in previous editions, we will have two tracks dedicated to requirements management at the event. One track will focus on requirements definition and management for IT/application development, and the other will focus on requirements engineering for systems and product development. Each track will have 16 sessions, starting with the individual track kickoffs. This year the conference will feature personalities like Steve Wozniak, Eric Ries and many others.
The requirements management track will showcase more than 25 real-life case studies and best practices from industries ranging from Banking and Insurance to Aerospace & Defense and Energy & Utilities. Some of the interesting sessions to watch out for are:
Agile Requirements: Maintaining the Model Through Iterations (by Mia McCroskey, IBM Champion at Emerging Healthcare IT)
DOORS Next Generation: 1st Impressions and Key Comparisons (to IBM Rational DOORS) [by Alex Ivanov, IBM Champion at Raytheon]
Successful RRC adoption through a Community of Practice: Case Study from Blue Cross Blue Shield of North Carolina
Cerner Corporation: Migrating from Requisite Pro to RRC
Piloting Rational Requirements Composer for Enterprise-wide deployment: The Good, the Bad, and the Ugly (Fidelity)
Managing Parallel Streams of Requirements in DOORS at an Automotive OEM
A DOORS-based solution for semi-automated risk analysis within MedTech
We also have two instructor-led workshops for Rational DOORS and Rational Requirements Composer. Learn more here. Demo booths and self-paced tutorial labs for DOORS, DOORS Next Generation and RRC are also organized during the event.
A few other attractions for anyone interested in the field of requirements management are the panel discussion, Meet the Developer sessions, and sessions from organizations like IAG Consulting, IIBA and INCHRON.
Every year Innovate attracts 4000+ professionals from across the world. Join us at one of the biggest technical conferences to learn, meet and network with like-minded technical professionals. For more details about the conference visit Innovate 2013 - The IBM Technical Summit
Here is the detailed agenda of requirements management topics at Innovate -
PS: Agenda subject to change.
It is a bit of a shock to find myself well into the fourth year on the same project! The nature of my work as a consultant means that it is rare for me to stick with a project beyond the initial phases of defining a requirements management process, establishing effective tool support and training the process enactors. But this time we have been able to stick with the requirements team supporting a large project long enough to see theory put into practice, and to see what it really means to apply the tools and techniques. We have gone well beyond just training, and find ourselves mentoring nearly 300 engineers in the application of DOORS for requirements capture, development and management. This has helped us keep our feet firmly on the ground – rubber on the road – and to walk with those who actually have to do the work.
So what have we learnt?
It is one thing to teach people how to write requirements statements that are clear, unambiguous, testable and traceable; it is quite another thing to help people understand how to take a requirement and develop it. None of the engineers we met on the project had previous experience of how to take a system requirement, for instance, and systematically decompose it through the design into sub-system and component requirements. We had to adapt our training and mentoring to address this skill.
Requirements decomposition establishes one of the essential requirements traceability relationships: how each layer of requirements contributes to the satisfaction of the layer above. This is often known as the satisfaction relationship, or as refinement in SysML. It is this relationship that connects the design to the development of requirements, and that lies at the heart of the ability to perform impact analysis.
Whatever layer you are engaged in – customer, system, sub-system or component – the same basic requirements development process can be applied. These are the process steps we teach for requirements development:
1. Collect and agree your requirements.
Here you actively seek out the requirements you are expected to fulfil. In an ideal world, development will be perfectly top-down, and you can wait for the layers above to allocate perfectly expressed requirements to you. In a practical world, you will have to be more proactive, and will have to cooperate with your requirement “customers” to obtain acceptably worded requirements.
2. Design against your requirements.
This stage is the creative bit that you perhaps most enjoy doing, and where the real engineering takes place. You will imagine how to design the system to meet the requirements, and what you need your requirement “suppliers” to do to contribute to your design. In other words, if you are at the system level, you design the system into sub-systems, and work out what each sub-system must do to meet the system requirements.
3. Decompose the requirements to reflect the design.
Now you enter the decomposed requirements and trace them back to the requirements they satisfy, thus making the requirements contents and traceability reflect your design. The wording of the decomposed requirements is important: if the original requirement read “The <system> shall ...”, then it is likely that the decomposed requirements will read “The <sub-system> shall ...” At this stage you can also capture rationale for the decomposition, including references to the design documentation or models, thus tracing also to design information.
4. Allocate the decomposed requirements.
Finally, you can pass on the decomposed requirements to those areas responsible for fulfilling them. These other areas engage in the same process, and together you systematically achieve alignment of requirements through all the layers.
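To make the shape of step 3 concrete, here is a minimal sketch in plain Python (not DOORS or DXL) of decomposed requirements tracing back to the requirement they satisfy, with rationale captured alongside; the IDs, texts and the functional-model reference are all invented for illustration:

```python
# Step 3 as data: each decomposed requirement carries an upward
# "satisfies" link and the rationale for the decomposition.

class Req:
    def __init__(self, rid, text):
        self.rid, self.text = rid, text
        self.satisfies = []      # upward "satisfies" links
        self.rationale = None    # why this decomposition reflects the design

def decompose(parent, children, rationale):
    for child in children:
        child.satisfies.append(parent)
        child.rationale = rationale

system_req = Req("SYS-7", "The <system> shall process 500 orders per hour.")
sub_reqs = [
    Req("SUB-7.1", "The <sub-system> shall queue up to 100 orders."),
    Req("SUB-7.2", "The <sub-system> shall process an order within 6 s."),
]
decompose(system_req, sub_reqs,
          rationale="Overall throughput decomposed into capacity and "
                    "per-order performance; see functional model (invented ref).")
```

Note how the wording shifts from "The <system> shall" to "The <sub-system> shall", and how the rationale travels with the satisfaction links.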
The example below illustrates the end result of applying this process on a user requirement decomposed into system requirements. The rounded box contains the design rationale, and refers to a functional model of the product. (If you’ll forgive the shameless plug, such diagrams can be produced using a DOORS extension related to TraceLine. Ask me more if you are interested.)
The example just shown is a classic decomposition pattern: the decomposition of an overall performance requirement into a combination of capacity and performance attributes of the product. We call this “decomposed”.
Other decomposition patterns are possible. Sometimes no decomposition is necessary, because the system requirement can be satisfied entirely by a single component or sub-system, as in the left-hand example below; or a constraint that will apply universally to all parts, as in the right-hand example. All that changes, perhaps, is the wording of the requirement to indicate the new target. We call this “direct flow”.
You will reach a point in the cascade of decomposed requirements when a requirement is satisfied entirely within the current area without the need to decompose further, as illustrated in the following. In this case, it is important to state the rationale for not flowing the requirement onwards, as otherwise it may be construed as a traceability gap. We call this “not developed further”.
I have seen a number of organisations that capture this flow-down type – Decomposed/Direct Flow/Not developed further – as an attribute of the requirement. We do this because it allows us to cross-check certain things: if you have marked a requirement as “Decomposed” but have not decomposed it, then that does indicate a design gap. However, if you mark the requirement as “Not developed further”, that gives you permission not to trace it further, i.e. not a design gap. (But you would do well to provide rationale!)
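Here is a separate, self-contained sketch of such a cross-check (plain Python; the attribute and field names are invented, not from any particular organisation's schema):

```python
from dataclasses import dataclass, field

@dataclass
class Req:
    rid: str
    flow_down: str            # "Decomposed" / "Direct flow" / "Not developed further"
    decomposed_to: list = field(default_factory=list)   # downward satisfaction links
    rationale: str = ""

def check_flow_down(reqs):
    """Flag requirements whose flow-down attribute disagrees with their links."""
    for r in reqs:
        if r.flow_down == "Decomposed" and not r.decomposed_to:
            yield f"{r.rid}: marked Decomposed but not decomposed (a design gap)"
        if r.flow_down == "Not developed further" and not r.rationale:
            yield f"{r.rid}: not flowed down, but no rationale recorded"

reqs = [
    Req("SYS-7", "Decomposed"),                      # claims decomposition, has none
    Req("SYS-8", "Not developed further",
        rationale="Met entirely within this sub-system"),
]
print(list(check_flow_down(reqs)))   # only SYS-7 is flagged
```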
As requirements flow down through the layers, the complexity of the design becomes evident in the shape of the requirements graph. In general, satisfaction is a many-to-many relationship between requirements, and the figure below shows how this may be manifest. As the requirements are decomposed, they are refactored through the design.
Some patterns are questionable. Take these for instance:
Why decompose something into three requirements only to reduce them to a single one again? Or why collapse three requirements into one only to re-expand them in the next layer?
These patterns are not necessarily wrong, but they should be targeted for careful review.
So this is what we now teach those engaged in requirements decomposition, flow-down and traceability. It is wrong to assume that people will somehow automatically know how to do this kind of thing. By taking this approach, the flow-down of requirements reflects the design, and a clear satisfaction relationship is expressed in the traceability.
Read the second part here - The practical application of traceability Part 2: What’s really going on when you plan V&V against a requirement?
Requirements come in a variety of forms and levels of detail. Well-defined and thought-out requirements have been shown time and time again to reduce development time, increase quality and lower the cost of a software project. Projects that lack specific requirements quickly fall into many of the typical traps of project failure.
In today's agile, mobile, ever faster-paced world, some teams question whether managing requirements is still relevant. Well, I say teams still need to know what they are supposed to do. The developer may have ideas, but in order to deliver something useful there must be a consensus on functionality and needs. And they need more than a general idea scribbled on a sticky note: they need to know exactly what the business and customer expect. Developers should not be expected to fill in the gaps with assumptions. Clear definitions of requirements actually accelerate the development process by reducing uncertainty and wasteful work.
Another point of view argues that requirements only have to be defined once at the start of a project, and need not be worried about again during development. Sure, you could consider that approach, but then we quickly realize that change is all around us. There is no way we can ever realistically say the requirements are "done". Requirements are continually evolving even as development is underway, and therefore the team needs to stay attuned and make adjustments along the way. Otherwise the risk is that the end result, although matching the original requests, fails to meet the business need upon delivery. The only way to address this effectively is to realize that requirements are an integral part of the lifecycle that continually evolves with the business, and that the delivery team needs to stay in sync to deliver the expected results.
To ensure requirements are written for the entire lifecycle, it is important to realize that business needs come in many different forms, formats and languages. Bringing them all together in a single place, removing redundancy, and connecting interrelated content is the first step in requirements definition and management. Capturing your stakeholder and user requirements is only the first step in realizing your project. As your project progresses and evolves, these initial requirements give rise to other requirements which elaborate, satisfy, and verify them. The result is a web of interdependent artifacts and relationships which represent the nature of the dependencies between these artifacts. Navigating and understanding this web of information is critical to the success of your project. Each stakeholder in a project may be interested in different information (e.g. defect backlog, current tasks, or release schedule). What helps here is a customizable dashboard displaying all the information stakeholders, developers and testers need at any time. As a new project starts and progresses, many of the requirements you define and capture will change, be added to, or sometimes be removed through the course of development. You need the capability to track changes to requirements information so that you can understand what has taken place throughout that process.
To learn about automated solutions for managing requirements, I recommend listening to this informative webcast Three Reasons to Throw Away your Requirements Documents
In summary, I want to remind business analysts, and the rest of the software delivery team, that in today's fast-paced delivery environment requirements are more important than ever. They provide the roadmap of exactly what the business and customers expect of the team. Everyone on the team needs to appreciate that requirements are never final and that they naturally evolve with the market and business. Teams that can capture evolving requirements and implement them in a rapid and iterative fashion will find a competitive edge in the market. Automating requirements management brings the additional benefits of traceability and collaboration: identifying gaps and the impact of changes, while engaging the experts on the team to improve the quality of results.
Just like the famous (mis)quote of Mark Twain, rumors of the demise of the requirements management tool IBM Rational DOORS are not only exaggerated but in fact untrue. I’d like to set straight here some of the myths and misunderstandings that I’ve heard and seen perpetuated:
Myth: IBM is discontinuing support and development for the DOORS 9.x series
Truth: IBM continues to develop the DOORS 9.x series with the same level of development resources. We have new functionality to announce in 2013 and plan further releases in the years ahead. In addition IBM has recently invested additional development resources in creating a new Requirements Management tool called IBM Rational DOORS Next Generation (DOORS NG).
Myth: IBM is replacing the DOORS 9.x series with DOORS Next Generation (DOORS NG) and expects customers to migrate now
Truth: DOORS 9.x will be developed in parallel with DOORS NG for many years to come. DOORS NG was released for the first time in November 2012. The tool has many functions that DOORS 9.5 does not have (e.g. a fully functional web client, type and data re-use for requirements and attributes, team collaboration, and task management), but it does not yet have all of the capabilities relied on by DOORS 9.x users. We encourage users to evaluate the capabilities of DOORS NG and start projects or move projects to DOORS NG when practical and beneficial to the project.
Myth: To move to DOORS Next Generation, I’ll need to buy a whole new tool
Truth: Customers with active support & subscription on their DOORS 9.x licenses are entitled to use DOORS Next Generation - it is included as part of the DOORS 9.x package. Customers can choose to use either DOORS 9.x, DOORS Next Generation or a combination of both. There is no need for additional purchases or even for an exchange of licenses. All licenses are available to customers in the IBM license key center.
Myth: IBM offer no migration path from DOORS 9.x to DOORS NG
Truth: IBM do offer functions to transition into using DOORS NG for existing DOORS 9.x customers:
Work with DOORS 9.x and DOORS NG alongside each other - both tools support linking of information between both databases.
Work with suppliers by exchanging requirements through the standard exchange format of ReqIF. This works best between DOORS 9.x and DOORS NG but is also designed to work with other RM tools.
For DOORS NG projects that need to be initialized with data from DOORS 9.x, we offer re-factoring functions to smooth the transition of data from DOORS 9.x to DOORS NG.
Myth: Everyone knows IBM’s strategy on requirements management and DOORS
Truth: IBM takes great care to regularly communicate our product strategy and roadmap at trade shows, IBM conferences and on webcasts. If in doubt, come and ask IBM! Please submit questions using the comment feature here or contact your IBM representative.
For information about the products, visit -
IBM Rational DOORS
IBM Rational DOORS Next Generation
Today we have with us Mia McCroskey of Emerging Health, Montefiore Information Technology, who was recognized as an IBM Champion this year. She shares with us her thoughts about the requirements management domain.
Welcome to the IBM family! It's a great pleasure to have you with us as an IBM Champion. Congratulations! How do you feel?
I am honored to be recognized in this way not just this year but for the past several years that I have been asked to present my team's stories at Innovate. It may seem like our little team's requirements management needs are nothing like those of customers with huge DOORS ecosystems. But really we are an R&D site for the evolution of requirements management techniques and strategies. If a member of my team has an interesting idea for how to capture, structure, track, or trace requirements, we can try it without getting high level approvals or disrupting the work of hundreds of people. I take tremendous satisfaction from sharing our successes in a way that may help others improve their best practices.
Can you tell us something about what you do at Emerging Health, Montefiore Information Technology?
Emerging Health is primarily an IT delivery organization supporting healthcare delivery in the Bronx. Montefiore Medical Center is our parent company. My team, Product Development, is a software development shop tucked away in a corner doing very different work from most everyone else. Our application, Clinical Looking Glass, is a browser-based clinical intelligence tool that gives clinicians access to the enormous wealth of patient data gathered by all the other systems. Our end users can get an answer to a question like "are my clinic's diabetic patients getting the level of follow-up care required by our funding sources?" in a few minutes. In most healthcare environments, getting the data that you need to answer this question takes weeks.
My title is Manager, Product Development Lifecycle because I have my fingers on just about every stage of that lifecycle. Specifically, I lead our Requirements Management, Quality Assurance, Education, Support, and Implementation teams. I also manage outsourced development work, manage client relationships, and do hands-on end-user training and support. We are an extremely team-oriented organization, with two formal development scrum teams and two teams that are at various stages of adopting an agile process. I'm deeply involved in that right now: it's challenging to apply Scrum to a training and support team, and to a team of data analysts and engineers.
Having said all that, my roots are in requirements. On a team of "happy path" stakeholders who get very excited by ideas for new functionality, I love to dig in and find the challenges that nobody wants to think about -- before we start coding. Sometimes I feel like a real buzz kill!
What are your thoughts on managing requirements effectively?
Agile models have forced us to completely reorganize our requirements elicitation, analysis, and management processes. But one thing that has not changed is my belief in a comprehensive, functionally organized requirements model.
But when I say "comprehensive," I don't mean laboriously detailed. The art of requirements analysis and management is in knowing -- or guessing right -- what details are going to be important later. By later I don't mean to the coders and testers in this iteration, I mean when we want to revise or augment the feature in six months. The requirements analyst has to be deeply plugged in to the business goals and vision in order to predict the future and capture the least, but most important, information about what the team is doing.
A few years ago I spent many months constructing an "as built" specification for a market data system at the New York Stock Exchange. I had the original ten-year-old spec and a few dozen incremental release documents. We were getting ready to refactor the system and nobody knew every business rule and function. I vowed that no system I worked on would ever lack a spec that described everything it currently did. Incremental requirements specs that aren't integrated into the overall system are defects waiting to be discovered once the coders get ahold of them.
Having argued that I must add that a requirements model in document form is pretty nearly impossible to maintain in the way I describe. You've got to employ a database tool that supports the granularity of each requirement and allows you to describe each one through attributes. Then you can use filters and queries and views to present an infinite number of customized specifications -- all of the requirements implemented in a specific release, or all of the requirements related to a specific functional area, or the completed requirements related to a specific business objective or corporate mandate.
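As a rough sketch of the idea, tool-neutral and with invented attribute names and data (any real tool such as DOORS would do this through its own views and queries), such filtered "specifications" might look like this:

```python
# Each requirement is a granular record described through attributes;
# a "customized specification" is just a filtered view over them.

requirements = [
    {"id": "R1", "text": "...", "release": "4.2", "area": "Reporting", "status": "Complete"},
    {"id": "R2", "text": "...", "release": "4.3", "area": "Reporting", "status": "Draft"},
    {"id": "R3", "text": "...", "release": "4.2", "area": "Security",  "status": "Complete"},
]

def view(reqs, **attrs):
    """All requirements whose attributes match the given values."""
    return [r for r in reqs if all(r.get(k) == v for k, v in attrs.items())]

print(view(requirements, release="4.2"))                        # a release spec
print(view(requirements, area="Reporting", status="Complete"))  # a functional-area spec
```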
What are your thoughts on the role of requirements management in agile projects?
I recently spoke with a software development professional who was very proud of his organization's highly structured and detailed requirements templates that captured every detail before any work began "to be sure we deliver what's wanted." I felt like I was talking to a tyrannosaurus rex. We all know that the day after you baseline that 400-page spec it's already out of date.
Agile with its short increments and "only write down what you really need to" mentality can seem seductively freeing. When our organization adopted Scrum, I stuck to my requirements model guns, and sure enough a few months later we couldn't remember decisions that we'd made a few sprints back, nor even exactly which sprint we'd done the work in. Since we weren't supposed to be doing "heavy" requirements, we'd been coached to: use the system and see what happens; or sift through dozens of completed user stories and hope the detail we wanted was actually mentioned in the acceptance criteria (which it was not because it was an in-sprint decision); or try to find relevant test cases and check the expected results. Instead, I launched DOORS, went to the functional area related to the question, and checked our documented business rule. Done.
What are some of the challenges you see in Healthcare Informatics projects?
Deriving meaningful information from the electronic medical record is essential to justifying the cost of those systems. We're piloting the use of predictive analytics -- combining statistical methods with the mass of patient data collected every day at our parent medical center -- to predict outcomes at the population level. To do it you need a very wide range of data: blood pressure, height, and weight, smoking patterns, history of heart disease, current blood sugar level, and on and on. Just bringing all this data together is the first challenge. Next is the analytic tool -- that's CLG. Finally you need big iron to process it. Most local and regional healthcare providers don't have the funding for, say, Watson. My team spent last summer optimizing our hardware and software environment, and CLG itself, to handle analysis of larger data sets faster, but within a medical-center friendly infrastructure budget.
Another area of critical concern is patient information. The need to pool patient data for direct care as well as population research is supported by legislation and funding sources. But we are bound, both legally and ethically, to protect patient identity in every circumstance.
Clinical Looking Glass has the capacity to show patient contact information to users who have been granted permission to see it -- usually clinicians who are actively providing care and need to contact the patients. While this is a critical feature of CLG, we expect to have to develop more granular levels of access as new types of clients adopt the product. For example, our Regional Health Information Organization (RHIO) client has data from twenty-two healthcare institutions. Some patients have declined to have their identity shared across the organization. We have to build the capability to mask these patients' identity even to our users who have permission to see it.
We have with us today Bruce Powel Douglass. He doesn't need an intro for most of us -- Embedded Software Methodologist. Triathlete. Systems engineer. Contributor to UML and SysML specifications. Writer. Black Belt. Neuroscientist. Classical guitarist. High school dropout. Bruce Powel Douglass, who has a doctorate in neurocybernetics from the USD Medical School, has over 35 years of experience developing safety-critical real-time applications in a variety of hard real-time environments. He is the author of over 5700 book pages from a number of technical books including Real-Time UML, Real-Time UML Workshop for Embedded Systems, Real-Time Design Patterns, Doing Hard Time, Real-Time Agility, and Design Patterns for Embedded Systems in C. He is the Chief Evangelist at IBM Rational, where he is a thought leader in the systems space and consults with and mentors IBM customers all over the world. He can be followed on Twitter @BruceDouglass. Papers and presentations are available at his Real-Time UML Yahoo technical group and from his IBM thought leader page.
The problems with poor requirements are legion and I don’t want to get into that in this limited space (see Managing Your Requirements 101 – A Refresher Part 1: What is requirements management and why is it important?). What I want to talk about here is verification and validation of the requirements qua requirements, rather than at the end of the project when you’re supposed to be done.
The usual thing is that requirements are reviewed by a bunch of people locked in a room until they are ready to either 1) gnaw off their own arm or 2) approve the requirements. Then the requirements are passed off to a development team – which may consist of many engineering disciplines and lots of engineers – for design and implementation. In parallel, a testing group writes the verification and validation (V&V) plan (including the test cases, test procedures and test fixtures) to ensure that the system conforms to the requirements and that the system meets the need. After implementation, significant problems tracing back to poor requirements require that portions of the design and implementation be thrown away and redone, resulting in projects that are late and over budget. Did I get that about right?
The key problem with this workflow is that the design and implementation are started and perhaps even finished without any real assurance about the quality of the requirements. The actions that determine that the requirements are right are deferred until implementation is complete. That means that if the requirements are not right, the implementation (and corresponding design) must be thrown away and redone. Internal to the development effort, unit/developer and integration testing verify the system is being built properly and meets the requirements. Then at the end, the system verification testing provides a final check to make sure that the requirements are correctly addressed by the implementation.
During this development effort, problems with requirements do emerge – such as requirements that are incomplete, inconsistent, or incorrect. When such problems are identified, this kicks off a change request effort and an update to the requirements specification (at least in any reasonable process), resulting in the modification of the system design and implementation. But wouldn’t it be better not to have these defects in the first place? And even more important, wouldn’t it be useful to know that implementing the requirements will truly result in a system that actually meets the customer’s needs?
There are two concerns I want to address here: ensuring that the requirements are “good” (complete, consistent, accurate, and correct) and that they reflect the customer’s needs. And I want to do this before design and implementation are underway.
It isn’t obvious
Imagine you’re building a house for your family. You contract an architect who comes back to you after 3 months with a 657-page specification with statements like:
- … indented by 7 meters from the west border of the premises, there shall be the left corner of the house
- … The entrance door shall be indented by another 3.57 meters
- … 2.30 meters wide and 2.20 meters high, there shall be a left-hand hinge, opening to the inside
- …As you come in, there shall be two light switches and a socket on your right, at a height of 1.30 meters
My question to you is simple: is this the house you want to live in? How would you know? There might be 6500 requirements describing the house, but it would be almost impossible for any human to understand whether this is the house you want. For example:
- Is the house energy efficient?
- Does the floor plan work for your family's uses, or must you go through the bathroom to get to the kitchen?
- Is it structurally sound?
- Does it let in light from the southern exposure?
- Is there good visibility to the pond behind the house?
- Does it look nice?
What (real) architects do is they build models of the system that support the reasoning necessary to answer these questions. They don’t rely simply on hundreds or thousands of detailed textual statements about the house. Most systems that I’m involved with developing are considerably more complex than a house and have requirements that are both more technical and abstract.
Nevertheless, I still have the same basic need to be able to understand how the requirements fit together and reason about the emergent properties of the system. The problem of demonstrating that the implementation meets the stated requirements (“building the system right”) is called verification. The problem of showing that the solution meets the needs of the customer is called validation. Verification, in the presence of requirements defects, is an expensive proposition, largely due to the rework it entails. Implementation defects are generally easy and inexpensive to repair but the scope of the rework for requirements defects is usually far greater. Validation is potentially an even more expensive concern because not meeting the customer need is usually not discovered until the system is in their hands. Requirements defects are usually hundreds of times more expensive than implementation defects because the problems are introduced early, identified late in the project, and require you to throw away existing work, redesign and reimplement the solution, then integrate it into the system without breaking anything else.
A proposal: Verifiable and Validatable Requirements
The agile adage of “never be more than minutes away from demonstrating that the work you’re doing is right” applies to all work products, not just software source code. It’s easy to understand how you’d do that with source code (run and test it). But how do you do that with requirements? The core concept:
- Premise: You can only verify things that run.
- Conclusion: Build only things that run.
- Solution: Build executable requirements models to support early requirements verification.
If we can build models of the requirements, we can verify and validate them before handing them off to the design team. The way I recommend you do that is to
- Organize requirements into use cases (user stories work too, if you swing that way)
- Use sequence diagrams to represent the required sequences of functionality for the set of requirements allocated to the use case (scenarios)
- Construct a normative (and executable) state machine that is behaviorally equivalent to that set of scenarios
- Add trace links from the requirements statements to elements of the use case model
  - messages and behaviors in scenarios, and
  - events (for messages from actors), actions (for messages to actors and internal behaviors), and states (conditions of the system) in the state machine
- Verify requirements are consistent, complete, accurate, and correct
- Validate requirements model with the customer
When problems are identified with the requirements during this functional use case analysis, they can be easily and inexpensively fixed before there is any design or implementation to throw away and redo.
Constructing the Executable Use Case
Some people are confused by the fundamental notion of using a state machine to represent requirements, thinking that state machines are inherently a design tool. State machines are just a behavioral specification, and requirements are really just statements of behavior in which we are trying to characterize the required input/output control and data transformations of the system. It’s a natural fit. Consider this set of user stories for a cardiac pacemaker:
- The pacemaker may be Off (not pacing or sensing) or executing the pacing mode of operation.
- The cardiac pacemaker shall pace the Atrium in Inhibit mode; that is, when an intrinsic heart beat is detected at or before the pacing rate, the pacemaker shall not send current into the heart muscle.
- If the heart does not beat by itself fast enough, as determined by the pacing rate, the pacemaker shall send an electrical current through the heart via the leads at the voltage potential specified by the Pulse Amplitude parameter (nominally 20mV, range [10..100]mV) for the period of time specified by the Pulse Length (nominally 10ms, range [1..20]ms).
- The sensor shall be turned off before the pacing current is released.
- The sensor shall not be re-enabled following a pace for the period of time it takes the charge to dissipate to avoid damaging the sensor (nominally 150ms, setting range is [50..250]ms). This is known as the refractory time.
- When the pacing engine begins, it will disable the sensor and current output; the sensor shall not be enabled for the length of the refractory time.
A scenario of use is shown in the figure below.
A state machine that describes the required behavior is shown below.
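For readers who want something they can run immediately, here is a minimal Python sketch of such a pacing-engine state machine. The article's model was built graphically in a UML tool, so the state names, event names and dispatch style below are illustrative assumptions read off the user stories, not the author's actual model:

```python
# Executable sketch of the pacing engine in Inhibit mode. Parameters use
# the nominal values from the user stories above.

NOMINAL = {"pulse_amplitude_mV": 20, "pulse_length_ms": 10, "refractory_ms": 150}

class PacingEngine:
    def __init__(self, params=NOMINAL):
        self.params = params
        self.state = "Off"             # Off: not pacing or sensing
        self.sensor_enabled = False
        self.log = []                  # observable outcomes for verification

    def handle(self, event):
        """Dispatch one incoming event; record actions; return the new state."""
        if self.state == "Off" and event == "power_on":
            # On start-up the sensor stays disabled for the refractory time.
            self.state, self.sensor_enabled = "Refractory", False
            self.log.append("start refractory timer")
        elif self.state == "Refractory" and event == "refractory_timeout":
            self.state, self.sensor_enabled = "Sensing", True
            self.log.append("enable sensor; start beat timer")
        elif self.state == "Sensing" and event == "intrinsic_beat":
            # Inhibit mode: a natural beat suppresses the pacing pulse.
            self.log.append("restart beat timer; no pace")
        elif self.state == "Sensing" and event == "beat_timeout":
            # No beat at the pacing rate: turn the sensor off, then pace.
            self.state, self.sensor_enabled = "Pacing", False
            self.log.append(f"pace {self.params['pulse_amplitude_mV']}mV "
                            f"for {self.params['pulse_length_ms']}ms")
        elif self.state == "Pacing" and event == "pulse_complete":
            self.state = "Refractory"
            self.log.append("start refractory timer")
        elif event == "power_off":
            self.state, self.sensor_enabled = "Off", False
        return self.state
```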
Of course, this is a simple model, but it actually runs, which means that we can verify that it is correct, and we can use it to support validation with the customer as well. We can examine different sequences of incoming events with different data values and look at the outcomes to confirm that they are what we expect.
Verifying the Requirements
For the verification of consistency of requirements, we must first decide what “inconsistent” means. I believe that inconsistent requirements manifest as incompatible outcomes in the same circumstance, such as when a traffic light would be Red because of one requirement but at the same time must also be Green to meet another. Since the execution of the requirements model has demonstrable outcomes, we can run the scenarios that represent the requirements and show through demonstration that all expected outcomes occur and that no undesired consequences arise. For the verification of completeness, we can first demonstrate – via trace links – that every requirement allocated to the use case is represented in at least one scenario as well as the normative state machine. Secondly, the precision of thought necessary to construct the model naturally raises questions during its creation. Have we considered what happens if the system is in THIS state and then THAT occurs? What happens if THAT data is out of range? How quickly must THIS action occur? Have we created all of the scenarios and considered all of the operational variants? These questions will naturally occur to you as you construct the model and will result in the addition of new requirements or the correction of existing ones.
For correctness, I mean that the requirement specifies the proper outcome for a given situation. This is usually a combination of preconditions and a series of input-output event sequences resulting in a specified post-condition. With an executable use case model, we can show via test that for each situation, we have properly specified the output. We can run the same scenario with different data values to ensure that boundary values and potential singularities are properly addressed. We can change the execution order of incoming events to ensure that the specification properly handles all combinations of incoming events. Accuracy is a specific kind of correctness that has to do with quantitative outcomes rather than qualitative ones. For example, if the outcome is a control signal that maintains an output proportional to an input (within an error range), we can run test cases to ensure that the specification actually achieves that. We can both execute the transformational logic in the requirements and formally (mathematically) analyze it as well if desired.
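As a sketch of what such a check looks like against the pacing-engine model sketched earlier (again illustrative, not the author's actual test harness): with no intrinsic beat, the engine must pace, and the sensor must be off both before the pulse and during refractory, as the user stories require.

```python
# Verify one scenario against the model: expected states and sensor
# status after each event, per the user stories.

engine = PacingEngine()
assert engine.handle("power_on") == "Refractory" and not engine.sensor_enabled
assert engine.handle("refractory_timeout") == "Sensing" and engine.sensor_enabled
assert engine.handle("beat_timeout") == "Pacing" and not engine.sensor_enabled
assert engine.handle("pulse_complete") == "Refractory" and not engine.sensor_enabled
print("scenario confirmed:", engine.log)
```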
Be aware that this use case model is not the implementation. Even if the system use case model is functionally correct and executes properly, it is not operating on the desired delivery platform (hardware) and has not been optimized for cost, performance, reliability, safety, security, and other kinds of quality of service constraints. In fact, it has not been designed at all. All we’ve done is clearly and unambiguously state what a correctly designed system must do. This kind of model is known as a specification model and does not model the design.
Validating the Requirements
Validation refers to confirming that the system meets the needs of the customer. Systems that meet the requirements may fail to provide value to the customer because
- the customer specified the wrong thing,
- the requirements missed some aspect of correctness, completeness, accuracy or consistency,
- the requirements were captured incorrectly or ambiguously,
- the requirements, though correctly stated, were misunderstood
The nice thing about the executable requirements model is that you can demonstrate what you’ve specified to the customer, not as a pile of dead trees to be read over a period of weeks but instead as a representation that supports exploration, experimentation, and confirmation. You may have stated what will happen if the physician flips this switch, turns that knob, and then pushes that button, but what if the physician pushes the button first? What has been specified in that case? In a traditional requirements document, you’d have to search page by page looking for some indication as to what you specified would happen. With an executable requirements specification, you can simply say “I don’t know. Let’s try it and find out.” This means that the executable specification supports early validation of the requirements so that you can have much higher confidence that the customer will be satisfied with the resulting product.
So does it really work?
I’ve consulted on hundreds of projects, almost all of which were in the “systems” space, such as aircraft, spacecraft, medical systems, telecommunications equipment, automobiles and the like. I’ve used this approach extensively with the Rational Rhapsody toolset for modeling and (usually) DOORS for managing the textual requirements. My personal experience is that it results in far higher quality in terms of requirements and a shorter development time with less rework and happier end customers. By way of a public example, I was involved in the development of the Eaton Hybrid Drivetrain project. We did this kind of use case functional analysis, constructing executable use cases, and it identified many key requirements problems before they were discovered by downstream engineering. The resulting requirements specification was far more complete and correct after this work was done than in previous projects, meaning that the development team spent less time overall.
Summary
Building a set of requirements is both daunting and necessary. It is necessary because without it, projects will take longer – sometimes far longer – and cost more. Requirements defects are acknowledged to be the most expensive kind of defects because they are typically discovered late (when you’re supposed to be done) and require significant work to be thrown away and redone. It is a daunting task because text, while expressive, is ambiguous and vague, and it is difficult to demonstrate its quality. However, by building executable requirements models, the quality of the requirements can be greatly improved at minimal cost and effort.
For more detail on the approach and specific techniques, you can find more information in my books Real-Time UML Workshop or Real-Time Agility.
On Thursday 14 March I presented on an IBM sponsored Dr Dobbs webcast on the topic of ‘3 Reasons to Throw Away Your Requirements Documents’. If you didn’t catch the webcast, you can view it on-demand and download the slides. If you did attend I hope you enjoyed it and in this blog post I want to answer some of the questions that I didn't get time to cover in the webcast, so scroll down to see if I've covered your question here. But first, for those of you who didn't make it, here's a quick recap of points I made during the webcast:
- What do I mean by ‘Throw Away Your Requirements Documents’? I’m not saying do away with your requirements or your requirements management process. What I am saying is: invest in improving your requirements management process and support those process improvements with the right tooling. In the webcast audience poll, 60% said they are using documents and/or spreadsheets for requirements, so I set out to provide the case for why they should consider moving to a requirements management tool.
- Reason #1 – Collaboration. Using documents and spreadsheets, you can get into all sorts of problems like working from the wrong version of requirements. An integrated requirements management tool (one that has a collaborative repository that has open interfaces to share requirements data with design, test and development environments) helps you ensure that all engineering / development team members and their stakeholders are working from the right version and view of requirements for their role. I shared information from a case study on MBDA Missile Systems where collaboration was a major challenge.
- Reason #2 – Traceability. Traceability helps you get context and an audit trail for decisions made, perform coverage analysis, detect gold plating and perform impact analysis (See blog posts ‘What is Traceability?’ and ‘The uses and value of traceability’ for more explanation). But using documents and spreadsheets to create and manage traceability gives it a bad name – it becomes a tedious, error-prone overhead activity. With an integrated requirements management tool traceability links can be easily created and navigated. Traceability becomes part of the process rather than a bulk catch-up exercise, and through open interfaces it becomes easy to extend traceability from requirements to artifacts in other tools such as designs, work items and test cases. And most importantly there are automated views and reports that enable you to make active use of traceability for the purposes I mentioned above. I shared information from a case study on Invensys Rail Dimetronic where traceability is essential.
- Reason #3 – Agility. As many engineering and development organizations are looking to improve time to market and reduce costs, agile approaches are becoming increasingly popular (in the webcast audience poll 56% said they were using some sort of hybrid waterfall/iterative approach and 20% were agile), and with that come changes to the way requirements might have been traditionally defined and, most importantly, to the way that changes to requirements are managed. Change is allowed and encouraged, but you must have reviews and impact analysis to make informed decisions about whether to accept the change and implement it in the current iteration or sprint, plan it into a future iteration, or reject the change altogether. And to do that requires more than just a backlog managed in a spreadsheet or an agile planning tool. At IBM Rational we keep a prioritized backlog of user stories and epics as work items in plans managed in Rational Team Concert. User stories and epics are then decomposed and more fully described by requirements and supporting visual and textual artifacts created and managed in Rational DOORS Next Generation.
- And I started and concluded by looking at the most compelling reason for improving your requirements management process with the support of the right tooling: Return on Investment. I shared information from a case study on Emerging Health, Montefiore IT where they achieved, among other benefits, a 69% reduction in the cost of quality (test preparation, testing and rework) within 6 months of deploying an improved requirements management process supported by IBM Rational DOORS. There’s also a follow-up video to this case study that was recorded after the client had adopted agile development practices.
Ok, now onto some of the questions that were submitted but I didn’t have time to answer during the webcast. I’ve divided them into sections based on the type of question so you can easily scan down to the topics you’re interested in:
Alternative approaches to managing requirements
Q. What's the advantage of a requirements management tool if I could link my change management tickets to fine grained requirements (use cases, storyboards) that are maintained on a wiki or other collaboration tool (e.g. Sharepoint or IBM Connections)? It would seem that the work items would give you the needed traceability and metrics?
A. While you might be able to create the level of traceability you require, how would you report on it? Would you have to build bespoke views and reports? And would the use of a wiki for ‘fine grained requirements’ provide you with a view of requirements in context with one another, as opposed to individual wiki pages? Where would you document additional properties of requirements? Could you easily reuse requirements across projects? A requirements management tool like IBM Rational DOORS Next Generation provides built-in traceability views and reports, enables you to structure requirements in context with one another in document-like views, provides user-defined properties for recording additional information and facilitates reuse of requirements.
Requirements Management and Agile Development
Q. We are mostly using the waterfall model, but are looking into whether Agile works for our customers. I am very interested in the role of a Requirements Analyst in the Agile world. In our world, the analyst is a facilitator of requirements gathering and not a SME on the business application we are gathering requirements for.
A. That facilitation role and the analysis skills of the Requirements or Business Analyst are still essential in Agile development. A common mistake when moving to more agile approaches is appointing a ‘Product Owner’ who is a subject matter expert in the business domain you are building an application or product for, and expecting them to write the requirements or ‘user stories’. While they are experts in the business, and you need that expertise on hand for Agile to work, they are not usually skilled in getting to the root goals or needs of the business problems you’re looking to address with the product or application. Without the skills of the Requirements or Business Analyst, it’s all too easy for the user stories to become about automating the way things are done today rather than addressing the real business issues in an optimal way. You can read more about the role of the analyst in Agile development in an interview with Mary Gorman of EBG Consulting.
Q. How does the IBM requirements toolset compare to more "agile" focused toolsets like Atlassian, Rally, etc.?
A. IBM is very committed to supporting agile development, particularly when agile is scaled to large, distributed development teams. At the heart of our agile development capabilities is Rational Team Concert which supports agile planning, task management and change management. As stated in the webcast though, we’ve found in our own continuous delivery process and heard from clients, that a place is needed to capture more details of requirements and their associated properties, in context with one another, together with supporting artifacts like storyboards, workflow scenarios and use cases. The integration of Rational Team Concert with Rational DOORS Next Generation provides those additional capabilities while preserving traceability to user stories and epics managed in the product backlog.
IBM Requirements Management solutions
Q. It seems this discussion is on Rational DOORS. Is Rational Requisite Pro still offered? If so then how do these two products compare?
A. IBM continues to support, maintain and respond to enhancement requests for Requisite Pro, but our future direction for requirements management for IT application development lies with Rational Requirements Composer, and we provide migration support and a trade-up program. Please contact your IBM representative for more details. If you are using Requisite Pro for requirements management for complex products or embedded systems development, then you might also want to look at whether Rational DOORS or Rational DOORS Next Generation is the right move forward for your organization. But don’t worry, we’re not forcing you to migrate today.
Q. You've mentioned DOORS Next Generation in your presentation, what is that? Does it replace DOORS?
A. IBM Rational DOORS Next Generation is a requirements management application on a collaborative lifecycle management platform for systems and software engineering that provides requirements collaboration, planning, reuse and lifecycle traceability. DOORS Next Generation (DOORS NG) was introduced in 2012 to take advantage of the common, collaborative ‘Jazz’ platform shared by Rational Team Concert, Rational Quality Manager, Rational Design Manager and Rational Requirements Composer; and to extend the requirements management capabilities in Requirements Composer to meet the needs of product & systems development organizations developing complex and/or embedded systems. DOORS NG will enable IBM to introduce new capabilities faster than would have been possible with the existing DOORS product. However, DOORS NG does not replace DOORS today. We have a very large install base of DOORS users working on programs that can last tens of years, and we will continue to support, maintain and enhance the DOORS 9.x series to meet the needs of those users. We encourage existing DOORS users to take a look at DOORS NG, to try it out on pilot projects and use the interoperability capabilities to exchange and/or link data with DOORS 9.x. Existing DOORS customers with active support & subscription are entitled to use DOORS NG without requiring an additional purchase – you can use your existing DOORS license entitlements with either DOORS or DOORS NG or a combination of the two. But we are not telling existing DOORS users to migrate live projects to DOORS NG today. DOORS NG in its early releases is attractive to organizations that don’t currently use a requirements management tool and are looking for a web-based solution that also offers common platform integration with change management, agile planning, test management and design management capabilities. You can download a trial and follow development plans for DOORS NG on Jazz.net.
Q. How does Rational DOORS Next Generation compare to Rational Requirements Composer?
A. DOORS Next Generation is based on Rational Requirements Composer but extended to meet the requirements management needs of product & systems development organizations developing complex and/or embedded systems. In the first releases of DOORS NG, the web client is identical to Requirements Composer, but DOORS NG also features a rich client designed for usability when editing large requirements specifications. The strategy for the two products is that while Requirements Composer will be focused on the needs of business analysis and IT application development teams, DOORS NG will be focused on the needs of systems engineers and product & systems development teams.
I’d like to wrap up this post by thanking all of you who attended the webcast, participated in the polls, asked some great questions and completed the survey feedback. If you missed it, you can catch a replay or download the slides. I’d welcome any additional comments or questions here.
Today we have with us Theresa Kratschmer, writing about the importance of metrics in requirements management. Theresa Kratschmer is a senior software engineer who joined IBM T.J. Watson Research in 1996. There she worked on defect analysis, requirements, and Orthogonal Defect Classification deployment. In 2010, Theresa moved to sales, where she is a technical specialist focusing on the Rational Jazz products. Prior to joining IBM, Theresa developed software for real-time, graphics, and database applications in the medical electronics industry. She can be reached at theresak[at]us.ibm.com
Everybody is talking about metrics today. Numbers have always been important to builders, financial wizards, and politicians. If you were a software developer though, you may have taken a lot of math courses but you didn’t really use that math directly. So, why the sudden interest in software and metrics or specifically in requirements and metrics?
Metrics are important whenever you need accuracy, such as when building a house or sewing a dress. They are important when you need to work efficiently and cannot afford waste. In today’s business environment, every organization, whether a finance company, government agency, or healthcare provider, needs to work more effectively and produce more data, more projects, and more software with far fewer resources. Numbers, or metrics, are the way to do this. Since requirements are the foundation of software, they are one of the best places to apply metrics. By applying measurements to requirements, we gain insight into the organization’s requirements activities: how much progress is being made, whether there are gaps in downstream deliverables related to requirements, and how big the project risk might be when those requirements change. Applying measurements to requirements also allows us to make continuous improvements.
So if metrics are so valuable, why aren’t more people collecting them? There are three primary reasons:
- It often takes too long to collect the metrics. By the time you have gathered all the information you need, it’s already out of date.
- How do you interpret the information that you collect? Some people feel that if you don’t have access to a person with a PhD, you’ll never understand the numbers.
- Even if you collect the data and interpret it correctly, will the organization actually do anything with it to make improvements?
The best way to collect metrics, of course, is to do it automatically as the team goes through its daily activities. As team members define and refine requirements, process requirement reviews, and perform change management and quality management activities, data can be collected automatically, processed, and made available in real time. This is exactly what an integrated solution like the Jazz platform does: it removes this obstacle and makes the collection, processing, and display of data a natural part of the project team’s daily activity.
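As a rough illustration of what that automation amounts to, here is a minimal Python sketch that pulls requirement records from a repository and computes a simple progress snapshot. The endpoint URL, credentials, and response shape are hypothetical placeholders, not the actual Jazz or DOORS NG API; a real integration would go through the server’s documented (for example, OSLC) services.

```python
# Minimal sketch of automated metrics collection from a requirements
# repository. The endpoint, credentials and response shape below are
# hypothetical placeholders -- consult your server's actual API docs.
import requests

BASE_URL = "https://jazz.example.com/rm/requirements"  # hypothetical endpoint

def fetch_requirements():
    """Pull the current set of requirements as a list of dicts."""
    response = requests.get(BASE_URL, auth=("user", "password"))
    response.raise_for_status()
    return response.json()["requirements"]  # assumed response shape

def progress_metrics(requirements):
    """Count requirements by status (e.g. Draft, Reviewed, Approved)."""
    counts = {}
    for req in requirements:
        status = req.get("status", "Unknown")
        counts[status] = counts.get(status, 0) + 1
    return counts

if __name__ == "__main__":
    for status, count in sorted(progress_metrics(fetch_requirements()).items()):
        print(f"{status}: {count}")
```

Because the collection runs as a script against live data, the snapshot is only as stale as the last run, which addresses the first obstacle above.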
Interpretation of Data
Understanding data and the resulting reports does take some skill, but it is not complicated. I always start by articulating exactly what information I want to view or what questions I need answered. Then I look for the data that will help me answer those questions. For example, you might want to know which high-priority requirements have no test cases associated with them. One capability in the Jazz suite that helps here is that the reports have a section called “What does this report tell me?”, so when you run a report, some interpretation is built in automatically. Of course, I always recommend people start simply; your skills will grow as you get more comfortable understanding and interpreting the data. Jazz also allows you to set up filters that help you answer specific questions at the click of a button, which makes analysis automatic and very easy.
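To make that example concrete, here is a small sketch of the kind of filter such a report automates, written against a plain Python list of requirement records. The field names (priority, test_cases) and the priority scale are illustrative assumptions, not the DOORS NG schema.

```python
# Sketch of the "high-priority requirements with no test cases" filter.
# The record fields below are illustrative, not an actual tool schema.
requirements = [
    {"id": "REQ-101", "priority": 1, "test_cases": ["TC-7"]},
    {"id": "REQ-102", "priority": 1, "test_cases": []},
    {"id": "REQ-103", "priority": 4, "test_cases": []},
]

HIGH_PRIORITY = 2  # priorities 1-2 count as "high" in this example

untested = [
    r["id"]
    for r in requirements
    if r["priority"] <= HIGH_PRIORITY and not r["test_cases"]
]
print("High-priority requirements without test cases:", untested)
# -> ['REQ-102']
```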
Improvement Actions for Continuous Improvement
Identifying and implementing actions is one of the most important aspects of ensuring continuous improvement. When your analysis indicates weaknesses in your process, it is essential to identify concrete improvement actions. Not only do you need to identify how you will improve your project and the processes you follow, you also need to assign owners and an implementation date to make sure the improvements actually happen.
I hope I’ve convinced you that metrics are important – well worth the time and effort it takes to collect and analyze data. By identifying and implementing improvement actions your organization can make considerable progress in working more efficiently and really achieving the ability to do more with less.
Anthony Kesterton is a Technical Consultant in the Financial Services Sector for IBM Rational in the United Kingdom. He has wide experience in IT development, including many years teaching, mentoring and using various IBM Rational tools, both as an end user and as an IBM employee. He is a regular contributor to the forums on jazz.net and co-author of the IBM Redbook “Building SOA Solutions Using the Rational SDP”. As an IBM Community volunteer, he works on programmes dedicated to encouraging children to take an interest in Science, Technology, Engineering and Mathematics (STEM) at school. He also regularly writes in the IBM Technical Field Professionals’ blog here. Anthony can be reached at akesterton[at]uk.ibm.com
One of the more interesting discussions I have had about requirements was how to indicate that a requirement must be implemented in the final system. The business wanted to make every requirement “mandatory”, which gives no one any indication of which requirements are really important. Eventually, the business analysts compromised and defined levels of “mandatoriness” (I think I might have invented a new term), ranging from “least mandatory” to “most mandatory”. The business was happy to accept this. To me, this showed the importance of gaining agreement on the requirement attributes. And it should not stop at attributes: attribute values, requirement types, traceability and document templates are also very important for a project. All this information should be part of a Requirements Management Plan.
A Requirements Management Plan has nothing to do with time and resources on the project, but everything to do with the structure of the requirements. It is a way to capture the kinds of requirements that will be used on the project, their attributes and other useful information. A typical Requirements Management Plan should contain the following (a sketch of such a plan as a data structure follows the list):
- Types of requirements: For example, business requirements, system requirements, and performance requirements.
- Attributes: Important information associated with each requirement type, for example priority, the source of the requirement, or its stability.
- Attribute values and the meaning of these values: Each attribute should have a range of possible values, and these must be agreed upon and documented. For example, an attribute for priority may have a value from 1 to 5, where 1 means the highest priority and 5 means the lowest.
- Traceability between requirements: There is usually a hierarchy of requirement types. This hierarchy needs to be captured and explained. For example, a feature of the system may link to a software requirement, with the feature at the top of this hierarchy.
- Document templates: Many organizations have standard ways they present information in document form. They might have a Software Requirements Specification with a specific format and structure.
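One way to see how these pieces fit together is to write the plan down as a structured artifact rather than free-form prose. The sketch below uses Python dataclasses purely for illustration; every type, attribute and value in it is an example, not a prescribed standard.

```python
# Illustrative sketch of a Requirements Management Plan as a data
# structure. All types, attributes and values are examples only.
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    allowed_values: list   # the agreed range of values
    meaning: str           # what the values signify

@dataclass
class RequirementType:
    name: str
    attributes: list = field(default_factory=list)
    traces_to: list = field(default_factory=list)  # lower-level types

priority = Attribute(
    name="Priority",
    allowed_values=[1, 2, 3, 4, 5],
    meaning="1 = highest priority, 5 = lowest",
)

software_req = RequirementType(name="Software Requirement",
                               attributes=[priority])
feature = RequirementType(name="Feature",
                          attributes=[priority],
                          traces_to=["Software Requirement"])

plan = {
    "requirement_types": [feature, software_req],
    "document_templates": ["Software Requirements Specification"],
}
print(plan["requirement_types"][0].traces_to)  # -> ['Software Requirement']
```

Writing the plan down this explicitly, even on paper, forces the team to agree on exactly the items the list above describes: types, attributes, value ranges and the traceability hierarchy.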
Having the discussion about this plan and getting agreement on its content can be enlightening. It can uncover what kind of information is important to the project, how the project plans to manage the requirements (via the attributes), and, most importantly, what kinds of requirements were overlooked. While the plan is being discussed, be prepared for heated debates about potentially every aspect of it.
Spend time creating a Requirements Management Plan for your project as soon as possible. Be prepared for changes to that plan over time as the project works out the really useful attributes or even requirement types. Most importantly, get agreement on the Requirements Management Plan: it really does help a project build requirements that clarify rather than obscure the project’s intent.