Modified on by VijaySankar
The world of requirements management has developed significantly over the last decade or so and has increasingly become one of the cornerstones of successful software and systems engineering projects. We have been discussing various aspects of the domain from a best-practices perspective, and how tools can help you manage your requirements efficiently and effectively.
Starting today, we will discuss various aspects of the requirements management discipline at a bird's-eye level. These posts are meant to be introductory in nature and also to serve as refreshers for those already in the field. The domain and its best practices have developed to such a level of sophistication that it is difficult to cover everything in a set of blog posts. However, we intend these posts to be a quick reference and a starting point for you to think seriously about the domain.
Have you heard of Gaudí's unfinished cathedral or the Ariane 5 explosion? The former is a project, more than a hundred years in progress, that could not be finished because of unclear and changing requirements; the latter saw a rocket from a development program costing over $7 billion explode on its maiden flight due to a software error, specifically a floating-point conversion error. The importance of requirements management can be established from three distinct perspectives: project overruns that miss the market opportunity because of unclear and changing requirements; project failures due to unmet or misunderstood requirements; and the cost burden of errors and missed requirements found late in the development cycle.
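The Ariane 5 failure mentioned above is worth a closer look: a 64-bit floating-point value related to the rocket's horizontal velocity exceeded the range of the 16-bit signed integer it was converted to, and the unchecked conversion brought down the guidance system. The Python sketch below simulates that class of error; it is purely illustrative (the actual flight software was written in Ada, and the function and values here are invented):

```python
def to_int16(value: float) -> int:
    """Simulate an unchecked conversion to a 16-bit signed integer,
    wrapping silently on overflow. Illustrative only - not the Ariane
    flight code, which was Ada."""
    raw = int(value) & 0xFFFF                       # keep only the low 16 bits
    return raw - 0x10000 if raw >= 0x8000 else raw  # reinterpret as signed

# In range: the conversion is faithful
print(to_int16(1234.5))    # 1234
# Out of range: a silent wrap-around yields garbage that downstream code trusts
print(to_int16(40000.0))   # -25536
```

A range check before the conversion (a one-line requirement, had it been captured) would have turned this silent corruption into a handled error.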
In a classic IEEE Spectrum article, "Why Software Fails", Robert N. Charette writes that among the top reasons software projects fail are poor definition of requirements, poor management of risk, communication failure among stakeholders and the increasing complexity of projects. In IBM GBS, ineffective requirements management is one of the top five reasons for troubled projects. Many research firms (the Standish Group's CHAOS report, Gartner, CMU-SEI) and academics (A. Davis, Robert B. Grady, Steve Easterbrook) have studied and quantified the failure rates of software projects; for example, in the IEEE article above, Charette notes that 40-50% of software development time is spent on rework, and that the cost of fixing a bug found in the field can be as high as 100 times the cost of fixing it at the development stage. Across these studies, the primary reasons for failures and overruns include ineffective management of requirements.
So what exactly is requirements management?
Before moving to requirements management, let's understand what a requirement is. A requirement can be anything from an abstract need to a well-drilled-down implementation detail of a system; essentially, it can be considered the detailed view of a need under consideration. The IEEE Standard Glossary of Software Engineering Terminology
defines a requirement as a condition or capability needed by a user to solve a problem or achieve an objective; a condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document; or a documented representation of a condition or capability as in the former two. Thus, what a requirement essentially represents depends on whom we are talking to: it could be a need for a client, a business requirement for customers, a system requirement for vendors, or a specification for a developer and tester. We will come to the different types of requirements later. Requirements management can be considered the management of requirements from the moment a customer provides the needs or a product development process is started. It includes managing the definition, elaboration and change of requirements during the development cycle. Peter Zielczynski, a requirements management expert, defines the following major steps in requirements management (Requirements Management Using IBM® Rational® RequisitePro®, Peter Zielczynski):
Establishing a requirements management plan
Developing the Vision document
Creating use cases
Creating test cases from use cases
Creating test cases from the supplementary specification
Zave (Classification of Research Efforts in Requirements Engineering, ACM Computing Surveys, 1997) defines requirements engineering as “the branch of software engineering concerned with the real-world goals for, functions of, and constraints on software systems. It is also concerned with the relationship of these factors to precise specifications of software behavior, and to their evolution over time and across software families.” While in practical terms this could be considered the same as requirements management, we can say that requirements engineering addresses the various aspects of requirements development, while requirements management is the set of processes in systems and software engineering that interfaces with requirements engineering. We will delve into more detail in another post when we consider the V&V (Verification & Validation) model.
This is the first part of our six part blog posts series on basics of requirements management. Read the remaining parts here -
1. What is requirements management and why is it important?
2. How to write good requirements and types of requirements
3. Why base line your requirements?
4. What is Traceability?
5. The uses and value of traceability
6. Revisiting Requirements Elicitation
In my career I've been deeply involved with both the modeling and requirements management disciplines and tools, so it always intrigues me when I hear debates over whether largely text-based (sometimes referred to as 'traditional' or 'document-based') or model-based approaches to defining and managing requirements are the right way to go.
We've all heard the argument that a picture paints a thousand words, but I've always vividly remembered something I heard at a conference some years ago: "I'd have taken a thousand words over this one unreadable diagram."
My belief is that it is not an either-or decision. You need both. Models can add clarity to requirements specifications and can bring together a more holistic understanding of what’s expressed in the requirements. Models can be walked through with stakeholders and with the right language and tools (like SysML or UML in IBM Rational Rhapsody), they can even be run to validate that what is captured in the model is correct, consistent and complete. But what if you have contractual requirements to manage, documents of regulations or standards to comply with, or complex performance or availability constraints – you don’t want to clutter your model with so much detail that it becomes unusable.
My preference is for a combination of textual requirements and models, as described by the 'Systems Engineering Club Sandwich' (references 1 & 2): the textual requirements form the layers of bread (maybe a bit dry on their own) and are supplemented by models that form the layers of filling (richer and more expressive), together making a tasty combination that helps explore and elaborate requirements, perform decomposition and allocation, and maintain traceability. I recently got together with my colleague Paul Urban to record a 30-minute webcast entitled 'The Tasty Way to Tackle Complexity - The Systems Engineering Club Sandwich of Requirements & Models'
where we take a look at some engineering challenges, where requirements work goes wrong, how the club sandwich approach works and how to use requirements and models together effectively. So if this hors d'oeuvre has made you hungry for more, please take a look. Paul and I are really interested to hear what you think.
1. Jeremy Dick and Jonathon Chard, "The Systems Engineering Sandwich: Combining Requirements, Models and Design", INCOSE International Symposium, Toulouse, July 2004.
2. Hull, Jackson & Dick, Requirements Engineering, Springer, 2004.
We had a fantastic DOORS customer webcast panel on September 26, with experts from across the industries talking about their experience using IBM Rational DOORS for requirements management. If we had to pick one success metric for the webcast, it would be that we overshot the designated one hour by half an hour because of the wonderful discussions going on. Our panelists have graciously agreed to respond to the questions offline, and we are publishing the answers here.
If you missed this golden opportunity and are wondering whether you can get another chance: YES! Or perhaps you would like to listen to it again; we have posted the replay, and you can listen to it anytime. Register here
How to copy the objects with in-links and out-links from one module to another module?
Use DXL scripts that capture the link information (which can be edited if necessary), then copy the objects, and then run another DXL script to re-create the links.
Paul Lusardi has shared with us a couple of samples. If you are interested, contact us
What factors determine using a new database vs. a new project in an existing database to differentiate/segregate work?
The biggest criterion for using the same database is linking. If there are projects in a database that you need to link to, the new project should be in that same database. The default should be to use the same database so that you only have to maintain one set of users, groups, standards, etc. The possible exception is if you have two distinct sets of security measures (HIPAA, DoD or DoE cleared data); then you will want to segregate that out to a different server.
Does anyone on the panel publish documents directly from DOORS without add-on applications?
[Patrick] I use DXL scripts which export the data out to RTF with some typical publishing formatting.
[Mia] I do. Whether you can depends on the demand for documents (do you need to produce them weekly? Daily? Quarterly?) and how critical their formatting is to your organization. We have configured views in most modules for output. A simple one has just the object number and the Object Heading/Object Text. For a full spec, we include trace columns for one or more linked modules, with object identifiers and even object headings or text in those columns. In the output you get the main object from the current module, followed by all the objects it's linked to. I include a label that specifies "in link" or "out link."
I used to painstakingly populate the Paragraph Style attribute in order to map to MS Word, but because our need for documents is very low I have stopped enforcing this. We have a set of MS Word templates with the title page, table of contents, other front matter, and a standard set of styles. Plain vanilla DOORS maps to basic Word styles fairly well. I pick the appropriate view and filters, select Export to Word, select the appropriate Word template, and let it rip. I then do some minimal post-processing, like getting rid of the object IDs on headings. Obviously if you're doing very long documents this would be impractical. In my previous job we had interns create a DXL script that handled the post-processing - you can get to something usable without a huge investment.
How do you deal with sharing your database with the government or other contractors? Do you share read-only partitions?
[Paul] Prior to coming to NuScale, I co-developed the Unified RM Database, where we allowed outside access, using access control and company VPN login credentials to enable multi-company development and review teams to look at the data. The questions would be: do you want them to see work in progress, or just a snapshot of baselined work? Perhaps a separate server with a restored baseline set would be best in that case. I will defer to an expert on baseline sets, though.
How do you prevent links from being created through the default DOORS Links module?
[Patrick] We use a link schema with Pairings as necessary, and then create and delete but not purge the DOORS Links module with a copy in every sub-Project/Folder, and leave it as the default Link Module. That way, users cannot create a link that is not intended between any module pair.
Paul Lusardi has shared with us a presentation. If you are interested, contact us
Is there a "Copy View" script available?
Paul Lusardi has shared with us a presentation. If you are interested, contact us
Please ask about doing risk assessment in DOORS: Do any of the panelists document FMEA or risk analysis in DOORS? How do they do it?
[Paul] Yes and no. Textbook FMEA? No. Using DOORS to analyze potential risks by looking at risk-informed design? Yes; this is a “developing” activity.
[Patrick] We capture the risk status in DOORS, but not the actual assessment data.
Do any of the panelists integrate DOORS with other tools like Rational Quality Manager?
[Patrick] We are looking into it, but have not actually started using it in production. We are going to deploy DOORS/CQ by year end.
[Paul] Only RPE integrations are used here.
How has your work differed using Scrum vs. when you did not?
Our pre-sprint requirements work is at a higher level – that is, not down in the functional weeds. We spend more time exploring new features from the user’s perspective, and more time on storyboarding and similar activities. The features we’ve built since going to agile are much more user-friendly and have much better workflows than the older stuff, where much of the workflow and usability decisions were left to the engineers.
Detailed functional analysis of requirements happens during the sprint, with the requirements analyst on the team working closely with the engineers and testers. For very complex features, this analysis begins before the sprint so that we have enough functional detail for the team to estimate the work. We have a small backlog grooming team, made up of members of the scrum teams, who help the requirements analyst with this.
For Mia: in your agile environment, how many full-time admins are needed to maintain DOORS?
[Mia] We're very small, so we don't even have one full-time admin. Our system engineer handles the environment, and I handle user administration. It's a tiny part of each of our jobs. I don't see that our approach represents a particularly different administrative need from any other. If we had a hundred users and multiple DOORS databases, then we'd dedicate administrative resources just like any other implementation.
@Mia - Are requirements for HW to SW interfaces traced through ICD(s) in DOORS?
We’re software only, so no HW/SW interfaces. However, I strongly support the use of interface specifications between software systems and have used an interface module to sit in between systems that need to interact. The trick is to stay on the requirements side of the requirements/design line. The cost of maintenance on such a model is high – probably too heavy for our agile process to maintain. However, should we need to support integration with an external application (one that our teams are not developing), our API requirements will undergo close scrutiny and possibly be carved out into a separate module so that we can use trace links to do impact analysis on external “clients.”
Does DOORS lend itself to any specific DevOps or ALM methodology, like agile?
We adopted a continuous deployment model this year and it had no impact on our use of DOORS. That is, DOORS is still a critical part of our development tooling. We were already capturing release version information for requirements in DOORS, and this information remains very useful in understanding what feature set exists in a given version of our application.
Try IBM Rational DOORS in a Sandbox
Webcast - Achieving sustainable requirements across the supply chain with IBM Rational DOORS
Today, we are starting a new series of interview blog posts -- Coffee Time with Requirements Experts. Through this series, we try to bring to you the thoughts, career experience and advice of experts from the industry. Enjoy!
We have with us Jared Pulham, a Senior Product Manager at IBM. He focuses specifically on requirements management tools and capabilities for Jazz. Jared is responsible for one of IBM's requirements management products, Rational Requirements Composer. He has over 15 years of industry experience in software testing and development, with a background spanning many companies across industries. He joined IBM through the Telelogic acquisition, where he was a Director of Product Management. He regularly writes on the jazz.net blog and can be reached at jared.pulham[at]uk.ibm.com
Q. Throughout your career in the software development industry, how have things in the field changed?
I have watched the improvements and changes in development processes, from waterfall models to the adoption of the faster development models proposed under the agile manifesto. Likewise, tools, and the features and concepts within them that support different roles in the development process, have continued to revolutionize and change the way organizations work.
Q. You have been mostly associated with the discipline of requirements management in your career; what attracted you to it?
I believe that requirements are the driver for shaping businesses and are the mechanism for deciding what should be developed to meet the customer's need. As a business and tool leader myself, I feel this is the right place to help my own and other organizations achieve similarly successful results.
Q. How do you see tools and techniques helping professionals in a requirements domain?
Tools help bring development teams together to collaborate and out of their silos. They allow teams to organize requirements in structures that make it easy for everyone to understand, to see and spot gaps where development is missing, and to easily recognize changes in the project.
Q. What do you think are some of the challenges faced by Business Analysts or Requirements Engineers today?
Understanding how they can work in faster-moving projects by adjusting to agile and iterative processes, and knowing how to help meet business goals and objectives through joined-up thinking and development.
Q. What interests you outside your job?
Sports, sailing, and technology improvements in mobile and Web devices for lifestyle.
Q. How do you keep yourself current in this fast changing technology field?
Working with many customers across industries and markets; writing blogs and papers that help challenge thinking about requirements in development; and speaking at conferences, where I get a chance to meet other thought leaders in development processes and tool development.
Q. What's your advice to budding analysts/engineers considering focusing on requirements processes or tools?
Focus on understanding the market drivers for your specific industry because customer demand (requirements) will always drive a project and understanding how to translate that demand into use cases, business cases and actual development content for your project will help you better support your organization. Once you understand those requirements look at the other members of your team and understand who in your team will best benefit from those requirements to improve the business (through development).
Just like the famous (mis)quote of Mark Twain, rumors of the demise of the requirements management tool IBM Rational DOORS are not only exaggerated but in fact untrue. I'd like to set straight some of the myths and misunderstandings that I've heard and seen perpetuated:
Myth: IBM is discontinuing support and development for the DOORS 9.x series
Truth: IBM continues to develop the DOORS 9.x series with the same level of development resources. We have new functionality to announce in 2013 and plan further releases in the years ahead. In addition IBM has recently invested additional development resources in creating a new Requirements Management tool called IBM Rational DOORS Next Generation (DOORS NG).
Myth: IBM is replacing the DOORS 9.x series with DOORS Next Generation (DOORS NG) and expects customers to migrate now
Truth: DOORS 9.x will be developed in parallel with DOORS NG for many years to come. DOORS NG was released for the first time in November 2012. The tool has many functions DOORS 9.5 does not have, e.g. a fully functional web client, type and data re-use (requirements & attributes), team collaboration and task management, but it does not have all of the capabilities relied on by DOORS 9.x users. We encourage users to evaluate the capabilities of DOORS NG and start or move projects to DOORS NG when practical and beneficial to the project.
Myth: To move to DOORS Next Generation, I’ll need to buy a whole new tool
Truth: Customers with active support & subscription on their DOORS 9.x licenses are entitled to use DOORS Next Generation - it is included as part of the DOORS 9.x package. Customers can choose to use either DOORS 9.x, DOORS Next Generation or a combination of both. There is no need for additional purchases or even for an exchange of licenses. All licenses are available to customers in the IBM license key center.
Myth: IBM offers no migration path from DOORS 9.x to DOORS NG
Truth: IBM does offer functions to help existing DOORS 9.x customers transition to DOORS NG:
Work with DOORS 9.x and DOORS NG alongside each other - both tools support linking of information between both databases.
Work with suppliers by exchanging requirements through the standard exchange format of ReqIF. This works best between DOORS 9.x and DOORS NG but is also designed to work with other RM tools.
Where DOORS NG projects need to be initialized with data from DOORS 9.x, we offer re-factoring functions to harmonize the transition of data from DOORS 9.x to DOORS NG.
Myth: Everyone knows IBM’s strategy on requirements management and DOORS
Truth: IBM takes great care to regularly communicate our product strategy and roadmap at trade shows, IBM conferences and on webcasts. If in doubt, come and ask IBM! Please submit questions using the comment feature here or contact your IBM representative.
For information about the products, visit -
IBM Rational DOORS
IBM Rational DOORS Next Generation
Note: This is the fourth post in our series of Managing Your Requirements 101. Read the first three posts here:
What is traceability? Or more specifically what is requirements traceability? Well rather than repeat what is already a good collection of definitions, I’ll refer you to http://en.wikipedia.org/wiki/Requirements_traceability
. From there I’d summarize three elements to requirements traceability:
Following the life of a requirement – from idea to implementation
How requirements impact each other, and how requirements impact other development lifecycle artifacts (such as designs, tests, tasks, source code, hardware specs, etc.) and vice versa.
The decomposition of requirements – from high level user/customer/market needs to system, sub-system, software or hardware component requirements; and transformation into design specifications and the implementation realization of the requirement.
Traceability in this context is about relationships between requirements at the same or different levels of detail, and between requirements and other lifecycle artifacts as listed above. It also extends to relationships beyond those directly involving requirements – i.e. the relationship of a defect report to a test case – this is referred to as ‘lifecycle traceability’. Traceability relationships can be of multiple types, for example:
Satisfaction: a system requirement (or more likely a number of system requirements) ‘satisfies’ a user requirement e.g. system requirement ‘The engine shall have at least 200bhp’ satisfies user requirement ‘The car shall be capable of accelerating from 0-60mph in under 8 seconds’.
Verification: a test case ‘verifies’ a requirement e.g. test case ‘0-60mph acceleration test’ (consisting of a number of test steps) verifies user requirement ‘The car shall be capable of accelerating from 0-60mph in under 8 seconds’.
Dependency (often used where interfaces are concerned): a requirement ‘depends’ on another requirement e.g. requirement ‘the power socket shall take 3 pins’ depends on requirement ‘the plug shall have 3 pins’.
Basic traceability establishes a relationship or link between one or more elements. Typed traceability adds the relationship type with its associated semantics (examples above). Rich traceability (ref: Requirements Engineering, Hull, Jackson & Dick, Springer, 2004) adds additional information on the traceability relationship, such as the rationale explaining why a group of system requirements satisfies a particular user requirement; or, since you often can't be 100% certain about specification or design decisions, you might document any assumptions you made in deriving a set of system requirements from a user requirement. The rich traceability approach is particularly valuable in heavily regulated industries and safety-critical systems, where audit trails of decisions made are vitally important to provide assurance and reduce risks.
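To make the typed and rich traceability ideas concrete, here is a small Python sketch; the artifact IDs, link-type names and fields are illustrative assumptions, not the data model of DOORS or any other tool:

```python
from dataclasses import dataclass, field

@dataclass
class TraceLink:
    """One typed traceability relationship between two artifacts."""
    source: str                # e.g. "SYS-12", a system requirement
    target: str                # e.g. "USR-3", a user requirement
    link_type: str             # "satisfies", "verifies" or "depends on"
    rationale: str = ""        # rich traceability: why the link holds
    assumptions: list = field(default_factory=list)  # documented uncertainty

links = [
    TraceLink("SYS-12", "USR-3", "satisfies",
              rationale="A 200bhp engine meets the 0-60mph-in-8s target",
              assumptions=["kerb weight stays under 1500kg"]),
    TraceLink("TC-7", "USR-3", "verifies"),
]

# An audit useful in regulated projects: satisfaction links with no rationale
undocumented = [l.source for l in links
                if l.link_type == "satisfies" and not l.rationale]
print(undocumented)    # []
```

The point of the `rationale` and `assumptions` fields is exactly the rich-traceability argument above: the link itself records why it exists, so an auditor does not have to reconstruct the decision later.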
Once traceability has been established there are multiple ways in which it can be viewed and reported on. Perhaps the oldest and most commonly recognized method is the traceability matrix where you can see the intersection between two sets of requirements and a check or cross shows where a link exists. This method doesn’t scale particularly well since the matrix could become very large. It’s also sometimes used for creating the links, but it’s not ideal for that either since you can typically only see a small amount of information on the requirements.
Another way to see traceability is to pick a starting point, e.g. the user requirements and display the related systems requirements alongside the user requirement they are linked to, in a traceability column. You can typically choose how much detail of the linked requirement is displayed, and you can even make it recursive, going down as many levels of requirements as you need/is practical to manage in a single view.
Graphical displays are great for getting a bigger picture view of traceability rather than immediately focusing in on the details of particular relationship. You can explore the traceability tree, zooming in/out or collapsing/expanding parts of the tree, or changing the focus (starting point) of the tree.
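The matrix and column views just described are easy to mock up. In this Python sketch the requirement IDs and link data are invented for illustration; a real tool builds the same views from its link database:

```python
# links: system requirement -> set of user requirements it satisfies
links = {"SYS-1": {"USR-1"}, "SYS-2": {"USR-1", "USR-2"}, "SYS-3": set()}
user_reqs = ["USR-1", "USR-2"]

# Matrix view: an X marks each intersection where a link exists
print("        " + "  ".join(user_reqs))
for sys_req, targets in links.items():
    row = "  ".join("  X  " if u in targets else "  .  " for u in user_reqs)
    print(f"{sys_req:>7} {row}")

# Column view: start from each user requirement, list what satisfies it
for u in user_reqs:
    satisfied_by = sorted(s for s, t in links.items() if u in t)
    print(f"{u} <- {', '.join(satisfied_by) or '(uncovered!)'}")
```

Even this toy version shows why the matrix scales poorly (it grows as rows times columns) while the column view stays readable: you only expand the starting points you care about.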
But what about in agile development, I hear you cry? Well, that could be another topic in its own right - watch this space - but relationships still exist between the typical artifacts created in agile approaches (such as between product features and user stories), and I argue that, as long as traceability is created 'as you go' and automated by tools as much as is practical, it is even more essential when changes are happening rapidly: it keeps you informed and ensures you are looking at the correct versions of related artifacts.
In a follow-on post in this Requirements 101 series, I’ll take a look at what traceability can be used for – highlighting where its application can bring significant value to your projects. But for now I’ll leave you with a few resources below that I’d recommend you take a look at, and ask you to let me know if you think this post was useful (or not!) and provide any feedback or additional information using the comment function.
Note: This is the third post in our series of Managing Your Requirements 101. Read the first two posts here -
Projects usually start with unclear requirements and expectations. A lack of baselined requirements can result in chaos, with frequent requirements changes leading to requirement and scope creep. Baselines can also help in acceptance testing and prototyping efforts, and they are especially valuable in fixed-price contracts.
A baseline is all about getting to a common base agreement between stakeholders. It essentially involves setting the right expectations, including responsibilities, risks, assumptions, deliverables and approaches. Once an agreement is reached, it can be put under version control to manage the baseline going forward.
Why bother baselining requirements? As mentioned in earlier posts, requirements are the foundation stones of a project, and unless we know what we are creating, how do we know what changes to make in due course? Starting a project without a proper analysis of requirements is a recipe for disaster - it's like building a house without a blueprint. When it comes to software projects, the lack of a baseline can incentivize clients to make endless changes while the project is in progress, resulting in requirement and scope creep. Requirements must be baselined initially and put under change control in the Statement of Work (SOW) so that the project can be planned, estimated and executed. In a requirements management tool like Rational DOORS or Rational Requirements Composer, a requirements project baseline captures the entire project at a specific moment in time, including folder structures and artifacts. Baselines also play a significant role in enabling traceability: they provide the foundation linkages for establishing the traceability matrix later in the project.
What should be included in a baseline? Though the contents of a baseline can vary, it essentially comprises the functional and non-functional requirements taken into consideration for a release or an iteration. It may also contain other aspects, such as sub-system and hardware dependencies. It is also important to note that requirements baselines evolve over time; the Business Analyst or Project Manager concerned takes the call on creating new baselines as requirements change or new requirements pop up. As mentioned above, a requirements baseline essentially captures the entire state of a project at a given point in time, including the vision/scope document, the glossary of terms, and the use cases (stories). Setting these boundaries is the starting point for avoiding requirements creep.
When is the ideal time to baseline? Baselines drive formal change control. A project manager is always trying to manage the triple constraint of scope, time and cost (as discussed by Kathy Schwalbe). Baselines help in managing the scope constraint so that focus can shift to the other aspects, and they also pave the way for setting schedules. Karl E. Wiegers, in his book More About Software Requirements, provides an exhaustive list of factors to consider before defining a requirements baseline.
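As a mental model only (not how DOORS or Rational Requirements Composer implement it), a baseline can be pictured as an immutable, checksummed snapshot of the requirement set, taken at a point in time and detached from subsequent edits; the field names below are illustrative:

```python
import copy
import hashlib
import json

def create_baseline(requirements: dict, label: str) -> dict:
    """Freeze the current requirement set into an identifiable snapshot.
    Illustrative sketch only - not a real tool's data model."""
    frozen = copy.deepcopy(requirements)   # detach from the live data
    digest = hashlib.sha256(
        json.dumps(frozen, sort_keys=True).encode()).hexdigest()
    return {"label": label, "checksum": digest, "requirements": frozen}

reqs = {"REQ-1": "The car shall do 0-60mph in under 8 seconds"}
baseline = create_baseline(reqs, "Release 1.0")

# Live requirements keep evolving after sign-off...
reqs["REQ-1"] = "The car shall do 0-60mph in under 7 seconds"
# ...but the baseline still records exactly what was agreed
print(baseline["requirements"]["REQ-1"])
```

The checksum captures the "specific moment in time" idea: any later claim about what was agreed can be verified against the frozen snapshot, which is what makes baselines useful for change control and fixed-price contracts.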
What do you think about baselines?
This is the third part of our six part blog posts series on basics of requirements management. Read the remaining parts here -
1. What is requirements management and why is it important?
2. How to write good requirements and types of requirements
3. Why base line your requirements?
4. What is Traceability?
5. The uses and value of traceability
6. Revisiting Requirements Elicitation
As we reach the close of a year and move into a new one,
it's often time to take stock of what we've been doing and making plans for the
New Year. We look at what we've been doing right and what we could change and
improve. So since this is a requirements management blog I thought it would
worth posing the question and giving an opinion on whether requirements
management (in the domain of systems and product development - my focus area)
is still relevant today and as we move into 2013?
I'm not really sure when requirements management as a formally
recognized discipline can be said to have come into being, but I do believe that
it really started to take shape in the early 90's, primarily based on work
coming out of the aerospace industry, and that's when commercial specialized
tools for requirements management, such as IBM Rational DOORS (then known
simply as DOORS from a company called QSS), first emerged. In 2005, I was leading
a team working on a campaign to promote the value of requirements management to
a wider audience than a core set of requirements specialists. We declared
2005 as the 'Year of Requirements Management' because of its increased
recognition as a discipline and the emergence of greater
tool capabilities for making requirements more easily accessible to a wider set of users.
So as we move towards 2013, is requirements management still
as relevant? Do we still have further to go on becoming more effective at it?
In a recent Aberdeen Group report, ‘Managing Systems Design Complexity: 3 Tips to Save Time’ by Michelle Boucher, which reports a survey of the effectiveness of the systems engineering capabilities of system and product development organizations, two of the three key recommendations are directly related to requirements management, in the areas of visual requirements definition and requirements
traceability. In the other recommendation on improving change management across
engineering disciplines, Michelle says that impact analysis is core to such
improvement and that’s enabled by requirements management and traceability. From
the study, a clear link can be shown between more effective requirements management
and traceability to business benefits such as reduced cycle times, improved
quality and increased product revenues. I also recently heard from another
analyst that one of the key challenges they are hearing from product
development organizations is getting a better handle on interrelationships
between requirements across engineering disciplines, so they can respond more
effectively to changes.
So my answer to the question I posed, of whether requirements
management is still relevant, is a resounding YES! We’ve made significant progress,
but the complexity of the systems we build has also increased, and we need to keep
pace with changes in practices and technologies, so I expect effective
requirements management to remain a cornerstone of successful product
development and for practices and supporting tooling to continue to evolve.
But what do you think? Will requirements management be as
important in the future? How will it/should it change?
The importance of communication and collaboration in developing and managing good requirements was discussed in our earlier post on How to enable effective requirements communication and collaboration
. In this guest blog post, Melissa Robinson, a Senior Technical Specialist at IBM, writes about how Rational DOORS addresses this aspect with Discussions. Melissa started her career at Telelogic, enabling Product Management with technical support around requirements management. She spent three years at Telelogic supporting clients getting started with requirements management. After IBM acquired Telelogic in 2008, Melissa transitioned roles to support clients with Enterprise Architecture initiatives. She received the Carnegie Mellon certification in Enterprise Architecture in 2008 and is TOGAF certified. Melissa now supports clients getting started with evaluating and implementing both requirements management and enterprise architecture solutions.
Note: Please click on the screenshots for a better view
Why did we make this decision? Who made this decision? Who approved this requirement?
These are some of the questions we can help answer with effective collaboration messaging in DOORS. Collaboration messaging is now enabled in DOORS and DOORS Web Access (DWA) with the addition of DOORS Discussions. Discussions allow users to add comments to requirement objects or requirement modules; users can even comment on baselined requirements. They offer a way of having a conversation about requirements, and they really break the communication barrier by letting users easily comment on, or start a discussion about, any requirement, including read-only requirements. Discussions can be created in DOORS or DWA and viewed in both. Both the DWA Editor and DWA Reviewer roles can contribute to Discussions, which capture comments so that you can later review ancillary information about your requirements. Discussions allow everyone to contribute comments and gain a full understanding of requirements.
Here is a simple scenario for using DOORS Discussions. A DWA Reviewer user creates a Discussion on a requirement. A DOORS user then reviews this comment and contributes a comment on the requirement. The DWA user reviews the latest comment and closes the Discussion.
A DOORS user, Susan, reviews the current Discussion created by a DWA Reviewer user, Kavita. Susan can open the requirements module with a pre-created Discussion view to review the Discussions. Below, Susan reviews the Discussion on requirement AMR-STK-66.
Susan can contribute a comment to the open Discussion.
Kavita reviews the new Discussion comment in DWA. Notice that Kavita is a Reviewer in DWA. As a Reviewer, she can create and add comments to Discussions. Kavita can also close Discussions that she started. Later, Kavita can contribute another comment to the open Discussion.
As the person who first opened the Discussion, Kavita can close it. Later, in DOORS, Susan can review the latest status of the Discussion using the Discussion Thread view. In the Database Manager role, Susan can choose to re-open the closed Discussion at any time.
Discussions open up the communication thread between several different types of DOORS users. Discussions allow requirements reviewers to exchange views and comments about the content of a requirements module or the content of a requirement object in a module.
We believe this post gave you a sneak preview of how DOORS Discussions help various stakeholders collaborate and communicate effectively during requirements management. Feel free to contact melissarobinson[at]us.ibm.com if you have any queries about the topic. Melissa will discuss the topic in detail in an upcoming webcast on October 5, 2012. Don't miss the opportunity to watch the action live.
Register now @ http://bit.ly/DOORS_Discussions
In this post Jim Hays
writes about the various options available in testing how the requirements are met in IBM Rational DOORS.
Jim works as a Senior Systems Engineer at IBM. He started his career in 1982 working for software providers, and his career history is an interesting one: he hasn’t worked for many software companies over the course of his career. He worked for Applied Data Research (7 years), which eventually got bought by Computer Associates. He then moved to Goal Systems (6 years), which got bought by Legent; Legent then got bought by Computer Associates. He then spent almost 10 years at Sterling Commerce, which was just bought by IBM. After Sterling Commerce he joined Telelogic (7 years), where he got into the ALM market; Telelogic was eventually purchased by IBM Rational.
I’ve been involved with DOORS for over 7 years, and absolutely love the tool. My job at IBM is to technically support our sales team and our customers, not only for DOORS but for many other solutions we offer. I have had a lot of experience working with our DOORS customers and understanding how they use DOORS. Even though DOORS is a requirements management solution, customers put other types of information into DOORS besides requirements. One example is putting test data into a DOORS module, which enables easy linking between the requirements and their related validation/verification results.
Note: Please click on the screenshots for a better view
Provided below is an example showing that. In this DOORS View we see traceability between 4 modules:
User Requirements>Functional Requirements>Functional Test Plan>Functional Test Cases
For years DOORS has included a capability called the Test Tracking Toolkit. It captures test results in a DOORS module by duplicating attributes: each new test run gets its own set of attributes, so that results are stored uniquely per run. Over time this creates a lot of attributes for capturing and storing per-run results. Both of the usages described here for capturing tests and test results enable quite easy linking between the DOORS requirements and their related tests and test results. Below are the options available when utilizing the Test Tracking Toolkit.
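To make the per-run attribute pattern concrete, here is a rough Python sketch (the data layout, run names, and verdict values are invented for illustration; the real toolkit works on DOORS module attributes). Each test run adds its own result column per requirement, which is also why attributes accumulate over time:

```python
def record_run(module, run_name, results):
    """Store one test run's verdicts as a new per-run 'attribute'.

    module:  dict mapping requirement ID -> {run name: verdict}
    results: dict mapping requirement ID -> verdict for this run
    """
    for req_id, verdict in results.items():
        module.setdefault(req_id, {})[run_name] = verdict
    return module

# Each call adds a fresh "column" of results, mirroring attribute duplication.
module = {}
record_run(module, "Run 1", {"SRS-10": "pass", "SRS-11": "fail"})
record_run(module, "Run 2", {"SRS-10": "pass", "SRS-11": "pass"})
```

After two runs, every requirement carries two result entries; a third run would add a third, just as the toolkit keeps adding attributes per run.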
(Read in clockwise from top left)
So what are the positives and negatives of these two usages of DOORS modules to capture test, validation, and/or verification information? The positive is the ability to store results and easily link requirements to them using standard DOORS linking. If requirements change that are linked to test-based modules, then standard DOORS “suspect links” would notify the folks maintaining the test-based DOORS modules of the requirement change, so they can decide whether they need to update their test plan or retest the test case. The other question is who is maintaining/updating the DOORS test-based modules? Are the actual testers going into DOORS to update the test results? Or are the testers using a spreadsheet to capture test results and giving that to a DOORS user to update in DOORS? In my opinion, either of these scenarios is fine for projects that only do manual testing and don’t have a lot of testing (i.e., test runs) to perform. The other issue is that the actual testing can occur on different environments. For example, if one built a web-based application that will run on different operating systems, then one would need to test all of those configurations. If one is building an embedded software device or a software application, then one might want to not just do manual testing, but instead automate the execution and capture of results via automated testing.
In my opinion, a solution like DOORS is great for requirements management; however, I believe the folks who are in charge of quality and/or performing the task of testing should have a solution suited to the role they play in a project: test management. So the final option for testing I will discuss is how DOORS (managing requirements) can integrate with IBM’s test management solution, Rational Quality Manager (RQM). RQM provides a nice environment supporting both manual and automated testing.
Provided below are an example “dashboard” that users can configure based on what they would like to see, and an example of an RQM Test Plan.
Provided below is an example of a DOORS View showing the requirements from DOORS that are known to this particular RQM project. Requirements-driven testing enables requirements from DOORS to be used to automatically generate test cases and build specific links between the DOORS requirements and those test cases. The screenshot on the right shows the results of that automated link creation, with traceability from test cases to DOORS requirements; it could also show development software assets.
The integration between DOORS and RQM utilizes OSLC (Open Services for Lifecycle Collaboration). Below, a “rich hover” shows details about linked items without actually having to navigate the link. One can also see the results of the test execution (pass or fail).
As the testers do their work, data from RQM can be mapped into DOORS-based attributes on the requirements. Below is an example showing the traceability between DOORS requirements and the testing side of things in DOORS; I can see that the latest test case run passed. Also provided below are screenshots showing coverage analysis relating the DOORS requirements to the test plan and associated test cases.
Finally, the screenshot provided below shows the test case execution results that were performed via RQM. These are mapped to DOORS attributes via the bi-directional integration, and regular DOORS sorting and filtering can be used, for example to see which test cases failed or passed.
We hope this blog post was useful. Feel free to contact Jim @ haysji[at]us.ibm.com if you have any queries regarding the options for testing in DOORS.
Also, Jim will be hosting a webinar on the same topic, in which he will go in depth into the ideas presented in this post about the options for testing in conjunction with DOORS. Register for the session here - http://bit.ly/DOORS-Testing_Options
Date: September 7, 2012 (Friday)
Time: 1PM EDT
I was lucky enough last week to travel to the INCOSE
(International Council on Systems Engineering
) International Symposium 2012
near Rome, Italy.
An excellent opportunity to meet the systems engineering community and hear
about their interests and concerns. We had lots of traffic to the very stylish
IBM booth where we talked about the IBM Rational solutions for systems
engineering and the latest from IBM Research on tool interoperability and
design optimization & trade-off. I’d like to claim the traffic was due
to my presence, but in fact there was lots of excitement and interest in the
must have giveaway of the conference, the IBM Limited Edition of Systems
Engineering for Dummies book
(if you weren’t there and don’t have a copy, you
can download a PDF version).
Being at the INCOSE event reminded me of the very active and
lively discussion I recently provoked on the INCOSE LinkedIn group with the posting of the link to my
previous blog post ‘Traceability – How Much is Enough?’.
It’s a great read with some very provocative statements about whether
traceability is at all useful and that it’s the root cause of failure on
projects that overrun and overspend versus those that say it’s absolutely vital
on safety-critical systems or where the project is contract-driven. In the end I
think some consensus was reached between these two camps that ‘just enough’
traceability to keep a project on track, provide customer/market need context
to engineers, facilitate impact analysis, and (if needed) to meet industry
standards and regulations, is sufficient. Any more is excessive and wasteful
and likely to bog down progress towards delivering innovative products and systems.
During a quiet time at the IBM booth, I also had a chance to
chat with my colleague Brian Nolan (marketing manager for aerospace &
defense industry at IBM Rational) about effective traceability, since Brian
is very interested in this topic and has presented on a Dr Dobbs
webcast on ‘3 Ways to Improve Traceability and Impact Analysis’.
Brian believes in what I would describe as ‘traceability by design’, meaning
that traceability is automatically established while you decompose your system
design (for example, use case to use case realization to sequence diagram and
so on). This discussion also reminded me of what another colleague Greg Gorman
(program director for IBM systems and software engineering solutions and the
INCOSE Corporate Advisory Board member from IBM) described several years ago as
‘link while you think’, meaning traceability is created by the tools, while you
are performing requirements decomposition, design and development, rather than
as an overhead activity afterwards.
I think we’ve now moved some way beyond ‘link while you
think’. An information model with ‘just enough’ traceability for your
project needs is essential to avoid traceability spiraling out of control. New
approaches such as Linked Lifecycle Data
from the OSLC (Open Services for Lifecycle Collaboration) community,
and tools that recognize implicit traceability, provide
new ways to visualize lifecycle traceability and perform effective impact
analysis. With these, we can make traceability work for us to help engineering become more
agile, while staying within cost and schedule and producing innovative, higher-quality
products and systems.
As we mentioned in an earlier blog post
, DOORS 9.4 and DOORS Web Access (DWA) 1.5 were released during Innovate 2012. This blog post provides insight into what’s changed in this release of DOORS and some of the significant new features. I have also provided a few resources where you can learn more about this release.
DOORS-RQM Integration based on OSLC
The most significant changes in DOORS 9.4 are the improvements to OSLC-based integrations. A new OSLC-based integration has been provided for Rational Quality Manager
(RQM). Let's see how it differs from the existing point-to-point integration (RQMI). Provided below is a simplified representation of how RQMI works for the DOORS-RQM integration, contrasted with the new OSLC-based integration.
As you can clearly see, the integration has been made much simpler in terms of software and storage, yet more powerful. The new integration provides a stable architecture for future enhancements and an automated migration. In terms of installation and configuration, the new integration no longer requires the server and Java client components.
What does this mean to a typical DOORS user?
- This enables real-time lifecycle traceability to RQM test cases, through either the hover-over menu in DOORS that displays RQM artifacts or directly from RQM.

What does this mean to a typical RQM user?
- The real-time integration enables the RQM user to review and edit the automatically created draft test cases (with the new requirement reconciliation wizard) based on new requirements, and trace them back to DOORS. Full test coverage when requirements are changing is enabled with features like:
- Automatic display of requirements not covered by test cases in current test plan
- Provision for linking existing test cases to new requirements
- Display of modified and removed requirements
- Enhanced suspect-ability analysis
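The "requirements not covered by test cases" feature above boils down to a set difference between requirements and the union of what the test cases trace to. A hedged sketch in Python (the data shapes and IDs are invented for illustration; the real check runs inside RQM against linked DOORS requirements):

```python
def uncovered_requirements(requirement_ids, test_cases):
    """Return requirements that no test case in the plan traces to."""
    # Union of everything any test case claims to validate.
    covered = {req for tc in test_cases for req in tc["validates"]}
    return [r for r in requirement_ids if r not in covered]

# Illustrative data: AMR-2 has no test case tracing to it.
requirements = ["AMR-1", "AMR-2", "AMR-3"]
test_cases = [{"id": "TC-1", "validates": ["AMR-1"]},
              {"id": "TC-2", "validates": ["AMR-1", "AMR-3"]}]
```

Running the check on this data flags AMR-2 as the coverage gap, which is the kind of result the test plan view surfaces automatically.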
Another important improvement in this release is enhanced traceability to meet regulatory requirements. This enables:
- Linking of one or more requirements to each test step of a manual test script
- Managing the association of requirements to related test cases
- Display of links during test execution and in test case results
Apart from this, OSLC-based integration with Design Manager for RSA and Rhapsody is also available in DOORS 9.4. However, Design Manager is still in beta, and this integration will be available only later this year. For more details, visit jazz.net.
Also, the data exchange mechanism has been upgraded from RIF to the latest version of the OMG (Object Management Group) ReqIF (Requirements Interchange Format), which improves communication of requirements between organizations in a supply chain. Support for data exchange, linked data, and reporting across DOORS 9.x and DOORS Next Generation is also included.
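Because ReqIF is plain XML, exchanged requirements are easy to inspect with standard tooling. A minimal sketch in Python, assuming a file that declares the standard ReqIF 1.0 namespace as its default (the sample document below is invented and far smaller than a real export, which also carries datatypes, attribute values, and specifications):

```python
import xml.etree.ElementTree as ET

# Standard ReqIF 1.0 namespace (assumed to be the document's default xmlns).
REQIF_NS = "{http://www.omg.org/spec/ReqIF/20110401/reqif.xsd}"

def list_spec_object_ids(reqif_xml):
    """Return the IDENTIFIER of every SPEC-OBJECT in a ReqIF document."""
    root = ET.fromstring(reqif_xml)
    return [obj.attrib["IDENTIFIER"]
            for obj in root.iter(REQIF_NS + "SPEC-OBJECT")]

# A toy fragment for illustration only.
sample = (
    '<REQ-IF xmlns="http://www.omg.org/spec/ReqIF/20110401/reqif.xsd">'
    '<CORE-CONTENT><REQ-IF-CONTENT><SPEC-OBJECTS>'
    '<SPEC-OBJECT IDENTIFIER="req-1"/>'
    '<SPEC-OBJECT IDENTIFIER="req-2"/>'
    '</SPEC-OBJECTS></REQ-IF-CONTENT></CORE-CONTENT></REQ-IF>'
)
```

This is what makes ReqIF useful in a supply chain: either side can validate and process the exchanged file without the other side's tool.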
There are also some licensing changes when using DOORS with Rational Publishing Engine (RPE). We have removed the requirement for an RPE license when using RPE custom templates directly within DOORS. You still need a license for creating new custom templates; however, one is not required to drive the reports.
Usability Enhancements
We will briefly look at the usability improvements that have gone into DOORS 9.4. Many of these reduce the need to write custom DXL scripts. In DOORS 9.4, we have provided stronger support for defining and managing how more than one user can work on a module simultaneously. It is controlled with a widget that lets you set a sharing level for editing, as shown below.
Views now support color coding, and the user can control the background color of attributes. Views have also been extended to 128 columns.
Another small, yet significant usability improvement is the ability to remove multiple views in a single selection. And finally, DOORS now supports rich-text export to Microsoft Excel.
What do you think about the improvements? If you have questions or comments please leave them here. If you need more information about the product, trials or resources, visit IBM Rational Requirements Management Web Page.
IBM Rational DOORS family solutions offer best practices in requirements management and traceability, saving organizations time and money through improved collaboration with stakeholders, eliminating inaccurate, incomplete, and omitted requirements.
DOORS Next Generation is a requirements management application for optimizing requirements communication, collaboration and verification throughout your organization and supply chain. This scalable solution can help you meet business goals by managing project scope and cost. It lets you capture, trace, analyze and manage changes to information while maintaining compliance with regulations and standards.
Top three reasons your organization needs DOORS Next Generation
- Reduce development costs by up to 57%
- Accelerate time to market by up to 20%
- Reduce cost of quality by up to 69%
Key Features:
Centralized location: Requirements management in a centralized location improves team collaboration and provides access to full editing, configuration, analysis and reporting capabilities through a desktop client. It also supports the Requirements Interchange Format, enabling suppliers and development partners to contribute requirements documents, sections or attributes that can be traced back to central requirements. It records and displays requirements text, graphics, tables, requirement attributes, change bars, traceability links and more.
Link requirements: Establish traceability by linking requirements to design items, test plans, test cases and other requirements. Users can concurrently edit separate product and system requirement documents and link entries between documents. Requirement entries can also be linked to models, text specifications, code files, test procedures and documents created with other applications.
Scalability: Addresses changing requirements management needs, offering an explorer-like hierarchy with multiple levels of folders and projects for simple navigation no matter how large the database grows.
Change management: Integrations help manage changes to requirements with either a simple, pre-defined change proposal system or a more thorough, customizable change control workflow. The tool integrates with Rational change management software for requirements change control and workflow management. It also integrates with other Rational solutions, including IBM Rational Quality Manager, IBM Rational Rhapsody and IBM Rational Focal Point, and with HP Quality Center, giving testers visibility of requirements so they can create test cases, maintain traceability and report on requirements coverage by test cases. Integration with Microsoft Team Foundation Server (TFS) enables Microsoft Visual Studio development teams to create and maintain traceability between requirements in Rational DOORS and TFS work items in Visual Studio.
If your organization wants to replace outdated and expensive legacy tools, and needs better control over multiple versions of documents, IBM appreciates the opportunity to discuss requirements with you. If you’d like to see a live demo, please click dngdemo.
Would you like to start benefiting from IBM Rational DOORS Next Generation? Start your free trial today: Free Trial
Register Now: https://attendee.gotowebinar.com/register/5214791604961762305
Many regulatory agencies and industry standards (ISO 26262, DO-178, IEC 61508 and IEC 62304) require engineering tools used as part of the safety development life-cycle to be qualified to ensure that the tool is functioning correctly for the intended purpose. Using software tools in a qualified capacity enables users to achieve optimum value from the use of the tools, and significantly reduces the burden of costly and recurring manual reviews of the tool outputs.
Learn all about the new IBM DOORS® Next Generation tool qualification kit (TQK) from IBM business partner CertTech and see how you can apply it effectively in your safety development life-cycle to reduce development costs associated with compliance to industry standards.
Speaker: Jeffrey Gray, CEO of CertTech
For any queries reach out to this blog post author - Adrian Whitfield at email@example.com
Modified on by KavishGoel
Register Now: https://attendee.gotowebinar.com/register/4027093644804267522
Many regulatory agencies and industry standards require engineering tools used as part of the safety development life-cycle to be qualified to ensure that the tool is functioning correctly for the intended purpose. Reliance on software tools for eliminating, reducing or automating elements of the safety life-cycle continues to increase. Formal tool qualification provides objective evidence of compliance with the regulatory standards, ensuring the tool functionality correctly performs the intended purpose. Using software tools in a qualified capacity enables users to achieve optimum value from the use of the tools, and significantly reduces the burden of costly and recurring manual reviews of the tool outputs.
The tool qualification process required by industry standards can be a complex process, and an additional burden on the development organization that is already concerned with making their own deliverables. Therefore the use of "off the shelf" tool qualification kits can be a desirable option to reduce development costs overall.
IBM business partner CertTech has recently introduced a new tool qualification kit (TQK) for IBM DOORS® Next Generation, to support highly regulated industries such as automotive, aerospace and medical electronics. The kit has been approved for use within the automotive ISO 26262:2011 safety development life-cycle by the international assessor TÜV SÜD, with a full report included as part of the kit.
Join leading experts in this field on June 10 to learn more about tool qualification and, of course, how this new offering can best be applied within your safety development life-cycle.
For any queries reach out to this blog post author - Adrian Whitfield at firstname.lastname@example.org
Modified on by KeithCollyer
DOORS Next Generation has been out in the field and in serious use for some time, and we know that many long-time DOORS users have been looking to extend it in the same way as they can with DOORS. The scripting capabilities in DNG are not as full as those provided through DXL, but they are being developed with each release. We already have a number of examples published on jazz.net. I am sure that many of you will have had ideas as well. I encourage you to add yours to the existing catalog.
If anybody is looking for ideas, a recent piece of work I was involved with came up with the following list of candidates:
Set attribute value on import based on keyword. E.g., if text has "shall", set Priority to "High".
Module comparison. The use case: a document is exported for review by people without access to DNG, and they send back a new version containing changes; the recipient wants to see those changes. In some ways this is similar to round-tripping, but the historical version is retained rather than updated. This is similar to the DOORS 9 module comparison.
Parent-child linking. This maintains traceability as required by many standards. For DOORS, we always recommended that such links not be created (as they make traceability analysis more difficult), but that the requirements should be written in such a way that they are not needed. This can always be done, but often requires some thought. Note that this linking should not (generally) be done between requirements and headings, only where one requirement is a child of another.
Allow viewing and editing of traceability as a matrix
Make permissions visible
I'm sure that there are many other ideas out there!
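The first idea on the list, keyword-based attribute setting, is easy to prototype outside the tool. Here is a Python sketch of the rule logic (the rule table, the attribute name "Priority", and its values are invented for illustration; inside DNG this would be wired up through its scripting/extension capabilities):

```python
import re

# Hypothetical rule table: keyword pattern -> Priority value.
RULES = [
    (re.compile(r"\bshall\b", re.IGNORECASE), "High"),
    (re.compile(r"\bshould\b", re.IGNORECASE), "Medium"),
    (re.compile(r"\bmay\b", re.IGNORECASE), "Low"),
]

def priority_for(text):
    """Return the Priority implied by the first matching keyword."""
    for pattern, value in RULES:
        if pattern.search(text):
            return value
    return "Unassigned"
```

On import, each requirement's text would be run through `priority_for` and the resulting value written to the attribute, for example mapping "The system shall log all access attempts" to "High".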
Yet another edition of Innovate, the IBM Technical Summit, is knocking at your door!
The 2014 event promises to be even more exciting, with top-notch keynotes, over 450 breakout sessions, labs, certifications and our biggest exhibit hall ever. As in previous events, Requirements Management is one of the key areas of interest at Innovate, attracting speakers and attendees from across the globe and from a wide range of industries. In 2013, we had two Requirements Management tracks of sixteen sessions each, one focusing on IT and the other on Systems Engineering, with 25 real-life case studies, 2 panel discussions and 4 instructor-led sessions.
Managing requirements has always been a cornerstone of both software and systems development. The importance of the discipline continues to grow and is expected to take a leading role in the coming years. This is an opportunity to showcase your thoughts on the discipline, and on how requirements management tools like DOORS or Requirements Composer can aid in effectively managing requirements for project success. Here are some topics from last year, along with an expected list of topics:
· Requirements Management in Agile Projects
· Managing requirements in developing Safety Critical Systems
· Requirements engineering and supporting layered requirements and models
· Requirements Reuse: Methods and best practice
· Requirements management for complex systems and teams
· Requirements definition and management case studies
· Best practices in aligning business goals and IT
· Value-based requirements engineering
· DOORS, Requirements Composer and other Rational products best practices
Some session topics from Innovate 2013
· Managing Parallel Streams of Requirements in DOORS at an Automotive OEM
· Successful RRC adoption through a Community of Practice: Case Study from Blue Cross Blue Shield of North Carolina
· Agile Requirements: Maintaining the Model Through Iterations (Emerging Health)
· Cerner Corporation: Migrating from Requisite Pro to RRC
Share your experience, thoughts and best practices on requirements at an event attended by industry experts and IBM core development teams. Here are the top three reasons on why you should submit your paper for Innovate 2014.
Submit your papers before February 7, 2014 and stand a chance to present at Innovate 2014! For more details, visit https://www-950.ibm.com/events/tools/innovate/innovate2014ems/screens/intro.xhtml
Modified on by VijaySankar
Prof. Lawrence Chung (email@example.com) is in Computer Science at the University of Texas at Dallas. He has been working in System/Requirements Engineering and System/Software Architecture. He was the principal author of the research monograph “Non-Functional Requirements in Software Engineering", and has been involved in developing “RE-Tools” (a multi-notational tool for RE) with Dr. Sam Supakkul, “HOPE” (a smartphone application for people with disabilities) with Dr. Rutvij Mehta, and “Silverlining” (a cloud forecaster) with Tom Hill and many others. He has been a keynote speaker, invited lecturer, co-editor-in-chief for Journal of Innovative Software, editorial board member for Requirements Engineering Journal, editor for ETRI Journal, and program co-chair for international events. He received his Ph.D. in Computer Science in 1993 from University of Toronto.
What are non-functional requirements (NFRs)?
NFRs colloquially have been called “-ilities” and “-ities”, since many words referring to NFRs end with “-ility” (e.g., usability, flexibility, reliability, maintainability) or “-ity” (e.g., security, integrity, simplicity, ubiquity). There are of course many other words that do not end with either “-ility” or “-ity”, such as performance, user-friendliness, power consumption, and esthetics, but still refer to NFRs.
Functional requirements (FRs), in contrast, are about functions, activities, tasks, etc. that may accept some input and produce some output.
Consider, for example, “add” (“+”) on a calculator, which takes two numbers as input and produces another number as output on the screen. Now suppose you type “2 + 3 =” today, and the calculator shows “5”, but only one year from now. In this case, the “add” on the calculator is functionally correct but non-functionally terrible, in particular concerning performance.
As even this simple example shows, a system which fulfills only functional requirements is often not usable, or even not useful.
So, handle NFRs and handle them appropriately. Don’t spend time only on FRs.
The “soft” Characteristics of NFRs and how to deal with them:
NFRs are global, subjective, interacting and graded.
FRs, such as “The calculator shall offer an “add” function”, are local in the sense that they are specific to particular functions and not applicable to other functions or globally to other systems, such as a “subtract” function or a banking system. However, NFR terms such as “performance” can be applied to many other functions and systems, such as a “subtract” function and a banking system, and also to parts of such functions and systems.
In contrast to FRs, NFRs are subjective in both their definitions and the manner in which they need to be met – some are more subjective than others. Concerning definitions, for example, usability may mean simplicity and the availability of many help facilities to some people, while to others it may mean something different, such as a minimal learning curve and fast response. The manner in which NFRs are seen to be satisfactorily met also depends on the (perception of the) user. For example, a keyboard with tiny keys on a smartphone may be usable for young people but not for older people. Also, large keys may be good enough for some people using a smartphone, but context-sensitive help may additionally be needed before the smartphone is considered usable by some other people.
So, clarify the definitions of NFRs. Don’t assume they have unanimously agreeable definitions.
So, operationalize NFRs. Don’t just leave them without how they can be met.
NFRs also interact with each other, either synergistically or antagonistically or both. For example, a heavy authentication mechanism, introduced for the purpose of enhanced security, may hurt usability. If it takes three different passwords – which have to be changed every month and must each contain at least one special character, one digit, one upper-case character, one function key, etc. – to get into the system, the user is unlikely to feel that the system is user-friendly. Hence, a conflict between security and user-friendliness. But a heavy security mechanism may help prevent unauthorized people from entering fake data into the system – hence a synergy between security and the accuracy of data.
So, identify conflicts among NFRs. Don’t think you can do anything with them individually without any negative consequences.
So, identify synergies among NFRs. This is how we get “the whole becoming bigger than the sum of its parts”.
NFRs are graded, in the sense that they are usually met to different degrees. For example, an “add” function may be seen to be very good, good, bad or very bad, concerning its performance or usability, and different ways to implement the add function may affect the function differently – e.g., fully positively, partially positively, fully negatively, and partially negatively.
So, consider the degree of contributions between NFR-related concepts. Don’t simply think NFR-related concepts affect each other in a binary manner – either a complete satisfaction or dissatisfaction.
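The graded contributions and interactions described above can be sketched as a tiny data structure, in the spirit of a softgoal interdependency graph. This is only an illustrative sketch: the operationalization names, the NFR names, and the contribution labels below are assumptions made up for the example, not content from the post.

```python
from collections import defaultdict

# Graded contribution labels: "++" fully positive, "+" partially positive,
# "-" partially negative, "--" fully negative.
# Keys are (operationalization, NFR) pairs; all names are hypothetical.
contributions = {
    ("heavy_authentication", "security"): "++",   # strongly helps security
    ("heavy_authentication", "usability"): "--",  # strongly hurts usability
    ("heavy_authentication", "data_accuracy"): "+",
    ("large_keys", "usability"): "+",
}

def interactions(contribs):
    """Flag NFR pairs that one operationalization affects in opposite
    directions (a conflict) or in the same direction (a synergy)."""
    by_op = defaultdict(list)
    for (op, nfr), label in contribs.items():
        by_op[op].append((nfr, label))
    conflicts, synergies = [], []
    for op, effects in by_op.items():
        for i in range(len(effects)):
            for j in range(i + 1, len(effects)):
                (n1, l1), (n2, l2) = effects[i], effects[j]
                same_sign = (l1[0] == l2[0])  # compare '+' vs '-'
                (synergies if same_sign else conflicts).append((op, n1, n2))
    return conflicts, synergies

conflicts, synergies = interactions(contributions)
```

Here the heavy authentication mechanism shows up both in a conflict (security vs. usability) and in a synergy (security and data accuracy), mirroring the passwords example above.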
In a nutshell, NFRs cannot be defined or met absolutely in a clear-cut sense; in other words, they are soft.
So, satisfice NFRs. Don’t think NFRs can be satisfied absolutely, whatever the term “absolutely” might mean.
Product- vs. process-oriented approaches:
In science, objective measurements are important. But, are we mature enough to do that in system/software engineering? Also, consider:
“Not everything that can be counted counts, and not everything that counts can be counted.” [Albert Einstein]
According to this wisdom, it seems we should measure important NFRs, and only when we can. For example, you wouldn’t say “I love you 8 love units tonight”. It also seems we need to shift our emphasis from measuring how well NFRs are met by a system/software artifact to handling NFRs during the process of developing the artifact, in such a manner that the resulting artifact can be measured well.
So, treat NFRs as (soft)goals to satisfice. Do not repeatedly develop, scrap, and redevelop a system that does not meet the expected NFRs until a good system is finally produced.
Rationalize decisions using NFRs:
A (functional) problem may be solvable in many different ways. For example, break-ins may be stopped by having a security guard, a housedog, a fortified gate, a home security software system, etc. Similarly, a (functional) goal may also be achievable in many different ways. Which one do we decide to choose, and how? We use NFRs as the criteria for deciding among the (functional) alternatives. Furthermore, NFRs treated as softgoals naturally lead to the consideration of such alternatives, among which a selection is made.
So, use NFRs as softgoals in exploring alternatives and also as the criteria in selecting among them.
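One simple way to use NFRs as selection criteria is a weighted decision matrix over the alternatives. The sketch below reuses the break-in example; the weights and the 1-to-5 scores are purely illustrative assumptions, not values from the post.

```python
# NFR priorities as weights (must sum to 1.0 here); hypothetical values.
weights = {"security": 0.5, "cost": 0.3, "usability": 0.2}

# Each alternative scored 1 (poor) .. 5 (excellent) against each NFR.
alternatives = {
    "security_guard":  {"security": 5, "cost": 1, "usability": 4},
    "housedog":        {"security": 2, "cost": 4, "usability": 3},
    "fortified_gate":  {"security": 4, "cost": 3, "usability": 2},
    "security_system": {"security": 4, "cost": 3, "usability": 4},
}

def weighted_score(scores, weights):
    """Sum of weight * score over all NFR criteria."""
    return sum(weights[nfr] * s for nfr, s in scores.items())

best = max(alternatives, key=lambda a: weighted_score(alternatives[a], weights))
```

Changing the weights (say, letting cost dominate) can flip the choice, which is exactly why prioritizing NFRs matters before selecting among alternatives.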
How many NFRs are out there?
There can be many FRs. How about NFRs? If we go through a reasonably comprehensive dictionary and count how many words can end with “-ility”, “-ity”, “-ness”, etc., this might give a hint. It’s not in the order of tens or even hundreds, but potentially thousands or tens of thousands. Alas, we have resource limitations – a limited amount of time and money, our memory and reasoning capabilities, etc.
So, prioritize NFRs and their operationalizations throughout the softgoal-oriented process. Don’t simply claim “Our system satisfies all the possible NFRs and absolutely”.
A leading analyst and systems engineering expert, David Norfolk of Bloor Research, recently published a white paper titled Reducing the risk of development failure with cost-effective capture and management of requirements. In this report, David delves into the relevance of requirements management as a discipline and puts forth his views on how the domain is changing with the advent of new development paradigms such as agile, mobile, and DevOps.
If enterprise architecture helps bridge the gap between business strategy and vision and its implementation in technology from the CEO’s point of view, requirements management continues to help bridge the gap between business and technology at a lower level.
The report provides valuable insights, with deep coverage of why requirements management is even more relevant today, issues associated with managing requirements, challenges faced by the discipline, best practices from his experience, and his thoughts on the capabilities of an ideal requirements engineering tool. Some of the topics the whitepaper discusses are:
Challenges in requirements management
Managing changing requirements
Scope and ideal capabilities of a requirements engineering tool
Real life examples of benefits from investing in requirements
Read the whitepaper here - Reducing the risk of development failure with cost-effective capture and management of requirements
Prof. Neil Maiden is Professor of Systems Engineering at City University London. He is and has been a principal investigator and co-investigator on numerous EPSRC- and EU-funded research projects. He has published over 150 peer-reviewed papers in academic journals, conference and workshop proceedings. He was Program Chair for the 12th IEEE International Conference on Requirements Engineering in Kyoto in 2004, and was Editor of IEEE Software’s Requirements column from 2005 until earlier this year. He can be reached at N.A.M.Maiden[at]city.ac.uk
Requirements work is still regularly perceived as stenography – a process in which the analyst listens and documents while the stakeholders tell the analyst what they want. This perception is reinforced by the requirements techniques that we use on most projects – the observations of work that we make, the interviews with stakeholders that we hold, and the questionnaires that we distribute to collect data about problems and requirements. These techniques hardly set pulses racing. Nor do they help us discover stakeholders’ real requirements.
Stakeholders don’t know what they want
One reason for this is that eliciting requirements relies on stakeholders knowing what they want and need. However, most stakeholders do not know what they want or need. They are limited by their perceptions of what is possible – what new business models can offer and new technologies can enable. Your average stakeholder is neither a business visionary nor a technology watcher. So is it surprising that their answers to your interview questions are so, well, banal?
Indeed, many businesses have come to realize that customers are more often rear-view mirrors rather than guides to the future. A new approach is needed – one that empowers your stakeholders. My advice? If you want to discover your stakeholders’ real requirements, encourage the stakeholders to create them.
Make them up.
Why not? After all, when you interview someone, the requirements that they report to you are the results of their own, often limited, creative thinking about a new system – creative thinking that you capture only at its end, too late to influence.
In this blog post, I argue it is more effective to get in earlier – for analysts to facilitate creative thinking about requirements as soon as requirements work starts. Think of requirements as the outcomes of creative work – desirable inventions that your stakeholders are guided to come up with. After all, many of your digital solutions should be giving you some form of business advantage, and establishing this advantage starts with requirements – what the solution will give to your business.
Creativity in Requirements Work
Perhaps to the surprise of many in software engineering, creativity is well understood. Many different definitions, models and theories of creativity are available, from domains ranging from social psychology to artificial intelligence. As a software engineer, I was delighted to discover how well the phenomenon has been studied. And how much software engineers could leverage from it.
I like the definition of creativity from Sternberg and Lubart. I consider it prototypical of many of the definitions out there. Creativity is:
“the ability to produce work that is both novel (i.e. original, unexpected) and appropriate (i.e. useful, adaptive concerning task constraints)”
Creative problem solving (CPS) methods have been available since the 1950s. What is striking about many of these methods is their similarity to software development methods that emerged 25 years later. A typical CPS method guides people through activities such as problem finding, goal finding and solution acceptance – stages similar to the analysis and testing phases of software development. What is different is the focus on creative thinking at each of these stages – creative thinking to maintain a critical advantage in business.
My team at City University London has been leveraging creativity methods and techniques in requirements projects of different types for over a decade, with great results. One approach has been to run creativity workshops – risk-free spaces in which stakeholders can discover and explore ideas often not feasible within more traditional requirements work. A workshop is normally divided into half-day segments in which stakeholders work with different creativity techniques such as reasoning analogically from other domains, removing constraints, and combining visual storyboards. We’ve successfully run such workshops in domains from air traffic control and electric vehicle use to policing.
Another approach is to embed creative thinking into the early stages of agile projects – what we refer to as creativity on a shoestring because of the need to provoke creative thinking in less than an hour. We’ve learned to prioritise epics with more creative potential. We’ve identified creativity techniques that deliver new ideas in less than an hour – techniques such as hall of fame, creativity triggers and combining user stories.
Get More Creative
Much requirements work is creative. We need to adapt what we do to reflect this. Fortunately there are many creativity processes and techniques out there to experiment with. Do try – you will be rewarded.