Yet another edition of Innovate - The IBM Technical Summit is knocking at your door!
The 2014 event promises to be even more exciting, with top-notch keynotes, over 450 breakout sessions, labs, certifications and our biggest exhibit hall ever. As in previous events, Requirements Management is one of the key areas of interest at Innovate, attracting speakers and attendees from across the globe representing a wide range of industries. In 2013, we had two tracks for Requirements Management with sixteen sessions each: one track focused on IT and the other on Systems Engineering. We had 25 real-life case studies, 2 panel discussions and 4 instructor-led sessions.
Managing requirements has always been a cornerstone of both software and systems development. The importance of the discipline continues to grow, and it is expected to take a leading role in the coming years. This is an opportunity to showcase your thoughts on the discipline, and on how requirements management tools like DOORS or Requirements Composer can aid in effectively managing requirements for project success. Here are some of the topics from last year, along with topics we expect this year:
· Requirements Management in Agile Projects
· Managing requirements in developing Safety Critical Systems
· Requirements engineering and supporting layered requirements and models
· Requirements Reuse: Methods and best practice
· Requirements management for complex systems and teams
· Requirements definition and management case studies
· Best practices in aligning business goals and IT
· Value-based requirements engineering
· DOORS, Requirements Composer and other Rational products best practices
Some session topics from Innovate 2013
· Managing Parallel Streams of Requirements in DOORS at an Automotive OEM
· Successful RRC adoption through a Community of Practice: Case Study from Blue Cross Blue Shield of North Carolina
· Agile Requirements: Maintaining the Model Through Iterations (Emerging Health)
· Cerner Corporation: Migrating from Requisite Pro to RRC
Share your experience, thoughts and best practices on requirements at an event attended by industry experts and IBM core development teams. Here are the top three reasons why you should submit your paper for Innovate 2014.
Submit your papers before February 7, 2014 and stand a chance to present at Innovate 2014! For more details, visit https://www-950.ibm.com/events/tools/innovate/innovate2014ems/screens/intro.xhtml
Modified on by VijaySankar
We had a fantastic DOORS customer webcast panel on September 26, with experts from across industries talking about their experience using IBM Rational DOORS for requirements management. If we had to pick one success metric for the webcast, it would be that we overshot the designated one hour by half an hour because we had some wonderful discussions going on. Our panelists have graciously agreed to answer the remaining questions offline, and we are publishing the answers here.
If you missed this golden opportunity and are wondering whether you can get another chance: YES! Or if you would like to listen to it again, we have posted the replay here. You can listen to it anytime. Register here
How do you copy objects, with their in-links and out-links, from one module to another?
Use a DXL script that captures link information (which can be edited if necessary), then copy the objects, and then run another DXL script to re-create the links.
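As a rough illustration of the capture step (this is a hedged sketch, not Paul's actual script: the module path "/Demo/Source Module" and the link module "/Demo/Satisfies" are hypothetical examples), a DXL loop over a module's outgoing links might look like this:

```dxl
// Hedged sketch: walk a module and record each object's outgoing links
// so they can be re-created after the objects are copied.
// "/Demo/Source Module" and "/Demo/Satisfies" are hypothetical paths.
Module src = read("/Demo/Source Module", false)
Object o
Link lnk
for o in src do {
    for lnk in o -> "*" do {
        Object tgt = target lnk   // null if the target module is not loaded
        if (null tgt) continue
        // Record enough to re-create the link later against the copied
        // object, e.g. with:  copiedObj -> "/Demo/Satisfies" -> tgt
        print identifier(o) " -> " fullName(module tgt) " [" identifier(tgt) "]\n"
    }
}
close src
```

The printed identifiers can be saved, edited if the target structure changes, and fed to a second script that re-creates the links against the copied objects, which is the pattern described above.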
Paul Lusardi has shared a couple of samples with us. If you are interested, contact us.
What factors determine using a new database vs. a new project in an existing database to differentiate/segregate work?
The biggest criterion for using the same database is linking. If there are projects in a database that you need to link to, the new project should be in that same database. The default should be to use the same database, so that you only have to maintain one set of users, groups, standards, etc. The possible exception is if you have two distinct sets of security measures (HIPAA, DoD or DoE cleared data); then you will want to segregate that onto a different server.
Does anyone on Panel publish documents directly from DOORS without Add-on applications?
[Patrick] I use DXL scripts which export the data out to RTF with some typical publishing formatting.
[Mia] I do. Whether you can depends on the demand for documents (do you need to do them once a week? Daily? Quarterly?) and how critical their formatting is to your organization. We have configured views in most modules for output. A simple one has just the object number and the Object Heading/Object Text. For a full spec, we include trace columns for one or more linked modules, with object identifiers and even object headings or text in those columns. In the output you get the main object from the current module, followed by all the objects it’s linked to. I put in a label that specifies “in link” or “out link.”

I used to painstakingly populate the Paragraph Style attribute in order to map to MS Word, but because our need for documents is very low I have stopped enforcing this. We have a set of MS Word templates with the title page, TOC, other front matter, and a standard set of styles. Plain vanilla DOORS maps to basic Word styles fairly well.

I pick the appropriate view and filters and select Export to Word. I select the appropriate Word template, and let it rip. I then do some minimal post-processing, like getting rid of the object IDs on headings. Obviously if you’re doing very long documents this would be impractical. In my previous job we had interns create a DXL script that handled the post-processing – you can get to something that’s usable without a huge investment.
How do you deal with sharing your database with the government or other contractors? Do you share read-only partitions?
[Paul] Prior to coming to NuScale, I co-developed the Unified RM Database, where we allowed outside access, using access control and company VPN login credentials to enable multi-company development and reviewer teams to look at the data. The questions would be: do you want them to see work in progress, or just a snapshot of baselined work? Perhaps a separate server with a restored baseline set would be best in that case. I will defer to an expert on baseline sets, though.
How do you prevent users from creating links through the default DOORS Links module?
[Patrick] We use a link schema with pairings as necessary, and then create and delete (but do not purge) the DOORS Links module, with a copy in every sub-project/folder, leaving it as the default link module. That way, users cannot create an unintended link between any module pair.
Paul Lusardi has shared a presentation with us. If you are interested, contact us.
Is there a "Copy View" script available?
Paul Lusardi has shared a presentation with us. If you are interested, contact us.
Please ask about doing risk assessment in DOORS: Do any of the panelists document FMEA or risk analysis in DOORS? How do they do it?
[Paul] Yes and no. Textbook FMEA? No. Using DOORS to analyze potential risks by looking at risk-informed design? Yes; this is a “developing” activity.
[Patrick] We capture the risk status in DOORS, but not the actual assessment data.
Do any of the panelists integrate DOORS with other tools like Rational Quality Manager?
[Patrick] We are looking into it, but have not actually started using it in production. We are going to deploy DOORS/ClearQuest by year end.
[Paul] Only RPE integrations are used here.
How has your work differed using Scrum vs. when you did not?
Our pre-sprint requirements work is at a higher level – that is, not down in the functional weeds. We spend more time exploring new features from the user’s perspective, and more time on storyboarding and similar activities. The features we’ve built since going to agile are much more user-friendly and have much better workflows than the older stuff, where much of the workflow and usability decisions were left to the engineers.
Detailed functional analysis of requirements happens during the sprint, with the requirements analyst on the team working closely with the engineers and testers. For very complex features, this analysis process begins before the sprint so that we have enough functional detail for the team to estimate the work. We have a small backlog grooming team, made up of members of the scrum teams, who help the requirements analyst with this.
For Mia: with your agile process, how many full-time admins are needed to maintain DOORS?
[Mia] We’re very small, so we don’t even have one full-time admin. Our system engineer handles the environment, and I handle user administration. It’s a tiny part of each of our jobs. I don’t see that our approach represents a particularly different administrative need from any other. If we had a hundred users and multiple DOORS databases, then we’d dedicate administrative resources just like any other implementation.
@Mia - Are requirements for HW to SW interfaces traced through ICD(s) in DOORS?
We’re software only, so no HW/SW interfaces. However, I strongly support the use of interface specifications between software systems and have used an interface module to sit in between systems that need to interact. The trick is to stay on the requirements side of the requirements/design line. The cost of maintenance on such a model is high – probably too heavy for our agile process to maintain. However, should we need to support integration with an external application (one that our teams are not developing), our API requirements will undergo close scrutiny and possibly be carved out into a separate module so that we can use trace links to do impact analysis on external “clients.”
Does DOORS lend itself to any specific DevOps or ALM methodology, like agile?
We adopted a continuous deployment model this year and it had no impact on our use of DOORS. That is, DOORS is still a critical part of our development tooling. We were already capturing release version information for requirements in DOORS, and this information remains very useful in understanding what feature set exists in a given version of our application.
Try IBM Rational DOORS in a Sandbox
Webcast - Achieving sustainable requirements across the supply chain with IBM Rational DOORS
Prof. Lawrence Chung (firstname.lastname@example.org) is in Computer Science at the University of Texas at Dallas. He has been working in system/requirements engineering and system/software architecture. He was the principal author of the research monograph “Non-Functional Requirements in Software Engineering”, and has been involved in developing “RE-Tools” (a multi-notational tool for RE) with Dr. Sam Supakkul, “HOPE” (a smartphone application for people with disabilities) with Dr. Rutvij Mehta, and “Silverlining” (a cloud forecaster) with Tom Hill and many others. He has been a keynote speaker, invited lecturer, co-editor-in-chief of the Journal of Innovative Software, editorial board member of the Requirements Engineering Journal, editor of the ETRI Journal, and program co-chair for international events. He received his Ph.D. in Computer Science in 1993 from the University of Toronto.
What are non-functional requirements (NFRs)?
NFRs colloquially have been called “-ilities” and “-ities”, since many words referring to NFRs end with “-ility” (e.g., usability, flexibility, reliability, maintainability) or “-ity” (e.g., security, integrity, simplicity, ubiquity). There are of course many other words that do not end with either “-ility” or “-ity”, such as performance, user-friendliness, power consumption, and esthetics, but still refer to NFRs.
Functional requirements (FRs), in contrast, are about functions, activities, tasks, etc. that may accept some input and produce some output.
Consider, for example, “add” (“+”) on a calculator, which adds two numbers given as input and produces another number as output, shown on the screen. Now suppose you type “2 + 3 =” and the calculator shows “5”, but only one year from now. In this case, the “add” on the calculator is functionally correct but non-functionally terrible, in particular concerning performance.
As even this simple example shows, a system which fulfills only functional requirements is oftentimes not usable, or even not useful.
So, handle NFRs and handle them appropriately. Don’t spend time only on FRs.
The “soft” Characteristics of NFRs and how to deal with them:
NFRs are global, subjective, interacting and graded.
FRs, such as “The calculator shall offer an ‘add’ function”, are local in the sense that they are specific to particular functions and are not applicable to other functions, or globally to other systems, such as a “subtract” function or a banking system. NFR terms such as “performance”, however, can be applied to many other functions and systems, such as a “subtract” function or a banking system, and also to parts of such functions and systems.
In contrast to FRs, NFRs are subjective both in their definitions and in the manner in which they need to be met – some more so than others. Concerning definitions, for example, usability may mean simplicity and the availability of many help facilities to some people, while to others it may mean something different, such as a minimal learning curve and fast response. The manner in which NFRs are seen as satisfactorily met also depends on the (perception of the) user. For example, the keyboard with tiny keys on a smartphone may be usable to young people but not to old people. Also, large keys may be good enough for some people using a smartphone, but context-sensitive help may additionally be needed for a smartphone to be considered usable by others.
So, clarify the definitions of NFRs. Don’t assume they have unanimously agreeable definitions.
So, operationalize NFRs. Don’t just state them without specifying how they can be met.
NFRs also interact with each other, synergistically, antagonistically, or both. For example, a heavy authentication mechanism, introduced for the purpose of enhanced security, may hurt usability. If it takes three different passwords – which have to be changed every month and must each contain at least one special character, one digit, one upper-case character, one function key, etc. – in order to get into the system, the user is unlikely to feel that the system is user-friendly. Hence, a conflict between security and user-friendliness. But a heavy security mechanism may help prevent unauthorized people from entering fake data into the system; hence, a synergy between security and the accuracy of data.
So, identify conflicts among NFRs. Don’t think you can do anything with them individually without any negative consequences.
So, identify synergies among NFRs. This is how we get “the whole becoming bigger than the sum of its parts”.
NFRs are graded, in the sense that they are usually met to different degrees. For example, an “add” function may be seen to be very good, good, bad or very bad, concerning its performance or usability, and different ways to implement the add function may affect the function differently – e.g., fully positively, partially positively, fully negatively, and partially negatively.
So, consider the degree of contributions between NFR-related concepts. Don’t simply think NFR-related concepts affect each other in a binary manner – either a complete satisfaction or dissatisfaction.
In a nutshell, NFRs cannot be defined or met absolutely, in a clear-cut sense; that is, they are “soft”.
So, satisfice NFRs. Don’t think NFRs can be satisfied absolutely, whatever the term “absolutely” might mean.
Product- vs. process-oriented approaches:
In science, objective measurements are important. But, are we mature enough to do that in system/software engineering? Also, consider:
“Not everything that can be counted counts, and not everything that counts can be counted.” [Albert Einstein]
According to this wisdom, it seems we should measure NFRs that are important, and only when we can. For example, you wouldn’t say “I love you 8 love units tonight”. It also seems that we need to shift our emphasis from measuring how well NFRs are met by a system/software artifact to handling NFRs during the process of developing the artifact, in such a manner that the resulting artifact measures well.
So, treat NFRs as (soft)goals to satisfice. Do not repeat the cycle of developing, scrapping, and redeveloping a system that does not meet the expected NFRs until a good one is finally produced.
Rationalize decisions using NFRs:
A (functional) problem may be solvable in many different ways. For example, break entries may be stopped by having a security guard, a housedog, a fortified gate, a home security software system, etc. Similarly, a (functional) goal may be achievable in many different ways also. Which one do we decide to choose and how? We use NFRs as the criteria in making the decision on selecting among the (functional) alternatives. Furthermore, NFRs treated as softgoals naturally lead to the consideration of such alternatives, among which a selection is made.
So, use NFRs as softgoals in exploring alternatives and also as the criteria in selecting among them.
How many NFRs are out there?
There can be many FRs. How about NFRs? If we go through a reasonably comprehensive dictionary and consider how many words can end with “-ility”, “-ity”, “-ness”, etc., this might give a hint. It’s not in the order of tens or even hundreds, but potentially thousands or tens of thousands. Alas, we have resource limitations – a limited amount of time and money, our memory and reasoning capabilities, etc.
So, prioritize NFRs and their operationalizations throughout the softgoal-oriented process. Don’t simply claim “Our system satisfies all the possible NFRs and absolutely”.
A leading analyst and systems engineering expert, David Norfolk of Bloor Research, recently published a white paper titled Reducing the risk of development failure with cost-effective capture and management of requirements. In this report, David delves into the relevance of requirements management as a discipline and puts forth his views on how the domain is changing with the advent of new development paradigms such as agile, mobile and DevOps.
If enterprise architecture helps to bridge the gap between business strategy and vision and its implementation in technology, from the CEO’s point of view, Requirements Management continues to help bridge the gap between business and technology at a lower level.
The report provides valuable insights, with deep coverage of why requirements management is even more relevant today, the issues associated with managing requirements, the challenges faced by the discipline, best practices from his experience, and his thoughts on the capabilities of an ideal requirements engineering tool. Some of the topics the whitepaper discusses are:
Challenges in requirements management
Managing changing requirements
Scope and ideal capabilities of a requirements engineering tool
Real life examples of benefits from investing in requirements
Read the whitepaper here - Reducing the risk of development failure with cost-effective capture and management of requirements
Today, we are starting a new series of interview blog posts -- Coffee Time with Requirements Experts. Through this series, we try to bring to you the thoughts, career experience and advice of experts from the industry. Enjoy!
We have with us Jared Pulham, a Senior Product Manager at IBM. He focuses specifically on requirements management tools and capabilities for Jazz. Jared is responsible for one of IBM's requirements management products, Rational Requirements Composer. He has over 15 years of industry experience in software testing and development, with a background spanning many companies across industries. He joined IBM through the Telelogic acquisition, where he was a Director of Product Management. He regularly writes at the jazz.net blog. He can be reached at jared.pulham[at]uk.ibm.com
Q. Throughout your career in the Software development industry how have things in the field changed?
Watching the improvements and changes in development processes, from waterfall models to the adoption of the faster development models proposed under the agile manifesto. Likewise, the tools, and their emphasis on features/concepts to support different roles in the development process, have continued to revolutionize and change the way organizations work.
Q. You have been mostly associated with the discipline of requirements management in your career; what attracted you to it?
I believe that requirements are the driver for shaping businesses and are the mechanism for deciding what should be developed to meet the customer's needs. As a business and tool leader myself, I feel this is the right place to help my own and other organizations achieve similar successful results.
Q. How do you see tools and techniques helping professionals in a requirements domain?
Tools help bring development teams together to collaborate and bring them out of silos. They allow teams to organize development requirements in structures that make them easy for everyone to understand, to see and spot gaps where development is missing, and to easily recognize changes in the project.
Q. What do you think are some of the challenges faced by Business Analysts or Requirements Engineers today?
Understanding how they can work in faster projects adjusting to agile and iterative processes. Knowing how to help meet business goals and objectives through joined up thinking and development.
Q. What interests you outside your job?
Sports, sailing and technology improvements in mobile and Web devices for lifestyle
Q. How do you keep yourself current in this fast changing technology field?
Working with many customers across industries and markets. Writing blogs and papers that can help challenge the thought for requirements in development as well as speaking through conferences where I get a chance to meet other thought leaders for development processes and tool development.
Q. What's your advice to budding analysts/engineers considering focusing on requirements processes or tools?
Focus on understanding the market drivers for your specific industry because customer demand (requirements) will always drive a project and understanding how to translate that demand into use cases, business cases and actual development content for your project will help you better support your organization. Once you understand those requirements look at the other members of your team and understand who in your team will best benefit from those requirements to improve the business (through development).
Prof. Neil Maiden is Professor of Systems Engineering at City University London. He is and has been a principal and co-investigator on numerous EPSRC- and EU-funded research projects. He has published over 150 peer-reviewed papers in academic journals, conference and workshop proceedings. He was Program Chair for the 12th IEEE International Conference on Requirements Engineering in Kyoto in 2004, and was Editor of IEEE Software’s Requirements column from 2005 until earlier this year. He can be reached at N.A.M.Maiden[at]city.ac.uk
Requirements work is still regularly perceived as stenography: the analyst listens and documents while the stakeholders say what they want. This perception is reinforced by the requirements techniques that we use on most projects – the observations of work that we make, the interviews with stakeholders that we hold, and the questionnaires that we distribute to collect data about problems and requirements. These techniques hardly set the pulse racing. Nor do they help us discover stakeholders’ real requirements.
Stakeholders don’t know what they want
One reason for this is that eliciting requirements relies on stakeholders knowing what they want and need. However, most stakeholders do not know what they want or need. They are limited by their perceptions of what is possible – what new business models can offer and new technologies can enable. Your average stakeholder is neither a business visionary nor a technology watcher. So is it surprising that their answers to your interview questions are so, well, banal?
Indeed, many businesses have come to realize that customers are more often rear-view mirrors rather than guides to the future. A new approach is needed – one that empowers your stakeholders. My advice? If you want to discover your stakeholders’ real requirements, encourage the stakeholders to create them.
Make them up.
Why not? After all, when you interview someone, the requirements they report are the results of their own, often limited, creative thinking about a new system – creative thinking that you are capturing only at the end, too late to influence.
In this blog post, I argue it is more effective to get in earlier – for analysts to facilitate creative thinking about requirements as soon as requirements work starts. Think of requirements as the outcomes of creative work – desirable inventions that your stakeholders are guided to come up with. After all, many of your digital solutions should be giving you some form of business advantage, and establishing this advantage starts with requirements – what the solution will give to your business.
Creativity in Requirements Work
Perhaps to the surprise of many in software engineering, creativity is well understood. Many different definitions, models and theories of creativity are available, from domains ranging from social psychology to artificial intelligence. As a software engineer, I was delighted to discover how well the phenomenon has been studied. And how much software engineers could leverage from it.
I like the definition of creativity from Sternberg and Lubart. I consider it prototypical of many of the definitions out there. Creativity is:
“the ability to produce work that is both novel (i.e. original, unexpected) and appropriate (i.e. useful, adaptive concerning task constraints)”
Creative problem solving methods have been available since the 1950s. What is striking about many of these methods is their similarity to software development methods that emerged 25 years later. The Creative Problem Solving (CPS) method, for example, guides people through activities such as problem finding, goal finding and solution acceptance – stages similar to the analysis and testing phases of software development. What is different is the focus on creative thinking at each of these stages – creative thinking to maintain a critical advantage in business.
My team at City University London has been leveraging creativity methods and techniques in requirements projects of different types for over a decade, with great results. One approach has been to run creativity workshops – risk-free spaces in which stakeholders can discover and explore ideas often not feasible within more traditional requirements work. A workshop is normally divided into half-day segments in which stakeholders work with different creativity techniques, such as reasoning analogically from other domains, removing constraints, and combining visual storyboards. We’ve successfully run such workshops in domains from air traffic control and electric vehicle use to policing.
Another approach is to embed creative thinking into the early stages of agile projects – what we refer to as creativity on a shoestring because of the need to provoke creative thinking in less than an hour. We’ve learned to prioritise epics with more creative potential. We’ve identified creativity techniques that deliver new ideas in less than an hour – techniques such as hall of fame, creativity triggers and combining user stories.
Get More Creative
Much requirements work is creative. We need to adapt what we do to reflect this. Fortunately there are many creativity processes and techniques out there to experiment with. Do try – you will be rewarded.
Here is coverage of the Requirements Management for Systems Engineering track keynote, based on the presentation that Bill Shaw (Systems Program Director) and Richard Watson (Senior Product Manager for Requirements Management tools) delivered at Innovate 2013 today.
For those new to the space, IBM Rational DOORS is a widely recognized product in the requirements management area, and here is how we position our products:
Rational DOORS is the trusted, de facto standard requirements management tool for employing systems engineering methodologies to build complex and embedded systems.
Rational DOORS Web Access, an add-on for DOORS, gives globally distributed stakeholders visibility into requirements and traceability relationships (managed in Rational DOORS), along with the ability to communicate via online requirements discussions. DOORS Web Access lets you view and discuss requirements from a Web browser, with no additional software installed on your desktop.
And finally, the latest addition to the family: Rational DOORS Next Generation, the next-generation requirements management solution built on the IBM Rational Jazz platform.
With such a plethora of offerings, we believe we have the right requirements solution for you. As we mentioned earlier, the introduction of DOORS Next Generation DOES NOT mean we are moving away from DOORS. We are continuing our investment in DOORS, and we will continue to release improved versions of DOORS in the future. We believe DOORS Next Generation takes the requirements management capabilities we offer to the next level, especially with the foundation of an open, collaborative platform. DOORS NG was designed from the ground up to accommodate an ever-growing and complex ecosystem, a greater need for collaboration, and usability for a broader community of stakeholders, and it is planned to extend our capabilities for requirements change management and Product Line Engineering. Packaging DOORS NG with DOORS helps our customers try both products without purchasing two licenses.
New in Rational DOORS
We have now included four more pre-configured templates for systems and software engineering, enabling our customers to kick-start their projects.
Systems Engineering Template
A simple, pre-configured information schema for using DOORS to support systems engineering
Aerospace & Defense
Developing requirements against the DO-178B safety standard
Supporting the ISO26262 functional safety standard
FDA Design Control practices defined in 21 CFR Part 820
We are continuing to invest in removing the requirement to install the Rational Publishing Engine (RPE) client for report generation. In DOORS 9.5.1, we have included support for parameterized RPE templates; advanced styling and configuration options are also now included.
We have been making some significant improvements on the Open Services for Lifecycle Collaboration (OSLC) front, including an enhanced Rational DOORS-Design Manager integration. For this integration, we have used a link discovery technique rather than back-linking, which makes for a better integration: links are stored only within the creating application, and discovery is done in the background in real time. This investment also helps improve our third-party integrations.
Starting with this version of DOORS, we will be using ETL (Extract, Transform and Load) to integrate with Rational Insight. Since this is the method used for integrating RRC and DOORS NG with Insight, the metrics capabilities remain the same across the products. This enables specific metrics defined in DOORS to be reused in DOORS Next Generation: one can deploy Insight over DOORS 9.x data and, while piloting DOORS NG, all the metrics data will be available in it automatically. Check this article for more details - Improve the value of your CLM reports by using metrics.
Based on feedback from customers, we have made a good number of usability enhancements in 9.5.1. Some of them include:
Link preview with OSLC-style rich hover on links, for better traceability navigation
Better navigation into baselines and improved baseline management
Improved support for DOORS table formatting
We have also made some significant improvements to DOORS Web Access. Some of them include:
Simplified configuration and deployment
Improvements to the Database Explorer
Support for the DOORS project view in the Database Explorer
New in Rational DOORS Next Generation
We have continued to improve the product since its first release in November 2012. Our priorities are product quality and usability and, from a long-term perspective, requirements configuration management. Some of the major updates to DOORS Next Generation are:
Unobtrusive locking of data to avoid save conflicts
Improved graphical markup to help in change management
Improved multi-user and offline edits of non-native data
Improved data support for product evaluation
Note: The roadmap and strategies mentioned in this post are subject to change; please stay in touch with your IBM representatives for the latest roadmaps.
Alex Ivanov is a Senior Software Engineer II with Honors at Raytheon Integrated Defense Systems. Alex has more than 10 years of experience as a Requirements (DOORS) Database Manager supporting a large-scale distributed requirements database in the aerospace and defense industry, specializing in writing reusable DXL, training, user support, and consulting with programs to ensure they get the most out of their use of IBM Rational DOORS. Alex is an IBM Certified Deployment Professional - DOORS v9 and has been recognized as a three-time IBM Champion (2011, 2012, 2013). In 2011 Alex was elected President of the New England Rational User Group.
1. How does it feel to be a returning IBM Champion?
I’m honored to be selected for the 3rd consecutive year. Ironically enough, it was in 2010 that I first started to share across Raytheon the best practices that my team and I had developed around how to effectively use IBM Rational DOORS. I thought to myself: we have hundreds of programs at Raytheon that use DOORS, and yet not everyone is aware of how to make the best use of the tool, and moreover of the customizations that have been developed through custom DXL to make it even easier to use DOORS. This was the beginning of my vision to standardize the way requirements are managed at Raytheon, and little did I know it would lead to discovering my true passion.
2. Can you tell us something about what you do at Raytheon?
I lead a team of engineers that maintains our customizations for effectively using DOORS, and I consult with programs all across the company on how to best architect their DOORS databases and take advantage of the automation we have available. I’m most passionate about reaching out to others across the organization who are eager to improve how systems engineering uses DOORS to its maximum advantage. I’m happy to say that over the past 3 years I’ve been able to spread our best practices across numerous Raytheon locations across the US, and have been able to do wonders with social media through the use of wiki pages, blogs, communities and training videos.
3. What are your thoughts on managing requirements effectively?
A tool is just a tool unless you have a sound process and training to go along with it. It is much more important to get people to understand the process and how the tooling helps them do their job than to simply rely on the tool to solve all their problems. I believe it’s very important to have a sound process in place, and certainly, if there are best practices to leverage on how to use tools, people should strive to take advantage of them without losing sight of their process and what they are responsible for delivering. I can’t stress enough how important it is to determine your project architecture and the relationships of what you will be managing in DOORS; it might seem time consuming at first, but it will save you a lot of time down the line.
4. So, how long have you been using Rational DOORS?
I started using DOORS in 2000, which is when I began my career at Raytheon. At the time I was a software developer who had just graduated from Boston University, and DOORS was the tool that held the software requirements I had to develop code against. I believe at the time it was DOORS version 4.0, and I can certainly say that the tool has come a long way since then.
5. That's a long time; in your opinion, what are some of the greatest assets of the product and well, the pain points?
Without a doubt, one of the greatest assets is the ability to customize the product for your process. Having said that, I believe it’s very important to have a solid understanding of the systems engineering process and a clear understanding of how to architect your project for success. Only once you have a reusable architecture can you turn the focus to writing reusable DXL to complement your project template and architecture. As for the pain points, certainly one of them has to be how manual it is to manage the database; whether it be attributes, views or access, without custom DXL scripting it would be rather time consuming to carry out many tasks.
6. Do you have some tips or tricks to share with the DOORS users out there?
I’m happy to say that there are a ton of free resources online, and I would highly recommend people take the time to watch webcasts, join a local Rational user group and just network with peers who have their own experiences to share. If you aren’t a member already, I encourage everyone to join the Global Rational User Group; their monthly newsletters are a great source of information, and many presentations are archived for viewing right on the site. Another great resource for DOORS and DXL is the developerWorks forums; having asked and answered questions on the Rational DOORS DXL forum, I highly recommend it. Numerous webcasts are available on 321Gang's website; Managing DOORS: The Administrator’s Toolbox is one where I take you through some examples of how to write reusable DXL to make it much easier to manage attributes and views.
7. What advice do you give to budding DOORS administrators?
I would encourage everyone to find a mentor; this can be anyone you look up to, and just ask them questions. I know that I would not be where I am today had it not been for the mentoring and support of numerous people in my life, and I am forever grateful to them. Something I have learned throughout the years is that it’s important to ask questions, and don’t assume the person who is asking you to do something has the right answers. As you gain experience you’ll be able to tailor the solution for your customers, and they will thank you for it. It’s unbelievable how many resources are available online for learning. I’m a big fan of watching and creating training videos, and YouTube has numerous channels I’d recommend subscribing to, a couple of which are IBMRational and IBMJazz.
I believe it’s very important to always want to improve, whether in your personal or professional life. I encourage everyone to grab a book on any subject that interests them; you’d be amazed how much you will learn. I’ve read dozens of books over the past few years and have a reading list for those who want to follow along at http://www.shelfari.com/alexivanov/shelf
In this guest blog post, Requirements Engineering Expert, Jeremy Dick continues with his discussion on practical applications of traceability. Read the first two parts here.
What’s really going on when you decompose a requirement?
What’s really going on when you plan V&V against a requirement?
Inspired by recent experience in a large systems engineering project, Part 1 of this blog post series covered the practice of decomposing requirements, which brings about one of the most important traceability relationships in requirements engineering. Part 2 covered the next most important relationship: that between requirements and validation and verification activities.
Part 3 (probably the final part!) presents some other things about V&V we think we have learnt from undertaking a large project: what do you do about new requirements that arise from the V&V you have planned against your main requirements?
Once the need for V&V activities has been established (see Part 2), this will often give rise to new requirements. Broadly speaking, these requirements fall into two types:
Those that affect the design of the product.
Such requirements may constrain the design of the product, and may even add functions. For instance, should there be a need during commissioning to establish that the temperature of a fuel cell does not exceed a certain level, then it may be necessary to design in a means of measuring it.
Those that define the need to build a secondary system.
These requirements are for the construction of test artefacts, such as models and test equipment. In large programs, such test equipment can represent a huge project in itself – like the construction of a new building to test a new design of jet engine.
Neither of these types of requirement should be mixed in with the remaining requirements without recording (through traceability) their origin, because if the choice of V&V changes, we should be able to identify which parts of the design – or which pieces of test equipment – are present only to make the old V&V possible.
If the need to test a product gives rise to the need to build facilities to carry out that testing, then another development life-cycle is spawned for the purpose. Requirements will be collected for the facility, and further V&V may be required against those requirements. We enter a sort of recursive world of development life-cycles. Some call this “fractal” – the main development spawns smaller developments, which in turn spawn yet smaller ones.
Primary versus Secondary V&V
Secondary V&V arises when V&V activities lead to test artefacts that also need validating or calibrating. For example, from requirements about power delivery, there is a direct need to do some analysis using a model of fluid dynamics – what we call primary V&V. Requirements for the model itself are collected, and the need for validation of the model considered. For example, the model may need calibrating against some similar existing systems. This calibration activity is another form of V&V – what we call secondary V&V.
Requirements lead to V&V activities lead to requirements
As noted above, the need for V&V leads to design changes or secondary systems, and thus to further requirements. This relationship should be captured: a requirement, rather than having a parent requirement, may in fact have a parent V&V activity. We could say that a requirement “enables” a V&V activity.
In the extreme, a secondary system may impose requirements on the primary product, for instance through the need to interface with it.
An information model
The diagram below illustrates the idea that requirements enable V&V activities.
In the diagram, three traceability relationships are shown: “satisfies”, “verifies” and “enables”. There is an example of a primary V&V activity leading to an enabling requirement on the primary system itself – a requirement whose parent (through the “enables” relationship) is a V&V activity.
There are also two examples of secondary systems driven by requirements for primary V&V, and one of those gives rise, in turn, to a tertiary system.
In our current project, we have pushed this model only as far as secondary systems with an “enables” relationship and secondary V&V. An example is the one cited above – a fluid dynamics model that needs itself to be validated through calibration with similar existing systems.
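The shape of this information model can be sketched in a few lines of plain Python (illustrative only – not DXL and not the actual DOORS schema; the requirement and activity names are invented for the example). The point is that a requirement's parent may be either another requirement (via "satisfies") or a V&V activity (via "enables"):

```python
# Minimal sketch of the three traceability relationships described above:
# "satisfies" (requirement -> parent requirement), "verifies" (V&V activity
# -> requirements it verifies) and "enables" (requirement -> the V&V
# activity that gave rise to it).

class Requirement:
    def __init__(self, name, satisfies=None, enables=None):
        self.name = name
        self.satisfies = satisfies   # parent requirement, if any
        self.enables = enables       # parent V&V activity, if any

class VVActivity:
    def __init__(self, name, verifies):
        self.name = name
        self.verifies = verifies     # list of requirements it verifies

# A primary-system requirement and the primary V&V planned against it
power = Requirement("Deliver rated power")
cfd_analysis = VVActivity("Fluid-dynamics analysis", verifies=[power])

# The analysis needs a model: a secondary-system requirement whose parent
# (through "enables") is a V&V activity, not another requirement
model_req = Requirement("Build a fluid-dynamics model", enables=cfd_analysis)

# Secondary V&V: the model itself must be calibrated
calibration = VVActivity("Calibrate model against existing systems",
                         verifies=[model_req])

def origin(req):
    """Trace a requirement back to its parent requirement or V&V activity."""
    if req.enables is not None:
        return f"{req.name} <- enables <- {req.enables.name}"
    if req.satisfies is not None:
        return f"{req.name} <- satisfies <- {req.satisfies.name}"
    return f"{req.name} (top-level)"

print(origin(model_req))  # Build a fluid-dynamics model <- enables <- Fluid-dynamics analysis
```

Following the `origin` chain answers exactly the question raised earlier: if the choice of V&V changes, requirements whose parent is a V&V activity can be identified and reviewed.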
As with all these things, there is probably a law of diminishing returns. How far should you push the requirement to V&V to requirement to V&V to requirement chain? You will have to decide!
A similar model probably exists for other aspects of development – manufacture, for instance. The need to manufacture a product gives rise to the need to construct the factory and plant to do so – a primary development spawning a secondary development. However, the traceability train goes not from requirement to V&V activity to requirement, but rather directly from primary requirement to secondary requirement. This could still be characterized as enablement.
We never build just “the product”. There are always other things that need designing and building that surround the system for various purposes, including testing components, sub-systems, systems and completed products.
What has been presented here is an attempt to get to grips with the traceability required to track the relationships between the product and its secondary (and possibly tertiary) systems. We implemented all this in a Rational DOORS database, with a small amount of customisation to ease the way.
Thank you for reading these blog entries, and please contact me if you are interested in reusing any of this experience and associated Rational DOORS customization.
Read the first two parts here -
What’s really going on when you decompose a requirement?
What’s really going on when you plan V&V against a requirement?
About the author - Jeremy Dick works as Principal Analyst for Integrate Systems Engineering Ltd in a consultancy, research and thought leadership capacity. He has extensive experience in implementing practical requirements processes in significant organizations, including tool customization, training and mentoring. At Integrate, he has been developing the concept of Evidence-based Development, an extension of his previous work on “rich traceability”. Prior to this appointment, he worked for 9 years at Telelogic (now part of IBM Rational) in the UK Professional Services group, as both an international ambassador for Telelogic in the field of requirements management and a high-level consultant for Telelogic customers wishing to implement requirements management processes. During this time, he developed considerable expertise in customizing DOORS using DXL to support advanced engineering processes. His roles at Telelogic included a position in the DOORS product division to assist in the transfer of field knowledge to the product team. Co-author of a book entitled “Requirements Engineering” that has recently reached its 3rd edition, he is recognized internationally for his work on traceability. Jeremy can be reached at jeremy.dick[at]integrate.biz
In this guest blog post, Requirements Engineering Expert, Jeremy Dick continues with his discussion on practical applications of traceability. Read the first part here -
The practical applications of traceability Part 1: What’s really going on when you decompose a requirement?
Inspired by recent experience in a large systems engineering project, Part 1 of this essay covered the practice of decomposing requirements, which brings about one of the most important traceability relationships in requirements engineering. Part 2 here covers the next most important relationship: that between requirements and validation and verification activities. Part 3 will continue the discussion of V&V, and how it itself gives rise to further requirements.
Verification & Validation (V&V)
I don’t care enough about the difference between validation and verification to want to enter into the divisive debate about it here. I am just going to say V&V and be done with it!
Kinds of V&V activity
There are many kinds of V&V activity, and organisations have varied ways of classifying them. In the project I am working on, the classifications are Analysis, Analogy, Inspection, Review, Test and Demonstration.
By their very nature, these types of activity tend to occur at different times of the life-cycle. Analysis, for instance, tends to occur early to predict properties of the proposed design and verify it against requirements. By contrast, demonstration tends to occur late as part of the acceptance tests.
Typically, a whole series of activities will be planned against a single requirement, some early, some late, allowing confidence to accumulate over the life-cycle of the project.
Requests for evidence
Despite the variety of kinds of activity, there is one thing they all have in common: they are requests for evidence of some kind or other. Indeed, I would favour calling V&V activities exactly that: “requests for evidence”.
Intention versus Fulfilment
Those activities that are carried out early in the development process provide evidence that the intended design will meet the requirements – they address design intention. Those activities applied late in the development process collect evidence that what has been built meets the requirements – they address design fulfilment.
Once the need for V&V activities has been established, this will often give rise to new requirements, either on the design of the product itself, or requirements for the construction of test artefacts, such as models and test equipment. (We never build just the product; there are always other things that need designing and building that surround the system for various purposes.)
The management of requirements arising from V&V will be the topic of Part 3.
Requirements decomposition and V&V planning
When planning V&V activities against a parent requirement, you need to take into account the V&V that will be carried out on its child requirements, and their child requirements, and so on.
Take, for instance, the following example where a user requirement is decomposed into a number of system requirements:
The only V&V activity planned against the user requirement is a commissioning test, which will occur late in the life-cycle. However, further V&V activities are defined against the child system requirements. Some of these are design inspections that occur very early, and some are system tests that occur relatively late, but still before commissioning.
There is, of course, a sense in which all these V&V activities provide evidence for the satisfaction of the user requirement, but some of the activities fit more directly against the system requirements. So when planning V&V activities, you need to ask the question: what activities can only be carried out against the parent requirement, and which can be delegated to child requirements? – because those that can be delegated are likely to provide evidence earlier in the life-cycle. And you always want that, if you can get it.
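The delegation question can be made concrete with a small sketch (plain Python; the requirement names, activity names and phase numbering are invented for illustration). Evidence for a parent requirement pools its own planned activities with those delegated to its children, so delegated design inspections surface evidence much earlier than the parent's own commissioning test:

```python
# Each planned V&V activity is given a rough life-cycle phase (lower =
# earlier). The earliest evidence for a requirement considers both its own
# activities and those planned against its child requirements.

PHASES = {"design inspection": 1, "system test": 2, "commissioning test": 3}

activities = {
    "UR1": ["commissioning test"],                # parent user requirement
    "SR1": ["design inspection", "system test"],  # child system requirements
    "SR2": ["design inspection", "system test"],
}
children = {"UR1": ["SR1", "SR2"]}

def earliest_evidence(req):
    """Earliest phase at which some evidence arrives for this requirement."""
    pooled = list(activities.get(req, []))
    for child in children.get(req, []):
        pooled += activities.get(child, [])
    return min(PHASES[a] for a in pooled)

print(earliest_evidence("UR1"))  # 1: delegated inspections come first
```

Without the delegated activities, the first evidence for UR1 would arrive only at commissioning (phase 3); with them, confidence starts accumulating at phase 1.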
Granularity of V&V results
In the above example, there is one V&V activity that is linked to multiple requirements. In general, the relationship between requirements and V&V activities will be many-to-many.
However, this presents an issue when it comes to collating results of V&V against requirements. The System Test defined above may show positive results for filling, boiling and dispensing, but fail on the time taken to recover (cool down). So it has passed on all requirements except one. In terms of granularity of information, we need to record the result of the V&V activity against each linked requirement.
How is it best to do that? The only place for it in the information model of the example is on the “verifies” links: there is a link for every requirement-V&V pair.
Another way is shown in the next example:
Here we have separated out the success criteria for each requirement for each test by adding subsidiary objects under the V&V activities (for instance, using the DOORS object hierarchy). Each success criterion has exactly one link to a requirement; a link from a criterion is implicitly a link from the V&V activity. (This link could be made explicit by retaining a link from the activity as well – not shown in the diagram.)
Now we have objects rather than links against which to record the results of the V&V activity (using an attribute of that object). This has the added advantage that it encourages a discipline of identifying precisely what the success criterion is for each requirement against each V&V activity. In addition, the V&V Activity and its list of success criteria can be used as a description/checklist for each particular test.
As results come in, the success/failure status on the success criteria can be rolled up through the “verifies” links to the associated requirements, and then on up through the “satisfies” links to the parent requirements. Both these relationships allow results to be summarised through the eyes of the requirements at every level.
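This roll-up can be sketched in plain Python (again illustrative, not DXL; the requirement names reuse the kettle example above, with hypothetical identifiers). A requirement's rolled-up status is positive only if every success criterion linked to it passes and every child requirement rolls up positive:

```python
# Results recorded against success criteria roll up through "verifies"
# links to requirements, then through "satisfies" links to parents.

# requirement -> child requirements (the "satisfies" links, inverted)
children = {
    "UR1 Make hot drinks": ["SR1 Fill", "SR2 Boil",
                            "SR3 Dispense", "SR4 Recover"],
}

# requirement -> pass/fail results of its linked success criteria
criteria_results = {
    "UR1 Make hot drinks": [True],   # commissioning test
    "SR1 Fill": [True],
    "SR2 Boil": [True],
    "SR3 Dispense": [True],
    "SR4 Recover": [False],          # failed: cool-down time too long
}

def rolled_up_status(req):
    """True only if all linked criteria pass and all children roll up True."""
    own = all(criteria_results.get(req, []))
    kids = all(rolled_up_status(c) for c in children.get(req, []))
    return own and kids

print(rolled_up_status("SR2 Boil"))             # True
print(rolled_up_status("UR1 Make hot drinks"))  # False: SR4 failed
```

This mirrors the situation described above: the system test passes for filling, boiling and dispensing, but the single failed recovery criterion propagates up to the user requirement.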
V&V planning steps
These are the process steps we teach for planning V&V against requirements. They are numbered so as to continue from the process steps named in Part 1:
Determine the V&V activities you will need.
Consider what range of evidence you will need to collect to establish that the requirement has been met, and determine the best V&V activities for that. Aim to collect evidence as early as possible in the life-cycle, considering early proof of design intention as well as later design fulfilment. Capture the V&V activities into the database and (if using explicit links) link them to the associated requirements.
Identify the success criteria for each requirement against each V&V activity.
For each requirement/V&V activity pair, determine the success criteria to be applied. Capture each success criterion in a new object under the activity, and link it to the requirement.
Record the results of the V&V activity against each success criterion.
When the V&V activity has been completed, record its success or failure against each success criterion.
So this is what we now teach those engaged in planning and tracing V&V against requirements, in conjunction with requirements decomposition. It is wrong to assume that people will somehow automatically know how to do this kind of thing. By taking this approach, the V&V plan is well organized, defined at the most appropriate layers, with success criteria defined, and ready for the collection and roll-up of results.
Read the first part here - The practical applications of traceability Part 1: What’s really going on when you decompose a requirement?
About the author - Jeremy Dick works as Principal Analyst for Integrate Systems Engineering Ltd in a consultancy, research and thought leadership capacity. He has extensive experience in implementing practical requirements processes in significant organizations, including tool customization, training and mentoring. At Integrate, he has been developing the concept of Evidence-based Development, an extension of his previous work on “rich traceability”. Prior to this appointment, he worked for 9 years at Telelogic (now part of IBM Rational) in the UK Professional Services group, as both an international ambassador for Telelogic in the field of requirements management and a high-level consultant for Telelogic customers wishing to implement requirements management processes. During this time, he developed considerable expertise in customizing DOORS using DXL to support advanced engineering processes. His roles at Telelogic included a position in the DOORS product division to assist in the transfer of field knowledge to the product team. Co-author of a book entitled “Requirements Engineering” that has recently reached its 3rd edition, he is recognized internationally for his work on traceability. Jeremy can be reached at jeremy.dick[at]integrate.biz
In my career I’ve been deeply involved with both modeling and requirements management disciplines and tools, so it always intrigues me when I hear debates over whether largely textual based (sometimes referred to as ‘traditional’ or ‘document-based’) or model-based approaches to defining and managing requirements are the right way to go.
We’ve all heard the argument that a picture paints a thousand words, but I’ve always vividly remembered something I heard at a conference some years ago which was “I’d have taken a 1000 words over this one unreadable diagram.”
My belief is that it is not an either-or decision. You need both. Models can add clarity to requirements specifications and can bring together a more holistic understanding of what’s expressed in the requirements. Models can be walked through with stakeholders and with the right language and tools (like SysML or UML in IBM Rational Rhapsody), they can even be run to validate that what is captured in the model is correct, consistent and complete. But what if you have contractual requirements to manage, documents of regulations or standards to comply with, or complex performance or availability constraints – you don’t want to clutter your model with so much detail that it becomes unusable.
My preference is for a combination of textual requirements and models, which can be described by the ‘Systems Engineering Club Sandwich’ (references 1 & 2): textual requirements form the layers of bread – maybe a bit dry on their own – and are supplemented by models that form the layers of filling – richer and more expressive – together making a tasty combination to help explore and elaborate requirements, perform decomposition and allocation, and maintain traceability. I recently got together with my colleague Paul Urban to record a 30-minute webcast entitled ‘The Tasty Way to Tackle Complexity - The Systems Engineering Club Sandwich of Requirements & Models’, where we take a look at some engineering challenges, where requirements work goes wrong, how the club sandwich approach works, and how to use requirements and models together effectively. So if this hors d'oeuvre has made you hungry for more, please take a look. Paul and I are really interested to hear what you think.
1. "The Systems Engineering Sandwich: Combining Requirements, Models and Design", Jeremy Dick, Jonathon Chard, INCOSE International Symposium, Toulouse, July 2004.
2. "Requirements Engineering", Hull, Jackson & Dick, Springer 2004.
I was lucky enough last week to travel to the INCOSE (International Council on Systems Engineering) International Symposium 2012 near Rome, Italy.
An excellent opportunity to meet the systems engineering community and hear about their interests and concerns. We had lots of traffic to the very stylish IBM booth, where we talked about the IBM Rational solutions for systems engineering and the latest from IBM Research on tool interoperability and design optimization & trade-off. I’d like to claim the traffic was due to my presence, but in fact there was lots of excitement and interest in the must-have giveaway of the conference, the IBM Limited Edition of the Systems Engineering for Dummies book (if you weren’t there and don’t have a copy, you can download a PDF version).
Being at the INCOSE event reminded me of the very active discussion I recently provoked on the INCOSE LinkedIn group by posting the link to my previous blog post ‘Traceability – How Much is Enough?’. It’s a great read, with some very provocative statements about whether traceability is at all useful – even that it’s the root cause of failure on projects that overrun and overspend – versus those who say it’s absolutely vital on safety-critical systems or where the project is contract-driven. In the end I think some consensus was reached between these two camps: ‘just enough’ traceability to keep a project on track, provide customer/market need context to engineers, facilitate impact analysis, and (if needed) meet industry standards and regulations, is sufficient. Any more is excessive and wasteful, and likely to bog down progress towards delivering innovative products and systems.
During a quiet time at the IBM booth, I also had a chance to chat with my colleague Brian Nolan (marketing manager for the aerospace & defense industry at IBM Rational) about effective traceability, since Brian is very interested in this topic and has presented a Dr. Dobb’s webcast on ‘3 Ways to Improve Traceability and Impact Analysis’. Brian believes in what I would describe as ‘traceability by design’, meaning that traceability is automatically established while you decompose your system design (for example, use case to use case realization to sequence diagram, and so on). This discussion also reminded me of what another colleague, Greg Gorman (program director for IBM systems and software engineering solutions and the INCOSE Corporate Advisory Board member from IBM), described several years ago as ‘link while you think’, meaning traceability is created by the tools while you are performing requirements decomposition, design and development, rather than as an overhead activity afterwards.
I think we’ve now moved some way beyond ‘link while you think’. An information model with ‘just enough’ traceability for your project needs is still essential to avoid traceability spiraling out of control, but new approaches such as Linked Lifecycle Data from the OSLC (Open Services for Lifecycle Collaboration) community, and tools that recognize implicit traceability, provide new ways to visualize lifecycle traceability and perform effective impact analysis. With these, we can make traceability work for us, helping engineering become more agile while staying within cost and schedule, and producing innovative, higher-quality products and systems.