The IBM Rational DOORS family of solutions offers best practices in requirements management and traceability, saving organizations time and money through improved collaboration with stakeholders that eliminates inaccurate, incomplete, and omitted requirements.
DOORS Next Generation is a requirements management application for optimizing requirements communication, collaboration and verification throughout your organization and supply chain. This scalable solution can help you meet business goals by managing project scope and cost. Rational DOORS lets you capture, trace, analyze and manage changes to information while maintaining compliance with regulations and standards.
Top three reasons your organization needs DOORS Next Generation
- Reduce development costs by up to 57%
- Accelerate time to market by up to 20%
- Reduce cost of quality by up to 69%
Key features:
Centralized location: Requirements management in a centralized location improves team collaboration and provides access to full editing, configuration, analysis and reporting capabilities through a desktop client. It also supports the Requirements Interchange Format, enabling suppliers and development partners to contribute requirements documents, sections or attributes that can be traced back to central requirements. It records and displays requirements text, graphics, tables, requirements attributes, change bars, traceability links and more.
Linked requirements: Provides traceability by linking requirements to design items, test plans, test cases and other requirements. Users can concurrently edit separate product and system requirement documents and link entries between documents. Requirement entries can also be linked to models, text specifications, code files, test procedures and documents created with other applications.
Scalability: Addresses changing requirements management needs. DOORS offers an explorer-like hierarchy with multiple levels of folders and projects for simple navigation, no matter how large the database grows.
Change management: Integrations help manage changes to requirements with either a simple, pre-defined change proposal system or a more thorough, customizable change control workflow. DOORS integrates with Rational change management software for requirements change control and requirements workflow management. It also integrates with other Rational solutions, including IBM Rational Quality Manager, IBM Rational Rhapsody and IBM Rational Focal Point, and with tools such as HP Quality Center, giving testers visibility into requirements so they can create test cases and report on requirements coverage by test cases. An integration with Microsoft Team Foundation Server (TFS) enables Microsoft Visual Studio development teams to create and maintain traceability between requirements in Rational DOORS and TFS work items in Visual Studio.
If your organization wants to replace outdated and expensive legacy tools and needs better control over multiple versions of documents, IBM appreciates the opportunity to discuss requirements with you. If you’d like to see a live demo, please click dngdemo.
Would you like to start benefiting from IBM Rational DOORS Next Generation? Start your free trial today: Free Trial
We had a fantastic DOORS customer webcast panel on September 26, with experts from across the industries talking about their experience using IBM Rational DOORS for requirements management. If we had to pick one success metric for the webcast, it would be that we overshot the designated hour by half an hour because of the wonderful discussions going on. Our panelists have graciously agreed to respond to the questions on an offline basis, and we are publishing the answers here.
If you missed this golden opportunity and are wondering whether you can get another chance: YES! Or if you would like to listen to it again, we have posted the replay; you can listen to it anytime. Register here
How do you copy objects with in-links and out-links from one module to another?
Use DXL scripts that capture link information (which can be edited if necessary), then copy the objects, and then run another DXL script to re-create the links.
Paul Lusardi has shared a couple of samples with us. If you are interested, contact us.
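To make the first step concrete, here is a minimal, hypothetical DXL sketch (not Paul’s samples) that dumps outgoing-link information for every object in the current module; the output can then be edited and replayed by a second script that re-creates the links after the copy. The use of "*" (all link modules) is an assumption.

```
// Hypothetical sketch: record each outgoing link as
//   <source object ID> -> <target module> object <target absolute number>
// so the links can be re-created after the objects are copied.
Module m = current
Object o
Link l
for o in m do {
    for l in o -> "*" do {   // "*" = links through any link module (assumption)
        print (identifier o) " -> " (fullName (targetVersion l)) " object " (targetAbsNo l) "\n"
    }
}
```

In-links can be captured the same way by running the script over the source modules, since every in-link is some other module's out-link.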
What factors determine using a new database vs. a new project in a database to differentiate/segregate work?
The biggest criterion for using the same database is linking. If there are projects in a database that you need to link to, the new project should be in that same database. The default should be to use the same database so that you only have to maintain one set of users, groups, standards, etc. The possible exception is if you have two distinct sets of security measures (HIPAA, DoD or DoE cleared data); then you will want to segregate that out to a different server.
Does anyone on the panel publish documents directly from DOORS without add-on applications?
[Patrick] I use DXL scripts that export the data to RTF with some typical publishing formatting.
[Mia] I do. Whether you can depends on the demand for documents (do you need to produce them once a week? Daily? Quarterly?) and how critical their formatting is to your organization. We have configured views in most modules for output. A simple one has just the object number and the Object Heading/Object Text. For a full spec, we include trace columns for one or more linked modules, with object identifiers and even object headings or text in those columns. In the output you get the main object from the current module, followed by all the objects it’s linked to. I include a label that specifies “in link” or “out link.”

I used to painstakingly populate the Paragraph Style attribute in order to map to MS Word, but because our need for documents is very low I have stopped enforcing this. We have a set of MS Word templates with the title page, TOC, other front matter, and a standard set of styles. Plain vanilla DOORS maps to basic Word styles fairly well. I pick the appropriate view and filters, select Export to Word, select the appropriate Word template, and let it rip. I then do some minimal post-processing, like getting rid of the object IDs on headings. Obviously if you’re doing very long documents this would be impractical. In my previous job we had interns create a DXL script that handled the post-processing – you can get to something that’s usable without a huge investment.
How do you deal with sharing your database with the government or other contractors? Do you share read-only partitions?
[Paul] Before coming to NuScale, I co-developed the Unified RM Database, where we allowed outside access using access control and company VPN login credentials to enable multi-company development and reviewer teams to look at the data. The questions would be: do you want them to see work in progress, or just a snapshot of baselined work? Perhaps a separate server with a restored baseline set would be best in that case. I will defer to an expert on baseline sets, though.
How do you stop DOORS from allowing default DOORS Links?
[Patrick] We use a link schema with pairings as necessary. We then create and delete (but do not purge) the DOORS Links module, with a copy in every sub-project/folder, and leave it as the default link module. That way, users cannot create an unintended link between any module pair.
Paul Lusardi has shared a presentation with us. If you are interested, contact us.
Is there a "Copy View" script available?
Paul Lusardi has shared a presentation with us. If you are interested, contact us.
Please ask about doing risk assessment in DOORS: Do any of the panelists document FMEA or risk analysis in DOORS? How do they do it?
[Paul] Yes and no. Textbook FMEA? No. Using DOORS to analyze potential risks by means of risk-informed design? Yes; this is a “developing” activity.
[Patrick] We capture the risk status in DOORS, but not the actual assessment data.
Do any of the panelists integrate DOORS with other tools like Rational Quality Manager?
[Patrick] We are looking into it, but have not actually started using it in production. We are going to deploy DOORS/CQ by year end.
[Paul] Only RPE integrations are used here.
How has your work differed because you use Scrum vs. when you did not?
Our pre-sprint requirements work is at a higher level – that is, not down in the functional weeds. We spend more time exploring new features from the user’s perspective, and more time on storyboarding and similar activities. The features we’ve built since going to agile are much more user-friendly and have much better workflows than the older stuff, where much of the workflow and usability decisions were left to the engineers.
Detailed functional analysis of requirements happens during the sprint, with the requirements analyst on the team working closely with the engineers and testers. For very complex features, this analysis process will begin before the sprint so that we have enough functional detail for the team to estimate the work. We have a small backlog grooming team, made up of members of the scrum teams, who help the requirements analyst with this.
For Mia: with your agile process, how many full-time admins are needed to maintain DOORS?
[Mia] We’re very small, so we don’t even have one full-time admin. Our system engineer handles the environment, and I handle user administration. It’s a tiny part of each of our jobs. I don’t see that our approach represents a particularly different administrative need than any other. If we had a hundred users and multiple DOORS databases, then we’d dedicate administrative resources just like any other implementation.
@Mia - Are requirements for HW to SW interfaces traced through ICD(s) in DOORS?
We’re software only, so no HW/SW interfaces. However, I strongly support the use of interface specifications between software systems and have used an interface module to sit in between systems that need to interact. The trick is to stay on the requirements side of the requirements/design line. The cost of maintenance on such a model is high – probably too heavy for our agile process to maintain. However, should we need to support integration with an external application (one that our teams are not developing), our API requirements will undergo close scrutiny and possibly be carved out into a separate module so that we can use trace links to do impact analysis on external “clients.”
Does DOORS lend itself to any specific DevOps or ALM methodology, like agile?
We adopted a continuous deployment model this year and it had no impact on our use of DOORS. That is, DOORS is still a critical part of our development tooling. We were already capturing release version information for requirements in DOORS, and this information remains very useful in understanding what feature set exists in a given version of our application.
Try IBM Rational DOORS in a Sandbox
Webcast - Achieving sustainable requirements across the supply chain with IBM Rational DOORS
Here is coverage of the Requirements Management for Systems Engineering track keynote, based on the presentation that Bill Shaw (Systems Program Director) and Richard Watson (Senior Product Manager for Requirements Management tools) delivered at Innovate 2013 today.
For those new to the space, IBM Rational DOORS is a widely recognized product in the requirements management area, and here is how we position each of our offerings:
Rational DOORS is the trusted, de facto standard requirements management tool for employing systems engineering methodologies to build complex and embedded systems.
Rational DOORS Web Access, an add-on for DOORS, gives globally distributed stakeholders visibility into requirements and traceability relationships (managed in Rational DOORS), along with the ability to communicate via online requirements discussions. Using a web browser, DOORS Web Access provides access to view and discuss requirements, with no additional software installed on your desktop.
And finally, the latest addition to the family: Rational DOORS Next Generation, the next-generation requirements management solution built on the IBM Rational Jazz platform.
With such a plethora of offerings, we believe we have the right requirements solution for you. As we mentioned earlier, the introduction of DOORS Next Generation DOES NOT mean we are moving away from DOORS. We are continuing our investment in DOORS, and we will continue to release better and improved versions of DOORS in the future. We believe DOORS Next Generation takes the requirements management capabilities we offer to the next level, especially with the foundation of an open collaborative platform. DOORS NG is designed from the ground up to accommodate an ever-growing and complex ecosystem, a greater need for collaboration, and usability for a broader community of stakeholders, and we plan to extend its capabilities for requirements change management and Product Line Engineering. Packaging DOORS NG with DOORS helps our customers try both products without purchasing two licenses.
New in Rational DOORS
We have now included four more pre-configured templates for systems and software engineering, enabling our customers to kick-start their projects.
Systems Engineering Template
A simple, pre-configured information schema for using DOORS to support systems engineering
Aerospace & Defense
Developing requirements against the DO-178B safety standard
Supporting the ISO26262 functional safety standard
FDA Design Control practices defined in 21 CFR Part 820
We are continuing our investment in removing the requirement to install the Rational Publishing Engine (RPE) client for report generation. In DOORS 9.5.1, we have included support for parameterized RPE templates. Advanced styling and configuration options are also now included.
We have been making significant improvements on the Open Services for Lifecycle Collaboration (OSLC) front, including an enhanced Rational DOORS–Design Manager integration. For this integration, we used a link-discovery technique rather than back-linking, which makes for a cleaner integration: links are stored only within the creating application, and discovery is done in the background in real time. This investment also helps improve our third-party integrations.
Starting with this version of DOORS, we will be using ETL (Extract, Transform and Load) to integrate with Rational Insight. Since this is the method used for integrating RRC and DOORS NG with Insight, the metrics capabilities remain the same across the products. This enables specific metrics defined in DOORS to be reused in DOORS Next Generation: one can deploy Insight over DOORS 9.x data and, while piloting DOORS NG, all the metrics data will be made available in it automatically. Check this article for more details: Improve the value of your CLM reports by using metrics.
Based on feedback from customers, we have made a good number of usability enhancements in 9.5.1. Some of them include:
Link preview with OSLC-style rich hover on links, for better traceability navigation
Better navigation into baselines and improved baseline management
Improved support for DOORS table formatting
We have also made some significant improvements to DOORS Web Access. Some of them include:
Simplified configuration and deployment
Improvements to Database Explorer
Support for the DOORS project view in the Database Explorer
New in Rational DOORS Next Generation
We have been continuing to make improvements in the product since its first release in November 2012. Our priorities are product quality and usability and, from a long-term perspective, requirements configuration management. Some of the major updates to DOORS Next Generation are:
Unobtrusive locking of data to avoid save conflicts
Improved graphical markup to help in change management
Improved multi-user and offline edits of non-native data
Improved data support for product evaluation
Note: The roadmap and strategies mentioned in this post are subject to change; please stay in touch with your IBM representatives for the latest roadmaps.
Alex Ivanov is a Senior Software Engineer II with Honors at Raytheon Integrated Defense Systems. Alex has more than 10 years’ experience as a Requirements (DOORS) Database Manager supporting a large-scale distributed requirements database in the aerospace and defense industry, specializing in writing reusable DXL, training, user support and consulting with programs to ensure they get the most out of their use of IBM Rational DOORS. Alex is an IBM Certified Deployment Professional, DOORS v9, and has been recognized as a three-time IBM Champion (2011, 2012, 2013). In 2011 Alex was elected President of the New England Rational User Group.
1. How does it feel to be a returning IBM Champion?
I’m honored to be selected for the 3rd consecutive year. Ironically enough, it was in 2010 that I first started to share across Raytheon the best practices that my team and I had developed around how to effectively use IBM Rational DOORS. I thought to myself: we have hundreds of programs at Raytheon that use DOORS, and yet not everyone is aware of how to make the best use of the tool, and moreover of the customizations that have been developed through custom DXL to make it even easier to use DOORS. This was the beginning of my vision to standardize the way requirements are managed at Raytheon, and little did I know it would lead to discovering my true passion.
2. Can you tell us something about what you do at Raytheon?
I lead a team of engineers that maintains our customizations for how to effectively use DOORS, and moreover I consult with programs all across the company on how to best architect their DOORS databases and take advantage of the automation we have available. I’m most passionate about reaching out to others across the organization who are eager to improve how systems engineering uses DOORS to its maximum advantage. I’m happy to say that over the past 3 years I’ve been able to spread our best practices across numerous Raytheon locations across the US, and have been able to do wonders with social media through the use of wiki pages, blogs, communities and training videos.
3. What are your thoughts on managing requirements effectively?
A tool is just a tool unless you have a sound process and training to go along with it. It is much more important to get people to understand the process and how the tooling helps them do their job than to simply rely on the tool to solve all their problems. I believe it’s very important to have a sound process in place, and certainly, if there are best practices to leverage on how to use tools, people should strive to take advantage of them without losing sight of their process and what they are responsible for delivering. I can’t stress enough how important it is to determine your project architecture and the relationships of what you will be managing in DOORS; it might seem time-consuming at first, but it will save you a lot of time down the line.
4. So, how long have you been using Rational DOORS?
I started using DOORS in 2000, which is when I began my career at Raytheon. At the time I was a software developer who had just graduated from Boston University, and DOORS was the tool that held the software requirements I had to develop code for. I believe at the time it was DOORS version 4.0, and I can certainly say that the tool has come a long way since then.
5. That's a long time; in your opinion, what are some of the greatest assets of the product and well, the pain points?
Without a doubt, one of the greatest assets is the ability to customize the product for your process. Having said that, I believe it’s very important to have a solid understanding of the systems engineering process and a clear understanding of how to architect your project for success. Only once you have a reusable architecture can you turn the focus to how to write reusable DXL to complement your project template and architecture. As far as the pain points, certainly one of them has to be how manual it is to manage the database, whether it be attributes, views or access; without custom DXL scripting, it would be rather time-consuming to carry out many tasks.
6. Do you have some tips or tricks to share with the DOORS users out there?
I’m happy to say that there are a ton of free resources online, and I would highly recommend people take the time to watch webcasts, join a local Rational user group, and network with peers who have their own experiences to share. If you aren’t a member already, I encourage everyone to join the Global Rational User Group; their monthly newsletters are a great source of information, and many presentations are archived for viewing right on the site. Another great resource for DOORS and DXL is the developerWorks forums; having asked and answered questions on the Rational DOORS DXL forum, I highly recommend it. Numerous webcasts are available on 321Gang's website; Managing DOORS: The Administrator’s Toolbox is one where I take you through some examples of how to write reusable DXL to make it much easier to manage attributes and views.
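In the spirit of the reusable admin DXL Alex mentions, here is a minimal, hypothetical sketch (the attribute name "Review Status" is invented for illustration, and error handling is omitted) that creates the same text attribute in every formal module of the current folder:

```
// Hypothetical sketch: add a "Review Status" text attribute to every
// formal module in the current folder, skipping modules that already have it.
Folder f = current
Item itm
Module m
for itm in f do {
    if (type itm == "Formal") {
        m = edit(fullName itm, false)      // open the module for edit, no GUI
        if (null m) continue
        if (null find(m, "Review Status")) {
            current = m
            AttrDef ad = create object type "Text" attribute "Review Status"
        }
        save m
        close m
    }
}
```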
7. What advice do you give to the budding DOORS administrators?
I would encourage everyone to find a mentor; this can be anyone you look up to. Just ask them questions. I know that I would not be where I am today had it not been for the mentoring and support of numerous people in my life, and I am forever grateful to them. Something I have learned throughout the years is that it’s important to ask questions, and don’t assume the person who is asking you to do something has the right answers. As you gain experience, you’ll be able to tailor the solution for your customers, and they will thank you for it. It’s unbelievable how many resources are available online for learning. I’m a big fan of watching and creating training videos, and YouTube has numerous channels I’d recommend subscribing to, a couple of which are IBMRational and IBMJazz.
I believe it’s very important to always want to improve, whether in your personal or professional life. I encourage everyone to grab a book on any subject that interests them; you’d be amazed how much you will learn. I’ve read dozens of books over the past few years and have a reading list for those who want to follow along at http://www.shelfari.com/alexivanov/shelf
In this guest blog post, Requirements Engineering Expert Jeremy Dick continues his discussion of practical applications of traceability. Read the first part here:
The practical applications of traceability Part 1: What’s really going on when you decompose a requirement?
Inspired by recent experience in a large systems engineering project, Part 1 of this essay covered the practice of decomposing requirements, which brings about one of the most important traceability relationships in requirements engineering. Part 2 here covers the next most important relationship: that between requirements and validation and verification activities. Part 3 will continue the discussion of V&V, and how it itself gives rise to further requirements.
Verification & Validation (V&V)
I don’t care enough about the difference between validation and verification to want to enter into the divisive debate about it here. I am just going to say V&V and be done with it!
Kinds of V&V activity
There are many kinds of V&V activity, and organisations have varied ways of classifying them. In the project I am working on, the classifications are Analysis, Analogy, Inspection, Review, Test and Demonstration.
By their very nature, these types of activity tend to occur at different times of the life-cycle. Analysis, for instance, tends to occur early to predict properties of the proposed design and verify it against requirements. By contrast, demonstration tends to occur late as part of the acceptance tests.
Typically, a whole series of activities will be planned against a single requirement, some early, some late, allowing confidence to accumulate over the life-cycle of the project.
Requests for evidence
Despite the variety of kinds of activity, there is one thing they all have in common: they are requests for evidence of some kind or other. Indeed, I would favour calling V&V activities exactly that: “requests for evidence”.
Intention versus Fulfilment
Those activities that are carried out early in the development process provide evidence that the intended design will meet the requirements – they address design intention. Those activities applied late in the development process collect evidence that what has been built meets the requirements – they address design fulfilment.
Once the need for V&V activities has been established, this will often give rise to new requirements, either on the design of the product itself, or requirements for the construction of test artefacts, such as models and test equipment. (We never build just the product; there are always other things that need designing and building that surround the system for various purposes.)
The management of requirements arising from V&V will be the topic of Part 3.
Requirements decomposition and V&V planning
When planning V&V activities against a parent requirement, you need to take into account the V&V that will be carried out on its child requirements, and their child requirements, and so on.
Take, for instance, the following example where a user requirement is decomposed into a number of system requirements:
The only V&V activity planned against the user requirement is a commissioning test, which will occur late in the life-cycle. However, further V&V activities are defined against the child system requirements. Some of these are design inspections that occur very early, and some are system tests that occur relatively late, but still before commissioning.
There is, of course, a sense in which all these V&V activities provide evidence for the satisfaction of the user requirement, but some of the activities fit more directly against the system requirements. So when planning V&V activities, you need to ask the question: what activities can only be carried out against the parent requirement, and which can be delegated to child requirements? – because those that can be delegated are likely to provide evidence earlier in the life-cycle. And you always want that, if you can get it.
Granularity of V&V results
In the above example, there is one V&V activity that is linked to multiple requirements. In general, the relationship between requirements and V&V activities will be many-to-many.
However, this presents an issue when it comes to collating results of V&V against requirements. The System Test defined above may show positive results for filling, boiling and dispensing, but fail on the time taken to recover (cool down). So it has passed on all requirements except one. In terms of granularity of information, we need to record the result of the V&V activity against each linked requirement.
How is it best to do that? One place to do that in the information model of the example is on the “verifies” links; there is a link for every requirement–V&V pair.
Another way is shown in the next example:
Here we have separated out the success criteria for each requirement for each test by adding subsidiary objects under the V&V activities (for instance, using the DOORS object hierarchy). Each success criterion has exactly one link to a requirement; a link from a criterion is implicitly a link from the V&V activity. (This link could be made explicit by retaining a link from the activity as well – not shown in the diagram.)
Now we have objects rather than links against which to record the results of the V&V activity (using an attribute of that object). This has the added advantage that it encourages a discipline of identifying precisely what the success criterion is for each requirement against each V&V activity. In addition, the V&V Activity and its list of success criteria can be used as a description/checklist for each particular test.
As results come in, the success/failure status on the success criteria can be rolled up through the “verifies” links to the associated requirements, and then on up through the “satisfies” links to the parent requirements. Both these relationships allow results to be summarised through the eyes of the requirements at every level.
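As a rough illustration of the first hop of that roll-up, here is a hypothetical DXL sketch. The attribute names ("Result", "Rolled Up Result") and the assumption that the "verifies" relationship is a link module of that name are invented for illustration, and only failures are propagated:

```
// Hypothetical sketch: push each failed success criterion's result across
// its "verifies" link to the requirement it verifies.
Module critMod = current   // module holding the success criteria
Object crit, req
Link l
for crit in critMod do {
    string result = crit."Result" ""
    if (result != "Fail") continue
    for l in crit -> "verifies" do {
        // make sure the requirement module is open so the target resolves
        Module reqMod = edit(fullName (targetVersion l), false)
        if (null reqMod) continue
        req = target l
        if (!null req) req."Rolled Up Result" = "Fail"
    }
}
```

A second pass of the same shape would carry the status on up the "satisfies" links to the parent requirements.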
V&V planning steps
These are the process steps we teach for planning V&V against requirements. They are numbered so as to continue from the process steps named in Part 1:
Determine the V&V activities you will need.
Consider what range of evidence you will need to collect to establish that the requirement has been met, and determine the best V&V activities for that. Aim to collect evidence as early as possible in the life-cycle, considering early proof of design intention as well as later design fulfilment. Capture the V&V activities into the database and (if using explicit links) link them to the associated requirements.
Identify the success criteria for each requirement against each V&V activity.
For each requirement/V&V activity pair, determine the success criteria to be applied. Capture each success criterion in a new object under the activity, and link it to the requirement.
Record the results of the V&V activity against each success criterion.
When the V&V activity has been completed, record the success or failure of the activity against each success criterion.
So this is what we now teach those engaged in planning and tracing V&V against requirements, in conjunction with requirements decomposition. It is wrong to assume that people will somehow automatically know how to do this kind of thing. By taking this approach, the V&V plan is well organized, defined at the most appropriate layers, with success criteria defined, and ready for the collection and roll-up of results.
Read the first part here: The practical applications of traceability Part 1: What’s really going on when you decompose a requirement?
About the author: Jeremy Dick works as Principal Analyst for Integrate Systems Engineering Ltd in a consultancy, research and thought leadership capacity. He has extensive experience in implementing practical requirements processes in significant organizations, including tool customization, training and mentoring. At Integrate, he has been developing the concept of Evidence-based Development, an extension of his previous work on “rich traceability”. Prior to this appointment, he worked for 9 years at Telelogic (now part of IBM Rational) in the UK Professional Services group as both an international ambassador for Telelogic in the field of requirements management and a high-level consultant for Telelogic customers wishing to implement requirements management processes. During this time, he developed considerable expertise in customizing DOORS using DXL to support advanced engineering processes. His roles at Telelogic included a position in the DOORS product division to assist in the transfer of field knowledge to the product team. Co-author of a book entitled “Requirements Engineering” that has recently reached its 3rd edition, he is recognized internationally for his work on traceability. Jeremy can be reached at jeremy.dick[at]integrate.biz
Today we have with us Mia McCroskey of Emerging Health, Montefiore Information Technology, who was recognized as an IBM Champion this year. She shares her thoughts about the requirements management domain.
Welcome to the IBM family! It’s a great pleasure to have you with us as an IBM Champion. Congratulations! How do you feel?
I am honored to be recognized in this way not just this year but for the past several years that I have been asked to present my team's stories at Innovate. It may seem like our little team's requirements management needs are nothing like those of customers with huge DOORS ecosystems. But really we are an R&D site for the evolution of requirements management techniques and strategies. If a member of my team has an interesting idea for how to capture, structure, track, or trace requirements, we can try it without getting high level approvals or disrupting the work of hundreds of people. I take tremendous satisfaction from sharing our successes in a way that may help others improve their best practices.
Can you tell us something about what you do at Emerging Health, Montefiore Information Technology?
Emerging Health is primarily an IT delivery organization supporting healthcare delivery in the Bronx. Montefiore Medical Center is our parent company. My team, Product Development, is a software development shop tucked away in a corner doing very different work from most everyone else. Our application, Clinical Looking Glass, is a browser-based clinical intelligence tool that gives clinicians access to the enormous wealth of patient data gathered by all the other systems. Our end users can get an answer to a question like "are my clinic's diabetic patients getting the level of follow-up care required by our funding sources?" in a few minutes. In most healthcare environments, getting the data that you need to answer this question takes weeks.
My title is Manager, Product Development Lifecycle because I have my fingers on just about every stage of that lifecycle. Specifically, I lead our Requirements Management, Quality Assurance, Education, Support, and Implementation teams. I also manage outsourced development work, manage client relationships, and do hands on end-user training and support. We are an extremely team-oriented organization, with two formal development scrum teams and two teams that are at various stages of adopting an agile process. I'm deeply involved in that right now: it's challenging to apply Scrum to a training and support team, and to a team of data analysts and engineers.
Having said all that, my roots are in requirements. On a team of "happy path" stakeholders who get very excited by ideas for new functionality, I love to dig in and find the challenges that nobody wants to think about -- before we start coding. Sometimes I feel like a real buzz kill!
What are your thoughts on managing requirements effectively?
Agile models have forced us to completely reorganize our requirements elicitation, analysis, and management processes. But one thing that has not changed is my belief in a comprehensive, functionally organized requirements model.
But when I say "comprehensive," I don't mean laboriously detailed. The art of requirements analysis and management is in knowing -- or guessing right -- what details are going to be important later. By later I don't mean to the coders and testers in this iteration; I mean when we want to revise or augment the feature in six months. The requirements analyst has to be deeply plugged in to the business goals and vision in order to predict the future and capture the least amount of information, but the most important, about what the team is doing.
A few years ago I spent many months constructing an "as built" specification for a market data system at the New York Stock Exchange. I had the original ten-year-old spec and a few dozen incremental release documents. We were getting ready to refactor the system and nobody knew every business rule and function. I vowed that no system I worked on would ever lack a spec that described everything it currently did. Incremental requirements specs that aren't integrated into the overall system are defects waiting to be discovered once the coders get ahold of them.
Having argued that, I must add that a requirements model in document form is pretty nearly impossible to maintain in the way I describe. You've got to employ a database tool that supports the granularity of each requirement and allows you to describe each one through attributes. Then you can use filters and queries and views to present an infinite number of customized specifications -- all of the requirements implemented in a specific release, or all of the requirements related to a specific functional area, or the completed requirements related to a specific business objective or corporate mandate. A sketch of such a filter appears below.
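As a small taste of what that looks like in DOORS, here is a hypothetical DXL fragment (the "Release" attribute name and value are invented) that reduces the current module to the requirements implemented in a specific release, ready to be saved as a view or exported:

```
// Hypothetical sketch: show only objects whose "Release" attribute
// equals "2.1" -- the basis for a release-specific specification.
Filter f = attribute "Release" == "2.1"
set f
filtering on
refresh current
```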
What are your thoughts on the role of requirements management in agile projects?
I recently spoke with a software development professional who was very proud of his organization's highly structured and detailed requirements templates that captured every detail before any work began, "to be sure we deliver what's wanted." I felt like I was talking to a tyrannosaurus rex. We all know that the day after you baseline that 400-page spec, it's already out of date.
Agile with its short increments and "only write down what you really need to" mentality can seem seductively freeing. When our organization adopted Scrum, I stuck to my requirements model guns, and sure enough a few months later we couldn't remember decisions that we'd made a few sprints back, nor even exactly which sprint we'd done the work in. Since we weren't supposed to be doing "heavy" requirements, we'd been coached to: use the system and see what happens; or sift through dozens of completed user stories and hope the detail we wanted was actually mentioned in the acceptance criteria (which it was not because it was an in-sprint decision); or try to find relevant test cases and check the expected results. Instead, I launched DOORS, went to the functional area related to the question, and checked our documented business rule. Done.
What are some of the challenges you see in Healthcare Informatics projects?
Deriving meaningful information from the electronic medical record is essential to justifying the cost of those systems. We're piloting the use of predictive analytics -- combining statistical methods with the mass of patient data collected every day at our parent medical center -- to predict outcomes at the population level. To do it you need a very wide range of data: blood pressure, height, and weight, smoking patterns, history of heart disease, current blood sugar level, and on and on. Just bringing all this data together is the first challenge. Next is the analytic tool -- that's CLG. Finally you need big iron to process it. Most local and regional healthcare providers don't have the funding for, say, Watson. My team spent last summer optimizing our hardware and software environment, and CLG itself, to handle analysis of larger data sets faster, but within a medical-center friendly infrastructure budget.
Another area of critical concern is patient information. The need to pool patient data for direct care as well as population research is supported by legislation and funding sources. But we are bound, both legally and ethically, to protect patient identity in every circumstance.
Clinical Looking Glass has the capacity to show patient contact information to users who have been granted permission to see it -- usually clinicians who are actively providing care and need to contact the patients. While this is a critical feature of CLG, we expect to have to develop more granular levels of access as new types of clients adopt the product. For example, our Regional Health Information Organization (RHIO) client has data from twenty-two healthcare institutions. Some patients have declined to have their identity shared across the organization. We have to build the capability to mask these patients' identity even to our users who have permission to see it.
We have with us today Bruce Powel Douglass. He doesn't need an intro for most of us -- Embedded Software Methodologist. Triathlete. Systems engineer. Contributor to UML and SysML specifications. Writer. Black Belt. Neuroscientist. Classical guitarist. High school dropout. Bruce Powel Douglass, who has a doctorate in neurocybernetics from the USD Medical School, has over 35 years of experience developing safety-critical real-time applications in a variety of hard real-time environments. He is the author of over 5700 book pages from a number of technical books, including Real-Time UML, Real-Time UML Workshop for Embedded Systems, Real-Time Design Patterns, Doing Hard Time, Real-Time Agility, and Design Patterns for Embedded Systems in C. He is the Chief Evangelist at IBM Rational, where he is a thought leader in the systems space and consults with and mentors IBM customers all over the world. He can be followed on Twitter @BruceDouglass. Papers and presentations are available at his Real-Time UML Yahoo technical group and from his IBM thought leader page.
The problems with poor requirements are legion, and I don't want to get into that in this limited space (see Managing Your Requirements 101 – A Refresher Part 1: What is requirements management and why is it important?). What I want to talk about here is verification and validation of the requirements qua requirements, rather than at the end of the project when you're supposed to be done.
The usual thing is that requirements are reviewed by a bunch of people locked in a room until they are ready to either 1) gnaw off their own arm or 2) approve the requirements. Then the requirements are passed off to a development team – which may consist of many engineering disciplines and lots of engineers – for design and implementation. In parallel, a testing group writes the verification and validation (V&V) plan (including the test cases, test procedures and test fixtures) to ensure that the system conforms to the requirements and that the system meets the need. After implementation, significant problems tracing back to poor requirements require that portions of the design and implementation be thrown away and redone, resulting in projects that are late and over budget. Did I get that about right?
The key problem with this workflow is that the design and implementation are started and perhaps even finished without any real assurance about the quality of the requirements. The actions that determine that the requirements are right are deferred until implementation is complete. That means that if the requirements are not right, the implementation (and corresponding design) must be thrown away and redone. Internal to the development effort, unit/developer and integration testing verify the system is being built properly and meets the requirements. Then at the end, the system verification testing provides a final check to make sure that the requirements are correctly addressed by the implementation.
During this development effort, problems with requirements do emerge – such as requirements that are incomplete, inconsistent, or incorrect. When such problems are identified, this kicks off a change request effort and an update to the requirements specification (at least in any reasonable process), resulting in the modification of the system design and implementation. But wouldn't it be better not to have these defects in the first place? And even more important, wouldn't it be useful to know that implementing the requirements will truly result in a system that actually meets the customer's needs?
There are two concerns I want to address here: ensuring that the requirements are “good” (complete, consistent, accurate, and correct) and that they reflect the customer’s needs. And I want to do this before design and implementation are underway.
It isn’t obvious
Imagine you’re building a house for your family. You contract an architect who comes back to you after 3 months with a 657-page specification with statements like:
- … indented by 7 meters from the west border of the premises, there shall be the left corner of the house
- … The entrance door shall be indented by another 3.57 meters
- … 2.30 meters wide and 2.20 meters high, there shall be a left-hand hinge, opening to the inside
- …As you come in, there shall be two light switches and a socket on your right, at a height of 1.30 meters
My question to you is simple: is this the house you want to live in? How would you know? There might be 6500 requirements describing the house, but it would be almost impossible for any human to understand whether this is the house you want. For example:
- Is the house energy efficient?
- Does the floor plan work for your family's uses, or must you go through the bathroom to get to the kitchen?
- Is it structurally sound?
- Does it let in light from the southern exposure?
- Is there good visibility to the pond behind the house?
- Does it look nice?
What (real) architects do is they build models of the system that support the reasoning necessary to answer these questions. They don’t rely simply on hundreds or thousands of detailed textual statements about the house. Most systems that I’m involved with developing are considerably more complex than a house and have requirements that are both more technical and abstract.
Nevertheless, I still have the same basic need to be able to understand how the requirements fit together and reason about the emergent properties of the system. The problem of demonstrating that the implementation meets the stated requirements (“building the system right”) is called verification. The problem of showing that the solution meets the needs of the customer is called validation. Verification, in the presence of requirements defects, is an expensive proposition, largely due to the rework it entails. Implementation defects are generally easy and inexpensive to repair but the scope of the rework for requirements defects is usually far greater. Validation is potentially an even more expensive concern because not meeting the customer need is usually not discovered until the system is in their hands. Requirements defects are usually hundreds of times more expensive than implementation defects because the problems are introduced early, identified late in the project, and require you to throw away existing work, redesign and reimplement the solution, then integrate it into the system without breaking anything else.
A proposal: Verifiable and Validatable Requirements
The agile adage of “never be more than minutes away from demonstrating that the work you’re doing is right” applies to all work products, not just software source code. It’s easy to understand how you’d do that with source code (run and test it). But how do you do that with requirements?
The Core Concept
Premise: You can only verify things that run.
Conclusion: Build only things that run.
Solution: Build executable requirements models to support early requirements verification.
If we can build models of the requirements, we can verify and validate them before handing them off to the design team. The way I recommend you do that is to:
- Organize requirements into use cases (user stories work too, if you swing that way)
- Use sequence diagrams to represent the required sequences of functionality for the set of requirements allocated to the use case (scenarios)
- Construct a normative (and executable) state machine that is behaviorally equivalent to that set of scenarios
- Add trace links from the requirements statements to elements of the use case model
- messages and behaviors in scenarios, and
- events (for messages from actors), actions (for messages to actors and internal behaviors), and states (conditions of the system) in the state machine
- Verify requirements are consistent, complete, accurate, and correct
- Validate requirements model with the customer
When problems are identified with the requirements during this functional use case analysis, they can be easily and inexpensively fixed before there is any design or implementation to throw away and redo.
Constructing the Executable Use Case
Some people are confused by the fundamental notion of using a state machine to represent requirements, thinking that state machines are inherently a design tool. State machines are just a behavioral specification, and requirements are really just statements of behavior in which we are trying to characterize the required input-output control and data transformations of the system. It's a natural fit. Consider the set of user stories for a cardiac pacemaker:
- The pacemaker may be Off (not pacing or sensing) or executing the pacing mode of operation.
- The cardiac pacemaker shall pace the Atrium in Inhibit mode; that is, when an intrinsic heart beat is detected at or before the pacing rate, the pacemaker shall not send current into the heart muscle.
- If the heart does not beat by itself fast enough, as determined by the pacing rate, the pacemaker shall send an electrical current through the heart via the leads at the voltage potential specified by the Pulse Amplitude parameter (nominally 20mV, range [10..100]mV) for the period of time specified by the Pulse Length (nominally 10ms, range [1..20]ms).
- The sensor shall be turned off before the pacing current is released.
- The sensor shall not be re-enabled following a pace for the period of time it takes the charge to dissipate to avoid damaging the sensor (nominally 150ms, setting range is [50..250]ms). This is known as the refractory time.
- When the pacing engine begins, it will disable the sensor and current output; the sensor shall not be enabled for the length of the refractory time.
A scenario of use is shown in the figure below.
A state machine that describes the required behavior is shown below.
Of course, this is a simple model, but it actually runs, which means that we can verify that it is correct, and we can use it to support validation with the customer as well. We can examine different sequences of incoming events with different data values and look at the outcomes to confirm that they are what we expect.
Verifying the Requirements
For the verification of consistency of requirements, we must first decide what “inconsistent” means. I believe that inconsistent requirements manifest as incompatible outcomes in the same circumstance, such as when a traffic light would be Red because of one requirement but at the same time must also be Green to meet another. Since the execution of the requirements model has demonstrable outcomes, we can run the scenarios that represent the requirement and show through demonstration that all expected outcomes occur and that no undesired consequences arise. For the verification of completeness, we can first demonstrate – via trace links – that every requirement allocated to the use case is represented in at least one scenario as well as in the normative state machine. Secondly, the precision of thought necessary to construct the model naturally raises questions during its creation. Have we considered what happens if the system is in THIS state and then THAT occurs? What happens if THAT data is out of range? How quickly must THIS action occur? Have we created all of the scenarios and considered all of the operational variants? These questions will naturally occur to you as you construct the model and will result in the addition of new requirements or the correction of existing ones.
For correctness, I mean that the requirement specifies the proper outcome for a given situation. This is usually a combination of preconditions and a series of input-output event sequences resulting in a specified post-condition. With an executable use case model, we can show via test that for each situation, we have properly specified the output. We can run the same scenario with different data values to ensure that boundary values and potential singularities are properly addressed. We can change the execution order of incoming events to ensure that the specification properly handles all combinations of incoming events. Accuracy is a specific kind of correctness that has to do with quantitative outcomes rather than qualitative ones. For example, if the outcome is a control signal that maintains an output proportional to an input (within an error range), we can run test cases to ensure that the specification actually achieves that. We can both execute the transformational logic in the requirements and formally (mathematically) analyze it as well, if desired.
Be aware that this use case model is not the implementation. Even if the system use case model is functionally correct and executes properly, it is not operating on the desired delivery platform (hardware) and has not been optimized for cost, performance, reliability, safety, security, and other kinds of quality of service constraints. In fact, it has not been designed at all. All we’ve done is clearly and unambiguously state what a correctly designed system must do. This kind of model is known as a specification model and does not model the design.
Validating the Requirements
Validation refers to confirming that the system meets the needs of the customer. Systems that meet the requirements may fail to provide value to the customer because
- the customer specified the wrong thing,
- the requirements missed some aspect of correctness, completeness, accuracy or consistency,
- the requirements were captured incorrectly or ambiguously,
- the requirements, though correctly stated, were misunderstood
The nice thing about the executable requirements model is that you can demonstrate what you’ve specified to the customer, not as a pile of dead trees to be read over a period of weeks but instead as a representation that supports exploration, experimentation, and confirmation. You may have stated what will happen if the physician flips this switch, turns that knob, and then pushes that button, but what if the physician pushes the button first? What has been specified in that case? In a traditional requirements document, you’d have to search page by page looking for some indication as to what you specified would happen. With an executable requirements specification, you can simply say “I don’t know. Let’s try it and find out.” This means that the executable specification supports early validation of the requirements, so that you can have much higher confidence that the customer will be satisfied with the resulting product.
So does it really work?
I’ve consulted on hundreds of projects, almost all of which were in the “systems” space, such as aircraft, spacecraft, medical systems, telecommunications equipment, automobiles and the like. I’ve used this approach extensively with the Rational Rhapsody toolset for modeling and (usually) DOORS for managing the textual requirements. My personal experience is that it results in far higher quality in terms of requirements and a shorter development time with less rework and happier end customers. By way of a public example, I was involved in the development of the Eaton Hybrid Drivetrain project. We did this kind of use case functional analysis, constructing executable use cases, and it identified many key requirements problems before they were discovered by downstream engineering. The resulting requirements specification was far more complete and correct after this work was done than in previous projects, meaning that the development team spent less time overall.
Summary
Building a set of requirements is both daunting and necessary. It is necessary because without it, projects will take longer – sometimes far longer – and cost more. Requirements defects are acknowledged to be the most expensive kind of defects because they are typically discovered late (when you’re supposed to be done) and require significant work to be thrown away and redone. It is a daunting task because text – while expressive – is ambiguous and vague, and its quality is difficult to demonstrate. However, by building executable requirements models, the quality of the requirements can be greatly improved at minimal cost and effort.
For more detail on the approach and specific techniques, you can find more information in my books Real-Time UML Workshop or Real-Time Agility.
Eric has worked in the software development industry for over 20 years and is co-author of UML for Database Design and UML for Mere Mortals, both published by Addison-Wesley. Eric is currently responsible for capabilities marketing of Rational’s application lifecycle management solutions, including Agile Software Delivery, Quality & Test Management, Requirements Management and Collaborative Lifecycle Management. He rejoined IBM in 2008 as the team leader for InfoSphere Optim Solutions and later was responsible for Information Governance Solutions. Prior to rejoining IBM, he worked for Ivar Jacobson Consulting as VP of Sales and Marketing. Before joining Ivar Jacobson, he was director of product marketing for CAST Software. In his earlier time at IBM, Eric held several roles within the Rational Software group, including program director for business, industry and technical solutions, product manager for Rational Rose, and team market manager for Rational desktop products. He also spent several years with Logic Works Inc. (acquired by Platinum Technologies and then CA) as product manager for ERwin.
As I think about IT today, I see, in some ways, a rebirth of the importance of architecture and requirements. We are in an era of "ANY", meaning that applications and data can be accessed from anywhere, by anyone, and at any time.
Looking back at the applications of yesteryear (two or three years ago), we didn't expect much from web or mobile applications. We could view data, run some reports or do some basic tasks, but to do the real work we needed to go to the fat client. Now, in today's era of any, the user interface may look different, but the capabilities had better be the same, since we expect near-full capability no matter our device or interface.
This puts a newfound set of requirements on applications and their development, and it is making modeling and requirements (analysis and design) relevant again, but with a new twist: AGILITY. It is no longer a question of "what platform am I developing for"; the question is how quickly we can get it up and running on the latest versions of Apple, Android, HTML5 and whatever other platforms our clients expect the application to run on, and it had better run on all of the latest versions, with no delays, when updated operating systems come out.
And the question I often receive now is, "Can I be agile and meet these needs at the same time?" The plain answer is yes, you can. However, agility does not mean you can ignore requirements and design. I am not talking about write-once, run-anywhere; rather, understand the true requirements so that the various development teams can articulate them in code, brought to life as features for the users as they expect to see them. Users are looking for the application to be specific to their hardware/OS (iPad/iOS, Droid/Android and so on), as the hardware has become the platform not just for running the application, but for its expected look, feel and usability too. This often means different developers for different deployment platforms, certainly at the user interface level.
Designing applications requires that we are prepared. Architectures must be solidified and communicated. Requirements must be consistent and shared. We must model architectures so that developers can build to the designs and not recreate their own, wasting time and resources, and we must share those designs across the team.
Does this get in the way of agility? No, it will speed agility. By sharing designs and assigning tasks based on architecture needs, we can speed time to market and our ability to deliver high-quality software. In the era of any, we may have multiple teams working on the same front-end capabilities for different platforms even though the back end is the same. The more they can share, the faster they can deploy, and the better the requirements gathered from users, the more satisfied those users will be. We see people changing their preferred platform as employers, vendors and suppliers change requirements, so we need to be prepared for the customer who is using an iPad today to be using an Android device tomorrow, with the same requirements on the application. Just look at how the world of BlackBerry has evolved.
So, as you think about your next project, don’t skimp on requirements and architectures or you may be limiting your agility in the future rather than speeding your time to satisfied clients.
In my career I've been deeply involved with both the modeling and the requirements management disciplines and tools, so it always intrigues me when I hear debates over whether largely text-based (sometimes referred to as 'traditional' or 'document-based') or model-based approaches to defining and managing requirements are the right way to go.
We've all heard the argument that a picture paints a thousand words, but I've always vividly remembered something I heard at a conference some years ago: "I'd have taken a thousand words over this one unreadable diagram."
My belief is that it is not an either/or decision: you need both. Models can add clarity to requirements specifications and can bring together a more holistic understanding of what's expressed in the requirements. Models can be walked through with stakeholders, and with the right language and tools (like SysML or UML in IBM Rational Rhapsody) they can even be executed to validate that what is captured in the model is correct, consistent and complete. But what if you have contractual requirements to manage, documents of regulations or standards to comply with, or complex performance or availability constraints? You don't want to clutter your model with so much detail that it becomes unusable.
My preference is for a combination of textual requirements and models, described by the 'Systems Engineering Club Sandwich' (references 1 and 2). The textual requirements form the layers of bread (maybe a bit dry on their own) and are supplemented by models that form the layers of filling, which are richer and more expressive; together they make a tasty combination that helps you explore and elaborate requirements, perform decomposition and allocation, and maintain traceability. I recently got together with my colleague Paul Urban to record a 30-minute webcast entitled 'The Tasty Way to Tackle Complexity - The Systems Engineering Club Sandwich of Requirements & Models'
where we take a look at some engineering challenges, where requirements work goes wrong, how the club sandwich approach works, and how to use requirements and models together effectively. So if this hors d'oeuvre has made you hungry for more, please take a look. Paul and I would be really interested to hear what you think.
1. "The Systems Engineering Sandwich: Combining Requirements, Models and
Design", Jeremy Dick, Jonathon Chard, INCOSE International Symposium,
Toulouse, July 2004.
2. Requirements Engineering, Hull, Jackson
& Dick, Springer 2004.
I was lucky enough last week to travel to the INCOSE (International Council on Systems Engineering) International Symposium 2012 near Rome, Italy. It was an excellent opportunity to meet the systems engineering community and hear about their interests and concerns. We had lots of traffic to the very stylish IBM booth, where we talked about the IBM Rational solutions for systems engineering and the latest from IBM Research on tool interoperability and design optimization and trade-off. I'd like to claim the traffic was due to my presence, but in fact there was lots of excitement and interest in the must-have giveaway of the conference, the IBM Limited Edition of the Systems Engineering for Dummies book (if you weren't there and don't have a copy, you can download a PDF version).
Being at the INCOSE event reminded me of the very active discussion I recently provoked on the INCOSE LinkedIn group with the posting of the link to my previous blog post 'Traceability – How Much is Enough?'. It's a great read, with some very provocative statements about whether traceability is useful at all and whether it's the root cause of failure on projects that overrun and overspend, versus those who say it's absolutely vital on safety-critical systems or where the project is contract-driven. In the end I think some consensus was reached between these two camps: 'just enough' traceability to keep a project on track, provide customer/market need context to engineers, facilitate impact analysis, and (if needed) meet industry standards and regulations, is sufficient. Any more is excessive and wasteful, and likely to bog down progress towards delivering innovative products and systems.
During a quiet time at the IBM booth, I also had a chance to chat with my colleague Brian Nolan (marketing manager for the aerospace & defense industry at IBM Rational) about effective traceability, since Brian is very interested in this topic and has presented a Dr. Dobb's webcast on '3 Ways to Improve Traceability and Impact Analysis'. Brian believes in what I would describe as 'traceability by design', meaning that traceability is automatically established while you decompose your system design (for example, use case to use case realization to sequence diagram and so on). This discussion also reminded me of what another colleague, Greg Gorman (program director for IBM systems and software engineering solutions and the INCOSE Corporate Advisory Board member from IBM), described several years ago as 'link while you think', meaning traceability is created by the tools while you are performing requirements decomposition, design and development, rather than as an overhead activity afterwards.
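As a rough illustration of 'link while you think', here is a hedged sketch in Python. The function names, artifact IDs and link types are hypothetical; this is not a DOORS, Rhapsody or OSLC API. The idea it shows is simply that creating a downstream artifact and recording its trace link are one and the same action, so traceability is never a separate chore.

```python
# Trace links accumulate as a side effect of decomposition, not afterwards.
trace_links = []   # (source, relationship, target)

def derive(source_id, relationship, target_id):
    """Register the derived artifact and its trace link in one step."""
    trace_links.append((source_id, relationship, target_id))
    return target_id

# The engineer's normal decomposition work creates the links as it happens:
uc = derive("REQ-012", "refined by", "UC-Deliver-Dose")
sd = derive(uc, "realized by", "SD-Deliver-Dose-Nominal")

for src, rel, dst in trace_links:
    print(f"{src} --{rel}--> {dst}")
```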
I think we've now moved some way beyond 'link while you think'. An information model with 'just enough' traceability for your project needs is still essential to avoid traceability spiraling out of control, but with new approaches such as Linked Lifecycle Data from the OSLC (Open Services for Lifecycle Collaboration) community, and tools that recognize implicit traceability and provide new ways to visualize lifecycle traceability and perform effective impact analysis, we can make traceability work for us: helping engineering become more agile while staying within cost and schedule, and producing innovative, higher quality products and systems.
I'm writing from Innovate 2012 in Orlando, Florida, where thousands are attending sessions and sharing thoughts about software development and systems engineering. One topic that keeps coming up is traceability. On Sunday at VoiCE (Voice of the Customer Event), we had some great discussions about the traceability scenarios faced by clients in the industrial sector building complex and embedded systems such as planes, cars and medical devices. There was a lively discussion around how much traceability is enough. One client, working in aerospace, needs to comply with DO-178B and requires traceability all the way from a high-level customer requirement through to individual lines of code. Others asked, 'Do you really need that fine-grained traceability?' and 'Won't that be very difficult to manage?' Another described having 26 teams and 16 applications to manage, and in the past many (I think I heard 50!) locations where requirements were stored, usually in spreadsheets, making traceability very difficult. Now, with the 'right schema' in place and using IBM Rational Requirements Composer, they have a solution that makes traceability much easier and an environment that is manageable for the long term as it scales. Having the right schema (the information model of artifacts and the relationships between them) was stressed as a vital ingredient in any recipe for successful traceability.
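To illustrate why the schema matters, here is a hedged Python sketch, purely for illustration: the artifact types and link names are hypothetical and are not the Requirements Composer schema. It shows an information model that only permits meaningful link types, plus the kind of impact-analysis query that such links make possible.

```python
from collections import defaultdict

# Which artifact types may be linked, and what the link means.
ALLOWED_LINKS = {
    ("Requirement", "Requirement"): "decomposes into",
    ("Requirement", "Design"):      "satisfied by",
    ("Requirement", "TestCase"):    "validated by",
}

links = defaultdict(set)   # artifact id -> downstream artifact ids

def link(src, src_type, dst, dst_type):
    """Create a trace link only if the schema allows this pairing."""
    if (src_type, dst_type) not in ALLOWED_LINKS:
        raise ValueError(f"schema forbids {src_type} -> {dst_type} links")
    links[src].add(dst)

def impacted_by(artifact):
    """Everything reachable downstream of a changed artifact."""
    seen, stack = set(), [artifact]
    while stack:
        for nxt in links[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

link("REQ-1",   "Requirement", "REQ-1.1", "Requirement")
link("REQ-1.1", "Requirement", "TC-42",   "TestCase")
print(impacted_by("REQ-1"))   # e.g. {'REQ-1.1', 'TC-42'}
```

Constraining which links are allowed is what keeps a large link base queryable; the impact query is only trustworthy because every link has a defined meaning.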
In a breakout session yesterday, data was shared that on a deep space exploration mission project there are over 80,000 items in the requirements database (IBM Rational DOORS) and over 40,000 links: mind-blowing complexity of data and relationships, and that's just one of the many projects they have running today.
The right culture, process and tools for your application/system/product/service, organization and industry are necessary to keep traceability, not only across requirements but into designs, work items, tests and so on, from spiraling into an uncontrollable, unusable spaghetti of artifacts and links.
So for you and your projects: how much traceability is enough, how are you managing it, and what would you like to see in the future to make the creation, maintenance and, most importantly, utilization of traceability easier and more effective?
We have 16 sessions lined up in this track focused on software delivery, revolving around eliciting, defining, elaborating, understanding, organizing, reviewing, communicating and tracking business, user and software requirements. We have multiple customer sessions; I am sure you will find the sessions below particularly interesting:
- Case Study: Moving from Organized Chaos to Standard Process and Tooling – Disney’s Experience in deploying IBM Rational Tools
- DHL Aligning Business and IT with IBM Rational Requirements Composer
- Iterative Requirements Analysis: Implementing Lean and Agile Principles for Software Requirements Analysis
Like previous editions of Innovate, this year we again have keynotes where IBM shares its road map, strategies and vision for requirements definition and management tools. There will also be many opportunities to meet senior product management professionals and developers of IBM Rational's requirements management tools in sessions like 'Ask the Experts: IBM Rational Requirements Composer and IBM Rational RequisitePro', 'What's New with IBM Rational Requirements Composer?' and other presentations.
Some of the other notable presentations focus on trending topics and best practices in the requirements management domain, such as conceptual frameworks for visual definition in the requirements life cycle, the Requirements Engineering Maturity Model (REMM) and many more. There are multiple workshops on defining and managing requirements with IBM Rational tools, IBM Collaborative Lifecycle Management, best practices in using Requirements Composer, and Jazz primers. Don't forget to try out the Open Labs and solution pedestals at Innovate!
And finally, we have a competition (Who Is the Best IBM Rational Composer User?) in which Rational Requirements Composer experts put their skills to the test in a variety of tool challenges to prove who is the best requirements tool champion. In this special event, participants have a chance to compete with IBM experts and tool developers and prove their expertise by solving common and difficult requirements management problems!
Have you registered for Innovate 2012? Hurry up! Just 9 days left: http://www.ibm.com/innovate/
Don't miss this opportunity to meet 4,000+ professionals, attend your preferred sessions from 27 tracks and 400+ presentations, and try out next-generation products. See you at Innovate!
At Innovate 2012 in Orlando, June 3-7, there will be two requirements management (RM) tracks: one focused on RM for IT application development, and product-wise primarily on RequisitePro and Rational Requirements Composer; and another focused on RM for systems engineering (SE), and product-wise on DOORS. This blog post is focused on the RM for SE track, but look out for another on the IT-focused track.
I think you'll find that the RM for SE track has some really strong content this year. Out of 16 sessions, 10 will feature customer speakers, including:
- How IBM Rational DOORS Helps Jet Propulsion Laboratory Get to Mars and Beyond: Best Practices in Metrics, Verification, and Traceability
- Using IBM Rational DOORS to Support Systems Engineering and Release Management across Multiple Programs at Trane, a heating, ventilation & air conditioning systems manufacturer
- A CareFusion Case Study of Integrating IBM Rational DOORS and HP Quality Center for Use in an FDA Environment
- Integrating IBM Rational DOORS with IBM Rational Team Concert: Lessons Learned at
You'll also be able to meet product management and senior development staff and ask them questions in our 'Ask The Experts (for DOORS version 7.x, 8.x and 9.x users)' session, and you'll hear about IBM's strategy and roadmap in 'What's Now and Next in Requirements Management for Systems Engineering', including the latest release and future plans for DOORS.
If you've been following our RM developments recently, you'll be aware of the DOORS Next Generation project on Jazz.net. You'll hear about that during the Now and Next session, and if you want to dive deeper into what's planned, be sure to go to the session 'Deep Dive Investigation and Feedback about IBM Rational DOORS Next-Generation Beta' and visit the DOORS Next Generation demo pedestal in the Innovation Lab area of the Solution Center.
And if you've ever attended before, you'll know that a popular feature is the DOORS DXL Script Exchange competition, where you can demonstrate your prowess in DXL and win a small prize. For more details about this year's competition (with a twist!), please see the competition announcement; scripts are due on or before May 25th.
On top of all this fantastic content (and this is just for one track; there are over 400 sessions across the whole conference!), Innovate is a great opportunity to network with other systems engineers and software developers, share war stories, tips and tricks, and maybe a drink or two.
To find out more about Innovate 2012 and to register, go to ibm.com/innovate. We hope to see you there!
We have been discussing requirements management practices and solutions from IBM Rational for requirements definition and management at the Rational RDM Blog. We have decided to move to our own new home here on developerWorks, and we will continue our discussions here going forward.
Make sure you change your bookmarks and follow this blog to keep abreast of requirements management principles, advances in the domain, and solutions from IBM Rational. We hope to continue having fruitful discussions here and to create a mutual learning platform for all of us! We will continue to discuss the contemporary world of requirements management from both software and systems perspectives.
Continue reading the blog for more details and articles. If you have any suggestions for topics, please share them in the comments here.