Jeremy Dick works as Principal Analyst for Integrate Systems Engineering Ltd in a consultancy, research and thought-leadership capacity. He has extensive experience in implementing practical requirements processes in significant organizations, including tool customization, training and mentoring. At Integrate, he has been developing the concept of Evidence-based Development, an extension of his previous work on “rich traceability”. Prior to this appointment, he worked for 9 years at Telelogic (now part of IBM Rational) in the UK Professional Services group, as both an international ambassador for Telelogic in the field of requirements management and a high-level consultant for Telelogic customers wishing to implement requirements management processes. During this time, he developed considerable expertise in customizing DOORS using DXL to support advanced engineering processes. His roles at Telelogic included a position in the DOORS product division to assist in the transfer of field knowledge to the product team. Co-author of a book entitled “Requirements Engineering” that has recently reached its 3rd edition, he is recognized internationally for his work on traceability. Jeremy can be reached at jeremy.dick[at]integrate.biz
It is a bit of a shock to find myself well into the fourth year on the same project! The nature of my work as a consultant means that it is rare for me to stick with a project beyond the initial phases of defining a requirements management process, establishing effective tool support and training the process enactors. But this time we have been able to stick with the requirements team supporting a large project long enough to see theory put into practice, and to see what it really means to apply the tools and techniques. We have gone well beyond just training, and find ourselves mentoring nearly 300 engineers in the application of DOORS for requirements capture, development and management. This has helped us keep our feet firmly on the ground – rubber on the road – and to walk with those who actually have to do the work.
So what have we learnt?
It is one thing to teach people how to write requirements statements that are clear, unambiguous, testable and traceable; it is quite another thing to help people understand how to take a requirement and develop it. None of the engineers we met on the project had previous experience of how to take a system requirement, for instance, and systematically decompose it through the design into sub-system and component requirements. We had to adapt our training and mentoring to address this skill.
Requirements decomposition establishes one of the essential requirements traceability relationships: how each layer of requirements contributes to the satisfaction of the layer above. This is often known as the satisfaction relationship, or as refinement in SysML. It is this relationship that connects the design to the development of requirements, and that lies at the heart of the ability to perform impact analysis.
Whatever layer you are engaged in – customer, system, sub-system or component – the same basic requirements development process can be applied. These are the process steps we teach for requirements development:
1. Collect and agree your requirements.
Here you actively seek out the requirements you are expected to fulfil. In an ideal world, development will be perfectly top-down, and you can wait for the layers above to allocate perfectly expressed requirements to you. In a practical world, you will have to be more proactive, and will have to cooperate with your requirement “customers” to obtain acceptably worded requirements.
2. Design against your requirements.
This stage is the creative bit that you perhaps most enjoy doing, and where the real engineering takes place. You will imagine how to design the system to meet the requirements, and what you need your requirement “suppliers” to do to contribute to your design. In other words, if you are at the system level, you design the system into sub-systems, and work out what each sub-system must do to meet the system requirements.
3. Decompose the requirements to reflect the design.
Now you enter the decomposed requirements and trace them back to the requirements they satisfy, thus making the requirements content and traceability reflect your design. The wording of the decomposed requirements is important: if the original requirement read “The <system> shall ...”, then it is likely that the decomposed requirements will read “The <sub-system> shall ...”. At this stage you can also capture the rationale for the decomposition, including references to the design documentation or models, thus also tracing to design information. (A minimal sketch of this bookkeeping follows the process steps below.)
4. Allocate the decomposed requirements.
Finally, you can pass on the decomposed requirements to those areas responsible for fulfilling them. These other areas engage in the same process, and together you systematically achieve alignment of requirements through all the layers.
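To make the bookkeeping of step 3 concrete, here is a minimal sketch in Python. The requirement IDs, field names and the functional-model reference are invented for illustration; in DOORS itself this information would live in modules, attributes and link modules.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    id: str
    text: str
    layer: str                       # e.g. "system" or "sub-system"
    satisfies: List[str] = field(default_factory=list)  # upward satisfaction links
    rationale: str = ""              # why this decomposition satisfies its parent

# The system-level requirement being decomposed.
sys_req = Requirement("SYS-42", "The <system> shall ...", "system")

# A decomposed requirement: note the re-worded subject, the upward trace,
# and the rationale referencing a (hypothetical) design model.
sub_req = Requirement(
    "SUB-17",
    "The <sub-system> shall ...",
    "sub-system",
    satisfies=[sys_req.id],
    rationale="Allocated to <sub-system>; see functional model FM-3",
)
```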
The example below illustrates the end result of applying this process to a user requirement decomposed into system requirements. The rounded box contains the design rationale, and refers to a functional model of the product. (If you’ll forgive the shameless plug, such diagrams can be produced using a DOORS extension related to TraceLine. Ask me more if you are interested.)
The example just shown is a classic decomposition pattern. It is the decomposition of an overall performance requirement into a combination of capacity and performance attributes of the product. We call this “decomposed”.
Other decomposition patterns are possible. Sometimes no decomposition is necessary, because the system requirement can be satisfied entirely by a single component or sub-system, as in the left-hand example below; or because it is a constraint that applies universally to all parts, as in the right-hand example. All that changes, perhaps, is the wording of the requirement to indicate the new target. We call this “direct flow”.
You will reach a point in the cascade of decomposed requirements when a requirement is satisfied entirely within the current area without the need to decompose further, as illustrated in the following. In this case, it is important to state the rationale for not flowing the requirement onwards, as otherwise it may be construed as a traceability gap. We call this “not developed further”.
I have seen a number of organisations capture this flow-down type – Decomposed / Direct flow / Not developed further – as an attribute of the requirement. We do this because it allows us to cross-check certain things: if you have marked a requirement as “Decomposed” but have not decomposed it, then that does indicate a design gap. However, if you mark the requirement as “Not developed further”, that gives you permission not to trace it further, i.e. it is not a design gap. (But you would do well to provide rationale!)
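Here is a minimal sketch of that cross-check, in Python with invented requirement IDs and an in-memory stand-in for the requirements database; in practice this would be a DXL script or a report over the flow-down attribute and the outgoing satisfaction links in DOORS.

```python
# Cross-check flow-down attributes against actual decomposition links.
requirements = {
    "SYS-1": {"flow_down": "Decomposed",            "out_links": ["SUB-1", "SUB-2"]},
    "SYS-2": {"flow_down": "Decomposed",            "out_links": []},   # design gap!
    "SYS-3": {"flow_down": "Direct flow",           "out_links": ["SUB-3"]},
    "SYS-4": {"flow_down": "Not developed further", "out_links": [], "rationale": ""},
}

for rid, req in requirements.items():
    # Marked as flowing down, but no downward links: a design gap.
    if req["flow_down"] in ("Decomposed", "Direct flow") and not req["out_links"]:
        print(f"{rid}: marked '{req['flow_down']}' but has no downward links - design gap")
    # Terminated without rationale: may be construed as a traceability gap.
    if req["flow_down"] == "Not developed further" and not req.get("rationale"):
        print(f"{rid}: not developed further, but no rationale recorded")
```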
As requirements flow down through the layers, the complexity of the design becomes evident in the shape of the requirements graph. In general, satisfaction is a many-to-many relationship between requirements, and the figure below shows how this may manifest itself. As the requirements are decomposed, they are refactored through the design.
Some patterns are questionable. Take these for instance:
Why decompose something into three requirements only to reduce them to a single one again? Or why collapse three requirements into one only to re-expand them in the next layer?
These patterns are not necessarily wrong, but they should be targeted for careful review.
So this is what we now teach those engaged in requirements decomposition, flow-down and traceability. It is wrong to assume that people will somehow automatically know how to do this kind of thing. By taking this approach, the flow-down of requirements reflects the design, and a clear satisfaction relationship is expressed in the traceability.
Read the second part here - The practical application of traceability Part 2: What’s really going on when you plan V&V against a requirement?
Just like the famous (mis)quote attributed to Mark Twain, rumors of the demise of the requirements management tool IBM Rational DOORS are not merely exaggerated: they are untrue. I’d like to set straight here some of the myths and misunderstandings that I’ve heard and seen perpetuated:
Myth: IBM is discontinuing support and development for the DOORS 9.x series
Truth: IBM continues to develop the DOORS 9.x series with the same level of development resources. We have new functionality to announce in 2013 and plan further releases in the years ahead. In addition, IBM has recently invested additional development resources in creating a new requirements management tool called IBM Rational DOORS Next Generation (DOORS NG).
Myth: IBM is replacing the DOORS 9.x series with DOORS Next Generation (DOORS NG) and expects customers to migrate now
Truth: DOORS 9.x will be developed in parallel with DOORS NG for many years to come. DOORS NG was released for the first time in November 2012. The tool has many functions DOORS 9.5 does not have, e.g. a fully functional web client, type and data re-use (requirements & attributes), team collaboration and task management, but it does not have all of the capabilities relied on by DOORS 9.x users. We encourage users to evaluate the capabilities of DOORS NG and to start projects in, or move projects to, DOORS NG when it is practical and beneficial to the project.
Myth: To move to DOORS Next Generation, I’ll need to buy a whole new tool
Truth: Customers with active support & subscription on their DOORS 9.x licenses are entitled to use DOORS Next Generation - it is included as part of the DOORS 9.x package. Customers can choose to use either DOORS 9.x, DOORS Next Generation or a combination of both. There is no need for additional purchases or even for an exchange of licenses. All licenses are available to customers in the IBM license key center.
Myth: IBM offer no migration path from DOORS 9.x to DOORS NG
Truth: IBM do offer functions to transition into using DOORS NG for existing DOORS 9.x customers:
Work with DOORS 9.x and DOORS NG alongside each other - both tools support linking of information between both databases.
Work with suppliers by exchanging requirements through the standard exchange format of ReqIF. This works best between DOORS 9.x and DOORS NG but is also designed to work with other RM tools.
Where DOORS NG projects are to be initialized with data from DOORS 9.x, we offer refactoring functions to smooth the transition of data from DOORS 9.x to DOORS NG.
Myth: Everyone knows IBM’s strategy on requirements management and DOORS
Truth: IBM takes great care to regularly communicate our product strategy and roadmap at trade shows, IBM conferences and on webcasts. If in doubt, come and ask IBM! Please submit questions using the comment feature here or contact your IBM representative.
For information about the products, visit:
IBM Rational DOORS
IBM Rational DOORS Next Generation
Today we have with us Mia McCroskey of Emerging Health, Montefiore Information Technology, who was recognized as an IBM Champion this year. She shares with us her thoughts about the requirements management domain.
Welcome to the IBM family! It's a great pleasure to have you with us as an IBM Champion. Congratulations! How do you feel?
I am honored to be recognized in this way not just this year but for the past several years that I have been asked to present my team's stories at Innovate. It may seem like our little team's requirements management needs are nothing like those of customers with huge DOORS ecosystems. But really we are an R&D site for the evolution of requirements management techniques and strategies. If a member of my team has an interesting idea for how to capture, structure, track, or trace requirements, we can try it without getting high level approvals or disrupting the work of hundreds of people. I take tremendous satisfaction from sharing our successes in a way that may help others improve their best practices.
Can you tell us something about what you do at Emerging Health, Montefiore Information Technology?
Emerging Health is primarily an IT delivery organization supporting healthcare delivery in the Bronx. Montefiore Medical Center is our parent company. My team, Product Development, is a software development shop tucked away in a corner doing very different work from most everyone else. Our application, Clinical Looking Glass, is a browser-based clinical intelligence tool that gives clinicians access to the enormous wealth of patient data gathered by all the other systems. Our end users can get an answer to a question like "are my clinic's diabetic patients getting the level of follow-up care required by our funding sources?" in a few minutes. In most healthcare environments, getting the data that you need to answer this question takes weeks.
My title is Manager, Product Development Lifecycle because I have my fingers on just about every stage of that lifecycle. Specifically, I lead our Requirements Management, Quality Assurance, Education, Support, and Implementation teams. I also manage outsourced development work, manage client relationships, and do hands on end-user training and support. We are an extremely team-oriented organization, with two formal development scrum teams and two teams that are at various stages of adopting an agile process. I'm deeply involved in that right now: it's challenging to apply Scrum to a training and support team, and to a team of data analysts and engineers.
Having said all that, my roots are in requirements. On a team of "happy path" stakeholders who get very excited by ideas for new functionality, I love to dig in and find the challenges that nobody wants to think about -- before we start coding. Sometimes I feel like a real buzz kill!
What are your thoughts on managing requirements effectively?
Agile models have forced us to completely reorganize our requirements elicitation, analysis, and management processes. But one thing that has not changed is my belief in a comprehensive, functionally organized requirements model.
But when I say "comprehensive," I don't mean laboriously detailed. The art of requirements analysis and management is in knowing -- or guessing right -- what details are going to be important later. By later I don't mean to the coders and testers in this iteration, I mean when we want to revise or augment the feature in six months. The requirements analyst has to be deeply plugged in to the business goals and vision in order to predict the future and capture the least, but most important, information about what the team is doing.
A few years ago I spent many months constructing an "as built" specification for a market data system at the New York Stock Exchange. I had the original ten-year-old spec and a few dozen incremental release documents. We were getting ready to refactor the system and nobody knew every business rule and function. I vowed that no system I worked on would ever lack a spec that described everything it currently did. Incremental requirements specs that aren't integrated into the overall system are defects waiting to be discovered once the coders get ahold of them.
Having argued that, I must add that a requirements model in document form is pretty nearly impossible to maintain in the way I describe. You've got to employ a database tool that supports the granularity of each requirement and allows you to describe each one through attributes. Then you can use filters and queries and views to present an infinite number of customized specifications -- all of the requirements implemented in a specific release, or all of the requirements related to a specific functional area, or the completed requirements related to a specific business objective or corporate mandate.
What are your thoughts on the role of requirements management in agile projects?
I recently spoke with a software development professional who was very proud of his organization's highly structured and detailed requirements templates that captured every detail before any work began "to be sure we deliver what's wanted." I felt like I was talking to a Tyrannosaurus rex. We all know that the day after you baseline that 400-page spec it's already out of date.
Agile with its short increments and "only write down what you really need to" mentality can seem seductively freeing. When our organization adopted Scrum, I stuck to my requirements model guns, and sure enough a few months later we couldn't remember decisions that we'd made a few sprints back, nor even exactly which sprint we'd done the work in. Since we weren't supposed to be doing "heavy" requirements, we'd been coached to: use the system and see what happens; or sift through dozens of completed user stories and hope the detail we wanted was actually mentioned in the acceptance criteria (which it was not because it was an in-sprint decision); or try to find relevant test cases and check the expected results. Instead, I launched DOORS, went to the functional area related to the question, and checked our documented business rule. Done.
What are some of the challenges you see in Healthcare Informatics projects?
Deriving meaningful information from the electronic medical record is essential to justifying the cost of those systems. We're piloting the use of predictive analytics -- combining statistical methods with the mass of patient data collected every day at our parent medical center -- to predict outcomes at the population level. To do it you need a very wide range of data: blood pressure, height, and weight, smoking patterns, history of heart disease, current blood sugar level, and on and on. Just bringing all this data together is the first challenge. Next is the analytic tool -- that's CLG. Finally you need big iron to process it. Most local and regional healthcare providers don't have the funding for, say, Watson. My team spent last summer optimizing our hardware and software environment, and CLG itself, to handle analysis of larger data sets faster, but within a medical-center friendly infrastructure budget.
Another area of critical concern is patient information. The need to pool patient data for direct care as well as population research is supported by legislation and funding sources. But we are bound, both legally and ethically, to protect patient identity in every circumstance.
Clinical Looking Glass has the capacity to show patient contact information to users who have been granted permission to see it -- usually clinicians who are actively providing care and need to contact the patients. While this is a critical feature of CLG, we expect to have to develop more granular levels of access as new types of clients adopt the product. For example, our Regional Health Information Organization (RHIO) client has data from twenty-two healthcare institutions. Some patients have declined to have their identity shared across the organization. We have to build the capability to mask these patients' identity even to our users who have permission to see it.
We have with us today Bruce Powel Douglass. He doesn't need an intro for most of us -- Embedded Software Methodologist. Triathlete. Systems engineer. Contributor to UML and SysML specifications. Writer. Black Belt. Neuroscientist. Classical guitarist. High school dropout. Bruce Powel Douglass, who has a doctorate in neurocybernetics from the USD Medical School, has over 35 years of experience developing safety-critical real-time applications in a variety of hard real-time environments. He is the author of over 5700 book pages from a number of technical books including Real-Time UML, Real-Time UML Workshop for Embedded Systems, Real-Time Design Patterns, Doing Hard Time, Real-Time Agility, and Design Patterns for Embedded Systems in C. He is the Chief Evangelist at IBM Rational, where he is a thought leader in the systems space and consults with and mentors IBM customers all over the world. He can be followed on Twitter @BruceDouglass. Papers and presentations are available at his Real-Time UML Yahoo technical group and from his IBM thought leader page.
The problems with poor requirements are legion and I don’t want to get into that in this limited space (see Managing Your Requirements 101 – A Refresher Part 1: What is requirements management and why is it important?). What I want to talk about here is verification and validation of the requirements qua requirements, rather than at the end of the project when you’re supposed to be done.
The usual thing is that requirements are reviewed by a bunch of people locked in a room until they are ready to either 1) gnaw off their own arm or 2) approve the requirements. Then the requirements are passed off to a development team – which may consist of many engineering disciplines and lots of engineers – for design and implementation. In parallel, a testing group writes the verification and validation (V&V) plan (including the test cases, test procedures and test fixtures) to ensure that the system conforms to the requirements and that the system meets the need. After implementation, significant problems tracing back to poor requirements require that portions of the design and implementation be thrown away and redone, resulting in projects that are late and over budget. Did I get that about right?
The key problem with this workflow is that the design and implementation are started and perhaps even finished without any real assurance about the quality of the requirements. The actions that determine that the requirements are right are deferred until implementation is complete. That means that if the requirements are not right, the implementation (and corresponding design) must be thrown away and redone. Internal to the development effort, unit/developer and integration testing verify the system is being built properly and meets the requirements. Then at the end, the system verification testing provides a final check to make sure that the requirements are correctly addressed by the implementation.
During this development effort, problems with requirements do emerge – such as requirements that are incomplete, inconsistent, or incorrect. When such problems are identified, this kicks off a change request effort and an update to the requirements specification (at least in any reasonable process), resulting in the modification of the system design and implementation. But wouldn’t it be better not to have these defects in the first place? And even more important, wouldn’t it be useful to know that implementing the requirements will truly result in a system that actually meets the customer’s needs?
There are two concerns I want to address here: ensuring that the requirements are “good” (complete, consistent, accurate, and correct) and that they reflect the customer’s needs. And I want to do this before design and implementation are underway.
It isn’t obvious
Imagine you’re building a house for your family. You contract an architect who comes back to you after 3 months with a 657 page specification with statements like:
- … indented by 7 meters from the west border of the premises, there shall be the left corner of the house
- … The entrance door shall be indented by another 3.57 meters
- … 2.30 meters wide and 2.20 meters high, there shall be a left-hand hinge, opening to the inside
- …As you come in, there shall be two light switches and a socket on your right, at a height of 1.30 meters
My question to you is simple: is this the house you want to live in? How would you know? There might be 6500 requirements describing the house, but it would be almost impossible for any human to understand whether this is the house you want. For example:
- Is the house energy efficient?
- Does the floor plan work for the way your family lives, or must you go through the bathroom to get to the kitchen?
- Is it structurally sound?
- Does it let in light from the southern exposure?
- Is there good visibility to the pond behind the house?
- Does it look nice?
What (real) architects do is they build models of the system that support the reasoning necessary to answer these questions. They don’t rely simply on hundreds or thousands of detailed textual statements about the house. Most systems that I’m involved with developing are considerably more complex than a house and have requirements that are both more technical and abstract.
Nevertheless, I still have the same basic need to be able to understand how the requirements fit together and reason about the emergent properties of the system. The problem of demonstrating that the implementation meets the stated requirements (“building the system right”) is called verification. The problem of showing that the solution meets the needs of the customer is called validation. Verification, in the presence of requirements defects, is an expensive proposition, largely due to the rework it entails. Implementation defects are generally easy and inexpensive to repair but the scope of the rework for requirements defects is usually far greater. Validation is potentially an even more expensive concern because not meeting the customer need is usually not discovered until the system is in their hands. Requirements defects are usually hundreds of times more expensive than implementation defects because the problems are introduced early, identified late in the project, and require you to throw away existing work, redesign and reimplement the solution, then integrate it into the system without breaking anything else.
A proposal: Verifiable and Validatable Requirements
The agile adage of “never be more than minutes away from demonstrating that the work you’re doing is right” applies to all work products, not just software source code. It’s easy to understand how you’d do that with source code (run and test it). But how do you do that with requirements? The core concept:
Premise: You can only verify things that run.
Conclusion: Build only things that run.
Solution: Build executable requirements models to support early requirements verification.
If we can build models of the requirements, we can verify and validate them before handing them off to the design team. The way I recommend you do that is to
- Organize requirements into use cases (user stories work too, if you swing that way)
- Use sequence diagrams to represent the required sequences of functionality for the set of requirements allocated to the use case (scenarios)
- Construct a normative (and executable) state machine that is behaviorally equivalent to that set of scenarios
- Add trace links from the requirements statements to elements of the use case model
- messages and behaviors in scenarios, and
- events (for messages from actors), actions (for messages to actors and internal behaviors), and states (conditions of the system) in the state machine
- Verify requirements are consistent, complete, accurate, and correct
- Validate requirements model with the customer
When problems are identified with the requirements during this functional use case analysis, they can be easily and inexpensively fixed before there is any design or implementation to throw away and redo.
Constructing the Executable Use Case
Some people are confused by the fundamental notion of using a state machine to represent requirements, thinking that state machines are inherently a design tool. State machines are just a behavioral specification, and requirements are really just statements of behavior in which we are trying to characterize the required input-output control and data transformations of the system. It’s a natural fit. Consider the following set of user stories for a cardiac pacemaker:
- The pacemaker may be Off (not pacing or sensing) or executing the pacing mode of operation.
- The cardiac pacemaker shall pace the Atrium in Inhibit mode; that is, when an intrinsic heart beat is detected at or before the pacing rate, the pacemaker shall not send current into the heart muscle.
- If the heart does not beat by itself fast enough, as determined by the pacing rate, the pacemaker shall send an electrical current through the heart via the leads at the voltage potential specified by the Pulse Amplitude parameter (nominally 20mV, range [10..100]mV) for the period of time specified by the Pulse Length (nominally 10ms, range [1..20]ms).
- The sensor shall be turned off before the pacing current is released.
- The sensor shall not be re-enabled following a pace for the period of time it takes the charge to dissipate to avoid damaging the sensor (nominally 150ms, setting range is [50..250]ms). This is known as the refractory time.
- When the pacing engine begins, it will disable the sensor and current output; the sensor shall not be enabled for the length of the refractory time.
A scenario of use is shown in the figure below.
A state machine that describes the required behavior is shown below.
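To give a flavour of what “executable” means here, below is a minimal Python sketch of a pacing state machine consistent with the user stories above. The state names, event names and the simplification to discrete events are illustrative assumptions for this sketch; in practice the normative state machine would be built and executed in a modeling tool such as Rhapsody.

```python
# A minimal, illustrative executable model of the pacing engine.
class PacingEngine:
    def __init__(self):
        self.state = "Off"
        self.sensor_enabled = False

    def handle(self, event):
        if self.state == "Off" and event == "turn_on":
            # On start-up: sensor and current output disabled for the refractory time.
            self.sensor_enabled = False
            self.state = "Refractory"
        elif self.state == "Refractory" and event == "refractory_elapsed":
            self.sensor_enabled = True
            self.state = "Sensing"
        elif self.state == "Sensing" and event == "heartbeat_sensed":
            pass  # intrinsic beat in time: inhibit pacing, keep sensing
        elif self.state == "Sensing" and event == "pacing_rate_timeout":
            # No intrinsic beat in time: sensor off BEFORE current is released.
            self.sensor_enabled = False
            self.state = "Pacing"
        elif self.state == "Pacing" and event == "pulse_done":
            self.state = "Refractory"  # sensor stays off while the charge dissipates
        elif event == "turn_off":
            self.sensor_enabled = False
            self.state = "Off"
        return self.state, self.sensor_enabled

# One scenario: power on, miss a beat, pace, recover, then sense a beat.
engine = PacingEngine()
for ev in ["turn_on", "refractory_elapsed", "pacing_rate_timeout",
           "pulse_done", "refractory_elapsed", "heartbeat_sensed"]:
    print(ev, "->", engine.handle(ev))
```

Driving the machine with a different event ordering (say, a sensed beat arriving during the refractory period) immediately raises the kind of “what should happen here?” question that textual requirements tend to leave unasked.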
Of course, this is a simple model, but it actually runs, which means that we can verify that it is correct, and we can use it to support validation with the customer as well. We can examine different sequences of incoming events with different data values and look at the outcomes to confirm that they are what we expect.
Verifying the Requirements
For the verification of consistency of requirements, we must first decide what “inconsistent” means. I believe that inconsistent requirements manifest as incompatible outcomes in the same circumstance, such as when a traffic light would be Red because of one requirement but at the same time must also be Green to meet another. Since the execution of the requirements model has demonstrable outcomes, we can run the scenarios that represent the requirements and show through demonstration that all expected outcomes occur and that no undesired consequences arise. For the verification of completeness, we can first demonstrate – via trace links – that every requirement allocated to the use case is represented in at least one scenario as well as in the normative state machine. Secondly, the precision of thought necessary to construct the model naturally raises questions during its creation. Have we considered what happens if the system is in THIS state and then THAT occurs? What happens if THAT data is out of range? How quickly must THIS action occur? Have we created all of the scenarios and considered all of the operational variants? These questions will naturally occur to you as you construct the model and will result in the addition of new requirements or the correction of existing ones.
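As a trivial illustration of the completeness check via trace links, here is a Python sketch with invented requirement and scenario identifiers:

```python
# Completeness via trace links: every requirement allocated to the use case
# should appear in at least one scenario AND in the normative state machine.
allocated = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}   # hypothetical IDs

traced_to_scenarios = {
    "Scenario-PaceOnTimeout": {"REQ-1", "REQ-2"},
    "Scenario-InhibitOnBeat": {"REQ-1", "REQ-3"},
}
traced_to_state_machine = {"REQ-1", "REQ-2", "REQ-3"}

in_scenarios = set().union(*traced_to_scenarios.values())
print("Not in any scenario:     ", allocated - in_scenarios)          # {'REQ-4'}
print("Not in the state machine:", allocated - traced_to_state_machine)
```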
For correctness, I mean that the requirement specifies the proper outcome for a given situation. This is usually a combination of preconditions and a series of input-output event sequences resulting in a specified post-condition. With an executable use case model, we can show via test that for each situation, we have properly specified the output. We can run the same scenario with different data values to ensure that boundary values and potential singularities are properly addressed. We can change the execution order of incoming events to ensure that the specification properly handles all combinations of incoming events. Accuracy is a specific kind of correctness that has to do with quantitative outcomes rather than qualitative ones. For example, if the outcome is a control signal that maintains an output proportional to an input (within an error range), we can run test cases to ensure that the specification actually achieves that. We can both execute the transformational logic in the requirements and formally (mathematically) analyze it as well, if desired.
Be aware that this use case model is not the implementation. Even if the system use case model is functionally correct and executes properly, it is not operating on the desired delivery platform (hardware) and has not been optimized for cost, performance, reliability, safety, security, and other kinds of quality of service constraints. In fact, it has not been designed at all. All we’ve done is clearly and unambiguously state what a correctly designed system must do. This kind of model is known as a specification model and does not model the design.
Validating the Requirements
Validation refers to confirming that the system meets the needs of the customer. Systems that meet the requirements may fail to provide value to the customer because
- the customer specified the wrong thing,
- the requirements missed some aspect of correctness, completeness, accuracy or consistency,
- the requirements were captured incorrectly or ambiguously,
- the requirements, though correctly stated, were misunderstood.
The nice thing about the executable requirements model is that you can demonstrate what you’ve specified to the customer, not as a pile of dead trees to be read over a period of weeks but instead as a representation that supports exploration, experimentation, and confirmation. You may have stated what will happen if the physician flips this switch, turns that knob, and then pushes that button, but what if the physician pushes the button first? What has been specified in that case? In a traditional requirements document, you’d have to search page by page looking for some indication as to what you specified would happen. With an executable requirements specification, you can simply say “I don’t know. Let’s try it and find out.” This means that the executable specification supports early validation of the requirements, so that you can have much higher confidence that the customer will be satisfied with the resulting product.
So does it really work?
I’ve consulted on hundreds of projects, almost all of which were in the “systems” space, such as aircraft, spacecraft, medical systems, telecommunications equipment, automobiles and the like. I’ve used this approach extensively with the Rational Rhapsody toolset for modeling and (usually) DOORS for managing the textual requirements. My personal experience is that it results in far higher quality in terms of requirements and a shorter development time, with less rework and happier end customers. By way of a public example, I was involved in the development of the Eaton Hybrid Drivetrain project. We did this kind of use case functional analysis, constructing executable use cases, and it identified many key requirements problems before they were discovered by downstream engineering. The resulting requirements specification was far more complete and correct after this work was done than in previous projects, meaning that the development team spent less time overall.
Summary
Building a set of requirements is both daunting and necessary. It is necessary because without it, projects will take longer – sometimes far longer – and cost more. Requirements defects are acknowledged to be the most expensive kind of defects because they are typically discovered late (when you’re supposed to be done) and require significant work to be thrown away and redone. It is a daunting task because text – while expressive – is ambiguous and vague, and it is difficult to demonstrate its quality. However, by building executable requirements models, the quality of the requirements can be greatly improved at minimal cost and effort.
For more detail on the approach and specific techniques, you can find more information in my books Real-Time UML Workshop or Real-Time Agility.
On Thursday 14 March I presented on an IBM sponsored Dr Dobbs webcast on the topic of ‘3 Reasons to Throw Away Your Requirements Documents’. If you didn’t catch the webcast, you can view it on-demand and download the slides. If you did attend I hope you enjoyed it and in this blog post I want to answer some of the questions that I didn't get time to cover in the webcast, so scroll down to see if I've covered your question here. But first, for those of you who didn't make it, here's a quick recap of points I made during the webcast:
- What do I mean by ‘Throw Away Your Requirements Documents’? I’m not saying do away with your requirements or your requirements management process. What I am saying is: invest in improving your requirements management process and support those process improvements with the right tooling. In the webcast audience poll, 60% said they are using documents and/or spreadsheets for requirements, so I set out to make the case for why they should consider moving to a requirements management tool.
- Reason #1 – Collaboration. Using documents and spreadsheets, you can get into all sorts of problems like working from the wrong version of requirements. An integrated requirements management tool (one that has a collaborative repository that has open interfaces to share requirements data with design, test and development environments) helps you ensure that all engineering / development team members and their stakeholders are working from the right version and view of requirements for their role. I shared information from a case study on MBDA Missile Systems where collaboration was a major challenge.
- Reason #2 – Traceability. Traceability helps you get context and an audit trail for decisions made, perform coverage analysis, detect gold plating and perform impact analysis (See blog posts ‘What is Traceability?’ and ‘The uses and value of traceability’ for more explanation). But using documents and spreadsheets to create and manage traceability gives it a bad name – it becomes a tedious, error-prone overhead activity. With an integrated requirements management tool traceability links can be easily created and navigated. Traceability becomes part of the process rather than a bulk catch-up exercise, and through open interfaces it becomes easy to extend traceability from requirements to artifacts in other tools such as designs, work items and test cases. And most importantly there are automated views and reports that enable you to make active use of traceability for the purposes I mentioned above. I shared information from a case study on Invensys Rail Dimetronic where traceability is essential.
- Reason #3 – Agility. As many engineering and development organizations are looking to improve time to market and reduce costs, agile approaches are becoming increasingly popular (in the webcast audience poll 56% said they were using some sort of hybrid waterfall/iterative approach and 20% were agile), and with that comes changes to the way requirements might have been traditionally defined and most importantly to the way that changes to requirements are managed. Change is allowed and encouraged but you must have reviews and impact analysis to make informed decisions about whether to accept the change and implement in the current iteration or sprint, plan it into a future iteration or reject the change altogether. And to do that requires more than just a backlog managed in a spreadsheet or an agile planning tool. At IBM Rational we keep a prioritized backlog of user stories and epics as work items in plans managed in Rational Team Concert. User stories and epics are then decomposed and more fully described by requirements and supporting visual and textual artifacts created and managed in Rational DOORS Next Generation.
- And I started and concluded by looking at the most compelling reason for improving your requirements management process with the support of the right tooling: Return on Investment. I shared information from a case study on Emerging Health, Montefiore IT where they achieved, among other benefits, a 69% reduction in the cost of quality (test preparation, testing and rework) within 6 months of deploying an improved requirements management process supported by IBM Rational DOORS. There’s also a follow-up video to this case study that was recorded after the client had adopted agile development practices.
Ok, now onto some of the questions that were submitted but that I didn’t have time to answer during the webcast. I’ve divided them into sections based on the type of question so you can easily scan down to the topics you’re interested in:
Alternative approaches to managing requirements
Q. What's the advantage of a requirements management tool if I could link my change management tickets to fine grained requirements (use cases, storyboards) that are maintained on a wiki or other collaboration tool (e.g. Sharepoint or IBM Connections)? It would seem that the work items would give you the needed traceability and metrics?
A. While you might be able to create the level of traceability you require, how would you report on it? Would you have to build bespoke views and reports? And would the use of a wiki for ‘fine grained requirements’ provide you with a view of requirements in context with one another, as opposed to individual wiki pages? Where would you document additional properties of requirements? Could you easily reuse requirements across projects? A requirements management tool like IBM Rational DOORS Next Generation provides built-in traceability views and reports, enables you to structure requirements in context with one another in document-like views, provides user-defined properties for recording additional information and facilitates reuse of requirements.
Requirements Management and Agile Development
Q. We are mostly using the waterfall model, but are looking into checking whether Agile works for our customers. I am very interested in the role of a Requirements Analyst in the Agile world. In our world, the analyst is a facilitator of requirements gathering and not an SME on the business application we are gathering requirements for.
A. That facilitation role and the analysis skills of the Requirements or Business Analyst are still essential in Agile development. A common mistake when moving to more agile approaches is appointing a ‘Product Owner’ who is a subject matter expert in the business domain you are building an application or product for, and expecting them to write the requirements or ‘user stories’. While they are experts in the business, and you need that expertise on hand for Agile to work, they are not usually skilled in getting to the root goals or needs of the business problems you’re looking to address with the product or application. Without the skills of the Requirements or Business Analyst, it’s all too easy for the user stories to become about automating the way things are done today rather than addressing the real business issues in an optimal way. You can read more about the role of the analyst in Agile development in an interview with Mary Gorman of EBG Consulting.
Q. How does the IBM requirements toolset compare to more "agile" focused toolsets like Atlassian, Rally, etc.?
A. IBM is very committed to supporting agile development, particularly when agile is scaled to large, distributed development teams. At the heart of our agile development capabilities is Rational Team Concert which supports agile planning, task management and change management. As stated in the webcast though, we’ve found in our own continuous delivery process and heard from clients, that a place is needed to capture more details of requirements and their associated properties, in context with one another, together with supporting artifacts like storyboards, workflow scenarios and use cases. The integration of Rational Team Concert with Rational DOORS Next Generation provides those additional capabilities while preserving traceability to user stories and epics managed in the product backlog.
IBM Requirements Management solutions
Q. It seems this discussion is on Rational DOORS. Is Rational Requisite Pro still offered? If so then how do these two products compare?
A. IBM continues to support, maintain and respond to enhancement requests for Requisite Pro, but our future direction for requirements management for IT application development lies with Rational Requirements Composer, and we provide migration support and a trade-up program. Please contact your IBM representative for more details. If you are using Requisite Pro for requirements management for complex products or embedded systems development, then you might also want to look at whether Rational DOORS or Rational DOORS Next Generation is the right move forward for your organization. But don’t worry, we’re not forcing you to migrate today.
Q. You've mentioned DOORS Next Generation in your presentation, what is that? Does it replace DOORS?
A. IBM Rational DOORS Next Generation is a requirements management application on a collaborative lifecycle management platform for systems and software engineering that provides requirements collaboration, planning, reuse and lifecycle traceability. DOORS Next Generation (DOORS NG) was introduced in 2012 to take advantage of the common, collaborative ‘Jazz’ platform shared by Rational Team Concert, Rational Quality Manager, Rational Design Manager and Rational Requirements Composer; and to extend the requirements management capabilities in Requirements Composer to meet the needs of product & systems development organizations developing complex and/or embedded systems. DOORS NG will enable IBM to introduce new capabilities faster than would have been possible with the existing DOORS product. However, DOORS NG does not replace DOORS today. We have a very large install base of DOORS users working on programs that can last for decades, and we will continue to support, maintain and enhance the DOORS 9.x series to meet the needs of those users. We encourage existing DOORS users to take a look at DOORS NG, to try it out on pilot projects and to use the interoperability capabilities to exchange and/or link data with DOORS 9.x. Existing DOORS customers with active support & subscription are entitled to use DOORS NG without an additional purchase – you can use your existing DOORS license entitlements with either DOORS or DOORS NG or a combination of the two. But we are not telling existing DOORS users to migrate live projects to DOORS NG today. DOORS NG in its early releases is attractive to organizations who don’t currently use a requirements management tool and are looking for a web-based solution that also offers common platform integration with change management, agile planning, test management and design management capabilities. You can download a trial and follow development plans for DOORS NG on Jazz.net.
Q. How does Rational DOORS Next Generation compare to Rational Requirements Composer?
A. DOORS Next Generation is based on Rational Requirements Composer but extended to meet the requirements management needs of product & systems development organizations developing complex and/or embedded systems. In the first releases of DOORS NG, the web client is identical to Requirements Composer, but DOORS NG also features a rich client designed for efficient editing of large requirements specifications. The strategy for the two products is that while Requirements Composer will be focused on the needs of business analysis and IT application development teams, DOORS NG will be focused on the needs of systems engineers and product & systems development teams.
I’d like to wrap up this post by thanking all of you who attended the webcast, participated in the polls, asked some great questions and completed the survey feedback. If you missed it, you can catch a replay or download the slides. I’d welcome any additional comments or questions here.
As another year comes to an end, we wish you all a happy and prosperous new year!
2012 was an eventful year, with major releases for both Rational Requirements Composer and Rational DOORS. We announced the next generation requirements management solution for complex & embedded systems from IBM: Rational DOORS Next Generation.
We moved to the developerWorks platform for our blog. We believe our readers have found this blog useful. Please share with us your comments and suggestions for the blog content, and also any specific topics in requirements management that you want us to focus on in 2013.
Last but not least, as we mentioned in the last post, the call for speakers is now open for Innovate 2013 – The IBM Technical Summit. Submit your abstracts before January 14, 2013 to stand a chance of presenting at a conference with 4000+ attendees, and of a free conference pass! Happy New Year!
Innovate 2013 – The IBM Technical Summit is here. The 2013 event promises to be even more exciting with top-notch keynotes, over 450 breakout sessions, labs, certifications and our biggest exhibit hall ever. As in previous events, Requirements Management is one of the key areas of interest at Innovate, attracting speakers and attendees from across the globe representing a wide range of industries. In 2012, we had two tracks for Requirements Management with sixteen sessions each: one track focusing on IT and the other focusing on Systems Engineering. We had 14 real-life case studies, 2 panel discussions and 4 instructor-led sessions.
Managing requirements has always been a cornerstone of both software and systems development. The importance of the discipline continues to grow and is expected to take a leading role in the coming years. This is an opportunity to showcase your thoughts on the discipline, and on how requirements management tools like DOORS or Requirements Composer can aid in effectively managing requirements for project success. Here are some of the topics from last year, along with an expected list of topics:
- Requirements Management in Agile Projects
- Requirements Management for Mobile Development
- Managing requirements in developing Safety Critical Systems
- Developing and managing requirements specifications for contract agreement
- Requirements Driven Development: Understanding requirements and work items
- Requirements engineering and supporting layered requirements and models
- Delivering a specification perfect requirements set (document generation)
- Requirements Reuse: Methods and best practice
- Requirements management for complex systems and teams
- Using traceability to expose gaps/change to other requirements and across the lifecycle
- Requirements engineering for projects with complex systems and software
- Requirements definition and management case studies
- Requirements definition and management across the software lifecycle
- Elicitation techniques for requirements and use cases
- Agile software development and requirements modeling
- Requirements management for outsourced projects
- Defining and managing requirements across geographically distributed teams
- Metrics and analysis used in requirements management
- Integrating requirements with project and portfolio management
- Implications of regulatory compliance on the requirements management process
- Business specification-centric approaches
- Best practices in aligning business goals and IT
- Value-based requirements engineering
- Business modeling in requirements definition
- Requirements prioritization best practices and choosing your methodology
- Incorporating industry standards as reusable requirements
- Effective reporting using requirements and CLM information
- DOORS, Requirements Composer and other Rational products best practices
- Requirements engineering and product lifecycle management
Some session topics from Innovate 2012:
- Iterative Requirements Analysis: Implementing Lean and Agile principles for Software Requirements Analysis (Nationwide Case Study)
- Visual definition in the requirements lifecycle: a conceptual framework
- How IBM Rational DOORS Helps JPL Get to Mars and Beyond: Best Practices in Metrics, Verification and Traceability
- Integrating IBM Rational DOORS with IBM Rational Team Concert – Lessons Learned at Raytheon
- Integrating Requirements and Models with IBM Rational DOORS and IBM Rational Rhapsody: Lessons Learned at Lockheed Martin MS2
- Writing Verifiable Requirements Is Not Easy
Share your experience, thoughts and best practices on requirements at an event attended by industry experts and IBM core development teams. Here are the top three reasons why you should submit your paper for Innovate 2013:
- Explore new areas - Free conference pass opening up the doors to 450+ sessions, labs and demo booths
- Network with experts and peers - Over 4000 professionals expected to attend the event
- Sharpen your technical know-how - Learn from product and domain experts and from IBM core developers
Neal Middlemore has over 14 years of experience in requirements management, encompassing the associated disciplines of requirements change management and validation/verification. Neal comes from an avionics systems engineering background and has been working with the DOORS product for over 10 years.
One of the most fundamental benefits that businesses want to get from using requirements management tools is consistent traceability. Whether it is an IT system or an aircraft carrier being developed, the levels of complexity involved mean that traceability across multiple levels of requirements, from stakeholder requests to detailed implementation, is not simple to maintain manually.
Further hurdles are put in our way by the need to comply with legislative requirements: many different industries these days have requirements placed upon them by government and international standards bodies, along with internal corporate standards.
To prove compliance with these legislative rules, it is necessary for projects not only to prove that requirements are being managed but also to provide the how and where of their management. Which features relate to which stakeholder request? What aspects of the solution satisfy safety regulations? Has the realization of the requirement been tested, and by whom, on what platform?
To answer these questions it is necessary to utilize the traceability capabilities provided by modern tools. But it is not enough to let a tool decide how things relate to each other, nor to leave these relationships to individual users. Traceability needs to be considered in the larger context of how you will report on it to answer the very questions that led you to consider traceability in the first place. It’s more than ‘does object A relate to object B?’.
An initiative to define an information architecture for your governance solution will assist in defining your process artifacts and their relationships to one another. Traceability then becomes an asset rather than an overhead. A good way to begin such an initiative is to hold a workshop attended by all project stakeholders, i.e. those who have a vested interest in ensuring project success. Most likely these attendees will have a good understanding of the process and what the project needs to deliver. Undoubtedly each will also bring differing perspectives and understanding of what information is required.
The test manager may want to see traceability from individual test artifacts through to the requirements being tested, or to determine whether regulatory compliance has been achieved and whether verification methods have been agreed.
A subject matter expert (SME) may want to see the design in the context of the system level requirements and how it relates to stakeholder requirements.
Everybody wants to see something that is applicable to their job role, even wishing to see things outside of their typical discipline domain. By asking the questions and documenting the answers you can start to put together an information architecture that makes sense for your project. It’s likely that many information architectures will exist: not every project is the same, and each will have differing sets of required attributes, views and reports of project information, and an agreed structure of inter-artifact relationships.
Modern tools can often create templates for this kind of information to allow the deployment of additional artifact containers as and when required. These can even enforce traceability rules to ensure integrity across the project. Above all, it is vital that the needs of the project are documented and communicated, alongside the information architecture, to all project members.
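By way of illustration only, the agreed information architecture can itself be captured as checkable data. Here is a minimal Python sketch, with invented artifact and link types:

```python
# An information architecture as checkable data: which artifact types exist,
# and which traceability links between them the project has agreed.
# The types and link pairs below are invented for illustration.
ALLOWED_LINKS = {
    ("StakeholderRequest", "SystemRequirement"),
    ("SystemRequirement", "DesignElement"),
    ("TestCase", "SystemRequirement"),
}

def check_link(source_type: str, target_type: str) -> bool:
    """Enforce the architecture: reject links the project has not agreed."""
    return (source_type, target_type) in ALLOWED_LINKS

print(check_link("TestCase", "SystemRequirement"))   # True
print(check_link("TestCase", "DesignElement"))       # False - not agreed
```

A tool template plays the same role: it deploys the agreed artifact containers and rejects link types the project has not sanctioned.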
Last week I was really fortunate to spend a couple of days in London presenting to and talking with clients, business partners and industry analysts. It's always so good to hear what's really going on out there and to get many different perspectives on what's important today and for the future. The first day was at IBM's Innovate UK 2012 event, where I was asked to present on all the really exciting new stuff we've done in the past year to help organizations building today's and the next generation of smarter products and systems, with particular focus on providing solutions for systems engineers and embedded software developers. You can catch the absolute latest news on our recent launch webpages. That session included a whistle-stop tour of the developments in requirements management for complex systems with Rational DOORS 9.4 and our plans for DOORS Next Generation. Whistle-stop because we also had so much news to get through in architecture & design, planning, change & configuration management and quality management, as well as industry-specific solutions for A&D, automotive, medical devices and electronic design. And because on the following day at IBM's Southbank facility we had a whole day dedicated to topics related to DOORS.
At the DOORS customer day we had attendees from across several industry sectors including transportation, aerospace & defense, banking & mail services. The day kicked off with Morgan Brown presenting the latest on IBM's requirements management and DOORS strategy. Morgan told us how the DOORS 9.x series is and will continue to be developed and enhanced to meet the needs of the large install base, in parallel with the introduction of DOORS Next Generation (DOORS NG). DOORS NG is planned to take the best paradigms for managing structured requirements from DOORS 9.x and marry those with the requirements management and team collaboration capabilities that have been developed on the Jazz collaborative lifecycle management platform over the last 4 or so years (and are in use in the form of Rational Requirements Composer). The development of DOORS NG is out in the open on jazz.net
where milestone builds can be downloaded, discussions held, defects/enhancements raised and plans viewed. DOORS NG has gone through four beta releases and is expected to be released in late November. Morgan explained that in its first release, DOORS NG is not intended to replace the DOORS 9.x product line, but it is expected that existing DOORS customers will try out DOORS NG on pilot and new projects, and will use the interoperability capabilities of the ReqIF data exchange and cross-tool traceability linking to exchange and/or link data between DOORS 9.x and DOORS NG. DOORS NG will also appeal to those looking for a requirements management tool that is on an integrated platform with design, test management and task/change management capabilities. Morgan reminded the audience of an IBM statement released earlier this year that existing DOORS customers with active support & subscription would be entitled to use both DOORS 9 and next generation capabilities when they become available. This was well received by the attendees since it means that they can try out DOORS NG when it ships without the need for an additional purchase.
Of course a day of technology insights never goes by without some piece of tech throwing an unexpected spanner in the works. This time it was the projector and the next presenter's Apple Mac that refused to talk to each other, so instead of flowing straight into a demo of DOORS NG, next up was Neal Middlemore to tell us about the improved integration of requirements and quality management with DOORS 9.4 and Rational Quality Manager (RQM) 4.0. This release is a significant enhancement that brings the integration in line with IBM's strategy to support OSLC - Open Services for Lifecycle Collaboration. OSLC is a new approach to tool integration that is open and vendor neutral. What's really different about OSLC is that data no longer needs to be copied or synchronized between tools in order to create cross-tool or cross-discipline visibility or relationships. So now quality professionals working in RQM can see requirements in DOORS and create links between test cases (and now, because some organizations require it, test steps) and the requirements they are validating; and requirements professionals in DOORS can see linked test case information, including test results, without the need for either to leave the comfort of their familiar tool or for data to be copied between the two tools. Neal demonstrated the value of the integration to requirements & quality professionals and showed how RQM can be used to manage manual testing or hook up with a number of IBM and partner solutions for various forms of test automation. You can also see a demo of the DOORS - RQM integration on YouTube.
So, technical issue solved, it was back to Jon Walton to give a demo of DOORS Next Generation using the Beta 4 release. Jon spent most of his time in the web client, highlighting the support for key DOORS paradigms such as hierarchical structured requirements documents, and showed off the plethora of new capabilities provided by the Jazz platform, such as database-wide requirement reuse, a graphical traceability view, requirements definition techniques (use case diagrams, storyboards), cross-discipline dashboards (containing requirements project info mashed up with info from design, quality and task management) and task management. Jon also showed the desktop client of DOORS NG, which will look very familiar to existing DOORS users, with some twists (reuse of requirements across documents, for one) - the desktop client will primarily be for users who need to do extensive editing of large requirements documents. If you're currently using DOORS 9.x, this YouTube video gives a quick preview intro of DOORS NG and how it's both similar to and different from DOORS 9.x. Watch this space for more to come on DOORS NG later this month.
Back to the earlier lifecycle integration theme started by Neal, next to present was Steve Rooks on how to use DOORS with IBM's solution for model-based systems engineering and model-driven embedded software development, Rational Rhapsody, to link requirements and design activities. Rational Rhapsody enables elaboration of requirements and construction of systems and software architectures using SysML and UML. Rhapsody Design Manager
provides an additional level of design collaboration capabilities. Models can be published to and/or stored and managed in a central repository, making them more easily accessible to a wider set of stakeholders, so that designs can be better communicated and understood by all those involved in specifying, designing, building and validating a product or system. Rhapsody Design Manager uses OSLC to facilitate linking of design elements to other lifecycle artifacts - requirements, test cases, work items, etc. As with the DOORS-RQM scenario described above, a systems engineer or software architect working in Rhapsody can see requirements in DOORS and easily create links between requirements and design model elements. Requirements and requirement links can even be included in model diagrams. And of course a DOORS user can see links to design elements without leaving DOORS, or can navigate into Rhapsody Design Manager to participate in design reviews. You can read more about linking requirements and design and the DOORS-Rhapsody Design Manager integration in my recent post 'The House That Paul Built'
that talks about a recent webcast
on the topic.
After lunch, IBM business partner Kovair
was invited to present on how their Kovair Omnibus solution provides bridges, synchronization and workflow support across multiple tools from multiple vendors. It's a common situation to find yourself trying to enact processes and workflows when you have a diverse set of tools. Kovair talked about their support for OSLC to be able to widen the number of tools they can help link together, but also highlighted scenarios where you would still want to copy or transform data between tools - it's not a choice of Link or Sync, it's Link and Sync as appropriate.
The next session was presented by Paul Fechtelkotter, Market Manager for energy & utilities at IBM Rational. Paul gave a really interesting presentation on the challenges of complex systems development for nuclear power plants and how the nuclear industry is now adopting systems engineering best practices, starting with requirements management, to enable better change management, traceability, impact analysis and compliance support. You can learn more about how IBM Rational is helping the nuclear industry on our dedicated web page.
Unfortunately I had to leave after Paul's session and didn't catch the remainder of the afternoon, but as you can see it was a day packed full of information. I hope you find my summary and links for more information useful. If you have any questions or comments on any of the topics I've covered here or indeed anything on IBM's requirements management strategy, Rational DOORS and the lifecycle integrations, please don't be afraid to ask by using the blog comments facility.
You've bought the plot of land for your dream home. You have your list of requirements - 4 bedrooms, 3 bathrooms, spacious kitchen, 2 living rooms, 2 garages, landscaped gardens, etc. Would you be happy to simply hand that list to the builders and let them start work? Unlikely, I think. Typically, you call in an architect, who can take your quantitative requirements and qualitative desires and produce a blueprint, the architectural design that incorporates your wishes where feasible and adds creative flourish based on the architect's knowledge of house design. The blueprint enables you and the builders to have a much clearer picture of the desired end result than that original list of requirements. And it affords you the opportunity to influence the architecture, and for the builders to question and look at feasibility & cost options, before the foundations are dug and the first bricks are laid.
The same applies in product development. Systems engineers who are responsible for the holistic product specification and design don't just use textual requirements lists to capture the problem domain and describe the proposed solution. They analyze the requirements, identifying integrated scenarios, and often depict those using modeling languages such as UML or SysML. These modeled scenarios are easier to discuss and review with all stakeholders, and as the systems engineer evolves the proposed architecture (also in the same modeling language) they can run the scenarios against the architecture in model simulations to find inconsistencies or gaps in the requirements and flaws in the design, long before any software is coded, circuit boards are soldered or metal is welded.
So what value are our textual requirements lists - should we throw them away in favor of models? Well, not everything can be expressed in the model, and not everyone involved in a development effort may be using models. Going back to the house building analogy, there are contracts and numerous standards and regulations to be adhered to, and there are simply details that would make the blueprints unreadable. The various contractors (and I know from recent experience that sub-contracting is the name of the game in house building these days!) involved in the building process need to ensure that they can meet the contractual and regulatory demands while delivering against the architecture. Again this is the same in product development, except that in many cases, particularly for safety-critical systems, traceability and demonstration of conformance to requirements and compliance with standards & regulations are demanded. This requires the ability to integrate requirements and modeling workflows, easily link requirements and design elements, and report on that linked information.
The need for this capability, and solutions to it, are nothing new. Integrations between requirements management and modeling tools have existed for many years (I think I started using such an integration in the early 90's, and I'm sure they preceded that time). But I know from first-hand experience of using, and indeed writing, such integrations that they've not always been optimal in the way the integration is performed and in the workflow that is supported. Typically it has meant synchronizing (i.e. copying) data between tools in order to create the traceability links in one of the tools. This brings up all sorts of issues like 'which tool is the master?', 'am I looking at the latest data?', 'what happens when information is deleted?', etc.
With Open Services for Lifecycle Collaboration (OSLC) we now have a much better way to link data across product development and operations tools, even when the tools may be from different vendors, open source or in-house. OSLC has learnt from the principles of the World Wide Web and enables tool data to be shared and linked where it resides (called a 'Linked Data' approach). OSLC provides a common vocabulary for 'resources' in particular domains, i.e. what a requirement, test case, design element, change request, work item, etc. looks like, so that regardless of tool, technology or vendor, tools implementing OSLC specifications can share and link data.
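To give a feel for what the Linked Data approach means mechanically, here is a minimal Python sketch that fetches an OSLC resource over plain HTTP with content negotiation; the URL and credentials are hypothetical, but the pattern of requesting an RDF representation of a resource where it lives, rather than copying it between tools, is the essence of OSLC:

    # Minimal sketch of the OSLC/Linked Data pattern: the requirement stays in
    # its home tool, and other tools fetch or link to it by URL.
    import requests

    requirement_url = "https://rm.example.com/resources/requirements/1234"  # hypothetical

    response = requests.get(
        requirement_url,
        headers={"Accept": "application/rdf+xml"},  # content negotiation for RDF
        auth=("user", "password"),                  # hypothetical credentials
    )
    response.raise_for_status()

    # The returned RDF describes the requirement using the common OSLC
    # vocabulary, so any OSLC-aware tool can interpret it without a copy
    # or a sync step.
    print(response.text)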
With Rational DOORS 9.4 and Rational Rhapsody 8.0 with Design Manager 4.0, IBM is utilizing OSLC to provide a simplified workflow for linking requirements analysis and design. On September 20, Paul Urban (if you've been wondering about this blog post title, now you know the Paul I'm speaking of), Market Manager for IBM Rational Rhapsody, presented this simplified workflow and its benefits on an IEEE Spectrum webcast sponsored by IBM. You can watch and listen to the replay at your own leisure here. I hope you enjoy it - please let Paul and me know what you think by leaving feedback on this blog post.