As we mentioned in an earlier blog post, DOORS 9.4 and DOORS Web Access (DWA) 1.5 were released during Innovate 2012. This blog post provides insight into what's changed in this release of DOORS and some of its most significant new features. I have also provided a few resources where you can learn more about this release.
DOORS-RQM Integration based on OSLC
The most significant changes in DOORS 9.4 are the improvements to OSLC-based integrations. A new integration based on OSLC has been provided for Rational Quality Manager (RQM). Let's see how it differs from the existing point-to-point integration (RQMI). Provided below is a simplified representation of how RQMI works for the DOORS-RQM integration, contrasted with the new OSLC-based integration.
As you can see, the integration has become much simpler in terms of software and storage, yet more powerful. The new integration provides a stable architecture for future enhancements and offers automated migration. On the installation and configuration side, the new integration no longer requires the server and Java client components.
What does this mean to a typical DOORS user?
- This enables real-time lifecycle traceability to RQM test cases, achieved either through the hover-over menu in DOORS that displays RQM artifacts, or directly from RQM.

What does this mean to a typical RQM user?
- The real-time integration enables the RQM user to review and edit automatically created draft test cases (with the new requirement reconciliation wizard) based on new requirements, and trace them back to DOORS. Full test coverage while requirements are changing is enabled with features such as:
- Automatic display of requirements not covered by test cases in current test plan
- Provision for linking existing test cases to new requirements
- Display of modified and removed requirements
- Enhanced suspect link analysis
Another important improvement in this release is enhanced traceability to meet regulatory requirements. This enables:
- Linking of one or more requirements to each test step of a manual test script
- Managing the association of requirements to related test cases
- Display of links during test execution and in test case results
Apart from this, OSLC-based integration with Design Manager for RSA and Rhapsody (beta) is also available in DOORS 9.4. However, Design Manager is still in beta, and this integration will be available only later this year. For more details, visit jazz.net.
The data exchange mechanism has also been upgraded from RIF to the latest version of the OMG (Object Management Group) ReqIF (Requirements Interchange Format). This improves communication of requirements between organizations in a supply chain. Support for data exchange and linked data between DOORS 9.x and DOORS Next Generation is included, as is reporting across DOORS 9.x and DOORS Next Generation.
There are also some licensing changes to consider when using DOORS with Rational Publishing Engine (RPE). We have removed the requirement for an RPE license when using RPE custom templates directly within DOORS. You still need a license for creating new custom templates; however, one is not required to run the reports.
Usability Enhancements
We will briefly look at the usability improvements that have gone into DOORS 9.4. Many of these reduce the need to write custom DXL scripts. In DOORS 9.4, we have provided stronger support for defining and managing how more than one user can work on a module simultaneously. It is controlled with a widget that allows you to set a sharing level for editing, as shown below.
Views now support color coding, and users can control the background color of attributes. Views have also been extended to 128 columns.
Another small yet significant usability improvement is the ability to remove multiple views in a single selection. And finally, DOORS now supports rich text export to Microsoft Excel.
What do you think about the improvements? If you have questions or comments, please leave them here. If you need more information about the product, trials or resources, visit the IBM Rational Requirements Management Web page.
Eric has worked in the software development industry for over 20 years and is co-author of UML for Database Design and UML for Mere Mortals, both published by Addison-Wesley. Eric is currently responsible for capabilities marketing of Rational's application lifecycle management solutions, including Agile Software Delivery, Quality & Test Management, Requirements Management and Collaborative Lifecycle Management. He rejoined IBM in 2008 as the team leader for InfoSphere Optim Solutions and later was responsible for Information Governance Solutions. Prior to rejoining IBM, he worked for Ivar Jacobson Consulting as VP of Sales and Marketing, and before that was director of product marketing for CAST Software. In his earlier time at IBM, Eric held several roles within the Rational Software group, including program director for business, industry and technical solutions, product manager for Rational Rose, and team market manager for Rational desktop products. He also spent several years with Logic Works Inc. (acquired by Platinum Technologies and CA) as product manager for ERwin.
As I think about IT today, there is in some ways a rebirth of the importance of architecture and requirements. We are in an era of "ANY" -- meaning that applications and data can be accessed from anywhere, by anyone, and at any time.
Looking back at the applications of yesteryear (two or three years ago), we didn’t expect much from the web or mobile-based applications. We could view, run some reports or do some basic tasks, but to do the real work, we needed to go to the fat-client. Now, in today’s era of any, the user interface may look different, but the capabilities had better be the same since we expect near full capabilities no matter our device or interface.
This puts a newfound set of requirements on applications and their development, and is making modeling and requirements (analysis and design) relevant again, but with a new twist: AGILITY. It is no longer a question of "what platform am I developing for"; the question is how quickly we can get it up and running on the latest versions of Apple, Android, HTML5 and whatever other platforms our clients expect the application to run on... and it had better run on all of the latest versions, with no delays, when updated operating systems come out.
And the question I often receive now is, "can I be agile and meet these needs at the same time?" The plain answer is: yes, you can. However, agility doesn't mean you can ignore requirements and design. I am not talking about write-once, run-anywhere; rather, about understanding the true requirements so that the various development teams can articulate them in code, brought to life as features for the users as they expect to see them. Users are looking for the application to be specific to their hardware/OS (iPad/Apple OS, Droid/Android...), as the hardware has become the platform not just for running the application, but for its expected look, feel and usability, too. This often means different developers for different deployment platforms, certainly at the user interface level.
Designing applications requires that we are prepared. Architectures must be solidified and communicated. Requirements must be consistent and shared. We must model architectures so that developers can build to the designs and not recreate their own, wasting time and resources, and we must share those designs across the team.
Does this get in the way of agility? No, it will speed agility. By sharing designs and assigning tasks based on architecture needs, we can speed time to market and our ability to deliver high-quality software. In the era of any, we may have multiple teams working on the same front-end capabilities for different platforms even though the back-end is the same. The more they can share, the faster they can deploy; and the better the requirements they have from users, the more satisfied those users will be. We see people changing their preferred platform as employers, vendors and suppliers change requirements, so we need to be prepared for the customer who is using an iPad today to be using an Android device tomorrow, with the same requirements on the application. Just look at how the world of BlackBerry has evolved.
So, as you think about your next project, don’t skimp on requirements and architectures or you may be limiting your agility in the future rather than speeding your time to satisfied clients.
Innovate 2013 – The IBM Technical Summit is here. The 2013 event promises to be even more exciting, with top-notch keynotes, over 450 breakout sessions, labs, certifications and our biggest exhibit hall ever. As in previous events, Requirements Management is one of the key areas of interest at Innovate, attracting speakers and attendees from across the globe and representing a wide range of industries. In 2012, we had two tracks for Requirements Management with sixteen sessions each: one track focusing on IT and the other on Systems Engineering. We had 14 real-life case studies, 2 panel discussions and 4 instructor-led sessions.
Managing requirements has always been a cornerstone of both software and systems development. The importance of the discipline continues to grow and is expected to take a leading role in the coming years. This is an opportunity to showcase your thoughts on the discipline, and on how requirements management tools like DOORS or Requirements Composer can aid in effectively managing requirements for project success. Here are some of the topics from last year, along with an expected list of topics:
- Requirements Management in Agile Projects
- Requirements Management for Mobile Development
- Managing requirements in developing Safety Critical Systems
- Developing and managing requirements specifications for contract agreement
- Requirements Driven Development: Understanding requirements and work items
- Requirements engineering and supporting layered requirements and models
- Delivering a specification perfect requirements set (document generation)
- Requirements Reuse: Methods and best practice
- Requirements management for complex systems and teams
- Using traceability to expose gaps/change to other requirements and across the lifecycle
- Requirements engineering for projects with complex systems and software
- Requirements definition and management case studies
- Requirements definition and management across the software lifecycle
- Elicitation techniques for requirements and use cases
- Agile software development and requirements modeling
- Requirements management for outsourced projects
- Defining and managing requirements across geographically distributed teams
- Metrics and analysis used in requirements management
- Integrating requirements with project and portfolio management
- Implications of regulatory compliance on the requirements management process
- Business specification-centric approaches
- Best practices in aligning business goals and IT
- Value-based requirements engineering
- Business modeling in requirements definition
- Requirements prioritization best practices and choosing your methodology
- Incorporating industry standards as reusable requirements
- Effective reporting using requirements and CLM information
- DOORS, Requirements Composer and other Rational products best practices
- Requirements engineering and product lifecycle management
Some session topics from Innovate 2012
- Iterative Requirements Analysis: Implementing Lean and Agile principles for Software Requirements Analysis (Nationwide Case Study)
- Visual definition in the requirements lifecycle: a conceptual framework
- How IBM Rational DOORS Helps JPL Get to Mars and Beyond: Best Practices in Metrics, Verification and Traceability
- Integrating IBM Rational DOORS with IBM Rational Team Concert – Lessons Learned at Raytheon
- Integrating Requirements and Models with IBM Rational DOORS and IBM Rational Rhapsody: Lessons Learned at Lockheed Martin MS2
- Writing Verifiable Requirements Is Not Easy
Share your experience, thoughts and best practices on requirements at an event attended by industry experts and IBM core development teams. Here are the top three reasons why you should submit your paper for Innovate 2013:
- Explore new areas - Free conference pass opening up the doors to 450+ sessions, labs and demo booths
- Network with experts and peers - Over 4,000 professionals expected to attend the event
- Sharpen your technical know-how - Learn from product and domain experts and from IBM core developers
The world of requirements management has developed significantly over the last decade and has increasingly become one of the cornerstones of successful software and systems engineering projects. We have been discussing various aspects of the domain from a best practices perspective, and how tools can help you manage your requirements efficiently and effectively.
Starting today, we will discuss various aspects of the requirements management discipline at a bird's-eye-view level. These posts are meant to be introductory in nature and also to serve as refreshers for those already in the field. The domain and its best practices have developed to such a level of sophistication that it is difficult to cover everything in a set of blog posts. However, we intend these posts to be a quick reference and a starting point for you to think seriously about the domain.
Have you heard about Gaudí's unfinished cathedral or the Ariane 5 explosion? The former is a hundred-year project still in progress, which couldn't be finished because of unclear and changing requirements; the latter resulted in over $7 billion in losses when the rocket exploded on its first voyage due to a software error, specifically a floating-point conversion error. The importance of requirements management can be established from three unique perspectives: project overshoot, and thus a missed market opportunity, due to unclear and changing requirements; project failures due to unmet or misunderstood requirements; and finally, the cost burden of errors and missed requirements found late in the development cycle.
In a classic IEEE Spectrum article, Robert N. Charette writes about Why Software Fails. Among the top reasons for software project failure are poor definition of requirements, poor management of risk, communication failure among stakeholders, and the increasing complexity of projects. In IBM GBS, ineffective requirements management is among the top five reasons for troubled projects. Many research firms (Standish Group's CHAOS report, Gartner, CMU-SEI) and academics (A. Davis, Robert B. Grady, Steve Easterbrook) have studied and quantified the failure rates of software projects. For example, in the IEEE article above, Charette notes that 40-50% of software development time is spent on rework, and that the cost of fixing a bug in the field can be as high as 100 times the cost of fixing it at the development stage. In all of these studies, the primary reasons for failures or overshoots are ineffective management of requirements.
So what exactly is requirements management?
Before moving to requirements management, let's understand what a requirement is. A requirement can be anything from an abstract need to a well-drilled-down implementation detail of a system. Essentially, it can be considered the detailed view of a need under consideration. The IEEE Standard Glossary of Software Engineering Terminology defines a requirement as: a condition or capability needed by a user to solve a problem or achieve an objective; or a condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed documents; or a documented representation of a condition or capability as in the former two. Thus, what a requirement essentially represents depends on whom we are talking to: it could be a need for a client, a business requirement for customers, a system requirement for vendors, or a specification for a developer and tester. We will come to the different types of requirements later.
Requirements management can be considered the management of requirements from the point when a customer provides the needs or a product development process is started. It includes managing the definition, elaboration and change of requirements during the development cycle. Peter Zielczynski, a requirements management expert, defines the following major steps in requirements management (Requirements Management Using IBM® Rational® RequisitePro®, Peter Zielczynski):
Establishing a requirements management plan
Developing the Vision document
Creating use cases
Creating test cases from use cases
Creating test cases from the supplementary specification
Zave (Classification of Research Efforts in Requirements Engineering, ACM Computing Surveys, 1997) defines requirements engineering as "the branch of software engineering concerned with the real-world goals for, functions of, and constraints on software systems. It is also concerned with the relationship of these factors to precise specifications of software behavior, and to their evolution over time and across software families." While in practical terms this could be considered the same as requirements management, we can say that requirements engineering addresses the various aspects of requirements development, while requirements management is the set of processes in systems and software engineering that interfaces with requirements engineering. We will delve into more detail in another post when we consider the V&V (Verification & Validation) model.
This is the first part of our six part blog posts series on basics of requirements management. Read the remaining parts here -
1. What is requirements management and why is it important?
2. How to write good requirements and types of requirements
3. Why baseline your requirements?
4. What is Traceability?
5. The uses and value of traceability
6. Revisiting Requirements Elicitation
Note: This is the fourth post in our series of Managing Your Requirements 101. Read the first three posts here:
What is traceability? Or, more specifically, what is requirements traceability? Well, rather than repeat what is already a good collection of definitions, I'll refer you to http://en.wikipedia.org/wiki/Requirements_traceability. From there I'd summarize three elements of requirements traceability:
Following the life of a requirement – from idea to implementation
How requirements impact each other, and how requirements impact other development lifecycle artifacts (such as designs, tests, tasks, source code, hardware specs, etc.) and vice versa.
The decomposition of requirements – from high level user/customer/market needs to system, sub-system, software or hardware component requirements; and transformation into design specifications and the implementation realization of the requirement.
Traceability in this context is about relationships between requirements at the same or different levels of detail, and between requirements and other lifecycle artifacts as listed above. It also extends to relationships beyond those directly involving requirements – i.e. the relationship of a defect report to a test case – this is referred to as ‘lifecycle traceability’. Traceability relationships can be of multiple types, for example:
Satisfaction: a system requirement (or more likely a number of system requirements) ‘satisfies’ a user requirement e.g. system requirement ‘The engine shall have at least 200bhp’ satisfies user requirement ‘The car shall be capable of accelerating from 0-60mph in under 8 seconds’.
Verification: a test case ‘verifies’ a requirement e.g. test case ‘0-60mph acceleration test’ (consisting of a number of test steps) verifies user requirement ‘The car shall be capable of accelerating from 0-60mph in under 8 seconds’.
Dependency (often used where interfaces are concerned): a requirement ‘depends’ on another requirement e.g. requirement ‘the power socket shall take 3 pins’ depends on requirement ‘the plug shall have 3 pins’.
Basic traceability establishes a relationship or link between one or more elements. Typed traceability adds the relationship type with its associated semantics (examples above). Rich traceability (ref: Requirements Engineering, Hull, Jackson & Dick, Springer, 2004) adds further information to the traceability relationship, such as the rationale explaining why a group of system requirements satisfies a particular user requirement; or, since you often can't be 100% certain about specification or design decisions, the assumptions you made in deriving a set of system requirements from a user requirement. The rich traceability approach is particularly valuable in heavily regulated industries and safety-critical systems, where audit trails of the decisions made are vitally important to provide assurance and reduce risk.
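To make the basic/typed/rich distinction concrete, here is a minimal sketch of a typed traceability link carrying rich-traceability fields. This is an illustrative model only, not any tool's data format; all artifact IDs, link types and texts are made up.

```python
from dataclasses import dataclass, field

@dataclass
class TraceLink:
    source: str            # e.g. a system requirement or test case ID
    target: str            # e.g. the user requirement it traces to
    link_type: str         # typed traceability: "satisfies", "verifies", ...
    rationale: str = ""    # rich traceability: why the link holds
    assumptions: list = field(default_factory=list)  # rich: derivation assumptions

links = [
    TraceLink("SR-12", "UR-1", "satisfies",
              rationale="200bhp engine meets the 0-60mph in <8s target",
              assumptions=["kerb weight stays under 1500kg"]),
    TraceLink("TC-7", "UR-1", "verifies"),
]

# Collect all links into a given user requirement, grouped by link type.
incoming = {}
for link in links:
    if link.target == "UR-1":
        incoming.setdefault(link.link_type, []).append(link.source)

print(incoming)  # {'satisfies': ['SR-12'], 'verifies': ['TC-7']}
```

Basic traceability is just the (source, target) pair; the `link_type` field adds the semantics, and `rationale`/`assumptions` capture the rich-traceability audit trail.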
Once traceability has been established, there are multiple ways in which it can be viewed and reported on. Perhaps the oldest and most commonly recognized method is the traceability matrix, where you can see the intersection between two sets of requirements and a check or cross shows where a link exists. This method doesn't scale particularly well, since the matrix can become very large. It's also sometimes used for creating the links, but it's not ideal for that either, since you can typically only see a small amount of information about the requirements.
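The traceability matrix described above can be sketched in a few lines: rows are user requirements, columns are system requirements, and a mark shows where a link exists. All requirement IDs and links here are invented for illustration.

```python
user_reqs = ["UR-1", "UR-2"]
sys_reqs = ["SR-10", "SR-11", "SR-12"]
# Each pair means "this user requirement is linked to this system requirement".
links = {("UR-1", "SR-10"), ("UR-1", "SR-12"), ("UR-2", "SR-11")}

# Build the matrix as a dict of rows: "X" where a link exists, "." otherwise.
matrix = {ur: ["X" if (ur, s) in links else "." for s in sys_reqs]
          for ur in user_reqs}

# Render it as text.
print("      " + "  ".join(sys_reqs))
for ur, row in matrix.items():
    print(ur + "   " + "      ".join(row))
```

Even this toy example hints at the scaling problem: with thousands of requirements on each axis, the matrix becomes too large to read, which is why the column and tree views described next are often more practical.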
Another way to see traceability is to pick a starting point, e.g. the user requirements and display the related systems requirements alongside the user requirement they are linked to, in a traceability column. You can typically choose how much detail of the linked requirement is displayed, and you can even make it recursive, going down as many levels of requirements as you need/is practical to manage in a single view.
Graphical displays are great for getting a bigger picture view of traceability rather than immediately focusing in on the details of particular relationship. You can explore the traceability tree, zooming in/out or collapsing/expanding parts of the tree, or changing the focus (starting point) of the tree.
But what about in agile development, I hear you cry? Well that could be another topic in its own right – watch this space - but relationships still exist between typical artifacts created in agile approaches (such as between product features and user stories), and I argue that as long as traceability is created ‘as you go’ and automated by tools as much as is practical, that it’s even more essential to stay informed when changes are happening rapidly and ensure you are looking at the correct versions of related artifacts.
In a follow-on post in this Requirements 101 series, I’ll take a look at what traceability can be used for – highlighting where its application can bring significant value to your projects. But for now I’ll leave you with a few resources below that I’d recommend you take a look at, and ask you to let me know if you think this post was useful (or not!) and provide any feedback or additional information using the comment function.
CLM 4.0.1 enhances its support for large enterprise deployments with server renames and clustering. CLM 4.0.1 is now supported on Mac OS X, and Safari and Chrome are now supported browsers. Continuing to improve licensing flexibility, we have introduced two new varieties: CLM Contributor and Practitioner licenses. In 4.0.1, we have extended the workgroup licenses to RRC and RQM as well. For a detailed review, contact us or your account rep.
The biggest addition to RRC 4.0.1 is the inclusion of modules. So what are modules?
Modules help us organize requirements into ordered lists and group them using hierarchy. This is in addition to the collections we have today. Modules help business analysts increase control over requirements by grouping them, and give teams additional meaning, awareness and understanding. Requirements reporting can also be improved significantly using the document templates provided. Here is a detailed video explaining modules in Requirements Composer.
The ability to compare collections is also now included in RRC 4.0.1. This helps business analysts discover differences in project information more quickly, learn what has changed, and communicate across the team.
Lock control in a multi-user environment is improved with an automatic edit lock option. The main uses of this feature are to lock a requirement artifact while you are making changes, or to apply permanent locks to restrict access for security or regulatory purposes. Here is a video showcasing the feature in detail.
We introduced significant ReqIF-based changes in RRC 4.0. In this release, further improvements have been made to the ability to import requirements into different projects. This enables, for example, data transfer between DOORS and RRC, and taking data offline to work on. We intend to improve the options available for offline data usage in future releases.
Another significant improvement in RRC 4.0.1 is the RRC-HP Quality Center integration. We can now preview QC tests directly in RRC. This lets you use the requirements definition and management capability of RRC to manage project requirements being tested with HP QC. The following screenshot shows a customer using Requirements Composer to view various requirements collections. The user rests his pointer ("hovers") over one of those collections, and a pop-up window appears showing a preview of what's contained in a test plan that's being managed by HP Quality Center.
RRC 4.0.1 enables bi-directional connectivity from requirements to models and model elements using Rational Software Architect (RSA), Rhapsody, and Design Manager. This seamless integration helps you use requirements and models together to design and document project needs.
To read more about enhancements in RRC 4.0.1, visit jazz.net.
To know more about Rational Requirements Composer, visit here.
Note: This is the fifth post in our series of Managing Your Requirements 101. Read the first four posts here:
Part 4 of this series ‘What is Traceability’ looked at the definition of requirements traceability and different types of traceability relationships. In this part, let’s look at what traceability can be used for and where it delivers value to application, system or product development.
Why is traceability necessary/important? Isn’t traceability just an overhead, an onerous documentation task that’s only done in industries where it’s mandated? In my view it’s true that requirements traceability practices have originated from industries like aerospace & defense, where one use of traceability is to show that contractual requirements have been addressed, but it also has so much more value to bring when used effectively, and you have the right tools to maintain it and report on it. The following lists some of the ways that traceability delivers value:
Context: If you can trace back from a design or test to a user requirement, you have the reason for the existence of that design or test; and through the information in the user requirement (and any related requirements you can trace to), you have more supporting context to help create the optimal design to meet the requirement, or the most effective test to verify that the requirement is met.
Audit trail & compliance: Taking the example of a new person joining a project, traceability can help them navigate the project and see why particular requirements, designs, tests, etc. exist. This is also of utmost value and importance when you need to demonstrate compliance with a regulation or standard to an auditor; the traceability trail can help you quickly show you are addressing the regulation or standard.
Coverage: How do you know whether you’ve covered all the user requirements in the derived systems requirements, designs, tests, etc.? Traceability can help you here – you can see gaps indicated by where a higher level artifact doesn’t have any relationships to lower level artifacts. Of course even if a relationship exists you still need to follow it and examine the lower level artifact to ensure it does what the traceability relationship says it should, but at least you had a signpost to direct you to the right place to look.
Gold plating: As well as highlighting coverage gaps, traceability can help you investigate possible ‘gold plating’ or over-engineering. If you have a lower level element without any relationships to a higher level then you can ask the question – why does this exist if it’s not apparently satisfying a requirement? There might be a perfectly valid reason but this gives the opportunity to identify and eliminate anything that could bring unnecessary additional time and cost to the project.
Impact analysis: I think this is the most valuable use of traceability. If you have traceability relationships in place, you can follow those relationships when, say, a user requirement changes, to identify all the related system/sub-system/component requirements, design elements, tests, work items, etc. that are potentially impacted by the change. This enables you to fully scope out the impact of the change before it's made (if you decide to proceed), giving you far more control over the cost and time impact of change requests. This can of course also work in reverse: if a design change is necessary, say because the original design proves infeasible, you can more easily see what impact, if any, that has on your ability to still meet the requirements. Both of these scenarios have great project management benefits: you can have informed discussions within the development team and with your customer/stakeholders about whether to make the change.
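Two of the uses above, coverage-gap detection and impact analysis, reduce to simple operations over the link set. The following is an illustrative sketch (not any vendor's API); the artifact IDs and links are invented.

```python
# "links" maps a lower-level artifact to the higher-level items it traces to.
links = {
    "SR-10": ["UR-1"],   # system requirement SR-10 satisfies UR-1
    "SR-11": ["UR-2"],
    "TC-7":  ["SR-10"],  # test case TC-7 verifies SR-10
}
user_reqs = {"UR-1", "UR-2", "UR-3"}

# Coverage: user requirements with no incoming trace are potential gaps.
covered = {target for targets in links.values() for target in targets}
gaps = user_reqs - covered
print(gaps)  # {'UR-3'}

# Impact analysis: follow links in reverse to find everything potentially
# affected if an artifact changes (assumes the link set has no cycles).
def impacted(artifact, links):
    hits = {src for src, targets in links.items() if artifact in targets}
    for hit in set(hits):
        hits |= impacted(hit, links)
    return hits

# If UR-1 changes, SR-10 is directly impacted, and TC-7 via SR-10.
print(sorted(impacted("UR-1", links)))  # ['SR-10', 'TC-7']
```

The same traversal run in the other direction (from a design element up to requirements) gives the reverse impact analysis described above.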
What’s that I hear again – what about agile? Aren’t traceability and the benefits it’s proposed to have only necessary/of value in waterfall development? Well, take another look at that list of benefit scenarios – don’t you always want to be able to do these things, regardless of development methodology? If you’re in a fast changing, evolving project don’t you need to be more informed, have the right information at your fingertips, in order that you can respond quickly but effectively? I argue that if you have the right tools in place to create and utilize traceability, that it is even more essential if you’re adopting agile practices.
Speaking of the right tools: tools can automate the various types of traceability reporting and analysis described above. For impact analysis, tools can quickly display all of the related artifacts connected to a particular requirement. More than that, they can detect changes in a requirement that has links and automatically mark those links as suspect. This is very effective where you have different people working on different parts of the application or system and you need to be aware of changes in other areas that might impact your work. A suspect link indicates that a change has been made to the artifact at either end of the link, and that you should check that the link still holds true; i.e., if the user requirement has changed, does the linked system requirement still satisfy it? This proactive impact notification mechanism helps avoid inconsistencies across your specifications.
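One common way to implement the suspect-link mechanism described above is to store a snapshot (for example, a hash) of each artifact's text when the link is created, and flag the link when either end has changed since. This is a hypothetical sketch, not how any particular tool does it; the artifact names and texts are illustrative.

```python
import hashlib

artifacts = {
    "UR-1": "The car shall accelerate from 0-60mph in under 8 seconds.",
    "SR-12": "The engine shall have at least 200bhp.",
}

def snapshot(text):
    # Hash of the artifact text at a point in time.
    return hashlib.sha256(text.encode()).hexdigest()

# When the link is created, record a snapshot of both ends.
link = {
    "source": "SR-12",
    "target": "UR-1",
    "snapshots": {aid: snapshot(artifacts[aid]) for aid in ("SR-12", "UR-1")},
}

def is_suspect(link, artifacts):
    # The link is suspect if either end no longer matches its snapshot.
    return any(snapshot(artifacts[aid]) != saved
               for aid, saved in link["snapshots"].items())

print(is_suspect(link, artifacts))  # False

# The user requirement changes: the link should now be flagged for review.
artifacts["UR-1"] = "The car shall accelerate from 0-60mph in under 7 seconds."
print(is_suspect(link, artifacts))  # True
```

Clearing a suspect flag then amounts to a human confirming the link still holds and re-recording the snapshots.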
So you have my views on the uses and value of traceability, but what do you think? Please use the comment function to leave feedback and additional ideas.
I’ll leave you with a couple of links that have additional discussion of the value of traceability and in particular how much traceability is enough:
This is the fifth part of our six part blog posts series on basics of requirements management. Read the remaining parts here -
1. What is requirements management and why is it important?
2. How to write good requirements and types of requirements
3. Why baseline your requirements?
4. What is Traceability?
5. The uses and value of traceability
6. Revisiting Requirements Elicitation