Prof. Lawrence Chung (firstname.lastname@example.org) is in Computer Science at the University of Texas at Dallas. He has been working in System/Requirements Engineering and System/Software Architecture. He was the principal author of the research monograph “Non-Functional Requirements in Software Engineering", and has been involved in developing “RE-Tools” (a multi-notational tool for RE) with Dr. Sam Supakkul, “HOPE” (a smartphone application for people with disabilities) with Dr. Rutvij Mehta, and “Silverlining” (a cloud forecaster) with Tom Hill and many others. He has been a keynote speaker, invited lecturer, co-editor-in-chief for Journal of Innovative Software, editorial board member for Requirements Engineering Journal, editor for ETRI Journal, and program co-chair for international events. He received his Ph.D. in Computer Science in 1993 from the University of Toronto.
What are non-functional requirements (NFRs)?
NFRs colloquially have been called “-ilities” and “-ities”, since many words referring to NFRs end with “-ility” (e.g., usability, flexibility, reliability, maintainability) or “-ity” (e.g., security, integrity, simplicity, ubiquity). There are of course many other words that do not end with either “-ility” or “-ity”, such as performance, user-friendliness, power consumption, and esthetics, but still refer to NFRs.
Functional requirements (FRs), in contrast, are about functions, activities, tasks, etc. that may accept some input and produce some output.
Consider, for example, “add” (“+”) on a calculator, which adds two numbers given as input and produces another number as output shown on the screen. Now suppose you type “2 + 3 =” and the calculator shows “5”, but only a year from now. In this case, the “add” on the calculator is functionally correct but non-functionally terrible, in particular concerning performance.
As even this simple example shows, a system that fulfills only functional requirements is oftentimes not usable, or even not useful.
So, handle NFRs and handle them appropriately. Don’t spend time only on FRs.
The “soft” Characteristics of NFRs and how to deal with them:
NFRs are global, subjective, interacting and graded.
FRs, such as “The calculator shall offer an ‘add’ function”, are local in the sense that they are specific to particular functions and not applicable to other functions, or globally to other systems, such as a “subtract” function or a banking system. NFR terms such as “performance”, however, can be applied to many other functions and systems, such as a “subtract” function and a banking system, and also to parts of such functions and systems.
In contrast to FRs, NFRs are subjective both in their definitions and in the manner in which they need to be met, some more subjective than others. Concerning definitions, for example, usability may mean simplicity and the availability of many help facilities to some people, while to others it may mean something different, such as a minimal learning curve and fast response. The manner in which NFRs are seen to be satisfactorily met also depends on the (perception of the) user. For example, a keyboard with tiny keys on a smartphone may be usable to young people but not to old people. Also, large keys may be good enough for some people using a smartphone, while for others a context-sensitive help facility may additionally be needed before the smartphone is considered usable.
So, clarify the definitions of NFRs. Don’t assume they have unanimously agreeable definitions.
So, operationalize NFRs. Don’t just state them without specifying how they can be met.
NFRs also interact with each other, synergistically, antagonistically, or both. For example, a heavy authentication mechanism, introduced for the purpose of enhanced security, may hurt usability. If getting into the system takes three different passwords, which have to be changed every month and must each contain at least one special character, one digit, one upper-case character, one function key, etc., the user is unlikely to feel that the system is user-friendly. Hence there is a conflict between security and user-friendliness. But a heavy security mechanism may help prevent unauthorized people from entering fake data into the system, hence a synergy between security and the accuracy of data.
So, identify conflicts among NFRs. Don’t think you can do anything with them individually without any negative consequences.
So, identify synergies among NFRs. This is how we get “the whole becoming bigger than the sum of its parts”.
NFRs are graded, in the sense that they are usually met to different degrees. For example, an “add” function may be seen to be very good, good, bad or very bad concerning its performance or usability, and different ways to implement the add function may affect these qualities differently, e.g., fully positively, partially positively, partially negatively, or fully negatively.
So, consider the degree of contributions between NFR-related concepts. Don’t simply think NFR-related concepts affect each other in a binary manner – either a complete satisfaction or dissatisfaction.
In a nutshell, NFRs cannot be defined or met absolutely in a clear-cut sense, i.e., soft.
So, satisfice NFRs. Don’t think NFRs can be satisfied absolutely, whatever the term “absolutely” might mean.
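The interacting, graded nature of NFRs described above can be sketched in code. The following is a minimal, hypothetical illustration loosely inspired by contribution labels of the kind used in the NFR framework (fully/partially positive or negative); all option names and label values are made up for the example, not taken from any real system.

```python
# Hypothetical sketch of graded, interacting softgoals.
# Contribution degrees: fully/partially positive or negative (assumed scale).
MAKE, HELP, HURT, BREAK = 1.0, 0.5, -0.5, -1.0

# Each operationalization contributes to several NFR softgoals at once,
# echoing the security/usability/accuracy example in the text.
contributions = {
    "three-password authentication": {"security": MAKE,
                                      "usability": BREAK,
                                      "data accuracy": HELP},
    "single sign-on":                {"security": HURT,
                                      "usability": MAKE},
}

def score(option):
    """Aggregate the graded contributions (satisfice, not satisfy)."""
    return sum(contributions[option].values())

def conflicts(option):
    """NFRs this option hurts: candidates for explicit trade-off review."""
    return [nfr for nfr, c in contributions[option].items() if c < 0]

for option in contributions:
    print(option, score(option), conflicts(option))
```

The point of such a sketch is that no option scores a clean “complete satisfaction”; each one helps some softgoals and hurts others, which is exactly why NFRs must be satisficed rather than satisfied absolutely.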
Product- vs. process-oriented approaches:
In science, objective measurements are important. But, are we mature enough to do that in system/software engineering? Also, consider:
“Not everything that can be counted counts, and not everything that counts can be counted.” [Albert Einstein]
According to this wisdom, it seems we should measure important NFRs, and only when we can. For example, you wouldn’t say “I love you 8 love units tonight”. It also seems that we need to shift our emphasis from measuring how well NFRs are met by a system/software artifact to how to handle NFRs during the process of developing the artifact, in such a manner that the resulting artifact can be measured well.
So, treat NFRs as (soft)goals to satisfice. Do not repeatedly develop, scrap, and redevelop a system that does not meet the expected NFRs until a good system is finally produced.
Rationalize decisions using NFRs:
A (functional) problem may be solvable in many different ways. For example, break-ins may be stopped by having a security guard, a housedog, a fortified gate, a home security software system, etc. Similarly, a (functional) goal may also be achievable in many different ways. Which one do we choose, and how? We use NFRs as the criteria in deciding among the (functional) alternatives. Furthermore, NFRs treated as softgoals naturally lead to the consideration of such alternatives, among which a selection is made.
So, use NFRs as softgoals in exploring alternatives and also as the criteria in selecting among them.
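One common way to operationalize this selection is a simple weighted scoring of the alternatives against the NFR criteria. The sketch below uses the break-in example from the text; every score and weight is invented purely for illustration, and in practice the weights themselves would come from (subjective) stakeholder priorities.

```python
# Illustrative sketch: using NFRs as selection criteria among functional
# alternatives for "stop break-ins". All scores and weights are made up.
alternatives = {
    "security guard":    {"security": 0.9, "cost": 0.2, "availability": 0.6},
    "housedog":          {"security": 0.5, "cost": 0.8, "availability": 0.9},
    "fortified gate":    {"security": 0.7, "cost": 0.5, "availability": 1.0},
    "security software": {"security": 0.8, "cost": 0.6, "availability": 0.8},
}

# Stakeholder priorities over the NFRs (subjective, so make them explicit).
weights = {"security": 0.5, "cost": 0.3, "availability": 0.2}

def weighted_score(scores):
    """Combine per-NFR scores using the stakeholder weights."""
    return sum(weights[nfr] * s for nfr, s in scores.items())

best = max(alternatives, key=lambda a: weighted_score(alternatives[a]))
print(best)
```

Changing the weights (say, making cost dominant) can change the winner, which makes the trade-offs among the NFR criteria explicit rather than implicit.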
How many NFRs are out there?
There can be many FRs. How about NFRs? If we go through a reasonably comprehensive dictionary and count how many words end with “-ility”, “-ity”, “-ness”, etc., this gives a hint: not on the order of tens or even hundreds, but potentially thousands or tens of thousands. Alas, we have resource limitations: a limited amount of time and money, limited memory and reasoning capabilities, etc.
So, prioritize NFRs and their operationalizations throughout the softgoal-oriented process. Don’t simply claim “Our system satisfies all the possible NFRs and absolutely”.
A leading analyst and systems engineering expert, David Norfolk of Bloor Research, recently published a white paper titled Reducing the risk of development failure with cost-effective capture and management of requirements. In this report, David delves into the relevance of requirements management as a discipline and puts forth his views on how the domain is changing with the advent of new development paradigms such as agile, mobile and DevOps.
If enterprise architecture helps to bridge the gap between business strategy and vision and its implementation in technology, from the CEO’s point of view, Requirements Management continues to help bridge the gap between business and technology at a lower level.
The report provides valuable insights, with deep coverage of why requirements management is even more relevant today, issues associated with managing requirements, challenges faced by the discipline, best practices from his experience, and his thoughts on the capabilities of an ideal requirements engineering tool. Some of the topics the whitepaper discusses are:
Challenges in requirements management
Managing changing requirements
Scope and ideal capabilities of a requirements engineering tool
Real life examples of benefits from investing in requirements
Read the whitepaper here - Reducing the risk of development failure with cost-effective capture and management of requirements
In this guest blog post, Requirements Engineering Expert, Jeremy Dick continues with his discussion on practical applications of traceability. Read the first two parts here.
What’s really going on when you decompose a requirement?
What’s really going on when you plan V&V against a requirement?
Inspired by recent experience in a large systems engineering project, Part 1 of this blog post series covered the practice of decomposing requirements, which brings about one of the most important traceability relationships in requirements engineering. Part 2 covered the next most important relationship: that between requirements and validation and verification activities.
Part 3 (probably the final part!) presents some other things about V&V we think we have learnt from undertaking a large project: what do you do about new requirements that arise from the V&V you have planned against your main requirements?
Once the need for V&V activities has been established (see Part 2), this will often give rise to new requirements. Broadly speaking, these requirements fall into two types:
Those that affect the design of the product.
Such requirements may constrain the design of the product, and may even add functions. For instance, should there be a need during commissioning to establish that the temperature of a fuel cell does not exceed a certain level, then it may be necessary to design in a means of measuring it.
Those that define the need to build a secondary system.
These requirements are for the construction of test artefacts, such as models and test equipment. In large programs, such test equipment can represent a huge project in itself, like the construction of a new building to test a new design of jet engine.
Neither of these types of requirement should really be mixed in with the remaining requirements without recording (through traceability) their origin, because if the choice of V&V changes, we should be able to identify which parts of the design, or which pieces of test equipment, are present only to make the old V&V possible.
If the need to test a product gives rise to the need to build facilities to carry out that testing, then another development life-cycle is spawned for the purpose. Requirements will be collected for the facility, and further V&V may be required against those requirements. We enter a sort of recursive world of development life-cycles. Some call this “fractal” – the main development spawns smaller developments, which in turn spawn yet smaller ones.
Primary versus Secondary V&V
Secondary V&V arises when V&V activities lead to test artefacts that also need validating or calibrating. For example, from requirements about power delivery, there is a direct need to do some analysis using a model of fluid dynamics – what we call primary V&V. Requirements for the model itself are collected, and the need for validation of the model considered. For example, the model may need calibrating against some similar existing systems. This calibration activity is another form of V&V – what we call secondary V&V.
Requirements lead to V&V activities lead to requirements
As noted above, the need for V&V leads to design changes or secondary systems, and thus to further requirements. This relationship should be captured: a requirement, rather than having a parent requirement, may in fact have a parent V&V activity. We could say that a requirement “enables” a V&V activity.
In the extreme, a secondary system may impose requirements on the primary product, for instance through the need to interface with it.
An information model
The diagram below illustrates the idea that requirements enable V&V activities.
In the diagram, three traceability relationships are shown: “satisfies”, “verifies” and “enables”. There is an example of a primary V&V activity leading to an enabling requirement on the primary system itself – a requirement whose parent (through the “enables” relationship) is a V&V activity.
There are also two examples of secondary systems driven by requirements for primary V&V, and one of those gives rise, in turn, to a tertiary system.
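The three relationships in the diagram can be sketched as a small data model. The following is an illustrative sketch only, not the schema of any particular tool; the class names, field names, and the fuel-cell example requirements are assumptions made for the example (the fuel-cell scenario itself comes from earlier in the post).

```python
# Minimal sketch of the "satisfies" / "verifies" / "enables" traceability
# relationships. Names and structure are illustrative, not from DOORS.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    text: str
    satisfies: list = field(default_factory=list)   # parent Requirements
    enabled_by: list = field(default_factory=list)  # parent VVActivity, if any

@dataclass
class VVActivity:
    name: str
    verifies: list = field(default_factory=list)    # Requirements it verifies

# A primary requirement and the V&V planned against it.
r_temp = Requirement("Fuel cell temperature shall not exceed the limit")
vv_commission = VVActivity("Commissioning temperature check",
                           verifies=[r_temp])

# The V&V activity gives rise to a new requirement on the primary product;
# its parent, via the "enables" relationship, is the activity itself.
r_sensor = Requirement("Product shall provide a temperature measurement point",
                       enabled_by=[vv_commission])

def origin(req):
    """Trace a requirement back to the V&V activity that spawned it, if any."""
    return [a.name for a in req.enabled_by]

print(origin(r_sensor))
```

With the origin recorded this way, a change to the planned V&V lets you query exactly which requirements (and hence which parts of the design or test equipment) exist only to enable the old V&V.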
In our current project, we have pushed this model only as far as secondary systems with an “enables” relationship and secondary V&V. An example is the one cited above – a fluid dynamics model that needs itself to be validated through calibration with similar existing systems.
As with all these things, there is probably a law of diminishing returns. How far should you push the requirement to V&V to requirement to V&V to requirement chain? You will have to decide!
A similar model probably exists for other aspects of development – manufacture, for instance. The need to manufacture a product gives rise to the need to construct the factory and plant to do so – a primary development spawning a secondary development. However, the traceability train goes not from requirement to V&V activity to requirement, but rather directly from primary requirement to secondary requirement. This could still be characterized as enablement.
We never build just “the product”. There are always other things that need designing and building that surround the system for various purposes, including testing components, sub-systems, systems and completed products.
What has been presented here is an attempt to get to grips with the traceability required to track the relationships between the product and its secondary (and possibly tertiary) systems. We implemented all this in a Rational DOORS database, with a small amount of customisation to ease the way.
Thank you for reading these blog entries, and please contact me if you are interested in reusing any of this experience and associated Rational DOORS customization.
Read the first two parts here -
What’s really going on when you decompose a requirement?
What’s really going on when you plan V&V against a requirement?
About the author - Jeremy Dick works as Principal Analyst for Integrate Systems Engineering Ltd in a consultancy, research and thought leadership capacity. He has extensive experience in implementing practical requirements processes in significant organizations, including tool customization, training and mentoring. At Integrate, he has been developing the concept of Evidence-based Development, an extension of his previous work on “rich traceability”. Prior to this appointment, he worked for 9 years in Telelogic (now part of IBM Rational) in the UK Professional Services group as both an international ambassador for Telelogic in the field of requirements management, and a high-level consultant for Telelogic customers wishing to implement requirements management processes. During this time, he developed considerable expertise in customizing DOORS using DXL to support advanced engineering processes. His roles in Telelogic included a position in the DOORS product division to assist in the transfer of field knowledge to the product team. Co-author of a book entitled “Requirements Engineering” that has recently reached its 3rd edition, he is recognized internationally for his work on traceability. Jeremy can be reached at jeremy.dick[at]integrate.biz
In this guest blog post, Requirements Engineering Expert, Jeremy Dick continues with his discussion on practical applications of traceability. Read the first part here -
The practical applications of traceability Part 1: What’s really going on when you decompose a requirement?
Inspired by recent experience in a large systems engineering project, Part 1 of this essay covered the practice of decomposing requirements, which brings about one of the most important traceability relationships in requirements engineering. Part 2 here covers the next most important relationship: that between requirements and validation and verification activities. Part 3 will continue the discussion of V&V, and how it itself gives rise to further requirements.
Verification & Validation (V&V)
I don’t care enough about the difference between validation and verification to want to enter into the divisive debate about it here. I am just going to say V&V and be done with it!
Kinds of V&V activity
There are many kinds of V&V activity, and organisations have varied ways of classifying them. In the project I am working on, the classifications are Analysis, Analogy, Inspection, Review, Test and Demonstration.
By their very nature, these types of activity tend to occur at different times of the life-cycle. Analysis, for instance, tends to occur early to predict properties of the proposed design and verify it against requirements. By contrast, demonstration tends to occur late as part of the acceptance tests.
Typically, a whole series of activities will be planned against a single requirement, some early, some late, allowing confidence to accumulate over the life-cycle of the project.
Requests for evidence
Despite the variety of kinds of activity, there is one thing they all have in common: they are requests for evidence of some kind or other. Indeed, I would favour calling V&V activities exactly that: “requests for evidence”.
Intention versus Fulfilment
Those activities that are carried out early in the development process provide evidence that the intended design will meet the requirements – they address design intention. Those activities applied late in the development process collect evidence that what has been built meets the requirements – they address design fulfilment.
Once the need for V&V activities has been established, this will often give rise to new requirements, either on the design of the product itself, or requirements for the construction of test artefacts, such as models and test equipment. (We never build just the product; there are always other things that need designing and building that surround the system for various purposes.)
The management of requirements arising from V&V will be the topic of Part 3.
Requirements decomposition and V&V planning
When planning V&V activities against a parent requirement, you need to take into account the V&V that will be carried out on its child requirements, and their child requirements, and so on.
Take, for instance, the following example where a user requirement is decomposed into a number of system requirements:
The only V&V activity planned against the user requirement is a commissioning test, which will occur late in the life-cycle. However, further V&V activities are defined against the child system requirements. Some of these are design inspections that occur very early, and some are system tests that occur relatively late, but still before commissioning.
There is, of course, a sense in which all these V&V activities provide evidence for the satisfaction of the user requirement, but some of the activities fit more directly against the system requirements. So when planning V&V activities, you need to ask the question: what activities can only be carried out against the parent requirement, and which can be delegated to child requirements? – because those that can be delegated are likely to provide evidence earlier in the life-cycle. And you always want that, if you can get it.
Granularity of V&V results
In the above example, there is one V&V activity that is linked to multiple requirements. In general, the relationship between requirements and V&V activities will be many-to-many.
However, this presents an issue when it comes to collating results of V&V against requirements. The System Test defined above may show positive results for filling, boiling and dispensing, but fail on the time taken to recover (cool down). So it has passed on all requirements except one. In terms of granularity of information, we need to record the result of the V&V activity against each linked requirement.
How is it best to do that? The only place to do that in the information model of the example is on the “verifies” links; there is a link for every requirement-V&V pair.
Another way is shown in the next example:
Here we have separated out the success criteria for each requirement for each test by adding subsidiary objects under the V&V activities (for instance, using the DOORS object hierarchy). Each success criterion has exactly one link to a requirement; a link from a criterion is implicitly a link from the V&V activity. (This link could be made explicit by retaining a link from the activity as well – not shown in the diagram.)
Now we have objects rather than links against which to record the results of the V&V activity (using an attribute of that object). This has the added advantage that it encourages a discipline of identifying precisely what the success criterion is for each requirement against each V&V activity. In addition, the V&V Activity and its list of success criteria can be used as a description/checklist for each particular test.
As results come in, the success/failure status on the success criteria can be rolled up through the “verifies” links to the associated requirements, and then on up through the “satisfies” links to the parent requirements. Both these relationships allow results to be summarised through the eyes of the requirements at every level.
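The per-criterion recording and roll-up described above can be sketched as follows. This is an illustrative model only, using the kettle-style example from the text (fill, boil, dispense, recover); the data structures and names are assumptions for the sketch, not a DOORS schema.

```python
# Sketch of rolling up V&V results through success criteria to requirements.
# Structure and names are illustrative.

# Each success criterion links to exactly one requirement and one activity,
# and carries its own pass/fail result.
success_criteria = [
    {"activity": "System Test", "requirement": "fill",     "passed": True},
    {"activity": "System Test", "requirement": "boil",     "passed": True},
    {"activity": "System Test", "requirement": "dispense", "passed": True},
    {"activity": "System Test", "requirement": "recover",  "passed": False},
]

# "satisfies" links: child (system) requirement -> parent (user) requirement.
satisfies = {"fill": "user-req", "boil": "user-req",
             "dispense": "user-req", "recover": "user-req"}

def requirement_status(req):
    """A requirement passes only if all its linked criteria have passed."""
    results = [c["passed"] for c in success_criteria if c["requirement"] == req]
    return all(results) if results else None

def rolled_up_status(parent):
    """Roll child statuses up through the 'satisfies' links."""
    children = [child for child, p in satisfies.items() if p == parent]
    return all(requirement_status(child) for child in children)

print(rolled_up_status("user-req"))
```

Here the single failed cool-down criterion fails the “recover” requirement, and that failure rolls up to the user requirement, even though the same System Test passed against the other three requirements, which is exactly the granularity the text argues for.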
V&V planning steps
These are the process steps we teach for planning V&V against requirements. They are numbered so as to continue from the process steps named in Part 1:
Determine the V&V activities you will need.
Consider what range of evidence you will need to collect to establish that the requirement has been met, and determine the best V&V activities for that. Aim to collect evidence as early as possible in the life-cycle, considering early proof of design intention as well as later design fulfilment. Capture the V&V activities into the database and (if using explicit links) link them to the associated requirements.
Identify the success criteria for each requirement against each V&V activity.
For each requirement/V&V activity pair, determine the success criteria to be applied. Capture each success criterion in a new object under the activity, and link it to the requirement.
Record the results of the V&V activity against each success criterion.
When the V&V activity has been completed, record the success or failure of the activity against each success criterion.
So this is what we now teach those engaged in planning and tracing V&V against requirements, in conjunction with requirements decomposition. It is wrong to assume that people will somehow automatically know how to do this kind of thing. By taking this approach, the V&V plan is well organized, defined at the most appropriate layers, with success criteria defined, and ready for the collection and roll-up of results.
Read the first part here - The practical applications of traceability Part 1: What’s really going on when you decompose a requirement?
It is a bit of a shock to find myself well into the fourth year on the same project! The nature of my work as a consultant means that it is rare for me to stick with a project beyond the initial phases of defining a requirements management process, establishing effective tool support and training the process enactors. But this time we have been able to stick with the requirements team supporting a large project long enough to see theory put into practice, and to see what it really means to apply the tools and techniques. We have gone well beyond just training, and find ourselves mentoring nearly 300 engineers in the application of DOORS for requirements capture, development and management. This has helped us keep our feet firmly on the ground – rubber on the road – and to walk with those who actually have to do the work.
So what have we learnt?
It is one thing to teach people how to write requirements statements that are clear, unambiguous, testable and traceable; it is quite another thing to help people understand how to take a requirement and develop it. None of the engineers we met on the project had previous experience of how to take a system requirement, for instance, and systematically decompose it through the design into sub-system and component requirements. We had to adapt our training and mentoring to address this skill.
Requirements decomposition establishes one of the essential requirements traceability relationships: how each layer of requirements contributes to the satisfaction of the layer above. This is often known as the satisfaction relationship, or as refinement in SysML. It is this relationship that connects the design to the development of requirements, and that lies at the heart of the ability to perform impact analysis.
Whatever layer you are engaged in – customer, system, sub-system or component – the same basic requirements development process can be applied. These are the process steps we teach for requirements development:
1. Collect and agree your requirements.
Here you actively seek out the requirements you are expected to fulfil. In an ideal world, development will be perfectly top-down, and you can wait for the layers above to allocate perfectly expressed requirements to you. In a practical world, you will have to be more proactive, and will have to cooperate with your requirement “customers” to obtain acceptably worded requirements.
2. Design against your requirements.
This stage is the creative bit that you perhaps most enjoy doing, and where the real engineering takes place. You will imagine how to design the system to meet the requirements, and what you need your requirement “suppliers” to do to contribute to your design. In other words, if you are at the system level, you design the system into sub-systems, and work out what each sub-system must do to meet the system requirements.
3. Decompose the requirements to reflect the design.
Now you enter the decomposed requirements and trace them back to the requirements they satisfy, thus making the requirements contents and traceability reflect your design. The wording of the decomposed requirements is important: if the original requirement read “The <system> shall ...”, then it is likely that the decomposed requirements will read “The <sub-system> shall ...” At this stage you can also capture rationale for the decomposition, including references to the design documentation or models, thus tracing also to design information.
4. Allocate the decomposed requirements.
Finally, you can pass on the decomposed requirements to those areas responsible for fulfilling them. These other areas engage in the same process, and together you systematically achieve alignment of requirements through all the layers.
The example below illustrates the end result of applying this process on a user requirement decomposed into system requirements. The rounded box contains the design rationale, and refers to a functional model of the product. (If you’ll forgive the shameless plug, such diagrams can be produced using a DOORS extension related to TraceLine. Ask me more if you are interested.)
The example just shown is a classic decomposition pattern. It is actually the decomposition of an overall performance into a combination of capacity and performance attributes of the product. We call this “decomposed”.
Other decomposition patterns are possible. Sometimes no decomposition is necessary, because the system requirement can be satisfied entirely by a single component or sub-system, as in the left-hand example below; or a constraint that will apply universally to all parts, as in the right-hand example. All that changes, perhaps, is the wording of the requirement to indicate the new target. We call this “direct flow”.
You will reach a point in the cascade of decomposed requirements when a requirement is satisfied entirely within the current area without the need to decompose further, as illustrated in the following. In this case, it is important to state the rationale for not flowing the requirement onwards, as otherwise it may be construed as a traceability gap. We call this “not developed further”.
I have seen a number of organisations that capture this flow-down type – Decomposed/Direct Flow/Not developed further – as an attribute of the requirement. We do this because it allows us to cross-check certain things: if you have marked a requirement as “Decomposed” but have not decomposed it, then that does indicate a design gap. However, if you mark the requirement as “Not developed further”, that gives you permission not to trace it further, i.e. not a design gap. (But you would do well to provide rationale!)
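The cross-checks that such a flow-down attribute enables can be sketched directly. This is an illustrative sketch under assumed names; the requirement identifiers and dictionary layout are invented for the example, not taken from any tool.

```python
# Sketch of cross-checking the flow-down attribute against actual traceability.
# "Decomposed" with no children indicates a design gap; "Not developed further"
# permits a leaf but should carry rationale. All names are illustrative.
requirements = {
    "SYS-1": {"flow_down": "Decomposed", "children": ["SUB-1", "SUB-2"]},
    "SYS-2": {"flow_down": "Decomposed", "children": []},   # a design gap
    "SYS-3": {"flow_down": "Not developed further", "children": [],
              "rationale": "Satisfied entirely within this area"},
    "SYS-4": {"flow_down": "Not developed further", "children": []},  # no rationale
}

def design_gaps(reqs):
    """Requirements claimed as decomposed but with no child requirements."""
    return [rid for rid, r in reqs.items()
            if r["flow_down"] == "Decomposed" and not r["children"]]

def missing_rationale(reqs):
    """Leaves that stop the flow-down without saying why."""
    return [rid for rid, r in reqs.items()
            if r["flow_down"] == "Not developed further"
            and not r.get("rationale")]

print(design_gaps(requirements), missing_rationale(requirements))
```

Run over a whole requirements database, checks like these distinguish genuine traceability gaps from deliberate, documented stopping points.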
As requirements flow down through the layers, the complexity of the design becomes evident in the shape of the requirements graph. In general, satisfaction is a many-to-many relationship between requirements, and the figure below shows how this may be manifested. As the requirements are decomposed, they are refactored through the design.
Some patterns are questionable. Take these for instance:
Why decompose something into three requirements only to reduce them to a single one again? Or why collapse three requirements into one only to re-expand them in the next layer?
These patterns are not necessarily wrong, but they should be targeted for careful review.
So this is what we now teach those engaged in requirements decomposition, flow-down and traceability. It is wrong to assume that people will somehow automatically know how to do this kind of thing. By taking this approach, the flow-down of requirements reflects the design, and a clear satisfaction relationship is expressed in the traceability.
Read the second part here - The practical application of traceability Part 2: What’s really going on when you plan V&V against a requirement?
Today we have with us Mia McCroskey at Emerging Health Montefiore Information Technology who was recognized as IBM Champion this year. She shares with us her thoughts about the requirements management domain.
Welcome to the IBM family! It's a great pleasure to have you with us as an IBM Champion. Congratulations! How do you feel?
I am honored to be recognized in this way not just this year but for the past several years that I have been asked to present my team's stories at Innovate. It may seem like our little team's requirements management needs are nothing like those of customers with huge DOORS ecosystems. But really we are an R&D site for the evolution of requirements management techniques and strategies. If a member of my team has an interesting idea for how to capture, structure, track, or trace requirements, we can try it without getting high level approvals or disrupting the work of hundreds of people. I take tremendous satisfaction from sharing our successes in a way that may help others improve their best practices.
Can you tell us something about what you do at Emerging Health, Montefiore Information Technology?
Emerging Health is primarily an IT delivery organization supporting healthcare delivery in the Bronx. Montefiore Medical Center is our parent company. My team, Product Development, is a software development shop tucked away in a corner doing very different work from most everyone else. Our application, Clinical Looking Glass, is a browser-based clinical intelligence tool that gives clinicians access to the enormous wealth of patient data gathered by all the other systems. Our end users can get an answer to a question like "are my clinic's diabetic patients getting the level of follow-up care required by our funding sources?" in a few minutes. In most healthcare environments, getting the data that you need to answer this question takes weeks.
My title is Manager, Product Development Lifecycle because I have my fingers on just about every stage of that lifecycle. Specifically, I lead our Requirements Management, Quality Assurance, Education, Support, and Implementation teams. I also manage outsourced development work, manage client relationships, and do hands-on end-user training and support. We are an extremely team-oriented organization, with two formal development scrum teams and two teams that are at various stages of adopting an agile process. I'm deeply involved in that right now: it's challenging to apply Scrum to a training and support team, and to a team of data analysts and engineers.
Having said all that, my roots are in requirements. On a team of "happy path" stakeholders who get very excited by ideas for new functionality, I love to dig in and find the challenges that nobody wants to think about -- before we start coding. Sometimes I feel like a real buzz kill!
What are your thoughts on managing requirements effectively?
Agile models have forced us to completely reorganize our requirements elicitation, analysis, and management processes. But one thing that has not changed is my belief in a comprehensive, functionally organized requirements model.
But when I say "comprehensive," I don't mean laboriously detailed. The art of requirements analysis and management is in knowing -- or guessing right about -- which details are going to be important later. By later I don't mean to the coders and testers in this iteration; I mean when we want to revise or augment the feature in six months. The requirements analyst has to be deeply plugged in to the business goals and vision in order to predict the future and capture the least amount of, but most important, information about what the team is doing.
A few years ago I spent many months constructing an "as built" specification for a market data system at the New York Stock Exchange. I had the original ten-year-old spec and a few dozen incremental release documents. We were getting ready to refactor the system and nobody knew every business rule and function. I vowed that no system I worked on would ever lack a spec that described everything it currently did. Incremental requirements specs that aren't integrated into the overall system are defects waiting to be discovered once the coders get ahold of them.
Having argued that I must add that a requirements model in document form is pretty nearly impossible to maintain in the way I describe. You've got to employ a database tool that supports the granularity of each requirement and allows you to describe each one through attributes. Then you can use filters and queries and views to present an infinite number of customized specifications -- all of the requirements implemented in a specific release, or all of the requirements related to a specific functional area, or the completed requirements related to a specific business objective or corporate mandate.
What are your thoughts on the role of requirements management in agile projects?
I recently spoke with a software development professional who was very proud of his organization's highly structured and detailed requirements templates that captured every detail before any work began "to be sure we deliver what's wanted." I felt like I was talking to tyrannosaurus rex. We all know that the day after you baseline that 400-page spec it's already out of date.
Agile with its short increments and "only write down what you really need to" mentality can seem seductively freeing. When our organization adopted Scrum, I stuck to my requirements model guns, and sure enough a few months later we couldn't remember decisions that we'd made a few sprints back, nor even exactly which sprint we'd done the work in. Since we weren't supposed to be doing "heavy" requirements, we'd been coached to: use the system and see what happens; or sift through dozens of completed user stories and hope the detail we wanted was actually mentioned in the acceptance criteria (which it was not because it was an in-sprint decision); or try to find relevant test cases and check the expected results. Instead, I launched DOORS, went to the functional area related to the question, and checked our documented business rule. Done.
What are some of the challenges you see in Healthcare Informatics projects?
Deriving meaningful information from the electronic medical record is essential to justifying the cost of those systems. We're piloting the use of predictive analytics -- combining statistical methods with the mass of patient data collected every day at our parent medical center -- to predict outcomes at the population level. To do it you need a very wide range of data: blood pressure, height, and weight, smoking patterns, history of heart disease, current blood sugar level, and on and on. Just bringing all this data together is the first challenge. Next is the analytic tool -- that's CLG. Finally you need big iron to process it. Most local and regional healthcare providers don't have the funding for, say, Watson. My team spent last summer optimizing our hardware and software environment, and CLG itself, to handle analysis of larger data sets faster, but within a medical-center friendly infrastructure budget.
Another area of critical concern is patient information. The need to pool patient data for direct care as well as population research is supported by legislation and funding sources. But we are bound, both legally and ethically, to protect patient identity in every circumstance.
Clinical Looking Glass has the capacity to show patient contact information to users who have been granted permission to see it -- usually clinicians who are actively providing care and need to contact the patients. While this is a critical feature of CLG, we expect to have to develop more granular levels of access as new types of clients adopt the product. For example, our Regional Health Information Organization (RHIO) client has data from twenty-two healthcare institutions. Some patients have declined to have their identity shared across the organization. We have to build the capability to mask these patients' identity even to our users who have permission to see it.
The world of requirements management has developed significantly over the last decade or so and has increasingly become one of the cornerstones of successful software and systems engineering projects. We have been discussing various aspects of the domain from a best-practices perspective, and how tools can help you manage your requirements efficiently and effectively.
Starting today, we will discuss various aspects of the requirements management discipline at a bird's-eye-view level. These posts are introductory in nature and are also intended to serve as refreshers for those already in the field. The domain and its best practices have developed to such a level of sophistication that it is difficult to cover everything in a set of blog posts. However, we intend these posts to be a quick reference and a starting point for you to think seriously about the domain.
Have you heard about Gaudí's unfinished cathedral or the Ariane 5 explosion? The former is a hundred-year project, still in progress, that could not be finished partly because of unclear and changing requirements; the latter resulted in a loss of over $7 billion when the rocket exploded on its maiden flight due to a software error, specifically a floating-point conversion error. The importance of requirements management can be established from three perspectives: project overruns that miss the market opportunity because of unclear and changing requirements; project failures due to unmet or misunderstood requirements; and the cost burden of errors and missed requirements found late in the development cycle.
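The Ariane 5 failure mode — a 64-bit floating-point value converted to a 16-bit signed integer, with the resulting error left unhandled — can be sketched in a few lines (the values here are illustrative, not the actual flight data):

```python
# A minimal sketch of the Ariane 5 failure mode: converting a 64-bit
# float to a 16-bit signed integer fails once the value leaves the
# representable range, and in the original software the error went
# unhandled, shutting down the inertial reference system.

def to_int16(value: float) -> int:
    """Convert a float to a 16-bit signed integer, rejecting overflow."""
    i = int(value)
    if not -32768 <= i <= 32767:
        raise OverflowError(f"{value} does not fit in a 16-bit signed integer")
    return i

print(to_int16(512.0))     # within range: fine on the older trajectory
try:
    to_int16(40000.0)      # out of range on Ariane 5's faster trajectory
except OverflowError as e:
    print("unhandled in the original software:", e)
```

The requirements lesson: the conversion was correct for the Ariane 4 trajectory it was specified against, and the requirement was never re-examined when the context changed.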
In a classic IEEE Spectrum article, Robert N. Charette writes about Why Software Fails. Among the top reasons for the failure of software projects are poor definition of requirements, poor management of risk, communication failure among stakeholders, and the increasing complexity of projects. Within IBM GBS, ineffective requirements management is one of the top five reasons for troubled projects. Many research firms (the Standish Group's CHAOS report, Gartner, CMU-SEI) and academics (Alan Davis, Robert B. Grady, Steve Easterbrook) have studied and quantified the failure rates of software projects; for example, in the IEEE article Charette notes that 40-50% of software development time is spent on rework, and the cost of fixing a bug in the field can be as high as 100 times the cost of fixing it during development. In all of these studies, the primary reasons for failures or overruns include ineffective management of requirements.
So what exactly is requirements management?
Before moving to requirements management, let's first understand what a requirement is. A requirement can be anything from an abstract need to a well-drilled-down implementation detail of a system; essentially, it is a detailed view of a need under consideration. The IEEE Standard Glossary of Software Engineering Terminology defines a requirement as: a condition or capability needed by a user to solve a problem or achieve an objective; a condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document; or a documented representation of a condition or capability as in the former two. Thus, what a requirement represents depends on whom we are talking to: it could be a need for a client, a business requirement for customers, a system requirement for vendors, or a specification for a developer and tester. We will come to the different types of requirements later. Requirements management can be considered the management of requirements from the moment a customer provides the needs or a product development process is started. It includes managing the definition and elaboration of requirements, and changes to them, throughout the development cycle. Peter Zielczynski, a requirements management expert, defines the following major steps in requirements management (Requirements Management Using IBM® Rational® RequisitePro®, Peter Zielczynski):
Establishing a requirements management plan
Developing the Vision document
Creating use cases
Creating test cases from use cases
Creating test cases from the supplementary specification
Zave (Classification of Research Efforts in Requirements Engineering, ACM Computing Surveys, 1997) defines requirements engineering as "the branch of software engineering concerned with the real-world goals for, functions of, and constraints on software systems. It is also concerned with the relationship of these factors to precise specifications of software behavior, and to their evolution over time and across software families." While in practical terms this could be considered the same as requirements management, we can say that requirements engineering addresses the various aspects of requirements development, while requirements management is the set of processes in systems and software engineering that interfaces with requirements engineering. We will delve into more detail in another post when we consider the V&V (Verification & Validation) model.
This is the first part of our six part blog posts series on basics of requirements management. Read the remaining parts here -
1. What is requirements management and why is it important?
2. How to write good requirements and types of requirements
3. Why baseline your requirements?
4. What is Traceability?
5. The uses and value of traceability
6. Revisiting Requirements Elicitation
Last week the UK chapter of INCOSE (International Council on Systems Engineering) held their annual systems engineering conference on the Warwick University campus. I'd like to share some of what I heard during the conference, both on systems engineering in general, and more specifically on requirements management practices in the systems engineering domain.
One of the keynote speakers was Dr Sandy Wilson, President & Managing Director, General Dynamics UK. Dr Wilson spoke about the key challenges in the defense industry - the rate of change in threats and technology and the need to lower costs. He challenged the V model - said it's a nice diagram but its linearity is an issue - the world is not linear or rigid but the SE V diagram is. He spoke about the need for the defense industry to become more agile but that today change is cumbersome due to contractual issues and governance constraints. There are two main types of defense procurement done in the UK - the longer term needs are met by EPs (Equipment Programmes) and the urgent tactical needs by UORs (Urgent Operational Requirements). The former is bogged down in top level scrutiny and check boxes. The latter is helped by the top level sense of urgency and support. An example of a UOR was the decision to implement the multinational no-fly zone over Libya. Dr Wilson proposed that all defense projects should become more like UORs - more agile. He said that "an 80% solution delivered 1 year earlier is better than 90% delivered 4 years late". I heard that delivering incremental capability needs asset management and tracking, configuration management and a more agile approach to systems engineering - valuing "Product over Process". As well as changes in the way companies deliver capabilities, a change is needed in the way the customer (governments) do their acquisition and contracts in order to enable more agility.
Dr Jeremy Dick of Integrate Systems Engineering
and co-author of the book 'Requirements Engineering' presented a case study in the aerospace industry on developing the assurance case for a (safety) critical system in parallel with requirements analysis, design, verification & validation, using an extension of his technique for documenting the rationale for traceability relationships known as 'rich traceability'. In addition to developing a requirements 'flow-down' (through levels of requirements to design), the 'evidence' supporting the flow-down is documented. The evidence in the early stages can be how you expect the lower-level requirements or design elements to satisfy the higher level, and your evidence to suggest that your argument is sound. In parallel, your verification & validation strategies should be evolved, including an argument and supporting evidence for how the test(s) will prove the requirement(s) is/are met. Jeremy was asked how the textual requirements, arguments and evidence would fit with an MBSE (Model-Based Systems Engineering) approach. Jeremy answered that he favours (and in fact came up with the concept of - ref: "The Systems Engineering Sandwich: Combining Requirements, Models and Design", Jeremy Dick, Jonathon Chard, INCOSE International Symposium, Toulouse, July 2004) the sandwich model - interleaved layers of requirements and modeling used to decompose a system specification and design (you can read more on that concept in the post 'Food for thought: The Systems Engineering Club Sandwich').
Chris Rolison, CEO, Comply Serve
, continued the theme of progressive assurance with a focus on the rail industry. Chris highlighted the complexity challenges in major rail infrastructure projects, and the issues presented by paper-based systems, silos in organization structures, and the supply chain. Chris said that "up to 80% of the engineering requirements can change during design & build" - not because the customer changes their mind, but because of all the external factors involved in building a rail system. Chris went on to describe a more collaborative, requirements-driven design approach in which systems engineering principles are applied, supported by a collaborative platform (ComplyPro, which is based on IBM Rational DOORS).
Alastair Mavin of Rolls-Royce 'lent' us his EARS (Easy Approach to Requirements Syntax; the link is to an IEEE publication, sign-in required): an application of a template with an underlying rule set for describing requirements in natural language, but in a more structured, consistent way. He described the latest version of the template, EARS+ (or, as he nicknamed it, 'Big EARS'!), and the benefits of the approach: simplicity and structure combined.
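To give a flavour of EARS, here is a rough sketch that classifies requirement sentences against simplified versions of the basic patterns; the regular expressions are approximations for illustration, not the published rule set:

```python
import re

# Simplified approximations of the basic EARS sentence patterns.
EARS_PATTERNS = {
    "ubiquitous":   re.compile(r"^The .+ shall .+", re.I),
    "event-driven": re.compile(r"^When .+, the .+ shall .+", re.I),
    "state-driven": re.compile(r"^While .+, the .+ shall .+", re.I),
    "unwanted":     re.compile(r"^If .+, then the .+ shall .+", re.I),
    "optional":     re.compile(r"^Where .+, the .+ shall .+", re.I),
}

def classify(requirement: str):
    """Return the names of the EARS patterns the sentence matches, if any."""
    return [name for name, pat in EARS_PATTERNS.items()
            if pat.match(requirement)]

print(classify("When the pilot presses the brake, the system shall engage ABS."))
# -> ['event-driven']
print(classify("The system shall log all events."))
# -> ['ubiquitous']
```

A sentence that matches none of the patterns is a prompt to rewrite it, which is where the simplicity and consistency of the approach come from.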
I could go on for pages about all of the great content shared at this excellent event but I'll leave it there with the main requirements related topics, except to quote from the keynote speaker on day 2: "The core of Systems Engineering is defining requirements and delivering against them". I'd put it this way - you can't have successful systems engineering without effective requirements management.