Getting requirements right: The Perspective-Based Reading technique and the Rational Unified Process

from The Rational Edge: This article presents a method for requirements inspection that incorporates an extended version of the Perspective-Based Reading (PBR) technique in combination with the Requirements discipline portion of the Rational Unified Process (RUP). Extensive examples of the proposed approach are provided, along with case study data to illustrate its costs and benefits.

Paulo Costa, Senior Systems Developer, Politec

Paulo Costa is a senior system developer at Politec, a consulting service organization in Brazil. He has more than 10 years of IT experience developing and maintaining mission-critical information systems for financial and government institutions. He has a master's degree in Information Systems from Catholic University of Brasilia, Brazil.



Forrest Shull, Senior Scientist, Fraunhofer Center for Experimental Software Engineering

Dr. Forrest Shull is a senior scientist at the Fraunhofer Center for Experimental Software Engineering in Maryland. He is project manager and member of the technical staff for projects at Fujitsu, Motorola, NASA, and the U.S. Department of Defense. He has also been the lead researcher on grants from the National Science Foundation, Air Force Research Labs, and NASA's Office of Safety and Mission Assurance. He specializes in research and consulting projects that tailor software inspection approaches for effective defect removal.



Walcelio Melo, Solution Architect Director, Unisys Corporation

Dr. Walcelio Melo is a solution architect director at Unisys Corporation and an Adjunct Professor at George Mason University. At Unisys, he is currently the chief architect for the GSA/Unisys FAME program, in which he indirectly oversees the work of FAME architects and developers. He is responsible for establishing and implementing architectural guidelines across all FAME Task Orders and leads the FAME architecture review board. Dr. Melo teaches in the graduate program of the Department of Information and Software Engineering at GMU. He has authored more than fifty papers published internationally in workshops, conferences, journals, books, and encyclopedias, and he can be contacted via http://www.geocities.com/walcelio_melo.



15 September 2006


Inspection is an important, but often underutilized, aspect of software quality assurance. The goal of inspection is to detect and correct software defects earlier in the development cycle. It is widely recognized1 that the cost of detecting and correcting a defect in the coding phase can be five to ten times higher than the cost of doing so during the requirements development phase. If the defect is not detected until the maintenance phase, it can cost up to 200 times more to correct it than it would have during the requirements phase.2

In this article, we present a method for extending the Rational Unified Process®, or RUP®, requirements inspection guidelines through incorporation of principles from the Perspective-Based Reading (PBR)3 technique for inspecting requirements artifacts. The PBR technique supports the identification of defects from the viewpoint of those actors who will use the artifacts being inspected. It does this by providing scenarios that help explain what to look for in order to find potential defects from that particular perspective. PBR offers guidance for the viewpoints of designer, tester, or customer. To effectively integrate PBR with RUP, we elaborate new scenarios corresponding to the actors in the RUP's Requirements discipline: System Analyst, Requirements Specifier, and Software Architect.

To evaluate the cost/benefit tradeoff of our approach, which we call PBR-UP, we performed a case study in the context of a small software development project (an object-oriented information system). We conducted the case study according to AINSI guidelines4 using the GQM paradigm.5 Our results verify that PBR-UP helped inspectors find serious defects in the artifacts produced during requirements development in a cost-effective way.

The Perspective-Based Reading technique

Software inspection is a well-defined and disciplined process in which a group of qualified professionals analyze a software artifact with the aim of detecting defects as early as possible during the development process, preventing costly rework and contributing to software improvement as a whole.6 Inspections can be applied to most of the artifacts produced during a product lifecycle, such as requirements specifications, design models, source code, testing plans, etc.

With the intention of helping reviewers inspect requirements more effectively, Victor Basili and his colleagues developed a technique called Perspective-Based Reading (PBR), which instructs reviewers on exactly what to search for, thus enabling them to find more defects in less time. This technique acknowledges that there are multiple consumers of the documents produced during the requirements development phase. PBR offers each of the reviewers a viewpoint or perspective specific to each type of consumer, from which the requirements are inspected. Perspectives relate to the roles of different reviewers within the software development process. The idea behind PBR is straightforward: If multiple inspectors analyze the requirements documents from different stakeholder perspectives, then the documents will more reliably reflect actual requirements.

The perspectives PBR defines represent the customer, the tester, and the designer. From each of these perspectives, the inspector is advised to apply a scenario-based approach to reading the requirements.7 Each scenario consists of a set of questions and activities that guide the inspection process by relating the requirements to the normal work practices of a specific stakeholder. In the course of answering the questions and performing the activities, the inspector takes note of the potential defects discovered that would interfere with that stakeholder's typical responsibilities.

The overall goal of the PBR's scenario-based reading technique is to create conditions for the identification of classes of defects that might otherwise only be found by the users of the inspected artifact after delivery, in the course of normal development activities. For example, from the tester's perspective, the inspector would check to see whether an appropriate test plan can be created for the requirements. When the inspector has trouble doing this -- perhaps owing to unclear or incomplete requirements -- a requirements defect is thus identified. In this manner, PBR offers each reviewer a specific, well-articulated focus.

Introducing PBR-UP

To obtain maximum benefit from the PBR technique, it is important to create comprehensive scenarios for each of the perspectives to be applied. This entails defining a set of questions and activities that will be applicable across many requirements specification documents, and be understandable to the readers of the specifications. In order to integrate PBR with RUP in an effective way, it is therefore necessary to elaborate specific scenarios for the actors involved in the RUP Requirements discipline.8

As illustrated in Figure 1, we identified the following key roles as active participants in the requirements development process: the System Analyst, the Software Architect, and the Requirements Specifier. These roles are "active" in the sense that they produce artifacts to support the software development process. Each of these participants must consider software quality as part of their role, since they need to ensure that the products they generate have the level of quality the enterprise demands and expects.


Figure 1: Key participants and artifacts involved in the RUP's Requirements discipline9

In developing questions and scenarios relating to the System Analyst's perspective, for instance, we focused on helping inspectors verify whether the requirements are documented in a way that correctly delineates the actors and the use cases, whether key terms are defined in the glossary, and whether the requirements support structuring the Use-Case Models to facilitate reuse of functionality. In short, we focused on the specific objectives of the role in question in order to make sure the requirements artifacts are of sufficient quality to accomplish them.

Developing new scenarios for PBR-UP

As mentioned above, the scenarios given to the various reviewers in a scenario-based inspection require the reviewer to create high-level models of the artifact under review as part of the defect detection activities. Each model corresponds to a type of work product that would be created during the stakeholder's typical activities. By creating such models, inspectors are required to work with the information under review at a more hands-on level, thus enabling them to find defects more effectively than if the review was more passive, as is normally the case with traditional, checklist-based inspection techniques. Moreover, the models created during scenario-based review can serve as a basis for future work products to be created by the various stakeholders during software development, thus saving time and effort at later phases of the lifecycle. Unlike inspection checklists, these models are not throwaway artifacts, but useful inputs that can be leveraged to improve the actual models under review. For example, the Use-Case Models created by requirements inspectors, playing the role of System Analyst, can become inputs for System Analysts in improving actual Use-Case Models. Hence, the outputs produced by a scenario-based inspection process are not just a catalog of defects, but reusable models.

The scenarios consist of both a set of questions to be answered by the reviewers and a description of the activities the inspector should perform in order to identify the largest possible number of defects in the requirements artifacts. As the scenario-based approach demands, each role's review activities are kept focused on a specific set of responsibilities to avoid redundancy with the responsibilities of other roles.

In the following sections, we describe how we applied the scenario-based approach to create requirements review scenarios for key roles defined in Figure 1: System Analyst, Requirements Specifier, and Software Architect.10

The System Analyst scenario

According to RUP, "The System Analyst role leads and coordinates requirements elicitation and use-case modeling by outlining the system's functionality and delimiting the system; for example, identifying what actors exist and what use cases they will require when interacting with the system." The System Analyst is also responsible for, among other activities, "defining a common vocabulary that can be used in all textual descriptions of the system, especially in use-case descriptions." To accomplish these objectives, the System Analyst should perform a set of activities aimed at establishing the system context. The goal is to clearly identify the system's boundaries, determining what should be inside the system and what should be outside it. The boundaries are delimited by identifying the actors and the use cases.

Figure 2 shows an overview of the artifacts the System Analyst is responsible for. The PBR-UP System Analyst review scenario focuses on the Use-Case Model and Glossary. The original PBR already provides scenarios that can be used to support the review process of Vision and Supplementary Specification artifacts.


Figure 2: Activities and artifacts produced by System Analysts during requirements development

The scenario developed for this perspective asks the reviewer to focus on the system requirements specification conveyed via Unified Modeling Language (UML) Use-Case Models, which normally also include Use-Case Packages and Use-Case Outlines. The activities related to this scenario ask the reviewer to identify the important elements involved in the creation of the Use-Case Models. In so doing, the reviewer, playing the role of a System Analyst, creates high-level abstractions based on the information provided in the artifacts under review, which can later be reused by the actual System Analyst. The questions provided in the scenario help the reviewer check whether the major functional capabilities (e.g., use cases), actors (i.e., people and systems), and terminology (i.e., glossary) were correctly captured in the artifacts under review. They also help the reviewer verify that the information provided is detailed enough to enable the successful accomplishment of other software development activities that depend on artifacts produced by the System Analyst. In short, the scenario helps the reviewer verify the completeness, understandability, and correctness of the artifacts under review.

Table 1 presents a sample of the PBR-UP System Analyst scenario, which provides reading guidelines for the Technical Reviewers responsible for assessing the quality of artifacts produced by the System Analyst during the Requirements discipline. Other scenarios have different questions and reading guidelines. The scenario-based inspection is further supported by other review documents and forms, through which technical reviewers can catalog the defects they find. IBM Rational tools, such as IBM Rational ClearQuest® and IBM Rational RequisitePro®, can be used to automate the process of cataloging defects, thereby improving the efficiency of the review process.

Table 1: Sample PBR-UP System Analyst scenario
Reading Technique for Use Cases
Inputs: Vision, Use-Case Model, and Glossary
Output: List of defects
Reading Procedures:
a) Read through the Use-Case Model, use-case outlines, glossary, and the requirements documents provided (e.g., the Vision document) once, identifying the participants involved.
Q1.1 Are multiple terms used to describe the same participant in the use case?
Q1.2 Is the description of how the system interacts with a participant inconsistent with the description of the participant? Are the specifications unclear or inconsistent about interactions? Does this situation omit some important part of the overall functionality?
Q1.3 Have necessary participants been omitted? That is, does the system need to interact with another system, a piece of hardware, or a type of user that is not described in the Use-Case Models?
Q1.4 Do the artifacts describe an external system or a class of "users" that does not actually interact with the system?
b) Read through the Use-Case Model and specifications a second time, identifying the product functions.
Q2.1 Are the start conditions for each use case specified at an appropriate level of detail?
Q2.2 Are the class(es) of participants who use the functionality described, and are these classes correct and appropriate?
Q2.3 Is there any system functionality that should be included in a use case but is described in insufficient detail or omitted from the requirements?
Q2.4 Has the system been described sufficiently so that you understand what activities are required for the user to achieve the goal of a use case? Does this combination of activities make sense based on the general description and your domain knowledge? Does the description allow more than one interpretation as to how the system achieves this goal?
Q2.5 Do the requirements omit use cases that you feel are necessary, according to your domain knowledge or the general description?
c) Match the participants to all of the use cases in which they are involved. (Remember that if two participants are involved in all of the same use cases, they might represent a single unique actor and should be combined.)
Q3.1 Is it clear from the requirements which participants are involved in which use cases?
Q3.2 Based on the general requirements and your knowledge of the domain, has each participant been connected to all of the relevant use cases?
Q3.3 Are participants involved in use cases that are incompatible with the participants' descriptions?
Q3.4 Have necessary participants been omitted (e.g., are there use cases that require information that cannot be obtained from any source described in the requirements)?
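
To make the structure of such a scenario concrete, the sketch below shows one hypothetical way to represent a PBR-UP scenario and its companion defect registry in code. It is purely illustrative: PBR-UP defines the scenarios and the Software Requirements Inspection Registry as forms, not code, and all of the type and field names here are our own.

# Illustrative sketch only: PBR-UP defines scenarios and the Software
# Requirements Inspection Registry as forms, not code. All names here are
# hypothetical, chosen to mirror the structure of Table 1.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    SIMPLE = "simple"
    MEDIUM = "medium"
    SERIOUS = "serious"

@dataclass
class ReadingStep:
    """One lettered step of a scenario: an activity plus its questions."""
    activity: str          # e.g., "Read the Use-Case Model, finding participants"
    questions: list[str]   # e.g., ["Q1.1 Are multiple terms used ...?"]

@dataclass
class Scenario:
    """A PBR-UP perspective, such as the System Analyst scenario of Table 1."""
    perspective: str       # "System Analyst", "Requirements Specifier", ...
    inputs: list[str]      # artifacts under review
    steps: list[ReadingStep]

@dataclass
class DefectRecord:
    """One entry of a machine-readable inspection registry."""
    inspector: str
    question_id: str       # the scenario question that exposed the defect
    origin: str            # actor, use case, glossary, or relationship
    severity: Severity
    description: str

# Example: recording a defect found while answering Q1.3 of the sample scenario.
registry: list[DefectRecord] = []
registry.append(DefectRecord(
    inspector="#4",
    question_id="Q1.3",
    origin="actor",
    severity=Severity.SERIOUS,
    description="An external system the product must interact with is not "
                "described as an actor in the Use-Case Model.",
))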

Requirements Specifier scenario

The main focus of this role is to analyze the requirements in order to obtain a detailed description of the flow of events in each use case, to make explicit each use case's start and end conditions, and to identify all interactions with the actors. The main sources for the Use-Case Specifications are the Use-Case Model, the outlined use cases, and the glossary. Figure 3 indicates the key artifacts produced by Requirements Specifiers during requirements development. The PBR-UP scenario focuses on the Use-Case Specification, which is the major output of the activity "Detail a Use Case."


Figure 3: Activities and artifacts produced by Requirements Specifiers during requirements development

Software Architect scenario

The Software Architect controls the development of the system architecture from a technical perspective, using structural elements such as classes, subsystems, and components. The system architecture is built over many iterations, performed mainly in the Elaboration phase, and is made explicit in the Software Architecture Document, which captures the well-known 4+1 architectural views. Each iteration encompasses the stages of requirements, analysis, design, implementation, and test, and is supported by its own architectural model embodying the system's most relevant use cases.

Prioritized Use Cases are those anticipated to address the system's main technical and business risks. Thus, the scenario created from the architect's perspective consists of creating a Use-Case Model that includes the architecturally most significant actors and use cases. Figure 4 presents an overview of the key artifacts produced and consumed by the Software Architect during requirements development. Note that, although the Software Architect participates in all the lifecycle phases, the focus of the PBR-UP scenario is only on the role the Software Architect plays in developing requirements; that is, on the use-case view of the software architecture document.


Figure 4: Activities and artifacts produced by the Software Architect during requirements development

The PBR-UP case study

In order to measure the cost-efficiency of the requirements inspection process with PBR-UP, we performed a case study in which six inspectors reviewed the requirements for an actual software project. We collected and organized the inspection results using a form we developed called the Software Requirements Inspection Registry.11 We also interviewed the inspectors after they performed their inspections.

Setting up the case study

The first step towards performing the case study was to choose a project to study. It was important that the project be managed according to RUP principles for the PBR-UP technique to be applied. The project we chose was the development of an Exams and Consultations Managing System for the City Hall of Goiânia, Goiás State, Brazil. The objective of the system is the management of information related to health and legal services. It contains thirty use cases.

Having found a project through which to conduct our study, and with the requirements artifacts for the Exams and Consultations Managing System in hand, we next needed to identify professionals who could inspect the requirements using PBR-UP. The prerequisites for inspectors were a college degree in an information technology field (analysts, programmers, designers, etc.) and experience with object-oriented analysis and UML. Ultimately, we chose six system analysts with diverse levels of experience. None of the inspectors had worked with the system to be reviewed, nor did they have access to any system artifacts prior to the study.

Preparing for inspection

We gave each of the inspectors the requirements artifacts for the system, as well as the three PBR-UP scenarios we developed. We also gave each inspector a blank copy of the Software Requirements Inspection Registry form in order to simplify and help organize the process of recording their inspection results.

We then held a 90-minute training course for the inspectors, during which we explained the PBR-UP technique and how to apply it. We asked the inspectors to adhere to the scenarios as closely as possible. Each was directed to fill out the supporting forms, particularly the Software Requirements Inspection Registry, while carefully recording the time spent on each activity. The time spent reading reference materials was also added to the total preparation time.

Performing the inspections

All the inspectors performed their inspections independently. During the inspection, they most frequently asked for help in filling out the forms and for feedback on the level of detail they should provide. Regarding the latter, we suggested that each inspector apply the level of detail he or she thought most appropriate. At the end of the inspection process, each inspector delivered to us a completed Software Requirements Inspection Registry form.

Post inspection meeting

After all the inspections were complete, we sat down with each inspector individually to discuss his or her inspection results. The resulting explanations and exchange of ideas enabled us to better identify false positives. We then calculated the numbers of actual versus false defects in order to evaluate the efficacy of the technique.

Results of the case study

We analyzed the results of the study in-depth in order to determine a wide range of cost and benefit parameters, including the severity of defects, the artifacts they related to, the productivity of inspectors in finding defects, and the cost-efficiency of the PBR-UP technique.

Severity of defects

Table 2 shows the results of the inspections grouped according to defect severity. These data illustrate the PBR-UP technique's effectiveness in identifying defects at varying levels of importance. Defects rated as "serious" reflect the omission of actors and of architecturally important use cases. Most defects rated as "simple" involve inconsistencies in the glossary and inconsistent naming of actors and use cases. "Medium" defects were mostly related to generalizations and relationships that either were not considered or were inadequately described. The greatest number of defects found were rated as simple, indicating that the requirements analysts had not devoted sufficient attention to the glossary of terms -- a common problem. However, the average number of medium and serious defects found per inspector was substantial given the relative simplicity of the system being developed.

Table 2: Defects severity
Inspector    Simple Defects    Medium Defects    Severe Defects    Total
#1           15                6                 2                 23
#2           8                 3                 2                 13
#3           0                 1                 3                 4
#4           10                4                 8                 22
#5           9                 6                 5                 20
#6           6                 0                 2                 8
Total        48                20                22                90
Average      8.0               3.3               3.6               15.0

[Pie chart of Table 2 data]

Origin of defects

Table 3 shows the results of the inspections grouped according to the origin of the defects uncovered. These data indicate which requirements artifacts contained the most defects. The majority of defects related to the actors, the use cases, and their relationships. Based on the inspectors' observations as gathered in the Software Requirements Inspection Registry, it was easy to see the relationship between the overall quality of the Vision document and the occurrence of defects in other artifacts. (On a quality-evaluation questionnaire, inspectors rated the project's Vision document to be of medium quality.) Note that more defects were found in the actors and use cases combined than in the glossary itself. This indicates a high degree of concern for the system's functional capabilities, a natural bias on the part of many developers.

Table 3: Origin of defects
Inspector    Actors    Use Cases    Glossary    Relationships    Other
#1           1         5            12          5                1
#2           3         5            5           0                0
#3           3         1            0           0                0
#4           3         9            6           4                0
#5           2         7            9           1                1
#6           1         1            6           0                0
Total        13        28           38          10               2
Average      2.16      4.66         6.3         1.66             0.33

[Pie chart of Table 3 data]

Inspector productivity

Table 4 shows the productivity of the inspectors: the mean was about three defects found per person-hour invested in the inspection process. According to our case study, inspectors who performed inspections using PBR found more defects than those who used checklists, and the time required to find each defect was also lower. Using PBR, it took an average of 56 minutes to find each defect, versus 132 minutes per defect using checklists. In this respect, our results are similar to those reported by others.13

The inspector who spent the least time performing the inspection had the highest productivity index; however, this was because he did not take the time to fill out some of the forms. In one case where the productivity index was low, the inspector showed little motivation for the inspection and low adherence to the technique. In the other instance of low productivity, the inspector lacked experience with system development activities, resulting in a shorter-than-average inspection time but few defects found. This inspector also reported more false positives than the others, which affected the efficacy of his results. Overall, the inspectors who took more time, whether through greater motivation or more consistent adherence to the technique, generated more consistent results than their colleagues.

Table 4: Inspectors' productivity
Inspector      Experience*    Time (hh:mm)    # of Defects    Errors/Hour
#1             High           06:00           23              3.83
#2             Medium         02:00           11              5.50
#3             Medium         02:15           4               1.77
#4             Medium         05:00           22              4.40
#5             Medium         04:30           11              2.44
#6             Low            02:30           2               0.80
Mean/Median    Medium         03:40           12.16           3.12

* Reviewer's experience scale

[Bar chart of Table 4 data]
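
As a quick check on the arithmetic in Table 4, the sketch below recomputes each inspector's defects-per-hour rate from the time and defect counts in the table. It is a minimal illustration; the data are taken directly from Table 4.

# Recomputes the Errors/Hour column of Table 4 from the raw time and
# defect counts; small differences from the table are rounding only.

def hours(hhmm: str) -> float:
    """Convert an 'hh:mm' time string to decimal hours."""
    h, m = hhmm.split(":")
    return int(h) + int(m) / 60.0

# (inspector, inspection time, defects found), from Table 4
table4 = [
    ("#1", "06:00", 23),
    ("#2", "02:00", 11),
    ("#3", "02:15", 4),
    ("#4", "05:00", 22),
    ("#5", "04:30", 11),
    ("#6", "02:30", 2),
]

for inspector, time_spent, defects in table4:
    print(f"{inspector}: {defects / hours(time_spent):.2f} defects/hour")

# Mean of the per-inspector rates: about 3.13 here; Table 4 reports 3.12
# because it averages the already-rounded per-inspector values.
mean_rate = sum(d / hours(t) for _, t, d in table4) / len(table4)
print(f"Mean: {mean_rate:.2f} defects/hour")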

Cost-efficiency

Cost-efficiency is a measure of the quantity of project resources expended to produce a given volume of products.14 In the context of the PBR-UP technique, measuring cost-efficiency serves to verify whether the resources invested in utilizing the technique generated a volume of results that justified its application. The quantitative factors that express the efficiency are defined in terms of results gained (quantity of defects found) and investment in the process (cost of inspection), as represented by the formula:

Cost-efficiency = Cost of inspection / Number of defects found

The number of defects found by the PBR-UP technique is expressed as the mean number of defects identified, averaged across all reviewers. The cost of the inspection is derived from two variables: the time spent and the human resources involved (their number and cost), also averaged across the inspectors. By this measure, the cost-efficiency of the PBR-UP technique in this case was approximately US $10 per defect (see the sketch after the list below). Note that this result relates only to the cost of identifying each defect and does not include associated costs of the inspection process, such as rework and follow-up. Results of comparative interest include:

  • In 300 inspections over a 21-month period, inspections cost between $90 and $120 (an average of $105) per defect, including the cost of the effort to find, fix, and verify the correction of the defect.15
  • In another study, the cost of inspection processes across a range of software development processes was found to be between 5% and 15% of the project's total budget.16 In our study, we found the cost of inspection from one of the three perspectives to be approximately 4.2% of the total cost of requirements development. If we consider all three perspectives and assume the cost to remain constant during the other lifecycle phases, the cost of inspections for our project falls within these limits.
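
The following sketch works through the cost-efficiency formula above using the case study's mean values from Table 4. The hourly labor rate is a hypothetical placeholder chosen so that the result lands near the reported US $10 per defect; the article does not disclose the actual rates used.

# Worked example of: cost-efficiency = cost of inspection / defects found.
# The hourly rate below is HYPOTHETICAL; only the ~$10/defect result is
# reported in the article.

mean_defects_found = 12.16   # mean defects per inspector (Table 4)
mean_hours_spent = 3.67      # mean inspection time per inspector (03:40)
hourly_rate_usd = 33.0       # hypothetical cost of one inspector-hour

inspection_cost = mean_hours_spent * hourly_rate_usd
cost_efficiency = inspection_cost / mean_defects_found

print(f"Cost-efficiency: ${cost_efficiency:.2f} per defect found")  # ~ $10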

Efficacy of the PBR-UP technique

In order to measure efficacy (see Table 5), we noted the relationship between the number of actual defects uncovered by the inspection and the total number of reported defects. We chose to ignore simple defects in this analysis, since they were largely related to the definition of terms in the glossary; because the choice of any given term can always be debated, it was not up to us to decide whether those reported defects were actual defects.

Based on these assumptions, we found the efficacy of PBR-UP in this case study to be more than 70% on average. This high rate may be due to the fact that the inspectors knew they were under study; however, we believe that a high degree of efficacy can be achieved whenever inspectors closely and systematically follow the guidelines of the technique (notwithstanding any increase in inspection cost due to the additional time required to adhere to it).

Table 5: Efficacy of the PBR-UP technique
Inspector    False-Positive Defects    # of Defects*    % Efficacy
#1           0                         8                100%
#2           2                         5                60%
#3           2                         4                50%
#4           3                         12               75%
#5           1                         7                85%
#6           1                         2                50%
Average      1.5                       6.3              70%

* Not considering the simple defects

[Bar chart of Table 5 data]
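
To make the efficacy computation explicit, the sketch below reproduces the percentages in Table 5 from the false-positive and reported-defect counts. It is illustrative only; the data come straight from the table.

# Efficacy per Table 5: the share of reported defects (simple defects
# excluded) that turned out to be actual defects.

# (inspector, false positives, defects reported excluding simple defects)
table5 = [
    ("#1", 0, 8),
    ("#2", 2, 5),
    ("#3", 2, 4),
    ("#4", 3, 12),
    ("#5", 1, 7),
    ("#6", 1, 2),
]

for inspector, false_positives, reported in table5:
    efficacy = (reported - false_positives) / reported
    # Note: #5 prints 86% where Table 5 shows 85%; the original table truncates.
    print(f"{inspector}: {efficacy:.0%}")

mean_efficacy = sum((r - f) / r for _, f, r in table5) / len(table5)
print(f"Average efficacy: {mean_efficacy:.0%}")  # about 70%, as in Table 5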

Inspectors' statements

After the inspection process was concluded, we interviewed the inspectors in order to obtain further information about PBR-UP. Following are the main points we obtained from these interviews.

Inspector 1: "As for the technique, its use helped validate the quality of the documentation produced in the requirements phase. Nevertheless, it is unproductive to redo the analyst's work just to compare it later with the documentation presented. The rework is only beneficial if applied at a higher specification level, e.g., Façade Use Cases."

Inspector 2: "The technique is valuable for identifying, faster and more precisely, defects in the artifacts proposed by the RUP Requirements discipline. Another relevant point is that the technical community is largely made up of people coming from other paradigms (Structured Analysis, Data Analysis, etc.), which makes it difficult to change paradigms without adequate guidance. Little documentation has been published in this area (techniques for improvement, since RUP itself is not always enough), and that is what makes this a very important technique for enterprises to adopt in order to improve artifact quality."

Inspector 3: "Many attempts have been made to improve the quality of specification products, and most of them rest on professional experience, which leaves nothing that can be contested. Problems tend to decrease that way, but we cannot depend on the experience of a few to obtain quality in the systems we develop. The requirements inspection method proposed in the project, which I used in the use-case evaluation, pleased me for the following reasons: first, because it is backed not by individuals' experience but by well-defined rules; second, because it presents a simple guide that any analyst can follow; and, finally, because the inspection products showed that applying the method provides a significant gain in time and in the quality of the specification."

Inspector 4: "Since inspections are usually done according to checklists, we often do not know why certain questions are, in fact, relevant to the quality of the model. When the inspection is based on scenarios, as in this project, the inspection process is easier to understand and the defects are easier to explain to the project analysts. Therefore, I thought the use of scenarios was quite appropriate. Though more time was needed to perform the inspections, the results were more reliable."

Conclusion

According to the results of our case study, the PBR-UP technique proved to be of significant value as an inspection method and, thus, as a means of improving software quality. The key positive aspects of this technique, as expressed in the interviews with the inspectors, include:

  • It provides consistent guidelines for use-case-oriented requirements inspection work.
  • It identifies aspects of the artifacts often overlooked by analysts.
  • It uncovers a large number of the defects generated during the requirements gathering process.
  • It provides a useful way to organize the inspection results.
  • The outputs of the PBR-UP process can be used by the analysts themselves to guide system development.
  • The technique is easy to learn.
  • Applying the technique spreads the culture of "search for quality" inside the organization.

Based on the feedback received, we also identified the following significant challenges for the PBR-UP technique:

  • The activity of designing high-level models should be undertaken at only a moderate level of detail, since it increases the time required to prepare for the inspection meeting.
  • To inspect requirements using the PBR-UP technique requires prerequisite knowledge of RUP and UML.

Although the Software Inspection Registry form includes a catalog of the main defect types, there is always the possibility of finding new types, because each inspector brings specific knowledge that influences the kinds of defects he or she reports. While applying the technique, we observed the identification of defects different from those catalogued in the form. Such defects need to be discussed with the document's author to be well understood. Moreover, many false-positive defects derive from the inspector's misinterpretation of domain issues he or she does not fully understand. It is up to the moderator to resolve any such doubts between inspectors and producers during the inspection meeting. We therefore do not consider the inspection meeting optional; on the contrary, the experience acquired in this case study leads us to recommend holding inspection meetings.

As we have applied PBR-UP, we have acquired further empirical evidence about the efficacy of the inspection process in identifying defects. For example, when we use the System Analyst's perspective, we focus on finding certain classes of defects, such as overlooked functionality or relationships with actors that were not made evident. Focusing the inspection process on a specific point of view makes it more productive, since a usually ad hoc evaluation is replaced by a well-driven and controlled process. With the scenarios proposed by PBR-UP, the inspectors were able to identify the majority of defects while preparing for the inspection meeting. This improves the inspection's productivity, since the meetings can be used simply to present and consolidate the defects found. The artifacts produced during the inspection process not only help evaluate the artifacts under inspection, but also provide a basis for documents to be created by the analyst, enabling an exchange of ideas helpful to system development.

The process's efficiency clearly depends on the quality of the specification and the experience of the reviewer. Requirements templates, which reflect tested procedures for producing quality requirements, can help in this regard: they aid in validating the requirements by inducing the analyst to provide more complete information. Placing more experienced analysts in the role of reviewers sometimes leads to a review based on their intuition or prior knowledge rather than on the scenarios provided, which can give an incorrect perception of the usefulness of the inspection approach. In such cases, the objective should be to learn from the experienced reviewers, taking advantage of their accumulated experience to improve the review procedures.

Notes

1 See Barry Boehm, "Industrial Software Metrics Top 10 List." IEEE Software September 1987, pgs. 84-85.

2 See Dean Leffingwell and Don Widrig, Managing Software Requirements: A Unified Approach. Addison-Wesley, 2000.

3 See Victor R. Basili, "The Empirical Investigation of Perspective-Based Reading," University of Maryland, College Park, MD, USA, 1996; and Basili et al., "Lab Package for the Empirical Investigation of Perspective-Based Reading," 1997. Available at: http://www.cs.umd.edu/projects/SoftEng/ESEG/manual/PBR_package/intro_index.html

4 See L. Briand, K. El Emam, and W. L. Melo, AINSI -- An Inductive Method for Software Process Improvement: Concrete Steps and Guidelines. ESI-ISCN'95: Measurement and Training Based Process Improvement. Vienna, Austria, 1995.

5 See Rini van Solingen and Egon Berghout, The Goal/Question/Metric Method. McGraw-Hill, 1999.

6 Oliver Laitenberger, An Encompassing Life-Cycle Centric Survey of Software Inspection. International Software Engineering Research Network 1998. Available at: http://www.iese.fhg.de/network/ISERN/pub/

7 As described by Jeffrey Carver, "Impact of Background and Experience on Software Inspections." Proposal for PhD thesis, University of Maryland 2000.

8 A discipline is a collection of related activities that are related to a major "area of concern." The disciplines in RUP include, among others: Business Modeling, Requirements, Analysis/Design, Implementation, and Test.

9 For a complete list of roles and artifacts see the RUP Requirements Discipline Activities, Artifacts, and Workflow.

10 A complete description of all scenarios, review forms, and questionnaires can be found in: Paulo Costa, "Um Processo de Inspeção Utilizando Leitura Baseada em Perspectiva Aplicado à Análise de Requisitos do Unified Process" [An Inspection Process Using Perspective-Based Reading Applied to the Requirements Analysis of the Unified Process], Master's dissertation, Catholic University of Brasilia, DF, Brazil, 2002.

11 Costa, 2002.

12 According to RUP: "The Vision [document] defines the stakeholders' view of the system to be developed, specified in terms of the stakeholders' key needs and features and must contain an outline of the envisioned core requirements."

13 The results of a study that compared inspections done with checklists versus perspective-based reading can be found in: Oliver Laitenberger, "An Encompassing Life-Cycle Centric Survey of Software Inspection." International Software Engineering Research Network 1998. Available online (2002) at: http://www.iese.fhg.de/network/ISERN/pub/

14 Basili, 1996.

15 Marilyn Bush, "Improving Software Quality: The Use of Formal Inspections at the Jet Propulsion Laboratory." In Proceedings of the 12th International Conference on Software Engineering, pgs. 196-199, 1990.

16 Karl E. Wiegers, Improve Quality Through Software Inspections. 1999. Available online (2002) at: http://www.processimpact.com/
