- What is a supplementary specification?
- Classifying supplementary requirements using FURPS+
- Classifying requirements
- Capturing supplementary requirements
- Analysis mechanism
- Design mechanism
- Implementation mechanism
- Supplementary requirement dichotomy
- Eliciting supplementary requirements
- The questionnaire
- The questionnaire and RUP
- Common pitfalls to avoid
- The "shopping cart" mentality
- Supplementary requirement questionnaire is technical
- All requirements are equal
- The requirement "parking lot"
- Requirements are not measurable
- Lack of time
- Lack of ownership
- Talking to the wrong people
- Requirements are too general
- Questions and answers
- Downloadable resources
What, no supplementary specification?
As a member of Rational's field organization, I spend the majority of my time working with customers who are adopting the Rational Unified Process® (RUP®) and Rational toolset, primarily in architecture-centric initiatives. I often go along to organizations with the mindset of an architect and look for the architectural requirements. On some of those visits, I have looked at a customer's set of artifacts and discovered that the gathering of architecturally significant requirements is less than optimal.
This article is the direct result of a request from a customer that I met last year who felt that some emphasis should be placed on the gathering of architectural requirements. This was partly because organizations that use use-case modeling are familiar with gathering primarily functional requirements, but there are not many techniques for capturing nonfunctional requirements.
In addition to explaining the purpose of the supplementary specification in this article, I also emphasize the important role that this artifact has to play within the development process in helping capture these architecturally significant requirements.
I will start by explaining what the supplementary specification is, along with the intent of the content of the supplementary specifications artifact. Next I want to think about and discuss how we might classify the elements that we see within the supplementary specifications. I will briefly explain the acronym FURPS+, which is a way of classifying the contents within the supplementary specification. We will use that classification to lead into a semi-systematic approach for ensuring that the supplementary specifications artifact is actually populated, because many organizations struggle to populate this artifact. Finally, I will review some common pitfalls that I have encountered in using this approach in organizations that I have worked with.
What is a supplementary specification?
|Figure 1: What is a Supplementary Specification?|
A Supplementary Specification is a requirements artifact in the RUP®. Figure 1 shows the artifact overview for the requirements discipline in RUP 2003. On the top right of the diagram, we see the supplementary specifications artifact. Notice that its name has changed to plural in RUP 2003. I often find that because RUP used to use the singular term, people took this to mean that we should use just one physical document to capture some of the requirements. In reality, just to manage the communication of the information, it can make a lot of sense to separate the information into separate physical documents. This is one of the reasons that we changed the name.
Classifying supplementary requirements using FURPS+
|Figure 2: Relationships between requirements artifacts|
Figure 2 shows the overall relationship of supplementary specifications with other requirements artifacts. Typically, within an iteration, we start by identifying and capturing stakeholder requests in an artifact of the same name. We then develop our vision document, which contains needs and features. The supplementary specifications artifact and the use-case model define the requirements on the system we are going to develop, based on the content of the vision document.
Another common misconception that I come across is the idea that all of the functional requirements live in the use-case model, and all of the nonfunctional requirements live in the supplementary specifications. This is not actually the case. As we will see later, it is appropriate for certain nonfunctional requirements to reside with a particular use case if they apply only to that use case. Otherwise, if we consider those requirements to be system-wide, we over-engineer the system. Similarly, some functional requirements, such as online help, are system-wide, and should therefore be placed in the supplementary specifications.
I have found that many organizations end up in what amounts to a brainstorming session when they try to capture things such as nonfunctional requirements. These organizations do not take a systematic approach to gathering the requirements. The technique outlined in this article attempts to address this.
There are a number of ways of thinking about system-wide requirements. A system-wide requirement could represent a technical capability such as Web access. It could be a technical constraint, such as using a relational database in our solution. It could also be a technical quality, such as the availability of the system as a whole.
Requiring the product to support multiple human languages is an example of a system-wide requirement. Another example is specifying that a relational database will handle the persistence. Other requirements might include that the database be DB2; that the system run 7 days a week, 24 hours a day; that an online help system be included; or that all presentation logic be written in Visual Basic. These are all system-wide requirements. It is arguably very difficult to assign any of these requirements to an individual use case. Therefore, you would expect to see these things articulated in the Supplementary specifications artifact when using RUP.
Next we will take a look at the classification that we use for these different types of requirements. Examining the sample requirements just mentioned, we can see that some requirements are functional and others nonfunctional, and also that some requirements are technology-independent and others technology-specific. And so we need a classification that will allow us to think about these different aspects of requirements. The template for the supplementary specifications artifact in RUP uses a classification that goes by the acronym FURPS+. This classification was developed by Robert Grady at Hewlett Packard.
|Figure 3: Classifying requirements with "FURPS+"|
Figure 3 shows the meaning of each letter in the FURPS+ acronym: functionality, usability, reliability, performance, and supportability, with the plus (+) used to represent all other requirements, such as design or implementation constraints. As we can see illustrated in Figure 3, the FURPS+ classification addresses both functional and nonfunctional requirements. A particularly nice aspect of this classification is the emphasis placed on understanding the different types of nonfunctional requirements.
As a side note, the "+" in the FURPS+ acronym is generally used to represent constraints such as "the system will use a relational database." Many people debate, with an almost religious fervor, whether a constraint is a requirement or not. In this context, we will just assume that we need to include constraints along with the nonfunctional requirements.
The F in the FURPS+ acronym represents all the system-wide functional requirements that we would expect to see described. These usually represent the main product features that are familiar within the business domain of the solution being developed. For example, order processing is very natural for someone to describe if you are developing an order processing system. The functional requirements can also be very technically oriented. This is another reason why people have trouble capturing them and stating them as requirements -- people may be very familiar with the business domain concepts, but not so familiar with technology concepts. System-wide functional requirements that you may also consider architecturally significant include auditing, licensing, localization, mail, online help, printing, reporting, security, system management, and workflow. Each of these may represent functionality of the system being developed, and each is a system-wide functional requirement.
Usability includes looking at, capturing, and stating requirements based around user interface issues -- things such as accessibility, interface aesthetics, and consistency within the user interface.
Reliability includes aspects such as availability, accuracy, and recoverability -- for example, the accuracy of computations, or the recoverability of the system from a shutdown or failure.
Performance involves things such as throughput of information through the system, system response time (which also relates to usability), recovery time, and startup time.
Finally, we tend to include a section called supportability, where we specify a number of other requirements such as testability, adaptability, maintainability, compatibility, configurability, installability, scalability, localizability, and so on.
The "+" of the FURPS+ acronym allows us to specify constraints, including design, implementation, interface, and physical constraints.
A design constraint, as the name implies, limits the design -- for example, requiring a relational database stipulates the approach that we take in developing the system.
An implementation constraint puts limits on coding or construction -- for example, required standards, platform, or implementation language. Statements such as "My organization uses Visual Basic for the user interface" or "We are going to build these in the J2EE platform" are implementation constraints. While some of these admittedly impact the design also, in this context they are considered to be implementation constraints.
An interface constraint is a requirement to interact with an external item. When you develop within an enterprise, quite often you have to interact with external systems. In these cases, you might want to describe the nature of an interface to an external system -- in other words, you are describing protocols or the nature of the information that is passed across that interface.
Finally, physical constraints affect the hardware used to house the system -- for example, shape, size, and weight. I worked on one system that was to be deployed within a tank (an armored vehicle type of tank!), and which had very tight physical limits in terms of space. We developed this system, did all the modeling and so on, and it turned out that the processor size that we were using would require a fan and a box that would be too large to fit in the space we were given. This is why physical constraints are sometimes necessary -- especially for military or aviation applications.
We can apply these requirement classifications to the examples we saw earlier. The requirement that a product support multiple human languages is a supportability requirement. Needing persistence to be handled by a relational database is a design constraint. Stipulating a DB2 database is an implementation constraint. Needing the system to run 7 days a week, 24 hours a day, is a reliability requirement. Requiring an online help system is a functional requirement. Finally, standardizing on Visual Basic for all presentational logic is an implementation constraint.
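To make the classification concrete, here is a minimal sketch in Python that tags each of the example requirements above with its FURPS+ category. The enum names and the `constraints` helper are illustrative, not part of any RUP or FURPS+ standard:

```python
from enum import Enum

# FURPS+ categories; the constraint subtypes are grouped under the "+"
class Furps(Enum):
    FUNCTIONALITY = "F"
    USABILITY = "U"
    RELIABILITY = "R"
    PERFORMANCE = "P"
    SUPPORTABILITY = "S"
    DESIGN_CONSTRAINT = "+design"
    IMPLEMENTATION_CONSTRAINT = "+implementation"
    INTERFACE_CONSTRAINT = "+interface"
    PHYSICAL_CONSTRAINT = "+physical"

# The example requirements from the text, classified as described above
examples = [
    ("Product supports multiple human languages", Furps.SUPPORTABILITY),
    ("Persistence is handled by a relational database", Furps.DESIGN_CONSTRAINT),
    ("The database is DB2", Furps.IMPLEMENTATION_CONSTRAINT),
    ("System runs 24 hours a day, 7 days a week", Furps.RELIABILITY),
    ("An online help system is included", Furps.FUNCTIONALITY),
    ("All presentation logic is written in Visual Basic", Furps.IMPLEMENTATION_CONSTRAINT),
]

def constraints(reqs):
    """Return only the '+' (constraint) requirements."""
    return [text for text, cat in reqs if cat.value.startswith("+")]
```

Separating the "+" subtypes this way makes it easy to pull out just the constraints when, for example, you want to review them against a candidate design.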
Notice the relationship between the second and third examples: the second says that persistence will be handled by a relational database, and the third says that that database will be DB2. It is therefore useful to understand the relationships between requirements. In addition, if we have an idea of how a requirement actually gets translated as we move into design and implementation, then this can help us ask the right questions of our stakeholders. Architectural mechanisms provide us with a framework for considering both of these aspects, and we consider these next.
Capturing supplementary requirements
|Figure 4: Architectural mechanisms|
The RUP makes mention of something called an architectural mechanism, which in simple terms represents a common solution to a frequently encountered problem. As such, architectural mechanisms are often used to realize architectural requirements. Figure 4 shows three categories of architectural mechanisms, and also some examples of architectural mechanisms expressed as analysis mechanisms, design mechanisms, or implementation mechanisms.
An analysis mechanism represents a common solution in an implementation-independent manner. The figure shows persistence as the analysis mechanism.
A design mechanism is a refinement of an analysis mechanism. A design mechanism assumes some details of the implementation environment, but it is not tied to a specific implementation. In our example, the persistence analysis mechanism may be realized as a design mechanism such as an RDBMS or an OODBMS.
Finally, an implementation mechanism is a refinement of a design mechanism, and specifies the exact implementation of the mechanism. In our example, an RDBMS may be implemented using either DB2 or Oracle.
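The refinement chain from analysis mechanism to design mechanism to implementation mechanism can be sketched as a pair of simple mappings. This is only an illustration of the relationship described above, using the persistence example; the dictionary entries are the ones named in the text:

```python
# Each analysis mechanism maps to the design mechanisms that can realize
# it, and each design mechanism maps to candidate implementation mechanisms.
analysis_to_design = {
    "persistence": ["RDBMS", "OODBMS"],
}
design_to_implementation = {
    "RDBMS": ["DB2", "Oracle"],
}

def refinements(analysis_mechanism):
    """List (design, implementation) choices reachable from an analysis mechanism."""
    result = []
    for design in analysis_to_design.get(analysis_mechanism, []):
        for impl in design_to_implementation.get(design, []):
            result.append((design, impl))
    return result
```

Walking these mappings for "persistence" yields the concrete choices (RDBMS realized by DB2 or Oracle), which is exactly the kind of fan-out that design and implementation constraints later narrow down.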
Later, we will be exploring a technique for ensuring that the supplementary specification is appropriately populated. In order to do that, we need to ask the right questions of our stakeholders, which requires a list of the sorts of things we should be asking. This technique assumes that there is a finite set of requirements to be considered when it comes to these architecture-centric requirements. When considering a particular business domain, you have a vast potential set of requirements to pick from. However, when it comes to supplementary requirements, I recommend using a finite set as a checklist. Then, when you talk to stakeholders, you can use this checklist to ensure that topics such as reliability, availability, and all of these sorts of things are discussed.
|Figure 5: Analysis mechanisms|
You can also describe that list in terms of analysis mechanisms, as shown in Figure 5. I recommend that you develop your own version of this list, which you can use when going to your stakeholders in order to make sure that you get answers that allow you to then understand whether or not each of these elements is required in the system you are delivering. You have probably come across most of the analysis mechanisms listed in the figure. The most interesting item on this list is mega-data, which many people do not understand. Mega-data is a mechanism for retrieving information in a distributed environment, where a lot of information needs to be handled. If the database contains a million customer records, what are the implications of scrolling through a list of customers in a user interface on a client workstation? You need some mechanism for retrieving subsets of the information you need in order to support the UI.
|Figure 6: Requirements and mechanisms|
Figure 6 summarizes the relationships between requirements and mechanisms. Once we capture the FURPS requirements, they are fairly easy to understand. These requirements will obviously influence the work we do in analysis, and they have a strong bearing on the analysis mechanisms that we will create. For example, if we have a requirement for a distributed environment, then we might see an analysis mechanism to support distributed communication. This is then refined further as we move into design and again as we move into implementation. The columns at the top of the figure almost map to the disciplines of the RUP -- I have split out analysis and design, however, in order to clarify the different categories of mechanisms. If you think next about the other requirements that you are specifying (the "+" in the FURPS+ acronym) -- such as constraints -- then they also fit in the requirements space.
In this model, the idea is that things should come together at certain points. For example, imagine that I have a requirement for persistence as an analysis mechanism. When I consider the design mechanisms -- whether I should use a relational or an object database -- then I should take into account all of the specified design constraints as well. The same goes for the implementation constraints. This model shows us how all of the different types of requirements impinge on the process as we move forward.
Supplementary requirement dichotomy
Before I move on to a systematic approach to gathering architectural requirements, I want to note an observation, which I call the supplementary requirement dichotomy: supplementary requirements are difficult to gather, yet they drive the foundations of our system -- the architecture.
In my experience of working with quite a few organizations, supplementary requirements are difficult to gather. What we might call "use case" requirements are typically more visible, because they are the sorts of things that you can talk to business people about. When you ask stakeholders what functionality they want in the system, rather than talking about system qualities such as reliability, they quite often talk instead about aspects such as order processing or credit card payment. Many supplementary requirements are unfamiliar to stakeholders. This means that when you talk with them about availability, or performance, or scalability, or localizability, they struggle to answer the question. This suggests that we need to get that information by another route, by asking questions whose answers allow us to specify the requirements on their behalf. In addition, unlike use-case modeling, there are few techniques for gathering supplementary requirements. They are difficult to gather, and yet they are really important.
At the same time, supplementary requirements are important because they may drive a system's foundations -- in other words, its architecture. Because these supplementary requirements are system-wide, they are by definition extremely important. In addition, they can be more significant than the requirements specified in use cases. I often use the example of a life support machine: if I am attached to a life support machine that you are going to build, I would hope that availability was pretty high on the list and prioritized appropriately. These kinds of requirements can be much more important than those captured in use cases. Again, this brings us back to the dichotomy: gathering the requirements is hard to do, and yet it is really important.
Eliciting supplementary requirements
|Figure 7: Eliciting supplementary requirements|
The technique that I recommend for eliciting supplementary requirements involves five steps, which are listed in Figure 7. First, as we discussed earlier, you need to maintain what you might consider a complete list of supplementary requirements. This list can be used as a starting point for the supplementary requirements that you want to consider.
Second, having understood what requirements you are interested in capturing, you can think about the sort of questions you might want to ask your stakeholders before you talk to them. Your requirements may also affect the nature of the questions you want to ask -- for example, if your system is going to implement licensing, then you might talk to product management. The way you ask a product manager a question may be very different from the way you ask an architect a question about scalability. You need to formulate one or more questions that will help you derive the stakeholder requests that will drive your requirements.
Third, we need to make visible the impact of answering a question one way or another, because each answer carries a cost that comes into play when we prioritize the requirements.
Next, we simply capture the responses to each question, and finally formally assign a priority or weighting to each response.
|Figure 8: The "supplementary requirement questionnaire"|
Figure 8 shows an example of using this technique. First, if we are trying to get input on the licensing requirement from our stakeholders, we might ask questions such as "Will the system or parts of the system be licensed?" and "Are there any constraints on the mechanism used to provide licensing capability?" These questions actually hint at two classification areas: the licensing functionality itself, as well as any design or implementation constraints.
We then state the impact, because it is closely related to the priority. If a stakeholder is going to specify the priority, they need to understand aspects like the cost of including something in the system. In this example, we add that the greater the sophistication of the mechanism, the longer the time to market and the greater the long-term maintenance cost of implementing that licensing capability. One pitfall of this process is that stakeholders may treat these questions as a shopping list -- "Yes, I will take that, I will have one of those as well." However, if you provide an impact statement, then they realize the cost of selecting some of these capabilities.
Next, obviously, you obtain answers to the questions, and then specify a priority so that you can actually start to make trade-offs and understand which elements are important and need to be in the system.
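The five steps can be captured in a single record per question. The following sketch models one questionnaire entry; the field names and the licensing example content are illustrative (drawn from the figure), not a prescribed RUP format:

```python
from dataclasses import dataclass

@dataclass
class QuestionnaireEntry:
    """One row of a supplementary requirement questionnaire (illustrative)."""
    classification: str    # FURPS+ category, e.g. "Functionality"
    stakeholder_role: str  # who should answer, e.g. "Product Manager"
    question: str
    impact: str            # cost of a "yes" answer, stated up front
    answer: str = ""       # step 4: captured during the interview
    priority: str = ""     # step 5: assigned once the answer is known

licensing = QuestionnaireEntry(
    classification="Functionality",
    stakeholder_role="Product Manager",
    question="Will the system or parts of the system be licensed?",
    impact=("The more sophisticated the licensing mechanism, the longer "
            "the time to market and the higher the maintenance cost."),
)

# Steps 4 and 5: capture the response, then weight it
licensing.answer = "Yes"
licensing.priority = "Medium"
```

Because the impact travels with the question, the stakeholder sees the cost at the moment they answer, which is what keeps the priority honest.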
The questionnaire and RUP
|Figure 9: The questionnaire and RUP|
|(click here to enlarge)|
Figure 9 shows the relationship between this questionnaire and RUP since the technique being presented is undertaken in the context of an overall software development process. As Figure 9 shows, the RUP requirements discipline includes a workflow detail called Understand Stakeholder Needs. If you double-click that workflow detail, you see the diagram shown on the right in the figure.
|Figure 10: Activity -- Elicit stakeholder requests|
That breakout diagram includes an activity called Elicit Stakeholder Requests, which Figure 10 shows in more detail. The outputs from that activity include the stakeholder requests artifact. This is the RUP artifact where the questioning technique comes in. This technique is based on developing a number of questions that you might ask the stakeholder. We could call the input to this process a supplementary requirement questionnaire. In RUP, this is not an artifact, but rather just one of the techniques that you use when eliciting stakeholder requests. In the stakeholder requests artifact, we might expect to see a completed questionnaire of the nature that I am hinting at.
|Figure 11: Activity -- Elicit stakeholder requests|
Figure 11 shows how we might use IBM® Rational® RequisitePro® to implement this questionnaire. Using RequisitePro offers a couple of benefits, including traceability between the supplementary requirements and the stakeholder requests that are captured in there.
In the questionnaire shown in the figure, I have added a new requirement type called supplementary stakeholder requests, or SSTRQ. I have also assigned some attributes to that requirement type. Overall, the table includes essentially the same set of columns that we saw earlier.
Given the requirement type, you can actually specify questions in RequisitePro. The figure shows an example where we are considering licensing requirements. One column allows us to specify a FURPS classification, and we can specify functionality and design constraints as well. After that information, we enter the question we might want to ask, the impact, and then the answer and the priority we assign.
At first glance, it might look like we are simply using RequisitePro as a spreadsheet to capture information. One advantage of RequisitePro, however, is that it allows us to filter. For example, if I am going to talk to the software architect today, I can filter on software architect and see only those questions relevant to that role. As you start to develop this table, note that you quite often do these stakeholder request gathering exercises in workshops, and it can be very difficult to get everybody who might be interested in providing input to your system into a single workshop. You could have 30 people present, each of whom is probably only interested in a fraction of the questions that you are asking. Filtering can therefore be an effective and efficient technique for asking the right kinds of questions.
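The role-based filtering described above amounts to selecting entries by a stakeholder-role attribute. Here is a minimal sketch, independent of any tool, with invented sample questions:

```python
# Select only the questionnaire entries addressed to one stakeholder role,
# so an interview covers just the relevant questions.
def questions_for_role(entries, role):
    return [e for e in entries if e["role"] == role]

entries = [
    {"role": "Software Architect", "question": "What scalability is required?"},
    {"role": "Product Manager",    "question": "Is licensing required?"},
    {"role": "Software Architect", "question": "Which platforms must be supported?"},
]

architect_questions = questions_for_role(entries, "Software Architect")
```

In a workshop, running this filter per attendee is what lets each of the 30 people answer only their fraction of the list.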
|Figure 12: Activity -- Find actors and use cases|
Once we gather our stakeholder requests, a related activity called Find Actors and Use Cases might take them as input, as shown in Figure 12. This activity might result in an update to two of the key requirements artifacts: the Use-Case Model and the Supplementary Specifications.
Next I want to cover a very simple example in my RequisitePro database. It is really important to understand that when stakeholders provide stakeholder requests, they are just that -- requests, not requirements. You use the stakeholder requests to find actors and use cases in order to come up with a definitive statement of the requirements on the system, as specified in the Use-case model and the supplementary specifications. This is the definitive statement of what the system will do.
|Figure 13: Database showing requirements derived in the Find actors and use cases activity|
|(click here to enlarge)|
In the RequisitePro database shown in Figure 13, we see a representation of requirements derived by executing this activity. Because we have a separate requirement type for supplementary stakeholder requests, however, I think it is useful to also provide a traceability matrix. This allows you to indicate which supplementary requirements you derived from which stakeholder requests. I can then, for example, change the answer to "is localization capability required?" from yes to no.
|Figure 14: Suspect link|
|(click here to enlarge)|
At that point, a suspect link appears, as shown in Figure 14, which flags that someone has changed a request and that we therefore need to understand the impact on the requirements that we have derived. This RequisitePro feature is another obvious advantage of using this tool.
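The suspect-link behavior itself is simple to describe in code. This sketch is not RequisitePro's implementation, just an illustration of the semantics: each requirement records the request it is traced from, and changing a request marks every derived requirement suspect until someone reviews it. The identifiers are invented:

```python
class Traceability:
    """Toy traceability matrix with suspect-link marking (illustrative)."""

    def __init__(self):
        self.links = {}       # requirement id -> source request id
        self.suspect = set()  # requirement ids needing review

    def trace(self, requirement_id, request_id):
        self.links[requirement_id] = request_id

    def request_changed(self, request_id):
        # Flag every requirement derived from the changed request
        for req, src in self.links.items():
            if src == request_id:
                self.suspect.add(req)

    def review(self, requirement_id):
        # Clear the flag once the impact has been assessed
        self.suspect.discard(requirement_id)
```

Changing the localization answer from yes to no would flag every requirement traced from that request, which is exactly the review prompt you want.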
|Figure 15: Activity -- Find actors and use cases|
Once we finish executing the Find Actors and Use Cases activity, the supplementary stakeholder requests that we glean from the questionnaire should result in requirements that are in the right places. Again, people tend to think that all of the functional requirements are in the use-case model, and all nonfunctional requirements are in the supplementary specifications. Figure 15 explains where things should live. I deliberately show the system-wide functional requirements in the supplementary specification on the left side -- functionality in the supplementary specifications. On the right side, the use-case specification includes a section called Special Requirements, which is intended to contain any nonfunctional requirements that are use-case specific. It is important to make the requirements as specific as possible: if a particular response time is tied to a particular use case, then you should put it with that use case. If you instead put it in the supplementary specifications, which apply to the system as a whole, you might over-engineer the system and try to achieve that response time for everything you do, rather than just for the particular interaction that you intended.
|Figure 16: Automating the Supplementary Specification|
Another advantage of RequisitePro is that it allows you to generate a document containing your supplementary requirements. As Figure 16 illustrates, I have written a SoDA script that allows me to generate a SoDA report based on the information described in RequisitePro. Although the supplementary requirements that I derived from stakeholder requests are actually all in RequisitePro, some people want to see and review a physical Microsoft Word document, rather than having to look at a RequisitePro database. Elements are generated into the appropriate sections based on their FURPS+ classifications in RequisitePro. I believe in capturing information once and once only, and making sure that it is in the right place. Placing the requirements in RequisitePro lets me create a single source of information while creating derived documents.
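The single-source idea behind that SoDA report can be sketched without the tooling: group the requirements by their FURPS+ classification and emit one section per category. This generates plain text rather than a Word document, and the sample requirements are the ones used earlier in the article:

```python
# Group requirements by FURPS+ classification and emit one section per
# category, in the spirit of the generated report described above.
def generate_supplementary_spec(requirements):
    sections = {}
    for text, category in requirements:
        sections.setdefault(category, []).append(text)
    lines = ["Supplementary Specifications", ""]
    for category in sorted(sections):
        lines.append(category)
        for text in sections[category]:
            lines.append("  - " + text)
        lines.append("")
    return "\n".join(lines)

spec = generate_supplementary_spec([
    ("An online help system is included", "Functionality"),
    ("The product supports multiple human languages", "Supportability"),
    ("The system runs 24 hours a day, 7 days a week", "Reliability"),
])
```

Because the document is derived, not hand-maintained, the requirements live in exactly one place and every regeneration reflects the current database.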
Common pitfalls to avoid
I thought it would be worth going through some of the pitfalls that I come across in organizations using this technique.
The "shopping cart" mentality
|Figure 17: Common pitfalls -- The "shopping cart" mentality|
The first one is the "shopping cart" mentality. You are interviewing a stakeholder, trying to gather their requirements, and you ask about potential specific needs. As Figure 17 shows, the stakeholder can easily end up treating your questions as a shopping list -- they want everything on it. You can avoid this trap by ensuring that stakeholders understand the costs of their selections, using the impact statement in the questionnaire itself.
Supplementary requirement questionnaire is technical
Another potential pitfall is the perception that the questionnaire itself is technical. When you communicate with the stakeholders and explain to them that you want to try to get some of these supplementary requirements from them, it is important for them to understand the value of doing this. They might perceive this questionnaire as being a technical thing and not really of interest to them. However, when you suggest a use-case modeling workshop, they tend to understand some of the key concepts and the value of the workshop, and are more open to attending. Therefore, it is important to emphasize the value of the questionnaire. It might help stakeholders to appreciate why this is important if you explain how the results of your questioning will actually be used in the remainder of the development process.
All requirements are equal
Another common problem is that, if you leave stakeholders to their own devices, they generally rate all of the requirements as high priority, which makes the entire exercise useless because it is then not possible to perform any requirement trade-offs. The solution here is simply to make sure that the requirements are prioritized in the right way.
The requirement "parking lot"
I have also gone into organizations, suggested that they go through this process, and then watched as they go through the motions of gathering the information but do not actually do anything with it. It is very important to understand why you are doing this, and how the results are going to help you ensure that you have the right requirements.
Requirements are not measurable
Sometimes requirements gathering results in requirements that are not measurable. This is a problem with requirements gathering in general, of course, not just this technique. Obviously, it is important that requirements be unambiguous and as measurable as possible.
Lack of time
Lack of time is another problem that I have seen. Stakeholders may decide that they have done so much use-case modeling work that they do not really have time for other things, and just want to move on. Remember, as I pointed out earlier, sometimes the supplementary requirements can actually be more important than some of the use-case requirements. For this reason, you need to ensure that you have sufficient time for performing this exercise.
Lack of ownership
I added lack of ownership as a pitfall fairly recently, after seeing someone apply my sample RequisitePro project as-is. It is important, however, to take ownership of the process. Make sure that your questions are in there. Apparently while the person who executed my questionnaire without changes was talking to a stakeholder, the stakeholder said that they were not quite sure what the person meant by a given question, and this person said, "Yeah, neither do I, so let's move on to the next one." It gives the wrong impression of the value of the questionnaire if you do not even understand the content of it yourself. This is a technique, but it is important that you customize it for your organization and project.
Talking to the wrong people
I also sometimes find people asking questions of the wrong people. Once again, we include the role attribute to ensure that the right questions are asked of the right people.
Requirements are too general
Figure 18: Common pitfalls -- Requirements are too general
Finally, it is important to make sure that when you are capturing supplementary requirements, you put them in the right place. Figure 18 shows three examples, which range from the most applicable to the system as a whole, to the most specific.
The requirement at the top of the table includes a statement about support for multiple human languages. This supplementary requirement is relevant to the system as a whole, so it should reside in the Supplementary Specification.
At the next level of generality, a requirement applies to a use case as a whole. The second row in the table specifies that any order that is processed can contain up to 10,000 items. This requirement would reside in the special requirements section of the use-case specification.
Finally, you find the most specific requirements when something applies to a particular flow of events within a use case. The bottom row in the table specifies that if in the basic flow of the use case the plane undercarriage fails to engage, then an alarm will be sent to the central monitoring station in less than one second. This requirement is actually more specific than the use case - it is specific to a flow of events. You need to document it at the appropriate point, which, in this case, is in a flow of events section in the use-case specification. Whatever you do, you need to indicate each supplementary requirement's level of applicability.
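The three levels of applicability can be made explicit by recording a scope attribute on each supplementary requirement, so that it is obvious where each one should be documented. A minimal Python sketch, using the three example requirements from Figure 18 (the class and attribute names are hypothetical):

```python
from dataclasses import dataclass
from enum import Enum

class Scope(Enum):
    """Where a supplementary requirement should be documented."""
    SYSTEM = "Supplementary Specification"
    USE_CASE = "Special requirements section of the use-case specification"
    FLOW = "Flow of events section of the use-case specification"

@dataclass
class SupplementaryRequirement:
    text: str
    scope: Scope

requirements = [
    SupplementaryRequirement(
        "Support multiple human languages", Scope.SYSTEM),
    SupplementaryRequirement(
        "An order can contain up to 10,000 items", Scope.USE_CASE),
    SupplementaryRequirement(
        "If the undercarriage fails to engage, alarm the central "
        "monitoring station in under one second", Scope.FLOW),
]

for req in requirements:
    print(f"{req.text} -> document in: {req.scope.value}")
```

However you record it, the point is the same as in the article: each supplementary requirement's level of applicability should be stated explicitly, not left for readers to infer.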
To summarize, this article covered five main points. First, you can classify supplementary requirements using FURPS+; whether you use this scheme or another, it is important to have a classification that helps you tease apart the nonfunctional requirements. Second, understanding the role of architectural mechanisms helps you think about the right sorts of questions to ask the stakeholders, and we looked at a mental model for doing that. Third, a Supplementary Requirements Questionnaire can help ensure that your requirements gathering is systematic. Fourth, use appropriate automation in gathering supplementary stakeholder requests and in producing the Supplementary Specification. Finally, avoid the common pitfalls.
Questions and answers
What if you have a supplementary requirement that applies to, say, two use cases, placing it in between system-wide and being use case specific?
I suggest that you actually document the requirement in the Supplementary Specification, because that allows you to document it just once. However, it is important to make it very clear that the requirement applies only to those particular use cases. The relevant use-case specifications should also refer back to the requirement described in the Supplementary Specification.
What if you start to develop your architecture while going through this process, and that actually impacts the requirements that you have defined?
This really does happen, and it emphasizes the value of an iterative process. The process should result in the requirements being reviewed. One of the reasons we do iterative development is that we uncover flaws, not only in the design, but also perhaps in the requirements. Designing a time machine is one example. I could define the requirements for a time machine and design some transporter mechanisms, but when it comes to the implementation and the architecture and I start to try to build this thing, it might not hold water. In the next iteration, I would obviously review the requirements and notice that it is just a crazy idea. The iterative nature of RUP actually allows you to do that, and it naturally happens during development when using RUP.
How should the special requirements section of a use case specification be documented?
You could actually use the FURPS+ classification (in practice the URPS+ classification, without the "F", since functionality is described elsewhere in the use-case specification) within the special requirements section of an individual use case if you want a systematic breakdown of the nonfunctional requirements that apply to that use case. So, once again, that particular classification scheme can be applied usefully here as well.
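As a small illustration of that URPS+ breakdown, the special requirements section of one use case could be organized as a category-to-requirements mapping. This is a hypothetical sketch for an order-processing use case; the requirement texts are invented for illustration:

```python
# URPS+ breakdown of one use case's special requirements
# (FURPS+ minus "F", since functionality lives in the use-case flows).
special_requirements = {
    "Usability":       ["Order entry must be completable without a mouse"],
    "Reliability":     ["Order processing survives a single node failure"],
    "Performance":     ["An order of 10,000 items is saved within 5 seconds"],
    "Supportability":  ["Order validation rules are configurable at runtime"],
    "+ (constraints)": ["Must use the corporate persistence mechanism"],
}

for category, items in special_requirements.items():
    for requirement in items:
        print(f"[{category}] {requirement}")
```

The headings do the same job here as in the Supplementary Specification: they make it easy to see at a glance whether any category of nonfunctional requirement has been overlooked for this use case.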