When planning and executing a RUP project, most if not all development teams face an interesting question: "Should we allow our iterations to 'overlap'?" During my service engagements, I frequently hear this question from my clients, and it often comes up in the external RUP forum (which you can subscribe to by sending an email with 'subscribe rup' in the body to email@example.com).
This is one of the controversial topics that can cause a mile-long thread. IBM Rational's formal position on this is that you cannot overlap your iterations. Yet many successful projects claim to have done so. It leads many to wonder if the "no overlapping iterations" policy is just a doctrinaire position on a practical concept, or whether there's actually some reasoning behind the policy.
This article will show what is meant by "overlapping iterations" and why the question of using this technique comes up in the first place. I will offer some advice from different perspectives and conclude with my personal recommendation.
Before we start, let's agree on what overlapping iterations means. I have encountered other names for this concept, such as "extreme iterative," "staged iterations," and "stacked iterations."
Standard iterations in RUP can be visualized as shown in Figure 1 below. Three iterations, I1, I2, and I3, occur sequentially from left to right. Within each iteration, the disciplines R, D, C, and T (Requirements, Design, Code, and Test) execute sequentially from top to bottom. (While there are nine disciplines in the RUP, all of which would likely be executed in a typical iteration, I refer only to RDCT for the sake of brevity.)
Figure 1: Standard iterations. I1, I2, and I3 represent three iterations, occurring sequentially from left to right. Occurring sequentially from top to bottom, within each iteration, R, D, C, and T represent some of the standard disciplines executed in each iteration (Requirements, Design, Code, and Test).
Once the project manager (PM) starts to lay out a plan for a project's iterations, he or she is likely to ask: "What does my requirements team do after they hand off I1 requirements to the design team? I can't have them sitting around idly until I2, waiting for I1 to end." This problem ripples across all of the other teams as well: each team either sits idle after handing its work to its successors, waiting for the iteration to end, or waits for its predecessors to deliver artifacts to work on.
So some start asking, "Why can't the requirements team work ahead of the other teams?" When this thought is propagated to all of the other roles as well, we end up with something like the diagram in Figure 2.
Figure 2: Overlapping iterations
The basic idea of overlapping iterations is that the requirements team can work on the requirements for a future iteration, say iteration 3, while design is still working on the requirements from a previous iteration, say iteration 1 or 2.
If you ask about overlapping iterations in one of our chat forums, most IBM Rational consultants will tell you that this is not recommended by the RUP. Some may even go so far as to say it is flat out not allowed. Either way, it is safe to say that the official Rational position on this is not to do it. Yet the concept has value. There are risks to be handled, and some definitions to be cleaned up, but if managed carefully, overlapping iterations can be a useful approach to organizing a project.
An accomplished project manager and friend, John Stonehocker, frequently helps organizations adopt the RUP. He explained to me that the whole concept of overlapping iterations doesn't make sense if you define an iteration as a "timebox" in addition to a set of objectives.
As a timebox, the concept of a "future iteration" indeed makes no sense. Imagine you're working on requirements that won't be coded until a later timebox; technically speaking, that requirement work is still part of this iteration -- not the next one -- even if the code won't be finished in the current timebox. To work on a future iteration, you would literally need a time machine!
Figure 3 shows what a few iterations using a timeboxed-iterations approach might look like.
Figure 3: An illustration of using timeboxed iterations instead of overlapping iterations to allow roles to work ahead. The roles down the side are Requirements (R), Design (D), Code (C), and Test (T). The iterations I1, I2, and I3 show what use cases (UCs) that role might be working on.
In timeboxed iterations, an iteration is considered a set period of time, during which all the work is carefully planned. That is one difference between timeboxed and overlapping iterations. In the overlapping iterations approach, if the requirements team finishes UC1,2 in I1, they work ahead as far as they can before the iteration ends. In timeboxed iterations, we plan for UC1,2,3,4,5 for the requirements team during I1. By planning this up front, we can really start to understand our requirements team's capability. Did they run out of time? Did they have time to spare? Either way, we can refine our estimate for the next iteration and plan a more accurate number of use cases for the team to detail.
But consider Figure 3 from the standpoint of overlapping iterations, in which an iteration is treated as a "functionality set," such as a specific set of use cases, features, or whatever the team uses to document functionality at the end of a given iteration. So I1 above is equivalent to functionality set UC1,2 (only these use cases are coded and tested by the end of I1); I2 is functionality set UC3,4; and I3 is functionality set UC5,6,7,8. In I1, notice that the requirements team could be considered to be doing work on I2's and I3's functionality sets. This is what the teams mean when they say "overlapping iterations." They are working on iteration 2 and 3's scheduled functionality.
If you've followed me this far, you might have noticed that, overall, both the overlapping approach and timeboxed approach will lead to the exact same actual work during a given time period! So, the difference between overlapping iterations and timeboxed iterations lies more in the semantics or philosophy than in the overall tactical work. In this example, whether considered overlapping iterations or timeboxed iterations, the requirements team in I1 still worked on UCs 1,2,3,4,5 even though only UCs 1 and 2 were implemented.
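This equivalence can be made concrete with a toy sketch. The code below is purely illustrative (the use case names and iteration assignments mirror the example above, nothing more): it relabels the timeboxed view of the requirements team's I1 work using the functionality sets, showing that the "overlapping" view is the same work with different labels.

```python
# Functionality sets from the example: each iteration "owns" some use cases.
functionality_sets = {
    "I1": ["UC1", "UC2"],
    "I2": ["UC3", "UC4"],
    "I3": ["UC5", "UC6", "UC7", "UC8"],
}

# Timeboxed view: the requirements team's planned work for timebox I1.
timebox_1_requirements_work = ["UC1", "UC2", "UC3", "UC4", "UC5"]

def overlapping_labels(work, sets):
    """Relabel each use case by the iteration whose functionality set owns it."""
    owner = {uc: it for it, ucs in sets.items() for uc in ucs}
    return {uc: owner[uc] for uc in work}

# Same five use cases; the overlapping view merely calls UC3-5 "I2/I3 work".
print(overlapping_labels(timebox_1_requirements_work, functionality_sets))
# {'UC1': 'I1', 'UC2': 'I1', 'UC3': 'I2', 'UC4': 'I2', 'UC5': 'I3'}
```

Whichever vocabulary the team prefers, the actual work performed in the time period is identical; only the bookkeeping differs.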
Just to be sure we are still on the same page, note that in the standard RUP iterative approach, the requirements team in each of I1, I2, and I3 would only be allowed to work on the UCs scheduled to be completed during that iteration. So for I1, the requirements team could only work on UCs 1,2 and would have to wait until I2 to continue.
The decision to move away from "standard iterative" can best be understood as a reaction to some set of issues or risks. Before I share my personal recommendations, we need to understand the particular types of risks that drive project teams into considering overlapping iterations (or timeboxed iterations, where "working ahead" is allowed).
In my experience, there is a primary risk that drives project managers to consider overlapping iterations: the resource utilization risk. In other words, the project manager looks at the standard iteration plan and asks the same question each time: "What do the requirements folks do until the next iteration after they are done with this iteration's requirements?"
In fact, there may be a variety of risks that prompt consideration of overlapping iterations. Whatever the risk(s) may be, it's important that they are captured in the project risk list.
Just as importantly, if overlapping iterations are allowed, it is important that the risks this technique will generate are themselves captured in the risk list and analyzed.
To save you some time, I'll share the risks I have identified with the overlapping iteration (OI) technique. If you choose to proceed with OI, these are the pitfalls that can make the adjustments required by iterative development methods much more difficult:
- Increased scrap and rework
If you pursue the technique of overlapping iterations to the extreme, you will end up at the waterfall model. For example, you may find that the requirements analysts have completed detailing all the use cases before the first iteration has released anything of value to be evaluated. That means you've done all of your requirements up front, just like in a waterfall model.
One danger in this: iterative teams typically adjust the breadth view of the requirements based on valuable feedback from 1) the customer examining the executable, and 2) the architects, testers, and the rest of the design team as they gain understanding of the solution. This means that any detailed requirements produced by an overly aggressive overlapping-iterations team will have to be thrown away or rewritten, increasing the cost of development.
Another danger: in Elaboration, the architecture is usually unstable. As we stabilize the architecture, we may reprioritize or even remove requirements because of the architectural difficulty they entail. Again, the detailed requirements for the extra use cases created by OI may have to be changed to accommodate our growing understanding of the solution. Had the team left those details out, the changes would be easier to make, there would be less scrap, and the overall cost of development would be lower.
- Increased cost of project cancellations
One of the benefits of iterative development is that it gives you the ability to walk away from a project before large investments are made. If you stack your iterations, as you do with OI, and decide to walk away at the end of the first iteration, the cost of the cancelled project will be higher because all the future work the early disciplines engaged in will now go unused.
This concept appears in manufacturing, where both scrap and inventory are counted. If one station operates too quickly, it builds up an inventory for the next station to consume. If a defect is then discovered in the upstream work, or if we stop building the product, scrap will be higher because of all that in-process inventory.
- Low morale
Rework and scrap frequently lead to morale issues. Frustration levels tend to rise when people throw away hard work, or have to redo that work, especially because of poor planning. In waterfall and overlapping iteration projects, it is pretty common to hear "if you had told me that up front, I would have gotten it right the first time," or "I am only reworking this because you are changing your mind." These complaints usually come in response to being asked to change depth work. Breadth work is much easier to change, so people are not as attached to it. If you work ahead, it is more likely that depth work will have to be reworked or scrapped.
- Impacted process improvement
Every process improves as we execute it, especially software development processes, even if you have adopted the RUP. In particular, the requirements process for writing use cases improves as designers and testers receive use cases for the first time and communicate back to the requirements team where they see weakness from their discipline's perspective. Once requirements teams receive these improvement suggestions, they can immediately apply improvements to the next iteration.
But consider this in the context of an OI project: we will soon have a number of use cases that are not written well, since feedback won't be incorporated into the "work ahead" use cases. The requirements team will either have to redo those use cases or pass them along at their lower quality. (Imagine the designers and testers getting that junk from the requirements team right after they'd explained how the style was weak from their point of view. It might be hard to get them to help next time!)
- Silo mentality
One other risk of allowing the team to work ahead is that you may create -- or promote an existing -- "silo mentality," where the team becomes discipline focused rather than iteration focused or functionality focused. On a few teams I've observed that used an OI approach, the requirements people began to receive change requests or meeting invitations to help clarify problems with the first iteration's requirements. However, they didn't have time to deal with those issues because they had conflicting stakeholder meetings, or were feeling deadline pressure to complete downstream "future functionality" use cases. They believed "I have to get these requirements done" when they should have been thinking "I have to get Iteration 1 to succeed at all costs." The team has to remember that completing the iteration successfully is everyone's number one goal, regardless of discipline.
- The dangers of waterfall
Overall, the problem with working ahead is that it assumes your understanding of the current picture is near perfect. Thus we do more depth work than is needed for the current iteration. When the current picture and its related requirements change, we have to scrap a significant amount of work, thus increasing the cost of development and lowering morale.
In other words, allowing for OI puts you on a slippery slope toward the waterfall model, which most of us have come to accept as riskier; in waterfall development, teams are disjointed, and substantial uncertainty remains until the final executable is delivered.
When a team creates their requirements, they need to create both the breadth view of requirements (the RUP Vision) and the depth view of requirements (the Software Requirements Specification, or SRS). In a waterfall approach, the breadth and depth are 100 percent complete by the end of the Requirements phase. In RUP, we only attempt to get the breadth view 100 percent complete by the end of Inception, but even then we are wise enough to know it can't be 100 percent correct, and that it will be adjusted based on reality during Elaboration. By Construction, however, the breadth view of requirements, or RUP Vision, should be stable.
During each iteration after the first, we create the depth view of requirements by doing deep dives into specific functional areas (use cases), thus building an SRS for each iteration. As we create the depth view SRS for each iteration, we adjust the breadth view based on reality. This is not difficult, because the breadth view is sufficiently high-level to allow easy modifications. Again, once the breadth view is stable (as well as the architecture, but that is a different topic), the project moves from Elaboration to Construction.
So, in Elaboration, the breadth view of the requirements (the vision) is expected to be unstable, changing as the depth view is understood and executables are created. This means that working ahead using OI in Elaboration presents a much higher risk than in Construction. In Construction, the architecture and the vision are stable at the start (this stability forms the exit criteria for the Elaboration phase), which means that there is probably less scrap and rework if a team chooses to work ahead using OI in Construction.
Imagine that the team plans for two iterations in Elaboration (E1 and E2), which is fairly typical. Now imagine that we allow for OI. The requirements writers will detail the requirements for E1. Then they will detail the requirements for E2 (for timeboxed iteration folks, this means that they are working on requirements for a later functionality release). They may even detail the requirements for the next iteration.
What is odd is that, according to the plan, the next iteration would be C1, the first iteration of Construction. But should it be? Note that the way we pick what to work on in Construction is quite different from Elaboration. In Elaboration, we pick use case flows based on their ability to stabilize the architecture. In Construction, we pick use case flows based on their ability to maximize the value in the final solution set delivered to the customer at the end of Construction.
So here is where it gets dicey. At the end of E2, it is possible that the exit criteria for Elaboration may not have been met (stable vision, stable architecture). If so, the team has a few choices: Scrap the project, have an E3, or scope out any remaining architectural issues:
- If we choose to scrap the whole project, notice that our cost of cancelled projects is higher than if we had not worked ahead. That extra work was done on a project that went away.
- If we choose to have an E3, we will lack the benefit of knowing which use case flows to choose. Why? Because the flows worked ahead on were picked as (normally next) C1 flows, using the Construction criterion of customer value, when they should have been picked as E3 flows, using the Elaboration criterion of stabilizing the architecture; the wrong kinds of flows have now been detailed. Further, if the design team also works ahead, they may find all of their work scrapped as we change our decisions about the architecture.
- If we choose to change the scope of the project, we again may find much of our work scrapped if the re-scoping removes use case flows that the team had worked ahead on!
Elaboration will end not when two iterations have been completed, but rather when the architecture and the vision are stable. So it is possible that at the end of the E2 the team will say "we are still in too much flux, and therefore the next iteration will be E3, not C1."
Remember that most projects adopt OI plans to mitigate resource utilization risks (i.e., the risk of idle resources, which tempts us to let them work ahead). Before you leap to an OI plan, you should explore other resource utilization mitigation strategies as potential alternatives to OI. I describe several of these below. In the final section, I describe a simple, workable approach to OI, in case none of these mitigation strategies satisfies your portfolio management needs.
- Multiple projects.
I have seen graphs showing that, optimally, a single resource should be assigned to two projects. With only one project, idle time will increase, and with more than two projects, time will be lost to "context switching," where the team member has to switch mentally, electronically, and possibly even physically every time he or she jumps between projects.
Rather than have team members work ahead within one project, have them move to a second project in the company's portfolio. Imagine your organization is new to RUP. Your pilot project team completes the Inception phase and documents dozens of lessons learned. Now team members move into Elaboration Iteration 1, detail a few use cases, then run out of work. If you let them work ahead on future iteration use cases, they are very likely to be facing rework due to changes to the requirements as well as the process, which will also increase their frustration levels.
Instead of having them work ahead, you assign them part time to the Inception phase for a second project. These folks just finished their lessons learned on the first project's Inception phase and can immediately apply those lessons to the second project. Note that you'll need to have a few of the analysts assigned only part time to the second project, because they must always be available to the project 1 team to ensure the iteration for the first project succeeds at all costs.
For depth-oriented roles, I believe that two projects is the most effective number for each team member. But for breadth-oriented roles, I believe the number can actually be higher, perhaps three or four projects. I have no hard data to back this up, but my experience suggests this is reasonable.
- Multiple roles.
Another strategy is to allow people to work in more than one role on the project. For instance, if the requirements analyst also helps create the analysis model or the test model, he or she will stay busy well beyond the requirements work for the iteration. This may also improve team morale in a few ways. Some team members would welcome the chance to do more than a single task over and over again, so they are happier in this kind of model. For those who aren't interested in, or aren't capable of, occupying more than one role, you can use a different strategy for them, such as the multiple projects approach above.
- Multiple use case flows.
Don't forget that a single iteration usually has more than one use case flow assigned to it. Imagine that we have three use cases assigned to the iteration, and that each use case has three flows to be detailed. After detailing the first of the nine flows, the requirements analysts can immediately turn it over to the design team, and spend time with them to see how well that single flow guides their work. Once the analysts are satisfied, they can go back and detail more of the remaining eight flows, turning over each one as it is completed. This means that there isn't such a huge stagger in the workload.
- Matrixed organization.
A matrixed organization can help to reduce your resource utilization risks as well. In a matrixed organization, each person reports to two managers: a "resource manager" and a "process manager."1
The resource manager's job is to make sure their resources are fully utilized on valuable projects. They do this by growing the capability of their resources to best serve the needs of the process owners, and by maintaining relationships with the process owners to ensure they understand what project types and resource needs the process owners have. Resource managers are typically not given money for their budgets; they accrue money by having their resources bill time to a process owner, then use those funds to improve the resources with training, tools, and so on.

Resource managers are generally rewarded based on how utilized their resources are. One typical mistake here is to measure the resources themselves on utilization. This makes little sense, as it is the resource manager's job to get them utilized. For example, if one resource keeps getting strong performance ratings on every job but is only used 20 percent of the time, whose fault is that? And what if a mediocre-rated resource is used 100 percent of the time? Again, this reflects more on the resource manager than on the resources themselves.
The process owner's job is to execute the process and achieve some kind of value to the organization in doing so. They need resources from the resource managers to work on their projects and must be able to articulate the value of their process area, as well as the instance of their process area. For example, a process area might be "Executing IT Projects." Each IT project is an instance of this process area, and each must be able to articulate its value. Process area instances (i.e. a project) are awarded budget. They must spend this budget to get resources from the resource managers, and spend more to get the better resources. Process owners are rewarded on two basic things: completing their projects and showing measurable positive value to the organization.
In our "Executing IT Projects" example, the process owner creates the process (hopefully using RUP). They need project managers, analysts, designers, architects, etc. Each role reports to a resource owner -- for example, the analysts. Once the resource is assigned to a project, they now have two managers: the project manager for the IT project and the resource owner for analysts. The process owners evaluate the performance of the resource on their projects. The resource owner also evaluates the resource from the perspective of what is important to the resource pool.
Using a matrixed organization in this way can greatly mitigate the resource utilization risk. When project tasks thin out, the resource manager ensures that the employee can work on a different project part time. If an iteration does not succeed on a given project, the project manager (PM) can state this on the performance assessment of the resource. PMs do not care about the requirements discipline per se; they care about the iterations being successful, and this is part of how they can be measured.
On the other hand, if the PM has any trouble acquiring resources, this would be reflected on the resource manager's performance assessment.
- Acceptance.
In this strategy, people accept that their team members will have idle time during each iteration. They do not attempt to use any of the above strategies to mitigate this and simply accept that some downtime is part of the cost of development. Team morale is usually very good in these organizations, though some people get stir crazy and individually employ one of the above techniques to keep themselves happily occupied. (Others write papers like this one.) If you take this strategy, track the cost of development closely and compare it to those who take more aggressive mitigation strategies.
- Reduced team size.
Obviously, the cost of idle resources goes up as the team size goes up. Therefore, the smaller the team, the lower the cost of development if idle resources are a likely occurrence. It may seem silly to be so specific, but keep your team size small during Elaboration. Eight analysts are probably too many for Elaboration but may be great for Construction, once the vision and architecture have stabilized.
- Portfolio management.
You may have noticed that strategies 1, 2, and 4 have something in common. They are all part of having a strong portfolio management strategy. For those of you who are new to portfolio management, a portfolio is a set of projects managed by some authority.
This is different from a program, which typically associates projects that are technically related. In other words, for the program to succeed, all of the contained projects need to work together.2
In a portfolio, there may be zero relationship between the contained projects. The portfolio has a projected value to the organization (say, by the end of fiscal 2006, this portfolio of projects will save this division $6.3 million in IT costs) and each project contributes to that overall value. But each project also has a level of risk associated with it. The portfolio allows a portfolio manager to understand the value to the organization as well as which projects are most likely to fail to deliver any or all of the projected value.
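The value-and-risk view of a portfolio can be sketched as a small model. The code below is a hypothetical illustration only; the project names, dollar values, and risk figures are invented, and "risk" is simplified to a single probability that the project fails to deliver its value.

```python
# Each project carries a projected value and a probability of failing to deliver it.
projects = [
    {"name": "billing-rewrite", "value_musd": 3.0, "risk": 0.2},
    {"name": "crm-migration",   "value_musd": 2.3, "risk": 0.5},
    {"name": "portal-upgrade",  "value_musd": 1.0, "risk": 0.1},
]

def expected_value(portfolio):
    """Risk-weighted portfolio value: each project discounted by its chance of failure."""
    return sum(p["value_musd"] * (1 - p["risk"]) for p in portfolio)

def riskiest_first(portfolio):
    """Projects most likely to fail to deliver their projected value, worst first."""
    return sorted(portfolio, key=lambda p: p["risk"], reverse=True)

print(round(expected_value(projects), 2))   # 4.45 (million USD, risk-weighted)
print(riskiest_first(projects)[0]["name"])  # crm-migration
```

Even a model this simple lets a portfolio manager answer the two questions the paragraph above raises: what is the portfolio likely to be worth, and which projects most endanger that value.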
Included in portfolio management is the task of managing your resources to ensure they are utilized to maximize portfolio value. Common sense best practices -- such as "put your best people on the most valuable projects" -- are formalized using portfolio management techniques. IBM Rational's tool in this arena is called Rational Portfolio Manager (RPM) and can significantly help to automate 1, 2, 4, and even 6 when it is time to expand the team size.
This is especially true for matrixed organizations (strategy 4). A strong portfolio management tool and strategy can help process owners prove the value of their projects to the portfolio, and can make acquiring resources more about who has the skill, vs. who is friends with whom. Resources will know their portfolio management strategy is working when they get pulled into a great project, not because of who they know, but because of what they know!
If you cannot adopt the above portfolio management techniques to mitigate your resource utilization risks, or you still feel that working ahead using OI or timeboxed iterations is the correct choice for you, then there is one more technique you can try, which embodies two best practices:
1. Disallow OI during Elaboration, when the project is too unstable and thus more likely to generate scrap and rework; only allow it in Construction.
2. During Construction, only allow the team to work ahead by one iteration (or, for timeboxed iterations, only on the functionality set of the next iteration).
Rule 1 follows directly from the risks described above. Rule 2 is simple to state, but its underlying logic could use some illustration. I witnessed an exercise at a previous company that helps show why working ahead by only one iteration is a good limit.
In this exercise, a team of people was asked to assemble a "product" as on a manufacturing line. The product was folded cardboard bricks, each taped so that it would hold its rectangular shape. The team consisted of five people, and between each pair of people was a circle, a drop spot for completed units. The first person had to cut the cardboard. The second person had to fold it. The third had to tape it. The fourth had to tie a ribbon around the folded, taped cardboard, while the fifth had to make some kind of mark on the final product. The specific tasks themselves were irrelevant.
What was relevant was the different amount of time each task required. The team was told they had seven minutes to produce as many cardboard bricks as they could. The timer began, and everyone started working frantically at their task, placing completed items in the circle between them and the next person on the assembly line.
All of us took perverse delight in watching as a few bottlenecks formed and one person slowly had a growing, teetering stack of blocks on his left waiting for him to process. In fact, the person to the left of the backlogged worker seemed to speed up even more just to see how badly she could bury the poor guy with the backlog. And the observers just giggled more.
In the end, the time ran out and a new player was assigned to quality control. She was given a checklist and counted the number of bricks that were "of sufficient quality" based on the checklist. None were. (To be fair, the checklist should have been distributed at the start, but that is a different discussion.)
After this, they counted "scrap," the number of units that had to be thrown away due to bad quality. They also counted "inventory," the number of half-formed units that nonetheless cost the company to hold in a not-yet-sellable state. This number was huge due to the towers of units between each fast and slow task. Overall, things looked pretty bad.
The exercise was then repeated, but this time they added a single rule: Each member was allowed to only have one unit on the circle between them and the next team member. Once that unit was removed, members could put another one in its place. The participants with a fast task would finish, put the piece in the circle, build a second one completely, and then wait until the spot was clear before they could fill it and make the next one.
The change was immense. Every player was more relaxed immediately. The faster player was no longer rushing to bury their neighbor. The slower participant was no longer being pressured by a tower of units waiting for them. In the end, almost every piece passed quality, and inventory was greatly reduced as well.
Bottom line: everyone was more relaxed, some people stood around doing nothing for short periods of time, but the result was less scrap, less inventory, more product out the door, and better morale for all roles! This is why some teams adopt the strategy of only being allowed to work one iteration ahead during Construction.
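The exercise maps naturally onto a small discrete simulation. The sketch below is hypothetical: the station count, task times, and tick counts are all invented, and the model is deliberately simple (each station works a fixed number of ticks per unit and pushes finished units into the buffer feeding the next station). Comparing an unlimited buffer against the "one unit per circle" rule reproduces the qualitative result: similar output in this toy model, with far less in-process inventory.

```python
def simulate(task_times, ticks, buffer_cap):
    """Run a serial line of stations; return (finished units, in-process inventory)."""
    n = len(task_times)
    buffers = [0] * (n + 1)      # buffers[i] feeds station i; buffers[n] is finished goods
    buffers[0] = 10**9           # station 0 has unlimited raw material
    busy = [0] * n               # remaining ticks on each station's current unit
    holding = [False] * n        # station finished a unit but the downstream circle is full
    for _ in range(ticks):
        for i in range(n - 1, -1, -1):          # process downstream stations first
            if holding[i] or busy[i] == 1:      # a unit is (or becomes) complete this tick
                if buffer_cap is None or buffers[i + 1] < buffer_cap or i == n - 1:
                    buffers[i + 1] += 1         # pass the unit downstream
                    busy[i], holding[i] = 0, False
                else:
                    busy[i], holding[i] = 0, True   # circle occupied: hold the unit
            elif busy[i] > 1:
                busy[i] -= 1                    # keep working on the current unit
            if busy[i] == 0 and not holding[i] and buffers[i] > 0:
                buffers[i] -= 1                 # pull the next unit and start work
                busy[i] = task_times[i]
    finished = buffers[n]
    inventory = sum(buffers[1:n]) + sum(1 for h in holding if h)
    return finished, inventory

times = [1, 3, 1, 4, 2]          # uneven task times create bottlenecks
done_a, wip_a = simulate(times, ticks=60, buffer_cap=None)  # no limit on the circles
done_b, wip_b = simulate(times, ticks=60, buffer_cap=1)     # one unit per circle
print(f"unlimited buffers: finished={done_a}, in-process inventory={wip_a}")
print(f"WIP limit of 1:    finished={done_b}, in-process inventory={wip_b}")
```

The fast stations in the unlimited run pile up towers of inventory in front of the slow stations, just as in the exercise; the one-unit rule caps inventory at a handful of units while the bottleneck station still sets the overall pace.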
The best way to manage your resource utilization risks is not necessarily to allow your resources to work ahead using OI or timeboxed iteration plans, but to gain control of your IT portfolio. Common sense and experience will help you get comfortable with a variety of combinations of all the above techniques. Generally speaking, in Elaboration, you should use standard iterations and not allow the team to work ahead of the current functionality set. Use small team size to reduce the cost of development by reducing the impact of idle resources. Allow the team members to be assigned to two projects, but ensure the second project is of lower priority than the first. During Construction, allow the team to work on functionality from the current and next iterations only, and no further than that.
If you cannot use the above portfolio management techniques of multiple projects, matrixed organizations, or one of the other techniques, then by all means, allow the team to work ahead. But track your scrap and rework, and mitigate the other risks.
In the standard iterative technique, we limit work to the current iteration functionality (use cases) only. You will need to use a risk mitigation strategy described earlier, such as multiple projects, multiple roles, multiple flows, matrixed organizations, acceptance; or some other clever mitigation strategy. Good portfolio management will help to make this technique work.
In overlapping iterations, also known as extreme iterative, staged iterations, and stacked iterations, etc., an iteration is defined as the container for a functionality set (use cases, features, whatever). The team may be allowed to work ahead on future iteration work if it completes the work of the current iteration. In this case you will need to put mitigation strategies and indicator measures in place to minimize the risks that overlapping iterations might produce: increased scrap and rework, low morale, impacted process improvement, and silo mentality.
In timeboxed iterations, an iteration includes all the work pertaining to it, regardless of the functionality set that work belongs to. This forces the team to try to plan for the exact work expected in a given timebox. As always, you can add or remove scope, but in a timeboxed method this is tracked as deviations to the plan and can assist future planning. People will be busy throughout the iteration, but will not be working on the same functionality set at the same time. Therefore, overlapping iterations literally becomes impossible -- unless you have a time machine!
Because the tactical work accomplished using overlapping iterations or timeboxed iterations ends up being very similar, the risks tend to be the same. So it is important that project managers put in place the same mitigation strategies and indicator measures defined for the overlapping iterations risks: increased scrap and rework, low morale, and so on.
Can people succeed with overlapping iterations? A project manager friend of mine, Ruth Nantais, has a highly successful track record in guiding difficult projects, and she occasionally uses this approach. Her team refers to it as "extreme iterative." Another company that also has a strong track record for success refers to it as "staged iterations." Whatever name you use, overlapping iterations can be a useful project management technique as long as you limit its application.
1 These are also often called "resource owners" and "process owners," respectively.
2 A portfolio can actually contain multiple programs. Basically, a program is much more like a very large project than a portfolio. In a project, the goal is to ensure the project succeeds, regardless of its value to the portfolio. This is also true for a program. Only a portfolio focuses on how a project or program contributes to corporate value as compared to other projects and programs.
Anthony Crain is a software engineering specialist with the IBM Rational Services Organization. In addition to helping clients roll out the IBM® Rational Unified Process®, or RUP®, he also trains engineers in requirements management with use cases, object-oriented analysis and design (OOAD), and RUP. He learned OOAD and use-case modeling while at Motorola for five years, then sharpened his skills at Rational. An honors graduate from Northern Arizona University, he holds a B.S. in computer science and engineering.