Measuring Project Health -- Part II

from The Rational Edge: What aspects of a software development project should be measured in the middle phases of the lifecycle to ensure that the project is healthy and on track? Read how risk lists, backlogs, and other project artifacts can be used to objectively measure project health.


This is Part Two of a three-part article that describes strategies for measurement as part of managing a software development project. Part I of this series introduced the concept of measurement as a mechanism for assessing project health and described specific measurements that can help you assess the health of projects during the Inception phase. Here in Part Two, I focus on measurement during the "mainstream" of the development work: the Elaboration and Construction phases.

The Elaboration and Construction phases have a number of similarities that lead to similar approaches to measurement. In both phases the primary focus is producing executable releases that progressively implement an increasing amount of the desired functionality of the system. The primary differences between the two phases stem from differences in the kinds of risks the phases address: the Elaboration phase deals with technical risks that affect the architecture of the solution, while the Construction phase deals with risks related to getting the bulk of the project work done on time and within budget. These differences lead to a slightly different focus in the measurement approaches during the two phases.

The Elaboration phase

The main focus of the Elaboration phase is to prove the basic technical approach used as the basis for estimates in the Inception phase, and to fill in the technical details necessary to ensure delivery of the solution in the future. If the technical risks are low -- as they usually are in projects that add relatively minor functional enhancements to existing systems, based on a stable and proven architecture -- the Elaboration phase can be quite abbreviated and measurement can be constrained to testing the assertion that the proposed changes will not break the existing architecture. It is for this reason that certain "agile" approaches such as Scrum and Extreme Programming lack an equivalent phase. These methods implicitly or explicitly assume that the architectural risks are low, and that any architectural changes will emerge out of the normal development work.

The typical development work performed in the Elaboration phase is to implement additional scenarios1 that will prove the technical approach is sound and the rough schedule and cost data from the Inception phase are still reasonable. From a planning perspective, the main difference between the Elaboration and Construction phases is the choice of scenarios developed. In the Elaboration phase, their selection is driven by risk -- that is, scenarios are chosen because they will cause risks to be confronted.

Solving these technically challenging problems will not only increase the certainty that the chosen solution is technically sound, it will most likely resolve problems in the solution that could threaten the project if left unresolved. In working off the technical risks, valuable experience and information will be obtained, resulting in estimates and plans for the remainder of the project that will have much greater reliability. This will become an important factor when we consider measurement in the Construction phase.

From a staffing perspective, the main difference between the Elaboration and Construction phases is that the Elaboration phase tends to use a smaller team focused on exploring the technical risks in the solution. This exploratory character of the Elaboration phase means that project staffing and accompanying costs tend not to scale up until the Construction phase, when the project team may be expanded to complete the remaining work.

Measurement in the Elaboration phase

As noted above, technical risk reduction is the main objective of the Elaboration phase. As a result, measurements in the phase will focus on assessing whether technical risks are really declining. Figure 1 shows the typical risk profile for an effectively managed project (solid line) along with expected progress (dashed line). Iteration boundaries are marked by the vertical lines in the figure.


Figure 1: Expected risk profile for an iterative project

Notice that the total risks actually rise for an iteration or two early in the project because the project team will still be discovering business risks as they explore more completely the real needs driving the project, and they will discover technical risks as they consider different alternative solutions. What causes the line to drop after this point is the explicit focus on technical risks. The inflection point in the curve, the point at which the slope of the curve levels out, corresponds to the end of the Elaboration phase and the beginning of the Construction phase where technical risks have been mostly removed.

It is also important to measure progress starting early in the project. Figure 2 shows the profile of expected progress across the project lifecycle.


Figure 2: Expected project progress profile

In Figure 2, the gray area represents the normal expected range of possible progress observations, while the curved line represents an ideal progress profile. In the figure, notice that there is some progress even in the Inception phase (the initial one or two iterations), although progress remains slow given the usually significant amount of rework experienced as different technical approaches are tested and refined.

Measuring risk reduction

It is a common practice for projects to develop a risk list early in the project, though fewer projects maintain this list over the full project lifecycle. To ensure focus on the most important risks, this list can be abbreviated to the top ten risks when reviewing it with others beyond the project team. An example is presented in Figure 3.

Rank at Start | Risk | Mitigation Results
1 | Supporting previous ATM versions | Some progress made, but need to make better progress; keep in top position on list
2 | Keith leaving | Responsibilities successfully transferred; remove
3 | Test Strategy, resources and environments | Risk acceptably mitigated; remove
4 | It might be harder than we think (estimates) | Risk seems to be under control; keep on list but lower rank
5 | Reliability of the O/S platform | Some progress made, but need to make better progress; raise rank in next iteration
6 | Scalability of J2EE Infrastructure |
7 | What are the requirements? | Risk acceptably mitigated; remove
8 | Fault tolerance | Some progress made; keep on list but lower rank
9 | Tamper proofing | Risk acceptably mitigated; remove
10 | Printing flexibility and reliability | No progress

Figure 3: Example top ten risks

The top ten risks are selected based on their impact (rated from 1-10, with 1 being low impact and 10 being high impact) times their severity (rated from 1-10, with 1 being low severity and 10 being high severity). This number is used to rank the risks, and to measure the decrease in overall project risk. The expected risk profile (presenting the sum of the risk ratings for all risks over time) for an effectively managed project was shown in Figure 1.
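The ranking and total-risk computations described above can be sketched in a few lines of Python. This is a hypothetical illustration; the risk names, impact values, and severity values below are made up for the example:

```python
# Sketch of risk-list arithmetic: exposure = impact x severity (each rated
# 1-10), risks ranked by exposure, and a total risk score whose decline over
# iterations is what Figure 1 plots. All data here is hypothetical.

def exposure(risk):
    """Risk exposure: impact times severity."""
    return risk["impact"] * risk["severity"]

def top_risks(risks, n=10):
    """Return the n highest-exposure risks, highest first."""
    return sorted(risks, key=exposure, reverse=True)[:n]

def total_risk(risks):
    """Sum of exposures across all open risks -- the value tracked per iteration."""
    return sum(exposure(r) for r in risks)

risks = [
    {"name": "Supporting previous ATM versions", "impact": 9, "severity": 8},
    {"name": "Reliability of the O/S platform",  "impact": 7, "severity": 6},
    {"name": "Printing flexibility",             "impact": 4, "severity": 3},
]

print(top_risks(risks)[0]["name"])  # highest-exposure risk first
print(total_risk(risks))            # 9*8 + 7*6 + 4*3 = 126
```

Recomputing `total_risk` at each iteration boundary yields the solid line of Figure 1; if that number is flat or rising late in Elaboration, the mitigation strategies deserve scrutiny.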

If the overall risks are not declining from iteration to iteration, something is wrong: Most often the strategies the team is using to reduce risk are not effective and risks continue unmitigated from iteration to iteration, or new risks are being found faster than old risks are being retired. In either case, a change in approach will be needed. Perhaps the team lacks the technical experience to mitigate the technical risks, or perhaps the business environment is changing more rapidly than the project can respond. In any case, failure to mitigate risks over a series of iterations is a sign that a significant change in approach is required.

Measuring progress

Progress can be measured by the number of scenarios that have been implemented and successfully tested. It is best to keep the measurement simple, ignoring for all practical purposes the fact that some scenarios are more complex than others. As you will note in Figure 2, the progress curve rises slowly at first, as technical issues are resolved and the team gains experience working with one another. As the project enters the Construction phase, however, it should have a fair amount of momentum, and the iterations should be highly productive. Then, as the project nears its end, the rate of progress falls off as the finishing touches are applied.

The smooth progress curve shown in Figure 2 can be derived from a cumulative count of scenarios implemented and successfully tested. But actual progress rarely follows such a smooth upward progression. Some rework is to be expected when using an iterative approach, especially when the project team is new to iterative and incremental software development. New ideas are tried and sometimes rejected, and better solutions are devised as more information becomes available. So, while Figure 2 shows the overall march of progress, a closer inspection of actual results would reveal overall progress with occasional setbacks, as shown in Figure 4.
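The cumulative count with occasional setbacks can be sketched as follows; the per-iteration numbers are hypothetical:

```python
# Sketch: net progress per iteration is scenarios newly passing test minus
# scenarios sent back for rework. The running total is the curve of Figure 2;
# a heavy-rework iteration produces the dips visible in Figure 4.

def cumulative_progress(iterations):
    """iterations: list of (completed, reworked) scenario counts per iteration.
    Returns the running net total of tested scenarios after each iteration."""
    totals, net = [], 0
    for completed, reworked in iterations:
        net += completed - reworked
        totals.append(net)
    return totals

# Hypothetical history: iteration 3 suffers heavy rework, so net progress dips.
history = [(3, 0), (5, 1), (2, 4), (6, 1)]
print(cumulative_progress(history))  # [3, 7, 5, 10]
```

Tracking the two counts separately, rather than only their difference, is what lets you see the rework trend of Figure 5 alongside the progress trend.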


Figure 4: The impact of rework on progress

Figure 4 actually shows an extreme case, with rework growing over time, indicating potential problems with the initial approach taken, and possibly with the skills of the team. This illustrates why it is important to measure both overall progress and rework: together they create a better picture of what is really going on. The work being performed is real development work -- scenarios are being analyzed, developed, and tested -- and progress results when the implemented code passes testing. Rework occurs when implemented code is found wanting or deficient, resulting in some of the implemented code being scrapped and reworked (hence the origin of the term). Figure 5 shows expected rework trends over the lifecycle of the project.


Figure 5: Rework trends across the project lifecycle

Notice that rework is fairly significant in the early phases, but declines significantly after the architecture is established at the end of the Elaboration phase.

As noted above, rework generally derives from defects identified in testing, although it can also result from changes in requirements. Defects should be captured from the start of the project; the typical trends for defects are shown in Figure 6.


Figure 6: Defect trends across the project lifecycle

Measurement in the Construction phase

The main focus of the Construction phase is building out the solution that was sketched in the Inception phase and "architected" in the Elaboration phase. You will identify any remaining requirements and develop and test the solution. By the end of the phase you should have an operational system, though some rough edges may remain to be smoothed out.

In addition to the measurement performed in the Elaboration phase, a few new measurements are added:

  • Backlog growth or shrinkage
  • Test coverage
  • Build stability

Measuring the project Backlog

The project Backlog consists of all identified work that is not currently being worked on. The Backlog grows as new things that need to be done are identified, and it shrinks as those things are implemented and tested. If it is still growing significantly by the time the Construction phase arrives, something is amiss. By the end of the Construction phase, all scenarios must be implemented and tested, which means that the Backlog must be declining throughout the phase and must reach zero (or close to it) by the end of the phase.

The Backlog can be managed by declaring that some items in it will be implemented in a future release, meaning that they are moved out of the current project's Backlog and into the Backlog for the future release. This often requires a fair amount of negotiation among stakeholders, but it is usually essential to managing the Backlog.
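The bookkeeping behind Backlog growth, shrinkage, and deferral can be sketched as below. The item names and the `update_backlog` helper are hypothetical, introduced only for illustration:

```python
# Sketch of per-iteration Backlog bookkeeping: new items grow the Backlog,
# completed (implemented and tested) items shrink it, and items negotiated
# into a future release are moved to that release's Backlog instead of
# being worked. All names here are hypothetical.

def update_backlog(backlog, future_release, new_items, done, deferred):
    """Apply one iteration's changes; returns (backlog, future_release)."""
    backlog = backlog + [i for i in new_items if i not in backlog]     # growth
    backlog = [i for i in backlog if i not in done]                    # shrinkage
    future_release = future_release + [i for i in backlog if i in deferred]
    backlog = [i for i in backlog if i not in deferred]                # deferral
    return backlog, future_release

backlog, future = ["s1", "s2", "s3"], []
backlog, future = update_backlog(backlog, future,
                                 new_items=["s4"], done=["s1"], deferred=["s3"])
print(backlog)  # ['s2', 's4']
print(future)   # ['s3']
```

Plotting `len(backlog)` at each iteration boundary gives you the growth-or-shrinkage trend this section asks you to watch.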

Measuring test coverage

Implicit in the prior discussion about measuring progress was the notion that work can only be considered complete when it has been successfully tested. The recommended measure of progress is the completed scenario. Sometimes, largely because of staffing problems, it is difficult to thoroughly test all completed work. When this happens, two things are visible in the measurements: 1) progress will fall below expectations because there is a kind of "testing backlog," and 2) test coverage will not grow as quickly as it should.

Some project teams make the mistake of counting a scenario as completed when the developers working on it have finished their work. Doing so is a mistake for two reasons: 1) it inflates the project's progress, counting as completed things that may in fact be rejected for rework during testing, and 2) it masks the fact that the project may not be adequately staffed to complete the testing effort.

You need to be testing as you go along, not only unit testing but also testing entire scenarios. If you can get users to test the evolving solution, even better! The feedback you get will be extremely valuable. Testing is really the only way you know whether you are done with something. Testing also always uncovers things that need to get done, so if you do not keep up with the testing effort you are falling behind. As you evaluate test results you need to also keep an eye on the quality of the work that is being performed. Tests that are failing are an indication that your progress may not be as good as you think it is.

To ensure that progress measures are not inflated, keep track of the test coverage -- it should be at 100% for all completed scenarios. If a coverage gap is forming, you need to make immediate and deliberate efforts to reduce it to zero as soon as possible.
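The coverage check described above is simple set arithmetic. A minimal sketch, with hypothetical scenario names:

```python
# Sketch: test coverage over completed scenarios. Coverage should be 100%
# for everything counted as done; any gap is a "testing backlog" that must
# be driven back to zero. Scenario names are hypothetical.

def coverage_gap(completed, tested):
    """Scenarios counted as complete but not yet successfully tested."""
    return sorted(set(completed) - set(tested))

def coverage_pct(completed, tested):
    """Percentage of completed scenarios that have passed testing."""
    if not completed:
        return 100.0
    covered = len(set(completed) & set(tested))
    return 100.0 * covered / len(set(completed))

completed = ["s1", "s2", "s3", "s4"]
tested = ["s1", "s2"]
print(coverage_gap(completed, tested))  # ['s3', 's4'] -- the forming gap
print(coverage_pct(completed, tested))  # 50.0
```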

Measuring build stability

A "build" is what results when code for the solution is compiled and linked into an executable version of the system. Effective build processes include running automated tests against the build to validate that it is working correctly. The results of the build and automated test process produce useful measures of project health.

Figure 7 shows the results of the build process over a sequence of days.


Figure 7: Build status trends over time

Figure 7 shows, for a sequence of days, the percentage of builds that successfully completed and passed automated testing. The graph shows that while the builds were 100% successful on many days, there were also a number of days when the build was completely broken, and toward the end of the period the results appear very unstable. This could indicate that changes were introduced that affected many team members and were poorly communicated. This could be a sign of deeper problems and should be investigated.
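The daily percentage that Figure 7 plots can be derived from raw build results with a small sketch like this (the build records are hypothetical):

```python
# Sketch: daily build stability -- the percentage of each day's builds that
# compiled and passed automated tests, as plotted in Figure 7. The sample
# build records are hypothetical.

def daily_success_rate(builds):
    """builds: list of (day, passed) pairs. Returns {day: percent passed}."""
    per_day = {}
    for day, passed in builds:
        total, ok = per_day.get(day, (0, 0))
        per_day[day] = (total + 1, ok + (1 if passed else 0))
    return {day: 100.0 * ok / total for day, (total, ok) in per_day.items()}

builds = [("Mon", True), ("Mon", True), ("Tue", True), ("Tue", False)]
print(daily_success_rate(builds))  # {'Mon': 100.0, 'Tue': 50.0}
```

With a continuous build process, the same computation over hundreds of builds per iteration gives a fine-grained stability trend rather than a single daily data point.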

Build health is often a key indicator of many other problems on the project. If a team cannot produce stable builds it cannot make adequate progress. Broken builds often signal more pernicious problems -- sometimes insufficient team skills, sometimes a flawed architecture, and sometimes a poor partitioning of work. If you are working iteratively you should be building frequently; if you are really working iteratively you should be able to build continuously.

Measurements from a continuous build and automated testing process will give you valuable insight into overall project progress and health. A continuous build process lets you build any time there are significant changes to test; over the course of a typical 4-6 week iteration, this amounts to many hundreds of builds and automated test executions, providing a wealth of data for assessing progress and health.

Measuring progress, revisited

As noted above, all planned work needs to be completed by the end of the Construction phase. The next and last phase of the project, the Transition phase, focuses on deploying the solution into a production environment. The Transition phase should not be turned into a "cleanup" phase in which the Backlog continues to be worked. There is enough work to do in deploying an application that there will not be time to continue implementing scenarios.

One of the major questions to answer in the Construction phase is this: "Can we get all the necessary work done by the end of the phase?" You have to constantly evaluate the team's productivity and assess the remaining Backlog to determine whether it can be reduced to zero. A useful, though subjective, measure is the "velocity" of the project team -- a fancy way of describing the productivity of the development team, or how much work it can accomplish over time. You can think of velocity as the rate of change in the progress curve, or the rate of growth or shrinkage of the Backlog. Based on whether the Backlog is growing or shrinking, you can estimate the team's productivity and predict whether the Backlog can be reduced to zero over the remainder of the phase. If it appears that it cannot, you have only two alternatives: extend the length of the phase, or negotiate the scope of the Backlog downward.
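The velocity-based forecast described above can be sketched in a few lines; the Backlog sizes below are hypothetical:

```python
# Sketch: velocity as the average net Backlog shrinkage per iteration, used
# to forecast whether the Backlog can reach zero by the end of the phase.
# The Backlog-size history is hypothetical.
import math

def velocity(backlog_sizes):
    """Average net shrinkage per iteration over the observed history."""
    deltas = [a - b for a, b in zip(backlog_sizes, backlog_sizes[1:])]
    return sum(deltas) / len(deltas)

def iterations_to_zero(current_backlog, v):
    """Iterations needed at velocity v; None means the Backlog is not shrinking."""
    if v <= 0:
        return None
    return math.ceil(current_backlog / v)

sizes = [40, 34, 29, 25]                 # Backlog at recent iteration ends
v = velocity(sizes)                      # (6 + 5 + 4) / 3 = 5.0 per iteration
print(iterations_to_zero(sizes[-1], v))  # 5
```

If the forecast exceeds the iterations remaining in the phase, that is the signal to extend the phase or negotiate the Backlog's scope downward.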


The Elaboration and Construction phases are similar in that both are principally focused on producing executable code; the differences between them lie in their degree of focus on technical risks and architectural issues. Because the work performed is similar, the measurements used to manage the work tend to be similar as well, with a greater focus on technical risks in the Elaboration phase.

By the Construction phase, with the technical risks effectively mitigated, the principal focus turns to the question "Can we get all the work done by the end of the phase?" As a result, measures that quantify test coverage, productivity, and progress should be the team's focus. By the end of the Construction phase, the Backlog should be reduced to zero and the solution should be functionally complete, resulting in an initial release candidate.

In the next and final article in this series, we will look at measurements in the Transition phase, which primarily focus on the release readiness of the solution. We will also turn our attention to the issue of measurement across programs consisting of multiple projects, including managing portfolios consisting of multiple projects or programs.

Further reading

This article was drawn from material initially presented in Managing Iterative Software Development Projects, by Kurt Bittner and Ian Spence, and published in 2006 by Addison-Wesley.


1 A scenario is a "thread" through the use case -- i.e. the basic flow plus zero or more alternate flows.

