Estimation variance and governance

from The Rational Edge: Uncertainties at the onset of software development projects are well known among seasoned project managers, yet estimates of project costs and timelines are often made as if these uncertainties did not exist. This paper explores the consequences of ignoring such variances and proposes a mathematically sound technique for reducing them, as a means toward greater accuracy in the estimation process.

Murray Cantor, IBM Distinguished Engineer, IBM

As a leader in the IBM Rational field services group, Murray Cantor promotes and extends Rational best practices, and works closely with customers on innovative ways to build and deliver systems more efficiently. Currently, he leads the evolution of a new engagement model for transforming software development organizations, as well as Rational Unified Process for Systems Engineering® (RUP-SE®). The latter methodology is critical for organizations working at the leading edge of large-scale hardware and software system development. He also focuses on how to integrate IBM Rational field capabilities with those of other IBM brands.

He has been named Distinguished Engineer both for his contributions to RUP-SE and for his successes with client enterprise transformations. A well-known thought leader, he is a sought-after keynote speaker at industry events, has published two books and numerous papers, and plays a key role on standards committees relating to UML and RUP.

Murray Cantor received his Ph.D. in mathematics from the University of California at Berkeley in 1973.



15 March 2006

My favorite quote from Albert Einstein, paraphrased, is "Keep things as simple as possible, but no simpler." By this, Einstein was telling us that, while simplicity is a virtue, a correct solution to a problem must address all the relevant facts. In this paper, I apply this principle to the problem of delivering large, complex development projects on time, within budget, and meeting stakeholder needs.

The failure rates of such development projects are well known, and they continue unabated.1 In spite of a consistent record of over-budget, late projects, the industry rarely adopts practices that reflect one of the fundamental facts of development: that it is impossible at the onset of all but the most routine projects to achieve full understanding of the project requirements. Hence, it is impossible to make highly accurate estimates of the effort to meet them. In fact, over the last forty years, we have learned that the variances in the estimates of project parameters (such as schedule and budget) are quite high.

The real reason these projects fail to meet stakeholder needs is that they are managed as if these variances do not exist. Presumably the variances are ignored because statistics is considered to be too theoretical to be of practical concern. This is an example of making things too simple. On the other hand, embracing variance leads to better results. The right balance of theory and practice is the critical success factor. Achieving that balance is a theme of this paper.

As I will show, reduction of variance in the estimates of project parameters is the key to managing the dynamics of system development. In particular, I propose a practical framework for applying the concept to project governance.

Context and definitions

By governance, I mean:

  • Establishing organizational chains of responsibility, authority, and communication
  • Executing measurement and control mechanisms to effectively drive the organization

This paper will focus on the impact of estimation variance on the measurement and control component of governance.

The definition of project risk management in the International Council on Systems Engineering (INCOSE) handbook2 includes technical risk (the possibility that a technical requirement of the system may not be achieved) as well as performance risk (failure to meet project cost or schedule). While the methods described in this paper generally apply to both technical and project risk, technical risk has somewhat different mathematics, and so I will restrict our attention to performance risk. Technical risk and its relationship to performance risk will be considered more fully in a later paper.

This paper is intended for anyone concerned with the management of software, systems, product development, and integration programs. While some appreciation of probability and statistics is needed to fully understand the concepts presented here, I have strived to minimize the technical content.

As the first of a series outlining the theoretical and practical implications of explicitly dealing with uncertainty in development projects, this article presents the theoretical background and briefly discusses the implications for project organization, value and reuse, and development process taxonomies. Later articles will more fully address these implications.

Random variables

At the onset of a project, nothing is known for sure. For example, the cost, effort, and duration of a project are at best educated guesses. Project estimation tools provide estimates of effort and duration based on the available information. But since the inputs to the tools are themselves estimates, the outputs of the tools are estimates as well.

In mathematical terms, values that are not known with complete certainty are called random variables. A random variable is a function that takes values based on a probability distribution. For example, suppose an estimated value of the time it will take to do a task is 3 months. As an estimate, the value of 3 months is only the most likely value. There is some probability that the actual duration is 2.5 months or 3.5 months. There is less likelihood that the value is 1 month or 4 months. So the task duration is not a single real number; rather, it is a random variable that describes the likelihood of completion within a certain range of time.

A random variable, such as the task duration in the example above, has properties that differ from those of a single value. In place of a single value, a random variable has a function, called its probability distribution, that describes the likelihood of each possible value of the estimate. The weighted mean of the random variable is its expected value. The variance of a random variable measures how far from the expected value its values typically are.3 For many random variables, a good choice is the normal distribution, often represented as a bell-shaped curve, shown in Figure 1.

For example, if the duration of the task is a normally distributed random variable with mean 3 and variance 1, the probability that it will take a value between 0 and 6 is given by the curve in Figure 1. This is just one of a myriad of possible probability distributions.

Figure 1: A normal distribution with mean 3 and variance 1
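
As a concrete illustration (a minimal Python sketch of my own, not part of any estimation tool), the Figure 1 curve can be evaluated at a few candidate durations:

from math import exp, pi, sqrt

def normal_pdf(x, mean=3.0, variance=1.0):
    # Density of a normal distribution: highest at the mean, falling off
    # symmetrically for values further from it.
    return exp(-((x - mean) ** 2) / (2 * variance)) / sqrt(2 * pi * variance)

# The most likely duration (3 months) has the highest density; durations
# such as 1 or 5 months are far less likely.
for months in (1.0, 2.5, 3.0, 3.5, 5.0):
    print(f"{months} months: density {normal_pdf(months):.3f}")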

Since a program is more likely to be over schedule and over budget, the actual distribution for program parameters is likely to be skewed to the right, more like the curve shown in Figure 2. The actual shape of the distribution may be derived by applying financial calculus.4

Figure 2: A more realistic distribution for program parameters

Pragmatically speaking, it is not useful to dwell on the mathematical description of the distribution. However, as I discuss below, successful project management requires recognizing that a variance exists within the estimates, that its expression contains useful information, and that it should be addressed explicitly. One can use the notion of a random variable as a metaphor: a mental model that leads to an understanding of project dynamics.

When planning a project, serious consideration of a random variable, such as a task's duration, means asking how likely the variable is to fall within one or more intervals. In mathematics, this likelihood is the area under the distribution curve plotted over the interval. For example, if the estimated duration of a task is 3 months and the variance is 1 month, then the likelihood of completion between 2 and 4 months is the area shown in Figure 3. In this example, the likelihood of the task falling in this range is about 68%.

Figure 3: Likelihood of completing a task within the variance interval
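
This interval calculation is easy to reproduce; a minimal sketch, assuming the normal model above with variance 1, computes the area over [2, 4] from the cumulative distribution function:

from math import erf, sqrt

def normal_cdf(x, mean=3.0, variance=1.0):
    # Cumulative probability P(X <= x) for a normal random variable.
    return 0.5 * (1 + erf((x - mean) / sqrt(2 * variance)))

# Probability that the task completes between 2 and 4 months: about 0.68,
# the shaded area of Figure 3.
print(f"P(2 <= duration <= 4) = {normal_cdf(4.0) - normal_cdf(2.0):.2f}")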

Risk and variance

Project managers must constantly deal with risk. At the onset of a project, they obtain estimates of the technical and/or project parameters. Those estimates are random variables. The variance of the estimates is a measure of uncertainty: it reflects the team's lack of knowledge of the project parameters that contribute to project performance, and the greater the uncertainty, the greater the variance. For example, if the project manager is fairly certain of the estimate of 3 months, the distribution might look like the graph in Figure 4. If the project manager is even more confident of the schedule estimate, the distribution might appear as in Figure 5.

Figure 4: Nominal certainty regarding project parameters produces a normal distribution with mean equal to 3 and variance 0.5.

Figure 5: Higher certainty about project parameters produces an even tighter distribution; here, for example, the mean equals 3 and variance is 0.1.

Clearly, as the variance in the estimates of project parameters increases, the likelihood of completing the project successfully decreases. High variances in the estimations of project parameters are therefore high risks to the project, and early reduction of these variances is important for the project.

Note that severity of risk is directly related to the variance in the estimation of a project's parameters.

Now, consider how a project manager sets the duration of tasks in a project plan. Suppose the estimated duration of a task is 3 months, and the variance of the estimate is 1 month. How should the manager plan the task? By setting the duration at 3 months, the manager is assuming the task will be completed at 3 months or less, as shown in the area under the curve in Figure 6. Note the likelihood of success is just 50%. So by setting the duration at 3 months, the manager is making an even bet that this task will be completed on schedule. If there are several tasks in the plan, each with a 50% chance of success, the overall plan's probability of success is 0.5^N, where N is the number of tasks in the critical path. Clearly, such choices would doom the project to failure; if N is greater than or equal to 10, the probability of executing the plan is less than 0.001.

If the manager wants to improve the odds of success for the task to, say, 90%, he must set the duration at 4.3 months, and to get to 99% odds, the duration must be 5.4 months. Of course, for the project to have acceptable risk, this must be done for each of the tasks, leading to almost doubling the duration. Moving the project's schedule out to accommodate all this uncertainty may not be economically practical.

Figure 6: The likelihood of completing a given task within the mean value of a normal distribution
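
The paddings above can be reproduced with the inverse cumulative distribution function; a minimal sketch using Python's statistics.NormalDist follows (the 99% value computes to about 5.3 months, close to the 5.4 cited above):

from statistics import NormalDist

# Task duration modeled as in the example: mean 3 months, standard deviation 1.
task = NormalDist(mu=3.0, sigma=1.0)

for confidence in (0.50, 0.90, 0.99):
    # inv_cdf returns the duration the task will beat with this probability.
    print(f"{confidence:.0%} confidence: plan {task.inv_cdf(confidence):.2f} months")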

It is important to note that if there is no bias in the estimates, then half of the estimates are high and half are low, so the errors cancel out to some extent. Generally, the relative error of the total falls as one over the square root of the number of terms in the series. This phenomenon, known as reduction of variance, implies that the end date may be achievable even if the actual plan cannot be followed.
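
A small simulation illustrates this cancellation; assuming unbiased, independent task estimates, the relative error of the total shrinks roughly as one over the square root of the number of tasks:

import random

def mean_relative_error(n_tasks, mean=3.0, sigma=1.0, trials=10_000):
    # Average relative error of the summed duration of n unbiased task estimates.
    total_error = 0.0
    for _ in range(trials):
        total = sum(random.gauss(mean, sigma) for _ in range(n_tasks))
        total_error += abs(total - n_tasks * mean) / (n_tasks * mean)
    return total_error / trials

# Quadrupling the number of tasks roughly halves the relative error.
for n in (1, 4, 16, 64):
    print(f"{n:3d} tasks: relative error {mean_relative_error(n):.3f}")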

Measurement and control

It is an old maxim of process engineering that you can only control what you measure. There are two approaches to measurement and control of software development and integration programs:

  • Plan-and-track -- defining all of the project activities at the onset, measuring whether the program is following the preset plan, and resetting activities to get back on plan
  • Iterative development -- setting the initial program milestones and planning a set of program deliverables that incrementally culminate in the delivered solution

It is common for projects to adopt a plan-and-track method, sometimes summarized as "plan your work and work your plan." This approach requires that all the tasks and their durations be known at the onset of the project. It is based on the implicit assumption that the plan can be made with certainty: i.e., that the variance of the durations is very low. As discussed above, a team can only create and adhere to such a plan if the variances are small, because reducing risk by choosing a high estimate is unacceptable. Hence the plan-and-track approach is only practical for very low-risk projects.

Nevertheless, the plan-and-track method is often applied to projects with high variance, usually with disappointing results. The above discussion of the statistical nature of project parameters shows why: When the variance is large, as is typical for complex projects, setting the task durations sufficiently long for the plan to be executable at low risk is likely to be unacceptable. On the other hand, this variance cannot be ignored. Going back to the example: if the variance in the estimates is 0.1, the 90% certainty date for the task is 3.15 months and the 99% certainty date is 3.3 months, a mere 10% increase in duration.

Here is the point: Even if, at the onset of the project, the initial variance is high, the project could be managed so that early in the project, the uncertainties are removed and the variance becomes low. In fact, the key to project success is removing the uncertainty as soon as possible, resulting in an increased ability to make accurate and aggressive plans. However, since the ability to reduce variance is based on knowledge gained through project experience, a plan-and-track governance approach is not well suited to managing risky projects -- i.e., projects with high initial variance.

The above example uses a normal distribution. However, the effect is the same for all distributions (log-normal, triangular, etc.). Of course, essentially the same reasoning applies to cost and effort estimates. Both cost and effort are random variables whose variances will reduce over the lifecycle of the project.

In the following sections, I describe how reductions in estimation variance may be used as a key measurement to guide the control of iterative development.

The risk workoff curve

At the onset of a development project, there is project uncertainty. There may be uncertainty as to the cost or time to delivery. There may be uncertainty regarding the technical parameters, such as the reliability of the project deliverables. The amount of uncertainty is often a reflection of the degree of novelty that the team perceives in the problem posed and in the solution, including novelty of the system, product, or service to be delivered.

As discussed above, project and technical parameters are random variables whose variance decreases over the project lifecycle. It is easy to see that the variance of any of the variables is high at the beginning of the project and low at the end when completion is in sight.

In order to proceed, we need the concept of useful team knowledge -- i.e., the information that the extended development team (the core team, the stakeholders, and the suppliers) needs to complete the project successfully. Examples of useful knowledge include the stakeholders' requirements and priorities, an effective design approach, the technologies to be applied, and the internal and external dependencies.

A simple derivation of the risk workoff curve follows from two assumptions:5

  1. Program parameter variance (V) is inversely proportional to the useful team knowledge (K):

     V = 1/K

  2. Program knowledge increases with the useful team knowledge acquired to date. The rate of acquisition is proportional to the coefficient of collaborative effort (Cs) in sharing knowledge (e.g., a coefficient of knowledge proliferation). Hence, we have this differential equation:

     dK/dt = C_s K

Solving the differential equation yields

K(t) = K_0 e^{C_s t}

Hence, knowledge grows exponentially over time,6 starting with the initial knowledge K_0. That means that knowledge is reused to create new knowledge. Whatever the learning and experience, the knowledge is always reusable; it never diminishes in value.

Consequently,

V(t) = V_0 e^{-C_s t}

where V0 is the initial variance.

The coefficient Cs corresponds to the overall effectiveness of the organization in acquiring knowledge. The actual measure of "effective" is a function of both time and organizational cohesiveness. Generally, Cs is related to the exponential scale parameter that measures diseconomies of scale in many estimation models, such as COCOMO II. That is, the greater the collaboration, the better the opportunity for knowledge propagation. Hence, the variance diminishes over time as collaborative interaction builds new knowledge.
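
The workoff curve is simple to tabulate; in the sketch below, the value of Cs is an illustrative placeholder, since in practice it would be calibrated from the organization's estimation history:

import math

def variance_at(t, v0=1.0, cs=1.0):
    # Remaining estimation variance at time t under V(t) = V0 * e^(-Cs * t).
    return v0 * math.exp(-cs * t)

# Exponential fall-off, as in Figure 7: a more collaborative team (higher Cs)
# works the same variance off sooner.
for t in range(6):
    print(f"t={t}: V={variance_at(t):.3f} (higher collaboration: {variance_at(t, cs=2.0):.3f})")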

It follows that variance of the parameters in successful projects approximates the exponential fall-off curve shown in Figure 7. Evidence for this curve is also found in empirical studies7 and a more rigorous theoretical analysis by Scott Mathews.8

Figure 7: The risk workoff curve

In fact, decades of experience have shown that, no matter what lifecycle methodology is formally practiced by an organization, successful project managers identify the risks at the onset of the project and intuitively manage their project so that the project variables follow the curve shown in Figure 7. Sometimes this is done explicitly, and sometimes it is done "off the books."

It stands to reason that a project will have a high degree of novelty and large variance in the beginning; at the end of a successful project, the variance is near 0. In the remainder of this paper, we will explore some of the implications of this observation.

Tracking the curve

We now turn to the question of how to manage a project consistent with the risk workoff curve. The answer is to conduct two activities:

  • Estimate the project's cost, effort, and duration, along with their variances
  • Adopt a control-loop iterative lifecycle model using project variance as a control

We will address each of these activities in turn.

Estimating variance

For all of the above discussion to be practical, the team must find a way to compute the variance of the project parameters. A widely accepted forecasting tool, Delphi estimation, provides a robust, practical method for estimating project parameters and their variances. The Delphi method provides a systematic, interactive forecasting methodology that focuses and quantifies the judgment of project leads. Developed by the RAND Corporation in the 1950s-1960s, it takes into account the value of expert opinion, experience, and intuition.

To apply the method, the project manager partitions the effort and then asks the teams responsible for the partitions to provide three estimates:

EL = Lowest Value (best case)

EN = Nominal Value (expected case)

EH = Highest Value (worst case)

The estimates could be of any parameter of interest: e.g., effort or schedule.

These estimates can be more or less formal depending on the team's experience and the project's degree of novelty. The partitioning can be either by project phase or by some division of effort (e.g., by component development team), or both. Even the trivial case of a single partition can be used. However, the more partitions used, the better the calculation, for three reasons:

  • Statistical error is reduced by including more terms.
  • The partitions permit more team members to apply their judgment to the part of the effort with which they are most familiar.
  • Each team can apply whatever estimation method is most appropriate to its part of the work. For example, a subteam can apply a parameterized method such as COCOMO, varying the parameters with optimistic, nominal, and pessimistic values, and then use the computed values as input to the Delphi method.

The expected overall value of the estimate (E) is the weighted average

E = (\sum E_L + 4 \sum E_N + \sum E_H) / 6

where the sums are taken over the partitions. The variance (V) is found by:

V = (\sum E_H - \sum E_L) / 6

The importance of the Delphi method is that it reflects the uncertainty inherent in a project. The uncertainty is input by the developers as the difference between the best and worst cases, EL and EH. If the difference is large, then the manager knows the team feels this component carries proportionate risk. The manager would be wise to probe further, determine the source, and plan a set of activities that will raise the confidence of the team so that, over time, they can report smaller variances in their estimates.
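
A minimal Python sketch of the aggregation above, assuming each partition reports its (EL, EN, EH) triple:

def delphi_aggregate(partitions):
    # partitions: list of (EL, EN, EH) triples, one per partition of the effort.
    sum_low = sum(low for low, _, _ in partitions)
    sum_nominal = sum(nominal for _, nominal, _ in partitions)
    sum_high = sum(high for _, _, high in partitions)
    expected = (sum_low + 4 * sum_nominal + sum_high) / 6
    variance = (sum_high - sum_low) / 6
    return expected, variance

# Illustrative example: three component teams estimating effort in person-months.
estimates = [(2, 3, 6), (4, 5, 9), (1, 2, 4)]
print(delphi_aggregate(estimates))  # -> (11.0, 2.0)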

Iterative development control loops

In engineering, the use of control loops is common for managing systems that have high variances in their measurements. For example, the difference between a ballistic missile and the more accurate cruise missile is that the latter uses measurement of position and constant course corrections to hit the target. To carry the analogy further, a plan-and-track method is the ballistic missile's approach to reaching a target; iterative development is the cruise missile's approach.

Here is the key principle:

Projects should be governed using a control loop to drive project variance to zero.

Since it is impossible to know at the beginning of a project how to reduce the variance of the later tasks, one must adopt an iterative approach.

As shown in Figure 8, the manager plans a series of control points (called iterations) and, at each iteration, reviews the project status in terms of the key variables and their variance, and adjusts the activities and resource assignments to steer the project to success.

Figure 8: Project governance control loop

In this way, the project manager takes advantage of the lessons learned to create a feedback loop for the project.
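
As a sketch of how such a loop might be instrumented (the workoff target, tolerance, and cadence below are my assumptions, not prescribed by the method), each control point compares the freshly re-estimated variance against the expected workoff curve:

import math

def review_iteration(t, reported_variance, v0, cs, tolerance=1.25):
    # Compare the team's re-estimated variance against the workoff target
    # V0 * e^(-Cs * t); flag the iteration if variance is not coming down.
    target = v0 * math.exp(-cs * t)
    return target, reported_variance <= tolerance * target

v0, cs = 4.0, 0.8  # initial variance and assumed knowledge-sharing coefficient
for t, reported in [(1, 2.1), (2, 1.4), (3, 0.3)]:
    target, on_track = review_iteration(t, reported, v0, cs)
    status = "on track" if on_track else "replan risk-removal activities"
    print(f"iteration {t}: target {target:.2f}, reported {reported} -> {status}")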

Iterative methods are well established in development practice. Generally, iterative methods track demonstrable progress in project artifacts (usually partial implementations of the to-be-delivered product or system) against plan. If the project is failing to deliver the artifacts as planned at the iteration boundaries, the activities and staff assignments are reset. This classical iterative method, while powerful, does have a shortfall: Planning the correct content for each of the iterations is more of an art than a science. The classical method lacks instrumentation to drive the control loops.

The usual approach is to plan the iterations to remove risk. As mentioned above, the ability to design an iteration plan that removes risk so that the project tracks the curve shown in Figure 7 is a hallmark of an effective project manager. In the hands of a poor project manager, iterative development is subject to the bow wave effect: the early iterations address the easy work, so the iterations do not in fact remove the risk of failure. Good iterative practice addresses the difficult (risky) aspects of a project first.

The additional metrics referred to in Figure 8 should include financial and quality metrics. A discussion of how to choose those metrics will appear elsewhere. However, it is important to note that they play a role in setting the iteration content moving forward.

Note that a guiding principle for choosing metrics is how they will affect a specific project parameter's variance. For example, a metric that does not correlate to the project cost or duration cannot aid in assessing the variance reduction in the iterations.

Hence, a project governance method should include, but not be limited to:

  • A measure of progress achieved
  • A measure of risk removed, i.e., the reduction of variance of key project metrics
  • A measure of performance against other metrics targets

The iterations are planned to deliver the content, remove the risk, and track the additional targets. Removing the risk in the early iterations improves the likelihood that later iterations and the project are successful.

Project lifecycle

Management style and team behavior vary considerably as a project tracking the risk curve (shown in Figure 7) moves through its lifecycle. As shown in Figure 9, it is useful to divide the lifecycle into risk stages in order to characterize the expected team behavior and the effective management approach. We discuss these stages more fully below.

The risk stages

As shown in Figure 9, the project may be roughly divided into three stages.

Figure 9: The project lifecycle can be divided into three risk stages.

Each stage removes 80% of the variance remaining at its start. That is, at the end of Stage I, 80% of the initial variance has been removed; at the end of Stage II, 80% of the remaining variance has been removed. Also note that the risk stages align well with the Rational Unified Process (RUP) lifecycle phases,9 as shown in Figure 10. The RUP principles and phase-based activities provide good guidance on how to plan and execute such projects.

Figure 10: The Rational Unified Process
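
Combining the 80% rule with the workoff curve gives a rough timing for these stage boundaries. The sketch below, with an illustrative Cs, solves for the time at which a given fraction of the initial variance remains:

import math

def time_until(fraction_remaining, cs):
    # Solve V0 * e^(-Cs * t) = fraction_remaining * V0 for t.
    return -math.log(fraction_remaining) / cs

cs = 1.0  # illustrative knowledge-sharing coefficient (1/time units)
print(f"end of Stage I  (20% remains): t = {time_until(0.20, cs):.2f}")
print(f"end of Stage II ( 4% remains): t = {time_until(0.04, cs):.2f}")
# Note the stages come out equally long: each removes the same fraction
# of the variance remaining at its start.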

Stage I

The project's focus in this Risk Removal stage is to address early those risks that lead to the highest initial variances. Typical risks in this stage include uncertainty of scope and/or technical approach, along with team capability. By the end of the stage, roughly 80% of the initial variance should be removed, leaving variances near 20% of their starting values.

This stage contains the RUP Inception and Elaboration phases. Recall that during Inception, the team gains agreement on overall scope, and during Elaboration, agreement is reached on the solution approach.

Since this stage is focused on making good decisions that reduce risk, the team must feel free to collaborate in a free-form manner. Hence the team's activities are not well described by transactional workflows. Rather, management techniques that flow from systems theory are more appropriate. Tooling for this stage enables good team decision support.

Stage II

This stage is focused on removing roughly 80% of the remaining risk so that, at the end of the stage, variances should be around 4% of their initial values. This stage aligns with the RUP Construction phase, during which the primary focus is execution risk. During Construction, the team applies the solution approach in iterations. The phase concludes with a working, tested system ready for transition to the client. During this period, the team should be well structured. While the workflow is not fully automated, much of the work can be choreographed. The tooling can support the execution workflows.

Stage III

This stage completes the project. It aligns with the RUP Transition phase, consisting of rolling out the solution into "production." During this phase, the activities should be highly transactional, with a major focus on change management so that new risks are not introduced.

Implications of the variance-governance approach

In this section, I will introduce some of the implications of taking the variance-governance approach. These topics will be expanded in later papers.

Risk management

Classical risk management methods include brainstorming sessions by program members to identify the project risks, their likelihood (high, medium, low), their impact (high, medium, low), and their mitigation plans. The combination of likelihood and impact is used for prioritization. Over the course of the project, the risks and their mitigation plans are tracked. In this method, the relationship of risk management to the key project measurements is tenuous. The risk attributes (likelihood and impact on success) and their statuses are, at best, a matter of consensus and are not correlated to the governing metrics.

This discipline, in spite of its benefits, is not universally followed. In many cases, the risk management process is seen more as a nuisance, often no more than an accommodation to the process assurance organization. One possible cause of this perceived low value is that the risk management activity is decoupled from the other governance activities. This decoupling can make risk management appear as a project add-on, not a core activity that benefits the project.

In the variance view, by contrast, a risk is precisely a condition that adds to project variance. Rethinking risk management practices from this perspective offers a more rigorous and valuable approach to project management.

It follows that it is precisely the risk removal activities that remove the project variance. Further, using the Delphi method, the team can quantify the impact of a risk. In particular, the team can estimate the three values -- lowest, nominal, and highest -- assuming that the risk has been removed, and then apply these values to reestimate the project variance. The outcome of the exercise is a measurement of the risk and of the effect of its mitigation.
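
As a sketch, the measured impact of a risk is the drop in the Delphi variance between the two estimation passes; the triples below are invented for illustration:

def delphi_variance(partitions):
    # V = (sum of highest estimates - sum of lowest estimates) / 6
    return (sum(h for _, _, h in partitions) - sum(l for l, _, _ in partitions)) / 6

with_risk = [(2, 3, 9)]      # wide spread: the team is hedging against the risk
risk_removed = [(2, 3, 4)]   # the spread the team reports assuming the risk is gone

impact = delphi_variance(with_risk) - delphi_variance(risk_removed)
print(f"variance attributable to this risk: {impact:.2f}")  # (7 - 2) / 6 ~ 0.83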

Adopting this approach unifies risk management and project governance.

Client management

Often, projects are conducted on a contractual basis and begin with a bidding process. In this circumstance, it is common for the project team to develop an estimate of nominal project parameters and bid the project around these parameters. The more experienced project managers will raise the bid to account for the variance. This practice creates a tension between being confident of delivery and being competitive. Raising the bid to account for the level of uncertainty in the estimates will probably make the proposal noncompetitive. This problem is compounded when the client insists on detailed project plans that are likely to be filled with uncertain estimates.

In the end, the team's winning bid leaves them with a very risky project, which is likely to fail or at the least be unprofitable.

An alternate approach is to have the client and the contracting team share an understanding of the variance and its causes at the beginning of the project. Often, much of the variance is due to factors that involve the client. The client and the project team can then collaborate to work off the variance together. This teaming approach, understanding and sharing the risk, is the basis for honest communication.11

Admittedly, taking such an approach, while rational, is a major departure from current contracting practice. A variance-based acquisition approach requires that the contract include a staged acquisition. Rather than contracting for the entire project, the contract should be negotiated one phase at a time. The contract phases could align with the risk stages or with the RUP phases. Note that the US Department of Defense 5000.x series adopts this phased approach for its risky projects.

Value creation

The financial analysis community has extended Black-Scholes reasoning (for determining the value of a financial investment option) to the technique known as Real Options, which estimates the value of investing in anything that might deliver some value in the future. Real Options methods treat the variance in the estimates similarly to volatility in the Black-Scholes equations.

Real Options methods can be used to determine the value of a development project at its onset. Essentially, the value of the project is the call option value of the expected benefits. For example, Vinay Datar, associate professor in finance and economics at Seattle University, and Scott Mathews of Boeing Phantom Works have developed Real Project Value (RPV), a Real Options variant targeting product development.12 Specifically, the RPV of a development effort is calculated as a call option to buy the benefits of the completed project. These methods apply generally to development projects.
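
As a hedged sketch of this style of valuation (the numbers and lognormal benefit model are illustrative assumptions, not the published RPV procedure), the option value is the average discounted launch payoff, with unfavorable scenarios counted as zero:

import random

def real_option_value(mu, sigma, launch_cost, rate, years, trials=100_000):
    # Simulate uncertain project benefits, discount them, and treat launch as
    # a call option: scenarios where benefits fall short contribute zero.
    discount = (1 + rate) ** -years
    payoff = 0.0
    for _ in range(trials):
        benefit = random.lognormvariate(mu, sigma)
        payoff += max(discount * benefit - launch_cost, 0.0)
    return payoff / trials

# Illustrative: benefits lognormal around e^3 ~ 20, launch cost 15, 2 years at 10%.
print(round(real_option_value(3.0, 0.5, 15.0, 0.10, 2), 2))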

Following the Datar-Mathews method, the value of the project increases as the risk is removed. In particular, most of the project value is created in the early stages, when most of the risk is removed. This result is consistent with Pareto's Law: 80% of the value is created by 20% of the work.

A corollary of this observation on value creation is that the organizations able to successfully manage risky projects create the most value.

The collaboration paradox

The risk reduction equation introduced earlier

V(t) = V_0 e^{-C_s t}

raises an apparent paradox:

Increasing Cs improves the opportunity for lowering variance, but also lowers productivity.

Reducing variance requires collaboration, but more collaboration results in more of the staff effort being spent interacting and less on the generation of the program artifacts.

The resolution of the paradox is that collaboration must be structured and facilitated to increase knowledge and thereby reduce variance. Collaboration that does not lower variance is wasted effort. Hence, in structuring the team, the program manager must ensure that the collaborations are productive. Some considerations include:

  • Ensuring that technical responsibilities are in place, allowing for encapsulation of knowledge. Not everyone needs to know everything.
  • Enabling the creation and maintenance of a communications path.
  • Creating the needed infrastructure for capturing and sharing program-related knowledge.
  • Training the team in appropriate semantics and ontologies for precise communication.

Specifics on enabling productive collaboration will be the topic of a later paper. For now, it is worth mentioning that collaboration styles vary across the risk stages.

In conclusion

The ability to accept risk and succeed at risky projects is how development teams create value. The path to competitiveness fully embraces and deals with risk; it does not ignore or avoid it. Teams that reject risk will be relegated to routine work of little value; such teams, in the end, find themselves in downward price spirals. With an understanding of the impact that the variance of key variables has on the project, the project may be governed by either:

  • Adopting a plan-and-track governance method and moving out the schedule to account for the variance, or
  • Adopting a control-loop governance approach that focuses project activities upon reducing the variance in the estimate by removing uncertainty early in the project.

The second method paves the way to success.

Acknowledgments

Thank you to Michael Mott, Vasco Drecun, Scott Mathews, David Lubanko, and James Cantor for their help in preparation of this article.

References

Murray Cantor, Software Leadership. Addison Wesley 2002.

Murray Cantor, Scott Mathews, Vasco Drecun, "Real Metrics to Drive Product Development." Preprint 2005.

Vinay Datar and Scott Mathews, "A Simple Algorithm for Valuation of Real Options: An Intuitive Alternative to the Black-Scholes Formula." Journal of Applied Finance, in press.

INCOSE System Engineering Handbook. Version 2a. June 2004.

Per Kroll and Philippe Kruchten, The Rational Unified Process Made Easy. Addison Wesley 2003.

Walker Royce, Software Project Management. Addison Wesley 1998.

Walker Royce, "Successful Software Management Style: Steering and Balance." IEEE Software, September -- October 2005.

Steve Tockey, Return on Software. Addison Wesley 2005.

Notes

1 See, for example, the Standish Chaos Report or the Bull Report.

2 INCOSE System Engineering Handbook, Version 2a. June 2004.

3 The technical definition of variance is the random variable's second central moment, the weighted average of the square of the differences from the expected value.

4 Scott Mathews, "Real Options Reshape the Distribution." Preprint 2005.

5 Vasco Drecun, private communication.

6 Actually knowledge follows an 'S-shaped curve,' leveling over time. This analysis applies to the exponential region of the curve.

7 Steve Tockey, Return on Software. Addison Wesley 2005.

8 Murray Cantor, Scott Mathews, Vasco Drecun, "Real Metrics to Drive Product Development." Preprint 2005.

9 See Walker Royce, Software Project Management, Addison Wesley 1998; Per Kroll and Philippe Kruchten, The Rational Unified Process Made Easy, Addison Wesley 2003; and Murray Cantor, Software Leadership, Addison Wesley 2002.

10 Murray Cantor, Scott Mathews, Vasco Drecun, op. cit.

11 For more on "honest communication," see Walker Royce, "Successful Software Management Style: Steering and Balance" in IEEE Software, September -- October 2005. A similar version of this paper can be found at http://www-128.ibm.com/developerworks/rational/library/mar05/royce/index.html

12 Vinay Datar and Scott Mathews, "A Simple Algorithm for Valuation of Real Options: An Intuitive Alternative to the Black-Scholes Formula." Journal of Applied Finance, forthcoming issue.

13 As discussed in Murray Cantor, Scott Mathews, Vasco Drecun, "Real Metrics to Drive Product Development." Preprint 2005.
