I recently read about "the iron triangle" and its extensions on Max Wideman's excellent Web site on project management wisdom, www.maxwideman.com. I've long been fascinated by the now famous "scope, resources, time: pick any two" paradigm, which states that trying to maximize scope while simultaneously minimizing resources and time will impose too many constraints and inevitably lead to project failure.^{1}
Max makes the important point that we need to add a critical fourth dimension, quality, to the paradigm. As he wrote to me,
Interestingly, quality ultimately transcends all else, whether in terms of performance, productivity, or final product. But a remarkable number of people in the project management industry don't seem to have latched onto that. Who cares if last year's project was late and over budget? That's all lost in last year's financial statements. But the quality [of the product] is enduring.
It is hard to argue with that point of view. Most of us software developers can recall some time when, in our zeal to make our commitments and ship on time, we let stuff get out the door that caused us heavy regrets later on.
So Max extends the iron triangle to a star, as shown in Figure 1.
Figure 1: Max Wideman's extension of the "iron triangle" (resources, scope, and time) introduces "quality" as a fourth element.
As an alternative to this star, Max's correspondent Derrick Davis suggests using a tetrahedron to illustrate these relationships. This allows you to maintain the original triangle but create a third dimension to depict the quality aspect. The nice thing about the tetrahedron is its intrinsic symmetry; the four attributes populate the vertices, and any three can be used as the base. Max has illustrated this in a thoughtful way, tying the vertex pairs together with another descriptor (see Figure 2).
Figure 2: The tetrahedron model allows any three attributes to serve as a base, placing the fourth attribute in the third dimension.
Five, not four
Although I agree with Max's insistence on quality as a critical fourth factor, I believe that his model still leaves something to be desired. When thinking about a project prior to beginning work on it, management is typically interested in the "shape" of the project, an interest that maps nicely to the four parameters illustrated in Figures 1 and 2. That is, we can state how much we intend to do (scope); we can describe how well we are going to do it (quality); we can predict how long we will take to complete the project (time); and we can estimate how much it will cost (resources). But then are we done with our project description?
I don't think so. Management is always interested in a fifth variable: risk. That is, given the previous four parameters we've identified and the plan that goes with them, management wants to know whether the project represents a high, medium, or low risk to the business. We know from vast experience that projects have different risk profiles, and good managers try to balance their project portfolios by planning a spectrum of projects with different risk levels. The more risky ones have a greater probability of failure, but they might have bigger payoffs, too. Just as it is judicious for individuals to have diversified financial investment portfolios, it is smart for a company to diversify its portfolio by having many projects with different risk/reward profiles. Statistically, such an enterprise is bound to prosper.
Now, how can we use geometry to visualize this new, important, and (I believe) final parameter?
Enter the pyramid
I propose a model that represents the first four variables as the four sides of the base of a pyramid. We'll assign extensive properties to the sides so that the lengths are meaningful. Note that this is different from the Davis tetrahedron model, in which the attributes occupy the vertices.
For simplicity, let's assume that all sides of the base are of equal length, so that the base forms a square. This is reminiscent of Max's star, except that we have moved the attributes from the corners to the edges. Of course, these lengths can be independently adjusted, so the base is actually an arbitrary quadrilateral. Conceptually, however, we lose nothing by assuming for now that the base is a square.
Now let's redefine the length of each side of the base. We will also adjust our terminology slightly to reflect more accurately what the sides represent. Bear with me, and you will see why.
- Scope. More "things to do" represents a larger scope, so the length of this side increases as the scope increases.
- Quality. Higher quality standards mean a tougher job, so the length of this side increases as our quality metrics increase; in other words, as we "raise the bar" on quality.
- Speed. This is our way of capturing the time element; we increase the length of this side as the speed increases. Conversely, the slower you go (the more time you have), the shorter this side becomes. Completing five function points per month is harder than completing two function points per month; think of this side as work accomplished per unit time.
- Frugality. (Max suggested this term instead of my original "parsimony.") When we consume fewer resources, we are being more frugal, so higher frugality corresponds to a longer length for this side. If we use up more resources, then this side gets shorter.
Notice that if we use these definitions for scope, quality, speed, and frugality, the project becomes easier as the sides get shorter. That is, the project is easier if we do less, lower our quality standards, proceed more slowly (take more time), and can afford to be less frugal (have more resources). Thus all four variables "move in the same direction."
Note also that with these definitions we increase our profitability as we increase the area of the base. This is because the product's value goes up as we make it bigger and better and deliver it sooner, while at the same time being as frugal as possible in producing it. Maximizing value while minimizing cost optimizes profitability. It is perfectly logical that attempting to make our profit larger also makes the project harder and riskier.
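This base-area idea can be sketched in a few lines of Python. Note one caveat: four side lengths alone do not determine a quadrilateral's area, so this sketch adopts a convention of my own for illustration, taking the maximum-area (cyclic) quadrilateral given by Brahmagupta's formula. The function name and units are assumptions, not part of Max's model.

```python
from math import sqrt

def base_area(scope, quality, speed, frugality):
    """Area of the pyramid's base, given the four side lengths.

    Side lengths alone do not fix a quadrilateral's area, so as an
    illustrative convention we take the maximum-area (cyclic)
    quadrilateral, computed by Brahmagupta's formula.
    """
    a, b, c, d = scope, quality, speed, frugality
    s = (a + b + c + d) / 2  # semiperimeter
    return sqrt((s - a) * (s - b) * (s - c) * (s - d))

# A square base of side 1 has area 1; lengthening any one side grows
# the area -- and, in the model, the difficulty and potential profit.
print(base_area(1, 1, 1, 1))        # 1.0
print(base_area(1.2, 1, 1, 1) > 1)  # True
```

The formula degenerates gracefully: with all four sides equal it reproduces the square base used throughout the article.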
The altitude variable
Now let's build a pyramid on this base, keeping in mind that no matter what lengths the sides are for a given project, the volume of the pyramid will be proportional to the area of the base times the altitude. The altitude abstractly represents the probability of project success, which is the inverse of its risk. That is, a high-risk project will have a low probability of success and a low altitude. A low-risk project will have a high probability of success and a correspondingly higher altitude.
Now all we have to do is link it all together.
Figure 3: The project pyramid. A high-risk project will have a low probability of success and a low altitude. A low-risk project will have a high probability of success and a higher altitude.
The pyramid's volume is constant
We can now posit that the volume contained in the pyramid is a constant for a given team. That is, reality dictates that only so much "stuff" will fit into the project pyramid, based on that team's capabilities. This makes sense, because the pyramid's volume is proportional to
{difficulty} x {probability of success}
As one goes up, the other must go down. This is another way of saying that there is a "conservation law" at work here: the product of the base area (which represents the project difficulty due to the specification of the four parameters) times the altitude (representing the probability of success) is proportional to a "conserved" volume.
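A minimal sketch of this conservation law, using the elementary formula V = (1/3) · A · h for a pyramid's volume:

```python
def altitude(volume, base_area):
    """Altitude of a pyramid of fixed volume: V = (1/3) * A * h, so h = 3V / A.

    In the model, the volume stands for the team's capability and the
    altitude for the probability of success: shrink the base (an easier
    project) and the altitude rises in exact inverse proportion.
    """
    return 3 * volume / base_area

v = 1.0                          # fixed "team capability" volume (arbitrary units)
h1 = altitude(v, base_area=1.0)
h2 = altitude(v, base_area=0.5)  # halve the base...
print(h2 / h1)                   # 2.0 -- ...and the altitude doubles
```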
What determines the pyramid's volume? Two things. First, the capabilities of the project team, as we have already mentioned. And second, the degree to which the project team members are grappling with unfamiliar problems. A highly capable team implies a larger volume:
more capacity = more "stuff" = more volume
and lots of new problems and unknowns implies a smaller volume:
more unknowns = higher risk = less volume
So, given a constant volume corresponding to the project team, what do you have to do if you want to make the altitude higher, that is, if you want to increase the probability of success? By the logic of elementary solid geometry, you must make the base smaller. You do this by reducing the lengths of one or more sides of the base, thereby making the project easier.
Remember: Volume is proportional to base times altitude, regardless of the base's shape.
A statistical interlude
At this point, we can attempt to figure out the right "scale" for the altitude. We can measure the edges along the base in familiar units:
- Scope: function points or features
- Quality: the inverse of the number of defects allowed
- Speed: function points or features per month
- Frugality: "inverse" dollars or person-months
But what about that pesky probability of success, our altitude?
We know that "longer is better," that a higher altitude corresponds to a higher probability of success. But there is a slight problem with using probability, a percentage-based measurement, as the scale. For example, if we have a pyramid with an altitude corresponding to a 60 percent probability of success, we cannot, under the constant volume assumption, improve that percentage by cutting the area of the base in half in order to double the altitude. That would give us an absurd answer of "120 percent probability of success," and we know that probabilities must be between zero and 100 percent.
To resolve this conundrum, we must investigate how the outcomes of software development projects are distributed. Can we assume that these project outcomes are distributed according to the standard normal distribution, the well-known "bell curve"? The diagram in Figure 4 is worth a thousand words.
Figure 4: How software development project outcomes relate to the standard probability bell curve
For those of you who are rusty on what a probability distribution function is, recall that the x-axis represents the outcome, and the y-axis represents the number of events with that outcome, which, properly normalized, is the probability of that outcome. If we start from the left edge and sweep out the area under the curve, we measure the cumulative probability of attaining that outcome. In Figure 4, the percentages below the x-axis show how much area lies between the x-axis coordinates they span.
Note that the distribution is "normalized" here, with the midpoint called µ and the "width" of the distribution characterized by the standard deviation, sigma (the Greek letter in Figure 4 that resembles a small "o" with a tail on its upper right). The distribution extends to both plus and minus infinity, but note that the "tails" of the distribution past the plus and minus 3 sigma limits are quite small; the two tails together account for less than 0.3 percent of the entire area under the curve. The graph tells us that 68 percent of the projects will be either somewhat successful or somewhat unsuccessful, that only about 27.5 percent (95.5 percent minus 68 percent) will be either very successful or very unsuccessful, and that only 4.2 percent (99.7 percent minus 95.5 percent) will be either extremely successful or extremely unsuccessful. To get the relevant percentage for each of these, we can just divide by two, as there is symmetry around the middle. For example, we can predict that around 34 percent (approximately a third) of all projects will be somewhat successful.
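The band percentages quoted above can be recomputed from the standard normal cumulative distribution, which needs nothing beyond the standard library's error function:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Area within +/- 1, 2, and 3 sigma of the mean: the familiar
# 68 / 95.5 / 99.7 percent bands described in the text.
for k in (1, 2, 3):
    band = norm_cdf(k) - norm_cdf(-k)
    print(f"+/-{k} sigma: {band:.1%}")
```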
In most applications we assume that µ is zero, so the outcomes range from minus infinity to plus infinity. We can think of the x-axis as the payoff or reward. Although I am sure many software development projects have had zero payoff, it is hard to conceive of a project having a very large negative payoff, red ink notwithstanding.^{2} And surely all projects will be cancelled long before management lets them get to minus infinity! So the symmetrical standard normal distribution with tails to infinity in both directions seems to be the wrong model. What we'd prefer is a distribution that we can use with positive outcomes only, or at least with a finite limit on negative outcomes.
Right idea, wrong distribution
For this purpose, my dear friend and colleague Pascal Leroy suggested the skewed lognormal distribution,^{3} which more accurately reflects many phenomena in nature.
Unlike the standard normal distribution, the lognormal distribution is asymmetrical and lacks a left tail that stretches to infinity. It describes phenomena that can have only positive values. Figure 5 shows what it looks like.
Figure 5: The lognormal distribution for depicting positive outcomes only
We still use a sigma to represent the standard deviation, but we interpret it differently for the lognormal distribution, as we will explain below. Note that µ is now coincident with 1 sigma. Half the area under the curve is to the left of 1 sigma and half is to the right; if we believe the universe of projects has this distribution, then we want our project to fall to the right of the 1 sigma line, which means its reward will be above the average.^{4} This is the equivalent of saying that we are willing to invest µ (or 1 sigma) to do the project; any outcome (payoff, reward) less than that represents a loss (red ink), and anything above that a win.
Unlike the standard normal distribution, the lognormal distribution clumps unsuccessful projects between zero and 1 sigma, and successful projects range from 1 sigma to infinity, with a long, slowly diminishing tail. This tells us that we can have a small number of projects with very large payoffs to the right, but our losses are limited by the zero on the left. This seems to be a better model of reality.
The meaning of sigma is different in this distribution. As you move away from the midpoint, which is labeled here as 1 sigma, you accrue area a little differently. Each confidence interval corresponds to a distance out to (1/2)^n sigma on the left and out to 2^n sigma on the right. This means that 68 percent of the area lies between 0.5 sigma and 2 sigma, and 95.5 percent of the area lies between 0.25 sigma and 4 sigma. This is how the multiplicative nature of the lognormal distribution manifests itself.
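A quick check of these multiplicative bands. This sketch assumes, purely for illustration, that one "sigma step" is a factor of 2 (a log-space standard deviation of ln 2); under that assumption the half-to-double and quarter-to-quadruple bands around the median carry exactly the familiar 68 and 95.5 percent of the area:

```python
from math import erf, log, sqrt

def lognorm_cdf(x, median=1.0, sigma_log=log(2)):
    """CDF of a lognormal whose multiplicative spread is a factor of 2.

    sigma_log = ln 2 is an illustrative assumption: it means one
    'sigma step' multiplies or divides the outcome by 2.
    """
    z = (log(x) - log(median)) / sigma_log
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Mass between median/2 and 2*median, and between median/4 and 4*median:
print(lognorm_cdf(2.0) - lognorm_cdf(0.5))   # ~0.683, the 68 percent band
print(lognorm_cdf(4.0) - lognorm_cdf(0.25))  # ~0.954, the ~95.5 percent band
```

Half the area falls on each side of the median, matching the article's use of the 1 sigma point as the break-even line.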
Mathematically, the distribution results from phenomena that statistically obey the multiplicative central limit theorem. This theorem demonstrates how the lognormal distribution arises from many small multiplicative random effects. In our case, one could argue that all variance in the outcomes of software development projects is due to many small but multiplicative random effects. By way of contrast, the standard normal distribution results from the additive contribution of many small random effects.
Implications for real projects
What are the implications of this distribution for real projects? Because the peak of the curve lies at around 0.6 sigma, we see that the most likely outcome (as measured by the curve's height) is an unsuccessful project! In fact, if the peak were exactly at 0.5 sigma, your probability of success would be only around 16 percent:
50% - ½(68%) = 50% - 34% = 16%
Since the peak is not at 0.5 sigma but closer to 0.6 sigma or 0.7 sigma, the probability of success is a little higher: around 20 percent.
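The peak's location can be checked directly: the mode of a lognormal distribution lies at exp(-σ²) times the median, where σ is the log-space standard deviation. Assuming, for illustration only, a multiplicative spread factor of 2 (σ = ln 2), the peak lands almost exactly where the text estimates:

```python
from math import exp, log

# Mode (peak) of a lognormal relative to its median, assuming a
# multiplicative spread factor of 2 (sigma_log = ln 2, an illustrative
# assumption, not a value taken from project data).
sigma_log = log(2)
mode_over_median = exp(-sigma_log ** 2)
print(round(mode_over_median, 2))  # 0.62 -- the peak sits near 0.6 sigma
```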
Now this is starting to become very interesting, because the Standish CHAOS report,^{5} of which I have always been skeptical, documents that around four out of every five software development projects fail; this is a 20 percent success rate. I will have more to say about this report later on. But it is interesting to note that the lognormal distribution predicts the Standish metric as the most likely outcome, which may mean that most development projects have a built-in difficulty factor that causes the lognormal distribution to hold.
What does it take to get to a coin flip?
What project manager wants to start with a less-than-even chance of success? At the very least, we would like to get the chances up to 50/50 for our projects. So, using our pyramid model, what do we have to do to the base to increase the altitude?
Using units of sigma for our pyramid's altitude, we begin with a plan that gives us a starting point at the most probable outcome, at the distribution peak of 0.66 sigma. To get to a 50 percent probability of success, we need to accumulate half the area under the curve, which we know happens at the 1 sigma point. So we need to go from 0.66 sigma to 1.0 sigma, an increase of 50 percent. That says we have to increase the altitude of the pyramid by a factor of 1.5, which means dividing the area of the base by 1.5, or multiplying it by two-thirds.
In turn, this implies that we must multiply the lengths of the sides of the square base by the square root of two-thirds, which is about 0.82. Therefore, to go from a naïve plan with only a 20 percent chance of success to a plan with a 50 percent chance of success, we must simultaneously:
- Reduce scope by about 18 percent
- Reduce quality standards by about 18 percent
- Extend the schedule by about 18 percent (i.e., reduce speed by 18 percent)
- Apply about 18 percent more resources (i.e., reduce frugality by 18 percent)
relative to what we had planned in the original scenario.
We could, of course, change each of these parameters by a somewhat different amount, as long as we reduced the area of the base by a third.
Let's call this new plan, the one that gets us to a 50/50 footing, "Plan B." We'll refer to the original, most likely, and somewhat naïve plan as "Plan A."
More confidence
Can we do better? Suppose we wanted to go out to the 2 sigma point. This would then lead to a probability of success of around 84 percent:
50% + ½(68%) = 50% + 34% = 84%
This would bring our odds up to five-to-one, which any project manager would gladly accept. In fact, this would be standing Standish on its head: five successful projects for every unsuccessful one.
What would it take to get us there?
Well, we can do the math both ways, either starting from our original Plan A or from the 50/50 Plan B. For consistency's sake, let's begin with Plan A. The math is pretty much the same. We now have to go from 0.66 sigma to 2 sigma, increasing our altitude by a factor of 3. That means we must multiply the area of the base by one-third, which in turn means that we must multiply each side by the square root of 0.333, or about 0.58. And in our previous list of things we'd need to change simultaneously to achieve better results, we'd have to replace 18 percent with 42 percent.
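The arithmetic for both plans can be bundled into one small helper; this sketch uses the two-thirds approximation for 0.66/1.0, as the text does:

```python
from math import sqrt

def base_reduction(start, target):
    """Fractional reduction in each base side needed to raise the
    pyramid's altitude from `start` sigma to `target` sigma at constant
    volume. Altitude scales inversely with base area, and area with the
    square of the side length, so each side shrinks by sqrt(start/target).
    """
    return 1.0 - sqrt(start / target)

plan_b = base_reduction(2/3, 1.0)  # peak (~0.66 sigma) to the 1 sigma point
plan_c = base_reduction(2/3, 2.0)  # peak to the 2 sigma point
print(f"Plan B: reduce each side by ~{plan_b:.0%}")  # ~18%
print(f"Plan C: reduce each side by ~{plan_c:.0%}")  # ~42%
```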
Let's now summarize, using rough numbers so that we don't assign spurious precision to the model.^{6} Plan A has a probability of success of only around 20 percent. As we have seen, if we simultaneously reduce the difficulty of all four of the base parameters (scope, quality, speed, and frugality) by about 20 percent, we get Plan B, which has a 50 percent probability of success. To achieve an 85 percent success rate, we'd need to reduce the difficulty of the base parameters by around 40 percent relative to Plan A. Table 1 summarizes these relationships.
Table 1: Results of using the pyramid model and lognormal distribution
Plan | Description | Location on lognormal curve | Probability of success | Values for base parameters
-----|-------------|-----------------------------|------------------------|---------------------------
A | Naïve and most likely starting point | 0.67 sigma | 20% | Per plan
B | More realistic | 1 sigma | 50% | Reduced by 20% relative to A
C | High efficiency | 2 sigma | 85% | Reduced by 40% relative to A
Clearly we've gone way out on a thin limb here, but the numbers in Table 1 represent the pyramid model's predictions, based on the lognormal distribution for project outcomes and a constant volume assumption.
Important caveats
At this point it is important to step back for a moment and consider the limitations of this model. We have made many implicit assumptions along the way, and now we must make them explicit.
- Let's begin with what we mean by "success." Remember, we said we would define success as an outcome greater than 1 sigma in the lognormal distribution, which would mean that about half of our projects would be successful.
But the Standish report says that four out of five projects fail. Does this mean they are so constrained and therefore so difficult that this is the result? Perhaps. Many software development projects are doomed the instant the ink dries on the project plan. But I think there is more going on than that.
I have always had a problem with the Standish report, because I think it overstates the case, and in so doing it trivializes the real problem. If we were to take all original project plans and then apply our four base metrics to assess the projects at their conclusion, Standish would probably be right. And the lognormal distribution seems to support this scenario. But do we really have four failures for every success?
Here is what I think really happens: Along the way, as a project progresses, management realizes that the original goals were too aggressive, or the developers were too optimistic, or that they really didn't understand the problem. But now the project has incurred costs, so that scrapping it would seem wasteful and impractical. So instead the project gets redefined, and the goals are reset. This may involve scaling back the feature set, deferring some things to a subsequent release. Sometimes, especially if the team discovers problems near the end of the development lifecycle, they will sacrifice quality and ship the product with too many defects. And even then, they are likely to exceed their schedule and budget. But does this mean that the project is a failure? Not necessarily.
I maintain that lots of these projects fall into the "moderately successful" bucket, and some into the "only somewhat unsuccessful" bucket. So, as in everything else in the world, we revise expectations (usually downward) as we go, so that when we are done we can declare victory. This is important both politically and psychologically; it avoids what psychologists call "cognitive dissonance." No one likes to fail, and you can always salvage something. So we tend to gently revise history and "spin" actual results, and the usual distribution applies. In reality, the Standish metrics apply only if you use the original project plan as a measuring stick. But no one actually does.

- The parameters scope, quality, speed, and frugality are not all independent of each other. For example, as the project slips and takes more time, it also incurs increased cost because of the increased resources consumed, so both speed and frugality tend to suffer in parallel. You could try to offset one with the other, for example by spending more money to hire more people and go faster. But, as Brooks so clearly pointed out almost thirty years ago,^{7} adding people to a software project usually has the effect of slowing it down! If you wanted to make this kind of tradeoff, counterintuitively, you would do better to spend less money per unit time by having fewer people and going more slowly. You might not even lose that much time, because, as Brooks pointed out, smaller teams tend to be more efficient.
- The different parameters do not have perfectly equal impact. Time, or the inverse of speed, appears to play a more critical role than the other three, although this is always open to debate. Typically, managers resist reducing scope and quality, and they are always in a big hurry. From their point of view, the only parameter they can play with is resources. So often they opt to throw money at the problem. This usually fails, because they don't take the time to apply the money intelligently and instead spend it in ineffective ways. This is exactly Brooks' point. In the end, he said, more projects fail for lack of time than for all other reasons combined. He was right then, and I believe he is still right.
- In general, you cannot trade off the four parameters one against another, at least not in large doses. That is, you cannot make up for a major lacuna in any of them by massively increasing one or more of the others. Projects seem to observe a law of natural balance; if you try to construct a base in which any one side is way out of proportion in relation to the others, you will fail. That is why we opted to assume our base was a square, with all sides (parameters) conceptually on a somewhat equal footing. We acknowledge that you can adjust the sides up and down in the interest of achieving equivalent area, but caution against the notion that you can do this indiscriminately. Again, you can't increase one parameter arbitrarily to solve problems in one or more of the other parameters. Max Wideman likes to think of the base as a "rubber sheet." You can pull on one corner and adjust the lengths, but eventually the sheet will tear. Geometrically, of course, one side cannot be longer than the sum of the other three sides, because then the quadrilateral would not "close."
- To some extent, we have ignored the most important factor in any software development project: the talent of the people involved. Over and over again, we have seen that it is not the sheer number of people on a team that matters, but rather their skills, experience, aptitude, and character. Managing team dynamics and matching skills to specific project tasks are topics beyond the scope of this article. However, the pyramid's volume to some degree corresponds to the team's capabilities.
- We should be careful not to specify product quality based solely on the absence of defects. Quality needs to be defined more generally as "fitness for use." A defect-free product that doesn't persuasively address an important problem is by and large irrelevant and cannot be classified as "high quality."
- What about iterative development? Unfortunately, this treatment looks at the project as a "one shot," which goes against everything we believe in with respect to iterative development. But perhaps the unusually high failure rate documented by Standish is caused by a lack of iterative development. That is, by starting with an unrealistic plan and rigidly adhering to it throughout the project, despite new data indicating we should do otherwise, we bring about our own failures.
However, if we are smart enough to use an iterative approach, then we can suggest a workable model. We start out with a pyramid of a certain volume and altitude during Inception, based on our best knowledge of the team and the unknowns at that point. As we move into the next phase, our pyramid can change both its volume and its shape. The volume might shift as we augment or diminish the team's capability, or as we learn things that help us mitigate risks; this is a natural consequence of iterative development. In addition, the shape of the pyramid may change as we adjust one or more sides of the base by reducing scope, adding resources, taking more time, or relaxing the quality standard a bit, or by making changes in the opposite direction. This should happen at each of the phase boundaries; our goal should be to increase the altitude each and every time. As the project moves through the four phases of iterative development, we should see our pyramid not only increase in volume but also grow progressively taller as we reduce risks, by whatever means necessary. If this does not happen through an increase in volume, we must accomplish it by decreasing the base area.

- The issue of whether projects follow the lognormal probability distribution is debatable. I agree with Pascal that it makes more sense than a standard normal distribution. But we have no fundamental underlying mechanism that says the distribution must be lognormal.
- Finally, the conservation law expressed as a constant-volume pyramid is just a model. It provides a convenient visualization of the phenomenon, but it is a guess, and the simplest geometric model I could come up with. To determine whether it reflects reality, we'd need to examine empirical data.
Although it is long, this list of caveats does not negate the value of the model; I think its predictions are valid and consistent with my previous experience. Indeed, many midcourse corrections that teams make during a development project to improve their probability of success turn out to be mere band-aids and don't come close to addressing the real issues. We have demonstrated over and over again that to improve your chances of success substantially, you need to do more than relax a single constraint by 10 percent, and this model underscores that point. Therein lies its greatest value; I believe it represents a fundamental truth.
It's all about risk
Risk is perhaps the most important parameter to consider in funding and planning a project. That is why the simple model we have defined in this article correlates four traditional project parameters (scope, quality, speed, frugality) and then adds risk as a fifth variable. If you were planning to paddle a canoe down a river, you'd want to know whether the rapids were class three or class five. The latter would be a lot riskier, so you might decide to spend a little more money on your boat. The same is true for software development investments: It is worthwhile to assess the risks before deciding what resources to allocate to a project. But remember, resources are only one leg of the square base; you must consider them in concert with scope, time, and quality. It is the combination of these four parameters, along with the quality of the team, that ultimately determines the risk profile.
The simple pyramid model also shows how much you must trade off to improve your probability of success. Although it is speculative, the model helps us to soberly decide whether we are willing to invest the resources required to raise our probability of success above the minimum threshold acceptable for our business, given the scope, quality, and time constraints that we specify.
Notes
^{1} The relevant links are http://www.maxwideman.com/musings/irontriangle.htm and http://www.maxwideman.com/musings/triangles.htm. In this article, we use the term resources to mean all costs, including burdened people costs.
^{2} Part of this has to do with the finite horizon of a project. If we ship a defective product, the company will suffer huge support costs after deployment. But these costs are rarely charged back to the project. This is rather unfortunate, because it shifts the burden away from the place that originated the problem. True project costs would include post-deployment support.
^{3} See http://www2.inf.ethz.ch/personal/gutc/lognormal/bioscience.pdf
^{4} Actually, the math is a little more complicated than that. For the standard normal distribution, the mean and the median are identical because of the symmetry of the distribution. For the lognormal distribution, they are not. So taking the 1 sigma point here is a little off, but the effect is small. We will ignore it in all that follows, as the effect is on the order of a few percent, and our overall model is not that precise anyway.
^{5} See, for example, http://www.costxpert.com/resource_center/sdtimes.html
^{6} Remembering also the detail we ignored earlier about the mean and median not being identical for the lognormal distribution. Here is where we can bury some of that approximation.
^{7} See Frederick P. Brooks, The Mythical Man-Month: Essays on Software Engineering, 2nd edition. Addison-Wesley, 1995.