In my previous couple of blog entries, I used triangular distributions for examples. For many who suffered through (or maybe enjoyed) their stat classes (what are the odds?), this might be a surprising choice: they were taught that the default choice is a Gaussian distribution. Those more attuned to modern business analytics, on the other hand, are likely to be familiar with triangular distributions. In this entry, I'll briefly explain the reasoning behind each.
First, as you hopefully recall, both are distributions associated with random variables. (Those who don't recall might benefit from the series of tutorials at the Khan Academy site.) Each is a non-negative function with integral (area under the curve) equal to one. (There are fancier mathematical definitions, but no matter.) Each describes the likelihood of each of a set of possible outcomes of some random variable. The difference in shape between Gaussian (aka normal) and triangular distributions reflects the nature and use of the random variables.
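For the skeptical, the "area one" property is easy to check numerically. Here is a small sketch of my own (not from any stat text) that integrates the standard normal density with the trapezoid rule:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of a Gaussian with mean mu and standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Trapezoidal integration over a wide interval approximates the total area;
# the tails beyond 8 standard deviations are negligible.
n, lo, hi = 10_000, -8.0, 8.0
h = (hi - lo) / n
area = sum(normal_pdf(lo + i * h) for i in range(n + 1)) * h
area -= h * (normal_pdf(lo) + normal_pdf(hi)) / 2  # trapezoid end correction
print(round(area, 6))  # very close to 1
```

The same check works for any density: non-negative everywhere, total area one.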
Briefly, normal distributions often arise as the histogram of a set of measurements. They have some central value (called the mean) and some dispersion (called the standard deviation) around the mean. Anyone who took a stat class studied these distributions. They show up in many contexts:
- The distribution resulting from tabulating the histogram of repeated but imprecise measurements of some quantity, and then dividing each entry by the total number of measurements, is often assumed to be normal. The mean of the distribution is the estimator of the actual measure.
Statisticians like the normal distribution for several reasons. First, it is easy to parameterize. If you know the mean, mu (μ), and the standard deviation, sigma (σ), you have completely characterized the distribution. For example, the likelihood of a measurement occurring is often characterized as being within some number of σ's of the mean. Figure 1 shows how this works.
The likelihood of a value falling in a range is given by the area under the curve. For example, the probability of a value of the normally distributed random variable falling within one standard deviation of the mean is 68.2%.
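You can verify the 68.2% figure yourself with the error function from the standard library (a quick sketch of my own):

```python
import math

def prob_within_k_sigma(k):
    """P(|X - mu| <= k * sigma) for a normally distributed X."""
    return math.erf(k / math.sqrt(2))

print(round(prob_within_k_sigma(1), 3))  # ~0.683, the 68.2% in the text
print(round(prob_within_k_sigma(2), 3))  # ~0.954 within two sigma
```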
Normal distributions have one really cool feature called the Central Limit Theorem
, which states that under remarkably general conditions, the sum of a set of random variables will be close to normal. Notice, in the previous blog entry, when we added two triangular random variables, the sum appeared smooth and in fact started to look normal.
All that said, I do have a pet peeve: normal distributions are overused. Most things in nature and economics are not normally distributed. For example, as documented in Wikipedia, these phenomena are nowhere near normal, but are closer to a Pareto distribution:
- The sizes of human settlements (few cities, many hamlets/villages)
- File size distribution of Internet traffic which uses the TCP protocol (many smaller files, few larger ones)
- Hard disk drive error rates
- The values of oil reserves in oil fields (a few large fields, many small fields)
- The length distribution of jobs assigned to supercomputers (a few large ones, many small ones)
- The standardized price returns on individual stocks
- Maximum one-day rainfalls (fitted by a cumulative Pareto distribution)
- Sizes of sand particles
- Sizes of meteorites
- Areas burnt in forest fires
- Severity of large casualty losses for certain lines of business such as general liability, commercial auto, and workers compensation.
Getting back to our topic, let's turn to triangular distributions. They are not used to describe a set of measured outcomes from an experiment. They are used to describe what we know or believe about some unknown random variable.
For example, the sales of a new product one year after delivery generally cannot be determined by measuring the sales of a bunch of new products. As pointed out by Douglas Hubbard, treating the future sales as a single fixed variable is unreasonable (although all too common). What is more reasonable is setting the low (L), high (H), and most likely (E) values of the future sales. As I wrote in an earlier entry, these are the values that specify a triangular distribution. That is, a triangular distribution is zero below the low value, L, and above the high value, H, and peaks at the expected value, E. The distribution is then described by a triangular curve scaled so that the total area is 1. Here is the distribution for L = 1, E = 6, and H = 7.
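For the curious, here is a small sketch (mine, not from any particular tool) of that triangular density as a function of L, E, and H:

```python
def triangular_pdf(x, L, E, H):
    """Density of a triangular distribution: zero outside [L, H], peak at E."""
    if x < L or x > H:
        return 0.0
    peak = 2.0 / (H - L)  # the height that makes the total area (base * height / 2) equal 1
    if x <= E:
        return peak * (x - L) / (E - L)  # rising edge
    return peak * (H - x) / (H - E)      # falling edge

# The example in the text: L = 1, E = 6, H = 7.
print(triangular_pdf(6, 1, 6, 7))  # peak height 2/(7-1) = 0.333...
```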
Some would argue there is a 'real' distribution of the future sales random variable and it is unlikely to be triangular. My response is that, for all practical purposes, it does not matter. The triangular distribution is a good-enough approximation to whatever the real distribution might be. By 'good enough' I mean it may be used to support decision making: it is a big improvement over using single values. Triangular distributions are also practical: they are easy to specify, and there is no assumption of symmetry. No wonder they are common in business analytics.
To wrap up, normal distributions are occasionally useful for describing the outcomes of measurements, while triangular distributions are useful for giving rough estimates of one's belief about the likelihood of outcomes based on the evidence at hand. More generally, normal distributions are useful in frequentist statistics and triangular distributions in Bayesian statistics. See this Wikipedia article for a discussion of the two kinds of statistics.
Much of what we do in development analytics is more Bayesian than frequentist. I hope to write more about that in the near future.
This entry is a follow-on to my most recent entry. The idea is that random variables are the way to describe the uncertain quantities that arise in managing development efforts. They are a natural extension of the fixed variables we all grew up with. In fact, a fixed variable is a random variable that has probability one of taking a given value and probability zero of taking any other value. In this entry, I explain how you can (with computer assistance) calculate with random variables.
Suppose you want to add two random variables, v1 and v2. This need might arise if you have two serialized tasks in a Gantt chart, each described by a distribution as explained in the previous entry, and you would like to know how long it would take to complete both of them, i.e. the sum of their durations.
How would you proceed? First, note the sum is another random variable, so what you need is the probability distribution of the sum. There is no convenient closed-form formula for that distribution, but there is an effective, commonly used numerical approach known as Monte Carlo simulation.
The idea behind Monte Carlo simulation is to use a pseudo-random number generator to take a sample value of v1 and a sample value of v2 and then add them. For more detail, follow this link.
Note that the values are selected according to the probability distributions of each of the variables; the more likely values are taken more often. Now save that sum and do the same thing many times, say 100,000 times, and store each of the sums. For each of the sums, you can compute its probability by looking at its frequency in the collection of saved sums (some sums are more frequent than others) and dividing by the number of samples (actually, you have to round the sums to get the counts). What you get is an approximation of the distribution of the sum.
Let's look at an example: v1 has a triangular distribution with L = 3, E = 4, H = 7, as shown in Figure 2, and v2 has a triangular distribution with L = 1, E = 6, H = 7, as seen in Figure 3. The distribution of the sum, found using the Monte Carlo simulator in Focal Point, is given in the figure below.
First, note the sum is not another triangular distribution. It is smoother. This is to be expected from the mathematics of probability. On the other hand, the distribution of the sum makes sense. For example, we would expect the most likely value of the sum to be 10, the sum of the two most likely values, but the simulation found 9.98. The discrepancy is due to chance and would diminish with more samples. Also note the probability is zero below 4, the sum of the lows, and above 14, the sum of the highs.
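For those who want to try this themselves, here is a minimal sketch of the procedure in Python. (Focal Point's simulator is not required; the standard library's `random.triangular` does the sampling.)

```python
import random

random.seed(42)
N = 100_000

# random.triangular(low, high, mode) draws samples according to the
# triangular density, so the more likely values are taken more often.
# v1: L=3, E=4, H=7; v2: L=1, E=6, H=7 (the example in the text).
sums = [random.triangular(3, 7, 4) + random.triangular(1, 7, 6)
        for _ in range(N)]

# Round each sum to bin it, then convert counts to probabilities.
counts = {}
for s in sums:
    counts[round(s)] = counts.get(round(s), 0) + 1
approx_dist = {b: c / N for b, c in sorted(counts.items())}

print(min(sums) >= 4, max(sums) <= 14)  # support is [3+1, 7+7], as the text notes
```

The `approx_dist` dictionary is the approximated distribution of the sum: its values add to 1, and its peak sits near 10, the sum of the two modes.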
For fun, here is the distribution of the product of the variables:
The reader can check whether this looks sensible. Note also that the product does not have a triangular distribution; the peak is much smoother.
So random variables can be used in place of fixed variables in any computation: they have all of the utility of fixed variables and also let us express uncertainty. They may seem foreign at first, but they are worth the trouble to learn. Like anything else, they become intuitive after a while.
One of the things that characterizes software or systems development is that the project manager routinely commits to deliver certain functionality on a given date at an agreed-upon level of quality for a given budget. It is the role of the project manager to make good on the commitment. The software and systems organization's leadership may count on the commitments being met in order to meet their business commitments, or there may be an explicit contract to deliver on time for a fixed budget. The measure of a good project manager is the ability to make and meet commitments.
In this blog entry, I will discuss the nature of that commitment and how it relates to project analytics. First of all, let's define 'commitment' in this context. Of course, I do not mean confinement to a mental institution; I mean, as suggested above, the promise to deliver certain content with acceptable quality on or before a certain date.
The first thing to notice is that the future is never certain, and so we are in the realm of probability and random variables, i.e. quantities described by probability distributions. Going forward, I will assume the reader is familiar with the concepts of random variables and their associated distributions. Soon, I will devote a blog entry to just that topic.
Meanwhile, the best way to describe the likelihood of meeting a commitment is the use of a random variable. Consider the distribution of the time it will take to meet the commitment. It might look something like this:
A similar distribution would apply to cost to complete.
Recall that the probability of the commitment being met is the area under the curve that falls before the target date:
The manager, in making the commitment, is essentially betting (perhaps his or her career) on meeting the commitment. According to this measurement, the odds are about 50-50. The key measurement, then, is the area of the distribution that lies before the target date, which in turn relies on the ability to calculate the probability distribution. I will discuss some techniques for doing that in a later entry.
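To make this concrete, here is a sketch of that area calculation using a triangular completion-time distribution. The schedule numbers are hypothetical, chosen so the target date sits at the most likely value:

```python
def triangular_cdf(x, L, E, H):
    """P(completion time <= x) for a triangular distribution on [L, H] peaking at E."""
    if x <= L:
        return 0.0
    if x >= H:
        return 1.0
    if x <= E:
        return (x - L) ** 2 / ((H - L) * (E - L))
    return 1.0 - (H - x) ** 2 / ((H - L) * (H - E))

# Hypothetical schedule: earliest day 100, most likely day 130, latest day 160,
# with the committed target also at day 130.
print(round(triangular_cdf(130, 100, 130, 160), 2))  # ~0.5: about a 50-50 bet
```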
Now consider, for example, "project health". What I believe is meant is the likelihood of meeting the commitment to deliver the project on time.
If it is highly probable the project will ship by the target date, the project is 'green'; otherwise it is 'yellow' or 'red', as in the following figure.
There are three responses to a yellow or red project. One can move the target date, move the distribution, or change the shape of the distribution, again a topic for a later blog.
When I started this experiment in blogging, I wrote that I am not a natural blogger. I am not the affable, chatty web presence who on a daily or weekly basis shares one's thoughts. I have learned since then what kind of blogger I am. My style is to write little essays that might take weeks to prepare, given the priorities of my day job. I have also found I do enjoy writing the blog as it gives me a chance to share some issue that is top of mind. So here goes:
A few weeks ago, my good colleague displayed a chart in one of his PowerPoint decks entitled something like "How to Understand Murray." The chart was an explanation of probability distributions. It was both flattering and a bit of a wakeup call. As Arthur mentioned and the readers of this blog know, much, if not all, of my writing assumes an understanding of probability and probability distributions (aka probability densities). My experience in discussions with folks from our industry is that most of them have vague memories from some stat class in college and so can generally follow the discussions, but most could use a refresher. I could simply refer readers to a good Wikipedia article
, but, instead, let me give a domain-specific example.
Let's go with a topic I wrote about in a previous entry: time to ship. For explanatory purposes, let's take a fictional example. Suppose you are starting a project expected to ship in 110 days. That said, we cannot be 100% certain of being ready to ship on exactly that day, no sooner, no later. In fact, it is very unlikely we will hit exactly that day. Maybe we will be ready the day before, or maybe the day before that. Since being ready on or any day before day 110 is success, we can sum up the probabilities of being ready on each of those days to get the probability we really care about. All that said, the probabilities for the individual days matter because we need them in order to get the sum. The set of probabilities for each of those days is the probability distribution (or density) that is our topic.
Let's look at a simple example.
In this example of a triangular distribution, there is 0% probability that the product will be ready before day 91, and we are 100% certain we will be ready by day 120. The days become steadily more probable up to a peak at day 110 and fall off after that. This graph shows the probability, day by day, of being ready on exactly that day. You might notice that the peak is less than 0.07 (actually 0.0666...). This makes sense: we are assuming the project may complete on any of 30 different days, so the densities should be in the neighborhood of 1/30 = 0.0333, with some above and some below.
These distributions are the basis for calculating the likelihood of outcomes. The principle is very simple: the probability of being ready within some range of dates is the sum of the probabilities of being ready on exactly one of those dates, i.e., we add up the density values for those days. As I explained above, if we want to compute the probability of being ready on or before day 110, we add up all of the densities for days 91 to 110 to get 0.7. By the same reasoning, the probability of being ready on some day before day 120 is the sum of all the densities, which comes to exactly 1.0, which was one of our going-in assumptions. In fact, the property that the sum of the densities over all possible outcomes equals 1 is a defining property of distributions. Those who want to try this out at home could use this spreadsheet. For example, can you find the day for which being ready on or before that day is an even bet?
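For those without the spreadsheet handy, here is one plausible discretization of the example (my own reconstruction; the exact day-by-day values in the spreadsheet may differ slightly):

```python
# Days 91..110 rise linearly to the peak; days 111..120 fall back to zero.
density = {}
for d in range(91, 111):
    density[d] = (d - 90) / 300.0      # peak of 20/300 = 0.0667 at day 110
for d in range(111, 121):
    density[d] = (120 - d) / 150.0

total = sum(density.values())
on_or_before_110 = sum(p for d, p in density.items() if d <= 110)
print(round(total, 6))             # 1.0, as required of a distribution
print(round(on_or_before_110, 2))  # 0.7, matching the text

# The "even bet" day: the first day where the cumulative probability reaches 0.5.
cum = 0.0
for d in sorted(density):
    cum += density[d]
    if cum >= 0.5:
        print(d)  # day 107
        break
```

So with this discretization, being ready on or before day 107 is an even bet.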
For most development efforts, the overall state of the program (some would say 'health') is characterized by the shape of the distribution. This shape changes every day; every action the team takes changes it. So one key goal of development analytics would be to track the shape of the distribution throughout the lifecycle, a daunting task. More on this (probably) in future postings.
I have the honor of giving one of the keynotes at the Conseg2011 conference
this February in Bangalore. I have chosen a large, perhaps overly ambitious topic: "The Economics of Quality". Here is my conference proceedings document
. My goal in preparing the paper and presentation is to make the case that quality is fundamentally an economic concern and to suggest an overall approach for reasoning about when software has sufficient quality to ship. Those who have read some of my earlier entries will see how my thinking on the topic differs from those who take a technical debt approach.
Anyhow, the brief proceedings paper is very high level, and there is considerable work ahead in filling in the details and validating the approach. This paper really is the beginning of a program that I believe, when carried out, will have great benefit for our industry. So, I would like to hear from anyone who has similar interests and perspectives. There must be some existing relevant research. Perhaps we can find enough like-minded folks to build a community exploring the topic.
It should not be a surprise that I have been following the BP oil spill with much interest. In fact, as I started typing this entry, I was watching the grilling of the BP CEO, Tony Hayward, by Congress. Rep. Stupak is focusing on BP's risk management.
Some of you have read my earlier posting on the BP decision process that led to the Deepwater Horizon blowout. So far, information uncovered since that posting is remarkably consistent with my earlier suppositions. In this entry I would like to step back a bit and discuss what broader lessons might be learned from the incident. While it is all too easy to fall into BP bashing, I would rather use this moment to reflect more deeply on risk taking and creating value. (BTW, some of you might know that my signature slogan is 'Take risks, add value'.)
In our industry we create value primarily through the efficient delivery of innovation. Delivering innovation, by definition, requires investing in efforts without full initial knowledge of the effort required and the value of the delivery. This incomplete information results in uncertainties in the cost, effort, and schedule of the projects and in the value of the delivered software and system, i.e. cost, schedule, and value risk.
Deciding to drill an oil well also entails investing in an effort with uncertain costs and value. In this case, the structure of the subsurface and the productivity of the well cannot be known with certainty before drilling. As I pointed out in an earlier blog, a good definition of risk is uncertainty in some quantifiable measure that matters to the business. So in both our industry and oil drilling, we deliberately assume risk to deliver value.
So, what can we learn from the BP incident? Briefly, one
creates value by genuinely managing risk. One creates the semblance of value
for a while by ignoring risk.
Assuming risk, that is, investing in uncertain projects, provides the opportunity for creating value. That value is actually realized by investing in activities that reduce the risk. The model that shows the relationship is described in this entry. So reducing risk has economic value, but reducing risk takes investment. In the end, the quality of risk management is measured with a return-on-investment calculation. This in turn requires a means to quantify and, in fact, monetize risk.
I wonder what risk management approach was followed by BP. A recent Wall Street Journal article suggested they used a risk map approach: building a diagram with one axis a score of the 'likelihood of the risk' and the other a score of the 'severity of a failure'. With this method, they would score the risk of a blowout as very low (based on past history) with a very high consequence, so such a risk needs to be 'mitigated'. (Some actually multiply the scores to get some absolute risk measure.) Their mitigation was the installation of a blowout preventer. They could then confidently report that they had executed their risk management plan. Note these scores are at best notionally quantified and not monetized.
Paraphrasing my good colleague Grady Booch (speaking of certain architecture frameworks), risk maps are the semblance of risk management. As pointed out by Douglas Hubbard in The Failure of Risk Management (and in an earlier rant in this blog), this sort of risk management is not only common, but dangerous: it is a common business failure mode that leads to bad outcomes. Hubbard also points out that useful risk management entails quantification and calculation using probability distributions and Monte Carlo analysis. I would add that since risk management in the end is about business outcomes, risks need to be monetized as well as quantified. I am willing to bet a good bottle of wine that BP did no such thing. Any takers? The common business failure mode here was over-reliance on the preventers, even though several studies show they are far from 'failsafe'.
Further, it appears BP assumed risk by consistently taking the cheaper, if riskier, design and procedure alternative, the one with greater uncertainty in the outcome, even when the cost of an undesired, if unlikely, outcome was possibly catastrophic. The laundry list of such decisions is long; some are outlined in Congressman Waxman's letter to Tony Hayward.
The CEOs of Shell and Exxon testified before Congress that their companies would have used different, more costly designs and followed more rigorous procedures. According to congressional and journalistic reports, this behavior is BP standard operating procedure. So BP assumed risk by drilling wells but did not invest in reducing that risk.
For quite a while they got away with the approach of assuming but not really reducing risk, and appeared to be creating value as reflected in stock value and dividends to the investors. BP management raised the stock price from around $40/share in 2003 to a peak of around $74/share prior to the Deepwater Horizon incident. At this writing the stock is trading at $32/share and the current dividend has been cancelled. Investors might rightly wonder if there is another latent disaster, and so discount the apparent future profitability by the likelihood of unknown liabilities. The total loss of stockholder value is over $100B, which is in the ballpark of the eventual liability of BP. So, whatever approach BP used to manage risk failed.
BTW, some may recognize this same pattern in the management of the financial firms that participated in the subprime mortgage market. In that case, they 'mitigated risk' by relying on the ratings agencies. Those who actually built monetized models of the risk realized there was a great opportunity to bet against the subprime mortgage lenders, and made huge fortunes (see, e.g., The Big Short: Inside the Doomsday Machine by Michael Lewis).
Readers of the blog will notice a recurrent theme in some of the postings: it is essential that we assume and manage risk. To repeat a favorite quote, "One cannot manage what one does not measure." The risk map and scoring methods, while common, are insufficient for the needs of our industry; they do not measure, nor really manage, risk. We as a discipline need to step up to quantifying, monetizing, and working off risk in order to succeed as drivers of innovation. We need to step up to the mathematical approach found in Douglas Hubbard's and Sam Savage's (see this posting) texts.
I came to this same realization probably a decade ago. I held off at first because I did not have a deep enough understanding of how to proceed, and I knew I would encounter great skepticism. I tested the waters in 2005 and posted my first paper on the subject in 2006. I indeed received a great deal of skepticism and resistance, but enough acceptance to go forward. I have learned some important lessons from all that. In my next blog, I will share my experiences of bringing more mathematical thinking to risk management for SSD.
Yesterday, I was at the Conference on System Engineering Research (CSER), held this year at Stevens Institute. I sat through a talk that stimulated my curmudgeonly tendencies. In the spirit of hopefully generating some controversy, I will not hold back.
The talk was about an expert-system-based engineering risk management system. Essentially, the authors got a set of experts together to identify categories of risks (people, delivery, product ...), risks within the categories, and a method for scoring the level of each risk and its consequence, and then summing the products of the levels and the consequences. The end result is the total amount of category risk. Looking at the output is supposed to give you insight into the overall program risk and the contributing risks.
My problem is that I cannot parse that last sentence. In fact, I do not understand terms like "program risk" and, say, "people risk". There may be a clash of cultures here; to many, those terms seem reasonable.
My argument starts here: one can ask 'What is my risk of going over budget?' or 'What is my risk of missing the delivery date?' These sorts of questions are answered using standard business analytics. See, for example, Mun's text on risk analysis, which defines risk as statistical uncertainty in a quantity that matters. For example, 'time to complete' is a quantity that does matter to a project. The uncertainty in making the date can be measured as the variance (or standard deviation) of the estimate of the time to complete. (Note, for the math-aware, time to complete is what statisticians call a continuous random variable.) So the question 'What is my schedule risk?' has an unambiguous, quantified answer. 'What is my people risk?' has no such answer. In fact, 'people risk' is not a concept defined in business analytics.
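To make the point concrete, here is a sketch of schedule risk as a single quantified number, using the closed-form variance of a triangular distribution (the estimate values below are hypothetical):

```python
import math

def triangular_std(L, E, H):
    """Standard deviation of a triangular estimate: one quantified schedule risk."""
    variance = (L * L + E * E + H * H - L * E - L * H - E * H) / 18.0
    return math.sqrt(variance)

# Hypothetical time-to-complete estimate: low 90, most likely 110, high 120 days.
print(round(triangular_std(90, 110, 120), 2))  # ~6.24 days of schedule risk
```

No analogous formula exists for 'people risk', which is exactly the point.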
Of course, it does make sense to ask what contributes to the schedule risk. One might fear that an inability to staff the project contributes to the schedule risk. Fair enough. In my mind, that does not make staffing a 'risk', but rather a schedule risk factor.
I am not sure why I am so adamant about this, but I am. It could be that I believe that the less precise use and measurement of risk is holding our industry back.
Anyone want to comment on, or defend, the so-called risk management practice underlying the talk I found so annoying?
First, some personal disclosure: in the late 1980s, I worked for a while at Shell Research, developing seismic modeling and data imaging algorithms. (See .) While there I received training in oil exploration. I remain awed by the passion, expertise, daring, and discipline of the engineers, scientists, technicians, and skilled laborers who take responsibility for providing the hydrocarbons we completely rely on.
Oil exploration is remarkably costly and risky. Even then, in the late 1980s, it was not uncommon to spend $1B on an exploration well, hoping to find oil based on the seismic data, only to find it dry. At Shell, I was on a team that developed a (for the time) highly compute-intensive algorithm for imaging seismic data captured in complex subsurfaces. They literally bought us a Cray because running the algorithm might make a marginal difference in the success rate of exploration drilling.
Hence, I am not an oil industry basher, far from it. So I have been watching the BP Deepwater Horizon gulf catastrophe with great interest. In this entry, I will share what I have gleaned from various news sources. (I have found the Wall Street Journal coverage very credible.) So here is my take.
Recall that the blowout occurred shortly after capping an exploration well (a well drilled solely to confirm the presence of an oil reservoir). The depth of the well, reported at 18,000 ft, is no big deal. The depth of the water, 5,000 ft, is far from the record of around 8,000 ft. So the well itself was routine for the industry. So what happened?
BP had drilled many of these wells. In fact, ironically, the blowout occurred while BP executives were celebrating their safety record. However, over the last few years BP has become profitable by building a very cost-conscious culture. Such a culture is likely to cut corners, repeatedly taking small risks in business operations. Such behavior may seem rational if you believe the total liability is bounded; there is reason to believe BP's liability is 'capped' at $75M. This culture seemed to be at work on the oil rig.
I bet that BP managers routinely made the same decisions for years with no adverse outcomes. They were probably rewarded for this behavior. Such a culture makes such disasters inevitable over time. A great case study of such cultures is found in Diane Vaughan's The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. She studies the NASA culture that led to the decision to launch the shuttle that exploded on launch, killing 'the first teacher in space' while tens of thousands of school children watched on television. The managers overruled the engineers who advised them that the temperature was out of spec for a launch. As she explains, the managers had gotten away with taking similar risks in the past and decided to bow to political pressures and approve the launch.
So what they were thinking is something like, “These risks
are no big deal and the savings matter.”
A key moral is that over time the unlikely becomes inevitable. Further, experience and past data lead to exactly the wrong behavior: they reward the risk taking, not the caution. Many have made exactly the same point about the behavior of the financial firms during the financial crisis.
Internalizing this moral and acting accordingly is key to our industry. Increasingly, we will be building life-critical, economy-critical systems that are very complex, will operate over long periods, and whose failure could be catastrophic. There is no turning away from this inevitability. So, we all need to understand that cost savings must be balanced against a clear understanding of the overall risks of failure, their consequences, and the real return on investment in failure avoidance. This of course takes some math. In particular, thinking about averages is not useful. That is the topic of my next entry.
As I mentioned in the first posting, I am still getting the hang of blogging. I guess one use of blogs is to share what's on my mind while staying in the neighborhood of the topic of analytics. So, I have been putting a lot of thought into Toyota's dilemma about how to deal with the reports of dangerous acceleration in their cars. The recent reports of Prius incidents (see this article in the New York Times) confirmed some of my earlier suspicions, hence this blog.
First, I need to come clean: all I know is from news accounts. I have had no contact with Toyota or any IBMers working with Toyota. Further, I need to say that the opinions here, in my view not controversial, are my own and do not reflect any IBM position.
So what do we know:
- The two models with reported acceleration problems, the Camry and the Prius, have bus interfaces from the pedal to the electronic control unit (ECU) that manages the acceleration (see this Electronic Design News article).
- There are confirmed cases where the floor mat is not the cause (see the NYT article in the previous link).
- Toyota has millions of Camrys on the road, and the Prius has been the largest-selling hybrid.
- Toyota has stated they cannot reproduce the problem.
So, here is how it seems to me. The models with the problem are those that have embedded software, and the incidence of the reports is consistent with the failure rate of software that meets usual software quality standards. It is well known that the time and cost of testing go up dramatically as the defect density gets low. Further, the defect may involve interactions between the ECUs. From an analytic perspective, the combined state space of the integrated ECUs is very large, and as defects get removed, the bug states become sparse, so the likelihood of a set of interactions driving the integrated ECUs into a bug state is very low and unlikely to be found in conventional testing. So there may well be some latent defect in the embedded code that they have not found after whatever amount of testing they find affordable, particularly given the pressures of getting to market. In conventional software, when the cost of finding the next defect is too high, the code is released in beta so that a crowd of volunteers can further exercise it, putting in the hours to find the next set of defects. One cannot beta test an automobile.
Now, say there is one chance in a million miles of driving of the latent defect manifesting. It may be impossible to find with standard testing and will inevitably happen every so often to drivers. This is the standard insight that with large volumes, unlikely events become inevitable. So with Toyota's large sales, they may be the victim of their own success.
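The arithmetic behind "the unlikely becomes inevitable" is a one-liner (the per-mile probability here is hypothetical, as in the text):

```python
# Suppose a latent defect manifests with probability p per mile driven.
p = 1e-6

def prob_at_least_one(p, miles):
    """Chance of at least one incident over the given total mileage."""
    return 1 - (1 - p) ** miles

print(round(prob_at_least_one(p, 1_000_000), 2))   # ~0.63 over a million miles
print(prob_at_least_one(p, 10_000_000) > 0.9999)   # near certainty across a fleet
```

With millions of cars each driving thousands of miles a year, a one-in-a-million defect is not a question of whether, but how often.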
The avionics community has developed a discipline around safety-critical software. There are design and model-testing methods to validate that the embedded software is good enough to stake people's lives on the code running correctly. (There is a good article in the latest Communications of the ACM on model checking for avionics.) It seems Toyota, and the entire auto industry, needs to adopt these safety-critical disciplines going forward. The cost of these practices is overshadowed by the costs of the highly publicized incidents, the suits, and other liability.
In a previous entry
, I discussed triangular distributions. I pointed out that they arise from the practices in Hubbard's How to Measure Anything. When making an estimate, one asks for the high, low, and expected values of a quantity. These are used to define a probability distribution by interpreting the results to mean that there is zero probability the value is less than the low or greater than the high, and that the mode of the distribution is at the expected value. You get a distribution like the figure below.
A reader, Blaine Bateman, president of EAF LLC, found the elicitation question too constraining. After all, saying there is no probability of a value being below the low may call for an unreasonable level of certainty. He prefers asking the question, "Give me low and high values in which you are 90% confident."
This raises an interesting (at least to me) mathematical question: can you specify the triangle given the mode and the interval containing 90% of the distribution? That is, can you specify the triangle given points c and d rather than a and b? Blaine has a couple of solutions, each based on a choice of additional assumptions needed to get a unique solution.
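As a sketch of one possible answer, assume (as one such additional assumption) that 5% of the probability lies below c and 5% above d. The endpoints a and b can then be recovered numerically from the triangular CDF; the elicited numbers below are purely illustrative:

```python
import math

def tri_cdf(x, a, m, b):
    """CDF of the triangular distribution with endpoints a, b and mode m."""
    if x <= m:
        return (x - a) ** 2 / ((b - a) * (m - a))
    return 1 - (b - x) ** 2 / ((b - a) * (b - m))

def triangle_from_quantiles(m, c, d, lo_p=0.05, hi_p=0.05):
    """Find (a, b) so that P(X < c) = lo_p and P(X > d) = hi_p,
    given the mode m, by simple fixed-point iteration on the CDF formulas."""
    a, b = c, d  # initial guess: start the endpoints at the quantiles
    for _ in range(200):
        a = c - math.sqrt(lo_p * (b - a) * (m - a))
        b = d + math.sqrt(hi_p * (b - a) * (b - m))
    return a, b

# Illustrative elicitation: mode 10, 90% confident the value lies in [8, 14]
a, b = triangle_from_quantiles(10, 8, 14)
print(f"a = {a:.3f}, b = {b:.3f}")
```

Other extra assumptions (for example, splitting the 10% of tail probability unevenly between the two sides) give different, equally defensible triangles; pinning down that choice is exactly what Blaine's solutions do.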
One of the common criticisms of estimation methods is that the calculation is no better than the assumptions: garbage in, garbage out (affectionately known as GIGO). That is, if you make poor or dishonest assumptions, then you will get misleading forecasts. It is especially egregious when someone games the system by intentionally feeding in assumptions that lead to false forecasts in order to get a desired business decision.
However, estimation is an essential part of any disciplined
funding decision process (such as program portfolio management). The funding decision
relies on estimates of the costs and benefits. But for the reasons just described, estimation is suspect.
So, what to do? I suggest the answer is not to abandon estimation; the answer is to not input garbage, or if you do, to detect it as soon as possible to minimize the damage.
First note that future costs and benefits are uncertain, so any serious approach to the GIGO problem is to treat the assumptions as random variables with probability distributions and work from there. Generally, this allows one to use the limited information at hand to enter the assumptions and calculate the resulting forecasts.
Douglas Hubbard, in How to Measure Anything, gives us one way to proceed. Briefly, when an uncertain value is needed, ask the subject matter expert (SME) to give not one but three values: low, high, and expected. The three values may be used to specify random variables with triangular distributions [ref].
In this case, the greater the difference between the high and low values, the wider the triangular distribution of the estimate, reflecting the uncertainty of the SME who is honestly making the assumptions.
One can use these random variables as values in the estimation algorithm using Monte Carlo simulation: repeatedly replace the single values with samples from the triangular distributions and assemble the distribution of the estimated value. Note the estimate is again only as good as the assumptions; however, we can assess our faith in the estimate by the width of the 10%-90% range of its distribution.
For example, one might estimate the total time for completing a project by entering, for each task, the least time, the most time, and the most likely time. Then one could apply Monte Carlo simulation, or more elementary methods, to roll up the estimates and compute the distribution of the time to complete.
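As a minimal sketch of that rollup, using Python's standard-library triangular sampler and made-up task estimates:

```python
import random

random.seed(42)  # reproducible run

# Hypothetical per-task estimates: (least, most likely, most) time in days.
tasks = [(2, 3, 6), (5, 8, 15), (1, 2, 4), (3, 5, 9)]

# Monte Carlo: sample each task's triangular distribution, sum per trial,
# then read percentiles off the sorted totals.
N = 100_000
totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
    for _ in range(N)
)
p10, p50, p90 = (totals[int(q * N)] for q in (0.10, 0.50, 0.90))
print(f"10%: {p10:.1f}  median: {p50:.1f}  90%: {p90:.1f} days")
```

The width of the 10%-90% spread is the measure of faith in the estimate described above: wide task triangles roll up into a wide completion-time distribution.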
Hubbard goes further, suggesting that as actuals for the assumptions become available, one reviews whether they fall within the 10%-90% range of the initial distributions. If they do, fine. If they don't, questions are asked about the underlying reasoning and beliefs. Over time the organization becomes more capable and accountable at making good assumptions.
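That review amounts to a simple calibration check; here is a sketch with made-up numbers:

```python
# Hypothetical review data: each record is the SME's 10% and 90% bounds
# for an assumption, plus the actual value eventually observed.
records = [
    (4, 9, 7), (10, 20, 22), (2, 5, 3), (6, 14, 8), (3, 7, 5),
]

# Count how many actuals landed inside their 10%-90% band.
hits = sum(1 for lo, hi, actual in records if lo <= actual <= hi)
rate = hits / len(records)
print(f"{hits}/{len(records)} actuals fell in the 10%-90% band ({rate:.0%})")
```

A well-calibrated estimator should land inside the band roughly 80% of the time; here 4 of 5 do, and the one miss, (10, 20, 22), is where the questions get asked.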
Further, we can deal with the garbage-in, garbage-out problem by using actual data whenever possible. There are at least two techniques.
In the first, as actuals for the assumptions become available, they can be used to replace the distributions. For example, if there are month-by-month sales projections captured as triangular distributions to forecast sales volumes, the distributions are replaced by the actual sales numbers as they arrive. One should also update the remaining triangular distributions to reflect the actual sales trends. The resulting estimate will usually have a narrower distribution.
A second technique is Bayesian trend analysis. In this case we use actuals as evidence for the estimate. For example, if a project were on track, then we can expect that certain measures, such as burn-down rate and test coverage, reflect that. If a project were going to ship on time, the number of unimplemented requirements would be trending to zero; similarly, the code coverage measure would be trending toward its target. So these measures are evidence of a healthy project. Using Bayesian trend analysis, we can turn the reasoning around and update the initial (prior) estimate of the time for completion, using the actuals as evidence, to obtain an improved estimate. The result is an improved probability distribution of the time to complete the project. As more actuals become available, the distribution becomes narrower, increasing the certainty of the forecast.
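As a toy sketch of that update (all numbers are made up): take a discrete prior over the completion month and one piece of evidence, an "on-track" burn-down reading, with assumed likelihoods:

```python
# Hypothetical prior over the completion month (months 10-12).
prior = {10: 0.2, 11: 0.5, 12: 0.3}

# Assumed likelihoods: P(observe on-track burn-down | finish in month m).
# An early finish makes an on-track reading likely; a late one does not.
likelihood = {10: 0.9, 11: 0.6, 12: 0.2}

# Bayes' rule: posterior is proportional to prior * likelihood, normalized.
unnorm = {m: prior[m] * likelihood[m] for m in prior}
total = sum(unnorm.values())
posterior = {m: p / total for m, p in unnorm.items()}

for m in sorted(posterior):
    print(f"month {m}: prior {prior[m]:.2f} -> posterior {posterior[m]:.3f}")
```

One on-track observation shifts probability toward the earlier months; each further actual repeats the update, narrowing the distribution as described above.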
This way one can detect early if the system is being gamed and, at the same time, use the actuals to estimate the likelihood of an on-time completion.
So, generally, one can use actuals not only to improve the estimation process, as Hubbard suggests, but also to apply Bayesian techniques to improve the estimates of the program variables.
I am not a natural blogger, but I do like to share thoughts with a broad community of folks with shared interests.
My long-term passion is how to enable organizations to develop and deliver software and systems more effectively. I am thoroughly convinced that well-organized, well-governed organizations not only deliver value to their businesses, but also enhance the lives of their staff. In such organizations, people get to work on cool things, innovate, work well with colleagues, and build a legacy by being part of making things of value. I am also a mathematician by training. I am especially energized by the opportunity to apply mathematical reasoning to the improvement of software and system organizations. My current assignment is to lead the Business Analytics and Optimization (BAO) strategy for Rational.
So, with this blog, from time to time, I will share my thoughts on BAO for software and system organizations. I hope this blog will be a catalyst for building a community. I especially look forward to comments and conversations.
So stay tuned.
In an earlier blog entry, I mentioned my article Calculating and Improving the ROI of Software and System Programs.
I am pleased to announce that it has been published in the September 2011 issue of the Communications of the ACM.
An IBM colleague, Jim Densmore, has the phrase 'Embrace Uncertainty' below his e-mail signature. I know Jim well enough to believe I know what he means. In fact, I used to give a presentation with that title. The idea is that uncertainty is an aspect of the world as it is. If you constantly deny uncertainty and expect to be able to predict the future as if it were certain, all sorts of disappointments ensue. If, instead, you understand and accept the uncertainty, you are more in harmony with the world. Uncertainty is not your enemy; it is just a part of life.
What brought this to mind is that one of the world's leading poker players, Annie Duke, used the phrase in the latest episode of Radiolab
, one of my favorite NPR programs. She explains how embracing uncertainty is the key to being a great poker player, not gimmicks like reading 'tells'. Follow the link and check it out.
Of course, readers of this blog know that I believe this applies to our domain. Developing software and systems is fraught with uncertainty. One should embrace it and learn to manage it by applying the right analytics.
In a conversation with a development lab productivity team, I was reminded that the first challenge software and system organizations face when starting an improvement program is 'what to measure'. In particular, some organizations start with what is easy to measure, with the understandable thought "we need to start somewhere". I have found this approach tends not to get traction. Over the years I have settled on some principles that seem to apply:
- Be careful that the measurement drives the intended behavior. Organizations respond to how they are measured. This is useful in that measurement provides a way to change behavior. On the other hand, it is dangerous in that measurements can drive undesired behavior. For example, insisting that every project exactly meet the initial estimate of schedule, cost, and content will lead to risk aversion and drive innovation out of the organization.
- You can't manage what you don't measure (taken from Lord Kelvin) - This is one of those universal truths that applies to every field.
- Don't measure what you do not intend to manage - Measuring without a way to respond to the measurement adds overhead and lowers productivity.
- For development organizations, one size does not fit all - The measures need to reflect the mixture of work the team is doing. For example, measurements for a team doing level-3 maintenance work may not apply to a team building a new platform.
- The Einstein test - Make the measures as simple as possible, but no simpler.
I suspect there are some other principles that should be added, but these are a good start.
In a later blog, I will discuss levels of measurement.