Folks who have heard me present will recognize the following
discussion as a variation of an example I have used to explain the
importance of variance in software and system estimates. Imagine this time that you
are a development organization manager given the following artificial
opportunity. You can agree to the following deal: have your teams, at your own
expense, develop some applications, each meeting a given set of requirements. The
client really wants the applications, will accept them if they meet the requirements, and is
perfectly willing to be consulted throughout the projects. Here is the catch: if
you deliver a project on time in 12 months, you will receive $1M for that
application. If you are a day late, you get nothing. You have to decide whether to take the deal.
Let's suppose you take the projects to your estimators and
they tell you that the estimated time to complete is 11 months and the estimated
cost to complete is $750K for each of the projects. So you stand to make an
estimated $250K per project. You staff up, take on three
projects, and look forward to your bonus. Was this a good deal?
Those who have read The
Flaw of Averages by Sam Savage already know the answer.
Those who haven’t read the book should. It nicely captures the sort
of statistical reasoning that underlies IBM Rational’s approach to business
analytics and optimization (found in the RTC agile planner and the ROI
calculations in Focal Point). Some key rules:
Uncertain quantities are captured by curves
called distributions (e.g., the bell-shaped curve of the normal distribution).
Most distributions for uncertain quantities are
not normal, bell-shaped curves; i.e., normal distributions are abnormal.
Calculating with averages can yield the
wrong answer, with business-critical consequences. Rather, one should calculate with
the distributions. This is done with Monte Carlo methods.
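To make the third rule concrete, here is a small Python sketch (the normal distribution and its parameters are purely illustrative) showing that plugging the average into a nonlinear payoff is not the same as averaging the payoff:

```python
import random

random.seed(1)

# Payoff is nonlinear: $1M if the project finishes by month 12, else nothing.
def revenue(months_to_complete):
    return 1_000_000 if months_to_complete <= 12 else 0

# Suppose time to complete is normally distributed: mean 11 months, sd 1.5.
samples = [random.gauss(11, 1.5) for _ in range(100_000)]

naive = revenue(sum(samples) / len(samples))  # plug in the average time
monte_carlo = sum(revenue(t) for t in samples) / len(samples)

print(f"revenue(average time): ${naive:,.0f}")        # $1,000,000
print(f"average revenue:       ${monte_carlo:,.0f}")  # roughly $750,000
```

The average-based calculation says you always get paid; the distribution says you get paid only about three-quarters of the time.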
Back to the example: the time to complete is an uncertain
quantity and so must be described by a distribution. Often, the estimate
returned by the estimator is the mean of that distribution. The distribution
may be pretty wide and so may look like Figure 1 of the attached document. (I
have had bad luck trying to embed figures in the blog, so I have put the figures in this attachment.) Note that 40% of the distribution lies beyond 12 months.
Assuming the $750K cost-to-complete estimate is dead on,
let's apply some simple high school probability to get the distribution of
profit (see Figure 2):
· The chance of succeeding at all three projects and
getting $3M in revenue is (0.6)³ = 0.216.
· The chance of succeeding at exactly two projects
and getting $2M in revenue is 3(0.6)²(0.4) = 0.432.
· The chance of succeeding at exactly one project and
getting $1M in revenue is 3(0.6)(0.4)² = 0.288.
· The chance of failing at all three projects, yielding
no revenue, is (0.4)³ = 0.064.
The weighted average of the distribution of revenues is $1.8M.
So the likely outcome of your 3 × $750K = $2.25M expense is an expected loss of $450K.
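For those who want to check the arithmetic, the binomial calculation takes a few lines of Python:

```python
from math import comb

p_success = 0.6  # 60% of the time-to-complete distribution lies before month 12
n = 3            # three independent projects

# P(exactly k successes) and the revenue-weighted average
expected_revenue = 0.0
for k in range(n + 1):
    prob = comb(n, k) * p_success**k * (1 - p_success)**(n - k)
    print(f"{k} successes: probability {prob:.3f}, revenue ${k}M")
    expected_revenue += prob * k  # revenue in $M

print(f"expected revenue: ${expected_revenue:.2f}M")        # $1.80M
print(f"expected profit:  ${expected_revenue - 2.25:.2f}M")  # -$0.45M
```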
But wait, it is worse. The distribution is probably not
normal. Programs are more likely to be late than early, so the distribution is skewed to the
right. In such a skewed distribution, the estimate can sit well below the 50% point. So,
as shown in Figure 3, it is possible to have an estimate of 11 months while the
likelihood of failure is 50%. The revenue distribution is given in Figure 4. In
this case, the weighted average of the distribution of revenues is $1.5M, and the expected loss grows to $750K.
But wait, it is still worse. The cost to complete is also
uncertain. To keep things as simple as possible, let's suppose the cost to
complete for each of the projects is described by three values: the best case is
$700K, the likely case is $750K, and the worst case is $1M. Computing the
expected profit in this case requires using these values as parameters for a
triangular distribution (see Figure 5) and then applying Monte Carlo methods to
get the distribution that describes the profit. The result
is shown in Figure 6. Briefly, in this case:
· The most likely outcome is a loss of $945K.
· There is a 90% certainty of losing at least
· There is a 10% chance of losing more than $1.1M.
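A Monte Carlo calculation along these lines can be sketched in a few lines of Python. This is not the exact model behind the figures, just an illustration of the technique (a 50% chance of being late per project, triangular costs), but the mean loss it produces lands in the same ballpark:

```python
import random

random.seed(7)

N = 100_000
profits = []
for _ in range(N):
    profit = 0.0
    for _ in range(3):  # three independent projects
        # cost to complete: triangular(low, high, mode)
        cost = random.triangular(700_000, 1_000_000, 750_000)
        on_time = random.random() < 0.5  # skewed schedule: 50% chance of being late
        revenue = 1_000_000 if on_time else 0
        profit += revenue - cost
    profits.append(profit)

profits.sort()
print(f"mean profit:     ${sum(profits)/N:,.0f}")
print(f"10th percentile: ${profits[N//10]:,.0f}")
print(f"90th percentile: ${profits[9*N//10]:,.0f}")
```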
So taking this deal is at best career limiting!
Notice that by ignoring the rules, one is tempted to make a bad
deal. Applying each of the rules with more discipline shows just how bad the deal
is. The moral of all this is that making business decisions based on
calculations of averages can lead to disastrous outcomes.
This moral needs to be taken to heart by our industry. Far
too often, managers faced with funding projects or making business
commitments insist, “Just give me the number.” What they need is a distribution;
the number they are given is likely to be an average. Decisions based on that
number will likely go sour. No wonder software and system business outcomes
rarely delight their stakeholders. The good news is that there are robust,
proven techniques to avoid the flaw of averages.
I know. It has been a long time since my last posting. Over the last few months, nothing much happened that prompted an entry. I was thinking of writing something sort of philosophical on the nature of estimates, but never got to it. Then came the tsunami and the associated nuclear reactor failures at the Fukushima power plant. Suddenly, the topic became more urgent. It is relevant to this blog because our domain includes the engineering and economics of safety-critical systems. Presumably the nuclear reactor industry uses state-of-the-art methods. I have been exploring what is going on there, and while I am far from an expert, I have found some things worth sharing in a blog.
We have been told that a reactor failure is a one-in-over-100,000-years event. Sounds reassuring. Yet, in my lifetime, there have been three that I know of: Three Mile Island, Chernobyl, and now Fukushima. Discounting Chernobyl, which apparently was greatly under-engineered, something must be wrong for there to be two meltdowns of what has been estimated to be a one-in-over-100,000-years event. Apparently, there have also been many near misses, e.g. the loss of coolant at the Browns Ferry plant, something one would not expect from such safe systems. This raises some questions. What does a 'one in N years event' mean? Does it mean that we should not expect the event until N years have passed, or that we can be certain one will occur within N years? More importantly, if there are K systems, each having a one-in-N-years safety rating, what is the rating of the population? As I have pointed out in previous blogs, we do not need estimates; we need probability distributions to get any practical understanding.
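On the K-systems question, under the (questionable) assumption that failures are independent and that a '1 in N years' rating means an annual failure probability of 1/N, the arithmetic is straightforward. The reactor count and horizon below are rough, illustrative numbers:

```python
def p_at_least_one_failure(n_years_rating, k_systems, horizon_years):
    """P(at least one failure among k independent systems over the horizon),
    treating '1 in N years' as an annual failure probability of 1/N."""
    p_annual = 1.0 / n_years_rating
    p_none = (1.0 - p_annual) ** (k_systems * horizon_years)
    return 1.0 - p_none

# Roughly 440 reactors worldwide, each rated 1 in 100,000 years, over 40 years:
print(f"{p_at_least_one_failure(100_000, 440, 40):.1%}")  # about 16%
```

A population of individually very safe systems is not nearly so safe in aggregate, and this still assumes independence and a stationary process.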
Here are some of the sources I found. This New York Times article helps explain what is going on. A few points in the article caught my attention. First, they carried out 'deterministic' risk analysis (see this NRC page) because probabilistic methods are "too hard." A good summary of the difficulty of the problem and the history of how it is addressed is found in Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis by M. Granger Morgan and Max Henrion. Briefly, there is little to go on to estimate the likelihood of individual events and their dependencies in event chains, so the distribution of estimated time to failure must have huge variance. This article by M. V. Ramana summarizes and criticizes the current practice, including probabilistic risk assessment (PRA). The key idea is that failure results from a sequence of component failures. Each component is reliable, and so the probability of a system failure is the joint probability of the component failures, which is very low. This assumes that the component failures are independent events. However, as parts of the system, the joint probabilities are hard to estimate. For example, one component may fail, which results in a second component running out of spec, which might result in a number of other component failures. Getting the joint probabilities right entails a very faithful system model, data collected from thousands of simulations with varying inputs, and Monte Carlo methods to take into account the variability of the components. The output of such a simulation could also be used to improve the system design.
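A toy simulation (all numbers invented) shows how badly the independence assumption can mislead when a common cause stresses several components at once:

```python
import random

random.seed(3)
N = 1_000_000
p_component = 0.01  # each of 3 redundant components fails with probability 1%

# Independence assumption: the system fails only if all three fail at once
p_independent = p_component ** 3  # one in a million

# Common-cause model: a rare shared stress (e.g. flooding) raises every
# component's failure probability to 50% at the same time.
p_stress = 0.001
failures = 0
for _ in range(N):
    p = 0.5 if random.random() < p_stress else p_component
    if all(random.random() < p for _ in range(3)):
        failures += 1

print(f"independent model:  {p_independent:.2e}")
print(f"common-cause model: {failures / N:.2e}")  # roughly 100x larger
```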
I want to focus on another PRA challenge: estimating the likelihood of the devastating single-cause event, such as the earthquake and the tsunami. Clearly some sort of data are needed for the estimate, but what sort of data? As pointed out in the New York Times article, there was a deep historical data search of the size and frequency of the earthquakes in the relevant geographic region. That led to the conclusion that planning for 18 feet of water was sufficient. Recall that the reactor was inundated by 40 feet of water. So past performance was not predictive. In retrospect, that is not surprising, given that tectonic plate movements are hardly a stationary process. An alternate approach is to use modern geologic models and plate measurements. Then one could and should run simulations to get a distribution of the flood depths. One could argue that this approach is also suspect, since it depends on the quality of the models, which introduces a subjective element. However, using historical data is also based on an assumption that earthquake generation is a stationary process, a very dubious model, and its adoption is equally subjective. To be fair, presumably one could not run the needed simulations in the 1960's, so the frequency model may have been the best available. It can be argued that earthquake prediction is notoriously difficult, especially pinpointing when an earthquake will occur. However, using Monte Carlo methods and simulations, it seems reasonable that one can create a probability distribution of the time to an earthquake above a certain size and use this to estimate the likelihood of the event over the lifespan of the plant.
The point of this discussion is that frequency-model data is no more 'objective' than the data used to build and apply models. Both involve subjective assumptions about the validity of the model. Note that Bayesian data analysis methods can be used to validate the various models, so we can assess their usefulness in the estimation process.
Finally, these safety estimates are used to set policy and, in particular, to make economic decisions about nuclear energy. The cost of a failure is huge. For example, one estimate of the cost of the Fukushima failure is $184 billion. The proponents of nuclear energy argue that the plants make economic sense, assuming they are safe enough, and that the new designs are much safer. Maybe so. But knowing they are safe enough will take much better analytics than we have seen to date.
First, I am pleased that many saw the humor in the April Fools posting. That said, I wonder whether there will ever be quantum project management. Also, I fear this blog lacks humor. I will do what I can, but there is only so much that can be done to spice up the topic of analytics for software and system organizations. So, back to the serious stuff.
But first a joke that I believe that dates back to vaudeville: Onstage, there is a streetlight. Under the streetlight, there is a man crawling around on hands and knees. A policeman walks up and asks what he is doing. The man says he is looking for his keys. The policeman asks if he is sure he lost them here. The man answers, "No, in fact I lost them down the street." The policeman asks why is he looking under the light. The man answers, "The light is brighter here."
OK, not so funny. So what's the point? A while back, I was discussing a client's measurement program with a colleague (who will remain nameless and I hope is reading the blog). I pointed out it would not serve any purpose. My colleague answered, "Well, at least they are measuring something." I retorted, "First, you need to figure out what you need to measure, then figure out how to do the analysis and get the data." We left it at that. More generally, software and system organizations often measure what is easy, not what they need. They look where the light is brightest. We still have the question of how to specify the needed measures, analytics, and data collection program.
In an earlier entry, I proposed some measurement principles. While these principles are sound for assessing a measurement and analytics program, they do not provide operational guidance for defining the set of measures, associated analytics, and data. What is also needed is the analytics version of a requirements analysis. Last Friday two colleagues (named Clay Williams and Peri Tarr, who I believe do read the blog) introduced me to the Goal Question Metric (GQM) method. This method has been extended in various ways, such as GQM+Strategies.
I have seen the method applied. It looks much like functional decomposition, and so it is a requirements analysis technique for analytics solutions. I think it should be extended to include identification of the data sources. So we would have GQMAD (not kidding). Here is my spin on the main idea:
Goals - what the organization is trying to achieve.
Questions - how would one know, quantitatively, that the goals are being met?
Measures - the quantities that provide answers to the questions.
Analytics - the calculations that realize the measures.
Data - the inputs that feed the analytics.
Taking such an approach is a far cry from looking where the light is brightest. Note that after building out the GQMAD requirements, one still needs to design how the data are collected and staged, how the analysis is executed, and how the measures are displayed in order to answer the questions. So design and development of the analytics solution remains after the GQMAD process is carried out.
For my waterfall-phobic friends, I share the concern. Building an analytics solution should be more iterative than described above. Probably something like the Unified Process can be applied, using GQMAD as a good requirements practice.
Anyone out there with GQM experience they would like to share?
In my last blog, I laid out a vision of how a project lead and her stakeholders might use predictive analytics to drive better project outcomes. As I mentioned in that entry, IBM Rational is working on such a tool. A demonstration of this tool is found here: Agile Development Analytics Demo. The video was created by Peri Tarr, the lead architect on the project.
Some of you might notice that the terminology and development process described in the demo are at odds with your understanding of Agile. We do understand that, and we are currently working on making a robust tool that accommodates a wide range of processes, from what some might call 'pure Agile' to the various hybrids we are discovering in the marketplace.
In the next blog, I will explain more on how the tool works.
I would like to build on the theme of reasoning about what to measure. The goal of business analytics is to track what matters to the organization (what it is you are trying to manage) and respond to the measures in some way to gain improvement. In manufacturing and some service delivery domains, the science of measuring outcomes is statistical process control (SPC); SPC lies at the heart of the Six Sigma movement. Even so, you will not need a Six Sigma belt to participate in this discussion. While there is reason to believe that not all of the Six Sigma practices apply all that well to our domain, the idea of tracking outcomes, applying statistical analysis to detect change, and applying some sort of controls to effect the change applies in all business domains, including software and system development and delivery.
Briefly then, the outcomes are the operational goals and the controls are the actions you take to achieve the outcomes. So naturally we need two kinds of measures:
Outcome measures - tracking the effectiveness of the business organization.
Control measures - tracking whether the controls are in fact enacted.
Here is a thought experiment: imagine a potato chip factory with an operational goal of achieving the right amount of salt on its chips. There is a target amount, and the factory needs to stay within small limits for market acceptance. So every day they grab a sample of chips and record the saltiness. They apply salt by running the recently deep-fried chips under a salt shaker. The two controls are the frequency of the shaker and the speed of the belt. Both the shaker frequency and the belt speed are measured to confirm the controls are properly applied. In this example, the saltiness is the outcome measure, and the shaker frequency and belt speed are control measures.
The simplest way, for me at least, to think about SPC is to measure trends in outcome measures and control measures to determine the likelihood that the controls are in fact affecting the outcome. In our potato chip example, we might find that we cannot control the outcome well enough with the shaker and belt controls. In that case, we might look for some other factor to control, say the factory humidity.
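The classic SPC tool for tracking an outcome measure is the control chart. Here is a minimal sketch in Python, with invented saltiness data and standard 3-sigma limits (a real chart would use subgroup statistics, but the idea is the same):

```python
import statistics

# Daily mean saltiness samples (grams of salt per kg of chips) -- invented data
daily_means = [1.52, 1.48, 1.50, 1.55, 1.47, 1.51, 1.49, 1.53,
               1.50, 1.46, 1.54, 1.50, 1.48, 1.52, 1.71]

baseline = daily_means[:-1]  # establish limits from in-control history
center = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = center + 3 * sigma, center - 3 * sigma  # 3-sigma control limits

for day, x in enumerate(daily_means, 1):
    flag = " <-- out of control, look for a cause" if not (lcl <= x <= ucl) else ""
    print(f"day {day:2d}: {x:.2f}{flag}")
```

Only the last day trips the limit; that signal, not the day-to-day wiggle, is what should trigger adjusting a control.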
If you look at many measurement programs in software and systems, you often find that outcome and control measures are confused. In fact, even sorting the measures into the two buckets is hard. No wonder measured process improvement in our domain has been so hard. Anyone have good examples of measurement patterns or antipatterns of measuring controls and outcomes?
Until now, this blog has been a series of essays on the theoretical considerations underlying the analytics of development. With this entry, I want to start shifting the emphasis to the practicalities of building analytic tools. Going from theory to practice raises all kinds of issues: data content and formats, robustness of algorithms, reinforcing agile practices, .... To start that discussion, let's begin with an epic on how an analytic tool for agile teams might work:
A lead of an agile team, call her Shirley, has been asked to deliver a mobile application, with a specified set of features, in time for the next world games, which are one year away. Understanding that the future is uncertain, Shirley treats the time to complete as a random variable. Before committing to the project, she needs an initial distribution of the time to complete the project. With such a distribution, she has a view of the probability of achieving the goal: it is the area under the distribution curve that lies to the left of the target date in Figure 1.
Figure 1. Probability distribution of delivering Shirley’s mobile app project.
Fortunately she has a tool called 'ARaVar' to help her build and maintain this distribution. This tool is federated with her OSLC agile project environment, Agilista (a fictional product). To use ARaVar, the team estimates the level of effort required for each feature using planning poker. In particular, for each feature’s level of effort, the leadership team agrees on three values to enter into Agilista:
The low (best case) – Assumes all the stars align and the feature comes together easily to meet requirements.
The high (worst case) – Assumes Mr. Murphy S. Law and Ms. May Hem unexpectedly join the team and inject unexpected challenges and obstacles.
The nominal (most likely) – Assumes the level of effort has the expected mix of good fortune and bad luck.
Behind the scenes, ARaVar finds these inputs in Agilista and uses them to define triangular probability distributions. In particular, ARaVar interprets these effort inputs as saying:
There is zero probability that the level of effort will be less than the best case.
There is zero probability that the level of effort will be greater than the worst case.
The most likely level of effort is the expected case.
So ARaVar sets each distribution to be zero below the low value and above the high value, with a peak at the expected case. Figure 2 shows the resulting triangular distribution, with the height of the peak (expected case) set so that the total area under the distribution is one.
Figure 2: Typical triangular distribution for each feature.
In the parlance of Bayesian reasoning, this technique provides the subject matter experts a means of arriving at an honest prior, based on current information and informed belief. If the difference between the low and high of a feature’s distribution is large, the team is expressing its uncertainty about the effort required to deliver the feature. This gives Shirley the opportunity to focus the team on resolving the uncertainties early, progressively de-risking the project.
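Here is a sketch of how a tool might turn three-point estimates into a time-to-complete distribution. The feature numbers and target are invented, and this is not ARaVar's actual algorithm, just the basic Monte Carlo idea:

```python
import random

random.seed(11)

# Three-point effort estimates (person-days) from planning poker -- invented
features = [  # (low, mode, high)
    (3, 5, 10),
    (8, 13, 30),  # wide spread: the team is uncertain about this one
    (2, 3, 5),
    (5, 8, 20),
]

N = 50_000
totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in features)
    for _ in range(N)
)

target = 40  # person-days available before the delivery date
p_on_time = sum(t <= target for t in totals) / N
print(f"P(total effort fits in {target} person-days) = {p_on_time:.0%}")
```

Dropping or de-risking the wide-spread feature is what moves and narrows the total distribution, exactly the negotiation Shirley carries out next.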
With this prior estimate in place, Shirley has an idea of how likely it is that she can make the commitment, and she negotiates the content. What-if analysis in ARaVar provides her with the capability to compute the impact of adding, changing, or dropping one or more features from the program. Luckily, she finds that one of the relatively uncertain features is more of a nice-to-have than a must-have and adds considerably more risk than value. So she negotiates that feature out of scope in exchange for a firmer commitment to an earlier delivery in 11 months, as illustrated in Figure 3.
Figure 3: The negotiated delivery commitment: earlier and more predictable.
So Shirley is now in a good place. She has agreement on the scope of the project between her team and her stakeholders, and she feels her team has a good chance of delivering on time.
In Agile fashion, work proceeds by establishing work items to deliver the features. These work items are scheduled into iterations/sprints on an ongoing basis. As the team completes work items, it not only has less work to complete but also builds a track record of the actual time it takes to complete work (called team velocity). From a Bayesian perspective, these data constitute important evidence of how well the project is actually executing. ARaVar queries Agilista for the completion status of the features, the work item burndown history, and updated effort-to-complete estimates for the remaining features. ARaVar uses modern predictive algorithms to update the time-to-complete distribution.
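One crude way to fold the velocity track record into the prediction is to bootstrap it: resample observed sprint velocities until the backlog is exhausted. This is only a sketch of the idea with invented numbers, not the predictive algorithm the tool actually uses:

```python
import random

random.seed(5)

remaining_points = 120  # story points left in the backlog
velocity_history = [18, 22, 15, 20, 17, 21]  # points completed per sprint so far

# Bootstrap the track record to get a distribution of sprints-to-complete
N = 20_000
outcomes = []
for _ in range(N):
    done, sprints = 0, 0
    while done < remaining_points:
        done += random.choice(velocity_history)  # resample an observed velocity
        sprints += 1
    outcomes.append(sprints)

outcomes.sort()
print(f"median: {outcomes[N//2]} sprints, "
      f"90th percentile: {outcomes[9*N//10]} sprints")
```

As more sprints complete, the history grows and the prediction both updates and narrows, which is the Bayesian point of the story.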
With these ongoing predictions, Shirley can discuss with her team and external stakeholders whether the odds of meeting the commitment are improving (as they should) or degrading. If the latter is the case, she can use ARaVar to predict the impact of managing content (decommitting features) or adjusting resources. For example, the tool revealed that one feature was very much at risk. In discussion with the stakeholders, it was decided that this feature was necessary and that for the next sprint more resources should be focused on it. Some staff were assigned to the team for just that sprint. With ARaVar, all stakeholders can have a more honest and trustworthy discussion about how best to proceed.
ARaVa does not yet exist, but it is not a dream. IBM Rational and Research are now in the process of developing such a tool for a possible delivery next year. We are calling the project AnDes (for Analytics of Development). AnDes uses state of the art learning algorithms. We do have working versions federated with Rational Team Concert (We did show a preview at last year’s Rational Innovate). In addition to consideration of automating the data collection, we are exploring how it can be applied across a wide range of projects:
Large to small
Innovative to complex
Fully or partially agile.
We are looking for design partners now! Interested? Please let me know at firstname.lastname@example.org.
Today, April 1, seems like a good day to bring forward an important new idea. In fact, I think this may be the next big thing.
One of the well-understood problems with software development project management is that it is often impossible to specify the complete work breakdown with certainty. The longer and the more innovative the project, the more uncertain the work breakdown items. This is addressed in iterative, agile planning by identifying the summary work items and then adding detail as the project evolves. Another source of uncertainty is the dependencies between the summary items. This uncertainty in turn makes critical path analysis for such programs problematic. In fact, there is a whole ensemble of possible critical paths, each with some likelihood. For the physics literate, this ensemble of paths is much like the Feynman path integrals of quantum theory. The math is pretty hairy (see this elementary description). Fortunately, as Feynman also pointed out, one can simulate quantum mechanics with quantum computers. I am no expert in quantum computing, but even so I have a proposal: Quantum Informed Projects (QuIPs). The idea is to represent work items as QItems using qubits from quantum computing. Then we can represent the project as a set of entangled QItems and use a suitably large quantum computer to calculate the wave function for the critical path.
My understanding is that we do not yet have large enough quantum computers to make this practical. However, the same is true for implementing other useful quantum algorithms (see this example). So we can start by building the algorithms. There is no time like the present (not accounting for the quantum uncertainty of measuring time). So on this special day, let's turn our attention to QuIPs.
In the previous entry, I introduced a probabilistic view of a commitment. The main idea is that when you commit to deliver something in the future, you are making a kind of bet. The odds of winning the bet are the fraction of the time-to-deliver distribution that lies before the target date. For example, in the following example, the project manager has a 47% likelihood of winning the bet.
This raises a couple of questions. First, how is the distribution of time-to-complete determined? There are a variety of methods to estimate the time to complete an effort. I am not taking a position on which method to adopt. The important point is that the estimation method should not return a number but a distribution! The major estimation vendors have this capability, even if it is not always surfaced. I will expand on this point in the next blog entry. For now, the key point is that you should be working not with point estimates, but with distributions.
Second, how does the project manager affect the shape and position of the distribution, and therefore the odds? Some of the techniques are intuitive, some not so much. There are two things one might do: move the distribution relative to the target date, or change the shape of the distribution, typically narrowing it so that more of it lies within the target date.
For the first, one can move the target date out, so that the picture looks like this:
This is, of course, intuitive: moving out the date lowers the risk. Another intuitive thing a project manager might do is descope the project, i.e., commit to deliver less functionality. This may have two effects on the distribution: it will move to the left, as there will be less work to do, and, depending on the difficulty of the descoped features, the descoping may also narrow the distribution. By removing a difficult-to-implement feature, one is more certain of delivery, narrowing the distribution and removing risk, resulting in this diagram:
Now comes the unintuitive part. Suppose the target date and content are not negotiable. What is a project manager to do then? The idea is to take actions that will narrow the distribution in Figure 1 so that it looks like this:
How is this done? Many project managers, in the name of making progress, choose the easiest functions to implement first: "the low-hanging fruit." However, by doing this the shape of the curve is minimally affected. The less intuitive approach, following the principle of the Rational Unified Process, is to work on the most difficult, riskiest requirements first! These are the requirements about which the team has the least information, and so it should tackle them first in order to have time to gain the information needed to succeed. Putting off the riskier requirements and doing the easy stuff first gives the appearance of progress, but one will run out of time to do the riskier requirements and fail to meet the commitment.
All this has to be done while ensuring there is sufficient time to fulfill all the requirements, risky or not. So in the end, one must account for both the time to complete tasks and its uncertainty in order to meet commitments. Some techniques for doing that will be discussed in the next blog entry.
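The effect of narrowing the distribution on the odds is easy to quantify. A sketch, using normal distributions with invented parameters:

```python
from statistics import NormalDist

target = 12.0  # months

wide = NormalDist(mu=11, sigma=1.5)    # before de-risking
narrow = NormalDist(mu=11, sigma=0.5)  # after tackling risky items early

print(f"wide:   P(on time) = {wide.cdf(target):.0%}")    # 75%
print(f"narrow: P(on time) = {narrow.cdf(target):.0%}")  # 98%
```

Same mean estimate, same target date; narrowing the distribution alone turns a shaky bet into a strong one.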
Over the last couple of years I have been more or less following the technical debt community's discussion of what exactly technical debt is. Some argue that technical debt is limited to what it would cost to address deficiencies such as those found by code inspection tools like Sonar. Other writers, such as Chris Sterling, introduce aspects or kinds of technical debt: quality debt, design debt, ....
My interpretation of Ward Cunningham's metaphor of incurring debt by shipping is broader, including the wide range of after-delivery costs. This entry continues that discussion and suggests one path forward.
I argued that technical debt should reflect the fact that the very act of shipping software incurs all sorts of possible liabilities, any one of which may incur some future cost.
Future service costs
Executives getting on planes to deal with critical situations
Fines resulting from privacy violations
Loss of business from failing a compliance audit
Loss of intellectual capital due to security flaws
The nature of the liabilities varies from domain to domain. Shipping the next rev of a mobile game like Angry Birds entails much less liability than the next rev of avionics software for a commercial jet.
The cost of fixing the code may be the least of it, and it under-estimates the assumed liabilities. Reasoning about whether these liabilities outweigh the benefits of shipping the code is key to the ship decision.
Since I wrote that entry, I have been watching the technical debt space and see that I may be in the minority, but not alone, with this perspective. Some people argue that technical debt is solely the cost of addressing shortfalls in the code. Others adopt a broader definition. In fact, in a conversation I had with Capers Jones, a long-time expert in software measurement, he shared a conversation he had with Ward discussing the same points. I have seen others make a distinction between software debt and technical debt. I have decided not to weigh in on this argument, but to suggest we call all of the liabilities (wait for it ...) technical liability.
There is a key difference between standardly-defined technical debt and technical liability: Technical debt involves code quality and can be determined. The liabilities involve possible future events and so entail predictions of the future. Some might even consider technical debt knowable and technical liability unknowable.
Readers of this blog know where I am going. Technical liability, unlike the more limited technical debt, involves a range of future possibilities, and so each of the components of liability should be specified as a random variable with a probability distribution. A security violation might or might not occur, but if it does, the possible expense could sink the company. Reasoning about the risk takes some advanced techniques, like those used to set the price of an insurance policy.
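To make the random-variable view concrete, here is a sketch of sizing the liabilities by Monte Carlo, insurance-style. Every probability and cost below is invented for illustration:

```python
import random

random.seed(2)

# Each liability: an annual probability of occurring and a triangular cost
# distribution (low, high, mode) -- all numbers invented
def sample_liability():
    total = 0.0
    if random.random() < 0.30:  # routine post-ship service costs
        total += random.triangular(50e3, 400e3, 150e3)
    if random.random() < 0.05:  # failing a compliance audit
        total += random.triangular(100e3, 2e6, 500e3)
    if random.random() < 0.01:  # major security breach
        total += random.triangular(1e6, 50e6, 5e6)
    return total

N = 200_000
losses = sorted(sample_liability() for _ in range(N))
mean = sum(losses) / N
print(f"expected liability:   ${mean:,.0f}")
print(f"95th percentile loss: ${losses[int(0.95 * N)]:,.0f}")
```

The expected value is modest, but the tail is what matters: the rare breach dominates the risk, which is exactly why a point estimate of liability misleads.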
Finally, to make the economic decision of whether it makes sense to ship a piece of software, one needs to balance the value expected from the ship against the assumed liabilities. Note that the future value is also a random variable. In that case, the decision to ship should be based on the techniques found here. I will elaborate the reasoning behind technical liability in a future blog (promise).
In summary then, technical liability gives a more complete picture of the economics of shipping a piece of code than technical debt, but it requires more sophisticated analysis.
Last week I briefed an IBM customer on some of our recent thoughts on the role of estimation in business analytics. I feel the briefing was not entirely successful. The customer asked about a use of estimation I had not considered previously. My first reaction was that the approach desired by the customer was 'not possible'. I then realized it might work in some cases, but I was emotionally opposed to the idea. Then I realized I should not let my emotions interfere and should think through the question and its implications. Hence this blog:
In Agile projects or in maintenance organizations, workers are assigned 'work items'. Often workers are asked to estimate the time it will take to complete the work item. Asking an employee to commit to a time-to-complete is both reasonable and unreasonable. It is reasonable in that team leads and managers need to have some idea when the current work will be done to plan resource assignments, manage content, make commitments, and the like. Management also wants to identify the more reliable, productive workers. After all, development teams are meritocracies. It is right that the more productive employees are identified and rewarded. So we need a way for employees to make reasonable estimates while providing a way for (cliché alert!) the cream to rise. It is unreasonable in that the worker is asked to guess and, in fact, commit to a time to complete. In some cases, the worker may be confident in the estimate. In others, there will be less confidence for a variety of good reasons: the task may have dependencies, the solution to a bug report may not be apparent, and so on. So asking the worker to commit to a fixed time is unreasonable, and measuring the worker against these commitments is oppressive. Under these circumstances, the intelligent worker will pad the estimate to ensure that the commitment is met. The unintended consequence of asking for a single duration is longer-than-needed estimates and, since people work to their commitments, lower productivity.
In the Agile Planning feature shipped in Rational Team Concert (RTC), we provide a means to somewhat mitigate this phenomenon. RTC lets the worker enter the best-case, likely, and worst-case times to complete the task. This way the worker can enter numbers that reflect her or his uncertainty. This supports more reasonable commitments and less adversarial conversations. In the tool, the numbers are rolled up using a Monte Carlo algorithm that accounts for task dependencies and shows the likelihood of completing the iteration or sprint. A benefit of this approach is that the worker can be held accountable not to a single value, but to staying within the range of the estimate, so there is no need for padding. There remains the problem of knowing whether the estimate is reasonable and how to find the meritorious, which finally brings us to the client request.
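To make the rollup idea concrete, here is a minimal sketch of a Monte Carlo rollup of three-point estimates. It is not the RTC implementation; the task names, the estimates, and the use of a triangular distribution (a common simple choice for best/likely/worst inputs) are all my assumptions, and dependencies are simplified to purely sequential tasks.

```python
import random

# Hypothetical three-point estimates (best, likely, worst) in days for the
# tasks of an iteration; the names and numbers are invented for illustration.
tasks = {
    "fix-login-bug": (1, 2, 5),
    "refactor-cache": (2, 4, 10),
    "write-docs": (1, 1, 3),
}

def simulate_iteration(tasks, deadline_days, trials=100_000, seed=42):
    """Estimate the probability that all (sequential) tasks finish by the
    deadline, sampling each task from a triangular distribution."""
    rng = random.Random(seed)
    on_time = 0
    for _ in range(trials):
        total = sum(rng.triangular(best, worst, likely)
                    for best, likely, worst in tasks.values())
        if total <= deadline_days:
            on_time += 1
    return on_time / trials

p = simulate_iteration(tasks, deadline_days=10)
print(f"Chance of finishing in 10 days: {p:.0%}")
```

Note that the result is a probability of on-time completion, not a single date, which is exactly what lets the conversation shift from "commit to a number" to "stay within a range."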
The client asked if we could turn this around: could we use some sort of algorithm to compute the expected time to complete for the task? In other words, the system tells the worker the amount of time it should take to complete the task, and the worker is then measured against this expectation. As I said at the beginning of the blog, my first reaction was 'probably not, and this is undesirable'. Let's dive deeper. First, like the RTC agile planner, this computation can and should include best, likely, and worst cases in order not to be overly oppressive, and roll up to show iteration and/or project schedule risk. Further, building out this approach raises the following statistical question: "Can we sort work items into equivalence classes of similar enough tasks, so that we can use these classes as populations from which to build time-to-complete statistics?" If we could do this, then we could properly set expectations for the worker, detect the superior and inferior workers, reward the former, and better train the latter. Further, we could measure improvements over time in the execution of the tasks due to team or process improvements. All good things. However, this approach needs to be implemented very carefully and not over-applied, or it could lead to more oppression and unintended consequences.
I suspect the more creative architecture and design tasks simply do not lend themselves to this sort of analysis. So teams that create new platforms and build new applications will rely more on expert opinion for the estimates, not predictions based solely on historical data. Not everyone would agree with this. For example, there are estimation tools from various vendors that do try to estimate the effort and duration of design and architecture tasks using parametric models or classifications. However, there is so much variation in the novelty of the efforts and in team skill and experience that the uncertainties in the estimates are large enough that they should be applied to projects with great care and to individuals not at all.
On the other hand, most of what development organizations do is more routine, and for those tasks something along the lines of what the customer asked for might be possible. One would need a way to characterize the different task classes, track the times-to-complete, and do the statistical measures. With this in place, one could explore not only automated task estimates, but also process optimization by what I believe is a novel application of statistical process control.
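As a sketch of the kind of task analytics this would take, consider building per-class time-to-complete statistics from history. The task classes and completion times below are invented; the choice of median and 80th percentile (rather than the mean) anticipates the long-tailed distributions discussed later in this blog.

```python
import statistics

# Hypothetical history of completed work items: (task_class, days_to_complete).
history = [
    ("ui-defect", 1), ("ui-defect", 2), ("ui-defect", 2), ("ui-defect", 8),
    ("backend-defect", 3), ("backend-defect", 4), ("backend-defect", 12),
    ("config-change", 1), ("config-change", 1), ("config-change", 2),
]

def class_expectations(history):
    """Per task class, report the median and the 80th percentile of the
    historical time to complete; a percentile, not the mean, sets a fair
    expectation when the distribution has a long tail."""
    by_class = {}
    for cls, days in history:
        by_class.setdefault(cls, []).append(days)
    result = {}
    for cls, times in by_class.items():
        times.sort()
        # quantiles(..., n=10) returns the nine deciles; index 7 is the
        # 80th percentile.
        p80 = statistics.quantiles(times, n=10)[7]
        result[cls] = {"median": statistics.median(times), "p80": p80}
    return result

for cls, stats in class_expectations(history).items():
    print(cls, stats)
```

With enough history per class, the same per-class statistics could feed control charts, which is where the statistical process control angle comes in.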
In summary, I believe we need to pursue task analytics and estimation, but I have serious misgivings. Automated analytics-based business processes can go seriously wrong. We need to ensure that some judgment and subjectivity is part of the process. The misuse of analytics in the subprime mortgage business is a case in point.
I realize something along the lines I am describing may already be available. Has anyone heard of a tool that supports this method?
An ongoing theme of this blog is that development processes differ from other business processes in that there is a wide range of uncertainty inherent in the efforts. It follows that tracking and steering development efforts entails ongoing predicting, from the evolving project information, when a project is likely to meet its goals.
Late last year, Nate Silver, author of the FiveThirtyEight blog and well-known predictor of elections, published The Signal and the Noise, a text for the intelligent layperson on how prediction works. I was impressed by the book, as it explained the principles behind the sort of Bayesian analytics we need for development analytics without any explicit math. However, I felt folks in our field would greatly benefit from having the mathematical blanks filled in. So I decided to write a series of papers introducing the topics to folks who had some statistics and maybe some calculus in college, but not a solid background in prediction principles.
Since the last entry introducing the concept of liability, I have had the opportunity to discuss it on several occasions with colleagues in IBM. In the course of these discussions I formulated what seems to be a useful way to explain the idea. In particular, I presented this idea at the Managing Technical Debt Workshop held on October 9. The following is a preview of what I will present as a lightning talk at the Cutter Consortium Summit next week.
Imagine an insurance agent comes into your office with the following offer: "Our company will indemnify your code against the following risks:
Excess support costs (above some deductible)
The policy will only cost $X a year." You realize that code insurance is much like auto liability insurance. In the auto case, the insurance protects you financially against possible unfortunate outcomes of driving the car; in the code case, the insurance protects you against some unfortunate outcome of running the code. So code liability insurance is like automobile liability insurance. This leads to the definition:
'Technical Liability' is the financial risk exposure over the life of the code.
(Thanks to my colleague, Walker Royce, for this crisp definition.)
Note auto insurance and code insurance have some significant differences.
The context for driving - city streets, highways, parking lots, ... - is more limited than the range of contexts in which code can operate. Software is truly everywhere, from embedded avionics systems to Angry Birds on a smartphone.
The risk for auto insurance is spread among a small number of large, relatively homogeneous populations: young drivers, safe drivers, high-risk drivers, etc. So rates can be computed from population experience. We have no such insurance markets for software.
Generally, firms faced with assuming a liability have a choice: either they buy a policy indemnifying them against the risk, or they self-insure. When they self-insure, it is often reported in their annual reports.
If you ship software, you are assuming a liability. As far as I know, code insurance is either rare or nonexistent. If it existed, the cost of the policy would be charged against the financial value of the code. So we are left with self-insuring.
Here is the main point. In order to truly assess the economic value of the code, one should, as best one can, estimate the technical liability and a fair price, X, for the indemnification. Even a rough estimate of X is better than ignoring the liabilities assumed by shipping code.
So how to estimate X? My first observation should be of no surprise to readers of this blog. Since technical liability involves the future, there is a range of possible future exposures, each of which has some probability. Technical liability has a probability distribution and so is a random variable. X is a statistic (perhaps the mean) of the distribution.
As suggested above, code liability comes in flavors: there are exposures resulting from security, reliability, integrity, and so on. Each of these flavors is characterized by its own random variable. The overall liability is the sum of the liabilities that apply to the particular code. As I mentioned in a previous entry, this sum of random variables is also a random variable, whose distribution can be found using Monte Carlo simulation.
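A minimal sketch of that simulation follows. The flavors modeled, the probabilities, and the cost distributions are all invented for illustration; a real estimate would substitute models grounded in the organization's own exposure data.

```python
import random

def sample_liability(rng):
    """One Monte Carlo draw of the total annual liability, in dollars.
    Each 'flavor' of liability is its own random variable; all numbers
    below are hypothetical."""
    total = 0.0
    # Security breach: rare but potentially catastrophic.
    if rng.random() < 0.02:
        total += rng.lognormvariate(13, 1.0)  # median cost ~ $440K
    # Reliability (outage) incidents: a handful per year, modest cost each.
    for _ in range(rng.randint(0, 4)):
        total += rng.uniform(5_000, 50_000)
    # Excess support costs above a deductible.
    total += max(0.0, rng.gauss(30_000, 20_000) - 25_000)
    return total

rng = random.Random(7)
samples = [sample_liability(rng) for _ in range(100_000)]
x = sum(samples) / len(samples)  # a fair annual premium: the mean exposure
print(f"Estimated fair premium X: ${x:,.0f}")
```

The mean of the sampled distribution plays the role of X; one could just as easily read off a high percentile if the firm wants a more conservative reserve.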
Now, reasoning about code liability is not unprecedented. Car manufacturers estimate warranty exposure; telephone switch manufacturers reason about the economic value of going from .99999 reliability to .999999. There are Bayesian models of the likelihood of a security breach. To estimate technical liability, we need to agree upon a taxonomy of the flavors of liability, not a daunting task, and then assemble good-enough models of each into an overall framework.
This blog experiment seems to be working. The entries are getting around 100 visits and growing - good enough to keep at it. I have found that writing the entries has given me the opportunity to clarify and express my thoughts. This entry is a case in point.
We are deploying a BAO solution for the level 3 support organizations in our IBM India Software Labs. That deployment provides a case study in how to integrate two concepts I introduced in earlier blogs. This entry is longer than the others; I hope you find it worth the wait and effort.
In those previous entries, I discussed two frameworks for reasoning about measurement: GQM(AD) and Outcome and Controls. These frameworks address different aspects of the problem of using measures to achieve business goals by measuring the right things and taking actions to respond to the measurements. In fact, these frameworks fit together hand in glove.
Recall that level 3 support teams provide fixes to defects found in delivered code. Each of the teams deals with an ongoing series of change requests (aka APARs, PMRs). An organizational goal is to reduce the time to complete and the cost of completing these requests. To achieve the goal, they are adopting some Rational-supported practices and supporting tools. So the questions that need to be answered are:
1. What is the time trend of the time to complete of the change requests?
2. What is the time trend of the cost to complete of the change requests?
3. In each case, how would I know that some improvement action resulted in a significant improvement in the trends?
Now comes the hard part: determining the measures that answer the questions. The change requests arrive somewhat unpredictably. Each goes through the fix and release process and presumably gets released in a patch or point release. So at any given time there is a population of currently open and recently closed requests. The measures that answer the questions are a time trend of some statistic on some population of change requests.
Each of the change requests requires a different amount of time and effort to complete. So to measure whether the outcome is being achieved, one must reason statistically: defining populations of requests, building the statistical distribution of, say, time to complete for that population, and defining the outcome statistic for the distribution. So we need to do two things to define the measure:
Define the population of requests for each point on the trend line
Define the statistics on that population
To keep it simple (at least as simple as possible), let's form the population by choosing the set of change requests closed in some previous period, say the previous month or quarter. To choose a statistic, one needs to look at the data and pick the statistic that best answers the question. Most people assume the mean of the time (or cost) to complete is the best choice. However, that choice is appropriate only when the histogram of the time to complete is centered on a mean, as is common in normally distributed data.
One of the advantages of working in IBM is that we have lots of useful data. Inspection of some APAR time-to-complete data from one of our teams in the IBM Software Lab in India shows the distribution is not centered on a mean, and so reduction of the mean time to complete is not the best measure of improvement.
We have looked at literally tens of thousands of data points for time to complete of change requests across all of IBM and have found the same distribution. For the statistics savvy, it appears to be a Pareto distribution, but statistical analysis carried out by Sergey Zeltyn of IBM Research's Haifa lab shows that this distribution does not fit any standard distribution well. A possible explanation is that the time required to fix the defects is Pareto distributed, but since the resources available to fix them are limited, the actual time to complete is not pure Pareto. In any case, a practical way to proceed is to choose a simple (non-parametric) measure: the width of the head, i.e. the time it takes to complete 80% of the requests.
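The head-width statistic is easy to compute and needs no distributional assumptions. A sketch, with invented completion times whose long tail mimics the Pareto-like shape described above:

```python
# Invented time-to-complete data (days) for change requests closed in one
# month; the long tail mimics the Pareto-like shape seen in real APAR data.
closed_days = [1, 1, 2, 2, 3, 3, 4, 5, 7, 9, 12, 18, 30, 55, 300]

def head_width(days, fraction=0.8):
    """The 'width of the head': the time within which the given fraction
    of requests were completed (a simple non-parametric statistic)."""
    ordered = sorted(days)
    k = max(1, round(fraction * len(ordered)))
    return ordered[k - 1]

print("80% of requests closed within", head_width(closed_days), "days")
```

On this sample, 80% of requests close within 18 days, while the mean is about 30 days, dragged up by a single extreme outlier; this is exactly why the head width is a better improvement measure than the mean for such data.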
So with this analysis in place, the organization decides to specify the goal precisely, such as a 15% reduction in the time and cost to complete 80% of the requests closed each month. So the outcome measures are the time it took to close, and the cost of closing, 80% of the requests closed each month.
Having chosen these measures, we are ready to identify the data sources and instrument the measures. So far so good. But wait, we still need to answer question 3.
As I mentioned, in order to improve the outcome measures and achieve the goals, the lab teams have agreed to adopt appropriate Rational practices and tools to automate certain processes. The practices were selected using the Rational MCIF Value Traceability Trees (a development causal analysis method). Adopting and maturing the practices and their automations are the controls. Some control examples are automating the regression test and build process, and the adoption of a stricter unit test discipline to reduce time lost in broken builds. There are control mechanisms with associated control measures such as time-to-build, regression test time-to-complete, percent of code unit-tested, and a self-assessment by the team of their adoption of testing and build practices.
To answer question 3, we need statistical analytics to determine if the changes in the control measures have had a significant impact on the outcome measures. Our Research staff has settled on those analytics, but I will discuss that in a later entry. This entry is already too long.
This case study is both reasonably straightforward and far from trivial. It does show, as promised, that GQM(AD) and Outcome and Controls work together. I leave you all with a thought problem: How would you apply the pattern to teams developing new features for existing applications?
As readers of this occasional blog know, this blog has been less of a 'web log' and more a series of small essays on the topic of development analytics. I have decided to start writing less formal entries more frequently and have realized I would be comfortable doing that on my own web site, murraycantor.com.
I want to be entirely clear: IBM has in no way looked over my shoulder in the writing of this blog and has been very generous in providing me a forum. Nevertheless, I will be freer sharing my opinions when there is no possibility of confusing my often idiosyncratic opinions with those of the company.