I am not a natural blogger, but do like to share thoughts with a broad community of folks with shared interests.
My long-term passion is enabling organizations to develop and deliver software and systems more effectively. I am thoroughly convinced that well-organized, well-governed organizations not only deliver value to the business, but also enhance the lives of their staff. In such organizations, people get to work on cool things, innovate, work well with colleagues, and build a legacy by being part of making things of value. I am also a mathematician by training, and I am especially energized by the opportunity to apply mathematical reasoning to the improvement of software and system organizations. My current assignment is to lead the Business Analytics and Optimization (BAO) strategy for Rational.
So, with this blog, from time to time, I will share my thoughts on BAO for software and system organizations. I hope this blog will be a catalyst for building a community. I especially look forward to comments and conversations.
So stay tuned.
In a conversation with a development lab productivity team, I was reminded that the first challenge software and system organizations face when starting an improvement program is deciding what to measure. In particular, some organizations start with what is easy to measure, with the understandable thought "we need to start somewhere". I have found this approach tends not to get traction. Over the years I have settled on some principles that seem to apply:
I suspect there are other principles that should be added, but these are a good start.
In a later blog, I will discuss levels of measurement.
I would like to build on the theme of reasoning about what to measure. The goal of business analytics is to track what matters to the organization (what it is you are trying to manage) and respond to the measures in some way to gain improvement. In manufacturing and some service delivery domains, the science of measuring outcomes is statistical process control (SPC), which lies at the heart of the Six Sigma movement. Even so, you will not need a Six Sigma belt to participate in this discussion. While there is reason to believe that not all of the Six Sigma practices apply all that well to our domain, the idea of tracking outcomes, applying statistical analysis to detect change, and applying some sort of controls to affect the change applies in all business domains, including software and system development and delivery.
Briefly then, the outcomes are the operational goals and the controls are the actions you take to achieve the outcomes. So naturally we need two kinds of measures.
The simplest way, for me at least, to think about SPC is that it measures trends in outcome measures and control measures to determine the likelihood that the controls are in fact affecting the outcome. In our potato chip example, we might find that we cannot control the outcome well enough with the shaker and belt controls. In that case, we might look for some other factor to control, say the factory humidity.
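To make the SPC idea concrete, here is a minimal sketch with made-up numbers: establish control limits for an outcome measure from a baseline period and flag later points that fall outside them, which is the signal to ask whether a control change (or something else) has shifted the process.

```python
# A minimal illustration of the SPC idea: track an outcome measure over time,
# establish control limits from a baseline period, and flag points that
# suggest the process (or a control change) has shifted it.
# All numbers here are made up for illustration.

import statistics

# Weekly outcome measure, e.g. mean defect-fix time in days (hypothetical data)
baseline = [11.8, 12.4, 11.5, 12.9, 12.1, 11.7, 12.6, 12.0]
recent   = [11.9, 12.2, 10.4, 10.1, 9.8, 10.0]   # after a control change

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
upper, lower = mean + 3 * sigma, mean - 3 * sigma   # classic 3-sigma limits

for week, value in enumerate(recent, start=len(baseline) + 1):
    state = "in control"
    if value > upper or value < lower:
        state = "out of control -> investigate (did the control change work?)"
    print(f"week {week:2d}: {value:5.1f}  [{state}]")
```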
If you look at many measurement programs in software and systems, you often find that outcome and control measures are confused. In fact, even sorting the measures into the two buckets is hard. No wonder measured process improvement for our domain has been so hard. Anyone have good examples of measurement patterns or antipatterns of measuring controls and outcomes?
Again stay tuned for more....
Some business analytics for development organizations entail measuring return on investment for software and system programs. I just finished a long version of our RoI approach. This is a link to a more technical version of the paper.
As I mentioned in the first posting, I am still getting the hang of blogging. I guess one use of blogs is to share what is on my mind while staying in the neighborhood of the topic of analytics. So, I have been putting a lot of thought into Toyota's dilemma about how to deal with the reports of dangerous acceleration in their cars. The recent reports of Prius incidents (see this article in the New York Times) confirmed some of my earlier suspicions and hence this blog.
First, I need to come clean: all I know is from news accounts. I have had no contact with Toyota or any IBMers working with Toyota. Further, I need to say that the opinions here, in my view not controversial, are my own and do not reflect any IBM position.
So what do we know:
Now, say there is one chance in a million miles of driving of the latent defects manifesting. They may be impossible to find with standard testing, yet will inevitably happen every so often to drivers. This is the standard insight that with large volumes, unlikely events become inevitable. So with Toyota's large sales, they may be the victim of their own success.
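A quick back-of-the-envelope calculation, with made-up volumes, shows how fast "unlikely" becomes "inevitable":

```python
# Back-of-the-envelope illustration (made-up volumes): with a one-in-a-million
# chance per mile of a latent defect manifesting, the chance of at least one
# incident grows quickly with the total miles driven across a large fleet.

per_mile = 1e-6
for total_miles in (1e5, 1e6, 1e7, 1e8):
    p_at_least_one = 1 - (1 - per_mile) ** total_miles
    print(f"{total_miles:>12,.0f} miles -> P(at least one incident) = {p_at_least_one:.4f}")
```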
The avionics community has developed a discipline around safety-critical software. There are design and model checking methods to validate that the embedded software is good enough to stake people's lives on the code running correctly. (There is a good article in the latest Communications of the ACM on model checking for avionics.) It seems Toyota and the entire auto industry need to adopt these safety-critical disciplines going forward. The cost of these practices is overshadowed by the costs of the highly publicized incidents, the lawsuits, and other liability.
Yesterday, I was at the Conference on Systems Engineering Research (CSER), held this year at Stevens Institute. I sat through a talk which stimulated my curmudgeon tendencies. In the spirit of hopefully generating some controversy, I will not hold back.
The talk was about an expert-system-based engineering risk management system. Essentially, the authors got a set of experts together to identify categories of risks (people, delivery, product ...), risks in the categories, and a method for scoring the level of each risk and its consequence, and then summing the products of the levels and the consequences. The result is the total amount of category risk. Looking at the output is supposed to give you insight into the overall program risk and the contributing risks.
My problem is that I cannot parse the last sentence. In fact I do not understand terms like "program risk" and say "people risk". There may be a clash of cultures here; to many those terms seem reasonable.
My argument starts here: One can ask 'What is my risk of going over budget?' or 'What is my risk of missing the delivery date?' These sorts of questions are answered using standard business analytics. See, for example, Mun's text on risk analysis, which defines risk as statistical uncertainty of a quantity that matters. For example, 'time to complete' is a quantity that does matter to a project. The uncertainty in making the date can be measured as the variance (or standard deviation) of the estimate of the time-to-complete. (Note, for the math aware, time-to-complete is what statisticians call a continuous random variable.) So the answer to the question 'What is my schedule risk?' has an unambiguous, quantified answer. 'What is my people risk?' has no such answer. In fact, 'people risk' is not a concept defined in business analytics.
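To make the contrast concrete, here is a tiny sketch, with hypothetical estimates, of schedule risk as a quantified uncertainty rather than a category score:

```python
# Schedule risk as statistical uncertainty: the spread of the time-to-complete
# distribution, not a category score. The estimates below are hypothetical.

import statistics

# Independent estimates (in months) of time-to-complete, e.g. from several
# estimators or from samples of a schedule model.
time_to_complete = [10.5, 11.0, 11.2, 11.8, 12.5, 13.0, 14.2]

mean = statistics.mean(time_to_complete)
std = statistics.stdev(time_to_complete)        # the schedule "risk"
print(f"expected time to complete: {mean:.1f} months")
print(f"schedule risk (std dev):   {std:.1f} months")

# The chance of missing a 12-month deadline is the mass of the distribution
# beyond 12 months (here, crudely, the sample fraction).
late = sum(t > 12 for t in time_to_complete) / len(time_to_complete)
print(f"fraction of estimates beyond 12 months: {late:.0%}")
```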
Of course, it does make sense to ask what contributes to the schedule risk. One might fear that the inability to staff the project contributes to the schedule risk. Fair enough. In my mind, that does not make staffing a 'risk', but say a schedule risk factor.
I am not sure why I am so adamant about this, but I am. It could be that I believe that the less precise use and measurement of risk is holding our industry back.
Anyone want to comment on, or defend, the so-called risk management practice underlying the talk I found so annoying?
Last week I briefed an IBM customer on some of our recent thoughts on the role of estimation in business analytics. I feel the briefing was not entirely successful. The customer asked about a use of estimation I had not considered previously. My first reaction was that the approach the customer wanted was 'not possible'. I then realized it might work in some cases, but I was emotionally opposed to the idea. Then I realized I should not let my emotions interfere and should think through the question and its implications. Hence this blog:
In Agile projects or in maintenance organizations, workers are assigned 'work items'. Often workers are asked to estimate the time it will take to complete the work item. Asking an employee to commit to a time-to-complete is both reasonable and unreasonable. It is reasonable in that team leads and managers need to have some idea when the current work will be done to plan resource assignments, manage content, make commitments and the like. Management also wants to identify the more reliable, productive workers. After all, development teams are meritocracies. It is right that the more productive employees are identified and rewarded. So we need a way for employees to make reasonable estimates while providing a way for (cliché alert!) the cream to rise. It is unreasonable in that the worker is asked to guess and, in fact, commit to a time to complete. In some cases, the worker may be confident in the estimate. In other cases, there will be less confidence for a variety of good reasons: the task may have dependencies, the solution to fixing a bug report may not be apparent, and so on. Under those circumstances, asking for a commitment to a fixed time is unreasonable, and measuring the worker against these commitments is oppressive. The intelligent worker will pad the estimate to ensure that the commitment is met. The unintended consequence of asking for the duration is longer-than-needed estimates and, since people work to their commitments, lower productivity.
In the Agile Planning feature shipped in Rational Team Concert (RTC), we provided a means to somewhat mitigate this phenomenon. RTC provides a mechanism for letting the worker enter the best case, likely, and worst case for the time to complete the task. This way the worker can enter numbers that reflect her or his uncertainty. This supports more reasonable commitments and less adversarial conversations. In the tool, the numbers are rolled up using a Monte Carlo algorithm that accounts for task dependencies and shows the likelihood of completing the iteration or sprint. A benefit of this approach is that the worker can be held accountable not to a single value, but to staying within the range of the estimate, and so there is no need for padding. There remains the problem of knowing if the estimate is reasonable and how to find the meritorious, which finally brings us to the client request.
The client asked if we could turn this around. Could we use some sort of algorithm to compute the expected time to complete for the task? In other words, the system tells the worker the amount of time it should take to complete the task, and the worker is then measured against this expectation. As I said at the beginning of the blog, my first reaction was 'probably not, and this is undesirable'. Let's dive deeper. First, like the RTC agile planner, this computation can and should include some best, likely, and worst case in order not to be overly oppressive, and roll up to show iteration and/or project schedule risk. Further, building out this approach raises the following statistical question: "Can we sort work items into equivalence classes of similar enough tasks, so that we can use these classes as populations to build time-to-complete statistics?" If we could do this, then we could properly set expectations for the worker, detect the superior and inferior workers, reward the former and better train the latter. Further, we could measure improvements over time in the execution of the tasks due to team or process improvements. All good things. However, this approach needs to be implemented very carefully and not overapplied, or it could lead to more oppression and unintended consequences.
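To make the roll-up idea concrete, here is a minimal sketch, not the actual RTC algorithm, of combining three-point task estimates by Monte Carlo to estimate the chance of finishing an iteration in its time box. The task estimates and the 15-day time box are made up, and the tasks are assumed to be sequential.

```python
# A minimal sketch of rolling up three-point task estimates by Monte Carlo
# (not the actual RTC implementation): sample each task from a triangular
# distribution and estimate the chance the iteration fits in its time box.

import random

# (best, likely, worst) days for each work item, assumed sequential (hypothetical)
tasks = [(1, 2, 5), (2, 3, 8), (0.5, 1, 2), (3, 5, 10)]
time_box = 15.0
trials = 100_000

finished = 0
for _ in range(trials):
    total = sum(random.triangular(best, worst, likely) for best, likely, worst in tasks)
    if total <= time_box:
        finished += 1

print(f"probability of completing the iteration: {finished / trials:.1%}")
```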
I suspect the more creative architecture and design tasks simply do not lend themselves to this sort of analysis. So teams that create new platforms and build new applications will rely more on expert opinion for the estimates and not on predictions based solely on historical data. Not everyone would agree with this. For example, there are estimation tools provided by various vendors that do try to estimate design and architecture effort and duration by using parametric models or classifications. However, there is so much variation in the amount of novelty of the efforts and in team skill and experience that the uncertainties in the estimates are large; they should be applied to projects with great care and to individuals not at all.
On the other hand, most of what development organizations do is more routine, and for those tasks something along the lines of what the customer asked for might be possible. One would need a way of characterizing the different task classes, tracking the times-to-complete, and doing the statistical measures. With this in place, one could explore not only automated task estimates, but also process optimization by what I believe is a novel application of statistical process control.
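Here is a sketch, with hypothetical work items and class labels, of the sort of bookkeeping that would be needed: bucket closed work items into equivalence classes and build time-to-complete statistics per class, which could then seed best, likely, and worst case expectations.

```python
# Sketch (with hypothetical data) of sorting closed work items into
# equivalence classes and building time-to-complete statistics per class.

from collections import defaultdict
from statistics import quantiles

# (class label, days to complete) for recently closed work items
closed_items = [
    ("ui-defect", 1.5), ("ui-defect", 2.0), ("ui-defect", 0.5), ("ui-defect", 6.0),
    ("build-break", 0.5), ("build-break", 1.0), ("build-break", 0.75),
    ("backend-defect", 3.0), ("backend-defect", 8.0), ("backend-defect", 4.5),
    ("backend-defect", 2.5), ("backend-defect", 12.0),
]

by_class = defaultdict(list)
for label, days in closed_items:
    by_class[label].append(days)

for label, times in by_class.items():
    q = quantiles(times, n=10)                # deciles
    best, likely, worst = q[0], q[4], q[8]    # 10th, 50th, 90th percentiles
    print(f"{label:15s} best={best:4.1f}  likely={likely:4.1f}  worst={worst:4.1f} days")
```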
In summary, I believe we need to pursue task analytics and estimation, but I have serious misgivings. Automated analytics-based business processes can go seriously wrong. We need to ensure that some judgment and subjectivity is part of the process. The misuse of analytics in the subprime mortgage business is a case in point.
I realize something along the lines I am describing may already be available. Has anyone heard of a tool that supports this method?
Today, April 1, seems like a good day to bring forward an important new idea. In fact, I think this may be the next big thing.
One of the well-understood problems with software development project management is that it is often impossible to completely specify the work breakdown with certainty. The longer and more innovative the project, the more uncertain the work breakdown items. This is addressed in iterative, agile planning by identifying the summary work items and then adding detail as the project evolves. Another source of uncertainty is the dependency between the summary items. This uncertainty in turn makes critical path analysis for such programs problematic. In fact, there is a whole ensemble of project critical paths, each with some likelihood. For the physics literate, this ensemble of paths is much like Feynman path integrals in quantum theory. The math is pretty hairy (see this elementary description). Fortunately, as Feynman also pointed out, one can simulate quantum mechanics with quantum computers. I am no expert in quantum computing, but even so I have a proposal: Quantum Informed Projects (QuIPs). The idea is to represent work items as QItems using qubits from quantum computing. Then we can represent the project as a set of entangled QItems and use a suitably large quantum computer to calculate the wave function for the critical path.
My understanding is that we do not yet have large enough quantum computers to make this practical. However, the same is true for implementing other useful quantum algorithms (see this example). So we can start by building the algorithms. There is no time like the present (not accounting for the quantum uncertainty of measuring time). So on this special day, let's turn our attention to QuIPs.
First, I am pleased that many saw the humor in the April Fools' posting. That said, I wonder if there will ever be quantum project management. Also, I fear this blog lacks humor. I will do what I can, but there is only so much that can be done to spice up the topic of analytics for software and system organizations. So, back to the serious stuff.
But first a joke that I believe that dates back to vaudeville: Onstage, there is a streetlight. Under the streetlight, there is a man crawling around on hands and knees. A policeman walks up and asks what he is doing. The man says he is looking for his keys. The policeman asks if he is sure he lost them here. The man answers, "No, in fact I lost them down the street." The policeman asks why is he looking under the light. The man answers, "The light is brighter here."
OK, not so funny. So what's the point? A while back, I was discussing a client's measurement program with a colleague (who will remain nameless and I hope is reading the blog). I pointed out it would not serve any purpose. My colleague answered, "Well, at least they are measuring something." I retorted, "First, you need to figure out what you need to measure, then figure out how to do the analysis and get the data." We left it at that. More generally, software and system organizations often measure what is easy, not what they need. They look where the light is brightest. We still have the question of how to specify the needed measures, analytics, and data collection program.
In an earlier entry, I proposed some measurement principles. While these principles are sound for assessing a measurement and analytics program, they do not provide operational guidance for defining the set of measures, associated analytics, and data. What is also needed is the analytics version of a requirements analysis. Last Friday two colleagues (named Clay Williams and Peri Tarr, who I believe do read the blog) introduced me to the Goal Question Metric (GQM) method. This method has been extended in various ways, such as GQM+Strategies.
I have seen the method applied. It looks much like functional decomposition, and so it is a requirements analysis technique for analytics solutions. I think it should be extended to include identification of the data sources. So we would have GQMAD (not kidding), my spin on the main idea:
For my waterfallphobic friends, I share the concern. Building an analytics solution this way should be more iterative than is described above. Probably something like the Unified Process can be applied using GQMAD as a good requirements practice.
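Purely as an illustration, and reading my GQMAD acronym as Goal, Question, Measure, Analytics, Data sources, a decomposition might be captured in something as simple as the structure below; the goal and entries are hypothetical, not a prescription.

```python
# Purely illustrative GQMAD decomposition for one hypothetical goal:
# Goal -> Questions -> Measures -> Analytics -> Data sources.

gqmad = {
    "goal": "Reduce the time to complete change requests by 15%",
    "questions": [
        {
            "question": "What is the monthly trend of time to complete?",
            "measures": ["time to complete 80% of requests closed each month"],
            "analytics": ["monthly trend line", "significance test on the trend"],
            "data": ["change request open dates", "change request close dates"],
        },
        {
            "question": "Are the adopted practices actually in use?",
            "measures": ["build time", "regression test time", "% code unit-tested"],
            "analytics": ["trend versus practice adoption milestones"],
            "data": ["build system logs", "test automation reports"],
        },
    ],
}

for q in gqmad["questions"]:
    print(q["question"])
    for m in q["measures"]:
        print("  measure:", m)
```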
Anyone out there with GQM experience they would like to share?
This blog experiment seems to be working. The entries are getting around 100 visits and growing - good enough to keep at it. I have found that writing the entries has given me the opportunity to clarify and express my thoughts. This entry is a case in point.
We are deploying a BAO solution for the level 3 support organizations in our IBM India Software Labs. That deployment provides a case study in how to integrate two concepts I introduced in earlier blogs. This entry is longer than the others. I hope you find it worth the wait and effort to read.
In those previous entries, I discussed two frameworks for reasoning
These frameworks address different aspects of the problem of using measures to achieve business goals by measuring the right things and taking actions to respond to the measurements. In fact, these frameworks fit together hand in glove.
Recall that level 3 support teams provide fixes to defects found in delivered code. Each of the teams deals with an ongoing series of change requests (aka APARs, PMRs). The organizational goal is to reduce the time and cost to complete these requests. To achieve the goal, they are adopting some Rational-supported practices and supporting tools. So the questions that need to be answered are:
1. What is the time trend of the time to complete the change requests?
2. What is the time trend of the cost to complete the change requests?
3. In each case, how would I know that some improvement action resulted in a significant improvement in the trend?
Now comes the hard part: determining the measures that answer the questions. The change requests arrive somewhat unpredictably. Each goes through the fix and release process and presumably gets released in a patch or point release. So at any given time there is a population of currently open and recently closed requests. The measures that answer the questions are a time trend of some statistic on some population of change requests.
Each of the change requests requires a different amount of time and effort to complete. So to measure if the outcome is being achieved, one must reason statistically: defining populations of requests, building the statistical distribution of, say, time to complete for that population, and defining the outcome statistic for the distribution. So we need to do two things to define the measure:
1. Specify the population of requests for each point on the trend line
2. Specify the statistics on that population
To keep it simple (or at least as simple as possible), let's form the population by choosing the set of change requests closed in some previous period, say the previous month or quarter. To choose a statistic, one needs to look at the data and pick the statistic that best answers the question. Most people assume the mean of the time (or cost) to complete is the best choice. However, that choice is appropriate when the histogram of the time to complete is centered on the mean, as is common in normally distributed data.
One of the advantages of working at IBM is that we have lots of useful data. Inspection of some APAR time-to-complete data from one of our teams in the IBM Software Lab in India shows the distribution is not centered on a mean, and so reduction of the mean time to complete is not the best measure of improvement.
We have looked at literally tens of thousands of data points for time to complete of change requests across all of IBM and have found the same distribution. For the statistics savvy, it appears to be a Pareto distribution, but statistical analysis carried out by Sergey Zeltyn of IBM Research's Haifa lab shows that this distribution does not fit any standard distribution well. A possible explanation is that the time required to fix the defects is Pareto distributed, but since the resources available to fix them are limited, the actual time to complete is not pure Pareto. In any case, a practical way to proceed is to choose a simple (non-parametric) measure: the width of the head, i.e. the time it takes to complete 80% of the requests.
So with this analysis in place, the organization decides to specify the goal precisely, such as a 15% reduction in the time and cost to complete 80% of the requests closed each month. So the outcome measures are the time and cost to complete 80% of the requests closed each month.
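As an illustration, here is a minimal sketch, with made-up durations, of computing that non-parametric statistic: the time within which 80% of the month's closed requests were completed.

```python
# Sketch of the chosen non-parametric outcome statistic: for the requests
# closed in a given month, the time within which 80% of them were completed.
# Durations below are hypothetical (in days).

import math

def head_width(durations, fraction=0.8):
    """Time to complete `fraction` of the closed requests (the 80th percentile)."""
    ordered = sorted(durations)
    index = math.ceil(fraction * len(ordered)) - 1
    return ordered[index]

closed_this_month = [1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 9, 12, 15, 21, 30, 45, 60, 90, 120, 200]
print("80% of this month's requests closed within", head_width(closed_this_month), "days")
```

Note how the long tail (the 120- and 200-day requests) would drag a mean around, while the head-width statistic stays anchored to what happens to most requests.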
Having chosen these measures, we are ready to identify the data sources and instrument the measures. So far so good. But wait, we still need to answer question 3.
As I mentioned, in order to improve the outcome measures and achieve the goals, the lab teams have agreed to adopt appropriate Rational practices and tools to automate certain processes. The practices were selected using the Rational MCIF Value Traceability Trees (a development causal analysis method). Adopting and maturing the practices and their automations are the controls. Some control examples are automating the regression test and build process, and the adoption of a stricter unit test discipline to reduce time lost in broken builds. There are control mechanisms with associated control measures such as time-to-build, regression test time-to-complete, percent of code unit-tested, and a self-assessment by the team of their adoption of testing and build practices.
To answer question 3, we need statistical analytics to determine if the changes in the control measures have had a significant impact on the outcome measures. Our Research staff has settled on those analytics, but I will discuss that in a later entry. This entry is already too long.
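The specific analytics our Research staff settled on will have to wait for that later entry. Purely as an illustration of the kind of analysis question 3 calls for, here is one simple, generic approach with made-up data: a permutation test on the 80% close-time statistic, which asks how likely the observed improvement would be by chance alone.

```python
# One simple, generic way (not necessarily the analytics our Research team
# settled on) to ask question 3: a permutation test on the chosen outcome
# statistic, comparing request durations before and after a control change.
# Data below are hypothetical (days to close).

import math
import random

def head_width(durations, fraction=0.8):
    ordered = sorted(durations)
    return ordered[math.ceil(fraction * len(ordered)) - 1]

before = [2, 3, 3, 5, 8, 9, 14, 21, 40, 65, 90, 150]
after  = [1, 2, 2, 3, 4, 6, 9, 12, 20, 35, 55, 80]

observed = head_width(before) - head_width(after)   # improvement we saw

pooled, n_before = before + after, len(before)
count = 0
trials = 20_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = head_width(pooled[:n_before]) - head_width(pooled[n_before:])
    if diff >= observed:
        count += 1

print(f"observed improvement in 80% close time: {observed} days")
print(f"p-value (chance of seeing this by luck): {count / trials:.3f}")
```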
This case study is both reasonably straightforward and far from trivial. It does show as promised that GQM(AD) and Outcome and Controls work together. I leave you all with a thought problem. How would you apply the pattern to teams developing new features to existing applications?
First, some personal disclosure: In the late 1980’s, I worked for a while at Shell Research, developing seismic modeling and data imaging algorithms. (See this link.) While there I received training on oil exploration. I remain awed by the passion, expertise, daring, and discipline of the engineers, scientists, technicians, and skilled laborers who take responsibility for providing the hydrocarbons we completely rely on.
Oil exploration is remarkably costly and risky. Even then in the late 1980’s, it was not uncommon to spend $1B on an exploration well, hoping to find oil based on the seismic data only to find it dry. At Shell, I was on a team that developed, for the time, a highly compute-intensive algorithm for imaging seismic data captured in complex subsurfaces. They literally bought us a Cray since running the algorithm might make a marginal difference in the success rate of exploration drilling.
Hence, I am not an oil industry basher, far from it. So I have been watching the BP Deepwater Horizon gulf catastrophe with great interest. In this entry, I will share what I have gleaned from various news sources. (I have found the Wall Street Journal coverage very credible.) So here is my net:
Recall, the blowout occurred shortly after capping an exploration well (a well drilled solely to confirm the presence of an oil reservoir). The depth of the well, reported at 18,000 ft, is no big deal. The depth of the water, 5,000 ft, is far from the record of around 8,000 ft. So the well itself was routine for the industry. So what happened?
BP had drilled many of these wells. In fact, ironically, the blowout occurred while BP executives were celebrating their safety record. However, over the last few years BP has become profitable by building a very cost-conscious culture. Such a culture is likely to cut corners, repeatedly taking small risks in business operations. Such behavior may be rational if you believe the total liability is bounded. There is reason to believe BP's liability is 'capped' at $75M. This culture seemed to be at work on the oil platform:
I bet that BP managers routinely made the same decisions for years with no adverse outcomes. They were probably rewarded for this behavior. Such a culture makes such disasters inevitable over time. A great case study of such cultures is found in Diane Vaughan's The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. She studies the NASA culture that led to the decision to launch the shuttle that exploded on launch, killing 'the first teacher in space' while tens of thousands of school children watched on television. The managers overruled the engineers who advised them that the temperature was out of spec for a launch. As she explains, the managers had gotten away with taking similar risks in the past and had decided to bow to political pressures and approve the launch.
So what they were thinking is something like, “These risks are no big deal and the savings matter.”
A key moral is that over time the unlikely becomes inevitable. Further, experience and past data lead to exactly the wrong behavior: they reward the risk taking, not the caution. Many have made exactly the same point about behavior of the financial firms during the financial meltdown.
Internalizing this moral and acting accordingly is key to our industry. Increasingly, we will be building life-critical, economically critical systems that are very complex, will operate over long periods, and whose failure could be catastrophic. There is no turning away from this inevitability. So, we all need to understand that cost savings must be balanced against a clear understanding of the overall risks of failure, their consequences, and the real return on investment in failure avoidance. This of course takes some math. In particular, thinking about averages is not useful. That is the topic of my next entry.
Folks who have heard me present will recognize the following discussion as a variation of an example I have used to explain the importance of variance in software and system estimates. Imagine this time you are a development organization manager given the following artificial opportunity. You can agree to the following deal: have your teams, at your own expense, develop some applications, each meeting a given set of requirements. The client really wants the applications, will accept them if acceptable, and is perfectly willing to be consulted throughout the projects. Here is the catch: if you deliver the projects on time in 12 months, you will receive $1M per application. If you are a day late, you get nothing. You have to decide whether to take the deal.
Let's suppose you take the projects to your estimators and they tell you the estimated time to complete is 11 months and the estimated cost to complete is $750K for each of the projects. So you stand to make an estimated $250K per project. So you staff up and take on three projects, looking forward to your bonus. Was this a good deal?
Those who have read The Flaw of Averages by Sam Savage already know the answer. Those who haven't read the book should. This book nicely captures the sort of statistical reasoning that underlies IBM Rational's approach to business analytics and optimization (found in the RTC agile planner and the ROI calculations in Focal Point). Some key rules:
Back to the example: The time to complete is an uncertain quantity and so must be described by a distribution. Often, the estimate returned by the estimator is the mean of that distribution. The distribution may be pretty wide and so may look like Figure 1 of the attached document. (I have had bad luck trying to embed figures in the blog, so I have put the figures in this attachment.) Note that 40% of the distribution lies beyond 12 months.
Assuming the $750K cost-to-complete estimate is dead on, let's apply some simple high school probability to get the distribution of profit (see Figure 2):
· The chance of succeeding at all three projects and getting $3M in revenue is (0.6)³ = 0.216
· The chance of succeeding at exactly two projects and getting $2M in revenue is 3(0.6)²(0.4) = 0.432
· The chance of succeeding at exactly one project and getting $1M in revenue is 3(0.6)(0.4)² = 0.288
· The chance you will fail at all three projects, yielding no revenue, is (0.4)³ = 0.064
The weighted average of the distribution of revenues is
(0.216)($3M) + (0.432)($2M) + (0.288)($1M) + (0.064)($0) = $1.8M
So the likely outcome of your 3 × $750K = $2.25M expense is a loss of $450K.
But wait, it is worse. The distribution is probably not normal. Programs are more likely to be late than early, and so the distribution is skewed to the right. In this case the average (i.e. the mean) is less than the 50% point. So, as shown in Figure 3, it is possible to have an estimate of 11 months while the likelihood of failure is 50%. The revenue distribution is given in Figure 4. In this case, the weighted average of the distribution of revenues is
(0.125)($3M) + (0.375)($2M) + (0.375)($1M) + (0.125)($0) = $1.5M
In this case the expected loss is $750K.
But wait, it is still worse. The cost to complete is also uncertain. To keep things as simple as possible, let's suppose the cost to complete for each of the projects is described by three values: the best case is $700K, the likely case is $750K, and the worst case is $1M. Computing the expected profit in this case requires using these values as parameters for a triangular distribution (see Figure 5) and then applying Monte Carlo methods to get the distribution that describes the profit. The result is shown in Figure 6. Briefly, in this case:
· The most likely outcome is a loss of $945K
· There is a 90% certainty of losing at least $805K
· There is a 10% chance of losing more than $1.1M
So taking this deal is at best career limiting!
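Here is a minimal Monte Carlo sketch of this last calculation. To keep it self-contained I assume, as in the skewed case above, a 50% chance that each project makes the 12-month date; the triangular cost parameters are the ones given above. It will not reproduce the attachment's figures exactly, since the underlying time-to-complete distribution there is only shown graphically.

```python
# Sketch of the final calculation: Monte Carlo over both the schedule risk
# (assumed here as a 50% chance each project makes the 12-month deadline) and
# the triangular cost-to-complete distribution ($700K / $750K / $1M).

import random
import statistics

TRIALS = 200_000
profits = []
for _ in range(TRIALS):
    profit = 0.0
    for _project in range(3):
        revenue = 1_000_000 if random.random() < 0.5 else 0        # make the date?
        cost = random.triangular(700_000, 1_000_000, 750_000)      # best, worst, likely
        profit += revenue - cost
    profits.append(profit)

profits.sort()
print(f"median outcome:             ${statistics.median(profits):>12,.0f}")
print(f"10th percentile (bad run):  ${profits[int(0.10 * TRIALS)]:>12,.0f}")
print(f"90th percentile (good run): ${profits[int(0.90 * TRIALS)]:>12,.0f}")
print(f"chance of any profit:       {sum(p > 0 for p in profits) / TRIALS:.1%}")
```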
Notice that by ignoring the rules, one is tempted to make a bad deal. Applying each of the rules with more discipline shows just how bad the deal is. The moral of all this is that making business decisions based on calculations of averages can lead to disastrous outcomes.
This moral needs to be taken to heart by our industry. Far too often, managers faced with funding projects or making business commitments insist, "Just give me the number." What they need is a distribution; the number they are given is likely to be an average. Decisions based on that number will likely go sour. No wonder software and system business outcomes rarely delight their stakeholders. The good news is that there are robust, proven techniques to avoid the flaw of averages.
It should not be a surprise that I have been following the BP oil spill with much interest. In fact, as I started typing this entry, I was watching the grilling of the BP CEO, Tony Hayward, by Congress. Rep. Stupak is focusing on BP's risk management.
Some of you have read my earlier posting on my thoughts about the BP decision process that led to the Deepwater Horizon blowout. So far, information uncovered since that posting is remarkably consistent with my earlier suppositions. In this entry I would like to step back a bit and discuss what broader lessons might be learned from the incident. While it is all too easy to fall into BP bashing, I would rather use this moment to reflect more deeply on risk taking and creating value. (BTW, some of you might know that my signature slogan is 'Take risks, add value'.)
In our industry we create value primarily through the efficient delivery of innovation. Delivering innovation, by definition, requires investing in efforts without initial full knowledge of the effort required and the value of the delivery. This incomplete information results in uncertainties in the cost, effort, schedule of the projects and the value of the delivered software and system, i.e. cost, schedule and value risk.
Deciding to drill an oil well also entails investing in an effort with uncertain costs and value. In this case, the structure of the subsurface and the productivity of the well cannot be known with certainty before drilling. As I pointed out in an earlier blog, a good definition of risk is uncertainty in some quantifiable measure that matters to the business. So in both our industry and oil drilling we deliberately assume risk to deliver value.
So, what can we learn from the BP incident? Briefly, one creates value by genuinely managing risk. One creates the semblance of value for a while by ignoring risk.
Assuming risk, investing in uncertain projects, provides the opportunity for creating value. That value is actually realized by investing in activities that reduce the risk. The model that shows this relationship is described in this entry. So, reducing risk has economic value, but reducing risk takes investment. In the end, the quality of risk management is measured with a return on investment calculation. This in turn requires a means to quantify and in fact monetize risk.
I wonder what risk management approach was followed by BP. A recent Wall Street Journal article suggested they used a risk map approach – building a diagram with one axis a score of the 'likelihood of the risk' and the other a score of the 'severity of a failure'. With this method, they would score the risk of a blowout as very low (based on past history) with a very high consequence. So, such a risk needs to be 'mitigated'. (Some actually multiply the scores to get some absolute risk measure.) Their mitigation was the installation of a blowout preventer. They could then confidently report that they had executed their risk management plan. Note these scores are at best notionally quantified and not monetized.
Paraphrasing my good colleague Grady Booch (speaking of certain architecture frameworks), risk maps are the semblance of risk management. As pointed out by Douglas Hubbard in The Failure of Risk Management (and in an earlier rant in this blog), this sort of risk management is not only common, but dangerous: it is a sort of common business failure mode that leads to bad outcomes. Hubbard also points out that useful risk management entails quantification and calculation using probability distributions and Monte Carlo analysis. I would add that since risk management in the end is about business outcomes, risks need to be monetized as well as quantified. I am willing to bet a good bottle of wine that BP did no such thing. Any takers? The common business failure mode here was over-reliance on the preventers, even though there are several studies showing they are far from 'failsafe'.
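As a purely illustrative contrast with the risk map, here is what monetizing a risk might look like, with entirely made-up numbers: express the risk as a loss distribution, compute the expected loss by Monte Carlo, and compare the risk reduction bought by a mitigation against its cost.

```python
# Purely illustrative (made-up numbers): monetizing a risk as a loss
# distribution rather than a likelihood x severity score, and comparing the
# expected loss against the cost of an investment that reduces it.

import random

TRIALS = 500_000

def expected_loss(p_failure, loss_low, loss_mode, loss_high):
    """Mean monetized loss per project, with triangular loss severity."""
    total = 0.0
    for _ in range(TRIALS):
        if random.random() < p_failure:
            total += random.triangular(loss_low, loss_high, loss_mode)
    return total / TRIALS

# Hypothetical: a rare failure (1 in 200) with a loss between $1B and $40B
baseline = expected_loss(1 / 200, 1e9, 5e9, 40e9)
# Hypothetical mitigation costing $50M that cuts the failure odds tenfold
mitigated = expected_loss(1 / 2000, 1e9, 5e9, 40e9)
mitigation_cost = 50e6

print(f"expected loss, as-is:      ${baseline:,.0f}")
print(f"expected loss, mitigated:  ${mitigated:,.0f}")
print(f"risk reduction vs. cost:   ${baseline - mitigated:,.0f} vs ${mitigation_cost:,.0f}")
```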
Further, it appears BP assumed risk by consistently taking the cheaper, if riskier, design and procedure alternative, the one with greater uncertainty in the outcome, even when the cost of an undesired, if unlikely, outcome was possibly catastrophic. The laundry list of such decisions is long; some are outlined in Congressman Waxman's letter to Tony Hayward. The CEOs of Shell and Exxon testified before Congress that their companies would have used different, more costly designs and followed more rigorous procedures. According to the congressional and journalistic reports, this behavior is BP standard operating procedure. So BP assumed risk by drilling wells but did not invest in reducing the risk.
For quite a while they got away with the approach of assuming but not really reducing risk, and appeared to be creating value as reflected in stock value and dividends to the investors. BP management raised the stock price from around $40/share in 2003 to a peak of around $74/share prior to the Deepwater Horizon incident. At this writing the stock is trading at $32/share and the current dividend has been cancelled. Investors might rightly wonder if there is another latent disaster and so discount the apparent future profitability with the likelihood of unknown liabilities. The total loss of stockholder value is over $100B, which is in the ballpark of the eventual liability of BP. So, whatever approach BP used to manage risk failed.
BTW, some may recognize this same pattern in the management of financial firms that participated in the subprime mortgage market. In that case, they 'mitigated risk' by relying on the ratings agencies. Those who actually built monetized models of the risk realized there was a great opportunity to bet against the subprime mortgage lenders and made huge fortunes (see, e.g., The Big Short: Inside the Doomsday Machine by Michael Lewis).
Readers of the blog will notice a recurrent theme in some of the postings. It is essential that we assume and manage risk. To repeat a favorite quote, "One cannot manage what one does not measure." The risk map and scoring methods, while common, are insufficient for the needs of our industry; they neither measure nor really manage risk. We as a discipline need to step up to quantifying, monetizing, and working off risk in order to succeed as drivers of innovation. We need to step up to the mathematical approach found in Hubbard's and Savage's texts (see this posting).
I came to this same realization probably a decade ago. I held off at first because I did not have a deep enough understanding of how to proceed, and I knew I would encounter great skepticism. I tested the waters in 2005 and posted my first paper on the subject in 2006. I indeed received a great deal of skepticism and resistance, but enough acceptance to go forward. I have learned some important lessons from all that. In my next blog, I will share my experiences of bringing more mathematical thinking to risk management for SSD.
I have the honor of giving one of the keynotes at the Conseg2011 conference this February in Bangalore. I have chosen a large, perhaps overly ambitious topic: "The Economics of Quality". Here is my conference proceedings document. My goals in preparing the paper and presentation are to make the case that quality is fundamentally an economics concern and to suggest an overall approach for reasoning about when software has sufficient quality for shipping. Those who have read some of my earlier entries will see how my thinking on the topic differs from those who take a technical debt approach.
Anyhow, the brief proceedings paper is very high level, and there is considerable work to do in filling in the details and validating the approach. This paper really is the beginning of a program that I believe, when carried out, will have great benefit to our industry. So, I would like to hear from anyone who has similar interests and perspectives. There must be some existing relevant research. Perhaps we can find enough like-minded folks to build a community exploring the topic.
I know. It has been a long time since my last posting. Over the last few months, nothing much happened that prompted an entry. I was thinking of writing something sort of philosophical on the nature of estimates, but never got to it. Then there was the tsunami and the associated nuclear reactor failures at the Fukushima power plant. Suddenly, the topic became more urgent. This is relevant to this blog because our domain includes the engineering and economics of safety-critical systems. Presumably the nuclear reactor industry uses state-of-the-art methods. I have been exploring what is going on there and, while I am far from an expert, I have found out some things worth sharing in a blog.