Matching: software X
I have the honor of giving one of the keynotes at the Conseg2011 conference this February in Bangalore. I have chosen a large, perhaps overly ambitious topic: "The Economics of Quality". Here is my conference proceedings document. My goal in preparing the paper and presentation is to make the case that quality is fundamentally an economics concern and to suggest an overall approach for reasoning about when the software has sufficient quality for shipping. For those who have read some of my earlier entries, you will see how my thinking on the topic differs from those who take a technical debt approach.
Anyhow, the brief proceedings paper is very high level, and considerable work remains to fill in the details and validate the approach. This paper really is the beginning of a program that I believe, when carried out, will have great benefit to our industry. So, I would like to hear from anyone who has similar interests and perspectives. There must be some existing relevant research. Perhaps we can find enough like-minded folks to build a community exploring the topic.
As I mentioned in my first posting, I am still getting the hang of blogging. I guess one use of blogs is to share what is on my mind while staying in the neighborhood of the topic of analytics. So, I have been giving a lot of thought to Toyota's dilemma about how to deal with the reports of dangerous acceleration in their cars. The recent reports of Prius incidents (see this article in the New York Times) confirmed some of my earlier suspicions and hence this blog.
First, I need to come clean: all I know is from news accounts. I have had no contact with Toyota or any IBMers working with Toyota. Further, I need to say that the opinions here, in my view not controversial, are my own and do not reflect any IBM position.
So, what do we know?
Now, say there is a one-in-a-million chance per mile of driving of the latent defects manifesting. They may be impossible to find with standard testing and will inevitably happen every so often to drivers. This is the standard insight that, with large volumes, unlikely events become inevitable. So with Toyota's large sales, they may be victims of their own success.
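To see why large volumes make unlikely events inevitable, here is a minimal sketch. The per-mile probability, annual mileage, and fleet size are all hypothetical numbers of my own, not Toyota's:

```python
p_per_mile = 1e-6          # hypothetical: defect manifests once per million miles
miles_per_driver = 12_000  # assumed annual mileage of one driver
fleet_size = 5_000_000     # assumed number of cars on the road

# Chance one driver sees the defect in a year: small, about 1.2%.
p_driver = 1 - (1 - p_per_mile) ** miles_per_driver

# Chance that *no one* in the whole fleet sees it in a year:
# so small it underflows to zero -- an incident is effectively certain.
p_none = (1 - p_driver) ** fleet_size

print(f"per-driver risk: {p_driver:.2%}")
print(f"chance of zero incidents fleet-wide: {p_none:.2e}")
```

Each individual driver's risk is tiny, yet across millions of cars the probability of seeing no incident at all collapses to zero: the defect will surface somewhere.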
The avionics community has developed a discipline around safety-critical software. There are design and model testing methods to validate that the embedded software is good enough to stake people's lives on the code running correctly. (There is a good article in the latest Communications of the ACM on model checking for avionics.) It seems Toyota and the entire auto industry need to adopt these safety-critical disciplines going forward. The cost of these practices is overshadowed by the costs of the highly publicized incidents, the lawsuits, and other liability.
One of the things that characterizes software or systems development is that the project manager routinely commits to deliver certain functionality on a given date at an agreed-upon level of quality for a given budget. It is the role of the project manager to make good on the commitment. The software and systems organization leadership may count on the commitments being met in order to meet their business commitments, or there may be an explicit contract to deliver on time for a fixed budget. The measure of a good project manager is the ability to make and meet commitments.
In this blog entry, I will discuss the nature of that commitment and how it relates to project analytics. First of all, let's define 'commitment' in this context. Of course, I do not mean the confinement to a mental institution; I mean, as suggested above, the promise to deliver certain content with acceptable quality on or before a certain date.
The first thing to notice is that the future is never certain, and so we are in the realm of probability and random variables, i.e., a quantity described by a probability distribution. Going forward, I will assume the reader is familiar with the concepts of random variables and their associated distributions. Soon, I will devote a blog entry just to that topic.
Meanwhile, the best way to describe the likelihood of meeting a commitment is with a random variable. Consider the distribution of the time it will take to meet the commitment. It might look something like this:
A similar distribution would apply to cost to complete.
Recall that the probability of the commitment being met is then the area under the curve that falls before the target date:
The manager, in making the commitment, is essentially betting (perhaps his or her career) that he or she will meet the commitment. According to this measurement, the odds are about 50-50. The key measurement, then, is the area of the distribution that lies prior to the target date, which in turn relies on the ability to calculate the probability distribution. I will discuss some techniques to do that in a later entry.
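As a sketch of that calculation (the normal distribution and its parameters here are my own illustrative choices; the entry does not fix a distribution family), one can compute the area before the target date directly:

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X <= x) for a normal distribution: the area under the curve up to x."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Hypothetical project: time-to-complete centered on day 112 with a
# spread of 10 days, against a committed target of day 110.
prob_on_time = normal_cdf(110, mu=112, sigma=10)
print(f"probability of meeting the commitment: {prob_on_time:.0%}")  # ~42%
```

The manager's "bet" is simply this number: the fraction of the distribution falling on or before the target date.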
Now consider, for example, "project health". What I believe is meant is the likelihood of meeting the commitment to deliver the project on time.
If it is highly probable the project will ship at the target date, the project is 'green'; otherwise it is 'yellow' or 'red', as in the following figure.
There are three responses to a yellow or red project. One can move the target date, move the distribution, or change the shape of the distribution, again a topic for a later blog.
In my last blog, I laid out a vision of how a project lead and her stakeholders might use predictive analytics to drive to better project outcomes. As I mentioned in the entry, IBM Rational is working on such a tool. A demonstration of this tool is found here: Agile Development Analytics Demo. The video was created by Peri Tarr, the lead architect on the project.
Some of you might notice that the terminology and development process described in the demo are at odds with your understanding of Agile. We do understand that, and currently we are working on making a robust tool that accommodates a wide range of processes, from what some might call 'pure Agile' to the various hybrids we are discovering in the marketplace.
In the next blog, I will explain more on how the tool works.
When I started this experiment in blogging, I wrote that I am not a natural blogger. I am not the affable, chatty web presence who on a daily or weekly basis shares his or her thoughts. I have learned since then what kind of blogger I am. My style is to write little essays that might take weeks to prepare, given the priorities of my day job. I have also found I do enjoy writing the blog, as it gives me a chance to share some issue that is top of mind. So here goes:
A few weeks ago, my good colleague Arthur displayed a chart in one of his PowerPoint decks entitled something like "How to Understand Murray." The chart was an explanation of probability distributions. It was both flattering and a bit of a wakeup call. As Arthur mentioned, and the readers of this blog know, much, if not all, of my writing assumes an understanding of probability and probability distributions (aka probability densities). My experience in discussions with folks from our industry is that most of them have vague memories from some stat class in college and so can generally follow the discussions, but most could use a refresher. I could simply refer readers to a good Wikipedia article, but, instead, let me give a domain-specific example.
Let's go with a topic I wrote about in a previous entry: time to ship. For purposes of explanation, let's take a fictional example. Suppose you are starting a project expected to ship in 110 days. That said, we cannot be 100% certain of being ready to ship on exactly that day, no sooner, no later. In fact, it is very unlikely we will exactly hit that day. Maybe we will be ready the day before, or maybe the day before that. Since being ready on or any day before day 110 is success, we can sum up the probabilities of being ready on any of those days to get the probability we really care about. All that said, the probabilities for each of the days matter because we need them in order to get the sum. The set of probabilities for each of those days is the probability distribution or density that is our topic.
Let's look at a simple example.
In this example of a triangular distribution, there is 0% probability that the product will be ready before day 91, and we are 100% certain we will be ready before day 120. We think each successive day becomes more probable, reaching a peak as we approach day 110 and falling off after that. This graph then shows the probability, day by day, of being ready on exactly that day. You might notice that the peak is less than 0.07 (actually 0.06666...). This makes sense since we are assuming that the project may be complete on any of 30 different days, and so the densities would be in the neighborhood of 1/30 = 0.0333. In our case, some are above and some below.
These distributions are the basis of calculating the likelihood of outcomes. The principle is very simple: the probability of being ready within some range of dates is the sum of the probabilities of being ready on exactly one of those dates, i.e., we add up the density values for those days. As I explained above, if we want to compute the probability of being ready on or before day 110, we would add up all of the densities for days 91 to 110 to get 0.7. Using the same reasoning, the probability of being ready on some day before day 120 is the sum of all the densities, which comes to exactly 1.0, which was one of our going-in assumptions. In fact, the property that the sum of densities for all possible outcomes equals 1 is a defining property of distributions. Those who want to try this out at home could use this spreadsheet. For example, can you find on what day being ready on or before that day is an even bet?
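For those who would rather compute than use the spreadsheet, here is a sketch using the continuous triangular distribution with the same shape (rising from day 90 to a peak at day 110, falling to zero at day 120). Its exact area up to day 110 is 2/3, in line with the roughly 0.7 obtained by summing the daily densities:

```python
a, c, b = 90, 110, 120  # minimum, mode, and maximum days from the example

def triangular_cdf(x):
    """P(ready on or before day x): the area under the triangle up to x."""
    if x <= a:
        return 0.0
    if x <= c:
        return (x - a) ** 2 / ((b - a) * (c - a))
    if x <= b:
        return 1.0 - (b - x) ** 2 / ((b - a) * (b - c))
    return 1.0

print(triangular_cdf(110))  # 2/3: ready by the expected day
print(triangular_cdf(120))  # 1.0: certainty by day 120

# The "even bet": the first day d with P(ready on or before d) >= 0.5
even_bet = next(d for d in range(a, b + 1) if triangular_cdf(d) >= 0.5)
print(even_bet)  # day 108
```

Note the defining property shows up as `triangular_cdf(120) == 1.0`: all the probability mass lies between the minimum and maximum days.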
For most development efforts, the overall state of the program (some would say 'health') is characterized by the shape of the distribution. This shape changes every day. Every action the team takes changes the shape. So, one key goal of development analytics would be to track the shape of the distribution throughout the lifecycle, a daunting task. More on this (probably) in future postings.
In the previous entry, I introduced a probabilistic view of a commitment. The main idea is that when you commit to deliver something in the future, you are making a kind of bet. The odds of winning the bet are the fraction of the distribution of the time-to-deliver that falls before the target date. For example, in the following example, the project manager has a 47% likelihood of winning the bet.
This raises a couple of questions. First, how is the distribution of time-to-complete determined? There are a variety of methods to estimate the time to complete an effort. I am not taking a position on what method to adopt. The important point is that the estimation method should not return a number but a distribution! The major estimation vendors have this capability, even if it is not always surfaced. I will expand on this point in the next blog entry. For now, the key point is that you should be working not with point estimations, but with the distributions.
Second is how the project manager affects the shape and position of the distribution and therefore affects the odds. Some of the techniques are intuitive, some not so much. There are two things one might do: move the distribution relative to the target date, and change the shape of the distribution, typically narrowing it so that more of it lies before the target date.
For the first, one can move the target date out, so that the picture looks like this
This is, of course, intuitive: moving out the date lowers the risk. Another intuitive thing a project manager might do is to descope the project, that is, commit to deliver less functionality. This may have two effects on the distribution. It will move it to the left, as there will be less work to do. Depending on the difficulty of the descoped feature, the descoping may also narrow the distribution. By removing a difficult-to-implement feature, one is more certain of delivery, narrowing the distribution and removing risk, resulting in this diagram:
Now comes the unintuitive part. Suppose the target date and content are not negotiable. What is a project manager to do then? The idea is to take actions that will narrow the distribution in Figure 1 so that it looks like
How is this done? Many project managers, in the name of making progress, choose the easiest functions to implement first: "the low-hanging fruit". However, by doing this, the shape of the curve in Figure 1 is minimally affected. The less intuitive approach, following the principle of the Rational Unified Process, is to work on the most difficult, riskiest requirements first! These are the requirements about which the team has the least information, and so it should tackle them first in order to have time to gain the information needed to succeed. Putting off the riskier requirements and doing the easy stuff first gives the appearance of progress, but by putting off the riskier requirements, one will run out of time to do them and fail to meet the commitment.
All this has to be done while ensuring there is sufficient time to fulfill all the requirements, risky or not. So in the end, one must account for both the time to complete tasks and their uncertainty to meet commitments. Some techniques for doing that will be discussed in the next blog entry.
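A minimal sketch of both levers, assuming a normal time-to-complete (my choice of distribution; the target date, means, and spreads are illustrative numbers, not from any real project). Descoping shifts the mean left; tackling the riskiest requirements first narrows the spread. Both raise the odds when the center of the distribution sits at or before the target date:

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X <= x) for a normal distribution."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

target = 110  # hypothetical committed ship date

baseline = normal_cdf(target, mu=108, sigma=12)  # wide distribution: ~57% odds
shifted  = normal_cdf(target, mu=104, sigma=12)  # descoped: mean moves left, ~69%
narrowed = normal_cdf(target, mu=108, sigma=4)   # risk-first: spread narrows, ~69%

print(f"baseline {baseline:.0%}, descoped {shifted:.0%}, risk-first {narrowed:.0%}")
```

The narrowing lever is the unintuitive one: the expected completion day has not moved at all, yet the odds of winning the bet improve because less of the distribution spills past the target.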
In an earlier blog entry, I mentioned my article Calculating and Improving the ROI of Software and System Programs. I am pleased to announce that it has been published in the September 2011 issue of the Communications of the ACM.