Comments (6)

1 santosg commented

I think this is due to different views and perspectives on what "risk" means here (the "clash of cultures" you refer to). To a great extent this is due to the ambiguity of the term itself, and how it is used.

On one hand, risk is considered simply and loosely as "things that may go wrong unless something is done to prevent them". In spite of this simple-but-loose characterization, this perspective has a strong natural pull for anyone managing anything (whether oneself, your children, or your software projects). I know that driving without my seat belt fastened is risky. I know that this project is at risk because its success is jeopardized by not being properly staffed.
Other, more sophisticated perspectives seek an operational and measurable notion: risk as the measure of our ignorance. This is the approach that Williams in the 1930s, and Markowitz in the 1950s, applied to economics and the financial markets (in itself a synthesis of probability theory from Pascal, evolved through Bayes, Gauss, Bernoulli, and von Neumann). It brought in the factor of "risk" (in addition to the traditional "return" factor), and became mainstream in investment portfolio management after the 1973 crash. This approach to risk was rooted in the minimization of covariances among the returns of individual securities in a portfolio (or in relation to the market as a whole): an objective measurement of risk that could be used quantitatively in decision making. Variance became a useful measure of the uncertainty/volatility of return (risk), because it measures the frequency with which the expected fails to happen. The most efficient portfolios are those that combine the best holdings with the least price variability. This perspective on risk was later exported to project management.
A variation of this second class of approach goes beyond uncertainty (variance) alone and couples it with consequence: i.e., a volatile portfolio is not risky if its returns have little probability of ending up below a given benchmark/goal.
Although this second perspective is more robust and attractive, it seems to me that it has some open questions that make the first, simpler perspective still attractive for many. There is a distinction between “decision making under risk” and “decision making under uncertainty” that we still gloss over. The former assumes knowledge of the distribution parameters, while the latter doesn’t (which is more the case here). Financial risk management tends to equate the two (to deleterious consequences). The distinction goes back to the 1920s (“Knightian risk” and “Knightian uncertainty”), and was also used later by Keynes. In Knight's words (1921): "... Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk... measurable uncertainty, or risk proper, is so far different from an unmeasurable one that it is not in effect an uncertainty at all". Keynes (also back in 1921) questioned the existence of the objective probability of a future event: "Our ignorance denies us the certainty of knowing what that probability is".
I wonder if most of the decisions involving software relate more to the latter than to the former. Is the software world represented by unbounded, unknown distributions?
(I realize now that I ended up digressing from your original point, though it's still an attempt to rationalize, if not justify or defend, the appeal of the approach in the talk you describe.)

2 mcantor@us.ibm.com commented

Thanks, Santosg. I really did want to start a dialog. Your defense of some of the common thinking is challenging and has forced me to think further.

First, we agree the word 'risk' is used lots of ways in natural language, and further, many like the looser, more ambiguous use of the word. I have no interest in discussing the correct use of the word, but rather the more useful.
Coupling uncertainty with consequence is entailed by the definition in the Mun reference I give above. He describes risk not as uncertainty (as measured by variance in a distribution) but as uncertainty in something that matters (say, meeting a schedule), so we have no apparent disagreement there. There is no need to bring in the complications of equity portfolio management here.
When you say my project is at risk due to lack of staff, I come back to my main point: risk of what? Missing the deadline? Not meeting requirements? To say a project is simply 'at risk' is not useful. Measuring the kind and extent of risk that can be mitigated by staffing is useful.
Of course, as you say, this would not be useful if the software world were rife with 'unbounded, unknown distributions'. These days, unknown is not a problem if you can get good enough bounded approximations to the 'real distributions' to drive good decisions. Our experiments along those lines have been very positive. I again refer you to Douglas Hubbard's books.
A final point: I may be an outlier, but I manage lots of things (including the occasional project) and I have for years been troubled by the looser concept of risk. So the cultural pull is not felt by 'everyone'. I do agree that many, if not most, are subject to this cultural pull. In other industries, they have gotten over it. I do not believe our industry is so different that we should not also get over it.
Again, thank you for the opportunity to pontificate on a few points.
Finally, one last point. You say

3 mcantor@us.ibm.com commented

One more thought. With the distribution of, say, the time-to-complete (taken as a random variable) and a fixed delivery date, one can compute the area of the distribution that lies past the deadline. That area is the likelihood of missing the date. One could then define the schedule risk as this likelihood: if 80% of the estimate distribution is greater than the delivery date, the project is 80% likely to be late.

This approach strikes me as perhaps the most useful definition of risk. In fact, it is supported in the Agile Planning Component of Rational Team Concert.
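A minimal Monte Carlo sketch of this tail-area computation. The lognormal shape, its parameters, and the deadline are all assumed for illustration; this is not the Rational Team Concert implementation:

```python
import random

random.seed(42)

def schedule_risk(sample_completion_days, deadline_days, trials=100_000):
    """Estimate P(time-to-complete > deadline) by Monte Carlo sampling:
    the fraction of simulated completions that land past the delivery date."""
    late = sum(sample_completion_days() > deadline_days for _ in range(trials))
    return late / trials

# Hypothetical project: time-to-complete modeled as lognormal with a
# median of ~55 days (assumed parameters, for illustration only)
risk = schedule_risk(lambda: random.lognormvariate(4.0, 0.3), deadline_days=60)
print(f"probability of missing the 60-day deadline: {risk:.0%}")
```

With a closed-form distribution the same number falls out of the CDF directly; the sampling version generalizes to estimates built from several uncertain inputs.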

4 santosg commented

Agreed, Murray (I'm one of those converted in favor of a stricter notion of risk :).
Saying that "this project has an 80% probability of being late" is a very straightforward and useful way of assessing its risk. Although this is based on underlying computations of variance, it is a much more compelling and intuitive way of operationalizing risk than saying "this project has x variability in time-to-complete" (i.e., translating the variance into a probability statement about an event that matters is a more appealing characterization).
The other side of this, though, is the confidence that we can give consumers in the reliability/validity of that probability statement. How confident are we when we tell you that the probability of being late is 80%? If it is based on an approximation to an unknown distribution (a distribution whose parameters are unknown), how "good enough" are those approximations? How "good" will the decisions driven by those approximations be? Experimentally, it would require replicating the exact conditions of a project multiple times to see if the actual outcomes match the probability statement. The problem, it seems to me, is that replicating the exact conditions surrounding a project could be as difficult as replicating the conditions surrounding the financial markets at a particular point in time (this is where the equity portfolio extrapolations can be relevant). This is important, because we would need to be prepared to answer a corporate executive asking us, "How much can I bet on your probability estimation being well calibrated?" (i.e., my decision to do something would need to be based not only on the probability of the project being late, but on the accuracy of that probability). However, I should look at Hubbard's writings, as it looks like there are ways around this.

- Lucinio
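One way to answer the executive's calibration question is to score past forecasts against what actually happened: bucket the forecasts and check whether events predicted at ~75% occurred roughly 75% of the time. A sketch under that assumption, with a hypothetical forecast history:

```python
from collections import defaultdict

def calibration_table(predictions):
    """Group (forecast probability, outcome) pairs into 10%-wide buckets and
    compare the mean forecast in each bucket with the observed frequency.
    A well-calibrated forecaster shows the two roughly agreeing per bucket."""
    buckets = defaultdict(list)
    for p, late in predictions:
        buckets[min(int(p * 10), 9)].append((p, late))
    table = {}
    for b, pairs in sorted(buckets.items()):
        mean_forecast = sum(p for p, _ in pairs) / len(pairs)
        observed = sum(late for _, late in pairs) / len(pairs)
        table[b] = (round(mean_forecast, 2), round(observed, 2), len(pairs))
    return table

# Hypothetical history: (forecast probability of being late, was it late?)
history = [(0.75, 1), (0.75, 1), (0.75, 0), (0.25, 0), (0.25, 0), (0.25, 1)]
print(calibration_table(history))
```

In practice one needs many more than six past projects per bucket for the observed frequencies to be meaningful, which is exactly the replication difficulty raised above.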

5 ClayEW commented

Hello all:

I would like to add three thoughts here.
1. Terminology. This goes to Murray's main point in the post, with which I agree. I think that having precise and meaningful terminology is very important, hopefully without the ensuing terminology wars that often accompany such desires. We need to be precise about uncertainty, outcomes, and risk in order to be helpful to development organizations. Fortunately, the literature is clear here (e.g. see Mun's book on Applied Risk Analysis). An example based on Lucinio's seat belt example may help. I don't think that referring to the risk associated with not wearing a seat belt as "seat belt risk" would be very useful. We are talking about the risk of injury or death (the bad outcome) in the event of an accident. Getting very clear about the outcomes we care about and discussing risk in those terms is essential.
2. Mitigation. Being precise about risk makes mitigation easier. Referring to "staffing risk" doesn't really help someone know how to solve the problem. For example, risk of slipping a schedule due to a project being understaffed has very different mitigation implications than does risk of delivering low quality code due to a skills mismatch on a project. To effectively mitigate risks in development, we need to understand (a) the outcome that we are concerned with, (b) the risks associated with it, and (c) the factors that influence those risks.
3. Measurement. As Lucinio notes, sometimes there are significant gaps in our knowledge where we are truly uncertain, and we may not know the distribution associated with the events of concern. This can seem like a catch-22, but we would be falling into a trap to get hung up on trying to get the "right" distribution when it cannot be known. What we can capture is our understanding at a given point in time, which can be used to provide insight and drive discussions regarding current risks based on the best information we have. Of course, this will evolve as a project progresses, and this is where the real value comes in. Projects that are performing well should work off risk over time. Tracking this progress gives deep insight to the team.

6 santosg commented

Good points, Clay.
Thinking about your "Mitigation" point, and taking it back to the talk at the Stevens Institute conference that caused Murray's dismay... there may be a salvageable aspect in their approach: the concept of breaking down the sources of risk. Assuming the right conceptualization of risk (e.g., "risk of going over budget" or "risk of missing the delivery date" instead of the ambiguous "people risk" or "program risk")... the notion of breaking down the sources of that risk is useful. This may have been their ultimate goal (though perhaps approached in a misguided way).
It would be a legitimate effort to figure out the sources of risk... the sources of variability. This would amount to an analysis of variance, where we could tell people what % of the uncertainty is due to which sources. Ultimately this would translate into the (more consumable)... "your chances of delivering late would decrease from 80% to 40% if you manage to staff these 10 pieces of work with the right skills", or "your chances of running over budget would go from 60% to 30% if you cut in half the cost rate of a third of your staff. That chance could be cut a further 15% if you improve your quality by just 5%".
This could get even more interesting in the multivariate, inter-related case. Like: "Your chance of delivering late is 80%, and the chance of running over budget is 60%. If you decrease the chance of being late by 10%, you would decrease the chance of running over budget by 10%. However, if you decrease the probability of running over budget, you increase the chance of delivering late by 25%".

In any case, this sort of break-down analysis, though not quite causal, would hit home well in terms of aiding decision making and realizing what is affecting risk and how you can control it.
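That kind of what-if statement can be sketched by re-running a Monte Carlo model with one input changed at a time. Everything below (the lognormal shape, the assumed effect of staffing on median effort, the deadline) is a toy model for illustration, not a calibrated one:

```python
import random

random.seed(1)

def p_late(sample_days, deadline, trials=50_000):
    """Fraction of simulated completion times that exceed the deadline."""
    return sum(sample_days() > deadline for _ in range(trials)) / trials

def project_days(understaffed):
    """Toy model: lognormal time-to-complete whose median effort is assumed
    to shrink when the work is staffed with the right skills."""
    mu = 4.1 if understaffed else 3.9  # assumed staffing effect on log-median days
    return random.lognormvariate(mu, 0.25)

baseline = p_late(lambda: project_days(True), deadline=60)
staffed = p_late(lambda: project_days(False), deadline=60)
print(f"chance of being late: {baseline:.0%} understaffed vs {staffed:.0%} staffed")
```

Repeating this for each candidate driver (staffing, cost rate, quality) yields exactly the "80% drops to 40% if you do X" statements above, one input at a time; the multivariate case needs the inputs sampled jointly so their correlations carry through.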
- Lucinio
