There were several very good questions asked in Orlando. Two have stuck in my mind as especially valuable to DREw; with gratitude to the attendees, I'll paraphrase:
Could Production defects be given more weight than those found in test? And what about the concept that defects found later in the SDLC are more costly to fix?
A technique for reweighting defect severities at different SDLC stages (including Production) would be rather simple and could be automated, even though the long-held maxim that defects found later are more costly to fix is being challenged by more agile SDLCs that embrace change. As I have mentioned in other fora, my goal is to find, have, and share a method for associating defects with reasonable, supported dollar values. These questions are steps along that journey.
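To illustrate how simple such an automated reweighting could be, here is a minimal sketch. The severity weights and stage multipliers are invented placeholders, not measured or recommended values; any real scheme would need to be calibrated by the organization itself.

```python
# Hypothetical severity weights and SDLC-stage multipliers;
# these numbers are illustrative only.
SEVERITY_WEIGHTS = {"critical": 9, "major": 3, "minor": 1}
STAGE_MULTIPLIERS = {
    "requirements": 0.5,
    "design": 0.75,
    "test": 1.0,
    "production": 3.0,  # Production defects weighted more heavily
}

def weighted_defect_score(defects):
    """Sum severity weight x stage multiplier over (severity, stage) pairs."""
    return sum(
        SEVERITY_WEIGHTS[sev] * STAGE_MULTIPLIERS[stage]
        for sev, stage in defects
    )

defects = [
    ("major", "test"),          # 3 * 1.0 = 3.0
    ("minor", "production"),    # 1 * 3.0 = 3.0
    ("critical", "production"), # 9 * 3.0 = 27.0
]
print(weighted_defect_score(defects))  # 33.0
```

The whole technique reduces to one table lookup per defect, which is why it could be bolted onto almost any defect-tracking tool.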
A longer step in that direction is to leverage the idea of iteratively delivering business value. If the business value of a usage scenario were a basis for its delivery prioritization, and if Walker Royce's description (in EXEC02 at RSC2009) of business value as a function of ROI, ROA, and Product Revenue Profile holds, then we are close to formulating a definition of defect value in business dollars, as a share of the deliverable business value. Perfect? Not in any sense, but defined, consistent, and supported by the prior decisions of our business stakeholders themselves.
Join me in my reverie: imagine a day when the business representative gives us a use case, the business leadership assigns a dollar value to the basic flow of that use case and a relative value to each scenario, and the software delivery team produces and tests the highest-priority scenario, finding one defect (of any severity). The benefit of assigning dollar values would lie not in deciding which defects to attack (the emphasis should be on delivering value, not eliminating defects) but in deciding which defects to study for defect prevention in the next iteration or evolution. We could measure, for each evolution of a product, the change (we would hope, "improvement") in delivered versus delayed value, perhaps developing a new metric for software that approximates the meaning (not the method) of Taguchi's loss function. When we add common existing methods for tracking project costs, we could offer true insight into practical questions. Imagine being able to quantify, in dollars, how much a practice, say pair programming, improved delivery of business value relative to its cost compared with a prior method!
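The delivered-versus-delayed measurement above can be sketched in a few lines. The scenario values and delivery statuses below are invented examples; the point is only that, once leadership has priced the scenarios, the per-evolution split falls out of simple arithmetic.

```python
# Hedged sketch: each scenario is (dollar_value, delivered?).
# Values and statuses are hypothetical examples.

def value_split(scenarios):
    """Return (delivered, delayed) dollar totals for one evolution."""
    delivered = sum(v for v, done in scenarios if done)
    delayed = sum(v for v, done in scenarios if not done)
    return delivered, delayed

evolution_1 = [(50_000, True), (20_000, False), (10_000, False)]
evolution_2 = [(50_000, True), (20_000, True), (10_000, False)]

for i, evo in enumerate((evolution_1, evolution_2), start=1):
    delivered, delayed = value_split(evo)
    print(f"evolution {i}: delivered ${delivered}, delayed ${delayed}")
# evolution 1: delivered $50000, delayed $30000
# evolution 2: delivered $70000, delayed $10000
```

Comparing the split across evolutions gives the "change in delivered versus delayed value" directly, and dividing the change by the cost of a practice (say, pair programming) would give the dollars-per-dollar comparison imagined above.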