Cummings speaks of testing as if it were an inexhaustible facility. Well, one can always test and test and test, but as Joe points out, there is always a cost: "The problem with that is that no economic venture can afford to test forever. First, the costs are prohibitive, and, second, the product never gets to market! The real question is this: where is the appropriate economic trade-off point?"
Well said. Joe goes on to observe that "If we want to guarantee that no life will ever be lost to a software defect, we are just kidding ourselves. It has happened many times whether we know it or not, and it will continue to happen. We do need to minimize obvious blunders, but with increasing complexity comes increased risk. Trying to eliminate that risk, through either brute force or elegance, is a costly proposition." I'd add that it might even be an intractable problem, if one considers the size of the state space of almost every interesting system of systems. No system of sufficient complexity can be fully tested; no system of sufficient complexity can be proven to be "correct." If we really want to get deep about it, my intuition suggests that this is an instance of Gödel's incompleteness theorem: there are some propositions (e.g., the operational behavior of a complex software-intensive system) that cannot be proven true or false using the rules and axioms (i.e., the mathematical model of computation) of that system itself.
I have discovered a truly marvelous proof for this, but the margin is too narrow to contain it.
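The state-space argument can be made concrete with a back-of-the-envelope sketch. The numbers below (boolean configuration flags, a hypothetical test rate of a million cases per second) are illustrative assumptions, not anything from the post, but they show how quickly exhaustive testing stops being an economic option:

```python
# Illustrative sketch: the combinatorial cost of exhaustive testing.
# Assumption: a system with n independent boolean configuration flags,
# so exhaustive coverage requires 2**n test cases. The test rate of
# one million cases per second is a generous hypothetical.

def exhaustive_cases(num_flags: int) -> int:
    """Number of distinct configurations of independent boolean flags."""
    return 2 ** num_flags

def years_to_test(num_flags: int, tests_per_second: float = 1_000_000) -> float:
    """Wall-clock years to run every configuration at the given rate."""
    seconds = exhaustive_cases(num_flags) / tests_per_second
    return seconds / (60 * 60 * 24 * 365)

for n in (20, 64, 128):
    print(f"{n} flags: {exhaustive_cases(n)} cases, "
          f"~{years_to_test(n):.3g} years to test exhaustively")
```

Twenty flags are testable in about a second; sixty-four already take hundreds of thousands of years. And real systems have far richer state than a handful of booleans, which is exactly why the trade-off Joe describes is unavoidable.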