OK - I'll admit right up front that I am not prepared. I spent my weekend focused on the New England Patriots. I hope you will give me a pass on my lack of readiness. After all -- I did have to figure out where I was going to watch the Patriots game, what I was going to eat, what kind of beer I was going to drink, and whether I would return to my house to watch the NFC Championship Game. I did take one golden nugget out of the games I watched, though. Statistics!
It always amazes me how many statistics are garnered for any given player, team, sport, etc... A lot of times there are some pretty far-out statistics. The number of catches a receiver makes in the 2nd half of playoff football games in the month of January. The number of carries a running back gets on a rainy day versus a snowy day versus a perfectly sunny day. The number of passes a quarterback makes in the first 3 minutes of football games that fall on an odd-numbered day in the month of October. I'm exaggerating, of course! However, I challenge you to sit through a sporting event on television and pay attention to the statistics. They do get pretty funky. Soooo ... that will be today's topic.
Statistics don't just exist in sports. They are alive and kicking in the software development world. If you have any doubt, just ask a development manager how many lines of code there are for a particular application. Ask a developer how many components there are. Ask yourself how many test cases you have. Don't forget the defects. There are trending statistics, density statistics, static statistics, etc... Hey -- it's all numbers, right? Our job is to make sense of them.
Let me step outside of our statistics box for a moment and look at software tool evaluations. I'm sure you know what they are and have probably been involved with an evaluation or two. When we evaluate a software tool, we're primarily looking for the product that best meets our needs. For instance, in the software test automation world, we want to find the tool that will best fit our testing needs. To find out what best meets our needs, we need to first establish what is of value to us. Is it ease of use? Is it a powerful scripting engine? Is it datapooling? Once we have our list of values established, it is a matter of assessing any given tool's features to see if they provide the value we are looking for. What if we didn't set the list of values we were looking for? Further, what if we attended a "feature dump" demo, where features were not correlated to the values they provide? We would be left simply looking at what a tool does. I see the same truth about statistics. If we don't know what values we need from our statistics, they're just numbers.
So I ask you this ... do you know what your statistics mean to you? Can you make a "go/no go" decision based on them? If you answer NO to either or both of these questions, you may want to peel back the onion and evaluate what your value list is. Are you looking for requirements coverage? Then your statistics will be tied to the number of test cases that validate requirements. Are you looking for code coverage? Then you will need to find out whether your current test case suite covers the percentage of code you're looking to cover. How about defects? Does a density report necessarily tell you that one component is weaker than the others? Pretty much so! However, what about the flip side of that coin? Are you just testing that one component more than the others? Ahhh ... that starts speaking to test case metrics. It's all about the value, whether it is the features of a tool or the statistics in a development world.
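To make that concrete, here's a minimal sketch (illustrative Python with invented data, not anything from a real project) of how you might tie two of those statistics back to a value: requirements coverage from a test-case-to-requirement mapping, and defects normalized by how much each component was actually tested -- the "flip side of the coin" question.

```python
# Illustrative sketch: tying statistics back to the values they serve.
# All requirement IDs, components, and numbers are invented for demonstration.

requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Map each test case to the requirements it validates.
test_cases = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-2"},
    "TC-03": {"REQ-3"},
}

covered = set().union(*test_cases.values())
requirements_coverage = len(covered & requirements) / len(requirements)
print(f"Requirements coverage: {requirements_coverage:.0%}")  # 75%

# Raw defect density can mislead if one component simply gets
# tested more, so normalize by tests executed per component.
defects_found = {"billing": 12, "reporting": 3}
tests_executed = {"billing": 400, "reporting": 25}

for component in defects_found:
    rate = defects_found[component] / tests_executed[component]
    print(f"{component}: {rate:.2f} defects per test executed")
```

Notice that with these made-up numbers, "billing" has four times the raw defect count but a far lower defect-per-test rate than "reporting" -- the density report alone would have pointed you at the wrong component.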
Before I sign off, I would like to put the spotlight on test automation. I often hear customers say they are seeking 100% test automation. That's fantastic. It is also idealistic. I believe the majority of people would agree with me that you don't automate 100% of your test cases. Why? Sometimes you can't (due to technical issues like 3rd party controls). Sometimes you don't (the cost-benefit analysis would show it isn't worth spending the time and money automating certain test cases). Could you shoot for 40%? 60%? 80%? Sure ... but does that statistic add value? In other words, would it be of more value to have a cohesive chunk of your testing automated than a disparate chunk? I would love to have an automated smoke test (e.g. a small percentage of a test suite). I am not a fan of disparate test automation, even if it covers a decent percentage of my test suite. I like consistency. I like to know that a certain chunk of my testing is done versus a smattering here and there.
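The "sometimes you don't" cost-benefit call above can be sketched as simple break-even arithmetic. This is an illustrative model with invented numbers, not a formula from any standard:

```python
# Illustrative break-even sketch for automating a single test case.
# All hour figures below are invented assumptions for demonstration.

cost_to_automate = 8.0      # one-time hours to script and stabilize the test
maintenance_per_run = 0.1   # hours of upkeep per automated run
manual_cost_per_run = 0.5   # hours to execute the test by hand

# Automation pays off once cumulative manual effort exceeds the
# automation investment plus cumulative maintenance effort.
runs = 0
while runs * manual_cost_per_run <= cost_to_automate + runs * maintenance_per_run:
    runs += 1
print(f"Break-even after {runs} runs")  # 21 runs
```

With these assumed numbers, a test you'll run fewer than ~21 times never earns back its automation cost -- which is exactly why a frequently run smoke test is such an attractive chunk to automate.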
Anyhow ... as Led Zeppelin likes to say -- "Ramble On"
Stats and Statistics
MadTester -- Tags: test management, qa, statistics, testing, automation, defects