According to Wikipedia, the Japanese word Zen is derived from the Chinese word Chán (禪). It has nothing in common with technology, software quality, or testing techniques. However, several years ago the phrase "The Zen of ..." became a bit fashionable as a way to highlight things that, despite being obvious, are not taken into account.
The first time I saw "The Zen of ..." related to software development was on a Visual Basic development web page. The author wrote a list of obvious things that no one could claim were untrue. Together they made up a nice list of tips under the name "The Zen of Visual Basic".
Another example is the Easter egg in the Python language interpreter: typing the command "import this" shows "The Zen of Python".
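You can see it for yourself in any standard Python interpreter:

```python
# In a Python interpreter, importing the "this" module prints Tim Peters'
# "The Zen of Python" aphorisms to stdout.
import this
```

The first line it prints is "The Zen of Python, by Tim Peters", followed by aphorisms such as "Beautiful is better than ugly."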
I am convinced that there are many obvious things that are important to remember in SQA but are not always taken into account. That is why I decided to write this blog entry.
This does not mean I sat in meditation for hours among hundreds of candles, like in the Kung Fu series, before writing this list. The Zen of SQA here is only a compilation of several obvious things that, in my experience, I have had to highlight and repeat.
The Zen of SQA
- SQA is not Testing.
Without a doubt, this is one of the most obvious points I have had to stress. I'm not talking about a semantic error; I have heard things like "We do SQA without documentation or specifications" many times.
We can find several definitions of Software Quality Assurance, such as the ISO 1994 one, which sound nice but are difficult to understand.
Either way, it is easy to see that testing is just one activity within SQA: its goal is to find errors, whereas SQA as a whole ensures that an object meets its specifications.
Below is a "book example" definition of SQA:
Software quality assurance (SQA) consists of a means of monitoring the software engineering processes and methods used to ensure quality. It does this by means of audits of the quality management system under which the software system is created. These audits are backed by one or more standards, usually ISO 9000.
- What is not tested, fails.
I'm tired of seeing that what is not tested fails. In a non-trivial system, it is relatively easy to make mistakes that generate faults. Too many factors can generate a fault: a lack of clear specifications and human errors are common, and a high frequency of updates increases the possibility of a failure. It is essential to have a spider web of test cases, compact enough to catch as many bugs as possible.
- There is no half-QA.
Unlike software programming, a test object cannot be "half-SQA verified". SQA and programming are development tasks of different natures. A developer can deliver an incomplete part of the software and say "it's half programmed", but if there is one test case left to execute, you cannot assure half of the quality. Once the scope of testing is defined, it must be completed. Otherwise, the quality of the software is not being assured.
- Testing is not just test execution.
The days of writing test cases in a spreadsheet, then using that same spreadsheet during test execution, and even as a test report, are definitely over. The test management software that exists today allows us to collect information from every part of the testing process. Depending on the metrics we define, testing will provide much more information than just whether a test case has passed. It is very important to have the right process defined so that the metrics are fed naturally and economically.
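As a hypothetical illustration (the record format and field names below are invented for this sketch, not taken from any particular tool), a few lines of Python show how structured execution records can yield more than a raw pass/fail count:

```python
from collections import Counter

# Hypothetical execution records, as a test management tool might export them.
results = [
    {"case": "TC-01", "module": "login",   "status": "passed", "cycle": 1},
    {"case": "TC-02", "module": "login",   "status": "failed", "cycle": 1},
    {"case": "TC-03", "module": "billing", "status": "passed", "cycle": 1},
    {"case": "TC-02", "module": "login",   "status": "passed", "cycle": 2},
]

def pass_rate(records):
    """Fraction of executions that passed."""
    statuses = Counter(r["status"] for r in records)
    return statuses["passed"] / len(records)

def failures_by_module(records):
    """Where the failures concentrate: a simple defect-density signal."""
    return Counter(r["module"] for r in records if r["status"] == "failed")

print(pass_rate(results))           # 0.75
print(failures_by_module(results))  # Counter({'login': 1})
```

The same records answer several questions at once: the overall pass rate, which module concentrates the failures, and (via the cycle field) whether a failing case was later re-executed successfully. None of that is visible in a spreadsheet cell that just says "passed".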
- Test Analysis weighs more than Test Execution.
Analysis is, undoubtedly, one of the most important activities of the quality process. If you make an error during test execution, it will probably not be too serious. If you report an issue that was not really an issue, the development team will let you know soon. Failing to report a real bug is worse, but it is always possible to find that failure in future test executions.
Unfortunately, mistakes made during test analysis or test design are much more difficult to detect, and they often recur over and over again across several test cycles.
That is why it is very important to have a correctly sized analysis area in our SQA department.
- Quality of Quality counts.
However small a quality control department may be, it is important to have a minimum of controls over the quality process itself. Humans make mistakes, and members of an SQA department usually tend to be humans. You can make quality assurance controls as thorough as you like; the project budget is a good limit for this area.
- Not everyone can be an analyst.
The SQA Analyst is a specific profile; specific knowledge and certain skills are required. Incredible as it may seem, I have had to explain this several times. In particular, I think it's a good idea to have some SQA processes defined before hiring an analyst. The best analysts are often specialized, and it is good to have specialists in the methodologies and QA techniques that we use.
- Not everyone can be a tester.
Testers, like analysts, have a specific profile too. Throughout my career as a test manager, I have often found that people thought it was not necessary to hire testers. They thought we could use the same SQA analysts to execute manual testing, or even the developers who were programming the application to be tested.
An analyst undoubtedly knows the tester's tasks, but this does not make him a good tester. Analysts and testers have different profiles and perform different tasks. A developer should run their unit tests, but this activity does not have much in common with a tester's activities.
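To illustrate the difference in scope, here is a minimal unit test sketch (the apply_discount function is a made-up example, not from any real project): the developer checks that one function honors its own contract, which is far narrower than the end-to-end scenarios a tester exercises.

```python
import unittest

def apply_discount(price, pct):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return price * (1 - pct / 100)

class ApplyDiscountTest(unittest.TestCase):
    """The developer's unit tests: narrow checks of one function's contract."""

    def test_half_off(self):
        self.assertEqual(apply_discount(80.0, 50), 40.0)

    def test_rejects_out_of_range_pct(self):
        with self.assertRaises(ValueError):
            apply_discount(80.0, 120)

# Run the suite programmatically; zero failures is the developer's green light.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A tester, by contrast, would exercise the discount through the user interface, with boundary values, across browsers, against the specification, none of which this suite touches.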
- Testers and developers are on the same side.
Some people think that testers and developers should be on opposite sides, and even that testers must play the role of "police", pointing out who did the wrong things.
At first, I thought this idea was shared only by people not related to SQA, but to my surprise, I found a company dedicated to SQA outsourcing whose slogan is "Your devil's advocate".
To think that developers and testers do not have the same objectives is stupid, and trying to create enmity between them is much worse.
The errors committed by a developer that cause failures in a system are not a symptom of incompetence. Everyone makes mistakes. When a developer introduces a failure in a system, he usually does not have the tools to find it; his main objective is to develop software, that is his specialty, that's what he does better than anyone. The aim of testers, on the other hand, is to find bugs and help developers fix them by providing accurate information about detected issues. Testers and developers must work together in order to create competitive software.
I'm sure there are many more tips like these, but these are the ones that come to my mind right now.
I'm not including references because this post is based on my personal experience and knowledge. Even the photograph was taken by me.
I hope you enjoyed reading this.