The Summer Time Slow Down is upon us. Vacations are rampant. We find ourselves with some extra time between projects. Having it is a luxury. So ... what do we do with this extra time that is now sitting in our hands, oozing between our fingers? One suggestion would be to review a couple of projects and begin tightening the loose screws in the process.
Sometimes this can be a daunting task -- a leviathan, daring us to attack it. No problem. Simply deconstruct things into smaller pieces. For each project we review, we can look at the things we did. For instance, did we perform ...
1. Test Management?
2. Manual Test Script Creation?
3. Functional Test Automation?
4. Performance Testing?
Once we have divided things into smaller components, we can break out our magnifying glass and dig into each one. We can make this as formal or as informal as we want. However, we want to make it fun. For instance, we can set up a series of meetings that take over a meeting room during lunch. We can order up some food or make it a brown bag affair. Perhaps we even bring in some of everybody's favorite tunes. The idea is to create a series of collaborative sessions that our team wants to participate in. Once we create the desired environment, let the nit-picking begin.
When we begin to dive into the smaller chunks we should treat it like we're newspaper reporters. We should ask the basic "who", "what", "when", "where", "why", and "how" questions. They will help us identify key pieces of information.
WHO - identifies the team members involved (both on the test team and within the extended teams, such as project management).
WHAT - identifies all of the things that influenced the project (test cases, requirements, schedules, tools, etc.).
WHERE - identifies where things took place (e.g., offshore? multiple sites?).
WHY - identifies the baseline of the objective (e.g., What was the mission? Was it to release software sans Sev0/Sev1 defects and have all Sev2 defects addressed via technotes?).
WHEN - identifies the overall schedules (iterations, phases, builds, etc.).
HOW - identifies how the project developed, tasks that occurred, risks that were mitigated, etc.
We can now start identifying critical pieces of each chunk. Let's try it with our Test Management example.
WHO - QA Analysts on the team, QA Manager on the team, Business Analysts from the Requirements Team, Project Manager for overall project
WHAT - Develop test cases, designs, schedules, and strategies
WHERE - Development is both onshore and offshore. Testing is being handled onshore.
WHY - Verify the quality of the application. Don't ship with Sev0 and Sev1 bugs.
WHEN - 6 month project. Multiple iterations, pushing out multiple builds per iteration. Next iteration doesn't occur if Sev0/Sev1 bugs exist.
HOW - Use Word to capture test plan, Excel to capture manual testing, and homegrown defect tracker
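For teams that like to keep these facts machine-readable, the 5W+H answers above can be captured as a simple record. Here's a minimal Python sketch; the field names and example values are just illustrations drawn from the Test Management example, not a prescribed schema:

```python
# A minimal sketch of capturing 5W+H facts for one project chunk.
# Field names and example values are illustrative, not a fixed schema.
from dataclasses import dataclass


@dataclass
class ChunkReview:
    chunk: str   # e.g. "Test Management"
    who: list    # team members involved
    what: list   # artifacts and influences
    where: str   # locations
    why: str     # mission / objective
    when: str    # schedule shape
    how: list    # tools and tasks


test_mgmt = ChunkReview(
    chunk="Test Management",
    who=["QA Analysts", "QA Manager", "Business Analysts", "Project Manager"],
    what=["test cases", "designs", "schedules", "strategies"],
    where="Development onshore and offshore; testing onshore",
    why="Verify quality; don't ship with Sev0/Sev1 bugs",
    when="6-month project, multiple iterations, multiple builds per iteration",
    how=["Word test plan", "Excel manual tests", "homegrown defect tracker"],
)

print(test_mgmt.chunk, "-", len(test_mgmt.who), "roles involved")
```

Once each chunk is recorded this way, the facts from several projects can be laid side by side, which makes the correlation step below easier.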
We have just collected facts about our test management. If we were to then identify problems that we encountered in our projects, we could apply them to our facts and begin to see correlations. For instance, some project issues may have been:
** Application didn't meet customer expectations
** Released software late
** Defect tracking wasn't managed well
OK!! So ... what if we look at some simple correlations:
Application didn't meet customer expectations -> Didn't trace test cases to requirements
Released software late -> Test schedule got pushed
Released software late -> Didn't use test automation tools
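These issue-to-cause pairs can also be tabulated so that issues with multiple suspected causes stand out. A hypothetical Python sketch, reusing the correlations above:

```python
# Group suspected root causes ("loose screws") under each project issue.
# The pairs mirror the correlations listed above.
from collections import defaultdict

pairs = [
    ("Application didn't meet customer expectations",
     "Didn't trace test cases to requirements"),
    ("Released software late", "Test schedule got pushed"),
    ("Released software late", "Didn't use test automation tools"),
]

issues = defaultdict(list)
for issue, cause in pairs:
    issues[issue].append(cause)

# Issues backed by several causes are likely the first screws to tighten.
for issue, causes in issues.items():
    print(f"{issue}: {len(causes)} loose screw(s)")
```

Run over a handful of past projects instead of one, the same tally starts to show which causes recur, which is exactly the correlation hunting described here.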
We have just identified 3 loose screws. To tighten them, we need to look at the different answers that could apply. Could we invest in test automation tools for the next project? Do we develop a contingency plan for accelerated test schedules (e.g. test all critical and major requirements)?
Ultimately, we are looking at iteratively honing what we do. What can be better?
The Summer Time Slow Down
MadTester