Focus on Quality - a DevOps scenario for Middleware
Roger LeBlanc
This blog post has been a collaborative effort with Marianne Hollier, Cindy VanEpps, Roger LeBlanc and Anita Rass Wan.
Ever jump into a TV series in the middle of a season? Welcome to Episode 4, Season 1 of DevOps Scenarios for Middleware. For an overview of this riveting series, see the earlier episodes.
Fingers wagging, eyes squinting – “Why didn’t you find this problem during testing?!” How many times have you been asked that, only to try to defend yourself with things like “We can’t test everything!” and “It’s a corner case”? Everyone knows you can’t improve quality by testing alone, so what can you do when your testing team is taking the blame for poor quality in production?
Consider our test lead, Tammy, hell-bent on changing how her test team is victimized by constant “blamestorming”. She gathers the troops, including representatives from every role in the organization who should feel ownership of product quality. After an inspiring talk about how quality is everyone’s job, and the tried-and-true “fixing bugs early in the cycle saves money”, she asks what others will do to take ownership. She begins by capturing and discussing the application test plan, including the new features in scope. She also shares a report she ran showing an analysis of defects from the last release. Clearly, the integration between application components, including middleware components, caused a large number of significant defects that were difficult to remediate. Al, our architect, says he can help identify and rank, by risk, the application’s integration points for the new features. These integration points are notoriously the hardest to test, are never ready to test on time, and always have the most severe defects.
Deb, our developer, offers to collaborate with Tanuj, our test automation specialist, using Al’s feedback on the application integration points, to create the virtual services, or stubs as we call them, so that testing can begin earlier. This allows the testing to “shift left”, i.e., earlier in the software process. Tanuj works with Donna, our database administrator, to extract and mask production data for testing, because he’s learned that using realistic data during testing is key to exercising many paths through the application. Then, using his integrated test management console and drawing on advice from Katia, our dedicated data center operator, about optimal performance settings, Tanuj creates automated integration and performance tests, as well as automated functional regression tests, using a data-driven approach.
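To make the idea concrete, a virtual service can be as simple as an HTTP stub that returns canned responses shaped like the real dependency, populated with masked data. This is a minimal sketch, not any particular tool’s API; the service name, port handling, and the masked record are all hypothetical:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical masked record: the structure mirrors production,
# but the sensitive values have been scrubbed by the DBA.
MASKED_CUSTOMER = {"id": "C-0001", "name": "XXXX XXXX", "email": "masked@example.com"}

class StubHandler(BaseHTTPRequestHandler):
    """Stands in for the real customer service until it is ready to test against."""

    def do_GET(self):
        body = json.dumps(MASKED_CUSTOMER).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

def start_stub(port=0):
    """Start the stub on a background thread; returns (server, actual_port)."""
    server = HTTPServer(("127.0.0.1", port), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

The integration tests point at the stub’s address instead of the real endpoint; when the real service finally becomes available, only the base URL changes, not the tests.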
Meanwhile, our developer Deb leads her team to start thinking about features from the outside-in, so that integrations are tested as soon as a build is made. Early versions of the new features for our release are built into application artifacts used to test the interfaces only. These magic beans are transported to our initial middleware test environment by our automation-machine that does the heavy lifting of deployment throughout our lifecycle. Once there, the automation-machine starts the necessary stubs and triggers the automated integration tests. As development of the details in the features progresses, the continuous integration and testing, facilitated by our automation, helps Tanuj verify the quality of our application earlier than ever before. Continually using the performance tests helps us pinpoint which changes cause a significant performance hit.
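The automation-machine’s job at this stage can be sketched as a small driver that deploys the artifact, starts the stubs, and triggers the integration suite, failing fast if any step breaks. The stage names and commands below are hypothetical placeholders, not a real pipeline definition:

```python
import subprocess

def run_stage(steps):
    """Run each (name, command) step in order; return True only if all succeed."""
    for name, cmd in steps:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"FAILED: {name}")
            return False
        print(f"ok: {name}")
    return True

# Hypothetical integration stage: in a real pipeline these would be
# deployment and test-runner commands, not echo placeholders.
INTEGRATION_STAGE = [
    ("deploy artifact", ["echo", "deploy app artifact to middleware-test"]),
    ("start stubs",     ["echo", "start virtual services"]),
    ("run tests",       ["echo", "run automated integration tests"]),
]
```

Because every commit flows through the same stage, a performance regression surfaces within a build or two of the change that caused it, which is what lets the team pinpoint the culprit.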
All roles are notified of test results and quality trends at each sprint, supporting complete team ownership of quality throughout the process.
The build of our “complete” application artifacts is then sent by our automation-machine to the revered “QA” environment for full system testing. Here, the other dependent systems are finally available, so the stubs are turned off. The automated functional regression tests are triggered, and Tammy’s team of test specialists run their scripted and exploratory tests, poking and prodding according to their test plan. The team celebrates together that they don’t find any showstopper defects. Historically, this is when the proverbial wheels would “fall off”.
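Turning the stubs off should be a matter of configuration, not code change: the same tests run in every environment, and only the endpoint table differs. A minimal sketch, with the environment names, hosts, and the `TEST_ENV` variable all assumed for illustration:

```python
import os

# Hypothetical endpoint table: earlier environments resolve to stubs,
# the QA environment resolves to the real dependent systems.
ENDPOINTS = {
    "middleware-test": {"customer": "http://stub-host:8081/customers"},
    "qa":              {"customer": "http://customer-svc.qa:8080/customers"},
}

def customer_endpoint():
    """Resolve the customer-service URL from the TEST_ENV variable."""
    env = os.environ.get("TEST_ENV", "middleware-test")
    return ENDPOINTS[env]["customer"]
```

With this in place, “the stubs are turned off” means nothing more than promoting the suite with `TEST_ENV=qa`.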
Our product owner, Bob, and our development lead, Marco, see the real-time results of testing, and approve the automated promotion of the release.
Bob sends a congratulating announcement to the team on the smoothest release he has ever been a part of. He adds special thanks to Tammy for pushing for everyone’s ownership of quality.
Stay tuned as our next posts in this series will review recommended and alternative tool-chains to implement this scenario.