As a tester, the idea of having stable, working code at the end of every iteration makes my heart sing. And, believe it or not, many years ago (ahem, about 20), long before agile software development was all the rage, I worked at a company that did just that. For two weeks, the developers would code like crazy. We would get a stable build, run a set of (manual) regression tests on it, and declare it ready for testers to use. The test team would go off and work on that build for the next two weeks, doing more and more complex testing, while development was coding like crazy on the next "stable" build.
And we were all co-located, so it really worked, pretty much. Of course, we didn't ship milestones, and there were still plenty of defects. And no, we didn't have stakeholders looking at each iteration. We had the time-boxing, but we were missing a key aspect of agile software delivery: ending each time-box with stable, working code that meets a stakeholder's need.
Not much has changed, even with the advent of agile development and all of the goodness that has come with it. In order to do complex tests, independent test organizations are still picking up "milestones" and then proceeding to test them during the next iteration. We are still trying to solve the problem of having testing aligned with development on the exact same code. And we are still compromising on the definition of stable, working code that meets a stakeholder's need.
But agile development, with its definition of "done, done, done," has given rise to a new complaint. Developers feel like testers are intruding on their "need for speed." Testers are finding defects in code after the developers have moved on. At least in my old job, that was expected. Developers did not say things like, "That's so two weeks ago. I'm working on something else now." But if fundamental problems are being found in tests, is the code really "done, done, done"? Seems like something's amiss.
Okay, you say, what about automation? Isn't that a basic agile tenet? To do test-driven development (TDD)? Developers cannot deliver any code without unit tests? If that's good enough to prove that a milestone build is stable and that working code meets a stakeholder need, why are we finding so many defects, using traditional test methods? How are we defining validation to meet a stakeholder need? If the development team is doing demos, what are they showing? Is it a fully integrated demonstration, showing the value of the new feature to the stakeholder, in the context of the entire system? The more complex the system, the more likely that the answer is "no".
What's going on here? Let's peel the onion a little bit. First, we have a development organization that's adopting agile methods, but you might have noticed that I mentioned an "independent" test team. Agile gurus generally recommend embedded testers. The agile process is grounded in the whole team approach (and that contributes greatly to its success). So why would there be an independent test team? Because the application is part of a system that's really too large to contain the testing. Even with all of the best intentions that include comprehensive TDD and unit testing coupled with some level of complex testing (manual or automated), the development team cannot contain the system test: full system integrations, large-scale performance, heavy load, large datasets, security — you get the idea. Organizationally, there's an independent test team responsible for this next level of tests, typically to achieve economies of scale through Test Centers of Excellence.
So things probably look a bit like this:
Figure 1. Integration testing falls behind
And at least some of the time, the independent test team takes N days to install and configure, only to discover that some basic functionality does not work. We call that gross breakage when it's an unusable build — and it's really not anyone's fault. It's a symptom of how human beings handle complexity. We learn deep details and become experts in smaller and smaller areas, because the amount of detail that we can master limits the number of areas that we can comprehend deeply. This creates boundaries and places where hand-offs need to happen.
Okay, so enough about the problem. If you're still reading, this is your world. And you've probably been attempting to solve it the same way that I have at least three times: by adding to the build process the ability to run more complex tests than the unit tests traditionally included in a build. You build a lights-out automation setup to install and configure the build, validate and configure the test tool environment, kick off the automated test suite, and report the results.
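That kind of lights-out setup can be thought of as a simple, ordered pipeline. Here is a minimal sketch in Python; the step names and functions are hypothetical placeholders (in a real setup each would shell out to install scripts, configuration tools, and the test runner), not any particular tool's API:

```python
# A minimal sketch of a "lights-out" build-verification pipeline.
# Each step is a placeholder for a real command (e.g. a subprocess call).
def run_pipeline(steps):
    """Run each (name, fn) step in order; stop at the first failure."""
    results = []
    for name, step in steps:
        ok = step()
        results.append((name, ok))
        if not ok:
            break  # gross breakage: no point running later stages
    return results

# Illustrative stages; the lambdas stand in for real install/config/test work.
steps = [
    ("install build",        lambda: True),
    ("configure test tools", lambda: True),
    ("run automated suite",  lambda: False),  # a failing suite halts the run
    ("report results",       lambda: True),
]
print(run_pipeline(steps))
# → [('install build', True), ('configure test tools', True), ('run automated suite', False)]
```

Stopping at the first failing stage is what surfaces an unusable build early, instead of burning N days of manual setup to discover it.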
What if we could make that easier? Or make it accessible to more and more complex heterogeneous systems that leverage all kinds of external systems via SOAP, MQ, SOA, and so on? There are now service virtualization tools that allow comprehensive integration testing to happen all the time. That means less hardware and time to configure complex heterogeneous systems, making it attainable to run integration tests on each and every build. And if your development team has adopted continuous integration, that means integration testing on every integration build.
Figure 2. Service virtualization
Service virtualization works by recording once in the production or stage environment and then smart-stubbing components of the complex system under test. I like to think of this as virtualizing the complexity of the system, leaving the changing parts as the parts I want to test. This works well in a lot of cases, but particularly well when the other components of the system are not changing, or not changing rapidly. It aligns well with the testing best practice of reducing the number of variables that change from test to test. There are a few things about service virtualization that are really exciting:
- Virtualize the complexity of the system to streamline test environment setup
- Smart service virtualization includes statefulness that allows your tests to do cool things, such as act as if a service is down on every Xth call
- Test data management in the virtualized service complements data pools and enterprise test data tools, such as Optim
- Services can be virtualized before they exist
- Test teams can align with the development team on the same milestone because setup is no longer a bottleneck
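To make the record-and-replay idea concrete, here is a minimal stateful stub in Python. This is purely an illustrative stand-in for what a service-virtualization tool does, not the Green Hat API; the class and its responses are hypothetical. It replays canned responses (as if recorded once against production or stage) and uses internal state to simulate an outage on every Nth call:

```python
# Hypothetical sketch of a stateful virtual service. All names are
# illustrative; a real service-virtualization tool records live traffic
# and replays it, rather than using a hand-built dict like this.
class VirtualService:
    def __init__(self, recorded_responses, fail_every=None):
        # recorded_responses: request key -> canned response, as if
        # captured once in the production or stage environment.
        self.recorded = recorded_responses
        self.fail_every = fail_every  # simulate an outage on every Nth call
        self.calls = 0                # the "statefulness" lives here

    def invoke(self, request):
        self.calls += 1
        if self.fail_every and self.calls % self.fail_every == 0:
            return {"status": 503, "body": "service unavailable"}
        if request not in self.recorded:
            return {"status": 404, "body": "no recording for this request"}
        return {"status": 200, "body": self.recorded[request]}

# Replay a recording, with a simulated outage on every 3rd call:
stub = VirtualService({"getQuote": '{"price": 42.0}'}, fail_every=3)
print([stub.invoke("getQuote")["status"] for _ in range(4)])
# → [200, 200, 503, 200]
```

Because the stub needs no hardware, no installation, and no coordination with the teams that own the downstream systems, a test suite built against it can run on every integration build.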
This brings us back to the promise of agile software development and delivering stable, working code at the end of every iteration. It truly is a brave new world, when testers and developers can align on the same code at the same time and really build in quality. The Green Hat technology is now available as part of the IBM Rational Test Workbench and IBM Rational Test Virtualization Server.
- Visit the Rational Test Workbench site for more information.
- Visit the Rational Performance Test Server site for more information.
- Visit the Rational Test Virtualization Server site for more information.
- Visit the Rational software area on developerWorks for technical resources and best practices for Rational Software Delivery Platform products.
- Learn and participate in the Agile community
Get products and technologies
- Download a free trial version of Rational software.
- Evaluate other IBM software in the way that suits you best: Download it for a trial, try it online, use it in a cloud environment, or spend a few hours in the SOA Sandbox learning how to implement service-oriented architecture efficiently.
- Join the Rational software forums to ask questions and participate in discussions.
- Ask and answer questions and increase your expertise when you get involved in the Rational forums, cafés, and wikis.
- Join the Rational community to share your Rational software expertise and get connected with your peers.
- Rate or review Rational software. It's quick and easy.
Monica Luke has almost 20 years of experience in software engineering. She joined IBM Rational software nine years ago in the test organization. Since then, Monica has led several test automation teams, held the role of test automation architect, and earned an Outstanding Technical Achievement Award for a test automation framework that is widely used internally at IBM. In 2010, she moved into the IBM Rational Strategic Offerings team, helping to drive integrations that accelerate client value across the Collaborative Lifecycle Management tools, including the recorded demos for the "Five ALM Imperatives," which are available at jazz.net/blog. In 2012, Monica is leading the effort to accelerate agile testing in a Collaborative Lifecycle environment with the Green Hat technology.