How early integration testing enables agile development

Everything old is new again…

As a tester, the idea of having stable, working code at the end of every iteration makes my heart sing. And, believe it or not, many years ago (ahem, about 20), long before agile software development was the rage, I worked at a company that did just that. For two weeks, the developers would code like crazy. We would get a stable build, run a set of (manual) regression tests on it, and declare it ready for testers to use. The test team would go off and work on that build for the next two weeks, doing more and more complex testing, while development was coding like crazy on the next "stable" build.

And we were all co-located, so it really worked, pretty much. Of course, we didn't ship milestones, and there were still plenty of defects. And no, we didn't have stakeholders looking at each iteration. We were missing a key aspect of agile software delivery: stable, working code at the end of each time-boxed iteration that meets a stakeholder's need.

Not much has changed, even with the advent of agile development and all of the goodness that has come with it. In order to do complex tests, independent test organizations are still picking up "milestones" and then proceeding to test them during the next iteration. We are still trying to solve the problem of keeping testing aligned with development on the exact same code. And we are still compromising on the definition of stable, working code that meets a stakeholder's need.

Agile development hits the system test wall

But the advent of agile development, with its definition of done, done, done, has given rise to a new complaint. Developers feel like testers are intruding on their "need for speed". Testers are finding defects in code after the developers have moved on. At least in my old job, that was expected; developers did not say things like, "That's so two weeks ago. I'm working on something else now." But if fundamental problems are being found in tests, are they really "done, done, done"? Seems like something's amiss.

Okay, you say, what about automation? Isn't that a basic agile tenet: to do test-driven development (TDD), so that developers cannot deliver any code without unit tests? If that's good enough to prove that a milestone build is stable and that working code meets a stakeholder need, why are we finding so many defects using traditional test methods? How are we defining validation to meet a stakeholder need? If the development team is doing demos, what are they showing? Is it a fully integrated demonstration, showing the value of the new feature to the stakeholder in the context of the entire system? The more complex the system, the more likely that the answer is "no".

What's going on here? Let's peel the onion a little bit. First, we have a development organization that's adopting agile methods, but you might have noticed that I mentioned an "independent" test team. Agile gurus generally recommend embedded testers. The agile process is grounded in the whole-team approach (and that contributes greatly to its success). So why would there be an independent test team? Because the application is part of a system that is simply too large for the development team to contain the testing. Even with the best intentions, including comprehensive TDD and unit testing coupled with some level of complex testing (manual or automated), the development team cannot contain the system test: full system integrations, large-scale performance, heavy load, large datasets, security — you get the idea. Organizationally, an independent test team is responsible for this next level of tests, typically organized into Test Centers of Excellence to achieve economies of scale.

So things probably look a bit like this:

Figure 1. Integration testing falls behind: test setup time causes integration testing to lag

And at least some of the time, that tester takes N days to install and configure, only to discover that some basic functionality does not work. We call that gross breakage when it's an unusable build — and it's really not anyone's fault. It's a symptom of how human beings handle complexity. We learn deep details and become experts in smaller and smaller areas, because the amount of detail that we can master limits the number of areas that we can comprehend deeply. This creates boundaries and places where hand-offs need to happen.

A brave new world for system integration testing

Okay, so enough about the problem. If you're still reading, this is your world. And you've probably been attempting to solve it the same way that I have at least three times: by adding to the build process the ability to do more complex testing than the unit tests traditionally included in a build. You build a lights-out automation setup to install and configure the build, validate and configure the test tool environment, kick off the automated test suite, and report the results.
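
To make that concrete, here is a minimal sketch of such a lights-out pipeline. The helper scripts (install_build.sh, configure_test_env.sh, run_tests.sh, publish_results.sh) and their flags are hypothetical placeholders, not any particular build or test product:

    #!/usr/bin/env python3
    """Lights-out pipeline sketch: install the build, configure the test
    environment, run the automated suite, and report the results.
    All shell commands and script names here are hypothetical placeholders."""

    import subprocess
    import sys

    def run(step_name, command):
        """Run one pipeline step; stop the whole pipeline on failure."""
        print(f"--- {step_name} ---")
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            print(f"FAILED: {step_name} (exit code {result.returncode})")
            sys.exit(result.returncode)

    if __name__ == "__main__":
        run("Install latest build", "./install_build.sh --latest")
        run("Configure test environment", "./configure_test_env.sh")
        run("Smoke test (catch gross breakage early)", "./run_tests.sh --suite smoke")
        run("Full integration suite", "./run_tests.sh --suite integration")
        run("Publish results", "./publish_results.sh --format junit")
        print("Build is stable: the integration suite passed.")

The smoke-test step up front is what keeps you from burning N days of setup only to discover gross breakage later.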

What if we could make that easier? Or make it accessible to more and more complex heterogeneous systems that leverage all kinds of external systems via SOAP, MQ, SOA, and so on? There are now service virtualization tools that allow comprehensive integration testing to happen all the time. That means less hardware and time to configure complex heterogeneous systems, making it attainable to run integration tests on each and every build. And if your development team has adopted continuous integration, that means integration testing on every integration build.
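
As a rough illustration of what this buys you, here is a sketch of an integration test that can run on every build because it talks to a virtualized dependency instead of the real backend. The URL, environment variable, and response fields are assumptions made up for this sketch, not the API of any particular virtualization product:

    """Integration test that talks to a virtualized payment service instead of
    the real backend. The URL, environment variable, and response fields are
    assumptions made up for this sketch, not any particular product's API."""

    import json
    import os
    import unittest
    import urllib.request

    # In CI, this points at the virtual service; in staging, the same variable
    # can point at the real dependency, so the test itself never changes.
    PAYMENT_SERVICE_URL = os.environ.get(
        "PAYMENT_SERVICE_URL", "http://localhost:9080/payments"
    )

    class PaymentIntegrationTest(unittest.TestCase):
        def test_payment_is_accepted(self):
            request = urllib.request.Request(
                PAYMENT_SERVICE_URL,
                data=json.dumps({"amount": 42.00, "currency": "USD"}).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(request) as response:
                body = json.loads(response.read())
            self.assertEqual(response.status, 200)
            self.assertEqual(body["status"], "ACCEPTED")

    if __name__ == "__main__":
        unittest.main()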

Figure 2. Service virtualization: incremental integration testing

Service virtualization works by recording interactions once in the production or staging environment and then smart-stubbing components of the complex system under test. I like to think of this as virtualizing the complexity of the system, leaving the changing parts as the parts I want to test. This works well in a lot of cases, but particularly well when the other components of the system are not changing, or not changing rapidly. It aligns really well with the testing best practice of reducing the number of variables that change from test to test. There are a few things about service virtualization that are really exciting:

  • Virtualize the complexity of the system to streamline test environment setup
  • Smart service virtualization includes statefulness that allows your tests to do cool things, such as acting like a service is down every Nth call (see the sketch after this list)
  • Test data management in the virtualized service complements data pools and enterprise test data tools, such as Optim
  • Services can be virtualized before they exist
  • Test teams can align with the development team on the same milestone because setup is no longer a bottleneck
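
To make the statefulness idea concrete, here is a hand-rolled sketch of a virtual service, built with only the Python standard library, that replays a canned "recorded" response but pretends to be down every fifth call. A real service virtualization tool records and manages this for you; the endpoint, payload, and outage interval here are purely illustrative:

    """Hand-rolled sketch of a stateful virtual service: it replays a canned
    "recorded" response, but simulates an outage every Nth call so tests can
    exercise the caller's retry and error handling."""

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    RECORDED_RESPONSE = {"status": "ACCEPTED", "confirmation": "REC-0001"}
    OUTAGE_EVERY_N = 5  # pretend the backend is down on every 5th call

    class VirtualPaymentService(BaseHTTPRequestHandler):
        call_count = 0  # shared across requests, which makes the stub stateful

        def do_POST(self):
            VirtualPaymentService.call_count += 1
            if VirtualPaymentService.call_count % OUTAGE_EVERY_N == 0:
                self.send_response(503)  # act as if the real service were down
                self.end_headers()
                return
            body = json.dumps(RECORDED_RESPONSE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 9080), VirtualPaymentService).serve_forever()

Pointing the earlier integration test's PAYMENT_SERVICE_URL at a stub like this is all it takes to run the same test on every build, with no backend hardware to provision.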

This brings us back to the promise of agile software development and delivering stable, working code at the end of every iteration. It truly is a brave new world, when testers and developers can align on the same code at the same time and really build in quality. The Green Hat technology is now available as part of the IBM Rational Test Workbench and IBM Rational Test Virtualization Server.

