How early integration testing enables agile development

It's hard to deliver on the agile principle of "done, done, done" for complex, heterogeneous systems. Monica Luke explains how service virtualization can improve team collaboration and align the independent test organization's focus with the development team's on the same milestone.

Monica Luke (mluke@us.ibm.com), Lifecycle Scenario Architect, IBM

Monica Luke has almost 20 years of experience in software engineering. She joined IBM Rational software nine years ago, in the test organization. Since then, she has led several test automation teams, held the role of test automation architect, and earned an Outstanding Technical Achievement Award for a test automation framework that is widely used internally at IBM. In 2010, she moved into the IBM Rational Strategic Offerings team, helping to drive integrations to accelerate client value across the Collaborative Lifecycle Management tools, including the recorded demos for the "Five ALM Imperatives," which are available at jazz.net/blog. In 2012, Monica is leading the effort to accelerate agile testing in a Collaborative Lifecycle environment with the Green Hat technology.



05 June 2012


Everything old is new again…

As a tester, the idea of having stable, working code at the end of every iteration makes my heart sing. And, believe it or not, many years ago (ahem, about 20), long before agile software development was the rage, I worked at a company that did just that. For two weeks, the developers would code like crazy. We would get a stable build, run a set of (manual) regression tests on it, and declare it ready for testers to use. The test team would go off and work on that build for the next two weeks, doing more and more complex testing, while development was coding like crazy on the next "stable" build.

And we were all co-located, so it really worked, pretty much. Of course, we didn't ship milestones, and there were still plenty of defects. And no, we didn't have stakeholders looking at each iteration. We were missing a key aspect of agile software delivery: producing, within each time-box, stable, working code that meets a stakeholder's need.

Not much has changed, even with the advent of agile development and all of the goodness that has come with it. In order to do complex tests, independent test organizations are still picking up "milestones" and then testing them during the next iteration. We are still trying to solve the problem of keeping testing aligned with development on the exact same code. And we are still compromising on the definition of stable, working code that meets a stakeholder's need.


Agile development hits the system test wall

The advent of agile development, with its definition of "done, done, done," has given rise to a new complaint. Developers feel like testers are intruding on their "need for speed." Testers are finding defects in code after the developers have moved on. At least in my old job, that was expected; developers did not say things like, "That's so two weeks ago. I'm working on something else now." But if fundamental problems are being found in tests, is the code really "done, done, done"? It seems like something's amiss.

Okay, you say, what about automation? Isn't that a basic agile tenet: to do test-driven development (TDD), so that developers cannot deliver any code without unit tests? If that's good enough to prove that a milestone build is stable and that the working code meets a stakeholder's need, why are we finding so many defects with traditional test methods? How are we defining validation against a stakeholder's need? If the development team is doing demos, what are they showing? Is it a fully integrated demonstration, showing the value of the new feature to the stakeholder in the context of the entire system? The more complex the system, the more likely the answer is "no."
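To see why passing unit tests can coexist with a broken system, consider a minimal, hypothetical sketch in Python. The checkout function and payment gateway here are invented for illustration, not from any real system: the unit test mocks the external dependency away, so it passes regardless of whether the real, integrated service would actually work.

Listing 1. A passing unit test that proves nothing about integration (Python sketch)

import unittest
from unittest import mock

def checkout(cart_total, gateway):
    """Business logic under test: charge the customer via a payment gateway."""
    response = gateway.charge(amount=cart_total)
    return response["status"] == "approved"

class CheckoutUnitTest(unittest.TestCase):
    def test_checkout_approves(self):
        # The external payment service is mocked away, so this test passes
        # even if the real service would reject our message format.
        gateway = mock.Mock()
        gateway.charge.return_value = {"status": "approved"}
        self.assertTrue(checkout(100, gateway))

if __name__ == "__main__":
    unittest.main()

The test is valuable, but it validates the code in isolation; it says nothing about the feature's value to a stakeholder in the context of the entire system.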

What's going on here? Let's peel the onion a little bit. First, we have a development organization that's adopting agile methods, but you might have noticed that I mentioned an "independent" test team. Agile gurus generally recommend embedded testers, and the agile process is grounded in the whole-team approach (which contributes greatly to its success). So why would there be an independent test team? Because the application is part of a system that's really too large for the development team to contain the testing. Even with the best intentions, including comprehensive TDD and unit testing coupled with some level of complex testing (manual or automated), the development team cannot contain the system test: full system integrations, large-scale performance, heavy load, large datasets, security; you get the idea. Organizationally, an independent test team is responsible for this next level of tests, typically to achieve economies of scale through Test Centers of Excellence.

So things probably look a bit like this:

Figure 1. Integration testing falls behind
Test setup time causes integration testing to lag


And at least some of the time, that test team takes N days to install and configure a milestone, only to discover that some basic functionality does not work. We call that gross breakage when it's an unusable build, and it's really not anyone's fault. It's a symptom of how human beings handle complexity: we learn deep details and become experts in smaller and smaller areas, because the amount of detail that we can master limits the number of areas that we can comprehend deeply. That creates boundaries, and places where hand-offs need to happen.


A brave new world for system integration testing

Okay, so enough about the problem. If you're still reading, this is your world. And you've probably been attempting to solve it the same way that I have at least three times: by adding to the build process the ability to run more complex tests than the unit tests traditionally included in a build. You build a lights-out automation setup to install and configure the build, validate and configure the test tool environment, kick off the automated test suite, and report the results.
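As a rough illustration only, the skeleton of such a lights-out harness often boils down to a script like the one below. The step names and shell scripts are hypothetical placeholders, not commands from any particular product; substitute your own install, configuration, and test entry points.

Listing 2. A minimal lights-out build-and-test pipeline (Python sketch)

import subprocess
import sys

# Hypothetical commands; replace with your own install, configure,
# and test-suite entry points.
STEPS = [
    ("install build", ["./install_build.sh", "--latest"]),
    ("configure environment", ["./configure_env.sh"]),
    ("validate test tools", ["./check_test_tools.sh"]),
    ("run regression suite", ["python", "-m", "pytest", "tests/"]),
]

def run_pipeline():
    for name, cmd in STEPS:
        print("== " + name + " ==")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Fail fast and report, so nobody spends N days
            # installing and configuring a broken build.
            print("FAILED at step: " + name, file=sys.stderr)
            return result.returncode
    print("All steps passed; build is usable for deeper testing.")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())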

What if we could make that easier? Or make it accessible to more complex, heterogeneous systems that rely on all kinds of external systems via SOAP, MQ, SOA, and so on? There are now service virtualization tools that allow comprehensive integration testing to happen all the time. That means less hardware and less time to configure complex heterogeneous systems, making it attainable to run integration tests on each and every build. And if your development team has adopted continuous integration, that means integration testing on every integration build.

Figure 2. Service virtualization
Incremental integration testing

Service virtualization works by recording real behavior once, in the production or staging environment, and then smart-stubbing components of the complex system under test. I like to think of this as virtualizing the complexity of the system, leaving the changing parts as the parts I want to test. This works well in many cases, but particularly well when the other components of the system are not changing, or not changing rapidly. It aligns really well with the testing best practice of reducing the number of variables that change from test to test. There are a few things about service virtualization that are really exciting (see the sketch after this list):

  • Virtualize the complexity of the system to streamline test environment setup
  • Smart service virtualization includes statefulness that allows your tests to do cool things, such as acting like a service is down every Xth call
  • Test data management in the virtualized service complements data pools and enterprise test data tools, such as Optim
  • Services can be virtualized before they exist
  • Test teams can align with the development team on the same milestone because setup is no longer a bottleneck
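
To make the recording-and-stubbing idea concrete, here is a toy sketch in plain Python. It is not how any particular service virtualization product is implemented, and the canned response and port number are invented for illustration. The stub replays a "recorded" response and, to show statefulness, acts like the service is down on every third call.

Listing 3. A minimal stateful service stub (Python sketch)

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A canned response, standing in for traffic recorded from the
# real service in the production or staging environment.
RECORDED_RESPONSE = {"accountId": "12345", "balance": 250.00}

class StatefulStub(BaseHTTPRequestHandler):
    calls = 0  # state shared across requests: this is the "statefulness"

    def do_GET(self):
        StatefulStub.calls += 1
        if StatefulStub.calls % 3 == 0:
            # Every third call, act as if the service is down, so tests
            # can exercise the system's error-handling paths.
            self.send_response(503)
            self.end_headers()
            return
        body = json.dumps(RECORDED_RESPONSE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), StatefulStub).serve_forever()

A test run would point the system under test at the stub's address (here, http://localhost:8080) instead of the real dependency, so there is nothing to install or configure for that part of the system.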

This brings us back to the promise of agile software development and delivering stable, working code at the end of every iteration. It truly is a brave new world when testers and developers can align on the same code at the same time and really build quality in. The Green Hat technology is now available as part of IBM Rational Test Workbench and IBM Rational Test Virtualization Server.
