Test Data and the Continuous Delivery Pipeline
monica914
I had an awesome time last Saturday at CITCON in Boston. Not only were the views amazing, but the conversations were thought-provoking. The conference follows the open-space format, which means that all the topics are meaningful to the participants (we get to vote!) and there is lots of interaction in every session.
I'm still thinking about the issues related to test data in the continuous delivery pipeline, which was a CITCON topic. It's typical to create test automation expecting certain sets of data to be available. That usually leads to cloning the production database once, or generating data once, and then deploying that data to every test environment. Either way, the test data eventually drifts away from the production database. What happens if there's a schema change? How much work is it to recreate your sandbox database now? How often does that happen?
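To make the drift concrete, here's a minimal sketch of how a schema change in production leaves a cloned-once sandbox behind. The table and column names are hypothetical, and two in-memory SQLite databases stand in for the real environments:

```python
import sqlite3

def table_columns(conn, table):
    """Return the column names of a table via SQLite's PRAGMA table_info."""
    return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

# Two in-memory databases standing in for production and a sandbox that
# was cloned once and never refreshed (the schemas are hypothetical).
prod = sqlite3.connect(":memory:")
prod.execute("CREATE TABLE users (id INTEGER, email TEXT, tier TEXT)")

sandbox = sqlite3.connect(":memory:")
# Pre-migration clone: the 'tier' column added later never reached it.
sandbox.execute("CREATE TABLE users (id INTEGER, email TEXT)")

drift = set(table_columns(prod, "users")) - set(table_columns(sandbox, "users"))
print(drift)  # → {'tier'}
```

A check like this could run in CI to flag when the sandbox schema has fallen behind, though it says nothing about the harder problem of the data itself going stale.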
And then, there’s the problem of proving that tests run against sandbox data are sufficient to push the code to production. In organizations where Development is rewarded for improving the software quickly and Operations teams are held accountable for every minute of downtime, finding a way to automate over the divide is crucial. And for that, the Operations team has to BELIEVE in the testing. Which means the test data has to look like their stuff. If you can’t build trust in the process and the data before your bits get to staging, then everything has to be retested, usually by hand, by the Operations team.
It seems like the primary differences between sandbox test data and production data are diversity (the testers didn't think of something that should be included, or it was never in production before) and volume (test databases usually aren't even 10% the size of production). I'm working through a hypothesis that, for some organizations, the volume problem can be handled in production through monitoring. In addition to CITCON, I recently attended a local testing organization meeting where the panelists were saying they don't do performance testing any more. They do monitoring in production to find their performance bottlenecks. Initially, that felt radical, but the idea is growing on me.
Well, then that leaves the problem of diversity in the data. I'm still thinking about ways to tackle that, but one attendee at CITCON did suggest flagging test automation to distinguish tests that require known data from tests that don't. All the tests that don't could run in the final staged environment against the cloned production database. That could radically free up the Operations team. And what if we designed tests around this concept from the start, i.e. made sure every critical-path feature had a reasonable set of automated tests that don't care about the data… well, that would head us down the right path. And maybe that would be enough testing. Another radical thought….
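One way that flagging idea could look in practice — purely a sketch, using pytest-style test functions and made-up domain classes — is to tag data-dependent tests and leave data-independent ones untagged, so CI can run only the untagged ones against the production clone in staging:

```python
from dataclasses import dataclass

@dataclass
class LineItem:
    price: int

@dataclass
class Order:
    line_items: list

    def total(self) -> int:
        return sum(item.price for item in self.line_items)

# In a real suite the flag would be a pytest marker, e.g.:
#   @pytest.mark.known_data   -> needs curated sandbox fixtures
#   (unmarked)                -> data-independent, safe against a prod clone
# and the staged environment would run: pytest -m "not known_data"

def test_order_total_is_sum_of_line_items():
    # Data-independent: the test builds its own inputs, so it can run
    # anywhere, including against the cloned production database.
    order = Order(line_items=[LineItem(price=5), LineItem(price=7)])
    assert order.total() == 12

if __name__ == "__main__":
    test_order_total_is_sum_of_line_items()
    print("data-independent test passed")
```

The interesting design pressure is the comment in the middle: once the suite is split this way, writing a new test as "unmarked" becomes the default goal, which nudges critical-path coverage toward tests that don't care what's in the database.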