Comments (2)

1 MarcvanLint commented

Hi Monica, thanks for your thoughts. Just to add some comments:
If the changes are small and the architecture remains the same, a performance test may not add value or address any real risk. In that case, I would say: don't run one. Monitoring is important and a great way to get 'real life' insight. On the other hand, you can't run a controlled stress test against production or target a specific area of concern.

 
On the other aspect, the amount of test data: I think that test case design, which I learned from Sogeti's TMap Next, is a major time-saver and a real quality improvement, for both manual and automated tests. My point is that you don't need a copy of production, or even a subset of it; you need the right data, under control. Instead of running 600 tests, maybe 12 will give you enough coverage. That again depends heavily on the chosen depth, which relates to the risk and impact.
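The 600-versus-12 reduction Marc describes is typical of combinatorial test design. As an illustration only (TMap Next covers several design techniques; this sketch shows one of them, greedy all-pairs selection, with invented parameter names and values), the idea is that covering every *pair* of parameter values needs far fewer tests than covering every *combination*:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs selection: repeatedly pick, from the full
    cartesian product, the combination covering the most value pairs
    that no already-chosen test covers yet."""
    names = list(params)
    # every pair of parameter positions, with every pair of their values
    uncovered = {
        ((i, vi), (j, vj))
        for i, j in combinations(range(len(names)), 2)
        for vi in params[names[i]]
        for vj in params[names[j]]
    }
    candidates = list(product(*params.values()))
    suite = []
    while uncovered:
        best = max(candidates, key=lambda c: sum(
            ((i, c[i]), (j, c[j])) in uncovered
            for i, j in combinations(range(len(names)), 2)))
        covered = {((i, best[i]), (j, best[j]))
                   for i, j in combinations(range(len(names)), 2)}
        if not covered & uncovered:
            break  # safety net; cannot happen while candidates span all pairs
        uncovered -= covered
        suite.append(dict(zip(names, best)))
    return suite

# Hypothetical test parameters, purely for illustration.
params = {
    "browser": ["chrome", "firefox", "safari"],
    "os":      ["windows", "macos", "linux"],
    "locale":  ["en", "de", "nl"],
    "role":    ["admin", "user"],
}
full = 3 * 3 * 3 * 2  # 54 exhaustive combinations
suite = pairwise_suite(params)
print(f"{full} exhaustive vs {len(suite)} pairwise tests")
```

Every pair of values still appears in at least one selected test, but the suite shrinks from 54 combinations to roughly a dozen; whether pairwise depth is sufficient is exactly the risk-and-impact judgment Marc mentions.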
 
Customers who say they always use production data are not leveraging a core competency of the tester. They should leverage that skill to make the organization more effective.
Interesting!

2 monica914 commented

Hi Marc,

 
Thanks for your comments. No question, the right approach to both test data and performance testing depends on the particular circumstances, and there are certainly circumstances where pre-release performance testing is needed. I was quite interested to discover, though, that one reason many organizations use monitoring in production rather than pre-release performance testing is the need for sufficient volume in the test data: they have found monitoring more efficient and useful than producing that data themselves.
 
Regarding your comment on having the right data: that's true, but only up to a point. One of the issues I'm trying to address in the blog is how to build automated tests into the pipeline that can take you through the last mile into production. A key aspect of that is having automated tests that the Operations team can trust. Given that they are on the line if the system goes down, convincing them that limited depth is sufficient, and that from their point of view nothing will fail in production, is a pretty hard sell.