Saturday, October 27, 2012

Experiences in automation ... WeTest Workshop

I have never lived anywhere quite like Wellington. The thing that constantly amazes me about this place is the sense of community amongst technical folk, aided by the various meetups organised by people who are passionate about their respective crafts.

I'm already a regular face at the AgileWelly events. I was thus noticeably pleased when, following KWST2, several testers decided we needed a more regular workshop, and thus the WeTest workshop was born.

This Thursday was the first event, sponsored by Assurity, the consultancy I used to work for. Katrina Edgar (the organiser behind the whole WeTest meetup) led with an experience report on the subject of test automation. It was a potentially tough crowd – about half the room were ex-developers turned testers, and 75% had experienced test automation in the last two years.

Katrina, like me, was an ex-developer turned test automation specialist. She'd worked on three major projects so far in her career ...

On Project A, she was allowed to branch out into automation on a typically very manual task. She'd been given enough free rein to choose whether to automate or not. She used it to automate the very onerous task of setting up multiple configurations, which removed a lot of monotony from her work. The work was relatively low cost and reaped high benefit. But that said, she felt she'd just been left to get on with it, and her automation was expressly that: HERS. It was never reviewed, and no-one else ever used it.

Project B was more advanced. She came onboard and there was an automation framework already in place – a Jenkins machine which ran continuous integration tests using Selenium scripts. She felt that whereas Project A was about "what are you testing", with free rein on "how are you testing", life here was quite the reverse. Everything was about "how can we test this on Jenkins".

The system had a Concordion front end to show whether tests had passed or not, and it was considered that if the tests passed, it was "all good". There were no follow-on manual tests, because the belief was "well, our Selenium scripts test it all, don't they?".

The problem was that the testing was highly technical: to understand the script content, you had to be able to read Java. This meant too few people ever read through the scripts and understood what they did. Testers would modify scripts and write new ones, but no-one ever reviewed them. So on the whole, no-one could be sure whether these tests actually covered the core values of the project.

Project C echoed a lot of Project B. It was a similar system where everything had been done by automation. But it was a much older, legacy system, and the original staff, along with much of the expertise, had moved on.

Thus the scripts were flaky and needed a lot of maintenance by people who didn't fully understand them. A lot of time was spent fixing them, but no-one knew everything they did. Yet they'd always seemed to work before, so no-one wanted to tamper with them too much either.

Her experience report finished, the discussions around the room began. And this is where a peer conference drastically differs from presentation-based events. It's much more interactive, with many joining in, asking questions, sharing tales. Whereas in a presentation you walk out with one person's experience and ideas, at the end of a peer conference you've had those ideas pooled together by everyone in the room.

Thus by the end of the two hours, we'd investigated and reached consensus in a way which surprised both Katrina and me. In fact no-one could have predicted it – which is what can make these things so exciting. These were some of our take-homes by the end of the night …

Know why you are automating

Automation should almost always be about addressing a pain point you'd hit if you tried to do something manually. In particular, it should use some strength of the computer against an area where a human being is much weaker and slower.

Here are some areas in which computers excel over humans, especially in a "testing" capacity (I will explain the quote marks later):
  • they are much faster
  • they can do the same task over and over again with no variance
  • they always do exactly what they're told

On the flip side, here is what an experienced human being can do that computers can't:
  • they don't need a script to start using the software
  • they use software in the same way as an end user (if a human is going to use your end product, you need a human opinion on it during testing)
  • they can investigate as they test
  • they notice things in software even if they're not on the checklist

To make efficient use of automation (and thus get a return on investment for the time you spend automating), you need to be addressing a pain point in your testing, and your automation needs to be doing something computers do well (from the first list) rather than something humans do well. It also needs to be something you're likely to do again and again – so once scripted, it saves you time every time it's run.

If you're Agile, and three days of every sprint are taken up with your testers running repetitious regression tests on a mathematical function, that's the kind of pain point you can and should automate, to free up some of those three days of testing effort.
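As a rough sketch of what that could look like (the InterestCalculator class and its expected values below are hypothetical stand-ins, not anything from the workshop), the manual table of inputs and expected outputs becomes a data-driven check the machine can re-run every sprint:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// A minimal sketch of turning a repetitious manual regression table into an
// automated check. InterestCalculator and its expected values are hypothetical.
public class InterestRegressionTest {

    // Each row: principal, annual rate, expected interest after one year.
    private static final double[][] CASES = {
        {1000.00, 0.05, 50.00},
        {2500.00, 0.03, 75.00},
        {   0.00, 0.05,  0.00},
    };

    @Test
    public void interestMatchesTheRegressionTable() {
        for (double[] c : CASES) {
            assertEquals("principal=" + c[0] + " rate=" + c[1],
                    c[2], InterestCalculator.yearlyInterest(c[0], c[1]), 0.001);
        }
    }
}
```

Once a table like this is scripted, every re-run during the sprint costs seconds rather than days.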

Know what you should be automating

When test automation was first introduced in the 1990s, there was a belief that test departments should have suites of 100% automation. Experiences of the last decade have really challenged that idea.

Test automation has a place, but it's not the alpha and omega of testing. In fact, many people, such as Michael Bolton, believe these tools should be called automated checkers rather than automated testers (hence the earlier quotation marks).

The reason is that an automated script can only check what it's scripted to check. It will never tell you if a screen flashes pink and yellow unless you tell it to check for that. It will never notice the kind of thing that makes a human tester go "well, is it supposed to be like that?" – where something is not necessarily against a requirement, but not quite right either.
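To make that concrete, here's a sketch of a Selenium check like those described on Project B (the page, URL and locator are all hypothetical). It will happily stay green while the rest of the page falls apart visually, because the only thing it was told to look at is one heading:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class WelcomePageCheck {

    @Test
    public void headingSaysWelcome() {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://example.com/welcome");  // hypothetical URL
            // This is the *only* assertion. A screen flashing pink and yellow
            // still passes, because nothing here asked about colours.
            assertEquals("Welcome",
                    driver.findElement(By.tagName("h1")).getText());
        } finally {
            driver.quit();
        }
    }
}
```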

The cyborg tester

I've heard the concept of the cyborg tester before, and this is what started to come out from people's experience with automation. I believe I've heard it from people like James Bach and Ben Simo on Twitter – the idea is that testing is best done neither entirely by humans nor entirely by machines.

The cyborg tester is a fusion of both man and machine, using both blended together to produce superior testing.

Automated checks are fast, repeatable, and, if done right, anyone can push "play". But they miss a lot of defects a tester would find. They are best used essentially for unit-style testing, between building a piece of code and giving it to a human tester.

We've all had scenarios where developers deliver new builds daily – when asked if a build passed testing, you're greeted with "well, it built" (which does not mean it passed any kind of test). The test team start to use it, there are major issues, and elementary functionality fails. Anywhere from half a day to a full day of testing is lost because we have a bad build, and some systems have no capacity to roll back to a previously working build.

How much better, then, to include those kinds of smoke checks as part of the build process, so that if the software doesn't pass them, it's not deployed? Such a policy follows the "test early" philosophy, and means manual testers are protected from bad builds so fundamentally flawed they would force everyone to down tools until addressed. [A working old build allows more investigation than a new, broken one.]

Such a system is one of synergy, allowing testers to continue investigating on a previously stable build until a useful new build with basic core functionality can be delivered.
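One way to sketch such a gate (class and method names here are hypothetical) is a small smoke suite that the CI server runs immediately after each build, deploying to the test environment only when it goes green:

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Hypothetical sketch of a build-gating smoke suite. A CI job (Jenkins, say,
// as on Katrina's Project B) would run this right after compiling; if any
// check fails, the build is never deployed, and testers keep working on the
// last good build instead of downing tools.
public class SmokeSuite {

    @Test
    public void applicationStartsUp() {
        assertTrue("application did not come up", TestApp.start().isRunning());
    }

    @Test
    public void aUserCanLogIn() {
        assertTrue("elementary login is broken",
                TestApp.login("smoke-user", "smoke-password"));
    }
}
```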

Automation TLC

As alluded to by Katrina, and in line with comments I've had from Janet Gregory, the automation you are doing needs to be clear and visible to the whole project. Everyone should be encouraged to look at what you are doing, review it, and give feedback – especially on whether it addresses business and technical needs well enough.

How can you be sure your automation really addresses the core business values of your end product? You need that feedback to target the important stuff, and to cut away anything which doesn't add value (otherwise you waste time running it and money maintaining it).

But more than that, many places will automate phase one of a project and, like Katrina's Projects B and C, say "we're done". Automation isn't a "we're done" thing. Automation is an ongoing commitment.

Every time you make changes to your code base, you need someone looking at your automation scripts and asking "does anything here need to change?". That's how you keep your automation working and relevant. If you develop for a year and only then start to use the scripts again, you might have a nasty shock (much like Project C) where nothing seems to work any more. You might even be tempted to bin it all and start again. At this point, the automation which was there to remove your pain in testing actually becomes your point of pain!

But more than just making sure you have resources to maintain scripts, you have to ensure your scripts are maintainable. In software development, good practice means commenting within code to state each piece's intent, peer reviews of code, and even coding standards listing things you should avoid (forever loops, anyone?). Being an ex-developer myself, these are things I encourage in any test automation project. Going around the WeTest workshop, it became clear I was not alone.
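Here's a small sketch of one such standard (the Condition interface is a hypothetical stand-in): ban unbounded loops in scripts, replacing them with a bounded poll that fails loudly, so a hung application fails the script instead of hanging the whole run.

```java
/** Sketch of one coding standard for test scripts: no unbounded loops. */
public final class Waits {

    /** Hypothetical callback: "has the thing we're waiting for happened yet?" */
    public interface Condition {
        boolean isMet();
    }

    // A bounded poll: if the condition never comes true, the script fails
    // with a clear message instead of looping forever.
    public static void waitFor(Condition condition, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.isMet()) {              // never: while (true)
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("Timed out after " + timeoutMillis + " ms");
            }
            Thread.sleep(250);                    // poll politely, don't spin
        }
    }
}
```

(Selenium's own WebDriverWait embodies the same idea for browser checks.)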

When can we get time to automate?

This was the final question of the night that I was involved in (alas, I had to leave for a train at this point).

But the comment would be one many would be familiar with: "we're going flat out with our manual testing, and we're having trouble creating our base of automation scripts".

It's tempting to think it's possible to go flat out on a project and also do process improvement as you go along. Sure, you can make small improvements. But to achieve significant benefits you need to allocate effort, because you need to concentrate on that task, not run it in your spare time (for more on this subject, read this article).

If you're finding you have too much testing to do, and it's getting harder and harder to meet deadlines, go back to our first point: look for the areas of pain that are taking time, and see if automation will help. You might have to say to your Project Manager "look, we need to pause for a bit to get some breathing room here" and perhaps extend timelines or investigate other options – it's possible development needs to pause to do some cleanup of its own. But you don't need forever; you just need enough automation to ease your pain points, and then enough resource to keep it maintained when needed.


A fantastic first meeting, and I'm looking forward to future ones – thanks to all those who turned up and made this such a memorable workshop! I certainly came away with a lot to think about, and have enjoyed revisiting it here ...

The following photos are taken from the WeTest site ...


  1. Hi Mike!

    Firstly, I'd like to thank you for the nice entry!

    Secondly, I'd like to comment on this
    "Here are some areas in which computer excel over humans, especially in a “testing” capacity (I will explain the use of quote marks later),
    they are much faster
    they can do the same task over and over again with no variance
    always do exactly what they're told"
    because I don't quite agree with this.

    1) Computers can be faster, but it depends a lot also on the task at hand. But I agree, speed is often a reason to run automated scripts.

    2) Computers can never do the same task again exactly in the same way. They can repeat the same steps programmed into the code, but that will not take away variance from the task.

    3) They also don't always do exactly what they are told. There are memory leaks, power problems, other applications interfering, network lags... and mostly, people have big difficulties in telling computers what to do.

    Thanks for bringing the "Wellington point of view" to the eyes of the world!

    Best regards,

    1. But generally they do the same behaviour again and again.

      And you are right – this is something I've learned in the last month: "when your script fails, the automation needs to provide enough information to the human interpreter so they can determine if it's a failure of the system under test, or of the system doing the test".
