Tuesday, May 31, 2016

Just because you can automate everything ... doesn't mean you should

In total I've accumulated almost 20 years in IT - throughout that period I haven't always been a tester, although I've pretty consistently tested.

When I was a developer, "doing testing" was something I was very good at, but it was something I was assigned at the end of a project to "get it over the line".  But in 2004, someone wanted to put my programming skills to good use, and I was brought in to help out on a project which was using test automation.  It was a huge learning experience, and it changed my job title from software engineer to automation tester.  The word "test" has been in my job title ever since.

Currently I'm picking up a kind of "ghost in the wires" murmuring about automation replacing manual testers.  I was even asked about it directly by a student at Summer Of Tech who'd been advised "don't go into software testing, it'll soon be extinct".

Back in 2004 though, I joined a department where all testing was done through automation.  Since then I've always attempted to learn more about testing, and importantly, tried to blend automation and manual techniques.

For many today, 100% automation is the Holy Grail - so why don't I believe in that anymore?  What did I see?



* Note - I'm going to use the terminology "automated test" in this experience report, because it's appropriate to my understanding of the period I'm writing about.  Given the recent work James Bach and Michael Bolton have done, I'd really call these "automated checks" today.  However I want this experience report to be true to my understanding of the time.


Project Case - an experience report

Project Case, as I've said, was the first project I worked on with a dedicated team of full-time testers.  It was a bespoke product about 10 years old, with rich functionality.  The test manager, Stephen, was a first in many ways for me - even though I was an internal transfer, he put me through a barrage of questions on how I'd go about testing a product before accepting me on the team.  He took testing very seriously, and I learned a lot from him.  I didn't always agree with him - but at the time, my understanding of testing, and more importantly my ability to talk about it, meant I couldn't counter-argue with him the way I could now.

We were system testing for Project Case - and our tests were 100% automated.  Typically about 2 months before a new release we'd start scripting up new automation.  We'd even create scripts which would reproduce defects that the customer had found.

At the start of system testing, we'd spend about 2 weeks testing new functionality, running tests on new features.  After that it'd be 6-8 weeks running regression.  Stephen was terrified of regression - which meant we ran every script we'd ever run, plus every script for every defect we'd ever found.

That, in a nutshell, was our project.  Now, in more detail, I'm going to talk about where we found problems (you've probably already guessed some of them).

Automation wasn't really very efficient

So we had 100% automation - BOOM!  So I guess that meant we ran a tight ship?  Actually the team was 12, and compared with other teams I've since managed, I'd expect a similar-sized team to get through the same level of coverage testing manually.  Maybe (shock-horror) even a somewhat smaller team.

We weren't running faster and smarter, for reasons that will become obvious.

The team was hired foremost for their testing ability - not coding skills

This was one of the best teams of testers I've ever worked with.  And they were hired for their ability to test.

They were then made to write Visual Basic scripts.  And this is where it fell apart a bit - because many weren't very good coders.

Although I never met him, a lot of the automation on this project had been written by John Bunting.  John Bunting, for Terry Pratchett fans, is a kind of Farnborough version of Bloody Stupid Johnson.  He's a guy whose code is so bafflingly strange and bad that I know of friends who were still trying to unpick what he'd written five years on from his retirement.

But our automated tests were riddled with weird code - such as a function called "back to menu" which would repeatedly hit the escape key, take a screenshot, and check for the main menu.  Only sometimes it would miss the menu, and just keep hitting escape and taking screenshots.  If such a script was left running overnight, it would overload the database, and we'd need to call a DBA to fix it before we could run anything the next day.
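
As an illustration, here's a minimal sketch (in the flavour of Visual Basic we were writing) of the shape that routine should have taken.  PressEscape, TakeScreenshot and MainMenuVisible are hypothetical stand-ins for our tool's API, not the actual code - the point is the attempt limit, which the original lacked.

    ' A sketch only - PressEscape, TakeScreenshot and MainMenuVisible
    ' are hypothetical stand-ins for the real automation tool's API.
    Const MAX_ATTEMPTS = 10

    Function BackToMenu()
        Dim attempt
        BackToMenu = False
        For attempt = 1 To MAX_ATTEMPTS
            Call PressEscape()
            Call TakeScreenshot()
            If MainMenuVisible() Then
                BackToMenu = True
                Exit Function
            End If
        Next
        ' Still no menu - give up and fail this run, rather than
        ' hitting escape and screenshotting all night.
    End Function

A bounded loop like this fails fast and leaves evidence, instead of grinding the database into the ground until a DBA has to rescue it.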

It's fair to say I spent a lot of time trying to address the technical debt within our automation, but ultimately the best I managed was band-aiding the code.  I also tried to make the team familiar with Visual Basic coding standards, but in many ways the damage was done.  Thanks, John.

We needed a good level of programming skill in the team.  In truth, that's why they got me.

We didn't really understand the system we were testing

Our main priority was running the legacy automation, and fixing problems as they occurred.  We spent more time fixing our scripts than actually using the product under test.

Although we had some testing heavyweights here, our attention was always on the automation and fixing it.  A constant problem thanks to the code of John Bunting Esquire.

Consequently, we never really understood the product under test to the level that other test teams, who'd tested manually, would have.

We needed to understand the system being tested, not just the automation.

We didn't really understand the automation

These automated tests had been run for years, because they'd always been run.  They were large and meandering, testing many things, and even when they worked they'd take several hours to run.  When they failed, the quirks of our tool meant we'd have to start them again from scratch, whereas a manual tester could go "well, I can adapt and finish from here".

We needed the automation to be simpler, quicker to run, and to have an obvious feature it was trying to test.
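
To give a feel for what "simpler, with an obvious feature" might look like, here's a hypothetical sketch of a single-purpose check in the same Visual Basic style.  LoginAs, CreateCase, CaseStatus, ReportPass and ReportFail are invented helper names for illustration, not our actual framework.

    ' A hypothetical single-purpose check: one feature, one question,
    ' one clear pass or fail.  Helper names are invented for illustration.
    Sub Check_NewCaseDefaultsToOpen()
        Dim caseId, status
        Call LoginAs("standard_user")
        caseId = CreateCase("smoke test case")
        status = CaseStatus(caseId)
        If status = "Open" Then
            Call ReportPass("New case defaults to Open status")
        Else
            Call ReportFail("Expected status Open, got: " & status)
        End If
    End Sub

When a script like this fails, it points at exactly one feature, and re-running it costs minutes, not hours.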

The regression was too heavyweight

With a mature product, "testing the core functionality" is what people usually think of with regression testing.  We were trying to test "everything we've ever tested", which meant with each release it got harder and longer, and we needed more people.  Testing became a huge burden for the project.

We needed to focus down on what mattered, and dump everything else.  If we didn't think an automated script failure would end with a critical or high-level defect being raised, we probably should have dumped it.

Our automation was a monumental failure

I hate reducing things to metrics.  But for every one defect we were finding in the system, we were finding and fixing between ten and twenty issues in the automation code.

Think about what that means - we were testing our automation code more than we were testing our product.  That in itself is a massive failure.

We needed robust code which was easy to fix, and failed relatively infrequently.



Dr Ian Malcolm says ...

Yesterday, when talking about automation, and this project in particular, I found myself quoting Jeff Goldblum from Jurassic Park:

"Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."

Don't get me wrong, automation can be incredibly helpful when used right, and I hope to focus on that.  It can really help you, but it does so by aiding your manual testing efforts, and stopping you from running manual tests which are repetitive and boring.

Here are some good questions to ask of any automated check you want to build ...

  1. Do you want to run this automation check time and again?  If not, automation is the wrong strategy.
  2. Do you have too many automated checks / does your automation take too long to run?
  3. If this test failed, what level of defect would you raise?  If the answer to that is low, then does it make sense to check it frequently?
  4. Is it clear what this script is supposed to check?  Try to make each automation script check a single thing - it keeps things simple.
  5. If you have a wide variety of tests you want to run on a product, which you only expect to run once, manual testing is still the more efficient way to run them.

Sadly, I've learned this, and it cost my company at the time for me to learn it.  People like James Bach and Michael Bolton have learned this.  Local testers like Katrina Clokie, Aaron Hodder, Oliver Erlewein and Kim Engel have learned this.  But it seems, through gossip and mis-selling, there are a few organisations who need to learn it for themselves all over again.


And finally - revisiting that Dr Malcolm speech ....

Revisit that famous Jurassic Park speech here.  Listening to it last night, I realised that with just a few tweaks it would be all too relevant to automation.

Dr Malcolm: The lack of humility about testing that's being displayed here, uh... staggers me.

Manager: Well thank you, Dr. Malcolm, but I think things are a little bit different than you and I had feared...

Dr. Ian Malcolm: Yeah, I know. They're a lot worse.

Dr. Ian Malcolm: Don't you see the danger inherent in what you're doing here? Test automation is the most awesome force the planet's ever seen, but you wield it like a kid that's found his dad's gun.


Dr. Ian Malcolm: The problem with the engineering power that you're using here is that it didn't require any discipline to attain it. You read what others had done and you took the next step to copy them. You didn't earn the knowledge for yourselves, so you don't take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it, and packaged it, ... and now you're selling it, you wanna sell it. Well...

John Hammond: I don't think you're giving us our due credit. Our people have done things which nobody's ever done before...

Dr. Ian Malcolm: Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should.



If there is a single lesson from this, just be very clear about when you want a test to be automated, whether it makes sense, and whether it should be tested at all.  Don't be a John Bunting.

8 comments:

  1. "Just because you can, doesn't mean you should." is right on target. One hundered percent automation is a myth, and needs to be explained well to the unknowing.

    I use the following acronym for my work "SMART", which means the following:
    S - Straightforward and simple. Keep your automation and code simple and on target. The code doesn't need to be a Rembrandt.
    M - Modular and maintainable. Build the code in a modular way. Make sure the code and other assets can be easily maintained. It needs to be manageable.
    A - Appropriate. Only automate what is needed. Don't try to automate everything.
    R - Reusable. Make your code and other test assets reusable. Allow for some one-offs, but make sure overall you design/build for reusability. Don't keep re-inventing the wheel.
    T - Testable and Transportable. Make sure you can test your own code/assets efficiently and easily. Make sure code and assets are transportable to a good degree from project to project.

    Jim Hazen

  2. Yep I've seen this pitfall too, and can second what Jim said above. On automating what's needed, a tactic that works well for me is something I call "Tripwire Automation". https://testzius.wordpress.com/2015/06/05/tripwire-testing/

    People are smart. Testers are smart too. Instead of using automation to test everything, try using it to tell you where some problem areas -might- be.

    Testers can dig in and figure out where the actual problem is quicker than writing (and then maintaining) the automation.

    Automation's job is to free up time. Having everything automated kinda negates that.

    Great article--you made a follower out of me :)

  3. I agree 100% with your point - just because you can automate doesn't mean it's a good idea.

    Unfortunately, a lot of teams use this as an excuse to not even TRY to automate things that really should be automated - such as regression checks.

    I see this less than I used to, but, I still run into lots and lots of teams that don't have ANY automated tests, not even at the unit level. I even still see a lot of teams with NO CI.

    That is why when I first started writing about testing on XP teams (after having done successful test automation on waterfall teams for years prior to that), I built on the message of the original XP practitioners: Start by assuming you will automate ALL your regression testing, as well as other things like builds and deployments. When you practice enough so that you get skilled at automation, THEN you can make informed decisions about what should and shouldn't be automated.

    There are two things that make this work. One is that all our software delivery teams ALREADY HAVE PROGRAMMERS. Automation is, in general, easier for them than writing production code. So it doesn't take them long and they can write maintainable test code.

    The second thing is that it avoids having the manager or the programmers (who are probably being rewarded only for writing production code, not test code) use an excuse such as "Well, that Mike Talks is quite a leading practitioner in the testing world, and he says we don't really need to automate." I know it is lame. But if we don't strongly present a case for defaulting to automating when we're talking about regression testing, we often lose. The sad testers are doomed to running through manual regression test scripts and never have time for productive things like exploratory testing.

  4. There was once a manager who asked what we would need to get to 100% automated test cases!!! It was really an impossible task, simply because we were also asked about ROI - how much time the created tests would save. Mind you, this was for devices - not a web app. Think 2-way radios - conventional or trunked radios, which have knobs and buttons but no simulator. We had an xyz plotter to press the buttons on the front panel and the side buttons. There were very many accessories to test, but each had only a few tests. All of this made the ROI not worth automating. Then again, the code base was ever increasing while Management was trying to reduce the testing cost/effort.
