Thursday, February 20, 2014

Learning to use exploratory testing in your organisation ...

We have a certain view of test practices which has been passed down to us from the waterfall project roots we've all had some experience of.  In many ways, these methods are familiar, especially to non-testers, and from this familiarity we feel a measure of comfort and security which sadly can often be unwarranted.

When veterans of waterfall such as myself are first faced with some of the concepts of exploratory testing, our reactions can range from shock to abject horror.  Almost always, though, the initial reaction is a negative one.

This article will help to define exploratory testing, deconstruct what we do in scripted waterfall delivery testing, and talk about how we provide the same essential value with exploratory testing.  It will cover our approach to using exploratory testing as part of a system of testing, although of course other approaches can vary.  This is how we've found we get not only the best results, but the best buy-in, turning our detractors into our allies.

Back when I did a series of articles on Building A Testing Culture for Testing Circus based on conversations I was having with other software testers, a major change and indeed the future of software testing was seen to be in the hands of increased use of exploratory testing.  And yet the idea isn't new - the term was first coined by Cem Kaner in the 80s.  So what is it, and why is it becoming increasingly applicable to the way we're running software projects?

A few homegrown definitions

Let's start by making a few definitions before we explore further!  These are the definitions I'm comfortable with, but you may find definitions vary elsewhere.

A test script is a set of instructions and expectations for actions on a system, which a user then confirms when executing testing.

Historically I've seen many a project where such scripting has meant we’d define every action of every button press, and detail every response.

Typically in waterfall, we would have a good amount of time to test … or rather we would ask for it, and whether we get it or not would be another tale.  

A product definition would come out of requirements and design, and whilst everyone started to program against it, testers would produce plans of what was to be tested, coupled with scripts for the new functionality based on these definitions.

Once done, these plans and scripts are ideally put out for review – if you're lucky you get feedback on them, which means you expand or reduce your coverage to include the areas which are key to the product.
When we run our scripts, life typically couldn't be simpler: all we do is go through, try each step, and tick or cross as we go to "prove" we've tested.

The problem with scripts is that they’re very brittle,

  • they require a lot of time upfront (before the tester has access to a product) to scope to the level of detail discussed above.
  • they are fallible, in that just as developers are not immune to misinterpreting the product definition, neither are testers.  And that means large amounts of rework.
  • the requirements and design need to be static during the scripting and execution phase.  Otherwise again, you have a lot of rework, and are always “running at a loss” because you have to update your scripts before you’re allowed to test.
  • they depend on the tester's imagination to test a product before they've actually seen a working version of it.

Unfortunately, talk to most testers today, and they'll tell you these are the areas where they're under greatest duress.  The two biggest pressures they feel revolve around timescales being compressed and requirements being modified as a product is being delivered.

Some testers have a desire to "lock down the requirements" to allow them to script in peace.  But obviously that involves, to some extent, locking out the customer, and although it's something we might feel is a great idea, a successful project needs an engaged customer who feels comfortable that they've not been locked out of the build process.

So testers have to be careful not to cling to a model of testing which works brilliantly in theory, but breaks down because of real and pragmatic forces on their project.

Exploratory testing has multiple benefits – one of the greatest being that it doesn't lock down your testing into the forms of tests you can imagine before you've even had “first contact” with a prototype version of software.

Exploratory testing is a method of exploring software without following a script.  There are many ways to perform it – the best parallel I have found is with scientific investigation.  You set out with an intention, and you try experiments which touch the area, devising new experiments as you go along and discover.
With exploratory testing there is more value in noting what you've actually done than in recording your intentions.  Compare this with scripted testing, where you put the effort in ahead of time, and all you record during your testing is either a tick for a pass, or a cross for a fail!

A test session is a high level set of behaviour that the tester should exercise when executing testing.  Often this can be a list of planned scenarios we think would be good to exercise.  But unlike a script, it's a set of things to try in testing, without getting bogged down early on with the step-by-step instructions of how that will be done.  It's also more a set of suggestions, with room for additional ideas to be added.

So what is exploratory testing?

There are many definitions out there for exploratory testing, and I'm going to add my own understanding of it.

Exploratory testing to my team is about

  • building up an understanding first of what the core values and behaviour of a system are (often through some form of oracle)
  • using that understanding to try out strategic behaviour in the system to determine whether what we witness is unexpected

Oracles

An oracle is a guide for understanding how the system's supposed to behave.  An obvious one we all know is the requirements document, which functions as an important oracle.  However it's not the only oracle there is.

Recently when we were making changes to our registration page, we took a look at the registration pages on Gmail, Hotmail, Facebook and Twitter.  Many of these had some of the features we used in our registration page, so it gave us something to play with and build familiarity against.

Especially if you’re producing a system that’s got a broad customer base, most users don’t have the advantage of reading a whole stack of requirements when they use your system.  Your product has to make sense, especially given similar products in the market.

This ability for a tester to look at a page and ask “does it make sense” is an important freedom that’s required in exploratory testing.  Sometimes saying “as per requirement” isn't enough, and we have to push further.

Recently we put together a system to allow a user to manage their own password reset when they'd forgotten their password.  All the required behaviour was put in front of a lot of people, who signed off on what they wanted.  The requirement we had read that the email the user would receive would say "your account has been unlocked, and you can no login".  My tester dared to suggest that "you can now login" would perhaps make more sense, going beyond just taking the requirements to be the holy oracle of all truth, and using a bit of common sense.

Somehow that typo had got through a lot of people - they did indeed want "you can now login" - but then, that's the nature of testing.  I've seen much larger slips than that pass by uncommented before now ...

Example

You’re testing a login page, the requirement says, “When the user provides the correct password and username, the user is logged in”.

Within exploratory testing, it's an expectation that the tester will be able to expand on this, using their experience with similar models.

Here's a tester using an oracle-based understanding, going beyond just the requirements to plan their testing.  They'll typically expect,

  • If I give an incorrect password for a username, I won’t be logged in
  • If I give a correct password for the wrong username, I won’t be logged in
  • On other systems the username I use isn't case sensitive – should it be here?
  • On other systems the password I provide is case sensitive – should it be here?
  • What should happen when I try to log in incorrectly too many times?  Does the account get locked, or should it be locked for a period of time?
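The first two of those expectations are unambiguous enough that, if you later wanted to, you could pin them down as automated checks.  Here's a minimal sketch in Python – the login() helper is purely hypothetical, a stand-in for whatever drives your real system – while the case-sensitivity and lockout questions above are exactly the sort of thing to explore and raise with the team rather than assert blindly:

# A minimal sketch, assuming a hypothetical login(username, password) helper.
def login(username, password):
    # Stand-in so the sketch runs; replace with a call to the real system under test
    return (username, password) == ("alice", "correct-horse")

def test_wrong_password_is_rejected():
    assert login("alice", "wrong-password") is False

def test_correct_password_wrong_username_is_rejected():
    assert login("bob", "correct-horse") is False

def test_correct_credentials_log_you_in():
    assert login("alice", "correct-horse") is True
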
Skill Based Testing

This ability to think beyond just a one-sentence requirement is part of the inherent skill which is a core need for exploratory testers.  It calls for,

  • An understanding of what the system's supposed to do.  Not just functionality, but the "business problem" it's trying to address.
  •  An understanding of how the system has worked thus far is helpful
  •  An understanding of similar behaviour in other products
Exploratory testing is thus often referred to as “skills based testing”.

An argument often used in support of scripted testing over exploratory testing is that "if you have your testing all scripted up – then anyone can do it".  Indeed, it's possible to provide such scripts to almost anyone, and they can probably follow them.

The problem tends to be that different people will interpret scripts differently.  Most inexperienced testers will tend to follow an instruction, and if what they see on screen matches, they'll just tick a box, without thinking outside of it (and hence would be quite happy with the "you can no login" example above, because it was what the requirement said).

Let's explore this with another example ...

Test script

Action: Click the link to take you to the login screen.
Expectation: There are two fields for entry,
  • Your account
  • Password
Pass/Fail:


The Test System

Here are 3 screens for the above script ... do they pass or fail?




Can you be sure that every tester would pick up on that from the script?  And if they did, would it just be written off as a simple cosmetic defect?

Sadly some people will combat this by trying to turn this into a version of the evil wish game, where you go overboard on explaining what you expect, so there's no room for ambiguity.  Hence an obsessed scripted tester might try and write the expectations as,


Action: Click the link to take you to the login screen.
Expectation: There are two fields for entry,
  • Your account
  • Password
  The text on the display is
  • all in Calibri font 11
  • all UPPER CASE
  • black colouring is used for all text
Pass/Fail:


Yes sirree - that will stop those 3 problems above occurring again, but it won't stop other issues.  Especially if it turns out the root cause of the problems is a disgruntled programmer who's tired of reading your scripts, and currently serving out their notice period until they move to another company where they do exploratory testing ...


"But our scripts are our training!"

This is a common objection to exploratory testing.  If you have detailed scripts, anyone can follow them, learn the system and be a tester right?

Most people I have spoken with have found that it's far easier to have a handbook or guidebook to a product, which details at a light level how to perform the basic flows of the system, than to have a high level of detail throughout your testing.

The counterargument to handbooks is that "anyone can follow a script", but as discussed previously, you will always need some training time to build up familiarity with the system and its values.  If you just throw people at the "your account" problem you'll get differing results; you need people to be observant and in tune with the system, and that doesn't tend to happen when you throw people blind at a system with only scripts to guide them.

The bottom line is that there are no shortcuts to training and supporting people when they’re new testers on their system.  If you’re going to train them badly, you’re going to get bad testing. 

One of the few exceptions we've found to this is areas which are technically very difficult - there are some unusual scenarios in our test system we like to run, and setting them up is complex and completely unintuitive (it includes editing a cookie, turning off part of the system, etc.).  This is one of the few areas where we've voted to keep and maintain scripts (for now), because we see real value in the area, and with so few other test scripts to maintain, we can do a good job without it chewing up too much time.

But we've found we certainly don't need test scripts to tell us to log in, to register an account etc - when the bottom line is our end users only get the instructions on the screen to guide them.  Why should a one-off user not need a script to register, and yet testers like ourselves, who are using the system every day, require a script to remind us that if we enter the right combination of username and password, we're logged in?

Debunking the myths



A common complaint about exploratory testing is that it’s an ad-hoc chaotic bug bash chase.  You tell a team to “get exploratory testing”, and they’ll run around like the Keystone Cops, chasing the same defect, whilst leaving large areas of functionality untouched.



This is some people's view of exploratory testing

With such a view of exploratory testing, it’s no wonder a lot of business owners see it as a very risky strategy.  However, it’s also misleading.

Whilst such ad-hoc testing can be referred to as exploratory testing – it’s not what many people’s experience of exploratory testing is like.

Just because exploratory testing doesn't involve huge amounts of pre-scripting, doesn't mean that exploratory testing is devoid of ANY form of pre-preparation and planning.

You will often hear the words "session based" or "testing charter" being referred to – these are a way of looking at the whole system and finding areas it's worth investigating and testing.  The idea is that if you cover your sessions, you will cover the key functionality of the system as a whole.

Drawing up a map of sessions

Creating a map of sessions is actually a little harder than you'd first think.  It involves having a good understanding of your product and its key business areas.

Let's pretend we're working for Amazon, and we want to derive a number of sessions.  Two key sessions jump out right away,

  • Be able as a customer to search through products and add items to the basket
  • Be able to complete my payment transaction for my basket items


Beyond that, you’ll probably want the following additional sessions for the customer experience,

  • Be able to create a new account
  • Be able to view and modify the details of an existing user account
  • Give feedback on items you ordered, including raising a dispute


Finally, obviously everything can't be user driven, so there are probably,

  • Warehouse user to give dispatch email when shipment sent
  • Helpdesk admin user to review issues with accounts – including ability to close accounts and give refunds

The trick at this point is to brainstorm to find the high level themes to the product.  Ideally each session has a set of actions at a high level that really encapsulates what you’re trying to achieve.

I tend to be very much list-driven in my thinking, but for more on mind mapping I recommend Aaron Hodder's notes on the subject.

Fleshing out the planned test session

For many skilled testers, just giving them the brief “be able to create a new account” will be enough, but perhaps a lot of detail came out of your brainstorming that you want to capture and ensure is there as a guideline for testing.

Let's take "be able to create a new account"; here are some obvious things you'd expect,
  • Can create an account
  • Email must not have previously been used
  • Checking of email is not dependent on the case entered
  • Password
      o Must be provided
      o Has a minimum/maximum length
      o Must have a minimum of 2 special characters

  • Sends email receipt
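A couple of those bullet points are already precise enough to check mechanically.  Here's a rough sketch in Python of what those rules might look like – the 8 to 20 character limits and the set of "special" characters are purely hypothetical; the real values would come from your requirements or another oracle:

# Hypothetical rules - take the real limits and character set from your own oracle
SPECIAL_CHARACTERS = set("!@#$%^&*()-_+=")

def email_already_used(email, existing_emails):
    # The case of the email entered shouldn't matter when checking for duplicates
    return email.lower() in {e.lower() for e in existing_emails}

def password_is_acceptable(password, min_length=8, max_length=20):
    # Must be provided, and within the (hypothetical) minimum/maximum length
    if not (min_length <= len(password) <= max_length):
        return False
    # Must have a minimum of 2 special characters
    special_count = sum(1 for ch in password if ch in SPECIAL_CHARACTERS)
    return special_count >= 2

Even written down like this, the exploratory questions remain – what exactly counts as a "special character", and what does the user actually see when they break one of these rules?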

A session can be provided in any form you like – mind map, bullet point list, even an Excel sheet as a traceable test matrix (if you absolutely must).  Whatever you find the most useful.

Such bullet pointed lists should provide a framework for testing, but there should always be room for testers to explore beyond these points – they’re guidelines only.

Keeping it small

Typically not all test sessions are equal – the test session for "adding items to basket" is obviously a bigger task than creating an account if you've got over 10,000 items to choose from (and no, you're not going to test all of them).  However if some sessions are too big, you might want to split them up – so you might, for instance, split "adding items to basket" into,

  • Searching for items
  • Retrieving details for items
  • Adding items to basket

But really breaking it down is more an “art” than a set of rules, as you’d expect.

Test Ideas For Your Session

Okay, so you have a session, which includes some notes on things to test.  Hopefully with your experience in the project, and test intuition you have some ideas for things to try out in this area.

A common tool testers use to explore and test is heuristics – rules of thumb for generating test ideas.  Elisabeth Hendrickson has a whole list on her Heuristic Cheat Sheet.

But here are a few of those which should jump out at you,
  • Boundary tests – above, on the limit, below
  • Try entering too much/too little data 
  • Use invalid dates
  • Use special characters within fields
  • Can a record be created, retrieved, updated, deleted
Using heuristics, a simple statement like “can I create an account” can be expanded to be as large/small as needed according to the time you have.
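To make that concrete, here's a small sketch of how just the password field of "can I create an account" expands under a handful of those heuristics – the 8 and 20 character limits are the same hypothetical ones as in the earlier sketch:

# Heuristic-driven test ideas for a single password field
# (assuming hypothetical 8-character minimum and 20-character maximum limits)
test_ideas = {
    "just below the minimum": "a" * 7,
    "on the minimum":         "a" * 8,
    "on the maximum":         "a" * 20,
    "just above the maximum": "a" * 21,
    "far too much data":      "a" * 10000,
    "no data at all":         "",
    "special characters":     "p@$$<script>word",
    "only spaces":            " " * 8,
}

for idea, value in test_ideas.items():
    print(f"{idea}: try a password of length {len(value)}: {value[:20]!r}")

Each line is a prompt for a test to try, not a script – what you do when one of them behaves oddly is where the exploration starts.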

Working through a session


In running and recording test sessions we've gone right back to University.  For those of us who did science classes, an important part of our lab experience was our lab book.  We used this to keep notes and results of experiments as we tried them out.

They weren't wonderfully neat, and we’d often change and vary our method as we went along.  But we could refer back to them when we needed to consult with other students or our tutor.

For our group, these notes are an important deliverable as we move away from scripted testing.  Typically we'd deliver our scripts to show what we intended to test.  Now, when required, we deliver our notes instead, showing what we tested (rather than what we intended to test).  Not every exploratory testing group needs to do this, but to us, it's an important stage in building the trust.

Obviously a key part of this involves "just how much detail to put in your notes", and this is a good question.  The main thing you're trying to achieve is to try out as many scenarios as possible on your system under test.

We've found a table in simple Microsoft Word is the most comfortable to use (you can make rows bigger as you need to put more details in).  Basically you put in your notes what your intentions are.

We couple this with using a tool to record our actions when we test.  Each row in our notes table covers about a 10-15 minute block of testing we've performed.  We use qTrace/qTest to record our actions and the system responses during this time.



This allows us to use our notes as a high level index into the recordings of our test sessions.  Obviously things like usable names for our recordings help – we go by date executed, but anything will do.

A screenshot from a typical session log

It should be noted of course that not all exploratory testing needs to be recorded in this level of detail.

For my group, as we move away from scripted testing, this approach allows us to build up trust in the new system.  For the bulk of this documentation – screenshots, and a button-by-button trace of what we do – we find qTrace invaluable in absorbing the load for us (it would slow us down like crazy otherwise).

Of course, any test we find ourselves doing again and again is a candidate for automation – and even here, if we have a suitable flow recorded, we can give a qTrace log to our developers to automate the actions for us.

Closing a session

When a tester feels that they've come to the end of a session, they take their notes to another tester, and discuss what they've tried, and any problems they've encountered.  The peer reviewer might suggest adding a few more tests, or if there have been a lot of bugs found, it might be worth extending the session a bit.

This peer reviewing is essential, and we encourage it to lead on to some peer testing.  Basically, no matter what your experience, there's no idea you can have that can't be enhanced by getting input from another individual who understands your field.

This is why for us, the method we've used for exploratory testing really confounds the standard criticism that “when you write scripts, they’re always reviewed” (even this is rapidly becoming the unicorn of software testing, with many scripts being run in a V0.1 draft state). 

By using reviewing both when we plan our test sessions (through brainstorming) and when we close a single session, we're ensuring there's a "second level of authentication" of what we do.  Sometimes we wonder if our process is too heavy, but right now an important thing is that the reviewing, being a verbal dialogue, is fairly lightweight (and has a fast feedback loop), and both testers get something out of it.  Being able to verbally justify the testing you've done to another tester is a key skill that we all need to develop.

Peer Testing

We try and use some peer testing sessions whenever we can.  I've found it invaluable to pair up especially with people outside the test team – because this is when the testing you perform can take “another set of values”. 

Back at Kiwibank, I would often pair up with internal admin end users within the bank to showcase and trial new systems.  I would have the understanding of the new functionality; they would have a comprehensive understanding of the business area and the day-to-day mechanics of the system's target area.  It meant that together we could try out and find things we'd not individually think to do.  Oh, and having two pairs of eyes always helped.

Pairing up with a business analyst, programmer or customer is always useful; it allows you to talk about your testing approach, as well as learn from them how they view the system.  There might be important things the business analyst was told at a meeting which to your eyes don't seem that key.  All this helps.

It's important as well to try and "give up" on control, to step aside from the machine, and "let them drive and explore a while".  We often pair up on a tester's machine, and etiquette dictates that if it's your machine, you have to drive.  It's something we need to let go of, along with a lot of the other neuroses we've picked up around testing …



References

I don't usually include references in my blog pieces, but this piece has been a real group effort, and our understanding in this area has evolved thanks to the interaction and thought leadership of a number of individuals beyond myself.

Instrumental amongst these have been

  • The Rapid Software Testing course from James Bach which really focuses on the weaknesses of scripting and adoption of exploratory testing, and is a road-to-Damascus, life-changing course I would recommend to any tester.
  • The book Explore It by Elisabeth Hendrickson
  • The many verbal jousting sessions I've enjoyed at WeTest
  • My engaged team of testers at Datacom

Monday, February 17, 2014

Fishing for defects with Rob Sabourin


Having met Rob Sabourin at STANZ 2013, I was really pleased to hear he was returning to New Zealand to teach Software Education's Testing Mobile Applications course, which I've got one of my team on.  Rob is a great, very charismatic speaker - if you ever get the opportunity, I really recommend hearing him speak, you'll not be disappointed.

Fortunately for Datacom, it turned out that Rob had a free day, and asking around I managed to get enough interest across the company to bring him in to run his one day "Just In Time" workshop, that focuses on managing your time and resources to perform risk based testing.

Unfortunately for me, this is the second course I've organised that, much as I'd love, I couldn't justify sending myself on.  [Organising courses for your company doesn't mean you just put yourself forward for all the cool stuff, that would be a bit greedy]

However, I managed the next best thing, which was taking Rob to lunch, and getting to talk to him.  I love catching up with other testers, whether famous speakers like Rob, people from WeTest workshops or colleagues within Datacom.  Networking, and sharing your experiences and approaches with other testers is such a great way to grow your experience, as well as find allies and kindred spirits.

So we talked testing, Canada, snow (somehow the topic always turns to snow when you're talking to a Canadian), and before I knew it, fish.  I started gathering fishing gear and giving fishing a go last year after being encouraged by one of my BAs at Kiwibank.  I have to say, sadly, I'm pretty rotten at it.

But for Rob, fishing is a passion - he showed me pictures barely a week apart where he'd been ice fishing, and where he'd been fishing off Florida.  Ice fishing was certainly the most interesting - he's allowed to have 10 fishing rods per person in his party, and they'll sit in a heated hut and watch their rods (because more people = more rods, he's always keen to have company when fishing it seems).  With limited rods, he has to think about how he spaces them - if he gets a lot of bites, he's developed a knack for knowing whether he's exhausting the fish in that part of the ice (they're getting wise, and moving elsewhere), or whether it's worth moving more rods nearby.



Obviously that has all kinds of parallels with testing - we only have limited time and resources to test, and we have to make choices about where we put our rods under the software ice to fish for problems.  Some areas we can choose to space out a lot, some we focus a lot more resources on.  But the trick is to make an estimation, make your reasoning clear to the rest of your fishing party, and go for it ...

Of course this brings me to another point - fishermen's tales!  Not once did Rob talk about a fishing trip in terms of the number of fish he brought in.  Fishermen don't seem to count.

Fishermen don't tell you the number of fish they landed; they tend to tell you about one or two, and those typically have something going for them - usually they're a particularly rare fish, or the biggest fish caught that trip.  No, fishermen don't tend to tell you how many sardines they land - the talk is about tuna, kingfish, marlin.

Even that, Rob tells me (I'm a rubbish fisherman, remember), is a choice.  If you go fishing and you want big fish, you choose the bait and hook to catch big fish - little fish will slip right through.

I was lucky to catch the last 15 minutes of this workshop - even though I thought I was experienced enough in the area Rob was talking about, he presented a few ideas and techniques which made me go "ooh - never thought of doing it like that".  I'm glad to have had a couple of my team on there, which means they might well get the opportunity to champion some new ways of doing things with me - it's always exciting to have the team feeling engaged and excited about new ideas!

So my takeaways for the day were without a doubt that when you know time and resources are constrained,

  • think about where you want to concentrate and prioritise your efforts
  • don't communicate by counting all your defects, talk about the individual biggest issues you have
  • don't go out fishing for sardines on day one - look for the big fish.  Worry about the smaller defects when you've overfished the big ones

Friday, February 7, 2014

Revolutionary thinking?

Yesterday was an important public holiday in New Zealand - Waitangi Day, where we celebrate the treaty which established the country in 1840.  It's a complex day, the treaty is still fiercely debated - partly because different Maori tribes believed they were signing up to different things because of poor translations.

That said, as far as stories of Europeans settling in foreign lands go, New Zealand is a relative success story, as the native Maori here were treated better and integrated more into the emerging nation than in many other countries (though of course there's room for improvement).

Te Papa

Ironically we ended up looking at a culture for whom the polar opposite is true.  We visited the Aztec exhibition at Te Papa, Wellington's premier museum.  Based in what we now call Mexico, the Aztec empire was a loose collection of sometimes quite diverse tribes who gave tribute to their emperor.  They had farming, trade, artisans, sports, and a complex calendar which showed their understanding of the seasons and the movement of the Sun and Earth.  When the 16th century Spanish explorer Hernan Cortes looked upon their capital city of Tenochtitlan, he was looking at a city far larger than any city in Europe.

A picture representing Tenochtitlan - the Aztec temple is at the heart of the city

However whenever you try and talk about the Aztecs, sooner or later you come back to the subject of their ritual of human sacrifice.  Oh yes, that.

To the Aztecs, human sacrifice made a lot of sense.  Their culture taught them that their gods had created life on earth from their own blood.  And these gods demanded the blood of human beings (which the gods had "gifted to them") to make adequate tribute - it HAD to be human, and it HAD to be in blood.

Tlaloc - the god of rain

Foremost of these tributes were those made to Tlaloc, the god of rain.  It was his gift of rain upon which the Aztecs depended for growing the food their communities relied on (ironically for a culture of human sacrifice, the Aztecs had a pretty much vegetarian diet).  Because of this, if they had a dry season and the rains didn't come, then the logic was pretty clear cut: they'd offended the rain god with insufficient sacrifice.  So the high priests would decide the only way out of this would be more sacrifice - so more and more humans would be put to the blade.  And if the rains finally returned, then logically this had to be because "we had given enough sacrifice to appease the gods".  Sure enough, next year there would be more sacrifices than ever to ensure better weather and avoid a repeat of last year's embarrassment.

An Aztec temple - this was the original stairway to heaven.  As my wife put it, "those stairs would be the death of me".

Of course this sounds crazy and barbaric to us, the idea of repeating the same ritual until things seem to get better.  Should any brave Aztec challenge this idea, they were pretty much punished with a one-way trip up the Aztec temple stairway-to-heaven.

Such practices shocked Hernan Cortes when he arrived in the New World of the Aztec Empire in the 16th century.  Well, not only was Hernan shocked by "barbaric" Aztec practices, he also thought they were hiding a stash of gold somewhere.  So there was only one thing for it - war!  Using European gunpowder, and allying with the many tribes who had blood feuds with the Aztecs over the years, Hernan wiped out Aztec culture, with the Spaniards burning and destroying all their written history.

In its place, they replaced the Aztec beliefs with Christianity.  Now instead of fearing blood sacrifices, the former Aztecs would be taught that God had sent Jesus as god-made-man, and his blood sacrifice had saved them (oops, blood again, this is getting like a vampire teen romance novel).  Now when their crops failed, they would be told it was God's punishment for their impurity and lack of faith.  They could look forward to chapters of the Spanish Inquisition (a horror that none of the Aztecs could have expected) seeking out witches, heretics and the impure, and having them brutally executed in the name of Christ.  Pretty much one model of thinking was replaced by another, but the end result (the state killing lots of people) was pretty much the same, if not worse.  The Aztecs had actually shown those they sacrificed some reverence - indeed they were promised a short cut to paradise, over the purgatory and damnation the Christian faith promised them.

History is full of tales like this - we develop a model for understanding our world, and it makes a lot of sense, so we stick with it through thick and thin.  But over time, the model becomes too stretched or just plain falls over; however, we've been oversold the model as some form of "holy truth", so any evidence that challenges this model, or shows its shortcomings, is heresy - and usually there's a giant wicker man waiting for you if you voice it ...


Usually those in power have an interest in those models, and they don't like dissent,

  • Socrates was ordered to commit suicide for being a "smart arse" and challenging Athenian ideas.
  • Jesus was nailed to a cross for telling people "hey, be nice to each other", and challenging ideas about temple money lenders.
  • Galileo Galilei challenged the church's idea that the Earth was the centre of the universe, by showing the Earth instead moved around the Sun.  Heresy!  So he was put under house arrest for the rest of his life, only receiving an apology in 1992 - a mere 23 years after we'd been to the Moon.


The problem is, we're not into an age of enlightenment yet.  As Einstein put it, insanity is doing the same thing over and over but expecting a different outcome.  Models are useful, because they help us get a general understanding of something - but they're fallible.  If the model breaks too often we need to find new ones.

But though we mock the Aztecs or fundamentalist Christians for the models they clung to as "the one way", we're increasingly addicted to our own models of the modern world.  I think it's fair to say that the recent banking crisis, which has led to the recession and so much turbulence, shows that.  In the years running up to that crisis, I know I and many others questioned "surely house prices can't continue to go up like this if wages are relatively static?".  But generally we were made to feel "you know nothing", until the housing bubble went pop, and a lot of the banking that was riding on it burst with it.  Was this really so different to how people over-invested in shares during the Wall Street Crash?  I thought we'd learned from that ... or maybe not.

Over the years I've also seen people try to run software projects in the same way you run other businesses such as ad campaigns or sales schemes, and so import the same business models.  This is really interesting, because developing software runs exactly like an ad campaign or sales scheme ... except for all the places where it's nothing like an ad campaign or sales scheme.  And that's where problems happen.  The model works outside of software, so like the Aztecs or the Spanish Inquisition, the only answer must be that your people are no good, and you need to demand more sacrifice from them.  Alas I've seen my fill of dazed veterans walking from such projects when the inevitable car crash occurs ...

Likewise, to answer the challenges of software, where testing is increasingly under scrutiny and sometimes brutally seen not to be delivering, the ISTQB has found itself under the gun.  After all, with their sales pitch of so many people in the world now holding Foundation and Advanced qualifications, surely we should have a more educated testing workforce than ever before - so what's the problem, why are projects still having problems?  Well, the ISTQB solved this ... we need more certifications!  So they've added a new "Expert Level" above the Advanced one.  It reeks of the Aztec high priests demanding more sacrifices until the rain starts, but never mind ...

We've also done battle with the High Priests of Best Practice, who advocate that if we're experiencing problems, then it must be because we're not following some best practice.  Again, without realising that even the idea of "best practice" is just a model, which suits some circumstances.

A recent Twitter discussion where "best practice" reared its ugly head again ...

So where does that leave people who are rational thinkers?  The people who want to try and reason with the high priests and the lynch mobs?  The badge of heretic is a difficult one to wear.  But history has proved Socrates, Jesus and Galileo right.

Every revolution starts somewhere ...

For myself the most revolutionary and exciting ideas I've heard around testing and software have involved the ideas of Agile and CDT.  And as a dyed-in-the-wool V-model tester on military applications, I initially hated and was opposed to them both.  But then I realised that not every piece of software flew a Harrier.  The model that worked for such a project didn't always work for others.

I was at first intrigued, and then won over, and finally signed up as Comrade TestSheepski ...  and I'm not alone, with both schools of ideas becoming more mainstream, though not fast enough.

Of course even so, these schools of thought are just models themselves, and not to be slavishly followed, but challenged to ascertain if you're getting value from them.  The difference though is that both Agile and CDT depend on this challenge to strip away any delusion and ensure you are on the right path, that what you're doing actually has value.  Answer a challenge of "why are you doing it like that?", with "but we've always done it like that", and you go sit in the corner for a minute to think about it.

I've found myself a few times even this year coming out with that sentence, and going "oh".  Programming/conditioning is hard to break, and free thinking even harder to get used to.  We get addicted to routine, or maybe the routine becomes an addiction.  There are all sorts of things we do in our work routine that we've lost the ability to question.  We've bought into the culture, we've bought that it's a necessity.  But is it?

Don't be an Aztec - challenge the sacrifices you're making ...


The Gods Of Software demand more certifications!


Sadly the Te Papa exhibit on the Aztecs closes this weekend.  Thanks also to James Bach's blog post on reviewing his son's book manuscript, and why he felt he didn't need to use KPIs to justify his reviewing.  That piece gave me the extra ammunition I needed!

Thursday, February 6, 2014

Programming - it was acceptable in the 80s ...

Let’s turn back the clock, way back.  I've plundered some of my memories for this blog before, though usually they’re the somewhat traumatic parts of my past.  My life isn't always so serious, and so something fun has been long overdue ...

At work, I've been telling a few tales about my experience with programming in the 80s.  Oh, I'm sounding quite the great-grandfather of IT.  But I think in telling how we got here, there are some important things we can pick out.  I have to take you back, as Life On Mars would put it, to a time that's "like living on another planet".

In the 70s, we didn't have computers in the home.  My dad worked at a large mining research establishment, and they had one single huge computer that ran things like the payroll.  Everything he needed to do he did on paper,

  • if he needed to make a graph, he used graph paper not Excel
  • if he needed something typed up there was a lady from the secretarial pool who’d work from his notes
  • if he needed something filed, it went into a cardboard folder in the filing cabinet
How science fiction made us think of computers ...

To me, computers were something you saw on the bridge of the Starship Enterprise or trying to take over the world in Doctor Who.  And it's probably exactly because of this that I became so interested, imagining the mischief I could wreak!  We had a book in our junior school library – Ladybird's "How computers work" – that I regularly took out.  I even asked my junior school teacher Mr Appleby what skills I'd need to work with computers, to which he said I needed to stop fantasising, as they were just a fleeting fad (what a great advert for education, actually discouraging the next generation from learning and asking questions).


Then in the 80s, something magical happened – the home computer!  Companies were developing mass manufactured entry level machines.  Suddenly from having a life plan of 6 more years at school focusing on maths and then maybe getting my hands on a computer, I was able to skip straight to the “get your hands on a computer” part.  And in Christmas 1982, whilst my brother got upgraded to a new bike, my dreams came true when I got a ZX Spectrum.



The ZX Spectrum was a popular computer in the United Kingdom, and one of the first affordable home computers here.  Many remember it for a few of its quirks, the one oft quoted being that, to save money and make it affordable, it had a rubber keyboard which made programming "interesting" (I'm now a proficient touch typist, and the thought of using a rubber keyboard again makes me think how slow it would be).

Home computers at the time used a feed to a TV to function as a monitor, and used a tape recorder to load/save programs.  It was a slow process, but then we didn't know better at the time.  Typically you'd set the machine to load, check it had started, then go to the toilet, make a cup of tea, and return hoping it was all done.


The ZX Spectrum came with a manual and a sample tape which included a few demo programs.  No-one really knew what was going to be popular, so included on there were a few educational programs (if I remember correctly, mathematically simulating fox and hare populations, as well as Conway's life algorithm), together with a couple of games (a Space Invaders and a Pacman clone).

In those first couple of years, the ZX Spectrum didn't have a huge number of games (it would end up having, all told, some 24,000 titles).  As mentioned, no-one really knew what people would use it for – it was designed more to teach people to program and to experiment than as a games machine, for instance (other quirks included limited sound and graphics capability - sound was handled by an in-built speaker using the BEEP command, and graphical sprites could only be in one colour).

The computer came with a manual which explained the BASIC programming language, and there were also a few simple programs for you to type in and try out – you can find a copy of this book online here.

In both magazines and books, you could get listings of programs that you could enter yourself and "write your own game".  The books were in my opinion far better, because instead of just a program listing to copy, they explained as you went along how the different sections and logic pieces worked.  You learned something - especially how to copy and imitate!  [The sincerest form of flattery]




What remained constant for both magazine and book program listings was that nerve-wracking moment when you entered the RUN command.  No program worked first time!  There was almost always a transposition error.  But sometimes the listing was actually wrong, and contained a typo.  Oh, I was learning some important basics there about the fallibility of the software creation process.

And once working, there were many tweaks to be made.  Once you had a working game, there was a lot of fun to be had customising it.  Though sometimes those tweaks were made when you weren't looking – I remember we had a "Space Colony Management" game where you could allocate resources to food production, life support or manufacturing.  Me and my brother, ruthless capitalists that we were (in games), would sometimes "let it ride" by setting food production to zero for a couple of turns to maximise manufacturing.  Until one game where we did this, to find there was a civilian uprising and we were executed as traitors to the people.  Well that was new!  Seems my father took exception to how we ran things, and modified the game to make us less ruthless capitalists, and teach us a valuable lesson about socialism.  It's a shame the Ceaușescus didn't play my dad's modified game ...

The process of getting a program to work was a difficult one.  There was no instruction manual for working through the problems that were thrown at you – although obviously if an error code was thrown up, yes indeed you could look THAT up.  However the more problems you encountered, the more you got the knack of recognising them and resolving them.

Here is a typical BASIC program you'd cut your teeth on.  It would come up with a random angle, and you'd enter a value to be told you'd either gone too far, fallen short, or hit it spot on.

10 REM "Artillery game"
20 CLS
30 LET a=INT (RND*90)
40 INPUT "What angle to fire your artillery? (0-90) ",b
50 REM "Remember this was the 80s ..."
60 IF a=b THEN PRINT "Enemy tank explodes. Girls in bikinis thank you for saving their village!": STOP
70 IF a<b THEN GO SUB 100
80 IF a>b THEN GO SUB 200
90 GO TO 40
100 PRINT "Shot goes too far"
110 RETURN
200 PRINT "Short falls short"
210 RETURN

Obviously with the game ending, this would have been a precursor to the Red Alert series!

Even though this would work, it would have oddities.  If you entered text, it would typically crash trying to interpret it - it expected a number, and treated what you provided as such.  However if I remember correctly, if you entered "a" at the prompt, it would be interpreted as the variable "a", which means if you typed it, you'd automatically win!

Using it as well, you’d find of course that you could enter anything, not just angles between 0 and 90.  Common modifications to the above program would be,

  • adding a count of the number of tries
  • validate any input from the user, and reject it if it's less than 0, bigger than 90, or contains non-numeric characters.
  • replace variables “a” and “b” with meaningful names like “target_val” and “user_input” for maintainability

Making changes carried the risk of breaking what you had – you might cause a bad loop (one you'd never get out of), or you might just forget something.  So for instance, if instead of the line,

30 LET a=INT (RND*90)

You had done,

30 LET a=(RND*90)

You would have ended up setting a number between 0 and 90 (just as you wanted) … except instead of this being an integer, it would have been a floating point number.  This would have played out a bit like this,

Guess a number between 0 and 90?
> 56
Too High!

Guess a number between 0 and 90?
> 55
Too Low!

Guess a number between 0 and 90?
> 55.56782
Sorry you failed.  The correct answer was 55.5678210234857

[Basically, programmers: never try to compare whether two floating point numbers are exactly the same – they very rarely are to the level of accuracy a computer will insist on by default]
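If you want to see the same trap in a modern language, here's a tiny Python illustration of why exact comparison fails and the usual workaround:

import math

a = 0.1 + 0.2
print(a == 0.3)              # False - a is actually 0.30000000000000004
print(math.isclose(a, 0.3))  # True - compare within a tolerance instead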

ZX Spectrum BASIC was really a very primitive language compared to what we now know as programming languages.  There were concepts I never really got used to, like GOSUB commands (a form of function), all your data was considered global, and if you had your computer upgraded you had all of 48k of memory (compare that with how much you get today).  However everyone was programming and giving it a go, customising what they could, seeing what they could achieve.  It was an exciting time to play with the technology.

I remember coming home with maths homework in 1983, and using a quick program to loop through solutions until it gave an answer using a stepping algorithm (though I didn't even know the term algorithm; it just seemed intuitive and logical).  I remember producing programs over the years to simulate Brownian motion, simple harmonic motion, falling objects etc.  It could be a powerful tool used like that.

But most of all, and setting me up for the future, I appreciate the apprenticeship in "what can go wrong" with programs.  Typing the command RUN was always an unsettling and exciting feeling; sometimes you'd just get exasperated with "What's wrong now?", "6 Number too big?  Really?".

Here’s a list of the dreaded error codes from a Spectrum fan site.

0 OK, 0:1
1 NEXT without FOR, 0:1
2 Variable not found, 0:1
3 Subscript wrong, 0:1
4 Out of memory, 0:1
5 Out of screen, 0:1
6 Number too big, 0:1
7 RETURN without GOSUB, 0:1
8 End of file, 0:1
9 STOP statement, 0:1
A Invalid argument, 0:1
B Integer out of range, 0:1
C Nonsense in BASIC, 0:1
D BREAK – CONT repeats, 0:1
E Out of DATA, 0:1
F Invalid file name, 0:1
G No room for line, 0:1
H STOP in INPUT, 0:1
I FOR without NEXT, 0:1
J Invalid I/O device, 0:1
K Invalid colour, 0:1
L BREAK into program, 0:1
M RAMTOP no good, 0:1
N Statement lost, 0:1
O Invalid stream, 0:1
P FN without DEF, 0:1
Q Parameter error, 0:1
R Tape loading error, 0:1

Over the years, stumbling over these codes, and fixing the problems beneath helped to develop an understanding of the ways software can go wrong.  

Years later I’d learn the word “heuristic” at James Bach’s Rapid Software Testing course which would better explain what had been going on.  I’d developed certain “rules of thumb” based on my experience using my own software, I’d gained knowledge of methods and experience where software just plain broke down.

Examples of those devious methods to break things included,

  • using out of range numbers
  • using nonsense numbers (such as decimal numbers where integers were expected)
  • using text instead of numbers
  • guiding a graphical element off the screen


In fact look through those Spectrum error codes again, and you should be able to have a good guess at methods you could use to trigger half of them.  Oh yes, our languages might be more sophisticated, and our computers about a million times more powerful, but there is the same potential to cause those same fundamental errors listed above!

A few of the testers in my team are looking at understanding the basics of programming, and I'm encouraging them to.  Don't get me wrong, I don't really feel testers need to be able to code, but at the same time I believe it's useful to have experience of writing code – not only to see how you break an idea into something deliverable as a series of statements, but primarily for the experience of having code fail on you, and understanding how you investigated, debugged and fixed it.  This allows us to have more empathy for the developers when they have to address any issue we find, and allows us to focus on what information will be useful to them.  Above all it allows us to see how small divergences in code can manifest themselves when we use software.  Understanding that helps to drive the understanding that provokes constructive ideas for tests.

In the end when we go hunting for problems in software as testers, yes indeed we tend to make sure it works out of the box in a manner consistent with expectations - the save button saves, the open button opens etc.  But also we tend to do a “greatest hits” mashup of our experience in testing – things we've seen go wrong before, things we've heard go wrong for others before, add a splash of imagination and then filter according to key values of the project to find the “it’d hurt if this screwed up” points.

For myself, it would not be until 1996 that I'd learn another computer language beyond BASIC, or about concepts such as data encapsulation or those all-important subroutines and functions.  That learning would involve unlearning some of what I'd already learned.  But in terms of testing … the groundwork had been prepared.

Me and my beloved ZX Spectrum, circa 1983 ...

Monday, February 3, 2014

A comic look at workplace surveys ...

Everywhere I've worked, there's been some form of workplace survey, often titled something like "best places to work".  It's where you answer a series of questions about how well (or otherwise) you feel the company, your team and the people you report to measure up.

It's easy to feel pretty defeatist about them, feeling your opinion doesn't matter, and shrug off why you should bother ...


What difference does your opinion make, right?  Actually in the last two places I've been, it's made a big difference.  Remember when I talked about personal 360 reviews, I said that their benefit was to allow you to find areas you do well in, and target one or two areas that need work?  Well, this is your chance to do the same for where you work.

The last couple of places have worked hard to address those weak points.  But you need the feedback to know what those weak points are.  The first step in tackling a problem is knowing it exists, and we testers should know that better than any!

This year, I've been looking through some of the standard questions - which often come up for misinterpretation - and I'm interested in giving them a bit of a TestSheepNZ workover.

So I give you, for entertainment purposes, my ideal "best places to work" survey, with pictorial clarification!  Enjoy ...

Each day I come into work knowing what's expected of me ...


I have all the tools I need to do my job well ...


I receive recognition when I do a good job ...


I feel that I have the opportunity to do my best ...


I am encouraged to develop myself and my abilities ...


At work I feel that people listen to me when I express myself ...


I feel my team is committed to delivering our promises ...


I believe we are innovating and pushing the boundaries of what can be achieved in technology ...


I work alongside people who I feel care about me ...


I work alongside someone I can trust and confide in, and who I can ask for help from ...


I understand and respect the goals of our organisation ...
 


I can honestly say I enjoy my work ...