Friday, April 27, 2012

Rapid exploratory website testing ...

I was faced with an interesting challenge today at 12:50pm.  We have a minor project which I'm not directly involved with, and which hasn't needed any testing staff.  However, they'd had a new supporting website produced to provide information … this had been delivered that morning – could I spend half an hour checking it out for anything "obvious"?

This was an ideal opportunity to really stretch my muscles and give it a going-over, exploratory-testing style.  I knew nothing of the website, though maybe that wouldn't be a disadvantage – I'd evaluate it as best I could within the time allowed.

1:00pm, the link came through.

Opening the page, I began my analysis.  The site was essentially a marketing toy, telling prospective customers about a new service which was being provided, and would allow them to register an interest.  It detailed all the features which were coming and why they would make life a lot easier, as well as links to both supporting partners and associated stories about the initiative.  It also had a top menu bar which allowed you to jump to the parts of the page you were interested in, each dealing with a particular aspect of the new service.

Several areas had icons which, if you held your mouse over them, would expand into bubbles giving a more detailed graphical explanation.

Starting with the basics, I attempted to click every link, every button on the page, making sure it went to the right target.  Two of the links to supporting partners could not be selected with the left mouse button, but could with a right button click menu [Defect 1].

I tried out the webpage in Internet Explorer.  The menu bar buttons did not take you to the right part of the page at all, which was most curious [Defect 2].

I opened the website using Chrome and Firefox (the browsers I had available).  The page looked identical in all browsers.  However in these two browsers the menu bar buttons DID work as expected [Revised defect 2 with this information].

Dragged my mouse over the icons that opened graphical explanations, confirmed they made sense and didn't behave oddly if close to the browser view edge.  I did wonder why one story had two links to websites when others only had one (inconsistent) [Defect 3].

Read through the website – did it make sense?  I noticed that two sentences in the same paragraph were virtually identical, and referred to supporting partners in an inconsistent manner [Defect 4].

There was a field to register interest by adding your email address.  Tried a valid email, it was accepted (good).  Tried one junk email, and got told it was invalid (good).  Tried a couple of variations of invalid emails; not all were rejected.  Noted it as a possible problem [Defect 5].
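As a hypothetical sketch of the kind of thing that was probably going on (I never saw the site's actual validation code), a naive "something @ something . something" pattern check accepts some clearly invalid addresses – exactly the class of bug I noted here:

```python
import re

# A deliberately naive validator, similar in spirit to what many
# marketing pages ship: "something @ something . something".
NAIVE_EMAIL = re.compile(r"^\S+@\S+\.\S+$")

def looks_valid(email: str) -> bool:
    """Return True if the address passes the naive pattern check."""
    return NAIVE_EMAIL.match(email) is not None

# A valid address and an obviously junk one behave as expected...
assert looks_valid("user@example.com")
assert not looks_valid("junk")

# ...but variations of invalid addresses slip through the net,
# because \S+ happily matches extra '@' and '.' characters:
assert looks_valid("user@@example.com")   # double @ accepted
assert looks_valid("user@example..com")   # double dot accepted
```

Which is why, when testing a field like this, it pays to try not just one valid and one junk value, but a handful of near-miss invalid variations too.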

The bottom of the page had a huge, but empty grey box which just looked messy [Defect 6].

At this point I thought I was done.  Then I had a moment of slyness.  Thinking about it, the menu working in Firefox and Chrome but not IE was a little suspicious – I know web developers tend to love working and testing on those browsers.  Likewise, I also know how web developers love their big, high-resolution screens.  So I went into my personalised settings, and dropped my screen resolution to 800 x 600.  The page no longer fitted the screen, and some buttons on the menu bar became mangled, with some icons missing altogether [Defect 7].

I emailed my list of discoveries to the project manager.  It was 1:30pm and time for lunch.

That was testing at its most raw, and a lot of fun (and a nice break from the meetings and documentation of the rest of the day).  For the product, it was a perfectly pitched piece of ad-hoc testing.  I raised a defect for everything I thought could be an issue – of those, there are about 3 things which need doing (the non-selectable links, the menu not working in IE, and probably the behaviour at low screen resolutions); the rest are more about consistency and look, which might be equally important if there's time.

The issue with the menu bars was discovered by a BA at the same time.  But where they reported it, I managed to define it as an Internet Explorer issue, and not one in Chrome or Firefox.  This made me realise that testers are more than people who "find problems" (the BA, a very talented and smart woman, did that); being a tester, it was in my nature to go further than just finding the problem, and to "define the problem".

A most interesting exercise for sure ...

Thursday, April 26, 2012

Roadrunner solutions ... why they need a tester

Last night I was reading Anne-Marie Charrett's article “I am the Queen of Defocus”.

It's an interesting piece – the idea being really “what is it that some testers bring to the project table?”.  Almost any developer understands the technology better.  Many BAs have a better understanding of what's written in the requirements.

Anne-Marie, in evaluating her role, says her skill lies in putting together the “big picture”, or what she calls "defocus".  Many developers can tell you how “their bit” will work, but as long as their piece works as they think, they're not really so intrigued by what's “outside the box”, because from their perspective it's not something they have to work on.  They focus on the areas they're in charge of changing.

A lot of this is understandable – you want people to pay attention to what they're doing.  But at the same time, you need a layer of scrutiny that's keen to take over from them and look at the bigger picture, where the components delivered act in a holistic way.

From this I found myself spinning a fun analogy (sometimes I can't help myself): The Roadrunner Project.

Wile E. Coyote is no doubt an engineering genius and shows great imagination.  His business case is really rather simple: “get that bird”.

To this end, he's supplied with everything he could need by the Acme Corporation, and gets to use their pre-tested commercial-off-the-shelf products to build a suitable roadrunner-catching solution.

The problem is, Wile E. is so focused on how his solution should catch that bird, he's never able (until too late) to see the 1001 ways his solution will go wrong.  Actually, watching Roadrunner cartoons is great tester training, because we'll almost always notice things Wile E. misses, such as:

  • The rock he's tied himself to is on the edge of a very dodgy precipice.

  • The trigger to chop his prey is set for Coyote weights, not Roadrunner weights.

  • Although the firework he's strapped to will give him tremendous speed, ultimately it’s designed to go “BOOM!”.

  • His "unit testing" of his trap failed to return the trap to its original start-up conditions.

Maybe Wile E. Coyote needs a defocused tester …

Saturday, April 21, 2012

The book that test built …

At the beginning of the year, Elisabeth Hendrickson put together a Leanpub book which was a collection of her notes and essays from over 15 years of writing about testing.

The Leanpub method is a fascinating way to “build an electronic book” from blogs. The idea of building such a book truly appealed and thus the journey on what was to be called “The Software Minefield” began.

As you can tell, that was January and I'm talking here in April – The Software Minefield was not just a dump of every article I'd written on and off this blog.

I've been writing this blog many times as a reaction to the many challenges of my job – sometimes there are themes over several posts, but mainly I write about whatever I feel inspired to by my work. And this is fine for a blog.

But a book is more than just a series of articles; it's a journey. And so I collected much of my writing into “chaptered themes”. Some themes were explored in detail; some were missing areas I'd like to see explored more, so I started writing new articles to “fill in the gaps”. Going further, while many articles from this blog were used as starting points, every one was visited and revisited – some were rewritten so much that only the original title remained.

In all, with writing, rewrites, proof-reads, and putting everything together in Leanpub, I probably read each article about 12-15 times. To be honest, by March I was getting sick of reading my own writing!

As any good tester will tell you, no matter how much you rewrite and revisit what you've written, there are still defects in there. I've recently had my Father (who is a technical author in his field of metallurgy) proof-read it, and so this weekend I've finally perfected my final draft.

It's been an emotional journey – but it's felt wonderful to achieve the destination. I've been inspired by people like my Grandfather who worked as a mining engineer – he would be able to tell me stories and fables about his mining career, sometimes sad, sometimes funny, sometimes the frustration still in his voice with plain stupidity he'd faced decades ago.

In talking with other testers, it became clear to me that, growing up in a family of engineers like my Father and Grandfather, I picked up a culture of engineering which I perhaps always took for granted. I've been really pleased with how other testers have responded to some of my articles, and to the book's aim of helping testers think of ways to develop and build relationships in what they do.

But also important to me is how non-testers like my Mother and my Son have picked up the book, and learned a little about what I do and went “well that made sense to me”.

It's all about storytelling …

[PS – Buy my book!]  

Friday, April 13, 2012

Icebergs ahead ...

This weekend will see the 100th anniversary of the sinking of the Titanic – some of you might feel compelled to watch the movie, or some form of dramatisation.

The story of the Titanic is one of a ship seen as virtually unsinkable, trouble-spotters yelling “icebergs ahead” and business owners insisting “full-steam ahead”.  Yes there's lots there to think about in terms of projects for sure.

What happened with the Titanic, as with many projects which get into trouble, tends not to be a single failure, but a lot of things conspiring to produce absolute disaster.

Virtually unsinkable

Notice the term “virtually” there (it'll come back to haunt us).  There was good reason for the claim of being "virtually unsinkable" – the Titanic had the most advanced system of safety features for its time.  It had watertight compartments, and could survive damage to 3 of its compartments and still remain afloat.

However, the iceberg strike damaged 5 compartments, and the watertight compartments didn't go high enough, so once they flooded to a certain level they failed completely.

Icebergs ahead

It was known that the Titanic was entering an area of icebergs.  Titanic received warnings of this over the radio, and some iceberg activity was seen around the ship.  However the ship never slowed.

It was considered more important that the ship arrive on time, and such liners were usually run at their maximum cruising speed to assure this.  It's interesting that Captain Smith had gone on record saying of icebergs that he could not "imagine any condition which would cause a ship to founder. Modern shipbuilding has gone beyond that."

Tragically – in this deadly situation the people in whose hands the ship rested were the lookouts … and they weren't given the tools (binoculars) to be effective.

Ironically, ships before had run into icebergs and survived with only bow damage.  It was the Titanic's attempt to evade the iceberg which caused its problems: it made the iceberg graze the ship's side, damaging multiple compartments as it went.  A frontal impact would most likely have damaged only one or two compartments (which the ship was designed to cope with).

Disaster contingency

Once the disaster happened, yet more problems conspired.  Much has been made of how the ship, despite carrying all the lifeboats required at the time, only had lifeboat capacity for half the passengers and crew.

But fewer than half of those on board survived … the crew were not practiced in lifeboat boarding techniques, not even knowing how many people each boat could safely carry – some boats were sent out only half full.

Combine this with the fact that the ships nearby were not obliged (by the regulations of the time) to keep radio operators on through the night, so they failed to receive the Titanic's distress signals, and mistook the distress flares for fireworks.

When you look at disasters like the Titanic, the Challenger Space Shuttle and the Chernobyl meltdown, there are similar themes which come out:

  • A lack of imagination to believe a problem could occur.  Who could believe 5 compartments could be damaged?  So contingency for only 3 was built in.  In hindsight, only having lifeboats for half the occupants is staggering – yet for the time this was considered more than adequate.
  • Overconfidence in technology.  Captain Smith's comments on how modern shipbuilding was beyond problems with icebergs turned out to be unjustified.  He didn't even see a risk in running at speed in an iceberg area.
  • Driven by demand.  The Titanic's need to keep to a timetable meant it would not modify its speed in a dangerous area.  Richard Feynman would say something similar during the Challenger inquiry: “For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”
  • Insufficient contingency or disaster plan.  Not using lifeboats to their full capacity.  Not having an agreed system with other ships to monitor for maydays or distress flares.

The Titanic has stuck with us mentally because of the scale of these follies and the awful human price that was paid.  That's why there will always be a sad attraction and fascination to its story, and why even 100 years on it still haunts us ...

Monday, April 9, 2012

The Easter Bunny Cometh …

It's that chocolaty time of year when everyone is talking eggs and bunnies. Loads of pictures of the cute critters are passing around the internet on Facebook and Twitter. And yet there is a small part of my mind which wants to cry out in terror …

I lived for a while in England as part of the community of the Isle of Portland, who have a particular quirk that a lot of outsiders regard as quaint. They have a mortal dread of bunnies. Bunnies are seen as harbingers of death and doom. The R-word is forbidden – in fact I've known someone who was asked to leave a bar because they thought it funny to mention the word “rabbit”, and it really upset the locals. When the film “Wallace & Gromit: The Curse of the Were-Rabbit” came out, they had to have special posters for the island which removed the R-word.

You've probably read that and had a bit of a giggle. It does sound unbelievable, doesn't it? And yet it's completely true. Now read on … there is actually something in this. Portland is most famous for its stone, from which St Paul's Cathedral in London, the Tower of London, Westminster Palace and London Bridge were all built.

Within the Portland quarries, warrens of rabbits would sometimes build burrows near the quarry edge. These would weaken the quarry sides, and often cause landslides which would lead to the deaths of the workers below. Workers began to notice the connection between bunnies and deadly landslides, and so someone calling out “rabbit!” would mean “danger ahead”.

Over time the fear of the rabbits remained, passed on by urban myth to each new generation. But the reasoning for it got lost.

The way urban myth plays out is often somewhat fascinating – usually in the stories the grain of truth remains, but a lot of the details get embellished so the real story gets lost. I've found that with a lot of legacy code, similar urban myths hang around about it.

At work at the moment we are using an application which is only about 15 years old. We're making a very small cosmetic change to part of it, but the team responsible tells us we have to run a complete 6-week regression test – which is obviously more costly than we'd planned. On paper it looks a minor change, which we should only have to test around, with a few token checks elsewhere. But several members of the team insist they know someone who knew someone who tried that before, and it caused major problems.

We've looked through this application's documentation, and we can't find this written down anywhere. And yet this team's insistence that this level of testing is required is so absolute that there's obviously something to what they say – but no quantifiable proof. It does make my role difficult – of course I challenge them on this, but at the same time I have to bow to their experience in this area.

However at the end of the day, I'd like to see just what is myth and what is reason in their argument ...