Monday, January 30, 2012

Schrodinger's Code ...



Recently Elisabeth Hendrickson published an excellent article about Schrodinger's Cat and software.  Schrodinger's Cat is a famous thought experiment (by the physicist of the same name) used in quantum mechanics.  The cat is placed inside a sealed environment, with a poison gas capsule which has a 50% chance of being released each hour.

The environment is then covered up, you walk away, and you return to it in an hour.  At this point the cat can be in one of two states – alive or dead.  But you cannot be certain either way; it's 50:50.  The only way to be sure is to lift the cover and take a peek.

I will repeat, it is a thought experiment – meaning you think about it rather than actually do it (that's animal rights off my back).  But it's more than a thought experiment, it's an emotional experiment too.  People like cats in general, with only the odd proto-serial killer taking exception to them.  So although even in the thought experiment you know from the logic that there is a 50% chance the cat is dead, emotionally you want and hope for the cat to still be alive.  [Well maybe YOU shouldn't have put it in the death trap to begin with]

But I see the same phenomenon, which I call Schrodinger's Code, on waterfall projects, especially outsourced waterfall projects …

Project managers will send work out to development teams, and the work will start to run late.  But that's okay, because the developers are promising that everything will be done one week before the release is needed.  The project is running over its plan, but it can still be delivered.

As a tester you'll ask to get in and test early versions of what development is putting together.  But your project manager insists that would only get in the way and it would be better to leave the developers to “get on with it”.

Can you see what's happening there?  Someone has put a cover on the death trap, and even though four hours have passed when it should have been left for just one hour, they are desperate that the cat should still be alive.  Because they really need it to still be alive.  It then becomes project management by desperation.

What happens then is that the code is delivered, but under simple tests it falls over.  Under questioning, the developers admit “we were so busy getting it to you we didn't have time to test it”.  The problem is that the scale of what needs fixing means the software is far from finished, and there's a lot more still to do.  The Project Manager and Business Owner conclude “everything was going well until testing got involved”.  The tester sighs and looks for a new job …

It's one of the things which has most struck me recently – what I call the death of unit testing.  Having moved not only from the United Kingdom to New Zealand but also from military software to commercial applications, the contrast with past projects is stark and sometimes shocking:  the shortness of timescales, the ambition to deliver more functionality in less time.  And it's testing which all too often ends up compressed.

Outsourced companies are desperate to deliver something to the project plan, even if they know it's not tested, so they can claim they've met their contractual obligations.  And everyone hopes that a single phase of systems testing can cover for a lack of testing anywhere else.  [“It could happen” is the reasoning]

No wonder there's a growing underground army of testers saying “it's not done until it's tested”, because until it's tested we have no idea if it's any good.

I'm a big believer in testing often and early.  I did a presentation to some developers about testing where, as an ex-developer, I said that the greatest reward you can have is the first time you run some code you've been working on for weeks, and experience for yourself the thrill of it working.  It's something developers should not just be encouraged to do, but find true joy in.  Otherwise it's like someone making toys which they never get to play with.
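To make that concrete, here's a minimal sketch of the kind of developer-level check I mean – run the moment the code exists, rather than weeks later in a formal test phase.  The discount_price function and its expected values are hypothetical, purely for illustration:

def discount_price(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

def test_discount_price():
    # The sort of quick checks a developer can run as soon as the code exists
    assert discount_price(100.0, 0) == 100.0    # no discount applied
    assert discount_price(100.0, 25) == 75.0    # a typical case
    assert discount_price(19.99, 10) == 17.99   # rounding behaves sensibly

if __name__ == "__main__":
    test_discount_price()
    print("All checks passed - the cat is still alive.")

It takes minutes to write, and it's the difference between peeking under the cover now or hoping for the best later.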

What am I asking for in terms of Schrodinger's Cat?  Well basically that we lose the cover, we keep an eye on the cat the whole time, and if it gets into trouble, maybe we need to rethink it all, smash the glass and rescue the cat, rather than hoping the cat gets lucky …

Sunday, January 8, 2012

The fundamentals of testing ...

I've been working to try and capture the fundamental aspects of testing, and what our "prime directives" and place within projects should be.

Here's a revised version of what I feel are the essential drive and vision for a testing department.  Whatever your product, whoever your customer is, you should find yourself nodding along to the majority of this.


The value of testing

Testing is important. It holds the key to safeguarding the quality and reputation of services from your organisation.

The projects most testers will encounter during their career will be diverse in their ambition, scale and partnerships.  Thus a one-size-fits-all approach will not work, but there are some essential traits which testers should identify as necessary for their role.

Mission statement


Testers should be dedicated to serving and championing their organisation's customers and business users.  To that end, the greatest customer experience rests on quality, and on our customers' trust in the services we provide.

Testing in your organisation should be dedicated to ensuring that any product you deliver will, wherever possible, continue to support your customers' trust in your organisation.

Your aim is to understand, document and assess any risks within a new system, and to prevent any surprises when that system goes into production.


The Principles of Testing


The following principles should be upheld and championed by testers,

Objectives

  • The aim of testing is to ensure products are fit-for-purpose and to enable them to be delivered.


Assisting Project Management
Testers should,

  • Use their experience within the software industry to mentor Project Management on the testing process, as well as seeking to support the decisions of management.
  • Regularly report to project management regarding progress of testing activities and expected timelines.




Defects
Testers should,

  • Work to illuminate high-impact defects rather than report cosmetic defects en masse.
  • Work to communicate verbally and directly with development whenever this is possible.
  • Communicate any high-impact defect to project management ASAP after it is raised. A similar attitude should be taken to any holdup which is delaying testing activity, such as delays receiving documentation or a test environment not being available.
  • Work with development and software providers, not against them. Testing should be an exercise done in partnership with these people to ensure the highest quality of product possible.
  • Clearly communicate problems within defects.  Defects should include reproducible steps, together with reference to requirements/design where possible (a sketch of what this might look like follows this list).
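As a purely hypothetical sketch of the kind of defect write-up meant above (the product, steps and references are invented for illustration):

Title: Checkout total ignores discount code on the order confirmation page
Severity: High – customers would be charged the undiscounted amount
Steps to reproduce:
  1. Add any item to the basket.
  2. Apply the discount code SAVE10 (accepted; the basket shows 10% off).
  3. Proceed to the order confirmation page.
Expected: the total reflects the 10% discount, per requirement REQ-PAY-042.
Actual: the total reverts to the full, undiscounted price.
Reference: Business Requirements v1.2, section 4.2 (payments).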


Testing activities

  • Early access to any software under development will help with both test scripting and early feedback on the software.
  • To best understand a product and its intent, testers will need access to business analysts, technical architects and developers, as well as any relevant requirement and design documents which allow understanding of the planned product.
  • Where the behaviour of a planned product is complex, testers should take leadership and ensure design walkthroughs occur.  Here the functional behaviour will be talked through with business analysts, architects and developers to ensure a working knowledge of the proposed product.
  • The scope of testing should be scaled according to its appropriateness for this project, rather than because it was done on another project.
  • Test scripting should add value, and function as a repository for future regression testing.
  • Wherever possible, testers should attempt to verbalise any issues encountered before formally communicating them.



Doing things better
Things don’t always go well on a project. Testers should be committed to learning from past mistakes to avoid repeating them. But neither should they overreact.





Deliverables
The following deliverables should be provided by testers for any test project,

  • Test strategy document – initial scope and estimates for each testing phase applicable to the project, defined early on, from which the test budget can be calculated.
  • Detailed test plan – based on the test strategy, this is a more comprehensive plan for a particular phase, built on the available business/technical requirements.
  • Test scripts – a structured record of what is planned to be tested / was tested on a software product.
  • Proof-of-testing – details of what was tested, when and by whom. This might be the test script with the tester's name and date on it.
  • Defect report/notification – communication of any defect raised
  • Test Exit Report – final document detailing what has been tested, what problems were encountered, what was fixed, what defects are outstanding, and why it’s considered acceptable.


Standard Risks
The following are standard risks which can affect testing timelines and budget,

  • Late delivery of software for testing – the software for testing does not become available until later than planned, and hence testing is ready to begin, but cannot commence.
  • Dependency on prior testing – within any test plan there tend to be dependencies on some testing having occurred beforehand. Otherwise this phase will be the first point at which many defects are detected, resulting in a higher than expected level of retesting. This can be mitigated a little by asking for release notes of fixed defects, and for test exit reports from prior phases which show which issues were encountered and resolved.
  • Time erosion – have any events which take testers off-task, such as meetings, been incorporated into the test plan? Meetings are a vital forum for understanding and communicating issues on the project, but the level of meetings needs to be known, and kept to “just enough”.
  • Requirements – is there a single source of truth for technical, business and design requirements to cross reference? Are the people who wrote these documents available? This will help reduce unnecessary defects being raised.
  • Build turnaround – the amount of time it takes development to turn around a new build and address defects is vital. Most software, even essentially good software, requires 2-3 builds to get polished for release. If your developers can only agree to a build weekly, and you have 3 days of retest planned, this will not work (see the worked example after this list). There need to be planned software release drops.
  • Test environment – has the designed architecture been understood well enough that the correct test environment has been ordered and set up?
  • Mis-estimation of scope – has an element or system been assumed not to be part of the test effort when it actually is? This will cause more to need testing than expected. This is why a Detailed Test Plan needs to revise initial estimates, and be re-reviewed in line with signed-off Business and Technical requirements when they become available.
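As a hypothetical worked example of that build-turnaround risk: suppose the plan allows 3 days of retest, but experience says the software will need 3 fix-and-retest cycles before release.  If each fix has to wait for the next weekly build, those 3 cycles stretch across roughly three weeks of elapsed time – far more than the 3 days in the plan.  If a fresh build can be produced every couple of days instead, the same 3 cycles fit into roughly a week.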

Sunday, January 1, 2012

Let 2012 be the death of Superman!



Christmas break is a wonderful time for reflection on the year that's been, heck that's the reason we often find ourselves starting the new year with resolutions.

New Year's resolutions come in for a lot of stick.  But really the driving force behind many good ones is a retrospective of the year.  I'm organising a meeting with our team in January as a reflection on 2011.  I think it's best summed up with “we achieved a lot of things in 2011 we can be proud of, but the idea of 2012 just being a repeat of the same ways of doing things is enough to make me cry”.  And yes, those were my words!

If I had to sum up something for myself which has to die, it is the myth of Superman.  In a nutshell it's the idea that anything is possible as long as we put a superheroic effort into it.  If we fail, we're not working hard enough.  It feels like something that harks back to the 80s mentality of “successful people make their own success”.

In actual fact, as we all know, the truth is often that if we try to do too much, we just burn out, get tired, end up making more and more mistakes, and actually deliver less.

Most of us who are professionals have a desire to “go the extra mile” (so 80s in itself), to try and move heaven and earth to make the impossible or ambitious happen.  We want to be the superhero who says “I can save you” to our project leaders.  This in itself isn't a bad thing.  We've all worked with naysayers and jobsworths who, when presented with a problem, will just shake their head and cluck their tongue like a plumber who's about to present a really big bill.  These kinds of people can be awful to work with and can be defeatist before you've begun.

There's nothing wrong with wanting to take on something ambitious.  We should feel stretched, challenged and engaged by our work.  As testers we are problem solvers, and we rise to a challenge.  Nothing is worse than just testing-by-numbers, repeating scripts and formulas which were set down years before you arrived – it's boring, and you feel undermined as a tester because you're not really doing anything creative or bringing anything new to your work (PS – why haven't you automated such a project?).

However the difficulty with taking on a challenge is to know when you're over your head, and having the courage to admit it.  This year as I've said I had a period of two months where I took a lot on at work.  And rather than do a few things well, I did a lot of things kind of poorly, struggling to keep my head above water.

The irony was that although my managers were giving me these tasks, it was me who was putting myself under pressure to accept everything and try and "get on with it".  Superheroes like Superman don't admit defeat, even in the face of the kryptonite of multitasking, which robs us of our ability to do anything well.  They just try and plough on and still play the hero.





I've read two things this week which have reinforced a rather important message to me ...

The first was the mission statement for the 2008 season of training at Farnborough Rugby Club (which I discovered whilst tidying up my office).  It's a wonderful document, stirring and inspiring (there are few corporate ones which manage that).  But within it, it talks about building success through each player “looking to themselves to see where they need to develop, and where they need help, and looking to coaches and fellow players to achieve that”.

The other was a short story of mine, “The Saga of The Stone”.  It's an account of a true-life incident which happened to me in 1991, when as a young student I'd gone out for a night walk on the Moors and had an accident.  Although I got myself out of immediate danger, I ended up coming off the Moors on the wrong side (20 miles from home) and suffering from hypothermia.  I was too determined and proud, and it took a lot for me to just ask for help / rescue from a local Church vicar.  In the epilogue I summed it up:  “it's okay when you're lost to ask for help”.

I think the Rugby club probably said it best.  Players were asked to identify for themselves where they needed help, so they could become better.  It wasn't about admitting weakness, as much as building future strength.

So that's my resolution.  No more Superman, just my mild-mannered alter ego.  When I feel swamped, I'm going to say “I need help”.

Could it be that simple?