Saturday, October 15, 2011

Dennis Ritchie dies




Last week Steve Jobs died, and consumers and world leaders stopped to pay tribute.

This week Dennis Ritchie died … and many outside the world of computing will go “who?”.

He was a computer scientist who not only created the C programming language, but also helped create the UNIX operating system in the 60s and 70s.

If like me you have ever done programming, you've no doubt done it in C – it's a hugely popular language, and an incredibly powerful and flexible one. It's spawned and influenced many of our current languages like C++, C#, Perl and Java. If you've programmed in C, then you've owned The C Programming Language by Kernighan and Ritchie.


The same can be said for the impact of UNIX, itself written in C.

These technologies have made pretty much every type of device and gadget you can imagine today possible – they're used to program everything from mobile phones to Xboxes. You cannot imagine the world of software today without them.

Quite rightly it's been said that the world of computing has lost a titan.


Wednesday, October 12, 2011

Those darn test estimates …



How long is a piece of string?  I'm tempted to be a wise-ass and say that to project managers when they ask me “how long will it take to test my project?”.

That's actually unfair – experience gives us as testers an idea, based on similar projects, of how long it'll take to test.  But one of the problems is that testing has a complex relationship with other activities in a project – we can keep testing and testing, but if development don't start fixing some bugs, we're going to be here forever!

So yes, we can look through the designed features and estimate how long it will take to script, and how long to execute those scripts.

But how long until the product is finished testing?

How long is a piece of string?

Having worked in a test consultancy, I have no doubt about the importance of estimates.  When a project manager looks at them, they need to be attractive but also realistic, with some contingency.

What we introduced was a list of estimates for testing tasks for “best case”, “probable case” and “worst case”.


                  BEST   PROB   WORST
Test Plan            1      2       4
Test Conditions      2      3       5
Test Scripting       4      7      10
Pre-Testing          2      4       6
UAT Execution        4      5       8
Retesting            2      5      10

This gives the test manager some leeway – usually, if things go okay, the project should follow the Probable estimates.  If they book the Best case estimates, be very worried.

What I'm finding is my project managers are taking my estimates, adding the Probable case figures together and multiplying by an hourly rate to get a budget. I don't know why I'm so surprised ... it makes sense, but I'm used to working against time and not $$$.
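
To make that arithmetic concrete, here's a minimal sketch in Python – the 8-hour day and the $85 hourly rate are purely illustrative assumptions, not figures from any real project.

# A rough sketch of the budget sum - day length and rate are made up.
HOURS_PER_DAY = 8
HOURLY_RATE = 85  # illustrative only

# (best, probable, worst) estimates in days, straight from the table above
estimates = {
    "Test Plan":       (1, 2, 4),
    "Test Conditions": (2, 3, 5),
    "Test Scripting":  (4, 7, 10),
    "Pre-Testing":     (2, 4, 6),
    "UAT Execution":   (4, 5, 8),
    "Retesting":       (2, 5, 10),
}

for label, column in [("Best", 0), ("Probable", 1), ("Worst", 2)]:
    days = sum(task[column] for task in estimates.values())
    print(f"{label:8} case: {days:2} days = ${days * HOURS_PER_DAY * HOURLY_RATE:,}")

Run it and the spread is obvious: the Best case total here is barely a third of the Worst case one.  That spread is the contingency, and it's exactly what disappears when a budget is struck off a single column.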

Unfortunately at the end of the last two projects we've been considerably over that budget.  There seem to be several factors at play which determine which of those estimate paths our testing will follow, and it's important to understand and recognise them.

Software Delivered Late

You book in a test contractor to help you test for 6 weeks.  They arrive in week 24 to start analysis and scripting, with test execution starting in week 26 and running for 4 weeks.

Then your chief developer tells you there's going to be a 2 week delay getting the build together, it won't be available until week 30 now.

You've made a commitment to your test contractor, and so are obliged to pay them, and possibly find them other work.  If you can't get them to assist elsewhere, then by week 30 you're 4 weeks in and not testing yet.  You've blown over half your budget, and Lord help you if there are any more delays!

We all know developers can often deliver late.  Late software is going to burn up budget.  You need to work with your Project Manager to make them aware of their duty to get software to you as scheduled in order for your budget to be met.

Software Delivered Is Of Poor Quality

Kind of the flip side of late-delivered software.  Your vendor has promised that the software delivered has been unit and system tested, and no bugs were found.

You wrote your test plan for acceptance testing, expecting the software to have been extensively tested beforehand, with a certain level of quality.  Your project manager and you are expecting what's delivered to be a candidate for release.  You turn it on, and immediately notice a dozen problems, unable even to complete basic use cases.

The developers, under duress, delivered what they had available to meet the schedule instead of flagging any delays.  Little if any testing has happened, and basic bugs are only being discovered now.  Testing 101 says "more bugs = more fixes = more builds = more retesting".

One thing I try and do with vendors is ask for a release note and an end-of-testing report, detailing what defects were found and which were fixed.  This is a bit of a game of bluff.  If I receive an end report which says “everything was tested, and no defects were raised” I get suspicious.  Very suspicious.

I've also had vendors on conference calls inform me “we're running a build up now, you'll have the install delivered in an hour”.  I pull my project manager to one side when this happens and warn them that maybe that will mean no testing whatsoever has been done …

The Delivery Chain

If you have developers on-site who you can hand defects to, and who then fix, build and test, it's possible to get a new build almost every day.

If they're off-site, only receive defects daily, and have to courier builds, you'll be hard pressed to get a build weekly.

If you have two weeks to test, and have a daily build, you'll have 10 opportunities to get it right.

If you have weekly builds, it's not likely to happen.  Your second build will have to be perfect – and it usually takes about 3-4 builds even with an initially high-quality piece of software (there are always tweaks needed).
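
To put some rough numbers on that, here's a sketch of the build-cadence arithmetic – the turnaround times are assumptions for illustration, not measurements.

# A sketch of the build-cadence arithmetic. Turnaround times are assumed.
TEST_WINDOW_DAYS = 10      # two working weeks of test execution
BUILDS_USUALLY_NEEDED = 4  # it usually takes 3-4 builds to get it right

for label, turnaround_days in [("On-site, daily builds", 1),
                               ("Off-site, weekly builds", 5)]:
    builds = TEST_WINDOW_DAYS // turnaround_days
    verdict = "enough" if builds >= BUILDS_USUALLY_NEEDED else "not enough"
    print(f"{label}: {builds} builds in the window - {verdict}")

Ten chances versus two – the delivery chain, not the testers, decides how many shots you get.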

Time erosion

It's so easy for this to happen.  You have,

  • a daily half hour team meeting
  • a one hour weekly project progress meeting
  • a one hour weekly project technical meeting
  • a daily 15 minute end-of-day defect wrap up meeting
  • each day you spend half an hour writing a progress report for the concerned business owner


Oh, you're giggling there, but we've all been there.  Did you add it all up?  Yes, you're losing about a day a week.  Look at your estimates – did you plan on there being so much leakage?
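
For the sceptical, here's the sum as a quick sketch, assuming a five-day week and an eight-hour day.

# Adding up the meeting list above - five-day week and 8-hour day assumed.
DAYS = 5

weekly_hours = (
    0.5 * DAYS     # daily half hour team meeting
    + 1.0          # weekly project progress meeting
    + 1.0          # weekly project technical meeting
    + 0.25 * DAYS  # daily 15 minute defect wrap-up
    + 0.5 * DAYS   # daily half hour progress report
)

print(f"{weekly_hours} hours a week")            # 8.25 hours
print(f"= {weekly_hours / 8:.1f} working days")  # about a day a week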

I'm finding we're increasingly working on projects where there are a large number of meetings to keep track of progress.  This needn't be a bad thing, and small daily meetings can help set the direction and key priorities of the day/week.  But it's easy for reporting to actually delay any progress being made, and become a sizable and unknown overhead in itself.

And some projects need test management – how do you budget for that?  It's not a solid “task” – again, more an ongoing overhead.

Requirements?

Requirements?  We didn't have time to write down everything we asked for!

Due to constraints, a project has been only broadly defined, but you're required to perform specific testing against it within a limited timeframe.  Oh, and the business analysts are too busy to answer your questions, so just get on with it and, you know, test!

This is a nightmare position to be in.  You press a button, a message is displayed.  But you have no idea if it's the right message or not.  There are some things you can do – you can check the application didn't die when you pressed the button, and the message made sense in the context of the button.

But if you have vague requirements, you can only vaguely test.  Such projects really feel like they're setting up the test project to fail.  And take the blame.

Another variant of this is when you raise about 10 defects against requirements; there's a review, and a business analyst says “oh yes, these aren't defects – I asked for these changes by phone from our vendor”.  If things aren't documented, how can anyone keep track of these changes?




Take it easy.  Take it nice and slow.  That's no way to go.  Does your PM know?

Thankfully there's usually a place for these factors in a test plan under risks and assumptions.  But I can't emphasise enough the importance of talking them over time and again with your project managers before you embark on any test estimates, so they can understand and more effectively evaluate the risks and the impact on budget.

Friday, October 7, 2011

Testing's Men in Black


At a time when the world is watching the Rugby World Cup in awe of a certain set of men in black, it's interesting to see how this story has been doing the rounds on the internet – thanks to the Testing Club's Rob Lambert.

http://www.t3.org/tangledwebs/07/tw0706.html

It's a no doubt apocryphal urban legend about IBM – but a lot of fun to read anyway!  In the 1960s the world of computing had a different emphasis – programming was a much slower business, and there was one shot at delivering software; no patching, it had to be right on release.

IBM supposedly found that programmers who wrote the code were blind to any faults in their software when it came to testing.  Some people, though, showed a natural aptitude, and thus one of the world's first test teams – “the Black Team” – came into being.

The Black Team were made up of the best-of-the-best when it came to breaking software.  They became a kind of bogey-man to terrify young developers, able to break any software they came across.  This tale here is just a brilliant parable of the supposed lengths they'd go to in order to test software ...

http://www.penzba.co.uk/GreybeardStories/TheBlackTeam.html

Tales of the Black Team go further, telling how team members started to form an identity together, wearing all black to the white-collar IBM offices – some even growing Dali-esque moustaches they would twirl sinisterly as they tested.


I'm very dubious – but I absolutely love how software testing, which sometimes feels like a very recent discipline (in New Zealand it sometimes feels not quite respected as a profession at all), has managed to pick up this urban legend.

But it also takes me back to one of my first posts here, on “what is software testing”.  The tale of the Black Team is all about a team who go out of their way to break things.  The story where they work out the resonant frequency of a large tape reader so it rocks itself over whilst reading a file is bang-on-the-money for a lot of people who see testers as people who just go out of their way to destroy.


Today in a management meeting it came out that the business owners see testing as a problem.  “The project was all going well until testing got involved”.  As if testers were responsible for the defects they encounter.  I think the reality is much closer to “we managed to delude ourselves that everything was fine until testing gave us a wake-up call”.  If testing is done well there's no hiding the truth of where a project lies.

But no – testing is not about breaking things.  It's about proving quality.  And that can be a bogey-man of its own to a complacent project.

Thursday, October 6, 2011

Steve Jobs - a legacy of quality ...


Steve Jobs has died today at the age of 56 …

He leaves behind him a successful legacy, with Apple now the most lucrative IT company in the world, having ridden out the financial recession where other companies have stumbled.

But is the financial ledger at Apple his only legacy?

I first heard people talk about Apple with real passion back in 2003.  Lots of people on my C&C project were getting Macs.  It was explained to me by one of our developers: “you know how on Windows, you try and install software, need to put an update on, has to reboot, won't work, then the update fails and trashes your installation?  On a Mac it just works”.

That seems to be part of the Steve Jobs alchemy – in a world where we're used to software not working first time, where we know we're going to get some quirks until there are a few updates, Apple seemed capable of making products that were virtually faultless.  In fact, it became big news if any issues were found.

Here are what I think are several traits which have made Apple unique,

  • They don't work to release deadlines.  They release a product when they're satisfied with it.  Only this week we had complaints that the iPhone 4 was 18 months old, and wasn't it time for a new one to be released?  Apple work to their own timetables, releasing products when they're ready.
  • Diversity.  Or maybe lack of it.  Nokia makes many models of phone, a new one each month.  Apple only make one phone – the one you want.
  • Simplicity.  Apple products are designed to be accessible by everyone.  Hence the uptake of their products includes people outside the normal “gadget freak” band.  So-called “silver surfers” (older computer users) find Apple intuitive where Windows leaves them baffled.
  • Quality.  As my developer so rightly put it, Apple products “just seem to work”.  [It looks easy, but is incredibly difficult to achieve.]


It's ironic, but many companies are obviously trying to chip away at Apple's lead.  They are envious of its strong position, and hungry to take some of the market, so

  • They rush out a product to be first to market.
  • They cram their product with as many features as possible, hoping to beat Apple on specification.

But it never works out for them, because in their gold rush to add features, their product suffers in the quality realm, and word gets out it's a stinker.

We all want a piece of the Apple pie.  I sit now in meetings which talk endlessly about the importance of “customer experience”, the Steve Jobs buzz-word.  But when marketing types are talking up the planned “customer experience”, they're not trying hard enough to bake quality into their products.  That's not taking on Apple – that's setting yourself up to lose your customers to Apple.

Today Steve Jobs was described as a contemporary Leonardo da Vinci.  I don't think that's right – his genius wasn't so much in technology as in business.  I described him at work as a modern Henry Ford of computing – who also made products only in black.


But tonight, weighing things up, I think he was perhaps a W. Edwards Deming of the modern computer age, a champion of quality – although of course, quality with a price tag.





Tuesday, October 4, 2011

Spinning Plates – The Test Manager's Stage Show

I've been a test manager for about 6 months now, all told ...  As a senior tester I used to scoff at what my old test manager got up to – but now I know!

If someone asked me what it's most like, to me it would be spinning plates …



We know the stage show.  Someone sets about 6 plates spinning, and keeps rushing between them to give each an extra bit of momentum, to keep them spinning and stop them falling off.

And that's pretty much what I do – our company has a whole host of projects in the pipeline.  I look at the future load for the next few months, and get involved in early meetings about them, review business requirements (if they exist), write the original master test plan, work out how much effort it should require to test, and try and organise test resources so we've got someone to do the actual testing.  And maybe a bit of sleight of hand to keep two projects from getting to testing at the same time ...



It means getting involved in a lot of projects.  Our department is part of customer delivery, and as you can imagine, a lot of projects come through us for testing.  I always need to have a trick up my sleeve in case something comes late, so we're not busting the budget.  Have a rabbit in the hat, just in case I need extra resource because time's running out.



As a rough rule of thumb (and don't tell my project managers) I always plan for myself to do pure test management … so when things get tight, I can go “all hands to the pump” and magic almost an entire tester out of thin air for 30(ish) hours a week.  Of course that can only be a short-term band-aid, and on some projects that's not enough.

But all of the above can become tiring!

In Agile it's said you become much less effective if you're always task switching during the day.  Something like 20%, they estimate.

Yesterday I kept track, and I worked on 5 projects during the course of the day.  Ouch!
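
Out of curiosity, here's a sketch of what that rule of thumb implies – the 20% figure is the commonly quoted one (often attributed to Gerald Weinberg), and the simple linear model is my own assumption.

# A sketch of the often-quoted rule of thumb that each project beyond the
# first costs roughly 20% of your time in switching. Linear model assumed.

def share_per_project(projects: int, switch_cost: float = 0.20) -> float:
    """Fraction of the week actually spent working on each project."""
    lost_to_switching = switch_cost * (projects - 1)
    productive = max(1.0 - lost_to_switching, 0.0)
    return productive / projects

for n in range(1, 6):
    print(f"{n} project(s): {share_per_project(n):.0%} of the week each")

On that model, five projects leaves each one getting about 4% of your week – which goes some way to explaining the “ouch”.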

Although I was joking about it on Twitter this morning, I am starting to show some signs of fatigue.  I'm getting bits mixed up between projects, and when someone asks me a question, there's so much I'm working on that it takes me a while to straighten it all out mentally.

At the beginning of the year, after being “on the bench” (not working on site, doing desk-based training), I was sharp, eager.  Now come October, with an unforgiving workload, we're just getting by week to week.  All the promise of trying to improve processes in March has been whittled down to “just get it out the door” – and not by management, but by me.  The delays that are part of any tester's lament have forced us into a level of technical testing debt, and we're having trouble getting things out on time because we're so close to delivery dates.

And I know I've talked about it before in this blog – but no-one wants to be the one to tell the business owner their delivery dates are unachievable, especially when everyone else is saying there's no problem.

I know people should have the courage to, but everyone quite rightly wants to give it their best shot at achieving it first.

So right now, I'm realising we're in a kind of testing deathmarch.  There are things we need still to get out this year.  But we have a freeze from late November onwards – I'm hoping we'll be able to catch up on ourselves a bit then, and hopefully set up 2012 for a bit of an easier year.

Otherwise I'm sure we're all headed somewhere in a very large basket ...