Monday, February 20, 2012

Learning to test with Big Trak



In the 1980s, Big Trak was the toy you wanted if you were a boy.

It was amazing - a motorised truck which you controlled by entering a program on the keyboard (see below).  You could choose to move it forwards or backwards, or turn it left or right, then enter a number to specify how far it should move or turn.  See a dated demo here!



Hence,


  • UP 2
  • LEFT 15
  • UP 2
  • LEFT 15
  • UP 2
  • LEFT 15
  • UP 2
  • LEFT 15


would cause it to move around anti-clockwise in a box pattern.
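If you want to play with the idea without the toy, here's a minimal Python sketch of a Big Trak-style interpreter (my own illustration, not the toy's actual firmware – it assumes the toy's clock-face turn units, where 15 units make a quarter turn):

    import math

    # Minimal Big Trak-style simulator (an illustrative sketch, not the real thing).
    # Assumes turn amounts use clock-face units: 60 units = a full circle,
    # so LEFT 15 is a 90-degree anticlockwise turn.
    def run(program):
        x, y, heading = 0.0, 0.0, 0.0   # heading 0 = straight "up" the floor, in degrees
        for command, amount in program:
            if command == "UP":          # drive forwards
                x += amount * math.sin(math.radians(heading))
                y += amount * math.cos(math.radians(heading))
            elif command == "DOWN":      # drive backwards
                x -= amount * math.sin(math.radians(heading))
                y -= amount * math.cos(math.radians(heading))
            elif command == "LEFT":      # anticlockwise turn
                heading -= amount * 6
            elif command == "RIGHT":     # clockwise turn
                heading += amount * 6
            print(f"{command} {amount}: at ({x:.0f}, {y:.0f}), facing {heading % 360:.0f} degrees")

    # The box program above: four sides, four quarter turns, back where we started.
    run([("UP", 2), ("LEFT", 15)] * 4)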

You could enter up to about 100 instructions, meaning you could create some really fancy patterns.



The toy line has recently been revamped for the 21st Century as Big Trak Jr, and you can buy one reasonably cheaply from places such as here.  [Yes, I got one myself.]

It's something I'd recommend anyone who's passionate about testing get their hands on, because it's a very tactile learning tool for teams.  Samantha Laing (@samlaing) was asking for Agile workshop ideas for testing, and I think this really gets the point across.

If you've got hold of a couple, set up an assault course to navigate, and ask a couple of teams to develop a program to get the Big Trak around the various obstacles.  More than likely you'll first see someone blundering on, trying to write one big program all in one go.  See how that works for them ...

Big Trak really parallels what we do in software.  You can give it to a developer, and it makes a lot of very nice and pleasant beeping noises, so you can assume it's making progress.  But you test it by hitting the GO button to execute the program, and more often than not you've missed something and it throws itself off the table or runs into the skirting board.

To make the Big Trak do something cool, you have to balance the programming you're doing with execution (testing) using the GO button.  There is even a TEST button to allow you to try out a single command.

The way to build up a complex program to get around an assault course is to do as we do with software: program a bit at a time, and test.  Then add a little bit more.  For instance, the team might have to trial-and-error the distance to the first obstacle before turning, etc.

Programming the Big Trak, it soon becomes obvious that if you are going to play with this toy, you're going to have to keep using the GO or TEST buttons.  The program really has no meaning until you hit the GO button.

Likewise in development, what we're working on has no meaning until we've attempted to execute it.  Testing our software, playing with it, is a vital step to seeing if we're doing things right, or sending our Big Trak over a cliff ...

Sunday, February 19, 2012

Roles and responsibilities 101



When I worked on military projects, on day one I would always be shown a project chain-of-command, detailing everyone on the project, their relationships to other roles, the pecking order, job titles and responsibilities.

In New Zealand, departments tend not to be quite as hierarchical, which can be both a good and a bad thing. Good because employees are a lot more empowered than in other parts of the world, and there's none of “that's not your place”. Bad because sometimes things get missed and become “I thought you were doing that”, because no two departments do things the same way, and so a lot of the time everyone just assumes things will be done the way they were on the last project they worked on … even though the team members have come from different prior projects (and so carry different unspoken expectations).

In some places it's even confusing when it comes to job titles. Previously, I was the senior tester for my department, and I'd hire "test managers" for projects who'd report to me. I would in turn report to another "test manager" for the whole company.

I decided that to deal with this I'd need to write up a list of everything I could conceive we'd need to do for testing, and allocate each item to a position – yes, roles and responsibilities 101. I also decided for simplicity that “manager” was overused, so maybe only the project manager should have this title …

The problem with smaller and newer companies is they might not have mapped out the needs of testing before. They might have previously used only developers or brought in contract staff. If you find yourself in this position, it's an ideal opportunity to take a real leadership role and set out some testing policy and practice. If you're an experienced tester, mentally touch upon every project you've been involved in, and just brainstorm and write down all the things that need to happen to make testing succeed.

Here's my list of the key responsibilities within testing. Within New Zealand most of these are “up for grabs” with our looser structure and “all muck in” mentality, although I'm told in America people are much more rigid. Basically, I feel someone should be clearly identified in your project to take responsibility for each of these actions.

Mentoring of other staff

Not everyone is experienced in the challenges and limitations of testing. There needs to be a centre of knowledge, which should be passed on where possible to more junior staff members, but also shared with non-testing staff such as project managers, developers, business analysts and market managers. This might just take the form of letting other members of the department know what is or isn't achievable within testing, as well as being the subject matter expert for testing.


Initial estimates for testing

Usually a project will initially be fleshed out by both business analysts and market management to address a business need or opportunity. Looking at the proposed solution, an initial estimate will need to be made of the expected effort to test, based upon experience of testing similar projects.

This is important, as these estimates will be used to define the testing budget. Too small and you risk pushing the project over budget. But too large and the cost of delivering the solution with testing might just kill the project as “too costly for the benefits it will bring”.

Secure testing budget

This is normally a project manager task, but it's important to testers, as you need this budget secured before you can get on and test. People generally like to be paid.

Resourcing of test staff

Whether staff come from within an organisation, or are brought in as contractors, someone needs to organise the resourcing of test staff. They need to have suitable start and end dates, and any access or permissions set up for when they start.

[As an ex-testing consultant/contract resource who now brings such people into his organisation, I pride myself on always having a login and security pass set up for anyone we're bringing in, ready for their first day. Something I'd often only get after week two myself.]


Managing delivery of software builds

As a tester it's very hard to test without software. It's got to be in a suitable state and delivered before testing can occur. This is one of those tasks I feel really falls on the project manager's shoulders – they're responsible for managing the developers who're delivering it, after all.

However it's possible those developers are external and might actually deliver the software via DVD or email “to the guy/gal in charge of testing”.

Organise test environments

Increasingly the software we develop isn't designed to work on just a standalone machine, but has some connection to a larger system. We need to ensure we have a test environment which is representative of the final production system, and someone has to take ownership of organising it so the testing that occurs is meaningful. Perhaps you have an environments team in-house, and you just need to book a slot. Maybe you need to organise a test server with an external supplier … but someone needs to take charge of this and drive it.


Authoring

Testing has a number of documents which should be created and developed during testing – test strategy, test plans, test requirements, test scripts, test exit reports. Someone needs to write them, meaning ...

Reviewing

Pretty much every document produced by testing needs to be reviewed by those above or at least on a peer level. Otherwise you miss things. This includes test strategies, test plans, test requirements, test scripts, test exit reports …

Progress

You've brought in a contract tester and he's been sitting at his desk on his computer being very busy. But has he actually delivered value this week? Are you making progress?

Junior testers need to keep senior testers up to date on what they're achieving and, just as importantly, anything which is slowing them down. The senior testers report up to the project manager. This is how the project knows the status of things within testing – what's going well and what needs intervention and assistance.

Booking time

Remember that test budget? The more time you spend on testing, the more that budget gets used up. So it's kind of important that everyone on the project books their time at least every week, so the project manager knows where they stand in terms of budget used.

Execute test scripts

Yes indeed, someone has to actually run the tests you've planned. Maybe you've decided to go in more of an exploratory direction for test execution – in which case you can reword this. But all the same, someone has to actually test the system.

Defects

If we're going to execute tests, we're likely to raise defects. These have a whole lifecycle of their own: they need to be raised, managed, reviewed (and possibly changed), retested and closed.
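To make that lifecycle concrete, here's a small sketch of the sort of defect state machine I mean. The state names and allowed transitions are my own illustrative choices, not any particular tool's workflow:

    # An illustrative defect lifecycle - states and transitions are assumptions,
    # not any specific defect tracker's workflow.
    LIFECYCLE = {
        "RAISED":           ["ASSESSED"],
        "ASSESSED":         ["IN_FIX", "REJECTED"],
        "IN_FIX":           ["READY_FOR_RETEST"],
        "READY_FOR_RETEST": ["CLOSED", "REOPENED"],
        "REOPENED":         ["IN_FIX"],
        "REJECTED":         [],
        "CLOSED":           [],
    }

    def move(defect, new_state):
        # Only allow the transitions the lifecycle defines.
        if new_state not in LIFECYCLE[defect["state"]]:
            raise ValueError(f"Can't go from {defect['state']} to {new_state}")
        defect["state"] = new_state

    bug = {"id": "DEF-42", "severity": "high", "state": "RAISED"}
    move(bug, "ASSESSED")
    move(bug, "IN_FIX")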



Potential test phases for a project

With all that set out, let's try to divide the work over a number of roles. For any project, there are likely to be a number of test phases. Typically they include,

  • Hardware testing
  • Unit testing
  • System testing
  • Acceptance testing
  • Usability testing (if required)



Test Strategist
Across all projects

  • Resourcing of testing staff
  • Mentoring in testing for project staff, managers and test resources

For a project

  • Initial estimates for projects
  • Authors test strategy document, including determining the types of test phases required
  • Review testing plans from projects



Project Manager
Responsibilities for project (regarding testing)

  • Manages delivery of software builds (and patches) so testing can occur
  • Secures budget for testing
  • Executes the agreed test strategy
  • Reviews and gives feedback regarding

           o Test Strategy and Plan
           o Defects


Testing Chief
Responsibilities for all test phases for a single project

  • Authors high-level test plan for all phases (or delegates to Lead Tester)
  • Coordinates between phases of testing – including getting weekly reports from Lead Testers
  • Arranges organisation of the test environment
  • Records time against project through timesheet
  • Reports progress to project manager

On some projects this role will not be required, with its duties split between the project manager and the Lead Tester.


Lead Tester
Responsibilities for a single test phase of a project

  • Authors detailed test plan – based on Test Strategy Document. Includes revising initial estimates.
  • Requests test environment
  • Reports progress to Project Manager and Testing Chief (if available)
  • Manages defects (assessment, communication, tracking, retest, closure)
  • Authors and manages requirements for testing (review, enhance, assign) based on design and business requirements
  • Review test scripts
  • Authors test exit report
  • Records time against project through timesheet



Tester
Responsibilities,

  • Authors test scripts
  • Execute test scripts
  • Raises defects
  • Retests defects
  • Communicates new defects and issues to Project Manager, Lead Tester, Development in a timely manner, determined by defect severity.
  • Reports progress to Lead Tester
  • Records time against project through timesheet




These roles are pretty scalable to any size of project. If you have a small project needing only one tester, the Lead Tester and Tester roles will both be taken by one person. The Testing Chief role will vanish, with those duties taken by either the project manager or the Lead Tester.

The most important thing, though, is to have something like this written down, and to talk it through with your project manager. All the things on the list need to be done, but they are also to an extent negotiable. It doesn't so much matter who does each item, as long as it has an owner who knows they're supposed to be doing it.
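If it helps, the “everything has an owner” rule is simple enough to sanity-check mechanically. A throwaway sketch (the task and owner names are just examples pulled from the lists above):

    # Throwaway check that every testing responsibility has a named owner.
    # Task and owner names are examples only - negotiate your own list.
    responsibilities = {
        "Author test strategy":      "Test Strategist",
        "Secure testing budget":     "Project Manager",
        "Organise test environment": "Testing Chief",
        "Author test scripts":       "Tester",
        "Manage defects":            None,   # not yet agreed - this is the gap to catch
    }

    unowned = [task for task, owner in responsibilities.items() if not owner]
    if unowned:
        print("No owner agreed for:", ", ".join(unowned))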

Are we there yet? - The metrics of destination


Consider these stories …


The developer's tale



You're taking your children on a long distance journey. You know your route and your final destination. You’re all packed up in the car, it's bright and you're wearing sunglasses ...

It’s going to be an exhausting drive. So imagine how you feel when you reach the bottom of your road ...

“Are we there yet?”
“How much longer?”
“When are we going to get there?”
“I think we need to go back.”

Frustrating, isn’t it? You’re trying to get on with your drive, but you’re being pestered for updates. And no amount of volume on your Cat Stevens CD is going to drown it out.


The manager’s tale



Where the hell is the bus? You’re at a bus stop, and it’s raining. You’ve been here for what seems like ages. You check the bus stop, but there’s no timetable and no indication of when the next bus is due. You try ringing the bus company, but after 15 minutes of being told your call is important to them, all the voice on the other end tells you is that buses are running today, and one will be with you at some point.

The minutes tick by and you’re sure you’ve been here for over an hour. You don’t know whether to give up and just get the car, or if the bus will appear in a couple of minutes. It's frustrating, and you feel an idiot no matter what you do.




These two stories are played out on many software projects around the world, and they lead to friction. The source of all this strife? The need for balance in a project between monitoring progress and just getting on with the job, and the role metrics play in all this.


Developers and testers often feel like the parent driving the kids. They want to get on and “just drive”, but they feel constantly pestered for updates. “How far are we now?” / “Two more hours” / “You said that an hour ago”. They want to concentrate on the job at hand (driving to their destination), but feel harassed into stopping at every gas station to check the distance and directions to their destination. They point out to the kids that stopping to ask so regularly is actually slowing down the journey, which would be so much quicker if they just kept driving.


Managers feel more like the person stranded at the bus stop. They know a bus is coming, but they've waited a long time already, and they want to know whether it's worth continuing to wait or making other plans. They're given some information – “a bus is on its way” – but it's so vague it doesn't really help with their decision making. It could be minutes, but it could also be hours.


These are the different values and priorities that those in technical delivery and those managing that delivery bring when looking at metrics. It's an emotive case on both sides of the fence. Look at those two stories: you most likely identified with both the parent being pestered and the man abandoned at the bus stop. In our office, do we have to take sides with one viewpoint or the other, or can we make it easier for both with a little compromise?


Why metrics matter


It's important to realise that metrics matter. I've learned this myself from working closely with project management. When I'm asked for estimates of testing time on a project I might say “ooh, 1 week for the best case with no major issues, 3 weeks for the most probable case, and possibly 5 and up for the worst case if we encounter huge issues”.


The project manager then has to secure a testing budget from that. They might only be able to get enough money for 3 weeks of testing. When you come to the end of week 2, if you look likely to need more than another week to test because of issues, how will they know? If you think it's now going to take 6 weeks, your manager will need to go to “the business” to get more funding for the projected overspend (unless they have enough contingency budget squirreled away). And “the business” will want some kind of evidence to back up the manager's claim that it is going to take 6 weeks. This is where metrics are needed to argue your corner. But which ones tell the most meaningful stories?
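For what it's worth, the back-of-envelope sum the project manager is doing looks something like this (using the numbers from the story above):

    # Back-of-envelope overspend projection, using the numbers above.
    budget_weeks = 3        # the testing budget the project manager secured
    weeks_spent = 2         # where we are now
    projected_total = 6     # our revised estimate after hitting issues

    overspend = projected_total - budget_weeks
    print(f"After week {weeks_spent}: projecting {overspend} weeks over budget "
          f"({overspend / budget_weeks:.0%} over)")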


Metrics that have value


As a tester, then, you need to be able to provide metrics that are meaningful. We also need to be able to provide them relatively painlessly, because any time we spend compiling metrics is time not spent “getting the job done”.


What about hours spent on the project? I know some people hate recording hours on a project. I personally think it's vital, because it helps a manager to determine spend on a project. And when I used to run my own testing company (Ultrastar Ltd), those hours on-project would become the hours I would bill for. And hence they were vital to the process of “getting paid” – suddenly this metric became important to me (funny that).


However, I don't really feel hours booked tell us “percentage completed”. They help us work out how much budget we've used up, and that's really important to managers, but they don't really measure our progress. It's a bit like trying to use the fuel gauge in our car to work out how far we've travelled. Your car manufacturer might have told you that it'll do up to 300 miles on a full tank, and you know your journey is going to take 200 miles. So when your tank is half full you must be 75% of the way to your destination? [Erm, remember mileage varies with car speed, road conditions, idling, age of car ...]


What about the number of test requirements tested and the number passed? Personally I like this metric, as it gives a good feel for how many paths and features we've tested, and I do think it's useful to keep track of (as long as it's relatively painless). However, I often joke that it takes “90% of a tester's time to test 10% of requirements”. If you use requirements tracing you'll probably know that not all your tests cover the same number of requirements. Usually the first test (happy day) covers a whole lot of requirements in one sweep, whereas other test scripts will be just as long but only test a single requirement.


In fact, I've known runs of test scripts where we've had 3 busy days of testing: we tested 90% of requirements on day one, 9% on day two, and 1% on day three. And this has seemed consistent on every project I've been on since – with later tests in an iteration typically including some of the more fiddly and complicated cases (you make sure a build can walk before you send it on a marathon).
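Here's a quick sketch of what that front-loading looks like if you trace scripts to requirements (the script-to-requirement mapping is invented for illustration):

    # Illustrative requirements tracing: the happy-day script sweeps up most
    # requirements in one go; the fiddlier, later scripts each add very few.
    coverage_by_script = {
        "S01_happy_day":  ["R01", "R02", "R03", "R04", "R05", "R06", "R07", "R08", "R09"],
        "S02_edge_cases": ["R10"],
        "S03_error_path": ["R10", "R11"],   # mostly re-covers what S02 already hit
    }
    all_reqs = {r for reqs in coverage_by_script.values() for r in reqs}

    covered = set()
    for script, reqs in coverage_by_script.items():
        covered |= set(reqs)
        print(f"After {script}: {len(covered) / len(all_reqs):.0%} of requirements covered")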


Measuring requirements tested is going to tell your manager how thorough your testing is. But the brutal fact is you might be 100% tested, with 98% of requirements passed, and it's still not a piece of software you're happy to see in production.


Another metric I've seen is the simple number of test cases run, and the number passed. I'm not a huge fan of this: it does measure the velocity of testing (although again it assumes all tests are of similar size), but I don't feel it tells us how many requirements and pieces of functionality we're checking. However, it's more than likely a lot easier to track this than the number of requirements if you're running manual test scripts which are just written up in Word (unless you're an Excel wizard).


What about measuring defects encountered for each build? Makes sense, yes? As we test each build we should see fewer defects, which means more quality. So for build 1.019 you found 4 defects, and for build 1.024 you have 28 defects – so that means quality is going backwards, doesn't it?


Well, no – it turns out that build 1.019 had 4 defects, of which 3 were so catastrophic that not much testing got done at all. Build 1.024 has those all resolved, and more testing is getting done – we only have 1 high-level defect now, 11 medium, 7 low and 9 cosmetic. So really things are looking much better. I like to track the number of open defects (in total, across all severities) as well as the number of open defects we can't go live with (ie. high or severe severity).


As subsequent builds get better you should see the number of defects decrease, but most importantly their severity decrease.
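Here's that comparison as a sketch, using the counts from the story above (the severity of 1.019's fourth defect is my guess) – tracking total open defects alongside the ones we can't go live with:

    # Defect counts per build: raw totals mislead, the severity mix tells the story.
    # Counts are from the example above; 1.019's fourth defect is assumed medium.
    builds = {
        "1.019": {"severe": 3, "high": 0, "medium": 1, "low": 0, "cosmetic": 0},
        "1.024": {"severe": 0, "high": 1, "medium": 11, "low": 7, "cosmetic": 9},
    }
    BLOCKERS = ("severe", "high")   # the ones we can't go live with

    for build, counts in builds.items():
        total = sum(counts.values())
        blocking = sum(counts[sev] for sev in BLOCKERS)
        print(f"Build {build}: {total} open defects, {blocking} blocking release")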


The best thing about modern testing tools, if you can get one into your department, is that they'll usually track all these numbers for you as you go through testing. It's like having a satnav telling your kids how many miles are left to the destination – it takes away a lot of the pain.


Regularity of metric updates …


A big, obvious part of getting the balance right is the frequency with which you need to provide updates on the numbers. Every month is obviously too infrequent (although I've known a good number of technical people who even complain about that).


On the other hand, every day can be draining for technical people – it's too frequent, and lots of tasks span a few days. Although if you've entered into formal testing, maybe every day is about right.


Otherwise, every week is a good interval, usually on a Friday, to sum up the progress of the week.


However numbers aren't enough


As you've seen, there's no magic single metric which really does “do it all”. Often there need to be a few being juggled. It's much like having a satnav which tells you there are 44 miles to go and your current speed is 50 mph. It feels comfortable that you should be at your destination within the hour. But traffic lights, roadworks and urban areas ahead might well slow you down.


And so numbers give you some awareness of possible risk areas, but they're not the whole story. Just as there is no single right statistic for doctors to use – they check heart rate, blood pressure, body temperature – we need to use different readings to measure the health of our testing.


Looking through the metrics suggested, each one can tell a different story (there's a sketch pulling them together after this list),
  • Hours booked on project. Is it lower than expected because testers are being pulled off onto other projects? Is it higher because the progress (as slow as it may seem) is coming from testers working late and weekends? Is it even accurate? If permanent staff aren't paid overtime, they'll often only book their core hours to a project to spare it the expense. And hence a manager might say “well, we can meet our targets by working evenings and weekends”, unaware that this is already happening.
  • Both requirements coverage tested and test scripts executed show us how well we're getting through our tests – whether we're capable of executing the scripts we have in the time we have for testing. If we can't achieve 100% coverage over at least a couple of builds (even if it's not all passed), then it shows we don't have enough capacity in our test team. Maybe we need another tester, or else we should look at reducing the number of tests we run, simplifying and merging where possible.
  • Requirements coverage and test scripts failed tell an obvious tale about the quality of the product, and give a rough indication of how much more ready this build is than the previous ones.
  • Defects in build and high-level defects in build help to show us whether our product is maturing, with the high-level defects disappearing, leaving us with the kinds of defects we could consider going live with.
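Pulled together, a weekly status can read off all of these at once. A sketch with made-up numbers:

    # A weekly status line reading off all four metric families (numbers invented).
    week = {
        "hours_booked": 76,      "hours_planned": 80,
        "reqs_covered_pct": 85,  "reqs_failed_pct": 10,
        "defects_open": 19,      "defects_blocking": 2,
    }
    print(f"Hours: {week['hours_booked']}/{week['hours_planned']} booked | "
          f"Coverage: {week['reqs_covered_pct']}% ({week['reqs_failed_pct']}% failing) | "
          f"Defects: {week['defects_open']} open, {week['defects_blocking']} blocking")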


We use metrics as part of our reporting. But our reporting should not be all about the metrics. If 10% of requirements failed in build 2.023, but only 5% failed in build 2.024, then that must mean build 2.025 will be a candidate for release, yes?


This is one of the problems with metrics, trends and graphs: we can't help trying to draw invisible lines through the numbers, and sometimes we see patterns that aren't there. Just cycling through the iterations doesn't make the software better build on build. What does is the management of the individual problems, especially the severe ones, together with action plans to get them addressed. It's only by managing individual problems and defects that you increase quality and make the numbers “look better”.


Metrics help to identify areas of concern, but sometimes there are factors in those areas which mean the numbers can be misleading. Like having 44 miles to your destination and doing 50 mph, but knowing that in a few miles it'll be urban areas and 30 mph speed limits from then on … so you're going to be over an hour rather than under.


When I used to work as an automated tester using QARun, I had an assignment to create 3 scripts in 10 days for different functional areas, and I had to provide daily progress updates. After day 5 I had still not finished script 1. In fact, after day 8 I still wasn't done with script 1. On day 10, all 3 of my scripts were completed.


My test manager regularly pestered me from day 5 onwards about my progress. I kept explaining the scenario to him, but it felt like he never listened to me, only to the progress numbers. You see, all three scripts were essentially very similar. I was creating script 1 very carefully, and once it was done it required only minor changes to produce the other two.


Yes, the numbers showed that my work was at potential risk, but my constant explanation of the nature of the assignment should have mitigated that concern and risk [my view]. Or should I have just fudged the numbers for progress on scripts 2 & 3 as I was working on them? [My manager's view]


At the end of the day though, numbers are only indicators. To please both the children in our car and the man waiting for the bus, we could tell them what they want to hear and say “just 5 minutes away” to calm them down. But 10 minutes later we'll have serious issues. Something can be 99% done, but the remaining 1% can be a big issue, whereas another project can comfortably go live at 80% done, because the missing 20% can be lived without.


Sometimes our metric indicators can cause us to stress about things which are in hand. Sometimes they can make us feel comfortable right before a fall. Metrics can be great, but they only have meaning in context.

Friday, February 3, 2012

Getting your testing project into orbit!


If you look at the underside of a rocket (hopefully not as it’s about to take off), you’ll find that there are two types of rocket engine with different functions,
  • Propulsion. This is performed by the huge main engines, the workhorses of a rocket, where most of the energy and fuel goes. They give it the power to take off, escape the Earth’s pull and get into orbit.
  • Steering and navigation. This is achieved both by much smaller engines (thrusters) and by steering mechanisms on the big propulsion engines. They use much less energy but perform a vital function: they’re there to steer the rocket and keep it on course.

A rocket launch will only be successful with a good balance of both propulsion and steering. Neglect the main boosters and the rocket won’t have enough power, and will come plummeting back to Earth disastrously. But neglect the steering mechanisms and the rocket will be unstable and easily topple over, leading to a similar fireball.


Much like the engines of a rocket, I like to see testing activities in a similar way: they’re either ‘moving us forward’ or ‘steering us in the right direction’.

Without doubt the activities that move us and our project forward are those of actually executing tests, raising defects and reporting. This is how we add value to our products – we find problems and work with developers to resolve them. This is how we make our product better, and really we should aim to spend as much time and energy here as possible.

And though we’d ideally like to spend less time on them, our steering activities are still important, because they make sure we’re on course with our main task. These activities include,
  • Test estimation
  • Test planning
  • Requirements review
  • Test scripting (and automation)
These actions have no real value by themselves, only through their relationship with our main activities.

In an ideal world you should look at the time and effort you spend on all these activities while testing a project. If you feel you’re spending more time on ‘steering’ activities than ‘main thrust’ activities, something is wrong, and you’re not spending enough time on the core of where you deliver value.
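One rough way to keep an eye on that balance is simply to bucket your timesheet hours. A sketch (the activity names and numbers are invented):

    # Rough propulsion-vs-steering check from timesheet hours (invented numbers).
    PROPULSION = {"test execution", "raising defects", "reporting"}
    hours = {
        "test execution": 18, "raising defects": 5, "reporting": 3,
        "test planning": 6, "scripting": 10, "requirements review": 4,
    }

    thrust = sum(h for activity, h in hours.items() if activity in PROPULSION)
    steering = sum(h for activity, h in hours.items() if activity not in PROPULSION)
    print(f"Propulsion: {thrust}h, steering: {steering}h")
    if steering > thrust:
        print("Warning: more steering than thrust - check which tasks still add value")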

This very imbalance happened to me a few years ago – I worked on an automation project where our scripts were often over 10 years old, and very badly coded in places. Frequently our scripts would fall over, and we’d spend a lot of time and effort during regression testing running them and fixing them. It became a bit of a joke that we ‘were getting very good at testing our scripts’ … but not so much our product.

Sometimes, just to get it done, it was easier to run the tests manually! And that’s the point. The value of these tests wasn’t in supporting the automation, but in actually having the test execute the desired function in our test application.

The obvious and idealised model, then, for getting your testing where it needs to be is all about getting as much testing and hands-on time with the product as possible – this is your propulsion, your core activity as a tester.

But you also need to do just enough of the navigation tasks to know that all this effort is pointed in the right direction to deliver what your business needs. If some task is becoming arduous (as with some of our automation), the big question has to be “is it really adding value to my core activity?”. All your steering tasks should make a clear contribution towards your core tasks; if something isn’t making the test execution phase better, then it’s probably excess baggage time-wise and needs a bit of trimming.

Test scripting is a good example. If you are writing complex and arduous scripts with no access to your product while you write them, you have to wonder if this is really going to add value. How do you know your script will align with the finished product? Wouldn’t a lighter scripting approach, using the product as you write, be better? You’d actually be doing some execution as you go (and thus finding bugs along the way).

Likewise with test plans. It can be tempting to try and make an iron-clad plan with all kinds of details, which is great if you have all that information to hand. However, I frequently find that no matter what you put in your test plan, there’s always something you’ve missed or something that has changed. Projects are almost living creatures, with an ability to morph, change and grow from inception. Better to get a plan together, and up-issue it as changes evolve.

Get the balance right in your testing tasks, and there’s no heights you can’t reach …

The problems we can't fix ...


Testers are often the ones deeply involved in finding problems in software.  But they are more than this.  They are a core part of how problems get solved.

It has an unfortunate mental effect on many of us, which I think we're not aware of.  We get used to the idea that we can fix any problem.  It’s what makes us positive people around projects - our managers can be panicking about the latest showstopper defect, but to us it’s something we’re used to dealing with.

We get very good at feeling there’s not a problem that we can’t work with.  Sadly there are limits.  Away from our test labs, not every problem or issue can be so easily solved.

It’s an unfortunate fact that life can be unfair, and sometimes quite brutal.  We tend to invent things like religion, karma or ‘what goes around comes around’ to make it feel fairer.


I’ve just sent a friend a copy of the book “The Road Less Travelled” by M. Scott Peck, which touches on some of this.  Its theme is very much that we often expect life to be simple and fixable, but we have to learn to accept that it’s complex, and accept the limits of what we can do.

Back in 2010 I lost my close friend Violet.  She was someone very special in my life, in some ways I’d want to say best friend, but she was also a mentor.  In your life you will meet only a handful of people who will champion you and see qualities within you especially when you can’t see them in yourself.  This was what Violet was to me.

Her death came as a bombshell.  It was upsetting, and I was so angry.  For about a fortnight my mind kept going over and over how unfair it was, how it could have happened.  Part of me really wanted to make it not so.  Much later I realised how much we behave like footballers who feel that if they protest enough to a referee, they can get him to reverse a decision.  But unfortunately God’s not a referee.

It was an awful feeling – my best friend was dead, and there was nothing I could do to “fix” it.  There would be all these moments and achievements in my life I’d now never get to share with her.  There’s an enormity in realising this friend is gone forever that you never quite seem able to come to terms with.

It’s something we’d really rather not think about, but during our working lives there are going to be moments of extreme upset in our personal lives, and also times of tragedy.  There’s an awful Superman myth in some organisations that professionals should leave their personal life at home and give their all at work.

To an extent, yes – we can’t turn up to work and be snappy and irritable with our customers and co-workers because we’re going through a bad patch.  But we’re not machines.  Sometimes we have to realise that if we are going through a bad time, maybe work isn’t the place we should be headed.

2011 was a terrible year for my team; my co-workers were put through the wringer in various ways – family bereavement, divorce, long-term injury, a house washed away.  What shone out was the way everyone tried to be supportive and sensitive within our team.  And this was echoed by my company, which really stood out as one where people's wellbeing was vital.

I knew the company provided a limited number of counselling sessions.  And a year on from Violet’s death, I’d still not really got closure.  You get to a point where you feel inside just how much impact this person had on your life, but you're also aware that, just as their life is over, yours needs in some ways to move on.

I took the courage to book an appointment with the company's counsellor.  I say courage because making such an appointment can feel like an act of weakness, and it’s a hard thing to admit “I can’t deal with this myself”.

But the session helped a lot.  Overall I was told my thought processes around Violet’s death were pretty much spot on; the problem was I was trying to go through the grief process to a timetable (typical tester on a Waterfall project there).  I just needed to accept this was going to take time.  I needed to hear that, and it lifted a huge weight from my shoulders.

This is in many ways a follow-on to my post about needing to let go of the urge to feel like Superman.  When bad things happen to us, we need to be wary of just soldiering on.  There’s no shame in being upset – just because we’re professionals doesn’t mean we're emotionally neutered.  Sometimes we need to stop and take time to deal with it, and not feel ashamed for doing so (although obviously we should always be wary of lingering and dwelling on an event to the point that we never move on).

Sometimes the thing that most needs to be fixed is ourselves …