Looking back at my two stories before going on to discuss KWST2 (Kiwi Workshop on Software Testing), it's interesting to see my different take on two similar tales ...
In Wernher, I'm overall sympathetic to Wernher von Braun as a man in a difficult place under the Nazis, just trying to get through the war. In The Man On The Cutting Room Floor, though, I feel there's an element of relish and karma in the fact that Nikolai Sevnik suffers the same fate as those whom he's helped to paint out of history.
There's probably something desperately unfair about that – in a way there are many parallels between the two stories, with both Wernher and Nikolai,
- living under difficult and brutal regimes
- both dreaming of something better – Wernher through rocket exploration, Nikolai through art
- both knowing of the atrocities of the regimes they live under, yet doing work that is complicit in supporting them
In the very act of reading, much like following Winston Smith's tale in Nineteen Eighty-Four, we realise how difficult their choices are and empathise with them to some extent – even though their choices are not ones we'd readily make in the society we live in.
That is the benefit of “experience reports” or “war stories”, which a conference like KWST2 brings out. Much like my two stories, they allow you to walk in the difficult shoes of another person. This is why the choice of “ethical choices in the workplace” as a theme turned out to be such a great one.
Some of the stories we heard during our sessions were about:
“A bad tester and a bad role model”
If someone is visibly seen to be associated with testing, but you feel their ideas and values they put out are to the detriment of the testing community – how do you challenge that? Do you seek to confront them, to educate them or do you try and distance yourself?
This initially seemed a little judgemental. But when you introduce yourself to a new team member with “hi, I'm a tester” and they roll their eyes and go “oh, not one of those people”, then your profession – and by association you yourself – have been tainted by the reputation and expectations these “bad testers” can create.
“My last tester gave me the metrics I wanted”
Following on from that was a tale about how these “bad testers” could set expectations that were detrimental to the whole profession. One of the most contested metrics in use is “percentage tests completed”. The ISTQB and several “experts” like the one in the story above will tell all and sundry that this is a good metric for monitoring testing and reporting regularly to management.
Most good testers will spot the problem from the off. You can report “percentage test cases passed”, but the tester themselves knows that some of those tests are 5 minutes long and some are 3 hours long. It's like following the advice to “eat 5 pieces of fruit a day and stay healthy”, so you eat 5 currants whilst your friend eats 5 apples.
For a number to be a metric, it must be just that, a measurement. Take this scenario: I have 10 tests to run,
- 4 should take 15 minutes to run if there are no issues
- 2 should take 30 minutes to run if there are no issues
- 3 should take an hour
- 1 should take 3 hours
At the end of day one, I report I am 50% through. So how long will it take me to run the other 50%? Will I be done by tomorrow?
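The arithmetic behind that trap can be sketched in a few lines of Python. (Which five tests actually ran on day one isn't stated above – assuming the tester knocked off the quickest ones first shows the worst case:)

```python
# Expected run times in minutes for the 10 tests in the scenario,
# assuming no issues: 4 x 15min, 2 x 30min, 3 x 1hr, 1 x 3hr.
durations = [15, 15, 15, 15, 30, 30, 60, 60, 60, 180]

# Hypothetical worst case: the five quickest tests were run on day one.
day_one = [15, 15, 15, 15, 30]      # 5 of 10 tests -> "50% complete"
remaining = [30, 60, 60, 60, 180]   # the other "50%"

pct_tests_done = len(day_one) / len(durations) * 100
pct_effort_done = sum(day_one) / sum(durations) * 100

print(f"Tests completed:  {pct_tests_done:.0f}%")    # reports 50%
print(f"Effort completed: {pct_effort_done:.0f}%")   # actually ~19%
print(f"Time remaining:   {sum(remaining)} minutes") # 390 min, not 90
```

So “50% of tests complete” can mean anywhere between roughly a fifth and four fifths of the actual effort, depending entirely on which tests happened to run first – which is exactly why the number on its own answers neither question.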
This is the problem with metrics, and this metric in particular: it can be manipulated to tell almost any story. The peer conference was not against reporting progress to management – on the contrary, there was a feeling of obligation to tell project management what testing was achieving and the milestones it was reaching. But more through collected defect reports and progress on areas of functionality tested, not through vague and potentially misleading numbers.
“Who steers the test community?”
So there is a group of testers with certain practices, and maybe certificates to prove they belong together. And over in a different area there are some more testers with a different set of practices. So who are the better testers, with the better practices, and the right to call themselves “the test community”?
What became apparent through conversation was that no single testing thought leader has the right, no matter what their credentials, to declare themselves “The Test King” and demand the community bow down to them. Instead, by building reputation, groups of testers should be able to cluster together, exchange ideas (much as was done at KWST2) and reach consensus. That way the community would move forward as a group and not as a cult.
Senior testers and leaders needed to understand that their ideas would be challenged – this was a form of empirical testing to work out the value of those ideas. It could feel like a brutal system at times (I can vouch for that personally), but this is how the community advances. There was a great quote that we're all on an airliner, and we have a habit of deferring to the Captain's authority (our thought leaders) at all times, but the truth is most of us have a pilot's licence ourselves, and we won't learn from the Captain unless we occasionally question their actions. What's more, those Captains need us to question their actions to stop them from doing something really dumb when they're not paying attention – because even Captains are fallible.
In science, new theories often cause uproar because they challenge what is seen as the accepted truth of the time. Both Galileo and Darwin put off releasing their ideas – of the Earth moving around the Sun, and of evolution – for fear of ridicule and persecution. But overall we benefited from their perseverance.
The community cannot always move forward together. In an ideal world the whole world of testers would advance as a mass movement. But given the nature of free will, and the way we see different values in different aspects of testing, there will always be some element of friction over ideas. And sadly, sometimes those fractures cannot be band-aided. However, we can't stay stuck in the past thinking the Sun moves round the Earth. Some ideas are so powerful they have to be explored – nothing should be seen as untouchable or sacred in testing.
“I'm watching you … always watching you”
When a co-worker does something which you feel is wrong, what action should you take? Do you go straight to management and report them? Do you give them the benefit of the doubt and just keep an eye on them? Do you try and discuss it directly with them?
On the topic of just reporting people, we started by talking about Nineteen Eighty-Four-style regimes of monitoring. I lived for a while in East Germany as a student, and the stories of the Stasi secret police keeping notes and tabs on people were deeply unpleasant. Everyone distrusted everyone, and it created a climate of fear. And so a project full of people reporting each other to management for the slightest infraction is really the kind of place I would not want to work.
I'd like to think I'd always want to get involved and coach the individual back into line as best I could – in fact I've done that myself several times. Often the individual had been upset or just not thought something through, and what they'd done had a minimal and fairly unseen impact. When it comes to reporting someone to a manager, it just feels to me like I've failed (though sometimes it's a last resort you sadly have to take). But as the stories of Wernher and Nikolai have shown, the difficulty with ethics is that it's all too easy to say “well, I wouldn't do that” and pass judgement on another's actions – yet your co-worker may have laboured through a difficult ethical battle of their own before taking that action. This is the importance of “walking in another's shoes” first, and in my opinion engaging with that person wherever possible before escalating.
Yet in saying that, I've also known co-workers who were caught using their work account for the most horrific acts (two members of a past company were discovered using their work machines to distribute child pornography). There are some things which are inexcusable and criminal, in such cases had I known I'd have had to report them, as they'd crossed a line that trying to talk and engage with them would not help.
“Just do what you're told”
Variations on this theme came around again and again in experience reports. The scenarios involved a non-tester (often management) giving explicit instructions to a test team on what they should and shouldn't do. In one powerful example we explored, a project manager tells the test team, “look, we're going to ship this product in two weeks as is. We want you to continue testing – but any more defects, we don't want to hear about them”.
What do you do? The point of testing is to confirm behaviour, and where it deviates from expectations, report bugs. If you can't report a bug, what's the point of testing?
If you follow to the letter what you've been told, then what you're doing is acting unprofessionally – what James Bach calls “malicious compliance”, letting something fail because “I was only obeying orders”.
But what do you do? This is where the theme of “who does testing serve?” arose – do we serve project management, or do we champion customers and end users? In actual fact we're in many ways answerable to both. This was why many felt that the test team needed to be slightly independent of the rest of the project, so it can make its own calls on such doctrine. It's a good point, although the power of having the test team under the project umbrella is the feeling of “we're all in this together” – there's much more sharing with testers than when testing's seen as just coming in to audit the software.
Strategies for dealing with this request involved a level of disobedience which attempted to honour the intent of the request, but also continue to work in a professional manner. And so bugs encountered would still be noted, but use of the company official bug tracking tool would be avoided.
If a defect was a high-impact one, someone would try and talk to the manager and developers: “look, I know we're not supposed to find any more bugs … but we found a big one”. It might be that the project manager only made that comment in a moment of frustration – we've all heard variations of “this project would be fine if only testers didn't find any more bugs”, as if it's the testers who put the bugs in there (testers don't break code, it arrives to us already broken, to quote Jerry Weinberg). In truth, that project manager, despite saying “don't tell us about more bugs”, would actually be pretty peeved if people followed his instructions and failed to tell him something critical about his product.
To sum up …
An amazing couple of days. I originally felt the format was too confrontational on day one, but by day two, when consensus was emerging, it felt like a course of therapy where we'd made real progress, got to learn more about each other, and found a lot of common ground. Just as importantly, I felt the two days had taken a group of peers and forged some important friendships from our shared experiences.
It's with great pleasure I've heard that several people behind the event are launching a regular WeTest meetup in Wellington, which I'm already looking forward to …
A couple of snaps I've stolen from David Greenlees (who had the honour of having a variation of Greensleeves sung to him by James Bach during the event) ...