I'm a member of my local Les Mills gym. Without doubt one of my favourite classes is Impact, a boxing-themed class where we put on our gloves and hit various punch bags and pads, with a few bits of skipping, squats, press-ups and burpees thrown into the mix. It's a very fun but challenging class, with a great atmosphere.
After class this week, I stumbled upon a History In Pictures photo on Twitter, and shared it with my Impact instructor Josette.
It's a picture of some women training on a rooftop in the 1930s. I didn't notice much except that,
- It's black and white, and the kind of grainy quality of a 1930s photo
- The cars in the background look the right period
- Clothes and hairstyles seem period
- The girls all have really rather nice legs (I know, potentially a bit sexist, but the honest truth)
Have a look again? Notice anything?
From my own knowledge, I know that for this historical period women's professional boxing would be rather unusual. Only relatively recently have women been taken seriously in boxing - as Josette of course knows.
Josette loved the photo, but had an observation to make ...
"I wonder if they were part of a circus. Just the accessories on the floor for juggling and something that resembles a small trampoline."
My reaction to this was "huh? What accessories?". Darn that I was too distracted by legs to notice, but yes indeed, there they are! Once you see that, your understanding of what you're seeing changes. More than likely this isn't sparring proper, but some form of staged/faked fight for entertainment (and probably titillation of a male-targeted audience ... so maybe the legs ARE important).
So what's it all about?
We all know that two minds are better than one. Likewise two sets of eyes. Indeed the very existence of testing as a profession has come from people finding that we're more likely to find issues in software if one person writes the program, and a second person tests it. Studies have shown time and again that this gets better results than a single developer who "programs and tests". That's because a developer who tests their own code uses a mental model of how the software should work to build it, but then uses exactly the same model to test it. We don't check anything we haven't already anticipated, and that leaves us blind.
There are lots of pieces out there about the impact of inattentional blindness in software testing, especially from context driven testers. To put it simply, we can get hung up looking for one thing, and miss something else that's under our nose.
Although here I joke about the girls' legs, I was so busy looking for clues to the picture's period (rooftops without aerials, cars, picture style, clothes, hair) that I actually missed a lot of key things in it - namely the trampoline and various juggling items.
When we were out exploring Baring Hill, I noticed some things about the site, but my son had to bring my focus down to the pipes in the ground for me to notice them, follow them, and use them to tell a story.
This reminds me of something I was told years ago when we were giving a statement about a traffic incident to a police interviewer. The police try to gather eyewitness statements from as many sources as possible to piece together a picture of what's happened in any incident - usually everyone recounts the same events based on their perception, but not everyone tells it the same way, with some adding details that others don't think to mention. One baffling example was a case she'd been part of where several people had independently described a suspect to her, but only the eighth person mentioned "and he was blind". In actual fact, the previous seven had forgotten to mention this key detail, although later all went "oh yes - he WAS blind". Sometimes we're so focused (which is the same as saying distracted) on the little details - hair, shoes, clothes - that we miss the big stuff.
In all these stories, having an extra perspective allowed the person to question or state things, to bring them to the table for discussion, and thus expand on one person's perspective.
This is the power of pairing.
Who to pair with?
Most people, when they talk about pair testing, usually mean testing between two peer testers. I've not always had the luxury of another tester, especially when working on small projects.
Sometimes I've paired up with another tester outside of my project to talk about my approach and bounce ideas off - it's always been useful for me to talk and justify my approach with someone I trust, and I often get ideas regarding fine tuning my focus.
There is a mix of people on a project, though, who you can get access to, and doing paired sessions with them can be a great way to exchange knowledge and values. Pairing up with these people for brief testing sessions where you demonstrate the system to them can be an enriching experience.
Here are some typical people you can look to pair up with, and what they can offer you,
Admin users - they may be in-house or customers. Especially on legacy projects where you hear the words "not everything is documented", these people are a Godsend. They usually belong to cultures where their knowledge is passed on verbally admin-to-admin as they are mentored in the system - often they know the obscure business processes that make the current system tick. Spending time with them allows you to understand the key processes which keep the system ticking over. Likewise when you have new functionality, they can really help you zero in on the pieces around it that would impact the business if they didn't work. This can help you to focus your testing time on high value areas.
Business Analysts - they are living requirement oracles. Having been in the meetings that shaped the requirements, they understand the "whys" regarding the importance of key features. You'll learn a lot about your customer through them: "oh, Karen asked for this because currently when they do X, they have to do Y first to activate it, and it takes them extra time to do so".
Marketing - an interactive session with marketeers helps them to understand how the system works, which helps them tighten down on how to promote features, as well as think about future enhancements.
Project Managers - a session with them will often help to shake out key concerns from the steering committee they report to, including potential horror stories from the past. "Mark is worried that X might happen" or "last year they had problems when Y happened and brought down the system". This allows you to be aware of areas where there are concerns, and make sure you're comfortable about how your testing is addressing them.
Junior Testers - paired sessions are a great way to coach junior testers. Putting them in the driving seat gives them the opportunity to take the initiative, and to explain their approach. It allows you the opportunity to suggest additional tests they might want to consider, or give feedback about what tests might be more important to run first.
What do they get from it?
Sharing is a two way experience. Whilst they share these values with you, you're sharing yours with them. Namely,
- Demonstrating what you're testing, especially new functionality
- Showing the methodology you use to test features in the system
- For admin users - helping to train them on how to use new features
- For business analysts and marketing - demonstrating how their requirements and requests were realised
- For project managers - feeling engaged and involved in the testing being performed
- For junior testers - coaching
How do I know if I'm pairing or doing a demo?
This is an important distinction to be aware of. If you're sitting with someone, and one of you is controlling all the conversation for the whole session, then you are not pairing.
Pairing is an interactive partnership. There is a certain level of inquiry and challenge, without any feeling of threat or accusation. This is important - another party is essentially reviewing your testing as you perform it, and it's important to respond to their comments. Some of this feedback may suggest other tests, which if feasible you should attempt to include. [Sometimes if your session has a goal and a suggestion would take you off topic, you might want to leave it until the end, or schedule another session]
If you're exercising your testing, or guiding your pairee and they simply don't respond, then you're not pairing - you are essentially demoing.
Here are some potential areas of challenge - notice that most of them suggest areas for extra tests, which if easy to accommodate you should include,
- "You used a keyboard shortcut - I didn't know that existed - we're told to use the icon to launch". Extra test - show them the system launched via the icon, and teach them the shortcut.
- "The server admin machine is ancient and still has a Netscape browser, did you check this screen is usable in Netscape?" A potential additional test, maybe using a virtual machine of an older operating system. [Although this is likely to need some setup and investigation, so you might want to return to it later]
- "You showed me a currency transaction in Yen, but 95% of our transactions come in US dollars, Great British Pounds or the Euro - did you cover them?" You can either show them now, or show them the results from when you did. Either way, they're trying to make sure you covered the test conditions which are most likely to occur.
- "That looks different to our current system - I didn't think this page was changing?" Could be worth chasing up with your BA if you don't know, otherwise fill them in on the change.
It's important to take these things in the spirit of pairing. If you're challenged and there's something you didn't think about, don't go on the defensive, but instead realise it's valuable feedback you've got, and the whole reason why you're spending time doing this. If you think a suggestion is genuinely (and I mean this) unhelpful, try and explain as tactfully as possible - but remember your goal here is to influence, to win them over to your side, not to make them feel stupid.
One example of this was a project a few years ago. We had just 12 options our customer could choose to add to our product when they set up an account, and they followed the below rules,
- The customer needed to select at least 1 option
- The customer could select options in any order
- The customer could select up to all 12
My business owner wanted me to be sure that I covered ALL options. If you've read The Elf Who Learned How To Test, you know how much I warn about ever saying the word ALL. I spent some time explaining how permutations work in mathematics, and showed them some of the different ways you could arrange 12 products. I then worked through an Excel spreadsheet to show them how many permutations of products there were - the number ran to over a billion. Their mouth fell open, because they had no idea that when they said "all permutations" it actually meant so many.
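For the curious, that count is easy to sanity-check without a spreadsheet. A minimal sketch in Python, assuming (per the rules above) that the order of selection matters - if it didn't, the count would only be 2^12 - 1 = 4095 combinations:

```python
from math import factorial

# Ordered selections of k options from 12, summed over k = 1..12.
# P(12, k) = 12! / (12 - k)!
total = sum(factorial(12) // factorial(12 - k) for k in range(1, 13))
print(f"{total:,}")  # 1,302,061,344 - over a billion permutations
```

Either way, the number dwarfs anything you could exhaustively test - which was exactly the point of the conversation.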
At this point, I talked to them about what I had done,
- Gone through the Business Intelligence for the current system. About 80% of people had just 1 option. Then about 18% had 2 options. Less than 0.25% had 3 or more. Hence I did a lot of testing with 1, 2 and 3 options.
- I had covered some scenarios where all 12 or just 11 options had been chosen.
- I had tried out different option numbers each time I did exploratory testing, and not found any issues thus far.
I have to thank James Bach's Rapid Software Testing course for this kind of engagement with non-testers - it helped a lot to make what I was spending my time on more understandable. I was very open to suggestions of distinct scenarios, usually phrased "what about when ...", but I never tried to deceive them that ALL options were going to be tested, or were even achievable.
When this level of engagement is done well, it helps others to understand the challenge and skill of testing, because it makes testing visible. When it's not, organisations often find themselves believing that "they have a testing problem - that's why our products are late / have bugs". Don't be that organisation.