Exploring It!



This week the test team at Linguamatics held our first internal conference. There was no topic, but three broad categories could be seen in the talks and workshops that were given: experience reports, tooling, and alternative perspectives on our work. (The latter included the life cycle of a bug, and psychology in testing.) My contribution was an experience report looking at how I explore both inside and outside of testing. I've tidied up some of my notes from the prep for it below.

There are testing skills that I use elsewhere in my life. Or perhaps there are skills from my life that I bring to testing. Maybe I'm so far down life's road that it's hard to tell quite what started where? Maybe I'm naturally this way and becoming a tester with an interest in improvement amped things up? Maybe I've so tangled up my work, life, and hobby that deciding where one starts and another ends is problematic?

The answer to those questions is, I think, almost certainly "yes".

Before I start I need to caveat what I'm about to say. I'm going to describe some stuff that I do now. It's not all the stuff that I do, and it's certainly not all of the stuff that I've done, and I'm definitely not saying that you should do it too. It'd be great if you can take something from it, but this is just my report.

Exploring in the Background

When I say background research I mean those times when I'm not actively engaged in looking up a particular topic. I have a couple of requirements for background research: I'm interested in keeping up with what's current in testing, my product's domain, and related areas, including what new areas there are to keep up with; and I'm interested in what some specific people have to say about the same things.

One of the tools that I use for this is Twitter. I scan my timeline a few times a day, often while I'm in the kitchen waiting for a cuppa to brew. I'll scroll through a few screenfuls, looking for anything that catches my eye. This is where happenstance, coincidence, and synchronicity come into play. Sometimes — often enough that I care to continue — I'll find something that looks like it might be of interest: a potential new spin on a topic I know, someone I trust talking about a topic I've never heard of, or something that sounds related to a problem I have right now. When I see that, I message the tweet to myself for later consumption.

I also maintain lists. One of them has my Linguamatics colleagues on it and I'm interested in what they have to say for reasons of business and pleasure. Because there aren't many people on that list and because I'm not worried about losing data off the bottom of it (as in the timeline), I'll check this less frequently. When you see me retweet work by testers on my team, I've usually come across it when scanning that list.

I do something similar with Feedly for blogs, although there I have more buckets:

  • Monitor: a very small number of blogs that I'll read every day if they have posts.
  • Friends and workmates: similar to my Twitter list; I'll try to look at it a couple of times a week.
  • Testing: a list of individual blogs that gets attention once a week or so.
  • Testing Feeds: a list of aggregators, such as the Ministry of Testing feed, which I'll skim less frequently still.

Blogs move in and out of these lists as I discover them or as the cost of looking outweighs the value I get back.

I can map this back to testing in a few ways. On one recent project I was trying to get to grips with an unfamiliar distributed system. There were four components of interest, and I wanted to understand how communication between them was co-ordinated. I found that they each had logs, so I identified the aspects of the logs that I cared about and found ways to extract them easily. I then turned the log level up as high as it would go in all components and ran the system.

This gives me the same kind of split as I have on Twitter: carefully curated material that I know I want to see all of, and a firehose of other material that I'll never see all of but that could have something interesting to show me. In the case of the application I was testing, I could search the logs for interesting terms like "error", "warning", "fatal", "exception", and so on. I could also scan them a page at a time to see if anything potentially interesting appeared, and I could go directly to a timestamp to compare what one component thought was happening with another.
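
To make that concrete, here's a minimal sketch in Python of the kind of log triage I mean. The component names, file locations, and log line format are hypothetical stand-ins rather than the real system's, but the shape is the point: curate the terms you always want to see, and keep a way to interleave entries from all the components by timestamp so that they can be compared.

    # A minimal sketch of the log triage described above. The component
    # names, file locations, and line format are hypothetical stand-ins,
    # not the real system's; the shape is what matters.

    import re
    from pathlib import Path

    # The curated terms I always want surfaced, however big the firehose.
    INTERESTING = re.compile(r"\b(error|warning|fatal|exception)\b", re.IGNORECASE)

    # One log per component of the (hypothetical) distributed system.
    LOGS = {
        "scheduler": Path("logs/scheduler.log"),
        "worker":    Path("logs/worker.log"),
        "broker":    Path("logs/broker.log"),
        "store":     Path("logs/store.log"),
    }

    # Assumed line format: "2017-04-08 10:32:01 INFO some message".
    STAMPED = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(.*)$")

    def interesting_lines(path):
        """Yield (timestamp, line) for lines matching the curated terms."""
        for line in path.read_text().splitlines():
            match = STAMPED.match(line)
            if match and INTERESTING.search(match.group(2)):
                yield match.group(1), line

    def merged_view():
        """Interleave interesting lines from all components by timestamp,
        so what one component thought was happening can be set against
        another. ISO-style timestamps sort correctly as strings."""
        entries = []
        for component, path in LOGS.items():
            if path.exists():
                for stamp, line in interesting_lines(path):
                    entries.append((stamp, component, line))
        return sorted(entries)

    if __name__ == "__main__":
        for stamp, component, line in merged_view():
            print(f"{component:>10} | {line}")

Something that small is enough to turn several separate firehoses into one scannable timeline, while the raw logs are still there for page-at-a-time browsing.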

Summary:
  • I decide what I want, how much time and effort I'm prepared to put in, and which tools I'll use.
  • I curate the stuff I must have and leave a chance of finding other stuff.
  • I change my sources frequently, trying new ones and retiring old ones.

Exploring Ideas

When I finally read Weinberg on Writing: The Fieldstone Method I was struck with how similar it was to the working method I'd evolved for myself. Essentially, Weinberg captures thoughts, other people's quotes, references, and so on using index cards which he carries with him. He then files them with related material later. When he comes to write on a topic, he's got a bunch of material already in place, rather than the blank emptiness of a blank empty page staring blankly back at him, empty.

I work this way too. The talk that this essay is extracted from started as a collection of notes in a text file. Having decided on the topic, I'd drop a quick sentence, or a URL, or even just a couple of words, into the file whenever anything that I thought could be relevant occurred to me. After a while there was enough to sort into sections and then I started dropping new thoughts into the appropriate sections. When it came time to make slides, I could see what categories I had material in, review which I was motivated to speak about, and choose those that I thought would make a good talk.

It's a bonus that, for me, having some thoughts down already helps to inspire further thoughts.

I craft the material into something more like its final form (slides, or paragraphs as here) and can then challenge it. I've described this before but it's so powerful for me that I'll mention it again: I write, I dissociate myself from the words, I challenge them as if they're someone else's, and then I review and repeat. This is exactly the way that I recently wrote When Support Calls, my series of articles for the Ministry of Testing.

It's also exactly the way I wrote a new process at work for the internal audits we've just started conducting to help us to qualify for some of the healthcare certifications we need. In the first round of audit, while I was learning how to audit on the job, I noted down things that I thought would be useful to others, or to me next time. Once I had a critical mass of material I sorted it into chunks, added new thoughts to those chunks, and iterated it into a first draft of guidance documentation and checklists.

Summary:
  • I collect thoughts as soon as I have them, in the place where I'll work on whatever it is.
  • When I go to work in that area, I'll usually have some material ready to go, and that spurs new thoughts.
  • For me, writing is a kind of dialogue that helps me to find clarity and improvement.

Exploring My Own Behaviour

There are any number of triggers that might change the way I do something. Here are a few:

  • it hurts in some way, takes too long, is boring.
  • it upsets someone that I care not to upset.
  • it was suggested by someone whose suggestions I take seriously.
  • it is something I fancy trying.

Once I've decided to change I explore that change with the Plan, Do, Check, Act cycle. In Planning I'll observe the current situation and work out what change I'll try; in Doing I'll make that change, usually an incremental one, and gather data as I go; when Checking I'll be comparing the data I gathered to what I hoped to achieve; and finally I'll Act to either accept the new behaviour as my default, or go round the cycle again with another incremental change.

I do this regularly. Some recent changes that I've achieved include learning to type with relatively traditional fingering to help me to overcome RSI that I was getting by stretching my hands in weird ways on the keyboard.

For some while I've been avoiding asking a very direct "why?" and instead asking a less potentially challenging "what motivated that?" or "what did you hope to achieve?" That's going pretty well (I feel).

I've also spent a lot of time ELFing conversations, where ELF stands for Express, Listen, and Field, a technique that came out of the assertiveness training we did last year.

When I commit to a change, I'll often try to apply it consciously everywhere that I sensibly can. I don't wait for the perfect opportunity to arrive; I just dive in. This has several benefits: (a) practice, (b) seeing the change at work in the places it should work, and (c) seeing how it does in other contexts. These latter two are very similar in concept to the curation-synchronicity pairs that I talked about earlier.

I was interested to explore how developers might feel when being criticised by testers and thought that a writer being edited might be similar. So I went out of my way to get commissioned to write articles. I felt like I generally did OK when someone didn't like my work (though I've had an awful lot of experience of being told about my failings by now) but there are still particular personal anti-patterns, things that trigger a reaction in me.

Hearing opinion stated as fact is one of them. I saw this from my editors and had to find ways to deal with my immediate desire to snap something straight back. (Thank you ELF!)

In turn, when criticising software, I strive to use safety language. If I'm complaining about the appearance of a feature, say, I want to avoid saying "this looks like crap" and instead say "this doesn't match the design standards we have elsewhere and I cite X, Y, Z as examples".

But there have also been occasions where I have failed to change, or failed to like the change I made (so far). I have been on a mission to learn keyboard shortcuts for some time, and with some success. In general, I don't want the mouse to get in the way of my mind interacting with the product when I'm working or when I'm testing. However, I have completely failed to get browser bookmark bar navigation into my fingers.

I've been trying to avoid diving straight in with answers when asked (hey, I like to think I'm a helpful chap!) and instead leave room for my questioner to find an answer for themselves (when that's appropriate). Yet still I find myself offering suggestions earlier than I intend to.

I've also been sketchnoting as a way to force myself to listen to talks differently. It's certainly had that effect, and I've also learned that talks of 10 minutes or less are hard for me to sketch, which means that my notes from the CEWT that's just gone are not wonderful. But the reason I don't class it as a success yet is that I feel self-conscious doing it.

Summary:
  • I think about what I'm doing, how I'm doing it, and why.
  • I commit to what I want to achieve by trying what I've planned at every opportunity.
  • I review what happened, honestly, with data (which can be quantitative or qualitative).

Themes

I think these three kinds of exploration share some characteristics, and they apply equally to my testing:

  • I like to know my mission and if I don't know it then finding it often becomes my mission. 
  • I like to give myself a chance to find what I’m after but also leave myself open to find other things.
  • I like to keep an eye out for potential changes, and that means monitoring what and how I'm doing as well as the results of it.

A side-effect of the kind of approach I'm describing here is that it promotes self-monitoring generally. Even without changes in mind, watching what I do can have benefits, such as spotting patterns in the way that I work that contribute to good results, or bad ones.

To finish, then, a quote that popped up in my timeline while I was making some tea and thinking about this talk. (And it ended up in my notes file, natch.) It's by George Polya, from a book called How to Solve It:

"The first rule of discovery is to have brains and good luck. The second rule of discovery is to sit tight and wait till you get a bright idea."
I think that sitting tight is OK, but also that our actions can prompt luck and ideas. And, through exploration, I choose to do that.

Comments

  1. Glad you found my tweet inspiring.

    I think you should try what Polya suggests, and wait. Of course not passively: sitting tight involves being attentive to the luck and possibility of something new popping up. But I interpret Polya's advice to say that having the courage to let the idea shape itself is important.

    But now that you like exploration, try googling Default Mode Network. I think you'll like what you see. Add creativity to the search too.

  2. I'm so pleased you're finding my ELF technique of practical use, James. It's a great example of one of the ways it can be beneficial in handling a challenging communication - which of course applies both at work and beyond. I do hope your team continue to build on the Assertiveness Session learning as proactively as you do!


