
The Test in Test Match


There are times when getting a stakeholder to agree that there's a problem is not easy. Then there are times when, having found a stakeholder who accepts the existence of an issue, you have difficulty persuading them that it's important to find a solution now. And then there are those times when, having found a sponsor who both recognises and wants to relieve the particular headache you've identified, they can't get past some narrow view of it - of the problem space, or of the complexity of the necessary solution, or both - and you end up with incomplete fixes that focus on one particular case, or whose logic fails for some subset of cases, and that perhaps even compromise the integrity of the implementation to boot.

Limited-overs cricket found itself in this position a few years ago. In this format the two teams each bat for one innings, trying to score runs from a set number of balls bowled to them and within a set number of wickets (the number of players who can be out). When all the balls have been bowled or all the wickets are lost, the innings ends. Whichever team makes the most runs wins.

Unlike many other sports, cricket matches stop during rain and, if the rain lasts long enough, the match is reduced in length to ensure that it still fits into the day.  The cricketing authorities recognised the problem - how to perform that reduction fairly - and knew they needed a solution but, by the account in Duckworth Lewis: The Method and the Men behind it, failed to comprehend the range of scenarios that needed to be considered and were not prepared to think about the problem in anything other than a single dimension.

The case that preoccupied cricket was when it rained after team 1 (the first team to bat) had finished their innings but before team 2 had started, or completed, theirs. Where this reduced the number of balls that team 2 could face, the authorities would simply scale the number of runs that team 2 had to make in proportion to the number of balls. So, if team 1 made 100 in their innings and team 2 faced only half as many balls as team 1, then team 2 would have to make half the total, plus one to win, i.e. 51. Which seems logical enough, except that it doesn't take into account that team 2 still have as many wickets as team 1 had, and so can take more risks without worrying about running out of wickets.
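For illustration, here's a minimal sketch of that naive rescaling in Python. The function name and the truncation of fractional runs are my assumptions for the sketch, not part of any official rule:

```python
def naive_target(team1_runs, team1_balls, team2_balls):
    """Naive rain rule: scale team 1's total by the ratio of the
    balls available to each side, then add one run to win."""
    scaled = team1_runs * team2_balls / team1_balls
    return int(scaled) + 1  # assumes fractional runs are truncated

# Team 1 made 100 from 300 balls; rain leaves team 2 only 150 balls.
print(naive_target(100, 300, 150))  # 51
```

Note that wickets appear nowhere in this calculation, which is exactly the flaw described above.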

One key insight required here is that more than one parameter determines a fair rescaling of the total: both the number of balls and the number of wickets available are important. A second is that rain could occur at any point in the match, even forcing team 1 to receive fewer balls than they expected when they started their innings. The rescaling needs to be fair to both teams and so needs to take into account the point in the innings at which the match was affected: if team 1 were cautious initially to preserve wickets, intending to be aggressive at the end of their innings, but the innings is then cut short, they will feel aggrieved because they could and would have been more aggressive had they known there were fewer balls to face.

Duckworth and Lewis are academics, statisticians and cricket lovers. Their book - which can be quite dry and occasionally petty, but is worth persevering with - details the original problem and a selection of sticking plasters that were applied to it. They show how broken methodologies, implemented with apparently little thought or testing, resulted in severe embarrassments such as a World Cup match where South Africa's innings was reduced in length but their target was not (due to the rules in force at the time): instead of needing 22 runs from 13 balls they were left needing 22 runs from 1 ball, which is essentially impossible in the game.

The solution they proposed, the Duckworth Lewis Method, now in operation in cricket worldwide, introduces the notion of resource for a batting team. Resource is derived from historical match data and takes into account the match position: the number of balls still to come and the number of wickets still standing. Under this system, the relative resources available to the teams when play is interrupted are used for the scaling. For example, a team with many overs and wickets in hand will have a lot of resource; a team with the same number of overs but few wickets will have less, because they would likely have to play more defensively to preserve their wickets.
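To make the idea concrete, here is a toy sketch of resource-based rescaling in Python. The resource percentages below are invented for illustration; the real method uses tables derived from historical match data:

```python
# Toy illustration of the Duckworth Lewis idea: a batting side's
# "resource" depends on both overs remaining and wickets in hand.
# These percentages are invented; the real tables come from data.
RESOURCE = {
    (50, 10): 100.0,  # full innings ahead, all ten wickets standing
    (25, 10): 66.5,   # half the overs, but all ten wickets
    (25, 5):  40.0,   # same overs, half the wickets: less resource
}

def dl_target(team1_runs, team1_resource, team2_resource):
    """Scale team 1's total by the ratio of the teams' resources,
    then add one run to win."""
    return int(team1_runs * team2_resource / team1_resource) + 1

# Team 1 used a full innings for 250; rain leaves team 2 a 25-over
# innings with all ten wickets in hand.
print(dl_target(250, RESOURCE[(50, 10)], RESOURCE[(25, 10)]))  # 167
```

Because a side with fewer wickets in hand gets a smaller resource figure, the unfairness in the naive balls-only scaling disappears.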

In the book Duckworth and Lewis demonstrate the problem by highlighting a relatively small number of realistic scenarios in which unwanted results would occur - results which it was clear to the cricketing authorities would be travesties. They use these to reinforce the message that a higher level of sophistication (than simply scaling based on the number of balls) is required. They suggest a solution and describe it at a high level in terms that should already be familiar to their stakeholders. Importantly, they also provide a tractable implementation of the method. (And over time they have refined the methodology so that its use during a match is now relatively streamlined.)

They discuss clearly and reasonably the relative advantages and disadvantages of their method compared to others. They explain how some of the secondary stakeholders - the media in particular - seem unwilling to try to get their heads round the principles of the solution, and what they've done to try to overcome that. They identify the weak spots that still exist in certain extreme scenarios and they suggest enhancements, some of which are rejected by the cricket authorities to Duckworth and Lewis's clear frustration. They address suggestions from third parties for applying the method in situations it was not designed for, such as Test Match cricket, where the number of overs is not limited, and explore other places it could be applied or extended. And if that's not enough, they even provide a FAQ.

Let's not get into the "is testing (a) science" debate here (although if you're interested...) but instead look at some relevant parallels with a testing role and the way that Duckworth and Lewis went about their business:
  • they analysed approaches using thought experiments and real data, searching for the anomalous cases - cases not presented to them as problems by the stakeholders
  • they communicated with their stakeholders and others in their own language to a relevant and realistic level of detail
  • they were and are open about their own approach and to other approaches, wanting to judge them all on a level playing field.
Image: http://flic.kr/p/6Pzxej
