Question: What have these four things got in common?
1. Choosing a random number generated by a lottery machine.
2. Reading tea leaves.
3. Blindfolding yourself and sticking a pin in a board.
4. Tossing a coin.
Answer: They’re all better methods of informing your decision-making than this:
This table is from an actual performance page within an official document. People get paid a lot of money to read these tables and make decisions about resourcing, funding and operational deployments, based on their assessment of the performance data such tables contain. Imagine it’s you. What would you prioritise?
Well, perhaps vehicle crime isn’t such an issue at the moment as it’s ‘in the green’ (and with a large percentage reduction), so you could ignore that for the time being and put more emphasis on tackling common assault, which seems to be raging out of control. It’s bright red after all, and red means ‘bad’. So, you throw some extra resources at the assault problem and leave the vehicle crime, robbery and the other green stuff to look after itself for a while.
Easy, isn’t it? Now you’ve got your priorities sorted, what else could you do? Well, one obvious choice is to look at the crime types that are going up and find someone to hold to account. What on earth is the local police commander playing at, allowing total crime to rise by 6%? Maybe he or she should be replaced with someone who knows what they are actually doing.
So, you shuffle some of your people around, take a bit of funding from here, divert a bit there, have some ‘strong words’ about performance expectations, and hey presto, everything’s hunky-dory. Or so it seems, until the next time you are shown a similar performance chart and lots of the stuff that was green is now red. Time for more strong words, this time aimed at you.
Oops! Should have read the tea leaves. It’s a much more stable and scientific method of assessing performance than using one of these silly tables featuring red and green boxes and nice ‘up’ and ‘down’ arrows.
Know why? Well, I’ve chuntered about this before – this method of comparing data is known as a ‘binary comparison’, and when used as a technique for judging performance, it always gives false readings. Its main weakness is that it doesn’t actually tell you anything about performance. Performance data should enable you to assess performance so that you can make informed decisions that hopefully lead to improved performance. The clue is in the name. Comparing just two numbers with each other can never achieve this.
This is what a binary comparison looks like on a chart:
Hopeless, isn’t it? It could be ‘this year vs last year’, ‘this month vs last month’, ‘this week vs last week’, or any other similarly useless comparison between two snapshots in time. The binary approach is rubbish because it ignores all the other data points between (and before) the two points chosen, and because all the data points are subject to variation, which means you might as well compare today’s figure against any other figure that has ever gone before it. Or a moving object. Or a banana.
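You can demonstrate this to yourself in a few lines of code. Here’s a minimal sketch (the numbers are entirely made up): simulate a perfectly stable process – say, monthly crime counts drawn from the same distribution every single month, so by construction nothing about ‘performance’ ever changes – and then apply the binary comparison to it:

```python
import random

random.seed(42)

# A hypothetical stable process: 120 monthly counts drawn from the SAME
# distribution every month. By construction, performance never changes.
months = [random.gauss(100, 10) for _ in range(120)]

# The binary comparison: judge each month purely against the one before it,
# exactly as a red/green table with up/down arrows does.
ups = sum(1 for prev, curr in zip(months, months[1:]) if curr > prev)
downs = len(months) - 1 - ups

print(f"'Up' arrows: {ups}, 'down' arrows: {downs}")
```

Run it and roughly half the arrows point up and half point down – a table full of reds and greens generated by pure noise. Every one of those arrows would look like something to act on, and every one of them would be telling you precisely nothing.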
The binary comparison approach is commonplace in everyday life; for example, ‘sales are up 5% compared to last Christmas’, ‘unemployment is down by 45,000 compared to the same period last year’, ‘accidents are down 3.6% compared to the previous quarter’. In all cases it’s meaningless, as the method relies upon scant data, moving variables and unstable assumptions, all of which lead to defective decision-making. Waste is driven into the system because managers react to something that essentially isn’t there; costs go up, people are unfairly held to account and performance gets worse.
So… ditch your tables of numbers, red and green boxes, comfortingly familiar ‘up’ and ‘down’ arrows, and use methods that actually tell you something about performance.
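One widely used method that does tell you something is a process behaviour chart (an XmR chart), which judges every data point against the natural variation of the whole series rather than against a single earlier point. Here’s a minimal sketch, using made-up monthly counts; the 2.66 multiplier is the standard XmR constant applied to the average moving range:

```python
# Made-up monthly counts for illustration only.
data = [104, 98, 101, 110, 95, 99, 103, 107, 96, 102, 100, 105]

# Centre line: the mean of the whole series.
mean = sum(data) / len(data)

# Average moving range: the typical point-to-point wobble.
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Natural process limits: mean +/- 2.66 * average moving range.
upper = mean + 2.66 * mr_bar
lower = mean - 2.66 * mr_bar

# A point only 'signals' if it falls outside the natural variation.
signals = [x for x in data if x > upper or x < lower]
print(f"Mean {mean:.1f}, limits [{lower:.1f}, {upper:.1f}], signals: {signals}")
```

For this series, no point breaches the limits – so despite plenty of month-to-month ups and downs that a red/green table would flag, there is nothing here to react to. Only a point outside the limits (or a sustained run on one side of the mean) would justify acting.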
Don’t let this cute, but fundamentally evil, ‘performance banana’ trick you into thinking that you can ever make performance assessments using just two numbers. It’s impossible.