Sometimes when I say this…
…people hear this:
Not being able to differentiate between targets, measures and priorities is a deadly obstacle to effective performance. Let me explain why…
First of all, if you’ve ever met me, heard me speaking, or read any of my stuff, you’ll have come across my theory on targets. It’s straightforward. Only two bits to it. Here’s a reminder:
All numerical targets are arbitrary because there is no known, scientifically accepted method of setting one. The traditional method of setting a target usually involves simply looking at last year’s performance, then adding or subtracting a few percent. That’s it.
As a ‘method’, this is rubbish because it disregards the capability of the system and natural variation. It ignores all the fluctuations in the data that have occurred during the previous 12 months, which means the ‘benchmark’ chosen is unstable and the assumption about performance is therefore flawed. It assumes that the system knows the target is there and will respond to it (it doesn’t and it won’t). It ignores the fact that the greatest opportunity for improving performance lies in making systemic adjustments, rather than in berating, comparing, or otherwise trying to ‘motivate’ the workers to achieve the target. That’s the first part of the theory.
Next, when you hear people say that targets have an effect on performance, they’re right. Targets make performance worse. This brings us to the second part of the theory…
Targets change behaviour. Rather than collectively focusing on achieving the system’s purpose, individuals and departments are inadvertently placed into competition with each other, meaning that they turn their effort and ingenuity inward and focus attention on whatever the target makes them concentrate on. This happens at the expense of other, equally important aspects of work that are not the subject of targets. Numerical targets cause inter-departmental rivalries, cheating, gaming, data distortions, higher costs, lower morale, worse service delivery and all manner of other horrible consequences.
There are numerous papers and studies which demonstrate that these things always happen when targets are introduced into the mix. I know of no numerical target that is immune from causing such dysfunctional behaviour, hence part two of the theory.
Unlike targets, measures are really important. Unless you measure stuff happening in your system it is impossible to know how it is performing. The key is in this phrase:
First of all you need to determine measures that are derived from purpose. This means understanding what your system is there to do (e.g. ‘to help people and catch baddies’ / ‘to produce great widgets’ / ‘to help kids learn about stuff’), then ensure that your measures help to tell you whether you are doing this. If you choose the wrong measures you will never learn anything about how your system is really performing. Worse still, if you choose the wrong measures it makes people do the wrong things.
If you choose the right measures, you’re on the way! Next, all you have to do is measure them right. Don’t rely on ‘this year vs last year’ / ‘this month vs last month’ / ‘today vs yesterday’ to assess performance (or anything else for that matter). Why? Because it’s pants. It doesn’t tell you anything about performance. It ignores variation. It only enables performance to be envisioned in one of two ways:
…and that’s about as much use as a chocolate fireguard if you’re trying to understand your system. Judging performance by making such binary comparisons leads to terrible decision-making and wasteful, unnecessary deployments. Don’t do it!
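To see why a binary comparison is so misleading, here is a minimal sketch (the numbers are invented for illustration): it simulates a perfectly stable system where nothing underlying ever changes, then checks how often ‘this year vs last year’ reports an increase anyway.

```python
import random

random.seed(1)

# Assumption for illustration: a stable system producing Gaussian noise
# around a fixed mean of 100 -- the 'true' performance never changes.
increases = 0
trials = 1000
for _ in range(trials):
    last_year = random.gauss(100, 10)
    this_year = random.gauss(100, 10)
    if this_year > last_year:
        increases += 1

# Roughly half of the comparisons report an 'increase' purely by chance,
# even though the system itself is identical every year.
print(f"{increases}/{trials} year-on-year comparisons showed an 'increase'")
```

In other words, a binary comparison of two data points will call a coin-flip a trend about half the time, which is exactly the dysfunction described above.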
Instead, plot your data using one of these:
It’s a lovely control chart (or SPC chart). Without boring you with the science bit, this wonderful invention tells you about the actual performance of your measures. It provides a ‘voice’ for your system to tell you what’s happening. Unless there are recognised signals or trends in the data (e.g. a data point shooting outside one of the control limits), it’s usually best not to react when the zig-zags go up or down a bit. This is just normal variation and is seen in any set of data, whether you are tracking crime rates, widget production, or the number of red cars that drive past your house every day.
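If you want to see the arithmetic behind a basic chart of this kind, here is a short sketch of an individuals (XmR) chart, one common way of calculating control limits. The monthly counts are made-up numbers, and the 2.66 multiplier is the standard XmR constant applied to the average moving range:

```python
import statistics

# Hypothetical monthly counts (invented for illustration)
counts = [42, 38, 45, 40, 37, 44, 41, 39, 46, 43, 40, 38]

mean = statistics.mean(counts)

# Average moving range: the mean absolute difference between
# consecutive data points.
moving_ranges = [abs(b - a) for a, b in zip(counts, counts[1:])]
mr_bar = statistics.mean(moving_ranges)

# Standard XmR limits sit at mean +/- 2.66 * average moving range.
ucl = mean + 2.66 * mr_bar  # upper control limit
lcl = mean - 2.66 * mr_bar  # lower control limit

# Any point outside the limits is a signal worth investigating;
# everything inside is just normal variation.
signals = [x for x in counts if x > ucl or x < lcl]
print(f"mean={mean:.1f}, LCL={lcl:.1f}, UCL={ucl:.1f}, signals={signals}")
```

With these example numbers every point falls inside the limits, so the chart says: leave the system alone and don’t chase the zig-zags.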
The key is to intelligently interpret the data from the measures – use the information to understand the capability of your system, look for recognised trends (if there are any) and identify opportunities for improving future performance through systemic changes. When used in this manner, the right measures provide you with an evidence base from which to make decisions, initiate systemic adjustments and determine priorities. Which leads us to…
If priorities are established from an evidence base (such as described above) you will be addressing the right things. These are the things that are linked to purpose from the customer or service user’s perspective. In policing, we like to stop crime from happening, so it could be argued that ‘to reduce violent crime’ is an appropriate priority for the police. No one would argue with that.
If ‘to reduce violent crime’ is therefore designated as a priority, it makes sense to track the rate of violent crime as a measure. If this measure is measured right, it enables us to see the true extent of violent crime and respond accordingly.
Three Different Things
Therefore, we have a priority which is underpinned by appropriate measures. I never trash priorities or measures – they help us keep the system on course to achieve its purpose.
The problem comes where your priority has a numerical target tagged on to it, for example:
‘To reduce violent crime by 9%’.
Why? Because you don’t need a numerical target anywhere.
And don’t worry, if you take that target away, nothing bad will happen.
It will if you leave it there though.
So, next time you hear me saying we should abandon numerical targets, listen carefully – I’m not saying ‘targets AND measures AND priorities’. Just targets.
- Priorities are important (when evidence based).
- Measurement (when done properly) is necessary.
- Numerical targets are bad.