At the beginning of the ‘Bad Performance Measurement On Tour’ series I issued a call-to-arms for eagle-eyed readers, asking them to send me their examples of bad performance measurement.
Twitter user @Wiggies13 responded to this rallying cry and sent me the following gem from a council website…
‘Percentage of relevant land and highway assessed as having deposits of dog fouling that fall below an acceptable level’.
And just to prove I didn’t make it up, here’s the screenshot:
As you can see, this particular council has apparently excelled in keeping its ‘relevant land and highway’ completely free of dog muck. Quarterly performance figures indicate they consistently SMASH their 1% target. With performance like that, I’d be confident skipping barefoot along their roadside verges any day of the week. It must be a dump-free Utopia.
On a serious note, you can’t knock anyone for wanting to rid the streets of steaming dog turds, but why have a numerical target for it? How does the target’s existence help those charged with locating and eliminating said detritus?
Other questions that spring to mind are:
- What is an ‘acceptable level’ of dog fouling? (I’d suggest ‘none’ if you’re the unlucky person who steps in it, especially if wearing flip-flops).
- Who decides where the surveys are carried out and how extensive are they?
- How is the percentage score calculated from the four national Keep Britain Tidy survey classifications of ‘None’ / ‘Light’ / ‘Significant’ / ‘Heavy’, which this council says form the basis of its turd inspection criteria?
- Has a dog poo ever been sighted pre-2011?
Well, rather than speculate, I thought I’d phone up the department in question to find out more…
I spoke to a most helpful chap who told me that apparently it all emanates from government data requirements. Unfortunately, he was unable to be more specific or answer any of my questions about the origins of the target. He did however say he’d get someone who could answer my queries to phone me back. When they do, I’ll update the blog.
I suppose the ultimate test of this pooey performance indicator is whether it meets the criteria for an effective measure, i.e. –
- Do the data as presented provide useful information about the current performance of the system?
- Does the performance indicator help identify opportunities for improving the system?
- Does it help the workers to achieve purpose from the service user’s perspective? (i.e. finding the offending material and removing it quickly. Perhaps even preventing a recurrence. There’s a thought).
In other words, does the screenshot tell us anything useful, or help the council continually improve their efforts to rid our verges of this horrible brown scourge (or white, if you remember those ones)?
I leave you to decide for yourself.
On a separate note, the same council also posts performance data relating to planning applications. Here’s the screenshot:
As you can see, the planning department works to an eight-week target. Oh wait – the target isn’t eight weeks after all; it’s to hit the eight-week target 87% of the time! (Or 88% if the application was made before the first quarter of the 2011-2012 performance year). What’s that all about then? An arbitrary numerical target against an arbitrary numerical target. Genius.
Typically, nothing about this mode of performance measurement takes into account the actual capabilities of the system or aids the planning department in meeting predictable levels of demand. (The fact that the department seems to consistently perform at a rate well below the arbitrary target certainly appears to indicate this). The problem is that data ignore such frivolities as numerical targets, and nothing about the target’s existence increases the capacity of the department, or improves service delivery.
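For anyone curious how such a ‘target against a target’ actually gets scored, here’s a minimal sketch of the arithmetic. The dates and records are entirely made up for illustration; only the eight-week threshold and the 87% meta-target come from the council’s own page:

```python
from datetime import date, timedelta

# Illustrative application records: (date received, date decided).
# These are invented examples, not the council's real data.
applications = [
    (date(2012, 4, 2), date(2012, 5, 21)),   # 49 days - inside 8 weeks
    (date(2012, 4, 10), date(2012, 6, 30)),  # 81 days - outside
    (date(2012, 4, 16), date(2012, 6, 8)),   # 53 days - inside
    (date(2012, 5, 1), date(2012, 6, 25)),   # 55 days - inside
]

EIGHT_WEEKS = timedelta(weeks=8)
TARGET = 0.87  # the meta-target: 87% of applications decided within 8 weeks

# Count applications decided within the eight-week window
within = sum(1 for received, decided in applications
             if decided - received <= EIGHT_WEEKS)
score = within / len(applications)

print(f"{score:.0%} decided within 8 weeks - "
      f"{'meets' if score >= TARGET else 'misses'} the 87% target")
```

Note what the calculation can’t tell you: nothing in it measures the actual end-to-end time an applicant experiences, only whether each case fell either side of an arbitrary line.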
Contrast this with a council planning department I visited this week in Wolverhampton, where they have done away with their targets and redesigned their system, simply aiming to achieve purpose as quickly and effectively as possible. What used to happen was that applications tended to be ratified immediately before the target date. Guess what happens now? Well, their total end-to-end time has been obliterated and one member of the team told me that the last application he handled was done and dusted in four days.
It goes to show that such astounding improvements in performance require a different approach, along with a different mindset. Tinkering around the edges of the system, playing with definitions, or introducing arbitrary numerical targets do not help achieve purpose.
As they say, you can’t polish a turd.
Great piece of research. I’d love to know what the follow-up response is.
If I lived there, how reassured would I be to know that 99% of “relevant land” has “acceptable levels” of dog poo? And that it’s been like that for the last 2 years.
I’ve also learnt that someone in Whitehall is monitoring all this on our behalf, and making sure that councils are not shirking in their poo monitoring duties. Although it seems from the Keep Britain Tidy analysis that some councils may need to be named and shamed.
I’m particularly intrigued by the poo performance in Q1 of 12/13. Do you think they ran out of money to conduct poo surveys this quarter? Maybe the poo surveyor was on holiday. Or maybe it was raining so hard that when the poo surveyor looked out of the office window, they couldn’t tell whether there was poo on the corner opposite or not, and therefore couldn’t record the usual 0% poo rate. Imagine the uproar if, one day, the poo surveyor does their poo-surveying duty and the corner has poo on it! Shall we expect a report of 100% poo coverage exceeding the acceptable level? Heads should roll.
I can’t help feeling that the data is an obvious case of crap in, crap out (sorry!)
And Government are still wondering how the hell they can cut costs in these times of austerity. Well, hello Mr very-necessary-central-monitor-of-dog-crap-data.
As a total coincidence, I have ended up explaining the planning “oh shit” chart to people twice this week. It’s a fantastic example of how targets change behaviour and well worth a blog post in its own right if no one has done so already?!
This brilliant piece of work illustrates that very pattern you describe, but in A&E admissions. http://www.patient-access.org.uk/userfiles/file/A&E%20-%20is%20there%20a%20better%20way%20HL%20v3.pdf It’s ‘just in time’, but in a very bad way…
Once upon a time there was a single issue pressure group. And they kept banging on and on about a perceived problem, that when looked at in comparison with larger, more pressing concerns was trivial. But they would not accept re-assurances to this effect, and kept mithering away, and then the ‘Chart of Despair’ was born.
And once created it hangs round like one of the undead, forever being compiled, even when everybody has forgotten why the data was being collected in the first place. Easy to spot, especially on a multi-trace graph [they tend to have a broad range like 0-100%] the virtually straight line that lingers near zero or one hundred. In a rational organisation the data collection would stop as it doesn’t contribute any ‘value’. But they linger on because “We have always done it” or “Just in case”.
The pressure group having long moved on to moan about some other ‘issue of the day’, well cosy little quango jobs don’t create themselves, do they?
[The drive for this little gem seems to come down from DEFRA though probably didn’t originate there.]
Not convinced that the 87% in 8 weeks is ‘entirely arbitrary’ [oh! that word again, NATOs on everybody]; it’s probably based on previous performance, either local or national. And to understand it, we need to understand some of the external factors affecting the process.
Documents will flow in to be processed. A proportion will be incomplete, incorrect or otherwise defective; these are returned to the applicant to be corrected. They have effectively moved outside of the process’s control, but the clock is still running, only to reappear at random times and cause ‘pile-ups’. [Design and Access Statements being a right PITA]
In a system where ‘actors’ are given free rein, smaller, simpler tasks tend to flow through faster [illusion of achievement], and harder, longer tasks tend to languish. The seasonality of demand also makes load planning difficult, with highest demand coinciding with peak summer leave.
There is an irreconcilable tension between having enough human resources to handle peak flow, and paying for underused assets in occasional slack periods. Where cost control dominates over service quality, it can start a chain reaction leading to system collapse [an example being the current state of Police Forces, even though they are unique in that they can deploy any spare personnel to useful activity without upheaval]. The interesting linked NHS study seems to point to an example where service dominates: expensive assets are used early and this produces a cost benefit.
Don’t know what happened in Wolvo, could it be they were running massively late and got so badly spanked that they finally hired enough people to carry out the work?
[Anecdote, sample size of 1? Naughty, naughty MrG 😉 ]
“You can’t polish a turd”* But you can roll it in glitter!
*you can, but what would be the point?
Welcome back. You picked up on exactly what causes delays in the planning system – failure demand. What they do now is turn off the failure demand by working directly with applicants to ensure the forms are done properly first time round, thereby eliminating waste and rework. That’s where the additional capacity comes from, which is why end-to-end times are much shorter.
So, now that my email address is available for all to see under the ‘contact me’ tab, are you going to email me and tell me who you are..? 😉
They do say that you can’t polish a turd but in our office they do say that:
You can’t polish a turd but you can roll it in glitter
You can’t polish a turd but you can if you freeze it
One (nameless) local authority planning dept. identified that up to 80% of applications failed to meet the initial test of completeness and had to be sent back to the applicant – more info, incorrect plans/drawings, wrong fee, etc. 80% waste! And what’s more, more than half of these failed the second time around. Impact: masses of waste work generated, very unhappy customers (not surprisingly), expensive service, failure to meet targets (don’t get me started!). I suggested perhaps they put resources into the front end, where applicants could receive assistance to guide them and ensure that applications were complete and thus more likely to proceed for consideration/decision, thus reducing the ‘waste demand’ being generated. Remember, up to 80%…
What did they do? Introduce fees for pre-application advice. So applicant phones department: “do I need planning permission for this [minor works]…?” – “Send your enquiry in writing, together with a cheque (!) for £50 and we’ll tell you”. Applicant: “But I only want to know if I have to make an application?!” Council: “General information can be found on our website. For detailed advice, please send a cheque…” at which point applicant hangs up (or has heart attack – either way, phone call ends).
Outcome? Customer phones other departments to try and get ‘advice’ from them, for free.
Result? Unhappy customers, incomplete applications, waste generated in other departments (e.g. mine!); oh, and planning dept still not meeting its ‘targets’ (those arbitrary numbers on notices posted in corridors…). As a certain Daily Mail columnist would surely say – ‘You couldn’t make it up’.