Stick Child has been getting increasingly irritated by the slack methods some adults use to present information about really important stuff, such as how long it takes for patients to be seen in Accident and Emergency (A&E) departments. Many a time recently he’s had to do a facepalm at the way hospitals and other institutions are judged and compared against each other, supposedly to inform the public about how well each is performing.
The problem is that the starting point of the conversation – the frame by which performance is judged – is totally wrong. Plenty has been written about the arbitrary nature of numerical targets and their propensity for triggering dysfunctional behaviour, so we’ll leave that to one side for now, and just look at why using them as the focal point for judging performance simply means people engage in the wrong conversation.
Stick Child has drawn a couple of charts, which plot the distribution curves of A&E admission times for two hospitals.
As you can see, the curves are different. Hospital ‘A’ manages to get 95% of patients seen within 4 hours, after which a steep drop-off in the curve shows that the remaining 5% are all seen before 4 hours and 30 minutes have elapsed.
Hospital ‘B’ also manages to see 95% of patients within 4 hours, but the tail of the curve beyond this point is much longer, meaning that the remaining 5% of patients take much longer to be seen – some waits are as long as 7 hours. It’s also apparent that Hospital ‘A’ sees more patients during the early stages of their wait than Hospital ‘B’. This is evidenced by the fact that Hospital ‘B’s curve is weighted more to the right.
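The contrast is easy to check numerically. Here’s a minimal Python sketch (the wait times are made up to mimic the two curves described above, not real hospital data) showing two distributions that share the same 95%-within-4-hours figure but have very different tails:

```python
import random
import statistics

random.seed(42)

# Hypothetical wait times in minutes. Both hospitals see 95% of
# patients within 240 minutes (4 hours), but the tails differ:
# Hospital A's tail stops at 4h30, Hospital B's stretches to 7 hours.
hospital_a = ([random.uniform(10, 240) for _ in range(95)] +
              [random.uniform(240, 270) for _ in range(5)])
hospital_b = ([random.uniform(60, 240) for _ in range(95)] +
              [random.uniform(240, 420) for _ in range(5)])

def pct_within(waits, limit):
    """Percentage of waits at or under the limit."""
    return 100 * sum(w <= limit for w in waits) / len(waits)

for name, waits in [("A", hospital_a), ("B", hospital_b)]:
    print(f"Hospital {name}: {pct_within(waits, 240):.0f}% within 4h, "
          f"longest wait {max(waits) / 60:.1f}h, "
          f"mean {statistics.mean(waits):.0f} min")
```

The target sees only the first number, which is identical; the longest wait and the mean, where the two hospitals genuinely differ, never enter the conversation.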
So, it’s quite clear that the patterns of waiting times are different for the two hospitals.
Not according to the target.
According to the target, the two hospitals’ performance is exactly the same. This means that opportunities to understand why some patients wait up to 7 hours in Hospital ‘B’ are missed. It means that managers don’t get the chance to understand their system, as the usefulness of A&E admission time data is undermined by using the target as a focal point, thereby degrading potentially useful information into a simplistic PASS / FAIL scenario.
Now consider what would happen if Hospital ‘A’s distribution curve actually showed that 94.9% of patients were seen within 4 hours, whilst Hospital ‘B’ achieved 95.1%.
Yep, despite the fact that Hospital ‘A’ demonstrates better overall performance, it FAILS, whilst Hospital ‘B’ PASSES. This fixation on the target and a binary definition of ‘good’ or ‘bad’ performance means no one learns anything about either hospital.
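The pass/fail logic boils down to a single comparison, which is exactly the problem. A sketch using the made-up figures from above (94.9% vs 95.1%) shows the absurdity:

```python
# Hypothetical summary figures: Hospital A just misses the target but has
# a short tail; Hospital B just scrapes past it while some patients wait
# 7 hours. The numbers are illustrative, not real data.
hospitals = {
    "A": {"pct_within_4h": 94.9, "longest_wait_h": 4.5},
    "B": {"pct_within_4h": 95.1, "longest_wait_h": 7.0},
}

TARGET = 95.0  # the 4-hour target threshold

verdicts = {}
for name, stats in hospitals.items():
    verdicts[name] = "PASS" if stats["pct_within_4h"] >= TARGET else "FAIL"
    print(f"Hospital {name}: {verdicts[name]} "
          f"(longest wait {stats['longest_wait_h']}h)")
```

Everything about the shape of the curve, including that 7-hour wait, is discarded before the judgement is made.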
That’s the real FAIL.
Then there are those charts that look authoritative but actually spew out what can only be referred to as pseudo performance data, such as this:
Looks good, doesn’t it? Well, it’s not. All it tells you is the percentage of cases where performance crossed that invisible, imaginary dividing line between ‘good’ and ‘bad’, as defined by the target. A chart that tells you about a target to hit a target. An utter waste of time.
If you’ve got the data, simply plot them and learn from them. Why throw in a target and thereby replace the richness of potentially useful performance information with a meaningless and mind-numbing YES / NO game? Seriously, why?
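“Simply plot them” really is simple. A minimal sketch, assuming matplotlib is available and using invented wait times rather than real admission data:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
import random

random.seed(1)
# Made-up A&E wait times in minutes, just to have something to plot.
waits = [random.expovariate(1 / 120) for _ in range(500)]

fig, ax = plt.subplots()
ax.hist(waits, bins=30)
ax.set_xlabel("Wait time (minutes)")
ax.set_ylabel("Number of patients")
ax.set_title("A&E admission times: the whole distribution")
fig.savefig("waits.png")
```

A histogram like this shows the shape, the tail, and the outliers at a glance; no target line is needed to start asking useful questions about why the tail looks the way it does.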
It drives the wrong conversation.