Patient coding and the ratings game

BMJ 2010; 340 doi: https://doi.org/10.1136/bmj.c2153 (Published 25 April 2010) Cite this as: BMJ 2010;340:c2153
- Nigel Hawkes, freelance journalist
When a hospital singled out last year as being in “urgent need” of improvement by the regulators was rated at the same time the ninth safest in England by health analysts Dr Foster, questions were bound to be asked.
Mid Staffordshire NHS Foundation Trust, branded “appalling” by the Care Quality Commission and since warned that unless it improves it may lose its licence to operate, set tongues wagging when Dr Foster’s 2009 Good Hospital Guide rated it as among the five “most improved hospitals” of the past three years and in the top ten for quality of care.
Somebody must be wrong. Evidence is accumulating that in the case of Mid Staffordshire and some other poorly performing hospitals, Dr Foster’s performance evaluations have been affected by variations in the way patients are allocated diagnostic codes, flattering the hospital standardised mortality ratios (HSMRs) and making the hospitals seem much better than they are.
Take, for example, the death rate after admission for a broken hip recorded by Mid Staffordshire in the 2009 guide. It reported a rate of 19.87 against a national average of 100. Taken at face value, that means an elderly person who breaks a hip is only about a fifth as likely to die if admitted to Mid Staffordshire as at the average English hospital.
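To see why a figure of 19.87 is so striking, it helps to recall how a standardised mortality ratio of this kind is constructed: observed deaths are expressed as a percentage of the deaths that would be expected if national rates applied to the hospital's own case mix, so 100 is the national average by definition. The sketch below is illustrative only; the numbers are hypothetical and are not Mid Staffordshire's actual admissions data.

```python
def smr(observed_deaths, expected_deaths):
    """Standardised mortality ratio: observed deaths as a percentage of
    the deaths expected from national rates applied to the same case mix.
    By construction, the national average scores 100."""
    return 100.0 * observed_deaths / expected_deaths

# Hypothetical example: a hospital recording 20 hip-fracture deaths where
# national rates would predict about 100.65 reports an SMR of 19.87 --
# roughly one fifth of the national average.
print(round(smr(20, 100.65), 2))   # -> 19.87
```

The point of the index is that miscoded diagnoses feed directly into the denominator: if patients are coded as sicker than they are, expected deaths rise and the ratio falls, flattering the hospital without any change in actual outcomes.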
Among the top 30 hospitals in the Dr Foster guide, Mid Staffordshire’s death rate after hip fracture is less than half that of any other hospital, and less than a seventh of the figure recorded by, for example, Barts and the London.
Struck by the implausibility of this figure, I asked Mid Staffordshire if it was right. It treated my inquiry as a freedom …