Information In Practice

Not everything that counts can be counted; not everything that can be counted counts

BMJ 2004; 328 doi: http://dx.doi.org/10.1136/bmj.328.7432.153 (Published 15 January 2004) Cite this as: BMJ 2004;328:153
  Martin McKee (martin.mckee@lshtm.ac.uk), professor of European public health
  London School of Hygiene and Tropical Medicine, London WC1E 7HT

    Few examples better show the triumph of ideology over evidence than the continuing quest to encourage patient choice by publishing the outcomes of healthcare providers.1 Perhaps because the stated objectives seem so self-evidently reasonable—providing information to the public who pay for and use health services and supporting patients' ability to choose where they will be treated—opposition to this idea from sceptics is difficult without being accused of paternalism or worse. But the task of improving health care by publishing outcomes is far from simple.2-4 Essentially, there are at least three problems. The first is to develop a means of assessing outcomes that provides comparable information: one that allows patients reliably to differentiate between good and bad performers, adequately captures differences in case mix, and has sufficient power that differences do not arise by chance.5 The second is to embed this information within a system that leads to genuine improvements in quality by those underperforming, rather than opportunistic behaviour—in relation to either recording6 or the work undertaken7—designed solely to improve what is reported, which often makes things worse. The culture of often meaningless targets within the NHS is throwing up new examples of the latter almost every week. The most recent is the way in which hospital emergency departments, anxious not to exceed the target for patients to wait no longer than four hours before being admitted or discharged, are now refusing to admit patients from waiting ambulances until they are ready to be seen. Ambulance trusts, whose vehicles are tied up in queues outside hospitals, are investing in inflatable tents into which their patients can be deposited, in a kind of target-free limbo.8

    It is the third set of problems, those relating to what the information actually tells us about healthcare providers, that Broder and colleagues investigate in their paper from California.1 Even in a setting where the amount of investment in information technology can only be dreamed of by those working in most other countries, the published data cover at most only 12% of procedures. And by looking only at procedures, the data ignore the vast amount of care that does not involve one. In other words, such systems capture only a tiny fraction of the overall work of a healthcare provider. The information is also largely out of date. Given the rapid pace of change in health care, how useful is it to know, when seeking treatment now, how a provider was doing five years ago?

    Although the British public is already ambivalent about the value of such information,9 this paper is unlikely to deter those policy makers whose faith in the benefits of publishing the outcomes of healthcare providers is unshakeable by reason, although it may help to inform those who are undecided. In coming to a view, they might refer to a sign that Einstein kept on his wall: “Not everything that counts can be counted; not everything that can be counted counts.”

    Footnotes

    • Conflict of interest None declared.
