
Editor's Choice

Need good results? Fiddle them

BMJ 2003; 326 doi: https://doi.org/10.1136/bmj.326.7398.0-e (Published 15 May 2003) Cite this as: BMJ 2003;326:0-e
  Richard Smith, editor

    Don Berwick—one of the world's leading thinkers on improvement in health care and a friend of mine—tells a story that illustrates how data on performance can mislead. He was responsible for quality assurance in a hospital. The radiology department had spectacular results. Patients waited hardly a moment. Everybody was satisfied. Why did the department do so well? Don wanted to find out and encourage the department to share its learning. “How is it,” he asked the director, “that you get such good results?”

    “Simple,” she answered, “we make them up.”

    I was reminded of this story as I read the results of a BMA survey that showed how hospital trusts had poured scarce resources into accident and emergency departments during the week when performance tests were conducted (p 1054). Some even cancelled operations in order to free up beds to speed up admissions. The result was a huge rise in the number of patients treated quickly.

    It's unsurprising that people play the system when the results have consequences. Fail to meet your targets and you may be sacked. Meet them and your hospital might become eligible to be a “foundation hospital” with extra resources and freedoms. Only a fool would not game the system, but the result is that we are all fooled (news extra on bmj.com). Sticks and carrots are distributed not on the basis of true, consistent performance but on the ability of people to “do well” that week, perhaps at the expense of other weeks and other services.

    But a maxim of management is that “if you can't measure it, you can't manage it.” Otherwise, you make a change and have no idea whether things are better or worse. (We have experienced this problem at the BMJ in our attempts to improve our decision-making times.) Thus improvement experts like Don Berwick argue that “measurement should be used for learning, not judgement.” Another complication was identified by Albert Einstein: “Not everything that can be counted counts, and not everything that counts can be counted.” (We understand this too at the BMJ, where, if we are not careful, profit, which can be counted, overrides influence, which cannot.)

    I can understand, however, how all this sounds pathetic to a red-blooded politician like Alan Milburn, Secretary of State for Health. He is putting millions into the National Health Service, and he needs results, not only to get his party re-elected but also because the men from the Treasury are pursuing him relentlessly. (Those boys, sexism intended, are not interested in anything you can't count.) Further, the public wants to know. The trick is to produce some data on performance that are meaningful but still leave plenty of room for measurement for learning. Britain's cardiothoracic surgeons have had a go (p 1053), and any surgeons who are “below average” can be consoled by the thought that the extremely powerful force of “regression to the mean” is on their side, and that no politician understands its power (p 1083).
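
    Regression to the mean is easy to see in a toy simulation (an illustrative sketch of the statistical point, not anything drawn from the BMJ pages cited above; the sample size, noise scale, and every name in it are invented for the example). Give each surgeon a fixed true performance level, add independent noise to produce two years of observed results, and the group that looked “below average” in year one drifts back towards the mean in year two, without anyone's true performance changing.

```python
import random

# Illustrative sketch only: each "surgeon" has a fixed true performance
# level, and each year's observed score is that level plus random noise.
# The sample size and noise scale are arbitrary assumptions.
random.seed(2003)
n = 1000
true_level = [random.gauss(0, 1) for _ in range(n)]
year1 = [t + random.gauss(0, 1) for t in true_level]
year2 = [t + random.gauss(0, 1) for t in true_level]

# Pick out the surgeons who looked "below average" in year one.
below = [i for i in range(n) if year1[i] < 0]

mean_year1 = sum(year1[i] for i in below) / len(below)
mean_year2 = sum(year2[i] for i in below) / len(below)

# The same surgeons, with unchanged true performance, score closer to the
# overall mean (0) the following year: regression to the mean.
print(f"'below average' group, year 1 mean: {mean_year1:.2f}")
print(f"same group, year 2 mean:            {mean_year2:.2f}")
```

    With the noise as large as the true variation, as assumed here, roughly half of the group's apparent deficit vanishes on remeasurement, because roughly half of it was noise in the first place.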

    Acknowledgments

    To receive Editor's Choice by email each week, subscribe via our website: bmj.com/cgi/customalert
