
Press

Popularising hospital performance data

BMJ 1999; 318 doi: https://doi.org/10.1136/bmj.318.7200.1772 (Published 26 June 1999) Cite this as: BMJ 1999;318:1772
  Pat Anderson, freelance medical journalist

    The publication last week of the first set of performance tables for hospitals in England provoked a storm of media comparisons between hospitals. England's Department of Health is keen, however, to stress that comparative death rate data should not be used to “name and shame” particular hospitals. Within two hours of the department's press conference last Wednesday, BBC television reported live from a ward at St George's Hospital, London, pointing out that it seemed to have the worst death rate after non-emergency surgery of any teaching hospital in the country. By 6 pm, the BBC's online news service had pinpointed St George's Hospital, Salford Royal Hospitals NHS Trust, and North Devon NHS Healthcare Trust—all apparently having high death rates compared with their counterparts.

    Thursday's national newspapers continued the exercise. The Times, Independent, Daily Telegraph, and Guardian published excerpts from tables listing different hospitals' performance, and interviewed representatives of those hospitals that seemed to have the worst performance. Among the tabloid newspapers, the Daily Mail dedicated two pages to “disturbing differences in standards of care,” while the Sun and Mirror dealt with the story in a single column.

    The hospitals concerned have been quick to point out that there are many factors contributing to death rates and that the way the figures are collected may make things look worse than they actually are. For example, St George's said it frequently treated very sick patients transferred from other hospitals, while Salford said the death of the only patient treated in a particular age group had skewed its data.

    Commentators have suggested that a lack of general understanding of statistical conventions—such as confidence intervals—leads to misleading coverage of these sorts of data. John Appleby, director of the health systems programme at the King's Fund, pointed out: “When it comes to confidence intervals people have an almost emotional difficulty in getting their head round them.” He explained that people find it difficult to accept that a figure is not definite but is an estimate that could fall anywhere within a range. He added that the confidence intervals for health authorities at the top and bottom of some of the performance tables nearly overlapped, so publishing a table ranking their performance may be meaningless.
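    Appleby's difficulty can be made concrete with some invented figures (not the published data). The short Python sketch below, which assumes a simple normal approximation to the binomial, shows that even when one hospital's crude death rate is double another's, the 95% confidence intervals around rates based on a few hundred operations can still overlap, so ranking the two may reflect chance rather than quality.

        from math import sqrt

        def death_rate_ci(deaths, operations, z=1.96):
            """Approximate 95% confidence interval for a death rate,
            using the normal approximation to the binomial."""
            rate = deaths / operations
            se = sqrt(rate * (1 - rate) / operations)
            return rate, max(0.0, rate - z * se), min(1.0, rate + z * se)

        # Two hypothetical hospitals: B's crude death rate is twice A's,
        # but with 300 operations each the intervals still overlap.
        for name, deaths, ops in [("Hospital A", 6, 300), ("Hospital B", 12, 300)]:
            rate, low, high = death_rate_ci(deaths, ops)
            print(f"{name}: {rate:.1%} (95% CI {low:.1%} to {high:.1%})")
        # Hospital A: 2.0% (95% CI 0.4% to 3.6%)
        # Hospital B: 4.0% (95% CI 1.8% to 6.2%)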

    Professor Harvey Goldstein, a researcher who has studied educational league tables in Britain and surgeons' league tables in the United States, added that it is very difficult to compare like with like by adjusting for factors such as deprivation. Even when this is done, sampling errors can still be huge—particularly if small numbers of patients are involved. He warned that the poor data used in the latest tables mean that the government is displaying false openness: “The government could be accused of deliberately withholding information about the quality of its statistics. The apparent publication of information which looks as though it is giving you something you don't have is in some ways worse than not publishing, because it's misleading.”
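    Comparing like with like in this sense usually means adjusting for case mix. A minimal sketch of one common method, indirect standardisation, is given below; the age bands, reference rates, and counts are invented purely for illustration and are not drawn from the published tables.

        # Indirect standardisation: compare observed deaths with the number
        # expected if national death rates applied to this hospital's case mix.
        reference_rates = {"<60": 0.01, "60-74": 0.03, "75+": 0.08}  # hypothetical national rates
        hospital_cases  = {"<60": 100, "60-74": 200, "75+": 150}     # an older, sicker mix of patients
        hospital_deaths = 20

        expected = sum(reference_rates[band] * n for band, n in hospital_cases.items())
        smr = hospital_deaths / expected   # standardised mortality ratio
        print(f"Expected deaths {expected:.1f}, observed {hospital_deaths}, SMR {smr:.2f}")
        # Expected deaths 19.0, observed 20, SMR 1.05
        # An SMR near 1 suggests the crude rate mainly reflects case mix, but with
        # only 20 observed deaths the ratio itself carries a wide sampling error,
        # which is Goldstein's point about small numbers.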

    Meanwhile, some doctors fear that inappropriate media coverage will lead consultants to change their behaviour, excluding patients who will have an adverse effect on their hospital's performance statistics. Jim Johnson, chair of the BMA's Joint Consultants Committee, said: “It is possible that this type of exercise might lead to surgeons being a bit more choosy about who they operate on. If I found myself in a league table and my hospital was down at the bottom, I would be a little bit choosier next year. I wouldn't announce it, but it would be in the back of your mind.”


    [Image credit: Daily Mirror, 17 June]

    Overall, however, Mr Appleby believed that the “naming and shaming” exercise was a healthy one: “You don't have to call it shopping around, but being more of an informed consumer. If the parents involved in the inquiry into their babies' deaths at Bristol Infirmary had had access to clear and understandable data about clinicians' performance, perhaps they would not have consented to the treatment.” And the previous experience of the health service in Scotland, five years ahead of the rest of the United Kingdom in publishing such data, seems to be reassuring. A spokesman for the Scottish Office said that when comparative clinical outcome data first appeared in 1993 they were the subject of a media scare: “The media chose to put the ‘death league’ tag on them at first, but now interest has lessened.” He explained that the information is used by doctors and managers in the health service but is not particularly “punter friendly” because of its complex nature.

    As time goes on, the English tables could go the way of their Scottish counterparts, becoming a matter of routine and perhaps even indifference to the public. But, in spite of the data's inadequacy, national and local coverage of hospitals' performance should, in the long run, have a positive effect on quality of care. Mr Johnson concluded: “The bottom line is someone's always going to have to be 400th, and it's really about getting the difference between number one and number 400 to be smaller. The secretary of state is well aware that it's about narrowing that margin rather than about witch hunts.”
