Letters

Hospital league tables

BMJ 2001; 322 doi: https://doi.org/10.1136/bmj.322.7292.992/a (Published 21 April 2001) Cite this as: BMJ 2001;322:992

There are lies, damn lies, and hospital statistics

  1. Andrew Bamji (Andrew_Bamji{at}compuserve.com), consultant rheumatologist
  1. Queen Mary's Hospital, Sidcup, Kent DA14 6LT

    EDITOR—Kmietowicz's news article on hospital league tables states that the biggest predictor of death rates was the number of doctors in the hospital.1 This conclusion was drawn by Professor Sir Brian Jarman and (in the Sunday Times) underpinned by data from Greenwich District Hospital.

    Unfortunately for the conclusion, the data are wrong. The Department of Health's figures for Greenwich seem to cover consultants only and do not include junior doctors. This introduces an error of the order of twofold to threefold. All hospitals' figures are likely to be distorted further because staff in unrecognised posts (often posts with strange titles such as “trust doctors”) are not counted.

    When such a fundamental and massive data error passes unchecked and results in false deductions, doubt is cast on the whole process. We cannot blame Dr Foster Ltd, which issued a disclaimer on data accuracy in the small print, but in my view it is quite wrong of the Department of Health to allow publication without looking closely at figures that departed so greatly from the mean.

    Garbage in, garbage out. A pity, really, because the underlying idea is not bad.


    Analysis is flawed

    1. Jammi N Rao (jammi.rao{at}sandwell-ha.wmids.nhs.uk), deputy director of public health
    1. Sandwell Health Authority, West Bromwich B70 9LD

      EDITOR—The analysis by Dr Foster Ltd of death rates in hospital trusts is so flawed that the NHS should ignore it.1 Standardised hospital mortality ratios are inappropriate for this exercise and difficult to interpret. They were originally public health measures intended to apply to whole area populations that are relatively static.

      Patients admitted to a hospital do not constitute a predefined population; this population is arbitrary and depends heavily on admissions policy and the availability of support and other community services locally. Furthermore, standardised mortality ratios cannot be used to compare different areal units.2
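      The mechanics of indirect standardisation make the case-mix dependence concrete: expected deaths are computed by applying reference death rates to the hospital's own admissions, so any change in admissions policy changes the denominator. The sketch below uses invented figures purely for illustration; the age bands, rates, and counts are not real data.

```python
# Illustrative sketch of an indirectly standardised mortality ratio (SMR).
# All figures below are invented for demonstration, not real hospital data.

# Hypothetical reference (e.g. national) death rates per admission, by age band
reference_rates = {"0-64": 0.01, "65-74": 0.04, "75+": 0.10}

# One hypothetical hospital's admissions by age band, and its observed deaths
admissions = {"0-64": 5000, "65-74": 2000, "75+": 1000}
observed_deaths = 253

# Expected deaths: apply the reference rates to this hospital's own case mix
expected_deaths = sum(reference_rates[band] * admissions[band] for band in admissions)

# SMR of 100 means mortality equal to the reference population
smr = 100 * observed_deaths / expected_deaths
print(round(smr, 1))  # prints 110.0
```

      Note that shifting even a few hundred frail elderly admissions into community care would shrink the expected-deaths denominator and raise the ratio, with no change whatever in the quality of clinical care.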

      The report does not give managers and clinical leaders any clue about how to improve quality. Should they look for rogue surgeons, killer nurses, or shortcomings in clinical care? If the latter, then what's new? We were doing that anyway.3 4 The study has served only to divert managers in “bad” hospitals into answering hysterical queries from the press; to induce self-righteous complacency in “good” hospitals; and to encourage lawyers to chase after every death, expected or otherwise.

      Patients don't benefit either. How does knowing a hospital's mortality index help? This index is a crude estimate of the a priori average risk of dying while in hospital. Who does it apply to? It applies only to statistically “average” patients—an esoteric concept for risk modelling enthusiasts, but of no help to individual patients, who need an estimate of their individual chances of a successful outcome.

      Nor can the analysis be improved. Clever statistical manipulation of the dataset cannot get us out of the mess resulting from the inversion of the logical process of rational epidemiological analysis. The study started with data that happened to be there; then the researchers did some sophisticated (and therefore seductively persuasive) analysis, suggested a few answers (if you torture a dataset enough it will confess to whatever you want), and then asked “What possible question is this the answer to?” It certainly does not answer the question “Which hospitals have poor quality care as judged by mortality?”

      Ideally we should start with the question, refine it as far as possible, determine what data we need to answer it with an acceptable degree of validity, collect the relevant data, and then analyse them. There are other and better approaches to quality measurement.5 Blunderbuss analysis of a dataset collected for administrative purposes is unhelpful.
