Intended for healthcare professionals


Hospitals' star ratings and clinical outcomes: ecological study

BMJ 2004;328 (Published 15 April 2004) Cite this as: BMJ 2004;328:924
  1. Kathy Rowan, director1,
  2. David Harrison, statistician1,
  3. Anthony Brady, senior statistician1,
  4. Nick Black, professor of health services research2
  1. 1Intensive Care National Audit and Research Centre, London WC1H 9HR
  2. 2London School of Hygiene and Tropical Medicine, London WC1E 7HT
  1. Correspondence to: N Black
  • Accepted 12 November 2003


The English Department of Health is developing global measures of the performance of all NHS bodies, including 166 acute hospital trusts. Since 2000-1, trusts have been awarded zero, one, two, or three stars to indicate their performance.1 These ratings may not reflect the effectiveness of clinical care, as measured by patient outcomes, because accurate routine data are lacking.2 One exception is adult critical care3; we investigated whether a hospital's rating provided an indication of its clinical outcomes.

Methods and results

We studied the 2001-2 ratings of the 102 acute hospital trusts for which we had validated critical care data for that year. We calculated each patient's predicted risk of death before discharge from hospital4 and compared it with actual mortality for all admissions in 2001-2 for each unit.

We compared rating with crude mortality at the patient level rather than aggregated by hospital; our sample of hospitals with all hospitals; and university with non-university hospitals, using χ2 tests for trend. We compared rating with size of intensive care unit and with mean age of patients, using Spearman's ρ. We calculated confidence intervals for mortality adjusted for risk, using logistic regression of mortality on rating and predicted log odds of mortality. We tested the association between rating and adjusted mortality using the likelihood ratio test.
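The risk-adjustment step described above, regressing mortality on rating together with the predicted log odds of death from a case-mix model, can be sketched on simulated data. This is a hypothetical illustration, not the study's actual analysis; all names, parameters, and the simulated case-mix pattern are assumptions.

```python
# Illustrative sketch of case-mix adjustment by logistic regression.
# Data are simulated; the effect sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

rating = rng.integers(0, 4, n)             # star rating of the admitting trust
# Simulate the case-mix pattern reported in the paper: higher rated trusts
# tend to admit less severely ill patients.
severity = rng.normal(0, 1, n) - 0.2 * rating

# Mortality depends on severity only; rating has no true effect on outcome.
case_mix_logit = -1.0 + 1.0 * severity     # predicted log odds of death
died = (rng.random(n) < 1 / (1 + np.exp(-case_mix_logit))).astype(float)

def fit_logistic(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson; returns coefficients."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = (X * (p * (1 - p))[:, None]).T @ X
        beta += np.linalg.solve(hess, grad)
    return beta

# Crude analysis: mortality on rating alone.
b_crude = fit_logistic(np.column_stack([np.ones(n), rating]), died)

# Adjusted analysis: add the predicted log odds of death as a covariate.
b_adj = fit_logistic(np.column_stack([np.ones(n), rating, case_mix_logit]), died)

print(b_crude[1], b_adj[1])  # crude rating effect vs case-mix adjusted effect
```

In this simulation the crude analysis shows an apparent mortality advantage for higher rated trusts, which largely disappears once the case-mix term is included, mirroring the pattern reported in the results below.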

The distribution of ratings for the 102 acute hospital trusts was similar to that for all 166 trusts (χ2 = 1.7; P = 0.19). Rating was associated with teaching status (university hospitals had more stars than non-university hospitals—52% v 29% had three, 38% v 45% had two, 5% v 19% had one, 5% v 7% had zero; χ2 = 3.9; P = 0.05) but not with size of critical care unit (Spearman's ρ = 0.09; P = 0.34).

Rating and crude mortality for critical care admissions were significantly associated (χ2 = 4.1; df = 1; P = 0.04) (figure): mortality in trusts with three stars was about 4% lower than in trusts with zero stars. However, case mix of critical care admissions also differed considerably. Rating was inversely associated with the mean age of critical care admissions (ρ = -0.19; P = 0.04). The association between rating and hospital mortality was no longer significant when case mix differences were taken into account (P = 0.4) (figure).


Odds ratios for crude and for case mix (risk) adjusted hospital mortality, by star rating of acute hospital trust


Comment

For adult critical care, star ratings do not reflect the quality of clinical care provided by hospitals. Patients do just as well in a trust with no stars as they do in one with three stars. Crude mortality data are misleading because they ignore the fact that higher rated trusts tend to be teaching institutions with patients who are less severely ill on admission to critical care units.

We did not expect to find an association between the rating of the whole trust and the effectiveness of critical care. Firstly, hospitals are complex organisations containing many services; performance across a hospital will not be uniform—a poorly rated hospital may contain some excellent services and vice versa. Secondly, ratings are determined by a small number of process measures; outcome measures play only a small role and are based on scant, poor quality data, which do not adequately account for case mix.

The study's principal limitation is its confinement to one small, though important, group of patients and services. Our findings may be atypical, and trusts' ratings may reflect outcomes elsewhere in hospital care.

If these findings reflect other areas of hospital care, the government is not yet fulfilling its “commitment to provide patients and the general public with comprehensive, easily understandable information on the performance of their local health services.”1 Outcome ought to be a principal concern alongside process indicators, such as waiting times and cleanliness; to fulfil its aim, the government needs to use specialised clinical databases (accessible through

This article was posted on 23 January 2004.


We thank the participating intensive care units and Minesh Tailor at the Intensive Care National Audit and Research Centre.


  • Contributors KR and NB devised the study; all authors designed the study and interpreted the data; and NB, KR, and DH wrote the paper. NB is guarantor.

  • Funding None.

  • Competing interests None declared

  • Ethical approval Not needed