- David J Spiegelhalter, senior scientist (firstname.lastname@example.org)1
- 1 MRC Biostatistics Unit, Institute of Public Health, Cambridge CB2 2SR
- Accepted 5 September 2005
Chance variability makes it impossible to assess reliably whether individual trusts are meeting annual targets for reduction in the risk of MRSA infection
One of the core standards set by the Department of Health is to achieve year on year reductions in rates of infection with methicillin resistant Staphylococcus aureus (MRSA).1 This was clarified in November 2004 by the (then) health secretary John Reid, who said that he expected “MRSA bloodstream infection rates to be halved in our hospitals by 2008,” that “NHS Acute Trusts will be tasked with achieving a year on year reduction,”2 and that such a target was “achievable, measurable, and not too burdensome.” Several problems can arise, however, when measuring change in rates, particularly when the observed number of events is fairly low. These include the effects of chance variability, regression to the mean, and low power to detect genuine underlying changes. These problems are accentuated with an infectious disease, since cases tend to cluster and hence rates are “over-dispersed” relative to chance variation.3 So how should we interpret government targets on MRSA infections?
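The scale of this chance variability is easy to demonstrate by simulation. The sketch below (illustrative only: the baseline of 20 infections a year and the Poisson assumption are ours, not the article's, and real MRSA counts are if anything over-dispersed, making matters worse) simulates pairs of annual counts for many hypothetical trusts. It estimates how often a trust with no true change would nevertheless appear to meet a 20% observed reduction, and how often a trust with a genuine 20% risk reduction would nevertheless show an observed rise.

```python
import math
import random

def poisson_draw(lam, rng):
    """One Poisson(lam) draw via Knuth's method (adequate for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        p *= rng.random()
        k += 1
    return k - 1

def simulate(lam1, lam2, n_trusts=20000, seed=42):
    """Simulate n_trusts pairs of annual counts with true means lam1, lam2.

    Returns (fraction whose observed count fell by >= 20%,
             fraction whose observed count rose)."""
    rng = random.Random(seed)
    met = rose = 0
    for _ in range(n_trusts):
        y1 = poisson_draw(lam1, rng)
        y2 = poisson_draw(lam2, rng)
        if y1 > 0 and y2 <= 0.8 * y1:
            met += 1
        if y2 > y1:
            rose += 1
    return met / n_trusts, rose / n_trusts

# Scenario 1: no true change in risk (mean 20 both years) --
# how often does pure chance "meet" a 20% observed reduction?
met_null, _ = simulate(20, 20)

# Scenario 2: a genuine 20% reduction in risk (mean 20 -> 16) --
# how often do observed counts rise anyway?
_, rose_true = simulate(20, 16)

print(f"target 'met' by chance alone: {met_null:.0%}")
print(f"observed rise despite true 20% reduction: {rose_true:.0%}")
```

With counts of this size, both misleading outcomes occur in a substantial minority of simulated trusts, which is precisely why an observed annual change is an unreliable guide to a trust's underlying risk.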
What is the target?
The government target of a 50% reduction in cases over three years essentially corresponds to a 20% annual reduction (since 0.8³ ≈ 0.51). But the term target is ambiguous: does it refer to an observed change in rates or a true underlying reduction in risk? At a national level this distinction may be unnecessary because of the large numbers involved, but for individual trusts the play of chance can mean that an observed rate reduction that meets the target may not accurately reflect a corresponding change in underlying risk and, conversely, that a true risk reduction may not be reflected in the observed rates, which might even increase. This lack of clarity can give rise to a range of possible criteria to determine …