

Any variability in outcome comparisons adjusted for case mix must be accounted for

BMJ 1999; 318 doi: (Published 09 January 1999) Cite this as: BMJ 1999;318:128
  1. D F Signorini, Senior statistician. (dfs{at},
  2. N U Weir, Wellcome research fellow
  1. Department of Clinical Neurosciences, University of Edinburgh, Western General Hospital, Edinburgh EH4 2XU

    EDITOR—Parry et al draw attention to the difficulties faced by those wishing to use comparative outcome data to indicate performance.1 They clearly show the importance of adjusting for differences in case mix and allowing for random variation by establishing 95% confidence intervals for estimates of adjusted outcome. In addition to the uncertainty in the observed mortality, however, there is uncertainty in the predicted mortality. The overall lack of clarity in the rankings of the neonatal intensive care units might therefore be even greater if this additional uncertainty were acknowledged, which would reinforce the reservations expressed about decision making with these kinds of data.

    Predictive models are only approximations to reality. They must be estimated from previous data and thus are themselves prone to noise and random fluctuation. Both the size of the original dataset and the predictive ability of the variables used determine the precision of the predicted outcome. In practice this uncertainty is reflected in the covariance matrix of the estimated model variables, and Hosmer and Lemeshow show how this can be used to calculate the uncertainty associated with the expected mortality.2
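The delta-method calculation that Hosmer and Lemeshow describe can be sketched as follows. The coefficients and covariance matrix below are invented for illustration (the letter does not report the fitted model), but the mechanics are the standard ones: the variance of the linear predictor is a quadratic form in the model covariance matrix, and the chain rule converts it into a standard error for the predicted probability.

```python
import numpy as np

# Hypothetical logistic regression: intercept plus two case-mix variables.
# These numbers are illustrative only, not the Oxfordshire model.
beta = np.array([-1.2, 0.8, 0.5])
cov_beta = np.array([[ 0.04, -0.01, 0.00],
                     [-0.01,  0.02, 0.00],
                     [ 0.00,  0.00, 0.03]])

def predicted_mortality_with_se(x):
    """Predicted probability of death and its standard error,
    using the delta method on the estimated coefficient covariance."""
    eta = x @ beta                      # linear predictor x'beta
    p = 1.0 / (1.0 + np.exp(-eta))      # logistic transform
    var_eta = x @ cov_beta @ x          # Var(x'beta) = x' Cov(beta) x
    se_p = p * (1.0 - p) * np.sqrt(var_eta)  # dp/d(eta) = p(1 - p)
    return p, se_p

x = np.array([1.0, 1.0, 0.0])           # covariate vector for one patient
p, se = predicted_mortality_with_se(x)
```

Summing such per-patient variances (with their covariances) gives the uncertainty in the total expected mortality for a cohort, which is the quantity ignored when only binomial variation in the observed deaths is considered.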

    The potential influence of this variability can be illustrated with an example from stroke medicine. We calculated the expected 30 day fatality in a cohort of 436 patients with stroke admitted to a Scottish hospital, using an externally validated logistic regression model derived from 530 patients from the Oxfordshire community stroke project. We used the ratio of observed to predicted mortality to standardise the outcome for case mix (a method independent of unit size), which gave a value of 0.95. We calculated two different 95% confidence intervals for this ratio. For the first we used only simple binomial variation (95% confidence interval 0.79 to 1.11); the second, for which we used binomial variation plus model uncertainty (0.75 to 1.16), was 28% larger. This considerable increase in uncertainty might be found in other circumstances, such as the study described by Parry et al. Indeed, the clinical risk index for babies model used for adjustment for case mix was derived from a similar number of cases (812), but without explicit knowledge of the model covariance it is impossible to confirm this hypothesis.3
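The effect of adding model uncertainty to the confidence interval can be sketched numerically. The death counts and the size of the model-variance term below are invented (the letter reports only the ratio and the intervals), and the extra variance is simply chosen to inflate the interval by roughly the 28% described; the point is only the mechanics of widening a binomial interval for an observed/expected ratio.

```python
import math

# Illustrative numbers: suppose 100 deaths observed among 436 patients,
# with 105 deaths expected from the case-mix model (made up, not the letter's data).
n = 436
observed = 100
expected = 105.0

ratio = observed / expected             # standardised mortality ratio O/E

# Binomial variance of the observed proportion, scaled up to the O/E ratio,
# treating the expected count as fixed and known.
p_hat = observed / n
var_obs = p_hat * (1.0 - p_hat) / n
se_binomial = math.sqrt(var_obs) * n / expected
ci_binomial = (ratio - 1.96 * se_binomial, ratio + 1.96 * se_binomial)

# Now acknowledge that the expected mortality is itself estimated: add a
# hypothetical model-variance term (on the proportion scale) before scaling.
var_model = 0.00026                     # assumed; tuned to widen the CI ~28%
se_full = math.sqrt(var_obs + var_model) * n / expected
ci_full = (ratio - 1.96 * se_full, ratio + 1.96 * se_full)
```

With these invented counts the binomial-only interval is close to the 0.79 to 1.11 reported in the letter, and the widened interval spans it on both sides, showing how ignoring model uncertainty overstates the precision of the adjusted comparison.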

    In the current climate of continual comparison of outcomes and performance review, it is vitally important that all sources of variability in outcome comparisons adjusted for case mix are accounted for; the consequences of a false positive declaration of significantly substandard performance are becoming ever more serious.
