
Commentary: The message is in the medium

BMJ 1995; 311 doi: https://doi.org/10.1136/bmj.311.7012.1059 (Published 21 October 1995) Cite this as: BMJ 1995;311:1059
  1. Ruairidh Milne, honorary senior clinical lecturer in public health medicine[a],
  2. David Sackett, professor of clinical epidemiology[b]

  [a] Department of Public Health and Primary Care, Radcliffe Infirmary, Oxford OX2 6HE
  [b] Centre for Evidence Based Medicine, John Radcliffe Hospital, Oxford OX3 9DU

    A growing number of studies have shown that the way in which evidence about a treatment's effectiveness is presented to clinicians affects how they react to it and may (arguably) lead to more rational clinical judgments. In this paper Fahey et al show that members of health authorities, the vast majority of whom are not medically trained, react like doctors to different ways of presenting the same information.

    The authors did this by sending a questionnaire to health authority members asking how likely they would be to support the purchasing of two health care programmes, taking into account only the evidence from four “trials.” In each case there was in fact only one study, but its results were presented in four different ways. Willingness to support a programme was greatest when the impact on mortality was expressed as a relative risk reduction, and least when the results were expressed as measures that combine the relative risk reduction with information about untreated patients' susceptibility to death: the absolute risk reduction and the number needed to treat.
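    To see how these presentations relate, consider a worked example with purely illustrative figures (not those of Fahey et al's study). Suppose 4% of untreated (control) patients die, the control event rate (CER), against 3% of treated patients, the experimental event rate (EER). Then

    \[ \mathrm{RRR} = \frac{\mathrm{CER} - \mathrm{EER}}{\mathrm{CER}} = \frac{0.04 - 0.03}{0.04} = 25\% \]
    \[ \mathrm{ARR} = \mathrm{CER} - \mathrm{EER} = 0.04 - 0.03 = 0.01 \quad (1\%) \]
    \[ \mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.01} = 100 \]

    The 25% relative reduction sounds equally impressive whatever the baseline risk; the absolute risk reduction and the number needed to treat make that baseline risk, and hence the true size of the benefit, visible.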

    This was clearly an artificial exercise: no alternative purchasing options were specified, no costs were included, and respondents were told to disregard national or local policies; and of course purchasing decisions are not made on the basis of evidence alone. Nevertheless, the strength of this approach was that it allowed the authors quickly and simply to assess how health authority members might react to evidence.

    As well as its important central findings this paper throws up some intriguing side questions. For instance, in the case of mammography (though not cardiac rehabilitation) the numbers needed to treat produced much greater enthusiasm for purchase than did the absolute risk reduction (even though they are simply reciprocals of each other). Might this be because health authority members knew that mammography in the United Kingdom is offered mainly in the context of population screening, and so a number needed to treat of 1592 did not seem too large? It is intriguing, too, that the only people to say that they realised that all four “results” came from the same study were non-medical members.

    Because the relative risk reduction makes no assumptions about patients' susceptibility to an outcome (in this case death), it can easily be generalised to different groups of patients. The number needed to treat, however, can also be applied to groups of patients different from those for whom it was first calculated. All that is needed is to divide the published number needed to treat by a decimal fraction that relates your patients' susceptibility to the outcome of interest to that of the patients in the trial.1
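    As a sketch of that arithmetic (the factor f below is a conventional notation for this adjustment, not a symbol used in the paper): if your patients are judged to be f times as susceptible to the outcome as the patients in the trial, then

    \[ \mathrm{NNT}_{\text{your patients}} = \frac{\mathrm{NNT}_{\text{trial}}}{f} \]

    For example, with a published number needed to treat of 100, patients judged half as susceptible (f = 0.5) give an adjusted figure of 100/0.5 = 200, while patients twice as susceptible (f = 2) give 100/2 = 50.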

    There is currently much support for basing purchasing, as well as other health care decisions, on evidence, and the NHS research and development strategy is putting considerable effort into developing the NHS's evidence base. The paper by Fahey et al suggests that purchasers may also need a programme of skills development to ensure that, when charged with basing decisions on evidence, they know how to make sense of that evidence. This entails three separate but interrelated steps: systematically examining the trustworthiness of its conclusions; assessing whether its results are important; and considering its applicability to the local population.

    These various measures of a treatment's benefits of course say nothing about costs, which are central to any purchaser's considerations. Another problem is that the benefits are not measured in a way that facilitates comparison across programmes: for both mammography and cardiac rehabilitation the focus is on deaths prevented, taking no account of the age at death or of the quality of the life years gained. It is for reasons such as these that some health economists have criticised the current drive to clinical effectiveness in the United Kingdom. Their concern is that purchasers, by investing more resources in services of proved benefit, might divert resources from services that could yield greater improvements in their population's health.

    Valid and thoughtful cost-utility analysis, using measures such as quality adjusted life years (QALYs), will certainly be vital for evidence based purchasing. Few purchasing problems, however, can at present be analysed in this way. For that reason, measures such as the number needed to treat and the absolute risk reduction are best seen as simple “half way technologies” that can help busy people trying to make the best use of evidence in their decisions.
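    In outline, and with purely hypothetical figures, such an analysis expresses each programme's value as its incremental cost per QALY gained:

    \[ \text{cost per QALY} = \frac{\Delta\,\text{cost}}{\Delta\,\text{QALYs}} \]

    A programme costing an extra £200,000 and yielding 50 extra QALYs thus costs £4000 per QALY, a figure that can in principle be ranked against any other programme on the same scale, which outcome-specific measures such as the number needed to treat cannot offer.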

    References
