
Letters

Search for evidence of effective health promotion

BMJ 1998; 316 doi: https://doi.org/10.1136/bmj.316.7132.703a (Published 28 February 1998) Cite this as: BMJ 1998;316:703

Quantitative outcome evaluation with qualitative process evaluation is best

  1. Annie Britton, Research fellow,
  2. Margaret Thorogood, Reader,
  3. Yolande Coombes, Lecturer,
  4. Gillian Lewando-Hundt, Senior lecturer
  1. Health Promotion Research Unit, London School of Hygiene and Tropical Medicine, London WC1E 7HT

    EDITOR—Non-randomised studies are currently regarded as inferior, if not worthless, and Speller et al are right to question whether randomised controlled trials are always the best or most appropriate method of evaluating health promotion.1 Evaluation entails quantifying worth: wellbeing is an important asset but is difficult to quantify and hence to evaluate. It is important to distinguish between “not effective” and “not evaluable.”

    Attribution of the effects of an intervention (and the relative costs involved) is the goal of evaluation. An insistence on randomised controlled trials ignores some of the unique features of health promotion: interventions often take place at a community or national level, the expected proportional benefits to individuals are small, and beneficial outcomes are delayed.2 Potential contamination and confounding mean that attribution can rarely be a certainty, and even when it can be, replication is limited.

    The external validity of randomised controlled trials of preventive interventions is questionable. Patients who agree to participate in such trials tend to be affluent and better educated and to adopt a healthier lifestyle than those who do not participate (A R Britton et al, unpublished systematic review). Moreover, most health promotion interventions involve individual behaviour change, so the use of blinding techniques may be impossible. This has implications for the effect of patient preference on the result.3 The value of randomisation in ensuring internal validity is unquestionable, but such trials are not always an appropriate design in health promotion. Therefore, lack of evidence from randomised controlled trials should not be viewed as a failure in the quality of research; rather, more attention should be given to refining and strengthening other trial methodologies (community trials, before and after trials) and incorporating these appropriately into the evidence base.

    We do not dispute that, where possible, randomised controlled trials should be conducted and incorporated into systematic reviews. Nor do we dispute that systematic reviews are an important tool for judging evidence. We cannot support Speller et al's argument that reviewers should exclude interventions simply because they consider them poorly conceived—this would mean a return to the bad old days of the “expert review.”

    An increased use of qualitative methods is needed to complement quantitative research, but, more than that, there should be a new integration of both methodologies within dynamic multicausal models.4 The focus on effective outcomes too often ignores the process of an intervention. Quantitative outcome evaluation combined with qualitative process evaluation may be a way forward in understanding the interrelation between people's behaviour and the social structure within which they live.

    References


    Systematic reviews include studies other than randomised controlled trials

    1. Trevor A Sheldon, Director,
    2. Amanda J Sowden, Research fellow,
    3. Deborah Lister-Sharp, Research fellow
    1. NHS Centre for Reviews and Dissemination, University of York, York YO1 5DD

      EDITOR—Health promotion specialists seem threatened by rigorous systematic review methods that attempt to judge the value of theories and practice in the light of the data rather than by what is fashionable.1 The approach sometimes used by the International Union for Health Promotion and Education (discussed by Speller et al), involving the selection of about 10 favourite studies, is interesting but provides insufficient basis for policy. Systematic reviews are an improvement on the casual way in which health promotion has been assessed. Health promotion practitioners often base their practice on opinion, received wisdom, or a favoured theory, occasionally supported by selective reference to a few studies of variable quality which rarely assess health outcomes. To deny the centrality of examining the effect of health promotion on health related outcomes at a personal or community level (regardless of whether one is using a traditional medical or more holistic definition of health) is to raise serious questions about the legitimacy of some health promotion activity.

      The authors are correct to emphasise the importance of looking at how interventions are delivered (even though Speller et al's own review on childhood accidents gives no details about process),2 but they wrongly assert that this is ignored in our reviews. Our guidelines highlight the importance of a qualitative approach in assessing the effectiveness of interventions.3

      Speller et al mistakenly assume that systematic reviews include only randomised controlled trials and that these exclude the use of qualitative methods. The guidelines from the NHS Centre for Reviews and Dissemination do not prescribe the research designs to be included in a review. Several of the cited reviews include studies that have used other designs appropriately. The authors' reflex rejection of a key role for experimental designs ignores a growing appreciation that well designed experiments can be conducted in the community and provide less biased estimates of the impact of programmes than traditional, poorly controlled approaches.4 5 Speller et al's article makes little contribution to our understanding of which study designs and methods are best for different purposes.

      The authors accuse those who conduct systematic reviews of health promotion of drawing false conclusions which may “lead to the long term detriment of public health.” However, not one example is given of a finding from any of the cited reviews that is misleading. Their critique is even more difficult to take seriously given that Speller recently undertook a paid commission from the NHS Centre for Reviews and Dissemination to disseminate the results of one of the reviews that she criticises.

      References
