Letters: Evaluating complex interventions

Health improvement programmes: really too complex to evaluate?

BMJ 2010; 340 doi: https://doi.org/10.1136/bmj.c1332 (Published 10 March 2010) Cite this as: BMJ 2010;340:c1332
  Lyndal Bond, associate director,1 Peter Craig, programme manager,2 Matthew Egan, senior investigator scientist,1 Kathryn Skivington, pre-doctoral fellow,1 Hilary Thomson, senior investigator scientist1

  1. MRC/CSO Social and Public Health Sciences Unit, Glasgow G12 8RZ
  2. MRC Population Health Sciences Research Network, Glasgow G12 8RZ

  Correspondence to: l.bond{at}sphsu.mrc.ac.uk

    Imagine an intervention whose effects vary within and between individuals and depend on subtle interactions between deliverers and recipients, and in which exposure is uncertain. Given this complexity, who would contemplate conducting a randomised controlled trial? In fact, all these issues must be dealt with in drug or therapeutic trials, as well as in more obviously complex interventions.1

    Mackenzie and colleagues describe the problems of a specific intervention—Keep Well—to argue that randomised controlled trials are inappropriate or impossible for evaluating most health improvement interventions.2 In doing so, they ignore the many successful randomised controlled trials of health improvement interventions that suggest this intervention type is not a special case,3 4 and misrepresent the MRC guidance for the development and evaluation of complex interventions.5

    Keep Well has many features that make evaluation difficult, but shifting the focus of evaluation from effectiveness towards implementation is a mistake. Why should decision makers want to know about implementation, populations reached, or impact on practice unless they know whether the intervention is effective? Such questions are highly pertinent, but only in the context of good information about outcomes.

    The MRC guidance is pragmatic in recommending randomised controlled trials. It warns researchers to beware of blanket statements prescribing specific methods for particular settings, and it draws attention to successful and useful applications of a range of non-experimental approaches. The guidance also recognises that rigid protocols are often impractical, and it emphasises the need for theoretically informed process evaluations. It is misleading and unhelpful to class it as part of a powerful “lobby for controlled trial designs.”1

    Notes

    Footnotes

    • Competing interests: None declared.
