
Analysis

Do health improvement programmes fit with MRC guidance on evaluating complex interventions?

BMJ 2010; 340 doi: https://doi.org/10.1136/bmj.c185 (Published 01 February 2010) Cite this as: BMJ 2010;340:c185
  1. Mhairi Mackenzie, senior lecturer in public policy1,
  2. Catherine O’Donnell, professor of primary care research and development2,
  3. Emma Halliday, public health adviser3,
  4. Sanjeev Sridharan, associate professor, health policy, management and evaluation4,
  5. Stephen Platt, professor of health policy research5
  1. Department of Urban Studies, University of Glasgow, Glasgow G12 8RS
  2. General Practice and Primary Care, Division of Community Based Sciences, University of Glasgow
  3. Policy Evaluation and Appraisal, NHS Health Scotland, Edinburgh EH12 5HE
  4. Keenan Research Centre, St Michael’s Hospital, University of Toronto, Toronto, Ontario, Canada
  5. Centre for Population Health Sciences, University of Edinburgh, Edinburgh EH8 9AG

  Correspondence to: M Mackenzie m.mackenzie@lbss.gla.ac.uk
  • Accepted 29 December 2009

Mhairi Mackenzie and colleagues argue that, although new health policy could be planned in ways that enable more robust evaluation, randomised controlled trials are not always suitable or practical

In 2000, Medical Research Council guidance recommended that evaluators adopt a sequential approach to testing complex interventions in health care.1 The approach would lead to a well theorised and replicable intervention that could be assessed using a randomised controlled trial. The model, largely reflecting that adopted in clinical drug trials, was criticised on several fronts, including a failure to appreciate the complexity of policy related programmes and contextual variation. Although the updated framework in 2008 addressed many of these criticisms, it still argued that evaluators should strive to use the model of randomised controlled trials.2

The recent health select committee report on health inequalities3 also criticised the missed opportunities to conduct controlled studies of recent policy interventions and called for policy makers to develop interventions that could be better evaluated. These would be more clearly defined, reasonably stable over time, and have specified levels of consistency in implementation between different contexts. Ideally, these features would provide the opportunity for randomised testing.

However, Pawson and Tilley have argued strongly that treating complex programmes as single interventions is misguided and that the randomised controlled design is not appropriate for answering pertinent questions about what works for whom in what circumstances.4 Hawe and Shiell, although advocating the judicious use of controlled designs, have argued that the MRC guidance does not acknowledge the unpredictability of organisational systems into which interventions are introduced. They suggest that, rather than viewing interventions as discrete packages, they should be viewed as “events in systems.”5 We use the example of Keep Well (the Scottish government’s major investment in cardiovascular anticipatory care) to show the problems …
