Developing and evaluating complex interventions: the new Medical Research Council guidance
BMJ 2008;337 doi: https://doi.org/10.1136/bmj.a1655 (Published 29 September 2008). Cite this as: BMJ 2008;337:a1655.
All rapid responses
The new guidance from the MRC is a welcome improvement on the first version. Importantly, it adopts mixed methods throughout an evaluation and promotes adjusting interventions to fit the local context.
Unfortunately, the new framework is limited by the failure to fully embrace the value of theory explaining how an intervention works. In a complex system, experimental methods based on comparing the outcome measures of an intervention and a control group do not intrinsically validate the theory of the intervention.
The article fails to acknowledge the implicit assumption in the framework that the intervention will work for everyone. It is unlikely that any complex intervention will ever work for everyone. This challenges us to embrace subgroups defined by a theory that may be based on human behaviour rather than medicine.
Research into interventions in complex systems should embrace the value of experimenting to validate theory as well as outcomes. Through developing theory we can build alternative interventions appropriate to different groups of people. Without it we will only ever ask, “On average, does this intervention give a statistically significant improvement in outcomes relative to a control?” The flaw in this question, given the observation that a complex intervention is unlikely to work for everyone, is obvious. The overall failure of the research is the failure to recognise the need for the type of intervention it claims to evaluate.
Competing interests: None declared
Sir - The latest MRC framework for research in complex systems continues to demonstrate its persistent fundamental flaw: a conflation of complex with complicated. It is now generally accepted that complex systems and interventions are most appropriately viewed as non-linear. Non-linearity implies that such systems cannot be understood by reduction into their component parts. Outcomes are never an end in themselves but simply a further iteration of an ongoing process, where a more appropriate focus is on the interaction of system variables, from which patterns emerge that are not always predictable.

The MRC programme continues to ignore other bodies of work which have grappled with these issues. There is a large systems literature which has addressed all these problems in much greater depth, particularly in the field of soft systems thinking.(1) Realistic evaluation offers alternative methodological approaches,(2) and complexity theory itself has much to offer.(3)

Lakatos has suggested that within a discipline, although research programmes can change with time, they evolve in such a way as to form a protective belt of auxiliary theories that defend the fundamental and unassailable core.(4) The centrality of the randomised controlled trial continues to be protected in this way. As a consequence, inappropriate methodologies are perpetuated and the development of more relevant approaches to research in complex systems is inhibited.
1 Checkland P. Systems thinking, systems practice. Chichester: Wiley, 1999.
2 Pawson R, Tilley N. Realistic evaluation. London: Sage, 1997.
3 Kernick D. Wanted: new methodologies for health service research. Is complexity theory the answer? Family Practice 2006;3:385-90.
4 Lakatos I. The methodology of scientific research programmes: philosophical papers. Volume 1. London: Cambridge University Press, 1978.
Competing interests: None declared
We welcome the new guidance on developing and evaluating complex interventions, prepared by Peter Craig and colleagues on behalf of the MRC[1]. Integral to the guidance is a list of factors that make an intervention complex, including "number … of behaviours required by those delivering or receiving the intervention". However, researchers and research funders may not recognise the behavioural components of some interventions.
Even in simple interventions there are ‘invisible’ behavioural components that are rarely specified and therefore rarely measured, controlled, theorised or tailored. It has long been recognised that a drug trial should take account of non-adherence: if the recipient does not perform certain behaviours (e.g., swallow the pills or attend the surgery for an injection), the drug will not be effective[2]. Low levels of adherence[2] can reduce the power of the trial and threaten the validity of trial findings. Unused drugs cost the NHS millions of pounds each year[3]. Identifying the role of behaviour and evaluating behaviour change techniques in combination with clinical interventions are thus fundamental to good trial methodology.
The nature of behavioural instructions can influence both the recipients’ intentions to adhere and their success in adhering to their intended actions[4]. Behavioural instruction (e.g., as set out in patient information leaflets) is thus an integral part of a ‘simple’ drug intervention. Even recovery from surgery can be influenced by information or instructions given[5]. Furthermore, in unblinded trials (e.g., most trials of surgical interventions) the effects of behavioural instruction are likely to interact with the treatment being evaluated.
Behavioural instruction delivered through a patient information leaflet can be standardised and effects of variation can, in principle, be tested using randomised designs. However, behavioural instruction (or other behaviour change techniques) delivered by a physician, surgeon or pharmacist may be tailored, but is usually not measured, reported or controlled. This may confound the effects of other intervention components.
We argue that almost all interventions delivered by clinicians or research staff (or patients themselves) are complex, in that their effects are influenced by measurable behaviours performed by the recipient or the deliverer. In summary, apparently simple interventions to improve health often include behavioural components. Trials methodology can be enhanced if these components are identified, reported, measured, standardised (at the appropriate level[6]) and tailored (to recipient and context). If many ‘simple’ interventions are actually complex, the new MRC guidance is applicable more widely than may have been recognised in the past.
Competing interests: None declared.
References
1. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ 2008;337:a1655.
2. Sackett DL. Priorities and methods for future research. In: Sackett DL, Haynes RB, eds. Compliance with therapeutic regimens. Baltimore: Johns Hopkins University Press, 1976:169-89.
3. BBC News. http://news.bbc.co.uk/1/hi/scotland/7674308.stm
4. Prestwich A, Conner M, Lawton R, Bailey W, Litman J, Molyneaux V. Individual and collaborative implementation intentions and the promotion of breast self-examination. Psychology and Health 2005;20(6):743-60.
5. Johnston M, Vögele C. Benefits of psychological preparation for surgery: a meta-analysis. Annals of Behavioral Medicine 1993;15(4):245-56.
6. Hawe P, Shiell A, Riley T. Complex interventions: how “out of control” can a randomised controlled trial be? BMJ 2004;328:1561-3.
Re: Developing and evaluating complex interventions: the new Medical Research Council guidance
Since the revised MRC guidance on the development and evaluation of complex interventions was published in 2008, it has become increasingly clear that there were gaps where further guidance was needed. One such gap has now been filled by the publication of new MRC guidance on process evaluation (http://decipher.uk.net/wp-content/uploads/2014/11/MRC-PHSRN-Process-eval...). Stemming, like the 2008 paper, from a workshop sponsored by the MRC Population Health Sciences Research Network, the new guidance provides a comprehensive overview of the why, when and how of process evaluation. Starting from the principle that simply knowing whether an intervention is effective is not enough, and we also need to understand why it succeeds or fails, the new guidance usefully situates process evaluation in the context of other approaches and theories of evaluation and presents a series of detailed case studies.
Work is also under way on many other aspects of the evaluation of complex interventions where the 2008 guidance provides only brief pointers to good practice, such as intervention development and the reporting of pilot, feasibility and implementation studies. It is clear from citations of both the original guidance published in 2000 and the 2008 update that interest in complex interventions is growing. A comprehensive set of guidance covering all aspects of the cycle of development, evaluation, implementation and reporting is still some way off, but the authors of the new process evaluation guidance have brought it an important step closer.
Competing interests: No competing interests