Complex interventions: how “out of control” can a randomised controlled trial be? BMJ 2004;328 doi: https://doi.org/10.1136/bmj.328.7455.1561 (Published 24 June 2004). Cite this as: BMJ 2004;328:1561
Hawe et al. (2004) have offered an interesting solution to the problem
of evaluating complex interventions: standardising interventions by
function rather than by form. Their Table 1 gives some idea of how
standardisation may be carried out either by form or by function. I agree
that to treat something like an educational intervention as if it were a
drug is a serious mistake. However, I feel that their ideas still need some
refinement. Firstly, even apparently simple drug trials can be regarded as
complex, because issues such as adherence depend on a variety of
influences, including feedback as to whether symptoms improve or not. As I
see it, the problem with their suggestions is the generalisability of the
outcome. For example, a reader of a trial standardised by function might
like to educate his/her own patients about depression. Faced with the fact
that each site in the trial adopted a different form of intervention, how
does the reader choose a suitable one for his/her patients? One suggestion
is to supply an algorithm that recommends an optimum intervention for
a given case mix. An important point is that the algorithm is developed
in pilot or Phase II trials, so that the Phase III trial is testing the
algorithm, which has been specified a priori. Thus the interventions will
vary depending on the literacy, culture and learning styles of the
patients, but any subsequent user of the intervention will know how to
apply it to his/her patients.
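Purely to illustrate the idea of an a priori algorithm mapping patient characteristics to an intervention form, a minimal sketch follows; the categories, rules and intervention forms are invented for illustration and are not drawn from any actual trial:

```python
# Hypothetical sketch of an a priori intervention-selection rule for a
# trial standardised by function ("educate patients about depression").
# All categories and mappings below are invented for illustration only.

def choose_intervention(literacy: str, learning_style: str) -> str:
    """Recommend a form of patient-education intervention.

    literacy: "low" or "high"; learning_style: "visual", "verbal" or "group".
    """
    if literacy == "low" and learning_style == "visual":
        return "illustrated leaflet plus video"
    if learning_style == "group":
        return "facilitated group session"
    if literacy == "high":
        return "written self-help booklet"
    return "one-to-one verbal education"

# A low-literacy patient who prefers learning in a group:
print(choose_intervention("low", "group"))  # prints: facilitated group session
```

Because such a rule is fixed before the Phase III trial begins, the trial evaluates the rule itself, and a subsequent user can apply the same rule to his/her own case mix.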
There is an analogy here with evaluation of complementary therapies
such as homoeopathy. Therapists sometimes claim that randomised controlled
trials in homoeopathy are impossible since the treatment has to be
tailored to the individual. Trialists counter this by saying the
intervention is simply ‘referral to a homoeopathist’ – what the
homoeopathist actually does is a ‘black box’ that is impossible to
disentangle. Thus referral to the homoeopathist is the ‘function’. One
might assume that homoeopathists have an intuitive algorithm that helps
them optimise care for their individual patients. The important design
aspects here are to have a control group (such as conventional medical
care), some form of random allocation and an agreed outcome measure that
is applicable to both groups. A challenge to the designers of trials here
is to separate the effect of therapy from the effect of the therapist.
Thus a trial with a single therapist would be a poor one, since a patient,
faced with a successful outcome from such a trial, would have no idea
whether it would be worth going to a different therapist. Similarly,
trials of complex interventions are poor if they restrict entry to, say,
well educated, middle class patients.
Competing interests: No competing interests
Hawe et al.’s paper on the evaluation of complex interventions makes
a valuable contribution to the discourse on conventional evaluation
methodologies for complex interventions (1, 2, 3). We support their
argument that the way in which complex interventions have been
conceptualised in the past is not conducive to their effectiveness or
their (meaningful) evaluation but disagree with their emphasis on the RCT.
Hawe et al. suggest that if complex interventions are standardised by
their functions then the RCT remains a legitimate evaluation technique.
They say contextual adaptation increases the effectiveness of complex
interventions, but this is difficult to reconcile with their support for
the RCT to demonstrate such effectiveness. RCTs align the experimental and
control groups and disregard the social context in which programmes are
applied. Will this crucial account of context not be lost with an RCT, no
matter how the intervention is standardised? This may have been one reason
why marginal or no effects are seen from RCTs of complex interventions,
some of which Hawe et al. refer to (2, 3).
The examples of evaluation of complex interventions from the MRC and
WHO are interesting and useful. However, we were surprised at the lack of
reference to the wider body of literature in this field which has observed
the limitations of the experimental and quasi-experimental approach (3,
4). Hawe et al. make no reference to pluralism, where a range of
approaches such as qualitative methods, case studies and ecological
analyses can be used.
Hawe et al. highlight the consequences of evidence hierarchies
overlooking certain areas where effectiveness is not commonly demonstrated
by RCTs. However, they then find a way to continue using the RCT for the
evaluation of complex interventions. The complexity of the intervention
should necessarily influence the choice of evaluation technique. We are concerned
that the RCT will be maintained as the only acceptable form of evaluation
for any intervention.
Lucia Dambruoso, Emma Pitchforth, Julia Hussein on behalf of the
Dugald Baird Centre for Research on Women’s Health Journal Club,
University of Aberdeen.
1. Hawe P, Shiell A, Riley T. Complex interventions: how "out of
control" can a randomised controlled trial be? BMJ 2004;328:1561-1563.
2. Medical Research Council. A framework for the development and
evaluation of randomised controlled trials for complex interventions to
improve health. London: MRC, 2000.
3. Pawson R, Tilley N. Realistic evaluation. London: Sage, 1997.
4. Milne L, Scotland G, Tagiyeva-Milne N, Hussein J. Safe
motherhood program evaluation: theory and practice. Journal of Midwifery
and Women's Health 2004;49(4):338-344.
Competing interests: No competing interests