
Education And Debate

Why do we need randomised controlled trials to assess behavioural interventions?

BMJ 1998;316 (Published 14 February 1998) Cite this as: BMJ 1998;316:611
  1. Judith Stephenson (jstephen{at}, senior lecturer in epidemiology,
  2. John Imrie, research fellow
  1. Department of Sexually Transmitted Disease, University College London Medical School, London WC1 6AU
  1. Correspondence to: Dr Stephenson
  • Accepted 21 July 1997

The value of the randomised controlled trial still generates debate.1 Although some of the earliest examples of these trials can be found in behavioural and psychosocial research, this is not an area that has readily adopted the randomised controlled trial to assess interventions.2 Two recent developments have intensified debate about the role of randomised controlled trials—the urgent need to find effective behavioural interventions against HIV 3 4 and the advance of evidence based medicine, which is moving the randomised controlled trial beyond clinical trials into areas such as health promotion. This article considers the merits and limitations of randomised controlled trials in the behavioural field compared with clinical medicine, and asks how these trials can be applied successfully to assess behavioural interventions.

Merits and limitations

The merits and limitations of randomised controlled trials in general have been widely discussed 5 6; only key points are repeated here. In clinical medicine, the randomised controlled trial is considered the best way of measuring the efficacy of interventions because of its ability to minimise bias and avoid false conclusions. Random assignment of individuals to different treatment groups is the best way of achieving a balance between groups for the known and unknown factors that influence outcome. This may seem to run counter to the traditional medical model of the doctor deciding which treatment is best for each patient, but it is considered ethical only when there is genuine uncertainty about which treatment to offer. By the same token, failure to tackle genuine uncertainty about treatments through randomised controlled trials can be considered unethical because it allows ineffective or harmful treatments to continue unchecked.


Aside from ethical issues, the limitations of randomised controlled trials are relative, and shared to some degree by other study designs. These include cost, feasibility, and relevance to the real world. The effect of an intervention in an ideal research setting (efficacy) may well differ from its effect in the real world (effectiveness). This is particularly true of “explanatory” trials, which are designed to establish a cause and effect relation, but less so of “pragmatic” trials, which aim to mimic real life situations.7 Efficacy tends to differ from effectiveness because people who give informed consent to enter trials usually differ, in ways that affect outcome, from those who are eligible but decline or are not invited. Furthermore, taking part in research often involves procedures and commitments that are different from routine practice. In this sense, effectiveness cannot be judged from tightly controlled research, but without prior evidence of efficacy, it can be hard to attribute events in the real world to the effectiveness of an intervention (see below).

Summary points

Merits of randomised controlled trials in behavioural and psychosocial research do not differ fundamentally from those in clinical medicine

Interventions that target behaviour are often complex and demanding, as are the requirements of good randomised controlled trials to assess their efficacy

Standardising the content and delivery of an intervention in a trial may be more challenging than justifying randomisation during informed consent

When blinding of participants and researchers to treatment allocation is impossible, it is important to minimise bias through blinded assessment of the outcome

The contribution that participant choice makes to the efficacy of an intervention is hard to measure

What is the debate?

Debate about randomised controlled trials generally takes one of two forms. If it is accepted that the randomised controlled trial is the method of choice for estimating the efficacy of interventions, then debate is confined to the conditions that permit the trial on ethical and practical grounds and make the findings useful beyond the trial itself.7 If a randomised controlled trial is not possible for ethical or practical reasons, observational studies may be the only way of assessing an intervention, and some of these have undoubtedly been useful. To accept the randomised controlled trial as the best way of gauging efficacy is not necessarily to dismiss other study designs that have the same objective. 1 7 8 For example, no randomised controlled trial of the efficacy of condoms in preventing sexual transmission of HIV has been carried out. Given the seriousness of HIV disease and the consistency of early, albeit inconclusive, studies supporting condom use, such a trial would have been unethical. However, in a well designed prospective study of sexual partners whose HIV status differed, the seroconversion rate was zero among couples who used condoms consistently and about 5% per year in those who did not.9 Given this evidence of condom efficacy, we can be more confident that the decrease in the rates of sexually transmitted diseases and HIV infection in Thailand is due to the effectiveness of a nationwide campaign that dramatically increased condom use in brothels.10


The second form of debate involves more fundamental opposition to randomised controlled trials. In the behavioural and psychosocial field, ethical objections have been raised about withholding interventions that are believed or assumed to be beneficial. In addition, it is argued that randomised trials are not applicable in this field because they ignore the importance of external influences, participant choice, qualitative research methods, and the complexity of behavioural and psychosocial interventions.11

Assessing behavioural interventions

Behaviours and outcomes related to health are clearly influenced by complex social and economic factors.12 Randomised controlled trials may have little to contribute in terms of explaining how and why these factors affect health and behaviour, but this does not deny their usefulness in testing applied interventions with specific objectives. In fact, randomisation seeks to balance out external influences between groups so that the true effect of an applied intervention is detectable. For example, advertising and pricing policies undoubtedly have a major impact on smoking levels, but the effectiveness of a specific intervention to stop smoking—for example, hypnosis—is best examined by a randomised controlled trial of smokers.13
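The balancing role of randomisation described above can be illustrated with a small simulation. In this hypothetical sketch (the "motivation" score, its distribution, and the sample size are all invented for illustration), an external influence on quitting smoking is distributed across 1000 smokers; random allocation leaves the two arms with nearly identical average motivation, so a difference in quit rates could be attributed to the intervention itself.

```python
import random
import statistics

random.seed(42)

# Hypothetical example: a "motivation" score representing an external
# influence on the outcome (quitting smoking), drawn from one population.
motivation = [random.gauss(50, 10) for _ in range(1000)]

# Random allocation to intervention (e.g. hypnosis) or control.
allocated = [random.random() < 0.5 for _ in range(1000)]
intervention = [m for m, a in zip(motivation, allocated) if a]
control = [m for m, a in zip(motivation, allocated) if not a]

# Randomisation balances the external influence between arms, on average,
# for known and unknown factors alike (this score stands in for both).
diff = abs(statistics.mean(intervention) - statistics.mean(control))
print(round(diff, 2))  # small relative to the standard deviation of 10
```

The same balancing applies to factors the investigators never measured, which is what no method of deliberate matching can guarantee.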

Interventions that target behaviour are often demanding and costly. They generally require several sessions run by highly skilled staff. Without evidence of efficacy, scarce resources might be better spent elsewhere, and the possibility of causing harm should be considered. One study of counselling after an HIV test found that the incidence of gonorrhoea in people who tested negative was twice as high in the six months after testing and counselling as in the preceding six months.14 Without a control group these findings are hard to interpret, and there are few good trials in this area. The point is that well meaning measures may not work as intended.

Behavioural interventions are often evaluated through uncontrolled, before and after comparisons. 4 15 Dissatisfaction with these comparisons in clinical medicine is partly related to the statistical law known as regression to the mean. If extreme values (for example, of blood cholesterol) are singled out from a distribution, they are likely, for purely statistical reasons, to fall closer to the usual level if measurement is repeated. In the absence of a control group, lower cholesterol concentrations at follow up might merely reflect the laws of statistics but be wrongly attributed to the effect of an intervention.
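Regression to the mean can be demonstrated with a short simulation. In this illustrative sketch (the cholesterol distribution, measurement error, and 7.0 mmol/l threshold are assumed values, not taken from the article), people are singled out because their first measurement was extreme; a repeat measurement falls back toward the population mean with no intervention at all.

```python
import random
import statistics

random.seed(1)

# Hypothetical population: true cholesterol (mmol/l) plus measurement noise.
true_chol = [random.gauss(5.5, 1.0) for _ in range(10_000)]
first = [t + random.gauss(0, 0.5) for t in true_chol]
second = [t + random.gauss(0, 0.5) for t in true_chol]  # no intervention given

# Single out people whose first measurement was extreme (> 7.0 mmol/l).
extreme = [(f, s) for f, s in zip(first, second) if f > 7.0]
mean_first = statistics.mean(f for f, _ in extreme)
mean_second = statistics.mean(s for _, s in extreme)

# The repeat measurement is lower purely for statistical reasons; in an
# uncontrolled before-and-after study this fall could be misattributed
# to the effect of an intervention.
print(round(mean_first, 2), round(mean_second, 2))
```

A randomised control group would show the same spontaneous fall, exposing the artefact.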

It is easy to imagine a similar, although non-statistical, phenomenon occurring in a behavioural setting. For example, people are most at risk of contracting a sexually transmitted disease when they are taking greater sexual risks than usual, and are likely to return to their usual level of risk behaviour afterwards. If people with a sexually transmitted disease were recruited to an uncontrolled study of a behavioural intervention, lower rates of sexually transmitted diseases or increased condom use might be expected at follow up, even if the intervention had no effect (particularly if participants were reluctant to disclose high risk behaviour or if the process of diagnosing a sexually transmitted disease and treatment alone had had an impact). Effects of this kind could not be detected without a comparable (randomised) control group receiving standard care but not the intervention of interest.

Randomised controlled trials have limitations

An important limitation to randomised controlled trials in behavioural or psychosocial research relates to “blinding” and participant choice. Blinding to treatment allocation in clinical trials (for example, by using placebos that are indistinguishable from active drugs) is intended to prevent the expectations of patients or researchers from influencing the outcome. In behavioural trials, blinded allocation to treatment may be impossible, but blinded assessment of outcome need not be. For example, in a randomised controlled trial of psychotherapy versus supportive listening in patients with irritable bowel syndrome, both the psychiatrist and the patients knew which treatment group they were in, but the outcomes (psychological and bowel symptoms) were assessed by another psychiatrist and a gastroenterologist who were blind to treatment allocation.16

Does choice improve outcome?

Excluding choice by allocating patients randomly to one or other treatment is seen as a great strength in clinical trials. In behavioural trials, this has brought criticism. Choosing your preferred intervention, it is argued, increases motivation and thereby the success of the intervention.17 Even if this is true, can such a motivational effect be measured? Some researchers claim it can be measured by comparing outcomes between those who are randomly assigned to a particular intervention and those who choose it.17 The problem with this is that people who choose an intervention probably differ from those who do not in ways that affect the outcome. For example, in the trial of patients with irritable bowel syndrome, psychotherapy was better than supportive listening at three months, at which time the supportive listening group was offered psychotherapy.16 The 77% who accepted the offer clearly differed in psychological symptoms—at the point of decision—from the 23% who declined. Other (unknown) factors related to irritable bowel syndrome may also have differed between the two subgroups.

Inappropriate for exploratory research

As noted above, randomised controlled trials are not appropriate for exploratory research into factors that determine behaviour related to health. This is the domain of qualitative research.18 Identifying the cultural context, values, beliefs, and community norms of target groups through good qualitative research is the key to the design and implementation of promising interventions. Examples can be found in the development of behavioural interventions led by peers to reduce the risk of infection with HIV, particularly among hidden or hard to reach groups such as injecting drug users who are not in contact with treatment services, and young gay men. 19 20 Clearly the strengths of qualitative research should not be pitted against those of randomised controlled trials.

Standardisation is a major challenge

Standardising the content and delivery of a complex intervention in a randomised controlled trial is a major challenge. More extensive training and monitoring of those delivering the intervention may be needed than in clinical trials. Monitoring can be done through supervision and feedback or, more objectively, by audiotaping or videotaping random sessions by independent assessors. Careful monitoring and qualitative research may explain why complex interventions fail or, conversely, shed light on the factors that lead to behaviour change—the active ingredients.21 For example, if a support group based on cognitive-behavioural therapy proves effective, is it the cognitive-behavioural component of the treatment that works, or is non-specific group support more important? To be confident about which is most effective would require each component to be tested in additional experiments or a trial with several arms. Both options are usually too costly and too lengthy to be realistic. In terms of assessing healthcare interventions, pragmatic randomised controlled trials may therefore be more appropriate and manageable than explanatory ones, even if they do not identify the active ingredient. By comparison with the effort required to standardise and monitor a complex intervention, the additional effort of justifying randomisation to potential participants is small, while the gain in reducing bias is great.


The value of the randomised controlled trial in behavioural research does not generally differ fundamentally from its value in clinical medicine. Issues that occasionally arise in other areas, such as lack of opportunity for blinding and complexity of intervention, are particular features of behavioural and psychosocial research. Standardising the content and delivery of a complex intervention may prove more limiting than random allocation. When interventions are complex, pragmatic trials may be more likely to succeed than explanatory ones.


Funding: JS is funded by a Medical Research Council programme grant and JI by a North Thames research and development project grant.

Conflict of interest: None.
