Producing better evidence on how to improve randomised controlled trials

BMJ 2015;351 doi: 10.1136/bmj.h4923 (Published 25 September 2015) Cite this as: BMJ 2015;351:h4923
Joy Adamson, senior lecturer; Catherine E Hewitt, deputy director; David J Torgerson, director

York Trials Unit, Department of Health Sciences, University of York, York YO10 5DD, UK

Correspondence to: David J Torgerson david.torgerson{at}

Accepted 31 July 2015

Effective recruitment and retention are essential to successful clinical research but we have little good evidence about how to achieve this. Joy Adamson and colleagues call for more use of methodological trials embedded within clinical trials to improve our knowledge

Randomised controlled trials form the bedrock of evidence based clinical decision making. Many clinicians will be involved in recruiting, treating, and following up trial participants during their career. Although there are many randomised trials aiming to reduce treatment uncertainty, there are few trials of interventions to reduce uncertainty about how best to recruit and retain participants. For instance, is it better to use doctors or nurses to recruit? Only one trial, among patients with prostate cancer, has attempted to answer this question.[1]

Recruitment to and retention in trials are extremely important. Problems with either increase the risk of a type II error, that is, incorrectly concluding that there is no clinically important difference between treatments. Attrition is arguably the more worrying because, as well as reducing the power of the study, it potentially introduces bias, particularly if attrition differs between the randomised groups.[2] [3]
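
To see the arithmetic behind this, consider the standard design stage adjustment for expected dropout: the target sample size is divided by the anticipated completion rate. The sketch below is our illustration, not from the paper; the effect size, alpha, power, and attrition figures are all assumed, and it uses the usual normal approximation formula for a two group comparison of means.

```python
from math import ceil

from scipy.stats import norm


def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Sample size per arm for a two sample comparison of means
    (standard normal approximation: n = 2*((z_alpha + z_beta)/d)**2)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)


def inflate_for_attrition(n: int, attrition: float) -> int:
    """Recruit extra participants so the expected number of completers
    still meets the target n."""
    return ceil(n / (1 - attrition))


n = n_per_arm(effect_size=0.3)                 # assumed standardised difference of 0.3
print(n)                                       # 175 per arm
print(inflate_for_attrition(n, attrition=0.2)) # 219 per arm if 20% drop out
```

Note that this adjustment only restores power; it does nothing about the bias that differential attrition can introduce, which is why attrition is the more worrying problem.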

The lack of evidence makes it difficult to select the most appropriate recruitment and retention strategies when designing clinical trials, contributing to the waste of research resources.[4] Indeed, most clinical trials do not recruit on time and to target, which delays the acquisition of clinical evidence as well as adding to costs.[2] Improving the evidence therefore needs to be a priority.

Problems with the evidence

Although there are many evaluations of interventions to improve recruitment and retention, few of these use randomisation. Two recent Cochrane reviews lament the paucity of evidence to inform trialists and clinicians on the best strategies.[5] [6] Treweek and colleagues found 27 randomised trials evaluating recruitment strategies, only 19 of which were in the context of a real as opposed to a hypothetical trial.[5] Similarly, Brueton and colleagues found only 38 studies looking at interventions to improve retention of participants.[6]

Observational studies have provided some information. For example, Brealey and colleagues attempted to improve recruitment to a clinical trial by changing the recruitment pathway: instead of general practitioners recruiting patients and randomising them by telephone, potential participants took study materials away from the GP consultation and mailed their information to the supporting trials unit, which then randomised them.[7] The study found that postal recruitment had no effect. In contrast, a before and after study by Donovan and colleagues, which used qualitative techniques to enhance patient recruitment, showed large effects.[8] Both studies evaluated interesting and potentially important recruitment techniques, but more robust trial evidence is necessary.

Trials within trials

One way to improve the evidence is to routinely embed randomised evaluations of interventions aimed at improving recruitment and retention within host clinical trials. For instance, Jennings and colleagues embedded a trial of a £100 incentive within five different host trials and showed that the incentive significantly increased recruitment.[9]

At the York Trials Unit, we try to embed recruitment or retention trials within all randomised clinical trials. For example, we have tested or are testing the following interventions to improve retention rates: electronic reminders, newsletters to patients, adding sticky notes to questionnaires encouraging completion, free pens with postal questionnaires, and offers of trial results. We have so far found that electronic reminders are effective,[10] newsletters to trial participants are promising (one trial showing effectiveness, another in progress),[11] and sticky notes may be ineffective (one small trial showing no effect with others in progress).[12] For recruitment, we have tested or are testing different designs of patient information sheets, publicity, prerecruitment notification, envelope colour, and additional centre visits by a trial coordinator.

Challenges of trials within trials

Conducting embedded trials presents a number of challenges. The first is cost. Although some embedded trials are relatively inexpensive, they usually require some funding. We have overcome this mainly by evaluating relatively inexpensive interventions (such as electronic reminders) and exploiting the goodwill of staff. Nevertheless, studies to estimate the effectiveness of more expensive interventions to improve trial recruitment, such as qualitative methods,[8] will require additional resources.

Although we routinely try to embed at least one methodological trial within each clinical trial, this has at times aroused opposition. One recurring issue is statistical power. Sometimes the host trial is large enough for the results to reach conventional significance. For instance, the trial of qualitative methods by Donovan and colleagues was embedded in the ProtecT study of prostate cancer screening, which had a target sample size of more than 100 000 men, so a conventional power calculation was possible.[8] In our experience, however, the sample size of the host trial is not usually sufficient to show the small differences in recruitment and retention that would be important.

Some investigators do not think it worth the effort to undertake an underpowered trial that is unlikely to find a statistically significant difference. Indeed, one of the criticisms that we often receive from referees of our methods trials is the absence of a power calculation. We argue that a small underpowered trial is helpful as it adds to the evidence base and evidence from a small trial is better than no evidence. For instance, in our three trials of electronic reminders all had a beneficial effect on retention, but only one was close to statistical significance; yet when we combined them in a meta-analysis it suggested a worthwhile positive effect.[10] If more trials within trials are completed then pooling them in a meta-analysis will partly address the power objections. The trial by Jennings and colleagues shows this nicely.[9] It was conducted across five separate trials with sample sizes ranging from 93 to 332, giving it sufficient numbers to show that a relatively small difference in recruitment rates was significant.
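
As a minimal sketch of the pooling step (using made up effect estimates, not the actual data from the electronic reminder trials[10]), a fixed effect inverse variance meta-analysis can turn three individually non-significant results into a significant pooled one:

```python
import numpy as np

# Hypothetical log odds ratios and standard errors from three small
# embedded trials; each trial alone is non-significant (z < 1.96).
log_or = np.array([0.35, 0.28, 0.40])
se = np.array([0.25, 0.30, 0.22])

# Fixed effect inverse variance pooling.
w = 1 / se**2
pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se

print(f"pooled OR {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(ci_low):.2f} to {np.exp(ci_high):.2f})")
# pooled OR 1.43 (95% CI 1.07 to 1.89): significant, although no
# individual trial was.
```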

The sample size of embedded trials is often limited by that of the host trial so is beyond the researchers’ control. For embedded trials of recruitment strategies, however, the sample size is usually predicated on the numbers of people approached rather than the sample size of the host trial, which generally means they have greater power than studies of retention methods.
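
A hypothetical power calculation illustrates the point (the numbers, and the use of the statsmodels package, are our assumptions): with 2000 people approached but only 400 randomised in a host trial, the same absolute improvement in a proportion is far easier to detect in an embedded recruitment trial than in an embedded retention trial.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical host trial: 2000 people approached, 400 randomised,
# split 1:1 between the embedded trial's arms in both cases.
effect = proportion_effectsize(0.58, 0.50)  # 50% -> 58%, Cohen's h
solver = NormalIndPower()

power_recruitment = solver.solve_power(effect_size=effect, nobs1=1000, alpha=0.05)
power_retention = solver.solve_power(effect_size=effect, nobs1=200, alpha=0.05)

print(f"recruitment trial power: {power_recruitment:.2f}")  # ~0.95
print(f"retention trial power:   {power_retention:.2f}")    # ~0.36
```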

Another challenge is the resistance of the chief investigator or co-investigators. It is no coincidence that most of our methodological trials are linked to trials where the chief investigator is a member of our university department. Objections are varied and will be familiar to many trialists trying to persuade clinicians to recruit patients. For instance, some trialists “know” that an intervention is effective in improving recruitment or retention; they have used it for years and are reluctant to change.

Alternatively, trialists may know which sites have poor recruitment and therefore target interventions at these sites. If there is a consequent upsurge in recruitment the intervention gets the credit. Although this is often viewed as good evidence of effect, in the absence of randomised evidence observed improvements could have other explanations, including regression to the mean, temporal effects, or selection bias.
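
A toy simulation (our illustration, with made up numbers) shows how strong the regression to the mean effect can be: when 20 sites share the same true recruitment rate and differ only by chance, the worst looking site in one month will usually "improve" the next month with no intervention at all.

```python
import numpy as np

rng = np.random.default_rng(42)

true_rate, n_sites, n_runs = 10, 20, 10_000  # same true rate at every site
improved = 0
for _ in range(n_runs):
    month1 = rng.poisson(true_rate, n_sites)  # chance variation only
    month2 = rng.poisson(true_rate, n_sites)
    worst = month1.argmin()                   # "target" the worst site
    improved += month2[worst] > month1[worst]

# The targeted site recruits more next month in the large majority of
# runs, purely through regression to the mean.
print(f"apparent improvement at targeted site: {improved / n_runs:.0%}")
```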

Another objection to embedding a trial is the distraction and extra workload for research staff. This is especially acute when evidence emerges of poor recruitment or retention, when there is urgency for “all hands to the pumps” to save the trial. Although chief investigators may agree to an embedded recruitment trial at this stage, the statistical power problem is exacerbated because not all of the host trial participants can be randomised.

Avoiding unnecessary distractions is an important concern because trial investigators are reluctant to do anything that might damage the success of the host trial. Nevertheless, as well as being critical of evidence free medicine, we should be critical of our trial practices, which may be costly or even damaging to the conduct of the study. To limit the impact of conducting embedded trials it is best to plan them before the host trial starts because this allows relevant ethical approval and logistical planning to take place before any crisis emerges.

Choice of interventions

As noted previously, cost and logistics often drive the choice of intervention. However, there is scope to learn from other areas of research. For example, our trials of sticky notes were prompted by a study in the marketing literature that seemed to show that attaching sticky notes to questionnaires increased response rates.[13] Our trials do not support their use for trial retention, however, which shows the importance of replicating promising interventions from another context. In future we propose to conduct trials of SMS messages that include the recipient's name, to see whether this improves response rates. The idea came from a criminal justice study, in which a trial found that including the person's name in an SMS message improved rates of payment of fines.[14]

Improving the evidence base

Despite randomised trials being relatively easy to implement, few trialists routinely use them to evaluate the design and conduct of their studies. Perhaps feasibility studies should routinely embed recruitment and attrition prevention trials. For instance, the VIDAL feasibility trial of vitamin D supplementation has embedded a trial of open versus placebo control to test which is best for recruitment, retention, and contamination.[15] We believe that trials within trials, along with other approaches such as improving trial related infrastructure and reducing regulation, will improve recruitment and retention.

Recently a number of initiatives, such as Trial Forge and Studies Within a Trial, have tried to increase the number of trials within trials providing evidence on effective interventions to enhance recruitment and prevent attrition.[16] [17] In addition, the Medical Research Council has funded the Systematic Techniques for Assisting Recruitment to Trials (MRC START) team to evaluate two recruitment interventions using trials within trials.[18] Nevertheless, the number of embedded trials still needs to increase. The UK has 45 trials units registered with the UK Clinical Research Collaboration (UKCRC). If each unit undertook an embedded trial of a single recruitment or retention strategy each year, this would more than double the evidence base within two years: 90 new trials, against the roughly 65 randomised evaluations identified by the two Cochrane reviews.[5] [6]

We have found trials within trials to be valuable. Most of our embedded trials have been unfunded and address relatively simple questions. Nevertheless, we have now accumulated sufficient evidence to show that electronic reminders can reduce attrition and that prenotification for recruitment is sufficiently promising to explore in further studies. We have also been able to abandon useless interventions (such as using a copy of a newspaper article to increase recruitment).[19] We always publish our embedded trials, which enhances the CVs of research staff, who are often junior, as well as allowing dissemination of our findings. Embedded trials can make excellent student projects, and a trial within a trial gives both students and research staff a good grounding in how to conduct and write up a randomised controlled trial: lessons that can be applied to future clinical trials.

We therefore encourage other trialists and trials units to join us in evaluating methods to improve recruitment and reduce attrition using embedded trials. We also ask funders to be open to providing relatively small additional funds to large clinical trials to allow testing of more expensive but promising interventions.

Key messages

  • Few interventions to reduce attrition or enhance recruitment have been tested in randomised controlled trials

  • Embedding such trials within clinical trials can help to improve the evidence

  • Many such trials can be done at low cost and need little or no additional resource

  • If all UK trials units embedded at least one recruitment or retention trial each year the evidence base would be doubled in less than two years




• Contributors and sources: All authors are experienced trial methodologists and the paper results from discussions between them. DT had the original idea for the paper, wrote the first draft, and coordinated the paper. JA and CEH contributed to the manuscript and provided examples. All authors agreed with the submission and have seen the final manuscript. DT is the guarantor.

  • Competing interests: We have read and understood BMJ policy on declaration of interests and have no relevant interests to declare.

  • Provenance and peer review: Not commissioned; externally peer reviewed.

