
Analysis and comment: Research methods

Is restricted randomisation necessary?

BMJ 2006; 332 doi: (Published 22 June 2006) Cite this as: BMJ 2006;332:1506
  1. Catherine E Hewitt, PhD student1,
  2. David J Torgerson, director (djt6{at}
  1. 1York Trials Unit, Department of Health Sciences, University of York, York YO10 5DD
  1. Correspondence to: D Torgerson
  • Accepted 27 April 2006

Restrictions during randomisation make it easier for investigators to guess the next allocation. Statistical correction of any imbalance in confounders at the end of the study is equally accurate for most trials and would be safer

Randomised controlled trials commonly use some form of restriction when allocating participants—for example, blocking, stratification, or minimisation.1 2 The main reason is to achieve a better balance of known confounders. For an individual trial simple randomisation may lead to some chance imbalances on some variables. These imbalances are unimportant if the variables have a weak relation with the outcome. However, if by chance the groups differ at baseline on one or more confounding variables, the trial result may be misleading; this effect could be in either direction.

Stratification and minimisation are used to reduce these chance imbalances. Blocking needs to be used in combination with stratification to ensure that roughly equal numbers of participants are randomised to each of the treatments in the individual strata.1 Otherwise, stratification is statistically equivalent to simple randomisation. However, using restricted randomisation can generate its own problems.
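As an illustration, blocking within strata can be sketched in a few lines of Python (the function name and interface below are our own, for illustration only, not taken from any trial software):

```python
import random

def stratified_block_allocator(block_size=4, arms=("A", "B"), seed=None):
    """Allocate participants within strata using permuted blocks,
    so that each stratum stays close to a 1:1 ratio throughout."""
    rng = random.Random(seed)
    pending = {}  # stratum -> allocations remaining in its current block

    def allocate(stratum):
        if not pending.get(stratum):
            # Start a fresh permuted block: equal numbers of each arm, shuffled.
            block = list(arms) * (block_size // len(arms))
            rng.shuffle(block)
            pending[stratum] = block
        return pending[stratum].pop()

    return allocate

# Example: stratifying by centre and sex
allocate = stratified_block_allocator(seed=1)
first_four = [allocate("centre 1, female") for _ in range(4)]
```

Every consecutive group of four allocations within a stratum contains two of each arm; without the blocks (that is, simple randomisation within strata) this guarantee disappears, which is why blocking and stratification are used together.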

Problems with restricted randomisation

Introducing any form of restricted randomisation increases the risk of subversion (conscious or unconscious) and technical error. Most examples of known subversion relate to situations where allocation sequences are public knowledge or the concealment of the allocation is inadequate, such as using sealed envelopes that can be tampered with.3 4 However, this type of subversion is not specific to restricted randomisation.

[Figure omitted: What will come next?]
With restricted randomisation subversion remains possible even when adequate precautions have been taken to conceal the randomisation sequence.5 In an open trial, if we know the block size and we keep a record of previous allocations then we will always be certain about the last allocation in the block. Furthermore, we will also often be able to guess the penultimate allocation correctly. If trialists make predictions only when they are certain of the next treatment, then for a block size of four they will be able to predict 33% of the treatments and for a block size of six they will be able to predict 25%. However, if the trialist simply guesses that the next treatment will be opposite to the previous treatment, then for a block size of four they will be correct 71% of the time and for a block size of six they will be correct 68% of the time.6
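These within-block certainties are easy to verify by simulation. The sketch below (our own code, with illustrative names) counts the allocations an observer could predict with complete certainty, given the block size and a log of all previous allocations:

```python
import random

def certain_prediction_rate(block_size=4, n_blocks=20000, seed=0):
    """Fraction of allocations an observer can predict with certainty in an
    open trial, knowing the block size and all previous allocations."""
    rng = random.Random(seed)
    certain = total = 0
    for _ in range(n_blocks):
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)  # a permuted block with equal numbers of each arm
        seen = []
        for alloc in block:
            remaining_a = block_size // 2 - seen.count("A")
            remaining_b = block_size // 2 - seen.count("B")
            if remaining_a == 0 or remaining_b == 0:
                certain += 1  # only one arm is left in the block
            total += 1
            seen.append(alloc)
    return certain / total
```

With a block size of four the rate converges to one third, and with a block size of six to one quarter, matching the percentages quoted above for predictions made only under certainty.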

Predictability can be reduced by using larger block sizes or by varying the block size. However, the best way to reduce predictability is to keep the block size hidden and to blind everyone in the trial to the allocated treatment. When minimisation is used, researchers have advocated adding a random element to the minimisation algorithm to reduce predictability.7 The dangers of using restricted allocation were shown in a recent survey of 25 clinicians and research nurses, in which four (16%) admitted to keeping a log of previous allocations to help predict future ones.8
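A minimisation step with a random element might look like the following sketch (the data layout, helper name, and probability p = 0.8 are illustrative assumptions on our part, not a prescribed algorithm):

```python
import random

def minimised_arm(totals, factors, arms=("A", "B"), p=0.8, rng=random):
    """Minimisation sketch with a random element.

    totals[arm][factor][level] counts previously allocated participants;
    `factors` gives the new participant's level for each factor. With
    probability p we allocate to the arm that minimises the total marginal
    imbalance; otherwise we allocate at random (the 'random element').
    Ties are broken at random.
    """
    scores = {
        arm: sum(totals[arm][f][lvl] for f, lvl in factors.items())
        for arm in arms
    }
    best = min(arms, key=lambda a: scores[a])
    if scores[arms[0]] == scores[arms[1]] or rng.random() >= p:
        return rng.choice(arms)
    return best
```

Because the minimising arm is followed only with probability p, an observer who reconstructs the imbalance scores can no longer predict the next allocation with certainty, which is the point of adding the random element.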

Restricted allocation increases the complexity of the randomisation process, and this complexity can itself introduce errors. For example, the comparative obstetric mobile epidural trial used minimisation to allocate participants to treatments to improve balance on age and ethnicity.9 Unfortunately, the software had an error, which resulted in large imbalances in both of these variables. The trial had to be started again at considerable cost. One of us (DJT) was an external member of the steering group of another MRC-funded trial in which a minimisation algorithm with an error was introduced part way through. Fortunately, this error was identified before many participants had been allocated to the treatments. These two examples show that, even in well funded and well conducted trials, human error can potentially lead to disaster.

Simple randomisation: the solution?

Simple randomisation is safer than restricted randomisation because it is completely unpredictable (assuming steps have been taken to prevent foreknowledge of the sequence) and is less prone to technical error. The main reason for restricting randomisation is to stratify on important covariates. However, we do not need to stratify the randomisation because we can adjust for the covariates statistically at the end of the trial.10 Grizzle noted that for sample sizes above 50, stratification followed by an adjusted analysis adds little to the statistical efficiency (the statistical power to detect a treatment effect) of the study,11 even if we stratify on strong prognostic factors. Similarly, Rosenberger and Lachin state that for sample sizes of 100 or more there is less than a 0.01 probability that unstratified randomisation will reduce the relative statistical efficiency to 0.9 or less.12 Others recommend simple randomisation for sample sizes above 200.13 Even if simple randomisation results in quite large imbalances, with large sample sizes an adjusted analysis after simple randomisation will be virtually as efficient as stratified randomisation plus an adjusted analysis. Given the dangers of introducing selection bias when restricted randomisation is used, it seems better to sacrifice a small amount of precision than to risk introducing bias.
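The point about adjusting at analysis rather than stratifying at design can be illustrated with a small simulation (our own sketch, with an assumed strong prognostic covariate): after simple randomisation, including the baseline covariate in an ordinary least squares regression recovers the treatment effect and sharpens the estimate.

```python
import random
import numpy as np

def simulate_adjusted_trial(n=1000, true_effect=1.0, seed=42):
    """Simple randomisation followed by covariate adjustment (a sketch)."""
    rng = random.Random(seed)
    # Simple randomisation: each participant gets a fair coin flip.
    treat = np.array([float(rng.randint(0, 1)) for _ in range(n)])
    x = np.array([rng.gauss(0, 1) for _ in range(n)])      # strong prognostic factor
    noise = np.array([rng.gauss(0, 1) for _ in range(n)])
    y = true_effect * treat + 2.0 * x + noise

    # Unadjusted analysis: regress outcome on treatment alone.
    X1 = np.column_stack([np.ones(n), treat])
    unadjusted = np.linalg.lstsq(X1, y, rcond=None)[0][1]
    # Adjusted analysis: also include the baseline covariate.
    X2 = np.column_stack([np.ones(n), treat, x])
    adjusted = np.linalg.lstsq(X2, y, rcond=None)[0][1]
    return unadjusted, adjusted
```

Both estimates are unbiased, because randomisation makes treatment independent of the covariate, but the adjusted estimate has the smaller standard error; this is why, at these sample sizes, adjustment at analysis can substitute for stratification at design.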

Small trials

Although restricted randomisation may improve precision and reduce chance bias in small trials, it also presents problems. As the number of stratifying variables increases, so does the number of randomisation sequences required to implement the procedure. This can lead to more strata than participants in the trial, or increase the chance that some cells produced by the stratification will contain no participants. Therefore small trials should stratify on only one or two variables. If a larger number of stratifying variables is required, it might be better to use minimisation, which does not have this problem.
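The combinatorial explosion is easy to quantify: the number of strata is the product of the numbers of levels of the stratifying factors. A one-line check (the factor counts below are hypothetical):

```python
from math import prod

def number_of_strata(levels_per_factor):
    """Each combination of factor levels forms its own stratum,
    so the count multiplies with every factor added."""
    return prod(levels_per_factor)

# Hypothetical example: 10 centres x 2 sexes x 3 age bands x 4 disease stages
strata = number_of_strata([10, 2, 3, 4])  # 240 strata
```

A trial of 150 participants stratified this way would have more strata than participants, so many blocks would never be completed and the balancing effect of blocking would be lost.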

Current practice

We recently reviewed trials published in 2002 in four general medical journals and examined the adequacy of allocation concealment.14 We examined the trials in this database to assess what kind of randomisation they reported. Of the 232 trials we identified in that review, only 21 (9%) used simple randomisation (table 1). Interestingly, although three of the four journals endorsed the CONSORT statement, which asks authors to clarify how randomisation was undertaken, we could not determine how randomisation was done for 79 (34%) of the trials. Furthermore, two trials that claimed to have used stratified randomisation, implying some form of restriction, seemed actually to have used simple randomisation.15 16 Thus Durelli and colleagues reported: “Randomisation was done centrally by the coordinating centre. Randomisation followed computer generated random sequences of digits that were different for each centre and for each sex, to achieve centre and sex stratification. Blocking was not used.”15 Similarly, Hulscher and colleagues stated: “Randomisation was stratified according to the hospital and tumour site (esophagus or cardia). No blocking was used within each of the four strata.”16

Table 1

Method of randomisation and sample size in trials published in four journals in 2002


Summary points

Most randomised trials use some form of restriction when allocating participants

Restricted randomisation increases the chance of selection bias and technical error

Simple randomisation is safer

Differences in baseline characteristics arising during simple randomisation are generally unimportant

For sample sizes over 100, stratification can be done statistically at the end of the trial without loss of power

As we stated earlier, blocking should be used in combination with stratification to achieve covariate balance between groups. These two examples highlight the confusion among some researchers about exactly how the randomisation sequence was generated. They also show the importance of giving a full description of the methods used to generate the randomisation sequence, as encouraged by the CONSORT statement.17

A large number of trials in our review reported using some form of restricted randomisation (table 1). Despite the dangers of using small, fixed block sizes, 50 trials reported using these methods, with 46 reporting a block size ≤6 and 35 reporting a block size ≤4.

Restricted randomisation is most beneficial for relatively small sample sizes. Therefore, we would expect to see a relation between the type of restricted randomisation used and the sample size, with larger trials using simple randomisation or large block sizes more frequently than small trials. However, table 1 shows no relation between sample size and use of restricted randomisation. Indeed, the average sample size for the most restricted randomisation schedules seemed to be as large as, if not larger than, the average sample size of the trials that reported using simple randomisation.

Investigators are more likely to be able to predict upcoming treatment allocations in open trials that use small block sizes. Trials are considered open if the investigators and participants are aware of the allocated treatment. Table 2 shows the characteristics of the 14 trials that fall into this category. Many of the trials reported that opaque envelopes were used to conceal allocation. However, as noted above, this method may not be secure. We examined the baseline tables of the trials and did not find any clear evidence of differences between groups, although subversion would have to have been quite substantial for this to be observable.5 Interestingly, one trial reported using a block size of four (and opaque envelopes) and stratified the randomisation by site.18 This resulted in numerical imbalances of 11 at two of the three sites, which should not have occurred using a block size of 4. Altman noted a similar occurrence in a blocked randomised trial.19

Table 2

Characteristics of 14 open trials with small block sizes


Only 32% (n = 74) of trials in our review had a sample size smaller than 200, with 11% (n = 25) smaller than 100 and 6% (n = 13) smaller than 50. This suggests that, in terms of statistical power, most of the trials identified in our review could have used simple randomisation with covariate adjustment rather than restricted randomisation.


Fewer than 10% of our sample of randomised trials published in major medical journals reported using simple randomisation. Although some form of restricted randomisation seems to be accepted practice, simple randomisation may be best. Simple randomisation will lead to more trials looking cosmetically unbalanced at baseline, but these differences are generally of no consequence. It is important for statisticians and non-statisticians to realise that numerically imbalanced trials are neither unscientific nor much less powerful; appreciable loss of power occurs only when the imbalance is much greater than 2:1.


We thank the referees Doug Altman, Marion Campbell, and Maroeska Rovers for their helpful comments.


  • References w1-w14 are on

  • Contributors DJT thought of the idea, CEH reviewed, extracted, and tabulated the data. Both authors drafted and revised the manuscript. DJT acts as guarantor.

  • Funding University of York.

  • Competing interests None declared.

