

Minimisation: the platinum standard for trials?

BMJ 1998;317:362 (Published 08 August 1998)

Randomisation doesn't guarantee similarity of groups; minimisation does

  1. Tom Treasure, Consultant cardiothoracic surgeon,
  2. Kenneth D MacRae, Statistician
  1. St George's Hospital, London SW17 0RE
  2. Charing Cross Hospital, London W6 8RP

    When we have to decide which of two drugs, interventions, or management strategies is the better, the most secure evidence is generally obtained from a randomised controlled trial. The primary objective of randomisation is to ensure that all other factors that might influence the outcome will be equally represented in the two groups, leaving the treatment under test as the only dissimilarity. Any difference in outcome can then be attributed to the treatment effect. But how realistic is this assumption in practice?

    When published, a randomised trial typically includes a table listing all the prior factors known, or suspected, to influence outcome. The average age and its distribution in each group and the proportion of men and women usually head the list, followed by other likely determinants of outcome. In the case of heart disease these will probably include details of left ventricular function; the proportions in each group with diabetes, hypertension, hyperlipidaemia, or a smoking history; the relative incidence of arrhythmia, obesity, and symptoms of heart failure; and any other factors that may have been collected. If these are similar in the two groups (which is not the same as showing that they are not statistically different) then we can go on to attribute any difference in outcome to the benefit of treatment over placebo, or of one treatment over another. But what if there are differences?

    Indeed, if there are many possible prognostic factors there will almost certainly be differences between the groups despite the use of random allocation. In a small clinical trial a large treatment effect is being sought, but a large difference in one or more of the prognostic factors can occur purely by chance. In a large clinical trial a small treatment effect is being sought, and small but important differences between the groups in one or more of the prognostic factors can likewise occur by chance.
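How likely is such chance imbalance? A small simulation makes the point concrete. All figures here (60 patients, ten binary prognostic factors, a 20 percentage point prevalence gap counted as "noticeable") are our own illustrative choices, not numbers from any trial:

```python
import random

# Rough sketch: how often does simple randomisation leave at least one
# of several binary prognostic factors noticeably imbalanced between
# the two groups of a modest trial? All parameters are hypothetical.
def chance_imbalance(n_patients=60, n_factors=10, n_trials=2000,
                     threshold=0.2, seed=1):
    rng = random.Random(seed)
    imbalanced = 0
    for _ in range(n_trials):
        counts = [[0, 0] for _ in range(n_factors)]  # factor-present counts per group
        sizes = [0, 0]                               # group sizes
        for _ in range(n_patients):
            g = rng.randrange(2)                     # simple randomisation
            sizes[g] += 1
            for f in range(n_factors):
                if rng.random() < 0.5:               # each factor present in half of patients
                    counts[f][g] += 1
        # Does any factor's prevalence differ between groups by more than `threshold`?
        if sizes[0] and sizes[1] and any(
            abs(counts[f][0] / sizes[0] - counts[f][1] / sizes[1]) > threshold
            for f in range(n_factors)
        ):
            imbalanced += 1
    return imbalanced / n_trials

print(f"Trials with at least one imbalanced factor: {chance_imbalance():.0%}")
```

With these assumptions, well over half of simulated trials show at least one factor out of balance by more than 20 percentage points — purely by the play of chance.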

    Suppose one group has more elderly women with diabetes and symptoms of heart failure. It would then be impossible to attribute a better outcome in the other group to the beneficial effects of treatment, since poor left ventricular function and age at outset are major determinants of survival in any longitudinal study of heart disease, and women with diabetes, as a group, are likely to do worse. At this point the primary objective of randomisation, the exclusion of confounding factors, has failed.

    Attempts are then made to retrieve the situation by multivariate analysis, allocating part of the difference in outcome to the known, unwanted difference in the groups, but there is always an air of uncertainty about the validity of the conclusion. This may seem to be less of a risk in a very big trial, because we can expect things to even out, but big trials are done to seek small differences, and even a small difference in other determinants of outcome may be important. If a very big trial fails, because, for example, the play of chance put more hypertensive smokers in one group than the other, the tragedy for the trialists, and all involved, is even greater.

    The way to avoid this is by minimisation, a technique that is not well known, first described by Taves in 1974 [1] and shortly after by Pocock and Simon [2] and Freedman and White [3]. With this method the group allocation does not rely solely on chance but is designed to reduce any difference in the distribution of known or suspected determinants of outcome, so that any effect can be attributed to the treatment under test. The trialists determine at the outset which factors they would like to see equally represented in the two groups. In our study of aspirin versus placebo in the two weeks before elective coronary artery surgery we chose age, sex, operating surgeon, number of coronary arteries affected, and left ventricular function [4]. But in trials in other diseases those chosen might be tumour type, disease stage, joint mobility, pain score, or social class.

    At the point when it is decided that a patient is definitely to enter a trial, these factors are listed. The treatment allocation is then made, not purely by chance, but by determining in which group inclusion of the patient would minimise any differences in these factors. Thus, if group A has a higher average age and a disproportionate number of smokers, other things being equal, the next elderly smoker is likely to be allocated to group B. The allocation may rely on minimisation alone, or still involve chance but “with the dice loaded” in favour of the allocation which minimises the differences.
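The allocation rule just described can be sketched in code. The following is a minimal illustration in the spirit of Taves's method, not the exact procedure used in any of the cited trials; the factor names and the `p_follow` parameter (the "loaded dice") are our own illustrative choices:

```python
import random

def minimise(patient, allocated, p_follow=0.8, rng=random):
    """Allocate a patient to the group that minimises factor imbalance.

    patient:   dict of factor -> level, e.g. {"sex": "F", "smoker": True}
    allocated: two lists of previously allocated patients (same dicts)
    p_follow:  probability of following the minimising allocation
               (the "loaded dice"); 1.0 gives deterministic minimisation
    """
    # Taves-style score: for each group, count how many already-allocated
    # patients share each of the new patient's factor levels.  The group
    # with the lower total is the one in which those levels are currently
    # under-represented, so placing the patient there reduces imbalance.
    scores = []
    for members in allocated:
        score = sum(
            sum(1 for m in members if m.get(factor) == level)
            for factor, level in patient.items()
        )
        scores.append(score)
    if scores[0] == scores[1]:
        choice = rng.randrange(2)          # tie: allocate purely at random
    else:
        preferred = scores.index(min(scores))
        # Follow the minimising allocation with probability p_follow.
        choice = preferred if rng.random() < p_follow else 1 - preferred
    allocated[choice].append(patient)
    return choice

# Illustrative use: allocate 20 patients deterministically (p_follow=1.0).
rng = random.Random(0)
groups = [[], []]
patients = [{"sex": s, "smoker": k}
            for s in ("M", "F") for k in (True, False)] * 5
for p in patients:
    minimise(p, groups, p_follow=1.0, rng=rng)
print([len(g) for g in groups])  # the two groups end up the same size
```

Note that chance still enters in two places: ties are broken at random, and with `p_follow` below 1.0 the minimising allocation is merely favoured rather than forced, which is the "dice loaded" variant described above.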

    This process must be handled out of sight of any individual who might introduce bias, but this is equally true of randomisation, which we know can be subverted by the (often unconscious) vested interests of the trialists. The individual trialist does not know how the risk factors are accruing and cannot influence the allocation. If the trial is double blind the trialists do not know which groups existing patients are in, so subsequent decisions to include a patient in the trial cannot be influenced by any knowledge of which group they are more or less likely to enter. Exclusion of bias is as readily achieved as it is with properly performed randomisation, but with the advantage that similarity of the two groups is ensured, rather than hoped for.

    The theoretical validity of the method of minimisation was shown by Smith [5], and White and Freedman have reviewed alternative methods of patient allocation [6]. A recent example of the use of minimisation is found in Kallis et al [4]. If randomisation is the gold standard, minimisation may be the platinum standard.