Turning a blind eye: Why we don't test for blindness at the end of our trials
BMJ 2004; 328 doi: https://doi.org/10.1136/bmj.328.7448.1136-a (Published 06 May 2004) Cite this as: BMJ 2004;328:1136
- David L Sackett, trialist
EDITOR—I write with reference to the paper by Fergusson et al.1 Asking patients or their clinicians at the end of a trial which drug they think they were taking confounds failures in blinding with successes in pre-trial hunches about efficacy.
Thirty years ago, at the end of the first ever trial of aspirin and sulfinpyrazone for threatened stroke, we asked our trial clinicians to predict which drug each of their patients had been assigned.2 With four regimens in this “double dummy” trial, we'd expect correct predictions for 25% of patients; our clinicians' predictions were correct for only 18% of them (2P < 0.05).
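The chance expectation here follows from the four-arm design: a clinician guessing at random among four regimens should be right for about 25% of patients, so an observed 18% correct is worse than chance. As an illustrative sketch only, the two-sided exact binomial test behind a result like "2P < 0.05" can be computed as below; the patient count used is hypothetical, since the letter does not state how many predictions were made.

```python
from math import comb

def binom_two_sided(k, n, p):
    """Exact two-sided binomial p-value: sum the probabilities of all
    outcomes no more likely than the observed count k, given n trials
    with success probability p under the null."""
    pmf = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    p_obs = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= p_obs + 1e-12)

# Hypothetical numbers for illustration only: suppose clinicians
# predicted assignments for 400 patients and were correct for 72 (18%),
# against the 25% expected by chance in a four-arm double-dummy trial.
p_value = binom_two_sided(72, 400, 0.25)
print(f"two-sided p = {p_value:.4f}")  # comfortably below 0.05
```

A correct rate significantly *below* chance, as the next paragraph explains, is what signals that predictions were driven by something systematic, here, incorrect pre-trial hunches, rather than by broken blinding.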
This apparently nonsensical result made sense when we compared it with our incorrect pre-trial hunches that sulfinpyrazone probably was effective and aspirin probably was not. When our patients had done well, their clinicians tended to predict that they had received sulfinpyrazone; when patients had suffered strokes, these same clinicians tended to predict that they had received aspirin or the double placebo.
But what if their pre-trial hunches about efficacy had been correct? If patients who had done well were predicted to have received aspirin, and those who had done poorly were predicted to have received sulfinpyrazone or the double placebo, our end-of-study test for blindness would have led to the incorrect conclusion that blinding was unsuccessful.
Once a trial is under way, I'm not smart enough to separate the effectiveness of blinding from the effects of pre-trial hunches about efficacy, and I've never met anyone who is. Accordingly, we vigorously test for blindness before our trials, but not during them and never at their conclusion.
Competing interests are available on bmj.com