Turning a blind eye to bias in clinical trials
Other rapid responses to Fergusson et al, apart from my own (and Stan Shapiro's rejoinder), seem to have put a positive gloss on the problem of unblinding in clinical trials. The general thought seems to be that measuring unblinding is difficult, so we may as well give up and carry on with our pretence. This may be to continue "turning a blind eye", to use the phrase from the title of the original paper.
I may be too sceptical, but I continue to wonder whether the small effect size in many clinical trials could be wholly explained by bias introduced through unblinding. The degree of unblinding measured by guesses at the end of a trial may be greater than would be expected from correct hunches about efficacy alone. Like David Sackett, I do not think I am clever enough without help to distinguish true unblinding from correct hunches about efficacy, but the advantage of rapid responses is that I can share my thoughts without having worked them through fully. It may be possible to estimate the degree of unblinding that would be expected from correct hunches about efficacy, based on the effect size; if the actual degree of unblinding, as measured by correct guesses, is significantly greater than this, it would surely imply that bias had been introduced. I am not sure whether this makes sense, but I am reluctant to leave the issue and be as negative about the implications as some of my fellow rapid responders.
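The comparison suggested above can be sketched numerically. As a minimal illustration only: assume normally distributed outcomes with standardized effect size d, so a guesser who calls "active" whenever a patient's improvement exceeds the midpoint of the two arm means would be correct with probability Phi(d/2) by efficacy hunches alone. Observed guess accuracy can then be tested against that benchmark with an exact binomial tail. All numbers below (d = 0.3, 140 correct out of 200 guesses) are hypothetical, not taken from any cited trial.

```python
from math import erf, sqrt, comb

def expected_accuracy(d):
    """Correct-guess rate explainable by efficacy alone, under a toy
    model: placebo ~ N(0,1), active ~ N(d,1), and the guesser says
    'active' when a score exceeds the midpoint d/2 of the arm means.
    This works out to Phi(d/2), the standard normal CDF at d/2."""
    return 0.5 * (1 + erf((d / 2) / sqrt(2)))

def binom_upper_tail(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical figures: effect size d = 0.3, 200 end-of-trial guesses,
# 140 of them correct (70%).
d, n, correct = 0.3, 200, 140
p0 = expected_accuracy(d)       # accuracy expected from efficacy hunches alone
p_value = binom_upper_tail(correct, n, p0)
print(f"expected accuracy from efficacy alone: {p0:.3f}")
print(f"P(>= {correct}/{n} correct with no extra unblinding) = {p_value:.2g}")
```

On this toy model a small effect size (d = 0.3) predicts only about 56% correct guesses, so 70% correct would be strong evidence of unblinding beyond efficacy hunches. The model ignores rater-level clustering, side-effect cues, and non-normal outcomes; it is meant only to show that the comparison proposed in the text is in principle calculable.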
For example, the small effect size in meta-analyses of antidepressant trials should be more widely known.1 Psychological variables, of course, may be particularly susceptible to bias. However, even trials with hard end-points such as mortality often show very small differences between active treatment and controls, and their statistical significance is heightened by the large scale of the trials.2 In psychiatric trials the outcome is commonly determined by raters rather than by patients themselves. Patients may not necessarily be very good at detecting whether they are on placebo or active treatment. To give another example, there is evidence that patients may not be aware that they are taking lithium, yet observers seem to be able to detect it one way or another.3
If raters can be cued in to whether patients are receiving active or placebo treatment, their wish-fulfilling expectancies could be affecting outcome ratings. How do we know that small effect sizes in particular are not due to this amplified placebo effect? I think we should stop turning a blind eye to this legitimate question. It needs to be answered to give confidence in the use of many medications endorsed in clinical practice.
- Moncrieff J, Double DB. Double blind random bluff. Mental Health Today 2003; Nov: 24-26. [Medline]
- Double DB. Large scale trials exacerbate risk of spurious conclusion if bias is not eliminated. bmj.com/cgi/eletters/317/7167/1170#1150, 4 Nov 1998. [Full text]
- Double DB. Lithium revisited [letter]. British Journal of Psychiatry 1996; 168: 381-2. [Medline]
Competing interests: No competing interests