
Rapid response to:


Turning a blind eye: the success of blinding reported in a random sample of randomised, placebo controlled trials

BMJ 2004;328:432 (published 19 February 2004). doi: https://doi.org/10.1136/bmj.328.74327.37952.631667.EE

Rapid Response:

Why we don't test for blindness at the end of our trials.

Blindness is important, for all the reasons Dean Fergusson and his
colleagues present in their paper.

However, asking patients or their clinicians at the end of a trial
which drug they think they were taking confounds the success of blinding
with hunches about efficacy. When patients or their study clinicians have
a hunch about which treatment is superior, patients who have done well
will tend to think they were on that treatment, and so will their
clinicians.

My colleagues and I discovered this phenomenon (to our chagrin) when
we were the first group to test aspirin and sulfinpyrazone in the hope
that one or both of these drugs might prevent major and fatal strokes in
patients with transient ischemic attacks (ref 1). In those early days, we
shared the hunch that sulfinpyrazone was probably efficacious, but that
aspirin probably wasn't (we did the trial because we were uncertain about
these hunches, not indifferent about them). As it happened, our pre-trial
hunches were wrong: aspirin turned out to be highly efficacious in our
trial, and sulfinpyrazone worthless.

At the end of our trial we asked study clinicians to predict which
drug each of their patients had been assigned, thinking that we were
measuring whether blindness had been successful during the trial. To our
confusion, their predictions were statistically significantly WRONG. With
4 regimens in this "double-dummy" trial, we'd expect correct predictions
for 25% of patients; our clinicians' predictions were correct for only 18%
of them.
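
For readers who want to check the arithmetic, here is a minimal sketch (mine, not the original trial's analysis) of an exact binomial test of an observed 18% prediction accuracy against the 25% expected by chance when guessing among four regimens. The number of patients whose allocation was predicted is a hypothetical placeholder, since it is not reported here.

```python
# Sketch only: exact two-sided binomial test of clinicians' prediction accuracy
# against the 25% expected by chance with 4 regimens. The sample size n is
# hypothetical; the letter does not report how many predictions were made.
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k correct guesses out of n at guess-probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def exact_binom_two_sided(k: int, n: int, p: float) -> float:
    """Two-sided exact p-value: total probability of outcomes no likelier than the observed one."""
    p_obs = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if binom_pmf(i, n, p) <= p_obs * (1 + 1e-9))

n = 500                 # hypothetical number of patients whose regimen was predicted
k = round(0.18 * n)     # about 18% predicted correctly, as reported above
print(exact_binom_two_sided(k, n, p=0.25))  # well below 0.05: significantly worse than chance
```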

Our confusion lifted when we thought through the effect of our prior
hunches about efficacy. When our patients had done well, their clinicians
tended to predict that they had received sulfinpyrazone; when patients had
suffered strokes, these same clinicians tended to predict that they had
received aspirin or the double-placebo.

But what if our pre-trial hunches about efficacy had been correct?
If patients who had done well were predicted to have received aspirin, and
those who had done poorly were predicted to have received sulfinpyrazone
or the double-placebo, our end-of-study test for blindness would have led
to the incorrect conclusion that blinding was unsuccessful.
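
To make that hypothetical concrete, here is a small simulation of my own (with made-up event rates, and simplified to two arms rather than four): blinding is never broken, yet clinicians who hold a correct hunch that the active drug works, and who therefore guess "active" for patients who did well, beat the 50% chance level. An end-of-study test would misread exactly this pattern as unsuccessful blinding.

```python
# Sketch only: perfect blinding plus a CORRECT efficacy hunch still yields
# better-than-chance "predictions" of allocation. Event rates are invented.
import random

random.seed(1)
n_patients = 10_000
stroke_risk = {"active": 0.10, "placebo": 0.20}   # hypothetical: the active drug truly works

correct = 0
for _ in range(n_patients):
    arm = random.choice(["active", "placebo"])    # allocation concealed; nothing leaks to the clinician
    stroke = random.random() < stroke_risk[arm]   # outcome depends only on the assigned arm
    guess = "placebo" if stroke else "active"     # hunch-driven guess: good outcome -> active drug
    correct += (guess == arm)

print(correct / n_patients)   # about 0.55, not 0.50, despite blinding being intact throughout
```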

I'm not smart enough to be able to look at an end-of-study test for
blindness and distinguish unsuccessful blinding from correct hunches about
efficacy. I hope somebody is. In the meanwhile, both here and in prior
personal correspondence, I've encouraged Dean Fergusson and his colleagues
to reconsider their study's interpretations and recommendations. To the
extent that patients and clinicians were correct in their hunches about
the comparative efficacy of the treatment arms in the trials they
examined, Fergusson and his colleagues would draw the incorrect conclusion
that blinding had been unsuccessful, even when it was completely successful.

My colleagues and I vigorously test for blindness before our trials,
but not during them and never at their conclusion.

Ref 1: The Canadian Cooperative Study Group. A randomized trial of aspirin
and sulfinpyrazone in threatened stroke. N Engl J Med 1978;299:53-9.

Competing interests:
please see: bmj.com/cgi/content/full/324/7336/539/DC1


20 February 2004
David L. Sackett
Director
Kilgore Trout Research & Education Centre at Irish Lake, RR 1, Markdale, Ontario, Canada N0C 1H0