- Catherine E Hewitt, research fellow,
- Natasha Mitchell, research fellow,
- David J Torgerson, director
- York Trials Unit, Department of Health Sciences, University of York, York YO10 5DD
- Correspondence to: D J Torgerson
- Accepted 16 September 2007
When randomised controlled trials show a difference that is not statistically significant, there is a risk of interpretive bias.1 Interpretive bias occurs when authors and readers overemphasise or underemphasise results. For example, authors may claim that a non-significant result reflects lack of power rather than lack of effect, using terms such as borderline significance2 or stating that no firm conclusions can be drawn because of the modest sample size.3 In contrast, if a study shows a non-significant effect that opposes the study hypothesis, it may be downplayed by emphasising that the results are not statistically significant. We investigated the problem of interpretive bias in a sample of recently published trials with findings that did not support the study hypothesis.
Why interpretive bias occurs
A non-significant difference between two groups in a randomised controlled trial may have several explanations. The observed difference may be real but the study underpowered to detect it, or the difference may have arisen simply by chance. Bias can also produce a non-significant difference, but we do not consider it in the discussion below.
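To illustrate the underpowering explanation, the sketch below uses a standard normal-approximation power calculation for comparing two proportions (the 1.96 critical value assumes a two-sided 5% significance level; the sample sizes and success rates are hypothetical, not drawn from any trial discussed here):

```python
from math import erf, sqrt

def power_two_proportions(p1, p2, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided two-proportion z test
    (normal approximation; z_crit = 1.96 corresponds to alpha = 0.05)."""
    se = sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    z = abs(p1 - p2) / se
    # Probability that the observed z statistic exceeds the critical value
    return 0.5 * (1 + erf((z - z_crit) / sqrt(2)))

# A real 10 percentage point difference (50% v 60% success rate):
print(round(power_two_proportions(0.5, 0.6, 100), 2))  # ~0.30: underpowered
print(round(power_two_proportions(0.5, 0.6, 388), 2))  # ~0.80: adequately powered
```

With only 100 participants per group, a genuine 10 percentage point difference would reach significance only about 30% of the time, so a "negative" result from such a trial says little about whether the effect is real.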
Trialists are rarely neutral about their research. If they are testing a novel intervention they usually suspect that it is effective; otherwise they could not convince themselves, their peers, or research funders that it is worth evaluating. This lack of equipoise, however, can affect the way they interpret negative results. Trialists have often invested a large amount of intellectual capital in developing the treatment under evaluation, so it is naturally difficult to accept that it may be ineffective.
A trial with statistically significant negative results should, generally, overwhelm any preconceptions and prejudices …