Effect of interpretive bias on research evidence
BMJ 2003; 326 doi: https://doi.org/10.1136/bmj.326.7404.1453 (Published 26 June 2003) Cite this as: BMJ 2003;326:1453
All rapid responses
Sir,
I agree with the author that research outcomes seem to be affected by what
the researcher is looking for. Similarly, the development of ideas and study
design is also affected by the donor's wishes and the amount of allocated
resources - a donor's or sponsor's desirability bias.
Competing interests: None declared
Sirs,
I found your article's discussion of the impact of prestudy bias to be
excellent. Let me add one more example of such bias that seems
to be rather prevalent in the literature. As one knows, studies are
designed to show that "a statistically significant difference" exists
between the outcomes of two alternative treatments. When one fails to find
such a difference, the temptation for the authors is to conclude, often
incorrectly, that no difference exists between the two treatments: i.e.
the treatment under investigation is "just as good" as the gold standard.
As we know, to make such a statement, the study needs to have adequate
statistical power to ensure that the chance of a beta error, or that of
incorrectly accepting the null hypothesis, is sufficiently small.
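The relation between power and sample size is easy to make concrete. As a
minimal sketch (in Python, using the usual normal approximation for a
two-sided, two-sample comparison; the effect size and group sizes below are
illustrative assumptions, not figures from any particular study), power can
be computed directly from the planned group size:

# Approximate power of a two-sided, two-sample z-test
# (normal approximation; d is the assumed standardised mean difference).
from scipy.stats import norm

def approx_power(d, n_per_group, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)      # two-sided critical value
    ncp = d * (n_per_group / 2) ** 0.5    # expected z-statistic under the alternative
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

# Illustrative (assumed) effect size of 0.5 over a range of group sizes
for n in (10, 30, 90, 180):
    print(n, round(approx_power(0.5, n), 2))

With these assumed figures, power rises from roughly 0.2 with 10 patients per
group to nearly 1.0 with 180, which is why enlarging the sample is the usual
remedy.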
Since power can generally be increased by enlarging the sample size,
it has become in vogue for researchers to retrospectively calculate power
and state the potential error in making a statement of "no difference".
However, it seems to have become equally acceptable for researchers who
fail to have sufficient power to qualify their results in a way that makes
the actual power meaningless. For example, one study recently stated: "While
the study failed to have sufficient power to confirm the finding that the
drugs were not different, had the sample size been increased from 10 to
180, then the power would have been sufficient to state so." In this way,
the researcher implies that it is only some silly statistical convention
that is preventing him or her from stating that no difference in fact
exists between the two drugs. Of course, we know that had the sample size
been so increased, there is no guarantee as to what the researchers may
have found. I have read similar statements when a researcher finds the
variance or standard deviation too large for their liking and qualifies
those results in the same way, since increasing the sample size, in the
absence of any other changes, does in fact reduce the standard error of the
mean (though not the sample variance or standard deviation).
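To make the last point concrete, a minimal simulation sketch (assumed data
drawn from a standard normal distribution, not from any real trial) shows
that as the sample grows the estimated standard deviation merely settles
around its true value, while the standard error of the mean shrinks roughly
as one over the square root of the sample size:

# Sketch: the sample SD stabilises as n grows, while the standard error
# of the mean (SEM) shrinks; the data here are assumed, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 40, 180, 1000):
    x = rng.normal(loc=0.0, scale=1.0, size=n)
    sd = x.std(ddof=1)            # sample standard deviation
    sem = sd / np.sqrt(n)         # standard error of the mean
    print(f"n={n:4d}  SD={sd:.2f}  SEM={sem:.3f}")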
For those who cannot resist such hypothetical conclusions, I have
only one suggestion (tongue-in-cheek). Next time, skip doing the study and
save yourself a fortune. Examine one patient, report the results, and
speculate away that whatever you find could be of greatest statistical
significance if only the study had been conducted with more people.
Competing interests: None declared
External validity, to a large degree, depends on scientific data that are
often "mechanistic". Besides an awareness of potential bias, what
suggestions do you have? After all, decisions have to be made.
Competing interests: None declared
Statistical error rates in the medical literature are high.
EDITOR - Kaptchuk's interesting paper, Effect of interpretive bias on research
evidence[1], draws our attention to published papers in medical journals, and
in particular to drug trial studies. McGuigan[2] showed that, of 248 papers,
66% (164) presented numerical results, and 40% of these 164 papers contained
statistical errors.
Davies[3] reviewed 29 analytical papers and found 10 methodological errors.
Peron-Magnan[4] found that errors in statistical procedure occur in a
substantial proportion of papers in reputable psychiatric journals.
Thanking you,
Yours sincerely,
A.K.Al-Sheikhli
References
1. Kaptchuk TJ. Effect of interpretive bias on research evidence. BMJ 2003;326:1453-1455.
2. McGuigan SM. The use of statistics in the British Journal of Psychiatry. Br J Psychiatry 1995;167:683-688.
3. Davies J. A critical survey of scientific methods in two psychiatry journals. Aust N Z J Psychiatry 1987;21:367-373.
4. Peron-Magnan P. Importance and limits of statistical methods in psychiatry. Ann Med Psychol 1992;150:187-191.
Competing interests: None declared