Adequacy of authors’ replies to criticism raised in electronic letters to the editor: cohort study
BMJ 2010; 341 doi: https://doi.org/10.1136/bmj.c3926 (Published 10 August 2010) Cite this as: BMJ 2010;341:c3926
All rapid responses
It is clear from the paper by Gotzsche et al (1) that the BMJ editors
first accept critical Letters to the Editor and only then forward them to
the concerned authors for a possible reply. Unfortunately, this correct
procedure is not universal. As the author of two Letters to the Editor
that were rejected by the editors of the respective journals only after
they had been forwarded to the concerned authors (2), I want to stress
here that such a modus operandi indeed constitutes editorial malpractice.
1. Gotzsche PC, Delamothe T, Godlee F, Lundh A. Adequacy of authors'
replies to criticism raised in electronic letters to the editor: cohort
study. BMJ. 2010;341:c3926.
2. Rivera H. Editors' malpractice: forward submitted letters (to the
concerned authors), then reject them. Account Res 2009;16:331-3.
Horacio Rivera, MD
hrivera@cencar.udg.mx
Competing interests: No competing interests
The remarks by Bolland and Grey are relevant, but their conclusion is
not warranted. We considered the issues they raise when planning the study
but felt the additional work involved wouldn't justify the potential
marginal benefit. We were deliberately conservative and only called
criticisms substantive when both observers agreed (1). Furthermore, the
authors responded to 47 (45%) of the criticisms, and it was clear from
their replies that we had succeeded in selecting criticisms that really
were substantive. Finally, if the critic had misunderstood or
misinterpreted the paper, or the criticism was wrong, irrelevant, or
already addressed in the paper, it would have been simple for the authors
to point that out.
We have now, since publication of the paper, confirmed the substantiveness
of the criticisms in a random sample of 10 of the 47 papers with author
replies. The authors acknowledged the criticism in all ten cases:
overinterpretation of data (N=1), relevant biases not addressed (2), an
important study with contradictory findings was not discussed (1), the
validity of the outcome measure (2), inaccurate and misleading statements
(1), error in data analysis, which prompted the authors to rerun their
analysis (1), and external validity of the findings (2). For the two
criticisms on external validity, the original paper had discussed it, but
only partly, and the author's reply provided additional information that
was important for readers not familiar with the field.
We agree that it would have been ideal to have read the 105 full papers
and also to ask the authors about their view of the criticisms, but our
study was already complicated. This could be part of a new project, aimed
at providing a detailed description of the problems identified by readers
and whether they in some cases invalidate the studies.
The fact that only 19% of the substantive criticisms were published in the
print journal is irrelevant. Substantiveness of a criticism is only one of
the criteria we use for selecting the ~10% of rapid responses for print
publication as Letters. Sometimes criticisms, or authors' responses, or
both, can greatly exceed our 300 word limit. In some cases the exchanges
will be of interest to only a small minority of our print readership.
As reported in the paper, we found that the severity of the criticisms in
the print issue was similar to that for the criticisms that were published
only online (it was actually even a little less severe, 2.05 v 2.25,
P=0.12).
It is true that the editors generally thought the authors had responded
adequately to the criticisms (in 70% of the cases), but we have discussed
in our paper why these judgments might have been too positive.
Thus, we believe our conclusion holds: Authors are reluctant to respond to
criticisms of their work.
1. Gotzsche PC, Delamothe T, Godlee F, Lundh A. Adequacy of authors'
replies to criticism raised in electronic letters to the editor: cohort
study. BMJ 2010;341:c3926.
Competing interests: TD and FG are editors of BMJ
The authors report that substantive criticism was raised for about
one third of research papers in the BMJ over a 2 year period.(1) From the
Methods of the paper, it appears as though none of the four authors who
decided whether a criticism was substantive referred to the original
paper, instead relying on the rapid response of the critic as the basis
for their judgement. This seems surprising. The critic could have
misunderstood or misinterpreted the paper, and the criticisms were not
peer-reviewed and could well be wrong, irrelevant, or already addressed in
the paper. Therefore, it is not clear that these so-called substantive
criticisms were truly substantive. That conclusion is supported by the
finding that only 19% of these criticisms were published in the print
edition, suggesting that the relevant editors did not think the majority
of criticisms were substantive enough to warrant publication.
A second surprising omission is the failure to contact the authors
for their views on the criticisms. The authors say that "critics on
average would be expected to be more knowledgeable about the subject and
the methodological issues related to the research than the editors." They
do not provide any evidence for that viewpoint, which is questionable
given the ease of publishing rapid responses in the BMJ, none of which are
peer-reviewed. On the other hand, authors are experts on their paper, have
navigated the peer-review process, the internal journal review process,
and succeeded in publishing in a journal that rejects almost all papers
submitted. The authors' views of criticisms of their paper are highly
relevant. It would have been valuable to know the authors' views,
especially since, when the authors did respond to criticisms, generally
the editors thought they adequately addressed the issue whereas the
critics thought they had not.
We agree that authors should respond to substantive criticism, but we
do not think it valid to rely solely on rapid responses to determine
whether a criticism is substantive. Consequently, we do not agree that
this study shows that authors are not responding to substantive
criticisms.
1. Gotzsche PC, Delamothe T, Godlee F, Lundh A. Adequacy of authors'
replies to criticism raised in electronic letters to the editor: cohort
study. BMJ. 2010;341:c3926.
Competing interests: No competing interests
As a member of the Editorial Board of an epidemiology journal, I
believe that when an author is criticized or challenged, the author has a
near-absolute right to respond as thoroughly or as inadequately as he or
she sees fit. Authors are free not to respond at all, although the
readership will be advised that the author elected not to respond.
Especially in a rapid response system, if the author's response is
inadequate, the original critic or other letter-writers are free to submit
additional letters pointing out the inadequacies. Therefore when authors
submit responses to letters to the editor, I screen them only for ad
hominem attacks and for grossly poor writing. Beyond that, in the context
of responding to critics authors are free to make fools of themselves if
they so desire.
Competing interests: None declared
Singh DK, Tuli L.
E-mails: deepakbhu@gmail.com, tuli_lekha@rediffmail.com
Researchers often deal with only a few aspects of the subject under
study, and very often they miss the allied results and repercussions of
their work. Moreover, studies are mostly focused on answering one or a
few questions. In this zealous pursuit, authors fail to derive other
results and at times miss important aspects too. When their work is
published, it evokes multiple thoughts in the minds of readers and
critics. It is not uncommon for authors to receive criticism on aspects
they had never thought about; they are left speechless at what people can
derive from their study and ask of them. It is on these occasions that
authors prefer silence, as they lack any scientific explanation, and
avoid coming forward to admit that they missed this aspect of their
study. There is no harm in doing so unless the query challenges the
scientific soundness of the research; on those occasions an answer is
warranted as to why one lacked the effort to plan, produce, and portray
correct results. In the end, however, we must not forget that a few
authors, reviewers, and editors may not be able to sieve out all the
mistakes in an article.
Competing interests: None declared
Authors do not have to respond to criticisms unless they choose to. They
have already jumped through enough hoops to get their paper published. As
the authors note in this study, "We found that 20 of the 105 rapid
responses with substantive criticism (19%) were also published in the
print issue and five of these (25%) had a reply from the author in the
print issue." Therefore, even the journal editors didn't think they were
substantive enough to publish. However, authors who chose to respond on
their own had a better rate of response than the journal editors had of
choosing to publish the letters.

Would this article really have been published had it not come from
in-house?
Competing interests: None declared
In the Brave New World of Evidence Based Medicine, the quality of the
Medicine is based on the quality of the Evidence. While it is
understandable that authors see their work as complete after all the
effort of apparently not disproving a null hypothesis, they have a
responsibility to be careful in their early attempts to influence the
practice of Medicine by publishing their views of their (incompletely
reviewed) evidence in the General Media.

As a General Practitioner I find it increasingly important to check the
latest “News” against the source document. In the last few weeks I have
twice visited the electronic BMJ after reading a general news story on
the BBC website, only to find that the story was a poor reflection of the
published evidence. In the first, conclusions were drawn about the use of
substitute opiate prescribing in my own practice area (1) that went
beyond the published data (indeed the data led me to contrary conclusions
(2)); in the second, patients taking calcium supplements including
vitamin D became concerned despite the source paper (3) having made clear
that it was not relevant to those preparations. A linked editorial (4)
nonetheless came to a “newsworthy” conclusion about preparations combined
with vitamin D that was unjustified by the evidence published in the
Journal, though it presumably increased the journal's impact factor.

It would be wrong for a Journal to prevent public comment by authors on
their published work, but the modern habit of “trailing” a paper in the
general news media might be made scientifically safer by requiring
authors to submit press releases to the editor before they are
promulgated, with modifications or authors' responses being required as
critical rapid responses are published. Editors, though, should be
careful to recognise that impact factor may compete with their duty to
Science.
1. BMJ 2010;341:c3172
2. Ashworth AJ. If substitutes kill, we should treat with blockers. BMJ Rapid Response (8 July 2010)
3. BMJ 2010;341:c3691
4. BMJ 2010;341:c3856
Competing interests: I have published critical Rapid Responses without subsequent author's reply.
According to this study, 10-15% of the published research articles
examined gave rise to major criticisms via rapid responses that could
potentially have invalidated their results. This is likely a conservative
estimate of the total number of papers with potentially major flaws,
since some may not have received such comments.
The authors highlight the important potential benefits of rapid responses
in allowing improved analysis of studies after publication. I certainly
concur. However, I would like to propose an additional approach to
improving studies, one that allows for rapid responses ex-ante.
The BMJ could create an online journal (or database) of research designs,
with the intent that the designs would be published via BMJ.com BEFORE
the actual studies were finalized and performed.

Basically, authors would write a brief paper that included the hypotheses
to be tested, the measures used, the choice and allocation of subjects,
the choice of experimental manipulations and controls, etc. They would
also provide a very brief introduction/literature review giving the
context and motivation for the proposed study. This would be posted to
the database (perhaps using a system similar to Rapid Responses).

Other readers could comment on and/or extend the proposals, eg by
suggesting weaknesses or potential improvements. (Again, this could be
done in a manner similar to the Rapid Responses system, although it
should also allow readers to post suggestions anonymously.)
Many potentially useful hypotheses and designs for clinical trials,
studies, or experiments are never funded. Others are eventually funded
after considerable delay. Still others are funded but proceed with major
potential flaws that could have been readily detected and corrected
ex-ante, had the proposed design been subject to the widespread peer
review of BMJ electronic publication and rapid responses.
The BMJ could expedite the process of bringing research designs to trial
and improve methodological quality by creating a public database (or
perhaps even an online peer-reviewed journal) of detailed designs. If
someone wanted to do a study in a particular area, they could post their
approach PRIOR to getting funding. ***This would allow the broader BMJ
community to suggest improvements to the study BEFORE it was performed,
while it might still be possible to add or change measures or design.***

Funded researchers who choose to use the posted research designs (in
whole or in part) could cite the authors or seek to include them as
coauthors as they see fit. If the BMJ found a particular design unusually
promising, it could formally subject it to review and, if it passed,
include it in a highlighted "peer-reviewed" design database or in an
online journal.
Advantages:

1) People with good ideas for study designs could share them, on the
record via the BMJ database, and make contributions to research EVEN if
they lack funding.

2) Readers could improve posted study designs by adding suggestions via
Rapid Responses.

3) Researchers who already HAVE funding could vet their design with the
broader BMJ community to catch errors or weaknesses, or to elicit
suggested improvements, before the study gets underway. By doing so in a
public database, they retain their academic claim as authors of the study
design while still allowing for broader suggestions for improvement.

Costs:

The cost of setting up a Rapid-Response-like system should not be too
high. One would want to be able to search the study database by topic and
author. An online journal would be more expensive. Presumably, only very
few proposals would be sent for formal peer review (although all the
study designs could be reviewed via Rapid Response).
I hope that the BMJ will consider implementing this suggestion.
Competing interests: None declared
“The role of letters in reviewing research” was the title of an editorial
in the BMJ in 1994.(1) Anyone who has ever written critical letters
already knew what Gøtzsche and co-authors have now provided evidence for.
They write that they “have noticed that when the criticism is serious,
such as suggesting a fatal flaw, authors sometimes avoid addressing it in
their reply and instead discuss minor issues, or they misrepresent their
research or the criticism.” As I wrote in 1996 from my experience,(2)
authors tend to “regard the criticism as a challenge to be overcome, and
to use various sidestepping tactics [including] semantic wrigglings,
ignoring the main point but expansively countering less important ones,
and, sometimes, being implicitly abusive.” A journal's editor confided in
me that he is not surprised when authors willfully misrepresent
criticism.

Writing critical letters is a skill.(3) It is easy to upset authors: no
parent likes being told that their child is ugly. But, in the end, the
scientific record is more important than authors' feelings.
1. Bhopal RS, Tonks A. The role of letters in reviewing research. BMJ 1994;308:1582-3.
2. Goodman NW. Revising the research record. Lancet 1996;347:474.
3. Goodman NW. How to write a critical letter and respond to one. Hospital Medicine 2001;62:426-7.
Competing interests: Have written on the subject
Unconvincing defence that criticisms were substantive
The defence by Gotzsche and colleagues of their conclusion that
authors are reluctant to respond to substantive criticisms is not
convincing. To infer reluctance to respond on the part of authors without
actually enquiring of them seems very unwise. The non-responding authors
may not have considered the criticism substantive; they might have
considered that the answer(s) to the criticism were contained within the
published manuscript; or they might have believed that a response was not
required unless invited by the journal. None of these important
possibilities was evaluated in the study by Gotzsche and colleagues.
Similarly, claiming that the definition of a criticism as substantive is
validated by the content of the responses to the minority of criticisms
to which the authors replied, and by a subsequent random sample of 10
papers with author replies, is incautious. Why should we assume that the
same results would be found for papers and rapid responses that did not
attract author replies?
We find it hard to understand how the authors can defend not reading
the original paper when assessing whether the criticism was substantive.
The only reason given is that the study was already complicated, but this
decision sacrificed the long-established principle of using original
sources to verify information. Gotzsche and colleagues conclude by
advising readers and authors to use all the available information about a
study when assessing or citing it, not just the abstract or full paper,
but also any print or online letters to the editor and commentaries in
other journals; the irony that this conclusion is drawn from a study that
did not follow this advice is not lost on us.
Competing interests: No competing interests