Problems with peer review
BMJ 2010; 340 doi: https://doi.org/10.1136/bmj.c1409 (Published 15 March 2010) Cite this as: BMJ 2010;340:c1409
All rapid responses
Mark Henderson's article is informative and stimulating. The time-
honoured process of peer review merits much more discussion than it has
hitherto received. I have had a mixed experience of peer reviews and
reviewers. Many peer reviews have been very helpful, full of learned
comment, and have resulted in revisions and a significant improvement in
the submitted material, but this is not always the case. I recall one peer
review where the comments made it absolutely clear that the reviewer had
not read the manuscript carefully. Careful study is a primary
responsibility of any reviewer. Another reviewer took a key phrase from
the Discussion section, amputated one word so that the meaning was
completely altered, and then proceeded to demolish the altered statement.
One can very well do without such malpractice.
I believe that despite its faults, peer review is essential. The
field of science is immense and even the most accomplished editor cannot
be competent to review every article that is submitted. The editor must
have recourse to expert opinions, hence the review process. The identity
of reviewers should not be made known to the submitting author, because
this can inhibit frank criticism, particularly if the author and the
reviewer are acquainted, and especially if the author is senior to the
reviewer. The review itself should always be made available to the author,
because a competent review can be very helpful to him or her. Ideally, the
identity of the author should not be known to the reviewer, but this is
seldom possible, because a reviewer will by nature be familiar with the
literature and will very likely identify the author or authors. For the
same reason, it can be expected that frequently an author will at least
suspect who wrote the review. Regrettably, it has been amply shown that
peer review does not prevent the publication of shoddy or even fraudulent
work, but it remains the best mechanism that we have for securing the
quality and validity of scientific communications.
Competing interests: None declared
The Information Commissioner has produced a number of decision
notices covering peer review processes.[1,2,3] It would seem that more
openness in relation to voluntary, unpaid review would - in the view of
the Information Commissioner - be harmful to the public interest:
greater transparency would stifle free and frank comment.
I do not share the Information Commissioner's view, and the BMJ
losing one or two of its reviewers in return for more transparency would
be no great loss. Perhaps a study is required to find out more about the
views of those who oppose greater openness in the peer review process.
[1] www.ico.gov.uk/upload/documents/decisionnotices/2006/fs_50070878.pdf
[2] www.ico.gov.uk/upload/documents/decisionnotices/2009/fs_50075956.pdf
[3] www.ico.gov.uk/upload/documents/decisionnotices/2009/fs_50220528.pdf
Competing interests: None declared
Peer review, yes, but good editors must have the last say
Mark Henderson’s article on peer review is superb [1]. As one-time
editor of the Ghana Medical Journal for several years, I quite appreciate
the difficulties that Henderson attempts to unravel. I find the
observations of Dr Mark Walport and Dr Fiona Godlee most helpful [1].
PERSONAL PERSPECTIVES
From a personal perspective, having published (or tried to publish)
on both sides of the Atlantic I recognize a couple of differences –
whether these have anything to do with peer review (open or closed) I
cannot say. First, American Medical/Science journals are more likely to
reject one’s submissions. British editors seem much the fairer, with The
Lancet especially being so extremely apologetic in rejecting a paper that
one felt grateful for receiving such a polite refusal. I once thought of
framing a letter rejecting my paper.
Secondly, American Editors (maybe on the advice of their peer
reviewers) are more likely to invite articles for publication. There is no
fear of rejection here because it is the editors themselves that invite
the paper [2-5].
Thirdly, I feel more comfortable submitting ‘de novo’ articles to
British Editors, being confident that they are more likely to rely on
their own ‘in-house’ editorial strength (which is quite formidable) than
on ‘Expert Reviewers’ however important their opinion is.
SPEED OF PUBLICATION: UK VERSUS USA
The quickest periods it took for uninvited papers of mine to be
published have been with The Lancet [6] and the BMJ [7]; the latter
article was submitted in late May 1987 and published by British Medical
Journal Editor Dr Stephen Lock by 20 June 1987 [7], without my being sent
any proofs to read. The more than 550 reprint requests I had in those days for that
one article provided proof of Dr Mark Walport’s definition of a “good
editor”. On the other hand, it took 6 months for an American Medical
Journal to peer-review and publish a tiny comment of mine on G6PD
Deficiency and sickle cell anaemia [8].
DEFINITION OF MARK WALPORT’S GOOD EDITOR
When Dr Walport mentions “a good editor” more than once in Mark
Henderson’s well-reasoned article [1], he is not just referring to
technical competence. The best definition of “a good editor” is that
she/he has technical competence wrapped in an ethical dimension. Fair play
cannot be assumed to be part of the definition of competence. To assume
that altruism evolves with education is one of the drawbacks of
scientific humanism. Fair play combines kindness with intellectual rigour,
fearlessness with reasonableness, and none of these precious qualities
owes anything to scholarship.
EXPERIENCE WITH EXCELLENT MENTORS
Some of the greatest privileges I have had in my postgraduate
upbringing were to have been able to work with Dr Stanley Shaldon in the
Department of Professor Dame Sheila Sherlock FRS (University of London)
and with Professor Hermann Lehmann FRS (University of Cambridge). These
were my extremely valued mentors. And to discover that both Sheila
Sherlock and Hermann Lehmann had had papers rejected at least once by the
two leading medical journals in the world (so they both told me) filled me
with amazement. Some peer review! But then, our Editors must always have
the last word. My soft spot for British Medical Editors is that I feel
they have published more for me over past decades than they have rejected.
And even when they rejected my articles, they apologized. One could hardly
ask for more.
Felix I D Konotey-Ahulu MD(Lond) FRCP(Lond) FRCP(Glasg) DTMH(L’pool)
Kwegyir Aggrey Distinguished Professor of Human Genetics, University of
Cape Coast, Ghana and Consultant Physician Genetic Counsellor in Sickle
& Other Haemoglobinopathies, London W1G 9PF
1 Henderson Mark. End of the peer review show? BMJ 2010; 340:
c1409 http://www.bmj.com/cgi/content/full/340/mar15_1/c1409
2 Konotey-Ahulu FID. The Sickle Cell Diseases: Clinical
manifestations including the Sickle Crisis. Arch Intern Med 1974; 133(4):
611-619 http://archinte.ama-assn.org/cgi/reprint/133/4/611.pdf
3 Konotey-Ahulu FID. Effect of environment on sickle cell disease
in West Africa; epidemiologic and clinical considerations; Chapter 3 in
Sickle Cell Disease – diagnosis, management, education and research. Eds
Abramson H, Bertles JF, Wethers Doris L. St Louis CV Mosby Co 1973 pp 20-
38.
4 Konotey-Ahulu FID. Missing the wood for one genetic tree? The
First Symposium on The Role of Recombinant DNA in Genetics – Proceedings –
Chania, Crete, Greece, May 13-15, Eds Loukopoulos D, Teplitz RL. Athens,
P Paschalidis 1986, pages 105-116.
5 Konotey-Ahulu FID. The Human Genome Diversity Project (HGDP):
Cogitations of an African Native. Politics and The Life Sciences (PLS)
1999; 18(2): 317-322. [Lake Superior State University, Sault Ste. Marie,
MI 49783-1699, USA]
6 Konotey-Ahulu FID. Sicklaemic Human Hygrometers. Lancet 1965;
1:1003-1004
http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=184628&blobtype=pdf
7 Konotey-Ahulu FID. Clinical epidemiology, not sero-epidemiology
is the answer to Africa’s AIDS problem. BMJ (Clin Res Ed) 1987; 294(6587):
1593-94 http://www.bmj.com/cgi/reprint/294/6587/1593.pdf
doi:10.1136/bmj.294.6587.1593
8 Konotey-Ahulu FID. Glucose-6 Phosphate Dehydrogenase Deficiency
and Sickle Cell Anemia. New England Journal of Medicine 1972; 287-288.
Competing interests: None declared
Editors must have a mandate that excludes the tradition of confirmatory bias
I agree with Professor Konotey-Ahulu about good editors having the
last word, but only if they are rid of the tradition of confirmatory bias
that hinders innovation. When we initially set out on a journey to bring
researchers' attention to innovative software we had created for research
synthesis, little did we realise that this would bring us face to face
with confirmatory bias in clinical epidemiology. We therefore decided
that documenting this journey is of critical significance to the
community of editors discussing this issue.
It all began with the idea that a popular methodology makes no
sense[1,2], and of course this is not a new thought[3-5]. We therefore
decided to do something about it by developing a new method[6,7] and,
finally, an application that ran it. By supplying researchers with a tool
implementing the method, we hoped to put them in a position to evaluate
for themselves the merit or otherwise of the model. Once the software[8]
was created, we wrote up a description and sent it to a journal that
publishes computer methods and programs in medicine, with the aim of
bringing it to the attention of researchers. We received a rejection
notice from the editor, the reason being that the reviewer stated "The proposal is
interesting but I am afraid that the dilemma regarding whether to
..[implement] .....[the method].. transcends the scope of the journal.
Before resolving this dilemma, implementing a software to produce this
kind of ...[model result]... is useful for methodological research but I
am not sure about its impact on the community of reviewers. In the other
hand, the software presented in the paper is robust, well done and well
described". The reviewer also stated "Current recommended methods to deal
with ....[this issue]....explicitly discourage the use of ....[such
methods]....." referencing a mainstream source.
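The methodological dispute underlying this exchange is how studies are weighted when pooled in a meta-analysis[1-7]. As a minimal illustrative sketch only - not the authors' quality-effects method or their MetaXL software, and using made-up study data - the conventional inverse-variance fixed-effect pooling and the DerSimonian-Laird random-effects pooling debated in references [1-5] can be written as:

```python
# Sketch of inverse-variance pooling: fixed-effect vs DerSimonian-Laird
# random-effects. Hypothetical study data; illustrative only.

# Hypothetical effect estimates (e.g. log odds ratios) and their variances
effects = [0.10, 0.80, -0.30, 0.60]
variances = [0.04, 0.09, 0.02, 0.16]

def fixed_effect(effects, variances):
    """Pool with inverse-variance (fixed-effect) weights."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    return pooled, w

def dersimonian_laird(effects, variances):
    """Pool with DerSimonian-Laird random-effects weights."""
    pooled_fe, w = fixed_effect(effects, variances)
    k = len(effects)
    # Cochran's Q statistic for heterogeneity
    q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    # Method-of-moments estimate of between-study variance, truncated at 0
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

fe, _ = fixed_effect(effects, variances)
re, tau2 = dersimonian_laird(effects, variances)
print(f"fixed-effect pooled estimate:   {fe:.4f}")
print(f"random-effects pooled estimate: {re:.4f} (tau^2 = {tau2:.4f})")
```

With heterogeneous studies, tau-squared is positive and the two pooled estimates diverge, because the random-effects weights flatten toward equality; this flattening is precisely what references [1,2,5] criticise.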
We felt that this was an unacceptable position for an editor to
take, subsuming the role of gate-keeper over which tools his peers had
access to, and thus wrote to both the section editor and the chief
editor, stating:
1. This paper is not about peer-review of the methodology underlying the
software, it is about the software. The former has already been done -
several times.
2. It hampers the testing of new models (which have already been peer-
reviewed) when the tools that implement them remain unknown. Would this
not be exactly why the paper should be published - so that someone could
use the tool to analyze and discuss the method, or even refute it?
3. We hold the same opinion about the old models that the referee holds
about our model. Nevertheless, we implemented all models in the software
because we think researchers should be in the position to choose for
themselves what model to use.
There was no response to these statements from either editor. We thus
re-submitted to a medical informatics journal that also publishes software
reviews, thinking that this was possibly a chance occurrence and that
mainstream editors could not possibly be so biased. We were surprised
again by the editor's decision to reject. This time there were several
reviewers whose comments again revolved around the ...[method].., not the
software. Indeed the first referee actually wrote "My comments relate
mainly to the ..[method].., rather than the presentation of the software
package.......". Other reviewers put forward lack of uptake as a valid
reason to suppress the software from researchers: "I do not think that the
manuscript is acceptable for publication for the following reasons. The
..[method].. has been published .......however, the uptake of this method
has been minimal. According to Web of Knowledge, ..[method]...[authors]..
has been cited only seven times which are all self-citations by the
authors". Finally, another reviewer felt it went against the status quo:
"The standard approach to ...[method].. is ...[xyz]...whereas this
information is hidden in the ..[method]..."
It became clear to us that the situation was more serious than we
had initially thought, because we had a clear pattern of confirmatory
bias from multiple reviewers and editors alike. Confirmatory bias is
defined as the tendency to emphasize and believe experiences which
support one's views and to ignore or discredit those which do not[9].
Indeed, this is exactly in keeping with Mahoney's argument: even though
our software is relevant, the developmental approach adequate, and the
results of interest to the scientific community, they were viewed
prejudicially because they do not conform to current theoretical
knowhow[9]. According to Mahoney, this is because there is a tendency for
..[editors].. to seek out, attend to, and sometimes embellish experiences
which support or "confirm" their beliefs[9].
Of course there are editors who stand against confirmatory bias. One
example is the Science editorial entitled "To publish or not to publish",
in which the editor of Science decided to publish even though two
distinguished scientists in the field asked him not to. The editor stated
for the record that "what we ARE very sure of is that publication is the
right option, even- and perhaps especially- when there is some
controversy"[10]. The task of ensuring that journals do not stifle
scientific innovation clearly falls on editors' shoulders. Both of the
journal editors who rejected our submissions attempted to keep these
innovations away from researchers, essentially taking the decision away
from researchers and acting as gate-keepers of what they have access to.
As Mahoney puts it, "It is only contrary-to-prediction experiments which
carry logical implications", and the mandate for editors must therefore
exclude this "dogmatically confirmatory tradition"[9,11].
Reference List
1. Al Khalaf MM, Thalib L, Doi SA. Combining heterogenous studies
using the random-effects model is a mistake and leads to inconclusive
meta-analyses. J Clin Epidemiol 2011; 64(2):119-23.
2. Doi, SA. The case of the king's new clothes: There is no such
thing as a random effects meta-analysis [Web Page]. 29 April 2011;
Available at http://www.bmj.com/content/342/bmj.d549/reply#bmj_el_260709.
(Accessed 15 October 2011).
3. Senn S. Trying to be precise about vagueness. Stat Med 2007;
26(7):1417-30.
4. Peto R. Why do we need systematic overviews of randomized trials?
Stat Med 1987; 6(3):233-44.
5. Shuster JJ. Empirical vs natural weighting in random effects
meta-analysis. Stat Med 2010; 29(12):1259-65.
6. Doi SA, Thalib L. A Quality-Effects Model for Meta-Analysis.
Epidemiology 2008; 19(1):94-100.
7. Doi SA, Barendregt JJ, Mozurkewich EL. Meta-analysis Of
Heterogenous Clinical Trials: An Empirical Example. Contemp Clin Trials
2011; 32:288-98.
8. MetaXL version 1.0 [Web Page]. 2011; Available at
http://www.epigear.com/index_files/metaxl.html.
9. Mahoney M. Publication prejudices: An experimental study of
confirmatory bias in peer review system. Cognitive Therapy and Research
1977; 1:161-175.
10. Kennedy D. To publish or not to publish. Science 2002;
295(5561):1793.
11. Mahoney MJ, Kimper TP. From ethics to logic: A survey of
scientists. Mahoney MJ, Ed. Scientist as subject. Cambridge,
Massachusetts: Ballinger, 1976: 187-93.
Competing interests: No competing interests