Investigating allegations of scientific misconduct
BMJ 2005; 331 doi: https://doi.org/10.1136/bmj.331.7511.245 (Published 28 July 2005) Cite this as: BMJ 2005;331:245All rapid responses
Chalmers makes an interesting suggestion about the relevance and quality of research, but there is no evidence that what he suggests, asking authors to cite scientifically relevant reviews as the genesis of the idea for their papers, would work. It is an interesting question for someone at the forefront of evidence based practice: does he have any evidence that his suggestion would cut fraudulent research? The idea could certainly be tested in a randomised controlled trial, with a cited-review arm and an “I thought of it in bed” arm; the end point would be whether either arm produced more scientific fraud. His final hurdle would be securing permission to carry out such research, given that there is no basis in reviews for such a thing.
I don’t think that extraordinary results are more likely to be fraudulent. A moment’s thought suggests that they are, sui generis, more likely to be reinvestigated, verified, and pondered over. I think fraudulent results would be more likely to be incrementally different from others, to avoid attracting attention and detection: just a little bit different, so as to make publication feasible. The fact that fraudulent results which are also statistical outliers are easier to detect is not the answer to the question. This is an opinion; I have, of course, no evidence for that assertion.
The underlying theory of an idea having been seeded by other research attracted me, as it is consistent with C G Jung’s idea of a collective unconscious. This concept has recently attracted some attention but no observational proof. The grave disadvantage of requiring people to cite research that triggered their own observations is that scientific revolutions, rare as they are now, could never happen. Newton would not have been able to cite research leading to gravitation; there was, as far as I recall, none. He performed a thought experiment on the subject in 1666 with an apple in the grounds of Woolsthorpe Manor. Such a thing would hardly cut ice at the Royal Society: “I thought of it after an apple fell on my head”. Nor could Leibniz have cited a review for his invention of the calculus, which he worked out in a few months in the mid-1670s. Einstein could not have published his theory of relativity. Nor Gödel his incompleteness theorems, as the view prevailing in the 1930s was that Hilbert’s programme would succeed and arithmetic would probably be shown to be complete. Gödel’s paper was a thunderclap refutation.
The “advantage” of such a scheme is that the old and crotchety would maintain a stranglehold on the direction of research and the publication of results, and the young world and its thought would forever be shut out. Is this what we want?
Oliver Dearlove
Conflict and coincidence – or possible synchronicity – editorial, 5 Aug 2005, in the Daily Telegraph on scientific consensus, who should lead research, and the Royal Society: “Heated scientific climate”.
Competing interests: No competing interests
According to Brian Martinson of the Health Partners Research Foundation in Minneapolis, one in three of 3247 early and mid career scientists surveyed confessed to having “sinned”: 1.5% admitted to serious falsification or plagiarism, 15.5% had changed study design, methodology, or results in response to pressure from a funding source, 12.5% admitted to overlooking others’ use of flawed data, and 7.6% had circumvented minor aspects of requirements regarding the use of human subjects (Nature 2005;435:718-9).
Research has arguably become more complex in our globalized world, where information can be shared almost instantly with networks and teams across the globe. In this context of efficiency and speed, financial resources seem all the more scarce. Channeling funds away from research to administrative endeavours such as research oversight is a hard sell, particularly to professionals who are self regulating and consider themselves to work to the highest standards of scientific integrity. Yet, as outlined in the article, numerous high profile cases have been brought to light for public scrutiny. Tremendous, time consuming, and laborious efforts were undertaken by the BMJ staff to describe the difficulty of getting to the truth. It is frightening how pervasive this problem has become, as the survey by Martinson and colleagues cited above makes clear.
But is punishment of the fraudulent miscreants enough? And what type of punishment? Banishment from science forever, confiscation of research funds and resources for some defined period, or expulsion from an institution, with the risk of being hired elsewhere given the impeccable credentials that these powerful miscreants tend to have? The satisfaction of irrefutably proving serious fraud and purging contrived falsehoods from the literature is one goal. But is there also a role for rehabilitation and education through an organizational, systems based approach? Education in the scientific method, for students from A level teenagers to professors emeriti, in the context of the modern pressures of commercialization in a “publish or perish” milieu, may not achieve the immediate satisfaction of irrefutably proving a fraud, but may begin to address problems festering in our scientific community. By painstakingly drawing attention to this problem, the BMJ articles may, in the final analysis, motivate the necessary institutions and organizations at all levels, from university and hospital to national bodies, to contribute to finding solutions to this global problem of research fraud.
Competing interests: None declared
The title of this response to the editorial by Jane Smith and Fiona Godlee (1) is the title of a chapter by Tom Jefferson and Jon Deeks in an excellent BMJ book on peer review in health sciences edited by Godlee and Jefferson (2). The chapter makes the important point that, until proved otherwise, each report of an individual study submitted to a journal should be seen as one member of a population of studies addressing the same or similar questions, and that it should therefore be assessed by reference to other members of that population of studies.
I know that I am not alone in having had the experience of detecting previously unsuspected fraud in the course of preparing systematic reviews (3). And indeed, in her article reviewing the suspicions that research reported by RB Singh might be fraudulent, Caroline White notes that external reviewers had observed that one of Singh's reports suggested "a more striking total benefit than most previously reported trials", and that another was "not consistent with the literature" (4).
The BMJ deserves credit for having asked authors submitting studies to the journal to be explicit about 'what is already known on this subject'. It is my impression, however, that this requirement is usually
not addressed in a sufficiently systematic way. If journals were to
require authors to show by reference to scientifically defensible reviews
of existing evidence why they had embarked on a new study, not only would
it be likely that the relevance and quality of research would improve, but
also that questionable and frankly fraudulent research would be detected
more often.
Yours
Iain Chalmers
References
1. Smith J, Godlee F. Investigating allegations of scientific
misconduct. BMJ 2005;331:245-6.
2. Jefferson T, Deeks J. The use of systematic reviews for editorial peer reviewing: a population approach. In: Godlee F, Jefferson T, eds. Peer review in health sciences. London: BMJ Books, pp 224-234.
3. Howell C, Chalmers I. A review of prospectively controlled
comparisons of epidural with non-epidural forms of pain relief during
labour. International Journal of Obstetric Anesthesia 1992;1:93-110.
4. White C. Suspected research fraud: difficulties of getting at the truth. BMJ 2005;331:281-8.
Competing interests: None declared
In their recent article about scientific misconduct, Smith and Godlee discuss a number of options for reducing the risk of falsified study results (1). One idea that they mention is the depositing of data into a secure archive for the purposes of auditing. It would, for example, have made the process used to evaluate the unpublished study much easier if the original electronic version had been provided, rather than the data having to be re-entered by hand (2). Depositing the data in a secure location also helps ensure their preservation in case questions later arise about unexpected risk factors that were not considered at the time but on which the data might shed light.
Obviously, recently published data need to be kept strictly confidential, as secondary studies are an important part of a clinical trial; it is only reasonable that those who put the effort into running the trial should have the first opportunity to investigate the data fully. However, if the data eventually became public domain (after a sufficient waiting period), this would be of great help both in further understanding the results of key trials and in providing a solid backdrop of data on which meta-analysts and biostatisticians could develop techniques.
What time period is appropriate is an open question, but clearly this could be resolved. Smith and Godlee dismiss this idea because of the lack of current infrastructure. However, given the potential benefits in increasing our understanding of these results, could we not develop this infrastructure?
JA Delaney
Statistician
Royal Victoria Hospital, Montreal, Quebec, Canada
1. Smith J and Godlee F. Investigating allegations of scientific
misconduct. BMJ. 2005;331(7511):245-6.
2. Al-Marzouki S, Evans S, Marshall T and Roberts I. Are these data
real? Statistical methods for the detection of data fabrication in
clinical trials. BMJ. 2005;331(7511):267-70.
Competing interests: None declared
Dear BMJ,
About five years ago, a paper was published in the US stating that 85% of all scientific studies lacked the unbiased scientific rigor expected of medical research; that means only 15% of the studies were objective and unbiased. Currently, we are seeing a rash of published studies discrediting nutritional supplements. When authors pick and choose the data, or the population, that they want in order to support a predetermined conclusion of a clinical study, I think it is time to drum the myth makers out of the medical arena. The only purpose of such studies is to create an economic windfall for someone or some manufacturer.
Honesty and truth have to be returned to the medical research community worldwide.
Thomas A. Braun RPh
Competing interests: None declared
Your report on the investigation of misconduct following the paper by Singh and others published in 1992 will come as no surprise to the research community in India; rather, we are surprised only that it took so long. Medical circles in India have had a healthy scepticism about the prodigious output of this group of authors, so much so that it has led to cynicism among Indian clinicians about any research conducted in India. Some of us who have looked closely at other papers by the same group have been stalled in our efforts to point out discrepancies, at best being limited to a letter to the editor.
What is perhaps more revealing is that they managed to secure 'research collaboration' from an extensive network of international scholars. This points to the urgent need for all scientists across the globe to be very careful about whom they associate with in their mad rush to see their names in print.
Competing interests: None declared
Dear Editor,
There is no doubt, after reading the article, that the doctor in question has a tremendous desire to do research (witness the prolific number of papers in a short span of time). Anybody who wishes to extend the vistas of human knowledge should be welcome. However, using dubious data to achieve those ends can only be condemned, and no amount of research enthusiasm can obviate that. Maybe it is time to have an international medical research commission, with punitive powers exercised through the UN/WHO, to regulate worldwide research. Its first step should be to reduce the number of medical journals to manageable levels, and its second to prioritise broad areas of research so as to maximise the use of current world resources to improve the quality of human life.
Dev Srivastava
Competing interests: None declared
Research needs policing
The article on fraud in medical research was deeply disturbing. Without pointing any fingers, fraudulent research affects health care and mankind in many ways, direct and indirect. A good study forms the primary basis for future research and propels the way forward. False research means a waste of precious and scarce research resources, a waste of the efforts of hard working researchers involved in secondary research based on a faulty primary study, and, most of all, poor and indeed potentially harmful health care delivery for our patients.
We set the clock back by publishing ‘dangerous’ articles in the world’s leading journals, such as the BMJ. Publication in these premier journals effectively authorizes and guarantees the research, and the leading journals of the world cannot be absolved of their errors.
We think cooked up research has to be taken most seriously. A few suggestions include setting up a central database of blacklisted authors (and co-authors), compulsory submission of raw data once a paper is accepted by a journal, and reducing the number of journals available on any particular subject (which we believe is the most impossible task!). Institutions could also be held responsible for assuring the quality of research submitted under their names and using their funds, and guilty institutions could likewise be blacklisted. Many people may not agree with such policing. Unfortunately, however, we are not living in a fair world.
We are sure the BMJ can take a lead in setting up this central
database. At least the leading journals of the world can join up to make
this concept of medical Interpol a reality. Silly and potentially harmful
articles will continue to get published in the local journal of silly
articles. Who cares?
Competing interests: None declared