The need for a national body for research misconductBMJ 1998; 316 doi: https://doi.org/10.1136/bmj.316.7146.1686 (Published 06 June 1998) Cite this as: BMJ 1998;316:1686
I read in news reports about your position on clinical research
misconduct. The situation is far more serious than people realize. I
worked for the Federal government for several years. Part of my job
involved serving on committees that reviewed grant and contract
applications. I also reviewed research in progress and published articles,
and I frequently attended scientific meetings and discussed ongoing
research with scientists and their staff.
Many times, I have been invited to collaborate in a research program on
the condition that the research findings would support a particular
hypothesis; if the findings did not support the hypothesis, the research
would not be published. I am also aware of scientists who misrepresent
data to support a particular position. Reporting such activities is highly
dangerous for the person who makes the report. Because there are usually
substantial business interests involved, reporting that others are
publishing misleading or false results is asking to have one's research
and personal life heavily investigated and criticized. Personal matters
may include health conditions that the individual wishes to keep
confidential, or income and other intimate information that might be
embarrassing even if entirely legal. For example, many people would not
want to publicize the fact that they have herpes or cancer, that they take
antidepressants, or that they have had cosmetic surgery.
Whistle blowers are likely to receive poor job evaluations and face a
campaign to discredit them with future employers. Typical statements
would be: "Do you want to hire this person, so that if you make a mistake
in your research he reports you, and you end up in jail or lose your
grant?" "Do you know that if you hire this person, most companies will
refuse to give you contract support, because they will be afraid that you
will report undesirable data that would harm their sales?"
When I worked for the National Institute on Drug Abuse (NIDA), I
found many suspicious reports on the extraordinary results of treating
drug abusers. According to many researchers, the US government was winning
the war against drugs. The number of patients successfully treated was
extraordinary. Practically every treatment center was highly successful
and deserved additional funding (the government sponsored about 1,500
treatment centers in the late 1970s, treating about 100,000 subjects per
month). The only paradox in these extraordinary reports was that if one
counted the number of successfully treated patients since the inception of
federally funded drug abuse programs, we had treated so many people that
there could not possibly be any addicts left in the US. When I presented
this statistical finding, some scientists told me that perhaps it was
true: there were almost no drug addicts left in the US because we were
winning the war on drugs. Of course, we never won any war against drugs;
it is not even clear we ever won any battles. When subjects were caught
with drugs, the courts offered them a choice, treatment or jail, and of
course they chose treatment. Sometimes, subjects would run out of drugs or
money and would go into treatment for a place to sleep and free food.
In my position in charge of data analysis, I instituted many steps to
bring order to the chaotic data on drug abuse. Individuals could no longer
be considered successfully treated at one program if they were soon to
enroll in another program. We gave each person a unique ID to verify their
movement from one clinic to another.
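The effect of that unique ID can be sketched in a few lines: counting
program completions double-counts anyone who cycles through several
clinics, whereas counting distinct IDs does not. The records below are
invented purely for illustration; nothing here reflects actual NIDA data.

```python
# Hypothetical treatment records: (person_id, clinic, outcome).
# Person "A1" completes treatment at two different clinics.
records = [
    ("A1", "clinic_1", "completed"),
    ("A1", "clinic_2", "completed"),   # same person, counted again
    ("B2", "clinic_1", "completed"),
    ("C3", "clinic_3", "dropped_out"),
]

# Naive count: one "success" per program completion.
raw_successes = sum(1 for _, _, outcome in records if outcome == "completed")

# Deduplicated count: one "success" per unique person.
unique_successes = len({pid for pid, _, outcome in records if outcome == "completed"})

print(raw_successes, unique_successes)   # 3 completions, but only 2 people
```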
I believe that one of the most important ways to prevent misleading
reports is to force researchers to make public the data upon which their
publications are based. At NIDA, I created a national databank with all
the statistical data we had on drug abuse. Any researcher could access the
data. This was long before the invention of PCs and the Internet, so such
a databank would be far cheaper and more feasible today. Editors could
mandate that authors publish through the Internet the raw data upon which
published research is based. For this proposition to work, it would have
to be mandated by the US government and the few European governments that
provide most research funding.
I agree with you that "methods for detection, investigation, and response
to clinical research misconduct are utterly inadequate worldwide." Finding
a subgroup where a particular drug has an effect is one of the common
tricks researchers use. Appropriate multivariate analysis with a sample
size > 100 will always find a highly significant statistical
relationship; it is just a matter of playing with enough variables. I have
yet to find a data set of 100 subjects and 30 variables in which it is not
possible to find a significant relationship between a certain treatment
and a certain outcome by careful manipulation of the other variables
included in the analysis. For example, if the data do not show the
desired result, we can categorize one variable into 3 to 10 groups. The
common use of quintiles almost always guarantees results, because a trend
across 5 points is highly likely to be statistically significant.
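The subgroup trick described above is easy to reproduce by simulation.
The sketch below (illustrative only, not the author's actual analysis)
generates 100 subjects with a random "treatment" label, an outcome that is
pure noise, and 30 unrelated covariates, then tests the treatment effect
inside every covariate-defined subgroup. A normal approximation stands in
for a proper t-test to keep the example self-contained.

```python
import numpy as np
from math import erfc, sqrt

def p_value(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    se = sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    z = abs(a.mean() - b.mean()) / se
    return erfc(z / sqrt(2))

def best_subgroup_p(rng, n=100, k=30):
    """Generate pure noise, then hunt for a 'significant' subgroup effect."""
    treatment = rng.integers(0, 2, n)      # random treatment labels
    outcome = rng.normal(size=n)           # outcome independent of treatment
    covariates = rng.normal(size=(n, k))   # 30 unrelated covariates
    best = p_value(outcome[treatment == 1], outcome[treatment == 0])
    for j in range(k):
        split = covariates[:, j] > np.median(covariates[:, j])
        for mask in (split, ~split):       # subgroup above / below the median
            t, o = treatment[mask], outcome[mask]
            if t.sum() > 5 and (t == 0).sum() > 5:  # keep both arms populated
                best = min(best, p_value(o[t == 1], o[t == 0]))
    return best

# With ~60 subgroup tests per dataset, most pure-noise datasets yield p < 0.05.
hits = sum(best_subgroup_p(np.random.default_rng(s)) < 0.05 for s in range(20))
print(f"'significant' subgroup found in {hits}/20 pure-noise datasets")
```

Because every dataset is noise by construction, any "significant" subgroup
found this way is a pure artefact of searching across many comparisons.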
Edward Siguel, MD, PhD
Competing interests: No competing interests
Scientific advances are only made by a worker obtaining and publishing results which can subsequently be confirmed and used by others.
Faked or fraudulent results may waste time and money in vain attempts to replicate them, but the same could be said of results which are genuinely mistaken (eg polywater, cold fusion). It seems likely that in most cases which depend on faked results the fraud is eventually discovered and does not benefit the fraudster in the long term.
We have to be careful not to spend disproportionate amounts of time, money and effort in this area though I agree a protocol should be established for institutional investigation of allegations of misconduct.
More prevalent than the faking of results is malpractice related to publication. Most academics will have had experience of disputes over authorship; my own experience includes papers I wrote being published without my name on them, and papers that depended crucially on the histopathology appearing without me as an author.
The "salami slicing" of research to produce multiple papers is probably due to the preoccupation of academic institutions with quantity of publications when making appointments or promotions. These institutions appear reluctant to rely on qualitative assessments.
Poor quality of refereeing and editing must take part of the blame. It ought to be apparent to the editor and referees who has been responsible for the various parts of the work, and the referees should be chosen for their expertise in the field. Referees' comments frequently reveal that this is not the case.
While neither referees nor editors can be expected to investigate scientific fraud I do not see that a request to the authors' institution to validate their data could be construed as libellous. It should perhaps be done randomly as regular "quality control".
Competing interests: No competing interests