Missing clinical trial data: setting the record straight
BMJ 2010;341 doi: https://doi.org/10.1136/bmj.c5641 (Published 12 October 2010) Cite this as: BMJ 2010;341:c5641
It is gratifying to read that the British Medical Journal is going to
address the critical issue of missing clinical trial data. Beyond the very
obvious clinical relevance of the issue, the missing clinical data also
have major implications for preclinical scientists involved in both drug
development and basic science. These implications do not yet appear to have
been widely appreciated.
Ultimately, preclinical scientists involved in drug development and
basic science inevitably assess the validity of their work in terms of
accepted findings about clinical efficacy and tolerability. I will
illustrate this point by reference to research with so-called atypical
antipsychotics.
After a seminal clinical trial with clozapine in 1988 (1), the
pharmaceutical industry collectively developed a range of novel so-called
atypical antipsychotics which were generally contrasted unfavourably with
older typical drugs such as haloperidol. Recent independently funded
clinical trials and systematic meta-analyses have indicated that the
clinical efficacy of the atypical antipsychotics as a class has been
largely exaggerated (2-4), leading to a recent Editorial in the Lancet
entitled "The spurious advance of antipsychotic drug therapy" (3) in which
it was concluded that: "The spurious invention of the atypicals can now
be regarded as invention only, cleverly manipulated by the drug industry
for marketing purposes and only now being exposed." Part of the spurious
invention of the atypicals involved the selective publication of clinical
trial data (3).
Following these important publications I recalled an indiscreet
comment made to me at a conference many years ago by a senior preclinical
scientist involved in the development of one of the first of the atypical
antipsychotics. The comment was to the effect that the atypical
antipsychotics did not really differ from a clinically appropriate low
dose of haloperidol - the prototype older typical antipsychotic. I
therefore e-mailed this individual and asked for his thoughts on the
recent very surprising clinical evidence. He indicated that he found the
new clinical evidence unsurprising, as his company had conducted very
extensive unpublished preclinical studies on efficacy and tolerability
comparing an appropriate low dose of haloperidol with clozapine, the
prototypical atypical antipsychotic. It is thus very clear that it is not
only clinical trial data that are "missing" from the scientific
literature. There are also "missing" preclinical data of very substantial
importance.
A search of the Web of Science database conducted on October 26th,
2010 for studies in rats on atypical antipsychotics revealed a total of
1,293 studies since 1988 (the date of the seminal clozapine clinical
study). It is a reasonable supposition that a very large proportion of
these studies were conducted largely because of the now very contentious
clinical findings. This situation is not only unethical in that it
reflects extensive research with animals which may well have been totally
unnecessary, but it is also an enormous waste of resources. Similar
comments are likely to be applicable to other classes of drugs for which
there are missing clinical data.
In summary, the missing clinical and preclinical data do not simply
mislead clinicians; they also mislead preclinical scientists.
1. Kane J, Honigfeld G, et al. Clozapine for the treatment-resistant
schizophrenic. A double-blind comparison with chlorpromazine. Arch Gen
Psychiatry 1988;45:789.
2. Lieberman JA, Stroup TS, et al.; Clinical Antipsychotic Trials of
Intervention Effectiveness (CATIE) Investigators. Effectiveness of
antipsychotic drugs in patients with chronic schizophrenia. N Engl J
Med 2005;353:1209.
3. Tyrer P, Kendall T. The spurious advance of antipsychotic drug
therapy. Lancet 2009;373:4.
4. Leucht S, Corves C, et al. Second-generation versus first-generation
antipsychotic drugs for schizophrenia: a meta-analysis. Lancet
2009;373:31.
Dr Andrew J. Goudie, Reader in Psychopharmacology, School of
Psychology, Eleanor Rathbone Building, Liverpool University, L69 3BX, UK.
Competing interests: No competing interests
We agree with Loder and Godlee that selective reporting must call
into question the entire evidence synthesis enterprise. Loder and Godlee
mention the effect that selective reporting may have on systematic reviews
and meta-analyses. As practicing clinicians, however, we are particularly
concerned by the consequences that failure to disclose negative results may
have on practice guidelines. Like systematic reviews and meta-analyses,
guidelines can only be as good as the sum of the individual studies from
which they derive. Unlike systematic reviews and meta-analyses,
however, guidelines have the power to dictate clinical practice
immediately. In this context, the fact that studies that fail to show
benefit are not accounted for is of particular concern, because it
unavoidably leads to an inflated perception of the efficacy of our
interventions (1) and to excessive medicalization of society (2). It is
also worrying that
even in the best guidelines the proportion of recommendations unsupported
by conclusive evidence may be growing (3). Ultimately the most important
message that the reboxetine study may convey to practicing clinicians is
not to be afraid to use their own intelligence (4).
(1) Turner EH, Matthews AM, et al. Selective publication of
antidepressant trials and its influence on apparent efficacy. N Engl J
Med 2008;358:252.
(2) Getz L, Sigurdsson JA, et al. Estimating the high risk group for
cardiovascular disease in the Norwegian HUNT 2 population according to the
2003 European guidelines: modelling study. BMJ 2005;331:551.
(3) Tricoci P, Allen JM, et al. Scientific evidence underlying the
ACC/AHA clinical practice guidelines. JAMA 2009;301:831.
(4) Heath I. Dare to use your own intelligence. BMJ 2008;337:434.
Competing interests: No competing interests
The editorial by Loder and Godlee highlights the dangers of
selective evidence. However, there is a deeper problem lurking beneath
these troubled waters. Suppressed counter-evidence within trials is only
part of the story. Trials aiming to demonstrate benefits cannot hope to
capture all possible counter-evidence.
It is time to let the scales fall from our eyes and recognise the
inherent weakness in the current approach to evidence based medicine. The
predominant aim in randomised controlled trials is to produce positive
evidence to support a theory. It is always selective. It can never be
complete. Publication bias is virtually inevitable. The publication of
counter-evidence is an unwanted side show. Under the current orthodoxy of
randomised controlled trials, we cannot simply blame commercial pressures
on drugs companies for the emphasis on presenting positive evidence. A
more robust exploration of the limits of the benefits and the extent of
the harms will only result if the emphasis shifts to actively exploring
the realm of applicability of proposed treatment regimens. Until we
know more about when a drug should not be used, we should not fool
ourselves into thinking that the drug has been thoroughly tested.
Loder and Godlee rightly question the hierarchy of evidence: 'Meta-
analyses are generally considered the best form of evidence, but is that a
plausible world view any longer when so many of them are likely to be
missing relevant information?' There is no guaranteed way to do good
science. However, attempting to build a fort of positive evidence to
confirm the benefits of a drug is a folly. Attempting to confirm a theory
while brushing counter-evidence under the carpet has never been a recipe
for good science. We need to free ourselves from the prescriptive
constraints of evidence based medicine and develop more imaginative ways
to test medical interventions.
Loder and Godlee propose that 'Efforts are needed to restore trust in
existing evidence.' However, it would be easy to focus on the issues of
honesty and integrity and ignore the vital issue that it is never possible
to rely on studies whose only goal is to produce positive evidence.
Ultimately, this is about a more general trust. Trust in the medical
profession and trust in the research process guiding medical intervention.
It is time to demonstrate a shared commitment to integrity and the search
for truth if trust in the medical profession is not to continue its
decline.
1. Loder E, Godlee F. Missing clinical trial data: setting the record
straight. BMJ 2010;341:c5641.
2. Hughes I. Serious Nonsense. Book in progress, to be submitted for
publication early in 2011.
Competing interests: No competing interests
Correctly setting the record straight - the importance of not distorting the results of previous research
Eyding et al provide additional evidence that reboxetine is less effective than placebo (which we did not investigate) and show once again that including unpublished data in meta-analyses often attenuates treatment effects. We know that this problem exists - the issue is how independent researchers can access these data.
We fully agree about the importance of accessing unpublished data and have argued for this on many occasions. However, it is important not to distort the results of previous studies. This error is perhaps one of the hazards of not peer reviewing editorials aimed at making particular points.
Competing interests: Authors of systematic reviews of antidepressants