Clinical trials: what a waste
BMJ 2014; 349 doi: https://doi.org/10.1136/bmj.g7089 (Published 10 December 2014) Cite this as: BMJ 2014;349:g7089
Rapid responses are electronic comments to the editor. They enable our users to debate issues raised in articles published on bmj.com. A rapid response is first posted online. A proportion of responses will, after editing, be published online and in the print journal as letters, which are indexed in PubMed. Rapid responses are not indexed in PubMed and they are not journal articles.
Journal Editors Must Play Their Part
Ioannidis’ recent BMJ editorial offers a stark commentary on the parlous state of clinical trial research and touches on a few potential solutions, including clinical trial registration. Clinical trial registration is pivotal to understanding the research landscape and to ensuring accountability in research. But what do we do when protocols are in place to ensure public access to information, yet the chain of responsibility fails us?
Prospective clinical trial registration can limit the risk of non-publication of completed trials and selective outcome reporting. It is therefore mandated by the International Committee of Medical Journal Editors (ICMJE) Uniform Requirements for Manuscripts as a pre-condition for publication (Laine et al., 2007).
For prospective registration to work, trialists need to take responsibility for registering trials prior to inception, maintaining their registry entries as changes to the trial protocol are made, and ensuring that the contacts listed in their registry applications are available and responsive. However, the chain of accountability cannot end with the trialist; journal editors must play their part by ensuring compliance with the ICMJE registration mandate.
Increasingly, trial registries allow retrospective registration of trials. This shift in practice means that reporting a registry identification number no longer provides journals with sufficient proof that a trial was registered in accordance with the ICMJE mandate (i.e., prospectively). As such, the onus to confirm that registration timing aligns with the mandate falls on journal editors.
A pilot study conducted at the South African Cochrane Centre, which hosts the Pan African Clinical Trials Registry (www.pactr.org), examined 68 RCTs published between October and December 2011 in the New England Journal of Medicine (30), Journal of the American Medical Association (8), PLoS Medicine (5), Annals of Internal Medicine (6), and The Lancet (19). For three RCTs no trial registry information was available; of the remaining 65 trials, 29 (45%) were registered prospectively. Of the 36 retrospectively registered RCTs, 21 were registered after 1 July 2005 (the ICMJE amnesty deadline for prospective registration) and were therefore published in conflict with the ICMJE mandate. None of the journals reported that these trials were registered retrospectively. In addition, 14 of 65 (21.6%) RCTs had a discrepancy in reported primary outcomes between the published paper and the registry. Of those, seven trials were retrospectively registered, three of which started after the 2005 cut-off for mandatory registration.
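For readers who want to verify the arithmetic, the counts reported above are internally consistent; this is a minimal cross-check using only the figures stated in this response (the journal breakdown and the registration counts), with nothing else assumed:

```python
# Cross-check of the pilot study counts reported above.
# All numbers are taken directly from this response.

per_journal = {
    "New England Journal of Medicine": 30,
    "Journal of the American Medical Association": 8,
    "PLoS Medicine": 5,
    "Annals of Internal Medicine": 6,
    "The Lancet": 19,
}

total = sum(per_journal.values())       # 68 RCTs examined
registered = total - 3                  # 3 had no registry information -> 65
prospective = 29
retrospective = registered - prospective

print(total)                                  # 68
print(retrospective)                          # 36
print(round(100 * prospective / registered))  # 45 (% prospectively registered)
print(round(100 * 14 / registered, 1))        # 21.5 (% with outcome discrepancies)
```

The only figure that does not reproduce exactly is the discrepancy rate: 14/65 is 21.5% to one decimal place, against the 21.6% quoted in the text, presumably a rounding slip.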
What are the implications of these findings? Ioannidis reminds us that publication of clinical trials offers journals “opportunities to accrue citations, influence, and reprint orders.” Under these circumstances it is easy to see how commercial interests could take precedence over the ethical responsibility to ensure openness and transparency in science. However, it is equally plausible that journal editors may simply not be aware that registries provide trialists with registration numbers retrospectively. Registries that are members of the World Health Organization’s Network of Registries are required to flag trials that are retrospectively registered; this is not the case with other registries. The onus therefore rests with journal editorial teams to check the registration timing against the trial’s registry profile. If international research governance protocols are to succeed, the highest standards of responsibility and accountability are required from all parties involved.
References:
Laine C, De Angelis C, Delamothe T, et al. Clinical trial registration: looking back and moving ahead. Ann Intern Med 2007;147:275-7.
Competing interests: Amber Abrams works for the South African Cochrane Centre which manages the Pan African Clinical Trials Registry (www.pactr.org). Amber was the project manager for the PACTR project from 2009 - 2012.
It is with great interest that I read Professor Ioannidis’ article entitled “Clinical trials: what a waste” (1) and indeed all of the follow-up responses, in which some very interesting points were raised.
As a core surgical trainee in the final year of my training programme, I find the need to get involved in randomised controlled trials, as an integral part of training and development, ever more apparent. In my current plastic surgical post I can say without doubt that this speciality is taking very seriously the call to make the evidence by which it practises much more robust.
Clinical trials are the foundation of evidence based medicine and therefore need to remain the future of all surgical specialities; whether they look at innovative ideas or reassess old techniques is really irrelevant, as long as they are conducted in the appropriate manner.
The NHS pledged £1 billion per annum until 2016 to support multi-centre research trials, and as part of this the British Association of Plastic, Reconstructive and Aesthetic Surgeons (BAPRAS) and the British Society for Surgery of the Hand (BSSH) have collaborated to develop the Reconstructive Surgery Trials Network (RSTN) (2). The RSTN is the UK clinical trials and collaborative research network for plastic surgery and hand surgery, developed with the aim of fostering a culture of clinical trials, multicentre national collaboration and involvement of clinicians at all levels (it is open to everyone from medical students to consultants to pitch and pursue trial ideas). The aim of the RSTN is then to help these clinicians get their projects off the ground and running, with an aligned vision among the researchers and the related surgical community, so as to avoid as many as possible of the problems with RCTs mentioned in the original article, and specifically the problem of trials not coming to fruition.
Interestingly, many of the trials commenced by the RSTN are looking at old practices to determine whether what we do (which surgeons have in many cases been doing for many years) actually has any good evidence behind it. For example, the NINJA trial is looking at the benefits of replacing, or not replacing, the nail plate in nailbed repairs (3). Therefore, in response to Dr Bennett (4), I think that plastic surgery is just one of many specialities, and I am sure there are many more out there, taking responsibility for its practices to ensure it continues to provide the best possible care for patients, with robust, well researched evidence to back it up.
I completely agree with Mr McCulloch (5), who urged the surgical community to do more to nurture, support and develop the preliminary studies needed for RCTs to do better; the RSTN is just one way in which surgical specialities are doing this.
With this highlighted, however, I do think that we should also be urging UK medical schools to build the ideas of RCTs, and how to design and execute one, into the undergraduate curriculum. I received a fantastic education from both of my medical schools (St Andrews and Manchester); however, I cannot help but feel now that the practical aspects of developing an RCT, assessing which ones are good and which are needed, and appraising the evidence they provide were somewhat left out. Perhaps encompassing this early in the curriculum would mean a better understanding and application later in a doctor’s career, which should in theory help prevent the things Professor Ioannidis (1) quite rightly stated exist: the excess of unfinished, unused, unpublished, even irrelevant research. It would also give those embarking on setting up an RCT a better idea of what it is actually like, with all the many difficulties trials entail, thus making them better equipped to do so. With how things are now moving forward in clinical medicine and surgery, surely understanding trials, and how they feed and form the basis of the medicine we practise, is just as important as the medicine itself, and should be treated by the undergraduate curriculum as such?
Whether medical schools start to include this in their basic curriculum or not, and whether it would help or not, can be debated. Although well designed surgical RCTs, with their wide cross-speciality variation, perhaps have a long way to go before they can be considered comparable to those in the medical specialities, we are moving forward in the right direction, with good pace and enthusiasm.
1) Ioannidis JPA. Clinical trials: what a waste. BMJ. 2014;349:g7089
2) http://reconstructivesurgerytrials.net/
3) http://reconstructivesurgerytrials.net/clinical-trials/ninja/
4) Bennett DB. Re: Clinical trials: what a waste. http://www.bmj.com/content/349/bmj.g7089/rr-3
5) McCulloch PG. Re: Clinical trials: what a waste. http://www.bmj.com/content/349/bmj.g7089/rr-5
Competing interests: No competing interests
John Ioannidis has written the single most cited paper in the history of PLOS Medicine(1). His research on the validity of published work in medical science has attracted world-wide acclaim. He is an internationally renowned authority on trials methodology, and is in charge of a huge programme of innovative methodological research. From our brief experiences of meeting and interacting with him he is also witty, amusing, down-to-earth, unassuming and generous in helping other workers. In short, John is a superstar in his field. But that doesn’t make him infallible; and his BMJ editorial on waste in surgical trials was, we humbly submit, wrong.
We don’t mean that it was factually wrong – of course it was entirely correct. We don’t disagree with the recommendations John makes or the vision he sets out; on the contrary, we endorse these wholeheartedly. What we think was seriously mistaken was to write an entire editorial on the very clear failings of surgical RCTs without once posing the obvious question: why? We can both remember the last time that surgery was publicly taken to task on this issue, by the Lancet in 1996(2), and it is to say the least disappointing that, after nearly 20 years, the debate seems to have advanced so little. The underlying message of the editorial still seems to be a schoolmasterly “must try harder”. But why have surgeons apparently been messing around at the back of the classroom for so long? As a social group within medicine they are well characterised – other doctors will readily give you a view on what surgeons are like – and the characteristics that view will not include are sloth, indolence, inefficiency and lack of innovation. “Must try harder” is in fact the instinctive response of most surgeons to any challenge or obstacle.
It is time it was recognised that bringing an RCT of a surgical treatment to a successful conclusion is substantially more difficult than conducting one of a drug or vaccine. The reasons for this have been explored and described in detail in the papers of the Balliol Collaboration(3). Surgical innovations naturally go through an early “tinkering” stage, when the intervention is not yet stable, followed by an exploratory stage, when learning curves are still being climbed and the definition of and indications for the procedure remain controversial. RCTs begun before these stages are complete are inevitably at high risk of failure. The IDEAL Collaboration has consequently emphasised the need to develop an integrated evaluation pathway, encompassing at least two types of preliminary study, in order to launch surgical RCTs with a better chance of avoiding the high risk of failure reported by Chapman et al. The UK's MRC surgical Hub has done much useful work in the same area, and has also emphasised the importance of preliminary studies. Much of John’s editorial calls for improvements to the environment for clinical research which we would all endorse, but which are equally relevant to all types of trial. None of his analysis pays attention to the specific difficulties of surgical studies, yet this is surely where the explanations lie for the comparatively poor performance of surgical trials reported in Chapman’s paper. In order to avoid the fate of Bill Murray’s character in “Groundhog Day”, we urge the surgical community (and those who fund and publish its research) to do more to nurture, support and develop the preliminary studies which are needed for surgical RCTs to do better.
1. Ioannidis JPA. Why most published research findings are false. PLoS Med 2005;2:e124. doi:10.1371/journal.pmed.0020124.
2. Horton R. Surgical research or comic opera: questions, but few answers. Lancet 1996;347(9007):984-5.
3. McCulloch P, Altman DG, Campbell WB, Flum DR, Glasziou P, Marshall JC, Nicholl J. No surgical innovation without evaluation: the IDEAL recommendations. Lancet 2009;374(9695):1105-12.
Competing interests: Peter McCulloch is Chair of the IDEAL Collaboration, which works to improve methodology in trials of surgery and interventional therapies.
I read the editorial ‘Clinical Trials: what a waste’ by J Ioannidis with interest.
He has discussed reasons for clinical trials being aborted at various stages of gestation. I think the biggest waste stems from desperate, repeated attempts with the same methods to prove something that has not previously been convincingly demonstrated to be true, for example vitamin D supplementation trials.
It is fashionable to carry out vitamin D supplementation trials: if one dose does not work, higher or lower doses, with various preparations and frequencies, with or without calcium, are tried. Vitamin D is presumed to be a panacea for all known adult chronic medical conditions, even though multiple meta-analyses have indicated otherwise. That is the reason the U.S. Preventive Services Task Force has stated that the current evidence is insufficient to assess the balance of benefits and harms of screening for vitamin D deficiency in asymptomatic adults (I statement) (1). Vitamin D and/or calcium supplementation has also shown no overall effect on cardiovascular disease, cancer, and mortality (2).
The danger of such a practice is that you are not only wasting money and time but may end up harming the patient: vitamin D is used to induce calcification of the vessel wall in mouse experiments. Yet more and more cash is still being thrown at vitamin D supplementation trials because of strong personalities, vested interests and a deep belief that one day it will be proven true. If instead studies on vitamin D were conducted from a fresh perspective, they might not only benefit patients but also save a great deal of money (3).
This principle could be applicable to various other useless but trendy clinical trials.
Reference List
(1) LeFevre ML. Screening for Vitamin D Deficiency in Adults: U.S. Preventive Services Task Force Recommendation Statement. Ann Intern Med 2014.
(2) Fortmann SP, Burda BU, Senger CA, Lin JS, Beil TL, O'Connor E et al. 2013.
(3) Kain K. Circulation. In press 2015.
Competing interests: No competing interests
Ioannidis is correct in asserting that many trials, particularly surgical ones, represent “wasted effort” due to clinically unimportant outcomes, and also that trials should be optimally designed and properly randomised.
Randomisation and blinding in surgical trials are possible and can be extremely powerful. For example, Moseley et al used a double blinded, placebo controlled study design to show there was no difference in pain or function between those who underwent knee arthroscopy and those who received placebo surgery, despite arthroscopy being commonly used to treat knee pain at the time (1). This study prompted the Department of Veterans Affairs to recommend immediately that knee arthroscopy should not be performed for knee pain in the absence of anatomic or mechanical abnormalities (2).
In another example, although minimally invasive hip replacement surgery had been widely reported as beneficial in numerous cohort studies, we showed in a double blinded randomised controlled study that there was no improvement in early postoperative outcomes (3), walking ability (4) or ability to mobilise (5), which were the main claims for this new technique.
Perhaps a good starting point would be to examine existing surgical procedures which lack adequate evidence of effectiveness rather than focus on studies of novel devices which currently seem to dominate the literature.
Yours
Dr Damien Bennett
Specialty Registrar in Public Health Medicine
(1) Moseley JB, O'Malley K, Petersen NJ, Menke TJ, Brody BA, Kuykendall DH, et al. A controlled trial of arthroscopic surgery for osteoarthritis of the knee. N Engl J Med 2002 Jul 11;347(2):81-88.
(2) Gordis L. Epidemiology. 4th ed. Philadelphia, Pa. ; London: Saunders; 2009.
(3) Ogonda L, Wilson R, Archbold P, Lawlor M, Humphreys P, O'Brien S, et al. A minimal-incision technique in total hip arthroplasty does not improve early postoperative outcomes. A prospective, randomized, controlled trial. J Bone Joint Surg Am 2005 Apr;87(4):701-710.
(4) Bennett D, Ogonda L, Elliott D, Humphreys L, Lawlor M, Beverland D. Comparison of immediate postoperative walking ability in patients receiving minimally invasive and standard-incision hip arthroplasty a prospective blinded study. J Arthroplasty 2007 Jun;22(4):490-495.
(5) Lawlor M, Humphreys P, Morrow E, Ogonda L, Bennett D, Elliott D, et al. Comparison of early postoperative functional levels following total hip replacement using minimally invasive versus standard incisions. A prospective randomized blinded trial. Clin Rehabil 2005 Aug;19(5):465-474.
Competing interests: No competing interests
Randomised controlled trials are without a doubt one of the best tools to improve health. But their value is now in question, since many have become a useless accessory rather than a necessity. Most trials are conducted as a formality rather than as purposeful research. There is a dire need for a system to control the quality of trials as well as their quantity.
Competing interests: No competing interests
What a waste[1]. What a waste of public money, a waste of patients’ time, and a waste to those involved at the very bottom of the research ladder, the interviewers, the questionnaire writers, the data collectors.
Medical students and junior doctors, at the beginning of their careers, look to stand out and enthusiastically join in the activities of their seniors. The promise of publications and presentations is like pound signs in their eyes as they try to separate themselves from the homogeneous group. These publications mean points, and points mean prizes, namely extra marks on application to the Foundation Programme or for further training.
Chapman[2] found that 21% of surgical trials ended early, and that of those that did not end early, 34% remained unpublished. How many hours of work will have gone into these projects, not only by clinical researchers but also by junior members of the team?
How much should a medical student or junior doctor be involved in research? With highly intensive courses and syllabuses, it takes hard work and determination to become involved in these projects. With over 40% of randomised controlled trials described as a waste, it is very disheartening not to see the fruit of one’s labours. Through no fault of their own, seniors may feel they are offering opportunities to those who ask; however, these projects can be underpowered, lack a true null hypothesis and, as a consequence, be unpublishable.
What should medical students and junior doctors do if they are interested in research or simply wish to improve their CVs? Should intercalated degrees be the only such activity before registration? There, at least, there is supervision, a realistic project and an achievable timescale. We need mentors to support us in our academic aspirations, we need to be advised honestly about projects, and we need to be taught so that in future we do not contribute to the waste.
1 Ioannidis JPA. Clinical trials: what a waste. BMJ 2014;349:g7089. doi:10.1136/bmj.g7089
2 Chapman SJ, Shelton B, Mahmood H, et al. Discontinuation and non-publication of surgical randomised controlled trials: observational study. BMJ 2014;349:g6870. doi:10.1136/bmj.g6870
Competing interests: No competing interests
Ioannidis captures nicely the potential: "Eventually, randomized controlled trials could be the pride of clinical investigators who collaborate in research that matters, and the best source of information on how to improve health."
I am sad that towards the end of a career in general internal medicine, I have not had the opportunity to enrol patients in practical and intellectually honest trials relating to important clinical questions. When the odd patient was enrolled with my "permission", if not endorsement, the trials were obviously intended for marketing purposes, rather than to learn the best treatment. The more I learned about most trials of drug therapy over the last 20 years, the less enthusiastic I became about encouraging anyone to participate, and the more sceptical that I could believe the conclusions or implement the recommendations.
Imagine what achieving genuinely open access to anonymized data, trial protocols, statistical analysis plans, and the characterization and adjudication of clinically meaningful outcomes would achieve. I am probably not the only physician who would more comfortably encourage patient participation in RCTs. I can even imagine granting informed consent myself!
Competing interests: No competing interests
Professor Ioannidis complains justifiably of trials that are unregistered, unfinished, unpublished, unreachable, or simply irrelevant. How does this deplorable situation come about, and how can it be remedied?
The first hurdle will be the state or charitable research funding agency, which will vouch for the validity of the question being asked, and for the feasibility of the materials available and the proposed method to answer it. Despite this ostensibly rigorous scrutiny, a number of studies have disappeared into the sands of time because it was reasonably foreseeable that the requisite number of subjects would fail to materialise, so that even the most compliant statistician would fail to spin gold from the limited data.
The next hurdle will be the research ethics committee, which will examine the research protocol to determine its acceptability from its viewpoint. For a protocol to receive approval on ethical grounds requires more than tinkering with the wording of subject information sheets and consent forms; it requires that scientific rigour be applied to the protocol. In practice, the research ethics committee’s reliance on the opinion of the funder’s advisors about the scientific merits of the proposed study may be misplaced. (How bad studies can be accepted by committees of the elect, while good ones are rejected, is another story.)
Research funders may request that the results of a study be published, and research ethics committees may make their approval conditional on publication, but they may lack the tenacity to enforce it. Times are hard and research finances are constrained, but if research were required to be underwritten by the employing institution, and funds wasted on unfinished or unpublished studies were required to be reimbursed, this might concentrate the mind. But only if funding agencies and research ethics committees were so minded.
Competing interests: No competing interests
Re: Clinical trials: what a waste
The editorial addresses issues which are well known and well recognised as deserving urgent attention. In singling out surgical trials, the author does seem to be 'biased'! The solutions the author proposes have also been spelled out many times previously; there is nothing 'new' in them. This is similar to what most of the trials we participate in do: they validate previous trials. Some of them just end up attempting validation without ever reaching trial completion. The reasons vary, from attrition in the original team to lack of continued funding, or sometimes just a change in institutional management. So to characterise 'failed' trials as a waste of resources is a bit cruel: no researcher starts a study with the intention of failing or not completing it.
The increased number of trials is just a reflection of changing times. In the past few decades there has been an explosion in the number of medical researchers. Modern times have given the physician a luxury - that of being able to concentrate on non-clinical pursuits - because of the increased number of doctors, better infrastructure and more funding opportunities. No longer do we have only islands of research, located in capitals and populated by a select few: research has actually been democratised. Today a resident can dream of writing a research proposal and also hope for funding. Was this possible a few decades ago?
The invention of the movable-type press was supposed to drown us in smut and literature of such poor quality that we would be left just pandering to our baser instincts. But the same tool also enabled a much wider distribution of all knowledge. And so it is with the RCT - a tool; ultimately it is up to the artist to use the tool as he wishes. In the millions of trials that have been and are being done, there will be ones which change paradigms, but we will never be able to select those trials beforehand; their utility will only be recognised in hindsight. The issue of allocation of resources is as old as mankind; let us not make it the only issue of concern. For every unsuccessful, unpublished trial there are students who learn new skills through participation, and researchers at a personal level learn how not to do things. As Arthur C Clarke wrote, the toolmakers had been remade by their own tools.
So let us not abandon our pursuit of truth - for this is what all RCTs are - and if on the way we discover a few new things, or find a few old lies being presented as the truth, then so be it.
Competing interests: No competing interests