Editorials

Assuring research integrity in the wake of Wakefield

BMJ 2011; 342 doi: http://dx.doi.org/10.1136/bmj.d2 (Published 18 January 2011) Cite this as: BMJ 2011;342:d2
Douglas J Opel, acting assistant professor¹, Douglas S Diekema, professor¹, Edgar K Marcuse, professor²

¹Treuman Katz Center for Pediatric Bioethics, Seattle Children’s Research Institute, Seattle, WA 98101, USA
²Seattle Children’s Hospital, Seattle, WA 98105, USA

Correspondence to: djopel@u.washington.edu

Not just a bad apple, but a defective barrel?

In a grove of trees in the grounds of the National Academy of Sciences in Washington, DC, is a statue in memory of Albert Einstein. On it are engraved three of his sayings. One reads: “The right to search for truth implies also a duty; one must not conceal any part of what one has recognised to be true.”

Science is our best way of knowing. When work presented as science is shown to be corrupt, it not only discredits that work and its authors, but it also discredits science. The series of linked articles by Brian Deer illustrates many of the ways that science can be corrupted.1 2 3 Above all, Deer shows that the conventional biomedical research mechanisms intended to assure research integrity completely failed.

Unfortunately, we have been here before. Investigators involved with the 1932 US Public Health Service Tuskegee Syphilis Study deceitfully enrolled subjects with latent syphilis and denied them available treatment for 40 years in order to study the natural course of the disease.4 As part of a 1963 study to determine the body’s ability to reject foreign cells, patients at the Brooklyn Jewish Chronic Disease Hospital were injected with live cancer cells without their knowledge and without oversight from the institution’s research committee.5 From 1944 to 1974, the US government conducted several radiation experiments, some of which involved the use of non-therapeutic radioactive tracers in children and increased their risk of developing cancer.6 And in 1981, John Darsee, a clinical investigator at Harvard Medical School, was found to have fabricated data in several experiments published in high profile medical journals, a discovery that culminated in widespread retractions of his work and a 10 year ban from National Institutes of Health funding.7 These experiments have since become symbolic of unethical research on human subjects and of scientific misconduct, and there is little doubt that Andrew Wakefield’s 1998 study will become so too.8

How could this happen again? To answer this, perhaps we need to focus less on the people involved and more on the defects within the biomedical research enterprise that permit such egregious misconduct. After all, Wakefield was able to circumvent the existing safeguards established to ensure the responsible conduct of research, the protection of research subjects, and the accurate and honest publication of research findings.

To begin, we need to frame research incidents like Wakefield’s as adverse events, akin to clinical adverse events. Doing so would expose them to the same level of scrutiny that we currently apply to clinical adverse events. The goal would also be the same: prevention of future occurrences by learning from our failures. Prevention of clinical adverse events is one of the cornerstones of healthcare quality improvement and patient safety. Prevention of research adverse events should be no less important for the protection of human subjects, future patients who might receive the wrong treatment as a result of the adverse event, and research integrity.

Investigations into clinical adverse events are focused more on systems of care than on individuals (so-called bad apples) for several reasons. Firstly, most adverse events result from flaws in systems of care rather than incompetent or malevolent individuals.9 Secondly, the bad apple framework connotes punishment and can hinder the disclosure of—and ability to learn from—errors.10 Thirdly, focusing on individuals’ misconduct is likely to yield simplistic answers and premature closure. Lastly, and perhaps most importantly, without fundamentally changing the way work is done, other similarly trained and motivated personnel are prone to repeat the same errors.

Marcia Angell wrote in 1992 that “all those involved in the research enterprise at each step of the process—investigators, IRBs [institutional review boards], funding agencies, reviewers, and editors—have an obligation to evaluate the ethical content of a work just as they evaluate the scientific content.”11

Deer’s articles reveal the urgent need to understand why there was a failure of multiple systems within the research enterprise. Why weren’t Wakefield’s conflicts of interest recognised and exposed sooner? Why didn’t Wakefield’s co-investigators recognise or bring attention to the study’s methodological flaws? Why wasn’t Wakefield’s research misconduct and non-compliance with ethics approval recognised by the Royal Free or its ethics committee? Why wasn’t there a full, independent investigation by the Lancet or the Royal Free when the veracity and quality of Wakefield’s study were initially questioned? These are the questions that we need to pursue if we are to fix a system that failed to protect human subjects and the public from the consequences of fraudulent science.

Deer’s articles also highlight the existence of a culture and informal customs within the research enterprise that, unless changed, will impede needed improvements. “Culture always trumps strategy” (M Bard, personal communication, 2010). Even the most elaborate strategies, procedures, and interventions designed to prevent future research adverse events will be unsuccessful unless problematic aspects of culture and unwritten customs are explored, understood, and tackled.

Let’s start now. We must transcend traditional hierarchies and authority gradients to empower everyone in the research enterprise—especially those on the front lines, such as research assistants, data analysts, and project managers—to raise questions and “stop the line.”12 We must train our research leaders—such as department chairs and medical school deans—to manage such inquiries. We must not allow it to be “customary” for journal editors “to discuss and take the word of those against whom the allegations are made.”3 Lastly, when allegations of research misconduct or unethical research are brought to the attention of research leadership, these leaders must recognise that they often have a conflict of interest in managing these allegations. As occurred in the Darsee case, institutions may have an overwhelming drive to keep things internal rather than utilise an independent mechanism—such as an audit by a panel of scientists unaffiliated with the institution—to search for the truth. And as in the Wakefield case, journal editors may find it hard to put aside their own investment in a piece of research that they have decided to publish and defended against post-publication criticism. That it fell to a journalist to expose the extent of the misconduct in Wakefield’s research is telling.

Thirteen years later, we are only now beginning to understand the root causes of the multiple system failures involved in the Wakefield incident. We must strengthen our ability to investigate research adverse events. We need to use the tools and techniques available to protect the safety of patients in the clinical realm to protect research subjects. We also need to rethink and reform our customs and culture. The disastrous impact that Wakefield’s study has had on vaccine coverage, recrudescence of disease, public trust, and, most of all, science requires that we do so with urgency.

Footnotes

  • doi:10.1136/bmj.c7001
  • Competing interests: All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

  • Provenance and peer review: Commissioned; not externally peer reviewed.

References