Why did the Lancet take so long?
(Published 02 February 2010)
Cite this as: BMJ 2010;340:c644
- Trisha Greenhalgh, professor of primary health care, University College London
On 28 February 1998 the Lancet published a study by Andrew Wakefield and colleagues with the inauspicious title “Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children.”1 The paper has been much criticised, and the Lancet finally retracted it this week. But why did it all take so long?
The story is well known. Wakefield’s paper implied an association, later shown to be spurious, between gastrointestinal illness, the combined measles, mumps, and rubella (MMR) vaccine, and an autism-like disorder in a sample of 12 children. At a controversial press conference Wakefield appeared to conflate association with causation, and in the eyes of the tabloid press his tiny, skewed sample represented children in general. The immunisation record of then prime minister Tony Blair’s infant son became the most politically sensitive item of data held in the NHS. Private clinics enjoyed a brief boost to business by offering the three vaccines as separate, spaced injections as recommended by Wakefield. Measles returned—and did considerable damage.
On 18 February 2004 the investigative journalist Brian Deer complained to the Lancet that, far from being “consecutive” referrals to the gastroenterology clinic as claimed in the paper, several children in the sample had been referred to Wakefield by a medical negligence lawyer seeking grounds for legal action on behalf of parents of allegedly vaccine damaged children. Deer claimed that Wakefield had not obtained ethics committee approval for invasive tests conducted on the children (including lumbar puncture and colonoscopy) and that the paper had been submitted under cover of ethics approval for a different study.
On 6 March 2004 the Lancet rejected Deer’s allegations relating to ethics approval and stated that “children reported in the 1998 Lancet paper were consecutively referred to the Royal Free [Hospital] and were not deliberately sought by the authors for inclusion in their study based on parents’ beliefs about an association between their child’s illness and the MMR vaccine.”2 However, it agreed that funding from the Legal Aid Board for what it considered to be “parallel and related work” should have been declared as a conflict of interest (defined as: “Is there anything . . . that would embarrass you if it were to emerge after publication and you had not declared it?”).
In the same issue 10 of the paper’s 13 authors published a “retraction of an interpretation.”3 Despite this, two of those 10—Simon Murch and John Walker-Smith—joined Wakefield in the dock in 2007 for what has become the longest hearing by a fitness to practise panel in the history of the General Medical Council. In a judgment published last week the GMC declared that invasive investigations on children in the “Lancet 12” group were undertaken without proper ethics committee approval and without due regard to their clinical needs (www.gmc-uk.org/static/documents/content/Wakefield__Smith_Murch.pdf). Wakefield’s presentation of the referrals as consecutive and routine was deemed “dishonest” and “irresponsible” and was found to have “resulted in a misleading description of the patient population in the Lancet paper” (paragraph 32b, page 44).
An academic journal is not a collection of blank pages on to which authors inscribe important scientific facts as they discover them. Rather, science is made and shaped as authors consider the declared areas of interest, impact factors, and instructions for authors of candidate journals for their work and as the papers they submit clear the successive hurdles of eligibility screening, selection of peer reviewers, responding to reviewers’ comments, statistical approval, technical editing, and distribution of press releases. My own collection of rejection slips from journals with a high impact factor represents research that could have become important scientific facts but that turned out to be findings of marginal significance in publications to which neither politicians nor journalists subscribe.
A graph showing first a precipitous fall in immunisation rates in the United Kingdom and then a corresponding rise in the incidence of measles was later reproduced in the broadsheets (and in at least one GCSE biology syllabus) as an iconic symbol of bad science.
Leaving aside for a moment the questions of research ethics and fraudulent sampling claims raised by the GMC, there is an alternative interpretation of the same graph: that the acceptance for publication of some very preliminary laboratory findings by one of the world’s leading medical journals was, at the time that editorial decision was made, more a symptom than a cause of declining professional and public confidence in the MMR vaccine. But once the article appeared with the Lancet kitemark—cautious accompanying editorial notwithstanding4—the arguments were considered by many to be proved, and the ghastly social drama of the demon vaccine took on a life of its own.
In an ironic admission to the GMC panel (paragraph 30a, page 43) Wakefield disputed that the piece he submitted to the Lancet should be referred to as a “scientific” paper. And this week the Lancet stated that it had “become clear that several elements of the 1998 paper . . . are incorrect, contrary to the findings of an earlier investigation (Lancet 2004;363:824),” adding, “We fully retract this paper from the published record.” Although the retraction seems overdue, it can only be a good thing for science.