Investigating allegations of scientific misconduct

BMJ 2005;331 doi: http://dx.doi.org/10.1136/bmj.331.7511.245 (Published 28 July 2005). Cite this as: BMJ 2005;331:245
In this issue we take the unusual step of publishing an “expression of concern” (p 266)1 about a paper the BMJ published in 1992,2 together with an account of our attempts to resolve the suspicions about this and other papers written by the author, Dr Ram B Singh of Moradabad, India (p 281).3 The BMJ's expression of concern coincides with a similar expression about another of Singh's papers in this week's Lancet.
As White describes in her article,3 doubts about the validity of the data in Singh's 1992 paper arose soon after we had published it—when Singh sent us a succession of other studies. The reviewers of the subsequent papers alerted us to discrepancies in the data, and to doubts about Singh's work that were already well known among researchers into diet and coronary heart disease.
What should journal editors do when confronted with such doubts? In the past, we would simply have rejected the paper. But in the wake of prominent cases of scientific misconduct in the United States in the 1970s and 1980s,4 journal editors began to recognise that they had an obligation not to ignore such doubts, an obligation now set down in the Committee on Publication Ethics code of conduct for editors.5 In practice there's a limit to what journals can do, because they have neither the resources nor the authority to conduct investigations to resolve suspicions about data. Yet they are, as Smith points out,6 in the position of "privileged whistleblowers." Privileged because it is often their expert peer reviewers who first raise the suspicions about odd looking data in a research study; because they can ask authors for raw data and ask them to explain discrepancies (which may remove or strengthen the existing doubts); and because they can then ask a legitimate authority (such as an employer, university, or funding body) to investigate.
The problems arise when there is no authority or the authority doesn't see it as its task to investigate. In the case of Singh, over a decade ago, Richard Smith, then editor of the BMJ, tried to find an authority in India that would investigate and resolve the doubts over Singh's work, but no institution would take on the task. He also commissioned reports from subject and statistical experts on Singh's unpublished and published papers and an analysis of the raw data of one of the submitted studies—activities that took a long time and are beyond the resources of most journals.
In the end—and in the face of requests from other researchers that the journal should “do something”—the BMJ decided that the only course left to it was to publish an account of the suspicions and the failed attempts to have them resolved.3 We also publish this week the results of the analysis the BMJ commissioned on the raw baseline data from one of the papers submitted to the BMJ but not published by it (p 267).7 The authors of this analysis conclude, “Several statistical features of the data from the dietary trial are so strongly suggestive of data fabrication that no other explanation is likely.”
We think the questions raised about Singh's data are sufficient to cast doubt on the validity of the paper we published in 1992—hence our expression of concern1—and indeed on many other papers that he has published (see bibliography on bmj.com). And we think that other researchers and systematic reviewers need to know about these doubts. But the doubts are unresolved, and the situation therefore remains unsatisfactory—for researchers, for the journals that have published his articles, for Singh's coauthors, and, not least, for Singh himself.
Although the BMJ may have done more in this case than many journals with fewer resources, it has still taken us over 10 years to try to resolve the issue. This fact reinforces our belief that journals cannot resolve suspicions on their own. The cases in the US in the 1970s and 1980s led to the setting up of the Office of Research Integrity, specifically to support institutions in investigating allegations of research misconduct.8 Denmark, followed by other Scandinavian countries, also set up a national organisation to support institutions in investigating allegations of misconduct.9 Calls for a similar body in the United Kingdom10 11 are at last being answered: Universities UK, the Department of Health, and the NHS are working together on a framework to establish a panel for research integrity.
Nevertheless, even with willing institutions and national bodies the problems don't go away. Research is international, and bodies in one country may have no authority over researchers in another. Institutions find it difficult to act once a researcher no longer works for them, as happened in the case of R K Chandra, about whose work suspicions were aroused when he submitted a paper to the BMJ in 2000, and who went on to publish extensively elsewhere.12 As Smith explains on page 288, the Chandra story also illustrates the problem of what to do about a researcher's other papers. Investigating the "index case" of suspected misconduct is hard enough but is only the beginning. A finding that one research study is invalid raises questions about the rest of that author's work.6
In the Chandra case it was a journal, Nutrition, that decided to retract Chandra's article, on the basis of eight specific substantial doubts.13 It did this partly because Chandra's university was unable to investigate further when Chandra failed to provide raw data and then resigned.12 But doubts now remain about Chandra's other studies, and the fact that these have not been resolved has already caused problems for meta-analysts.14 These papers exist in scientific limbo.
The stories of Singh and Chandra are sorry tales, with no clear resolution. What more can journals do when their attempts to get someone else to investigate fail? Some researchers and editors argue that journals should keep collective confidential "black lists" of suspected papers and authors. But the sheer number of journals makes this unreliable; more seriously, it would imply that someone was guilty until proven innocent, with a worrying lack of due process. Others suggest that journals should ask authors to deposit a copy of their dataset in a secure archive so that data could be audited if questions arise. But that too demands an infrastructure that doesn't exist. Perhaps rather than waiting for definitive proof, journals should in future be more ready to share their concerns about published papers, using the mechanism we use today, the publication of an expression of concern, where they have reasonable grounds to believe that serious questions exist about a paper. The expression of concern does not resolve the suspicions, but it alerts researchers, and in particular systematic reviewers, to doubts about the studies. And it may in turn prompt an organisation with the capacity and standing to undertake the necessary investigations.