Managing research misconduct: is anyone getting it right?

BMJ 2011; 343 doi: https://doi.org/10.1136/bmj.d8212 (Published 29 December 2011)

Cite this as: BMJ 2011;343:d8212
The dark side of scientific research continues to claim high profile casualties. Donald Poldermans, a prolific cardiovascular researcher with over 500 papers to his name, was recently sacked by Erasmus Medical Centre in Rotterdam for conducting a study where he used fictitious data.1
It is difficult to know how common such cases are. A recent meta-analysis suggested that about 2% of scientists had “fabricated, falsified, or modified data or results at least once,” and 14% were aware of colleagues who had done so.2 The number of papers retracted from the literature has increased exponentially over the past few years, to over 400 in 2011,3 with many withdrawals attributed to misconduct.4 Malcolm Green, former vice principal of the faculty of medicine at Imperial College London, comments: “It is highly likely that for every case of fraud that is detected there are a dozen or more cases that go undetected. While these might represent lesser degrees of misconduct, they are still important to science and ultimately patient care.” The mantra of “publish or perish” echoes through many academic departments, with career advancement, tenure, and peer recognition all depending on scientific output. Such rigid metrics of success can put pressure on researchers to act in a less than scrupulous manner.
Whose responsibility is it?
Governments recognise the importance of scientific progress to their economies and international standing, and many countries have outlined ethical codes of conduct they expect researchers to adhere to.5 Similarly, several consensus bodies have written best practice guidelines. These include the European Science Foundation’s code of conduct for research integrity6 and the Singapore statement,7 generated by the 51 countries that attended the second world conference on research integrity in 2010.
Such concerted international efforts were necessary, according to Nick Steneck, director of the research ethics programme at the University of Michigan Institute for Clinical and Health Research and one of the authors of the Singapore statement. “It was more or less assumed that the absence of misconduct meant high standards for integrity in research. The code and statement set out a more specific set of standards for judging responsible behaviour and integrity.” He adds that numerous institutions and governments are asking to use the statement as a base for their own policies. “We established a foundation and are optimistic that others will now build a stronger structure for integrity in research.”
Despite these laudable attempts to promote integrity, high profile cases of outright fraud and deliberate dishonesty still occur. Peer review, the bedrock of the scholarly process, is good at picking up poor quality science, but peer reviewers do not start with the assumption that the data may be false. Fraudsters are often caught through eagle eyed readers or editors spotting impossibly good results8 or by whistleblowers from within the scientist’s institution. We therefore need mechanisms to detect, investigate, and respond to those who deviate from best conduct.
Such approaches vary widely across the world, and even within Europe, because of the differing political and cultural climates in which research is undertaken. They range from institutional self regulation and policing by funding agencies, through to national oversight bodies and committees with statutory powers.9 Though the UK Research Integrity Office has been much maligned for lacking funding or effective powers, there is at least an acknowledgment that such a body needs to exist.10 Many European countries lack any formal approaches to research integrity whatsoever.9 Spain is one, according to Xavier Bosch, associate professor at the University of Barcelona, who adds that “training in research integrity and responsible conduct of research are not issues of concern for the academic authorities here.” Indifference predominates: “The research institutions, science ministry, and health ministry don’t seem to have any interest,” says Dr Bosch, and although misdoings may occur, “nobody has any interest in publicising them.”
The American dream
The United States was the first nation to take an organised approach. Since 1992 its Office of Research Integrity (ORI) has covered the research funded by the Public Health Service, encompassing funding bodies such as the National Institutes of Health and the Centers for Disease Control and Prevention. Its role in dealing with scientific misconduct is limited to oversight and review of the research institutions’ investigations. If a culture of integrity does not permeate an institution there may be attempts to suppress instances of misconduct to avoid the scrutiny of the ORI. A 2008 survey by ORI researchers published in Nature suggested the true burden of misconduct occurring under the umbrella of Public Health Service funds was at least 10 times higher than is reported.11 “The US system is seen as very professional, which it is, but it is limited to government funded research, and most misconduct is not reported or even detected,” adds Professor Steneck.
The Scandinavian countries have also taken the oversight of research seriously and formed centralised oversight bodies.12 Norway’s scientific community initially wavered over the need for and scope of such an organisation. But when it emerged that oncologist Jon Sudbø had totally fabricated datasets published in the Lancet and New England Journal of Medicine13 there was a feeling that something had to be done, and fast. The legislative machinery kicked into gear and produced the 2007 law on ethics and integrity in research. It requires institutions to bear the primary burden of regulating research misconduct and heralded the birth of the National Commission for the Investigation of Research Misconduct (Granskingsutvalget). The commission oversees research in all fields, across both the public and the private sectors. According to its director, Torkild Vinther, institutions have a duty to inform the commission, although it may decide to investigate independently if the institution lacks appropriate mechanisms or there are severe conflicts of interest. He adds that it’s difficult to know how the commission is perceived by Norwegian academia but that “cooperation between the commission and universities is getting better and better.”
Germany has opted for a decentralised approach. Its major funding agency, the Deutsche Forschungsgemeinschaft (DFG), requires each university to have two structures in place in order to be eligible for funds: an independent “ombudsman” with whom instances of suspected research misconduct should be discussed and a committee for investigation of scientific misconduct. The DFG also has its own ombudsmen: a committee of three high ranking academics to whom any scientist can turn for advice (even if not funded by the DFG). The weakness of this dispersed regulatory system is that not all research takes place in universities, as highlighted when the case of the serial fraudster, anaesthetist Joachim Boldt, came to light. His hospital, Klinikum Ludwigshafen, a non-university hospital, is effectively ineligible for DFG funding because it has limited research infrastructure and no ombudsman or committee; there is thus little oversight of doctors doing clinical research.14 Indeed, it took a concerned editor several months to track down the person responsible for research ethics at the hospital when allegations first surfaced.8
A struggle with authority
The collaboration seen in Norway is in stark contrast to the apathy or animosity seen towards research oversight in many other countries. Despite being a transitional post-war country, Croatia has been a leader in promoting research integrity, even attempting to form an organisation like the ORI.15 In 2004, the National Committee for Ethics in Science and Higher Education was created as an independent body, its members elected by parliament. It would publish its opinion on alleged cases of misconduct but had no power to take measures. Nevertheless, the committee was unpopular: the first two chairs appointed to lead the committee resigned because, according to Matko Marušić, dean of the University of Split Medical School, “the cases were very delicate and sensitive.”
Things came to a head when Asim Kurjak, a professor of obstetrics at the University of Zagreb, was accused of plagiarism.16 The committee investigated and found numerous papers with concerns. “The medical school covered it up,” says Professor Marušić, leading to substantial political pressure to disband the committee or at least render it impotent. The current situation is very uncertain: “The committee is still there but it is encircled by powerful people who are scientists such that it is invalidated,” says Professor Marušić.
Poland picked up the baton dropped by Croatia when its parliament endorsed the creation of a new commission to investigate scientific misconduct. Institutions are still expected to perform investigations, but the commission can step in if it believes serious cases are not being adequately dealt with. Andrzej Górski, vice president of the Polish Academy of Sciences, adds that institutions and individuals usually apply for judgments and that penalties remain at the discretion of the institution’s director, but in all circumstances “the institution has to accept that our verdict is final.” There was a similar committee previously, but its opinions often went unheeded as they were not backed by any legal powers, according to Professor Górski. He describes the impetus for change: “There were a series of events that were very painful that were cited as examples of how powerless we were in addressing the spectacular cases of misconduct.”
Poland’s embryonic commission has only dealt with a few cases, but its success shows the potential of a centralised body. By contrast, a 2010 expert review panel outlined several deficiencies in Canada, notably a lack of a unified approach across all disciplines and the absence of a centralised entity for education and advice on research integrity issues.17 Some wanted more: several editors of CMAJ proposed an independent body to investigate alleged research misconduct, with the ability to “name and shame” wrongdoers, among other measures to increase transparency.18 They highlighted the institutions’ inherent conflict of interest when conducting investigations within their walls and stated it was “naive to think education alone will eliminate misconduct.” In December 2011 Canada’s three main federal funding agencies set up the Panel on Responsible Conduct of Research. The panel aims to review wider breaches in best practice,19 not simply cases of misconduct as narrowly defined elsewhere.
In 2014 the European Commission begins a six year plan to inject almost €25bn into science as part of the Horizon 2020 financial improvement programme, aiming to boost Europe’s international competitiveness. Dr Bosch is concerned that the scheme does not include research integrity measures. He lambasts the European code of conduct as having had “no impact” adding “there were no specific recommendations regarding misconduct, and no recommendations to the European Commission.” He proposes the formation of a new European agency for research integrity to oversee the Horizon 2020 projects, with powers to take over investigations of misconduct if institutions are unwilling or unable.20
Scientific misconduct continues to occur, and frustratingly we do not know the best way to detect or scrutinise it. “There is no one ‘right’ model,” according to Professor Steneck.
Professor Green points to the success of other sectors: “There are robust mechanisms for diagnosing, investigating, and sanctioning misconduct in the financial sector. While not perfect, at least a suitable framework is in place,” he says. “Medical research fraud is of the gravest importance as it wrongly informs clinical practice. The spurious results produced can cause direct patient harm, or indeed, death.”
Spotlight on the UK
As in Norway, a serious case of fraud drove the UK to take action. Malcolm Pearce, an obstetrician at St George’s Hospital, London, published a report in the British Journal of Obstetrics and Gynaecology describing the first successful birth after re-implantation of an ectopic pregnancy,21 to widespread international acclaim. Geoffrey Chamberlain, head of the department and, at the time, editor of the journal and president of the Royal College of Obstetricians and Gynaecologists, was the senior author. After concerns were raised by a whistleblower, it transpired that the case report and three other papers by Mr Pearce were fabricated. Mr Pearce was struck off by the General Medical Council, and Professor Chamberlain, who denied all knowledge of the fraud, resigned from his college and editorial positions.
In the aftermath of this scandal, members of the UK medical establishment gathered in Edinburgh to consider the problem of research misconduct. Their consensus statement called for all allegations of research misconduct to be investigated “firmly, fairly and expeditiously” and for the establishment of a national panel to assist this.22 But it was not until 2006 that the UK Research Integrity Office (UKRIO) was born, funded by the Department of Health and Research Councils UK among others, to provide guidance on and support for good research practice. Funding was withdrawn in 2009 because of the austere fiscal climate, and UKRIO became a limited company. It now exists in a vestigial form hosted by the University of Sussex and funded by voluntary contributions from universities.
UKRIO’s guidance on responsible conduct of research and investigations of misconduct has been embraced by over 50 universities, and it has provided advice in numerous cases. However, lacking statutory powers or the prerogative to investigate cases independently, it has been criticised as toothless.10 The UK’s most publicised case of scientific misconduct, Andrew Wakefield’s Lancet paper that sparked a global crisis in confidence in the MMR vaccine, remains uninvestigated by University College London.23 But other examples exist. A paper from the National Heart and Lung Institute at Imperial College was retracted after editors at the New England Journal of Medicine concluded that many of the coauthors had neither seen the original data nor any versions of the manuscript and that an author had admitted forging his coauthors’ signatures to gain consent for publication.24
Getting a handle on the scale of the problem is difficult, partly because of the opacity under which misconduct is investigated, if at all. Professor Malcolm Green thinks the cases that have come to light are “likely the tip of a much larger iceberg.” Nevertheless, there seems to be little appetite among the scientific establishment for stronger research oversight measures in the UK. Replying to calls for an independent investigation of the Wakefield affair, Alan Langlands, chief executive of the Higher Education Funding Council for England, the public body responsible for distributing funds to universities, described the UK’s current provision as “sufficient and proportionate.”25
Competing interests: The author has completed the ICMJE unified disclosure form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declares no support from any organisation for the submitted work; no financial relationships with any organisation that might have an interest in the submitted work in the previous three years; and no other relationships or activities that could appear to have influenced the submitted work.
Provenance and peer review: Commissioned; not externally peer reviewed.