Scientific misconduct is worryingly prevalent in the UK, shows BMJ survey
BMJ 2012; 344 doi: https://doi.org/10.1136/bmj.e377 (Published 12 January 2012) Cite this as: BMJ 2012;344:e377
All rapid responses
In addition to Professor Bird's professional comments regarding the design and interpretation of the recent BMJ survey into institutional research misconduct, it occurred to me that the BMJ appeared to be making the prior assumption that all 9000-plus researchers contacted would be truthful in their responses and would not have been involved in such practices themselves.
However, given the extent of research misconduct alleged by the BMJ, it would seem statistically probable that many of the researchers contacted had themselves been involved in research misdemeanours. It should also be noted that these researchers had all either written articles for the BMJ or been involved in BMJ peer review. This was not a ‘random’ survey of UK research scientists.
Competing interests: No competing interests
In recent years, the number of temporary research positions at universities has grown exponentially, mainly because external funding opportunities have increased. As a consequence, the number of young researchers with an ambition to grow into an associate or full professorship is systematically greater than the number of available positions. While recruitment and evaluation criteria may vary from institute to institute, it is almost predictable that, when it comes down to choosing a suitable candidate for a job, the potential recruits will be measured against their colleagues and their merits will be summarized mathematically in the number of publications produced, impact factors and citation measures. This is considered an acceptable, objective standard for distinguishing the excellent candidates from the moderate and the insignificant. Quality of work is also considered, but let’s be honest: the number of full professors who will be remembered for their excellent and insightful papers is very modest. The rest of us tend to be creative, that is, to build on the genius work of ‘the innovative few’, who do not need to worry about being promoted or about receiving funding to continue their work.
Most of us, however, will need to attract external funding from European or national sponsors. A good number of high-quality papers is a prerequisite for success in such applications, and a successful application adds to your credibility as a researcher. If you are a qualitative researcher, you may face a further handicap: the danger that qualitative papers or proposals will be judged less relevant to the broader population of users or readers, regardless of their methodological quality or the richness of their data. We have made good progress in acknowledging the importance of qualitative research within the field of health care, but I have not come across many full professors engaged in qualitative research who have gained the recognition of the early pioneers of the field; the Margaret Meads of our time, who spent years in a particular research setting to gain an in-depth understanding of their phenomenon of interest. These pioneers were to the research community what columnists now are to society. They broke with existing research conventions, and their stories were considered rare treasures of insight.
Society has changed substantially in recent years. It has become more of a business model, with a heavy emphasis on effectiveness, and this model has been transferred to universities as well. Academic freedom has been significantly curtailed by the straitjacket of expected productivity, as measured through all kinds of bibliometric tools. Is it then surprising that some researchers are tempted to search for shortcuts to survive the evaluation procedures imposed on them? This is not to say that we all need to become proponents of the Machiavellian stance that ‘the end justifies the means’. Of course we have our academic deontology. We are against guest and ghost authorship. We would all subscribe to the statement that it is inappropriate to ‘lie’ with statistics, to ‘make up’ data or to ‘adapt’ data. Yet these are just a few of the shortcuts currently used to increase scientific output in order to reach the mathematical targets.
As a master’s student, I used to be able to choose my own topic for research. That was twenty years ago. In earlier years, I guided master’s students just for the sake of guiding them. More recently, we have started to engage them in our own research projects, treating them as data-collection machines who do the work for which we no longer find the time ourselves. I often wonder what it is that we lose in the quantity we are gaining. I have witnessed the McDonaldisation of society, the fast-food generation; I now see the same thing happening to the sciences. We have forgotten that a thorough understanding of a phenomenon takes time and is preferably not cut into pieces to increase the volume of papers. We cheat ourselves instead of questioning the logic and truth society imposes upon us. It is about time we signed up to a slow-science manifesto and reclaimed our academic freedom.
Competing interests: No competing interests
In January 1977, the British Medical Journal’s then-editor, Dr Stephen Lock, took on the chin [1], and subsequently acted on, a critique by Gore (my maiden name), Jones and Rytter on “Misuse of statistical methods: critical assessment of articles in BMJ from January to March, 1976” [2].
In January 2012, I find myself – a non-respondent in BMJ-editor Dr Fiona Godlee’s headline-making survey – again a critic, but this time of the BMJ’s own statistical methods, namely: of the design, analysis and reporting of their misconduct survey. Scientific misconduct is anti-science, deserves to be treated seriously, and requires paying serious attention to any survey-design for its quantification.
1. The response rate among the surveyed UK authors/reviewers was low, at 2782/9036 (31%). Does the BMJ ordinarily accept for publication surveys with response rates as far below 60% as its own survey achieved?
2. Three questions were asked, one of which sought to ascertain whether the respondent was primarily a clinician, an academic, or both. However, the information on type of respondent could not be used for covariate adjustment (that is, to determine whether the report rate on misconduct differed by type of respondent) because the free software that had been used for the survey apparently did not allow access to respondent type. Thus, the BMJ has collected data that it cannot disclose . . .
3. Question 2 asked if the respondent was aware of ‘any cases of possible research misconduct at your institution that, in your view, have not been properly investigated?’. The balancing question – awareness of ‘any cases of possible research misconduct at your institution that, in your view, have been properly investigated?’ – was not asked. Answers to the balancing question could potentially have been cross-checked institutionally. Survey design requires objectivity and balance in how questions are selected.
4. Question 2 elicited 163 affirmations of “cases”-awareness. As the BMJ slides acknowledge, there may have been multiple reporting of the same institutional ‘cause célèbre’; or individual respondents may have been aware of more than one case in their institution. The phrasing of survey questions matters intensely. Because of poor analytical and reporting standards, we cannot be sure that the 163 affirmations do not emanate from a single institution or relate to a single case.
5. Time-related information can be essential for the interpretability of surveys. In general, the longer a scientist’s career in a particular institution, the longer the period over which the scientist may have observed research misconduct there. Other things being equal, a scientist whose career at institution A has lasted 20 years has had 10 times as much opportunity to encounter research misconduct there as a fellow-scientist whose career at institution A has lasted only 2 years. Well-designed surveys build in checks such as this, which assure data quality, because an analysis plan has been thought out in advance.
6. Analytical forethought immediately points to the necessity of knowing the respondent’s career length at their current institution, as well as the number of different “cases” of misconduct of which s/he is aware over that institutional period, so that the “number of misconduct cases per 1,000 science-career-years” can be computed (an illustrative sketch of this computation is given at the end of this response).
7. Worst of all was question 1. Unlike question 2, question 1 is not restricted to one’s own current institution and so encompasses, for example, any nefarious analytical adjustment that I, as a statistician, may detect – and seek to sort out – as a referee. That being so, it is amazing, indeed incredible, that the affirmative rate for question 1 is as low as 13% (versus a 6% affirmative rate for question 2). Did most respondents emanate from somewhat troubled institutions, or are they less critical as peer-reviewers than as institutional-observers?
8. But, hold on, the different phrasing between question 1 (Have you witnessed, or do you have firsthand knowledge of, UK-based scientists inappropriately doing Y . . . ) and question 2 (Are you aware of any cases of X . . . at your institution) probably defies the sort of comparison that I sought to draw above. Of course, the survey should have been decently designed so that some form of comparison could be made, for example by asking if the most recently witnessed/firsthand knowledge behaviour had occurred in the respondent’s own institution . . .
9. Back to the wording of question 1: the witnessed/firsthand knowledge behaviours which the BMJ asked about range from inappropriately i) adjusting, ii) excluding, iii) altering data to iv) fabricating data either a) during their research or b) for the purposes of publication. Question 1 is, in effect, a set of eight questions!
10. If there is any consulting statistician who has not helped scientist-colleagues to do some regression analysis (ie statistical adjustment) more appropriately, I’d be concerned about their inexperience. ‘Inappropriately adjusting’, as stated in question 1, does not specifically rule out - as a qualifying behaviour - the incorrect application of regression methods, which it is part of the job of statistician-referees to detect and correct, and which more often reflects errors of comprehension rather than malice of forethought.
Just as the BMJ’s errors in survey-design, analysis and reporting are due to lack, not malice, of forethought. However, the first sentence in Tavare’s BMJ news piece is mischievously wrong – he omits to mention that question 1 asked about behaviours i) and ii) as well as about iii) and iv). Correction in the second paragraph is too late . . . a headline has been achieved by dint of ii) (if I’m generous) or iii) (if I’m sceptical).
As Dr Godlee remarks in another context: UK science and medicine deserve better. And statistical science requires better for the design, analysis and reporting of surveys. Designers of surveys with response rates as low as the BMJ’s should look to their laurels; and not use statistics as the drunk uses a lamp-post: for support rather than illumination.
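By way of illustration, the following is a minimal sketch, written in Python with entirely hypothetical respondent figures (the survey collected neither career lengths nor per-respondent case counts), of the exposure-adjusted rate that point 6 calls for:

# Illustrative sketch only: hypothetical figures, not BMJ survey data.
# Each tuple gives (years observed at the respondent's current institution,
#                   number of distinct misconduct cases reported there).
respondents = [
    (20, 1),
    (2, 0),
    (10, 2),
    (5, 0),
]

total_cases = sum(cases for _, cases in respondents)
total_career_years = sum(years for years, _ in respondents)

# Point 6's proposed summary: misconduct cases per 1,000 science-career-years.
rate = 1000 * total_cases / total_career_years
print(f"{total_cases} cases over {total_career_years} career-years "
      f"= {rate:.0f} cases per 1,000 science-career-years")

# Without the career-years denominator, the 20-year and 2-year respondents would be
# counted identically, although the former has had ten times the opportunity to
# encounter misconduct (point 5's argument).

The point is simply that the denominator, total science-career-years, rather than the bare count of affirmative answers, is what would make report rates comparable across respondents and institutions.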
1. Editorial. Statistical errors. British Medical Journal 1977; 1: 66.
2. Gore SM, Jones IG, Rytter EC. Misuse of statistical methods: critical assessment of articles in BMJ from January to March, 1976. British Medical Journal 1977; 1: 85-87.
3. Tavare A. Scientific misconduct is worryingly prevalent in the UK, shows BMJ survey. British Medical Journal 2012; 344: e377.
I have a long-standing interest in misuse of statistics. This post also appears on Straight Statistics with cross-referencing to BMJ Rapid Responses.
Competing interests: No competing interests
Re: Scientific misconduct is worryingly prevalent in the UK, shows BMJ survey
Dear Editors,
Well-connected, despotic university professors or corrupt senior researchers will cover their misconduct well.
They are certainly not going to inform junior members of their departments about research fraud or illegal funding!
So, by questioning staff, we simply miss how extensive the problem is.
Only proper legal investigation can uncover all the facts.
At least this is what we found out in Greece, where European Community research funds have been used to buy Ferrari cars or build mountain villas. [3]
In Greece we also find extensive inbreeding, nepotism, plagiarism and “guest author” publishing in universities, especially medical schools. [1][2][4]
References
[1] Lack of meritocracy exacerbates research fraud: the example of Greece. http://www.bmj.com/content/344/bmj.e179?tab=responses
[2] Greek academia is plagued by inbreeding, nepotism, conflicts of interest and partisan politics. Professor Synolakis’ letter to Nature: http://www.nature.com/news/2009/091105/full/news.2009.1042.html
[3] Medical research in Greece has no strategy, no formal standards, no evaluation procedures, no transparency, no evaluation of research staff and no ranking body, while heavy bureaucracy pervades. Stavros Saripanidis’ rapid responses in: http://www.bmj.com/content/343/bmj.d7284?tab=responses
[4] Extensive inbreeding, nepotism, plagiarism and “guest author” publishing in Greek university medical schools. Stavros Saripanidis’ rapid response in: http://www.bmj.com/content/339/bmj.b3783?tab=responses
Competing interests: No competing interests