Editor's Choice

How predictive and productive is animal research?

BMJ 2014; 348 doi: https://doi.org/10.1136/bmj.g3719 (Published 05 June 2014) Cite this as: BMJ 2014;348:g3719

Re: How predictive and productive is animal research?

The race to produce as many scientific publications as possible in high impact factor journals is one of the main causes of poor research. It has led to insufficient quality in the planning, execution and reporting of animal studies. Major deficiencies occur in the materials and methods sections of publications. Even though scientists are educated in laboratory animal science courses, where they learn how important it is to publish all details, and even though the ARRIVE guidelines for reporting have been adopted by over 300 scientific journals, this has not yet improved the quality of reporting of animal studies (Pound and Bracken 2014). As long as scientists are rated by their output of publications in high impact factor journals, it is highly unlikely that the current situation will change for the better. It resembles the build-up to the financial crisis: as long as "funding is coming in and papers are published", things seem to be "going well" and no one takes responsibility to stop the runaway train or change course.

Systematic reviews of animal studies show repeatedly that the quality of reporting falls far below good scientific standards. Randomisation and blinding are often unreported, even though they are starting points of good scientific practice. Because high impact factor journals continue to accept mainly positive results (publication bias), the effects of drugs and treatments are overstated in the published literature, potentially exposing patients and research volunteers to unnecessary harms. Ideally, animal studies would be planned and executed according to the Gold Standard Publication Checklist (Hooijmans et al. 2010) and would meet the same standards expected of clinical studies (Muhlhausler 2013); the resulting publications would then at least fulfil the ARRIVE guidelines.

All animal trials should be prospectively registered, the individual animal data stored in central databases, and the raw data made fully and openly accessible to reviewers. Systematic reviews and meta-analyses could then be carried out with more confidence, providing more rigorous evidence on whether animal studies have translational value. Currently, it cannot be determined whether the low rate of translation of results from animal studies to humans is attributable to poor quality of reporting, to genuinely low translational value, or to both. This represents a serious waste of research.

The Lancet series 'Increasing value and reducing waste in biomedical research' (www.researchwaste.net 2014) offers suggestions for improving research, such as greater methodological rigour and the reproduction of results. However, research evaluation schemes still seem to depend only on the quantitative output of publications in high impact factor journals.

Clinical trials have recently been described as resting precariously on the pillars of flawed basic and toxicological research in 'the temple of biomedical science' (Hartung 2013), and methodologists have so far mapped 235 different forms of bias in biomedical research, many of which scientists are either unaware of or do not account for (Chavalarias and Ioannidis 2010). It is therefore hardly surprising that only 15% of basic and clinical research has utility (Chalmers and Glasziou 2009), that only 11% of medical compounds are approved after years of expensive research (Arrowsmith 2011), and that $100 billion is wasted annually on biomedical research owing to poor research conduct and analysis (Glasziou 2014). Only when a more rigorous approach is followed will it become possible to determine whether animal studies are worth their cost and how useful they are for human healthcare.

At the moment patients and research volunteers seem unaware of this situation, which leaves them vulnerable to the negative effects of poor quality research, wasted funding and lost therapeutic interventions. In the case of Alzheimer's disease (AD), for example, US annual research spending on AD in 2013 was $504 million; despite this spending, which grows each year, only four medications are available, and, as the NIH states, 'these drugs don't change the underlying disease process, are effective for some but not all people, and may help only for a limited time.' There are now calls in the UK for increased spending on AD and other diseases. It would be useful at this stage, however, to stop and reappraise all existing animal research with systematic reviews, and to consider adopting a more forward thinking programme such as the one launched by the US National Research Council, which aims to tackle the lack of translation of animal studies in toxicology. Their strategy (National Academies 2007), now being implemented worldwide, sets out a more human-relevant approach that aims to be faster, less expensive and more predictive of human exposures, overcoming long-standing translational problems caused by, among other issues, 'animal-intensive research.'

SYRCLE and SABRE are working hard for improvements in scientific practice, but unless stakeholders (regulators, funders, charities, journals, researchers, politicians and other key players) mobilise soon, it may become too late, as we saw in the financial crisis. Ultimately the public will find it unacceptable to learn that best scientific practices are not mandatory in biomedical science, that lives are put at risk by bias, and that vested interests come before patients' interests. SABRE has therefore taken the lead in formulating a set of recommendations (the 10 Rs+, www.sabre.org.uk: Register, Report, Represent, Replicate, Retract, Record, Restore, Review, Regulate, Reappraise) to be implemented in scientific practice. The question, then, is who will be the first to take responsibility for seeing that safeguards are put in place to ensure better science for better healthcare?

Merel Ritskes-Hoitinga¹ and Susan Green²
¹ SYRCLE, Radboudumc, Nijmegen, The Netherlands
² SABRE Research UK

Chalmers I, Glasziou P. (2009) Avoidable waste in the production and reporting of research evidence. Lancet. Jul 4;374(9683):86-9.

Chavalarias D, Ioannidis JP. (2010) Science mapping analysis characterizes 235 biases in biomedical research. J Clin Epidemiol. Nov;63(11):1205-15.

Glasziou P (2014) The Role of Open Access in Reducing Waste in Medical Research. PLoS Med 11(5): e1001651

Hartung, T. (2013) Look back in anger – what clinical studies tell us about preclinical work. ALTEX 30, 275–291

Hooijmans CR, Leenaars M, Ritskes-Hoitinga M. (2010) A gold standard publication checklist to improve the quality of animal studies, to fully integrate the 3Rs, and to make systematic reviews feasible. ATLA 38, 167-182.

Muhlhausler BS, Bloomfield FH, Gillman MW. (2013) Whole animal experiments should be more like human randomized controlled trials. PLoS Biol.11(2):e1001481

National Academies Committee on Toxicity Testing and Assessment of Environmental Agents and National Research Council eds. (2007) Toxicity Testing in the 21st Century: A Vision and a Strategy, National Academies Press

Pound P, Bracken MB. (2014) Is animal research sufficiently evidence based to be a cornerstone of biomedical research? BMJ. May 30;348:g3387

Competing interests: No competing interests

24 June 2014
Merel Ritskes-Hoitinga
Professor in Laboratory Animal Science
Susan Green
PO Box 9101, NL-6500 HB Nijmegen