Is animal research sufficiently evidence based to be a cornerstone of biomedical research?
BMJ 2014; 348 doi: https://doi.org/10.1136/bmj.g3387 (Published 30 May 2014) Cite this as: BMJ 2014;348:g3387
All rapid responses
There is an important aspect to the question of the relevance of animal research to humans, namely the way the observations and results are evaluated and adopted.
Munoz et al. (1981. Biological activities of crystalline pertussigen from Bordetella pertussis. Infection and Immunity; 33(3): 820-826), in their figure 2, demonstrated the fluctuation in weight gain in 5 case and 5 control mice (injected with 1 microgram of pertussigen or diluent, respectively; attached hereto as ‘Figure 2’).
Four of the five case mice died on days 4, 5 and 8. They all stopped gaining weight within 24 hours of the injections. They gained weight towards the third (2 mice), fourth (1 mouse) and fifth (1 mouse) days before dying. Even the one ‘case’ mouse that survived showed weight arrests on days very close to the critical days observed and described by Scheibner (2004. Dynamics of critical days as part of the dynamics of non-specific stress syndrome discovered during monitoring with Cotwatch breathing monitor. J ACNEM; 23(3): 1-5) [the first and second top graphs in ‘Figure’ attached to this Rapid Response].
The follow-up of that mouse stopped on day 16; maybe it would have died on one of the subsequent critical days (21-24). That is another of the problems with animal (and human) tests: the cut-off points are arbitrarily and potentially prematurely chosen, hence many meaningful results can fail to be recorded. (Figure 2 attached to this Rapid Response.)
There was no doubt in Munoz et al.’s minds that the deaths of the mice were caused by the administered pertussigen (one of the active ingredients in all pertussis vaccines given to humans, whether whole-cell or acellular). The authors definitely did not regard these deaths as mere coincidence.
In contrast, the contemporary Tennessee deaths (“‘Tennessee cluster’ [August 1978-March 1979] stirs inquiries. Fresno County, Coroner David Hadden Report 1984”) and other deaths following vaccines have, as a rule, all been considered coincidental with the administered vaccines, or “the causal relationship (is) not ascertained”, even when deaths occur within 24 hours.
However, proper tabulation of the daily dynamics of the Tennessee deaths shows clear clustering of such deaths along the critical days. The ‘Figure’ incorporates data from Bernier et al. (1982. Diphtheria-tetanus-pertussis vaccination and sudden infant deaths in Tennessee. J Pediatrics; 105(5): 419-421), Walker et al. (1987. Diphtheria-tetanus-pertussis immunization and sudden infant death syndrome. Am J Publ Health; 77: 945-951) and Coulter and Fisher (1991. A shot in the dark) [the lowest graph in ‘Figure’].
Pertussis vaccine has actually been purposefully used to induce so-called experimental allergic encephalomyelitis in mice innumerable times. The words ‘induced’ and ‘caused’ are used, with the observed and expected reactions never considered coincidental, unlike when the same vaccine is given to human babies.
Steinman et al. (1985. Pertussis toxin is required for pertussis vaccine encephalopathy (postimmunization encephalopathy). Proc Nat Acad Sci USA; 82: 8733-9736) and Munoz et al. (1987. Anaphylaxis or so-called encephalopathy in mice sensitized to an antigen with the aid of pertussigen (pertussis toxin). Infection and Immunity; Apr 1987: 1004-1008) are among the authors who researched vaccine effects in animals relevant to humans.
Steinman et al. (1985) wrote, “BSA [bovine serum albumin] may also be important in human pertussis vaccine encephalopathy. As discussed previously (4, 5), almost all babies exposed to cow’s milk have serum antibodies (IgG, IgA, and IgM) to BSA (9,10). Even breast-fed babies have these serum antibodies, which are probably secondary to sensitization to BSA in the mother’s milk (10, 11).”
Hewlett et al. (1989. Evaluation of the mouse model for study of encephalopathy in pertussis vaccine recipients. Infection and Immunity; 57(3): 661-663) discussed the correct evaluation of pertussis vaccine encephalopathy as anaphylaxis. Anaphylaxis represents sensitisation affecting all systems of the body, including the central nervous system. They wrote, “From the onset of widespread administration of killed, whole-cell pertussis vaccines, there have been anecdotal case reports of adverse reactions associated with immunization (2,3,8,10,12). Fever and local reactions occur in more than 40% of the children receiving pertussis vaccine, and this rate is significantly higher than in recipients of diphtheria-tetanus toxoids (DT) (6). Furthermore, other whole-cell vaccines, composed of gram-negative bacteria, such as cholera and typhoid vaccines, elicit similar reactions (9). Of greater concern are the neurologic events which have been reported to be temporally associated with pertussis vaccine administration, including collapse, convulsions, and encephalopathy with or without permanent sequelae (2,3,6,8,10,12). The study by Cody et al (6) in which the nature and rates of such reactions after DT or DT-pertussis vaccine (DTP) injections were compared, did not have a sufficient sample size to provide an answer regarding the relationship between the neurologic events and the pertussis component of DTP vaccine.”
And, “Similarly, data provided by prospective case-control evaluation of encephalopathies in Great Britain [National Childhood Encephalopathy Study – NCES] determined that the association between pertussis vaccine administration and encephalopathy is very rare but did not establish cause and effect relationship between the two (14,20,21).”
The above statement is interesting because the NCES study did find a statistically significant association, which was enough to conclude a causal link. The NCES also concluded that there were 9 cases in which the established encephalopathy could not be explained by anything else [other than the administered vaccine].
Torch (1982 and 1986) presented papers at the Thirty Fourth and Thirty Sixth Annual Meetings of the American Academy of Neurology and, on both occasions, he showed a clear internal consistency between the administered DPT [and polio] vaccines and the observed deaths, and concluded that there was a causal relationship.
The main proponent of the merely temporal [coincidental] relationship between the administered vaccines and the observed reactions is James D Cherry (Cherry et al. 1988. Report of a task force on pertussis and pertussis immunization. Pediatrics (Suppl); 81(6) Part2: 939-984). One of his papers is entitled “‘Pertussis vaccine encephalopathy’: it is time to recognize it as the myth that it is.” (1990. JAMA; March 23-30; 263(12): 1679-80.)
Fulginiti (1990) showed wishful thinking in his article “A pertussis vaccine myth dies” (Am J Dis Childhood; 144(8): 860-1).
In summary, existing animal studies are useful and indeed important for medical evaluation of vaccine effects in humans.
Competing interests: No competing interests
The recent article in the BMJ 2014;348 by Pound and Bracken regarding animal research, and its role in medicine, was much appreciated by those of us attempting to improve biological sciences, particularly with respect to the use of animals. Amongst the many points raised in the article, one of particular concern is the method used to train new researchers. This method appears to hark back to the days when a would-be physician received most of their training from a mentor within a master-apprentice type relationship.
Forty years of academic life as a pharmacologist has led me to the supposition that there is growing evidence of a lack of training in appropriate animal use coupled with a lack of understanding of experimental design and analysis. The apprentice model of training does not meet requirements, especially as more and more supervisors become ever more remote from their students. Students are left to learn from their fellow students, but what they learn is how to survive. Academic and research survival depends solely on publications, despite evidence that the publication system may be deeply flawed. Possibly this trend is enhanced by, or even stems from, the endless need to produce more and more ‘work’ without being sure of the scientific validity of that work. One has to endure endless presentations and publications that do not meet a minimum standard in animal use, a situation made worse by errors in design and analysis. And this occurs despite the addition of layers of bureaucracy that are supposed to ensure quality.
One approach to reducing the problem might be the institution of rigorous graduate courses in animal use, experimental design and analysis for any students who are, or will be, working with animals. Such courses would be more important than the usual graduate courses composed of the latest techniques and ‘flavour of the month’ knowledge – all of which could change shortly after the student receives their final degree. Unfortunately, the resistance to such training courses is high, often with the stated assumption that nothing is broken in the machine of biological discovery, particularly in biology as related to medicine.
The article in the BMJ concentrated on medicinal aspects but this is not the only area of concern since zoology, physiology and the other life sciences are critical to scientific knowledge, even when not related to medicine, as part of ‘knowledge for knowledge’s sake’. We need to train our students better while trying to ensure that they do not get too seduced by molecular reductionism coupled with translation (a new name perhaps for pharmacology, physiology and related disciplines) in a situation where the ‘languages’ of biology and drugs still remain obscure.
Competing interests: No competing interests
I read with interest the recent article by Pound and Bracken concerning animal based research and its apparent lack of utility in predicting clinical outcomes. I agree wholeheartedly with the sentiment that an unacceptably large proportion of laboratory animal studies are not designed, analysed or reported appropriately. The authors mentioned efforts by bodies such as the NC3Rs and NIH to introduce training modules to improve the level of competence in these areas, but my experience is that this will only go part way towards addressing the problem effectively. Fundamentally, until the basic research community recognises that statistics is a core discipline in the design and execution of pre-clinical research (in much the same way that it is recognised in clinical work), the problems that are outlined in this paper will, unfortunately, persist.
In AstraZeneca we have established an internal company standard relating to the conduct of animal work (we call it Good Statistical Practice, or GSP) which mandates the involvement of a statistician in the design phase of all AstraZeneca in-vivo work (Peers et al. 2014, http://www.nature.com/nrd/journal/vaop/ncurrent/full/nrd4090-c1.html). This will enable AZ to transcend the question of whether the failure of an animal model to reliably predict clinical outcomes is a result of a poorly run study or simply a lack of translatability of the model. It is important to separate poor quality in the design and conduct of an experiment from the lack of predictivity of a well run experiment. Moreover, predictivity cannot be properly assessed until an experiment is well run, and so the latter is always a necessary precedent.
It’s important to bear in mind the practical difficulties of assessing the validity of a pre-clinical test. The vast majority of animal experiments are used to test compounds that are never tested in the clinic. They are stopped either because of lack of efficacy or because of safety concerns, and the opportunity to answer the question about clinical translatability (ie ‘has the model accurately assessed that the compound would be negative in man?’) does not arise. In compiled figures it is difficult (if not impossible) to incorporate this information, so the translatability of an animal model is often based on the notion: ‘of the positive outcomes in the pre-clinical model that were eventually tested in the clinic, what proportion showed commensurate clinical efficacy?’ This quantity is referred to by Cooper et al (1979) as the predictive value (PV) of a test, and they discuss in their paper the deficiencies associated with this choice of metric in determining the validity of a test.
The following fictitious example illustrates this point. Suppose that a robust and well designed pre-clinical test is used to screen 100 compounds and (unusually) it was possible to test all of the positives from this screen in the clinic. The clinical results demonstrate that only 1 of these promising compounds was active in humans, in accordance with this table…
If the question was asked ‘How good is my pre-clinical test?’ then the answer, based on PV, would be 11% (1 out of 9). Not very impressive, and quite likely to lead to criticism that the model is of no clinical utility. However, Cooper et al argue that the best way to describe the validity of the test (in this case our pre-clinical test) is by estimating:
1) the sensitivity (the proportion of compounds that would have been successful in the clinic that were correctly identified by the test)
2) the specificity (the proportion of compounds that would be negative in the clinic that were correctly identified as negative).
Clearly the problem with this assessment is that it is unknown how many compounds that were negative in the test would have been positive in the clinic. Assume that the rate of clinically positive compounds amongst the 91 identified as negative by the test is at one of two extremes:
i) no higher than 11% (a reasonable worst case scenario). So of the 91 pre-clinical failures, 10 would have been active in the clinic and the remaining 81 negative. This leads to a sensitivity of 9% (1/11) and a specificity of 91% (81/89).
ii) 0, so all 91 were correctly identified as negative. This leads to a sensitivity of 100% (1/1) and a specificity of 92% (91/99).
What is clear from this example is that the assessment of quality of a pre-clinical test depends heavily on assumptions made about compounds not clinically tested. Whether a test is reasonable for its objective will also depend on the relative importance of false negative and false positive results. For example, if a top priority is to ensure that negative compounds are not tested in man then a high specificity value would be required, whereas if it’s more important to ensure that no promising compounds are missed then sensitivity should be high. This kind of rational design is more often than not impossible in pre-clinical tests because of the missing information.
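Applying the definitions in points 1) and 2) above to the fictitious screen, the two scenarios can be checked mechanically. The following is a minimal sketch (my own illustration, not from Cooper et al.); the 2×2 counts are those assumed in the example: 9 test positives of which 1 was clinically active, and 91 test negatives:

```python
# Fictitious screen of 100 compounds: 9 test positives (1 clinically
# active, 8 not) and 91 test negatives, as in the example above.

def validity(tp, fp, fn, tn):
    """Predictive value, sensitivity and specificity of a test."""
    pv = tp / (tp + fp)    # of test positives, fraction truly active
    sens = tp / (tp + fn)  # of clinically active, fraction the test caught
    spec = tn / (tn + fp)  # of clinically inactive, fraction the test rejected
    return pv, sens, spec

# Scenario i): 10 of the 91 test negatives would have been active in man.
pv, sens, spec = validity(tp=1, fp=8, fn=10, tn=81)
print(f"i)  PV={pv:.0%} sensitivity={sens:.0%} specificity={spec:.0%}")

# Scenario ii): none of the 91 test negatives would have been active.
pv, sens, spec = validity(tp=1, fp=8, fn=0, tn=91)
print(f"ii) PV={pv:.0%} sensitivity={sens:.0%} specificity={spec:.0%}")
```

Note that PV stays at 11% in both scenarios while sensitivity and specificity move dramatically, which is exactly why PV alone is a poor summary of a test’s validity.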
I realise that there are some very difficult therapy areas such as neuroscience and stroke in which identification of reliable pre-clinical models has proved difficult. However it is potentially misleading to suggest that there is very limited value in animal research (particularly if that assertion is derived from estimates of PV) and that investment should be directed elsewhere without developing a better understanding of the issues discussed above.
Cooper, Saracci, Cole (1979) ‘Describing the validity of carcinogen screening tests’, British Journal of Cancer 39, pp. 87-89.
Peers, South, Ceuppens, Bright and Pilling (2014), ‘Can You Trust Your Animal Study Data?’, Nature Reviews Drug Discovery, Published online 6th June 2014
Peter Ceuppens
Competing interests: I am a full time employee of AstraZeneca.
Whitelaw and Thoresen (1) (and other rapid responses (2)) completely miss the point made by Pound et al. in their article summarising the evidence for the utility of animals in medical research (3). Their responses are exactly the stuff that Pound et al. said 10 years ago was unsupportable; anecdote does not trounce systematic review.
Evidence based medicine should be the cornerstone of modern medicine; that is not in dispute. Evidence of the highest order is not anecdote or single success stories but systematic reviews (indeed reviews of systematic reviews in the case of ‘animal models’ in general). This is because, as with any scientific hypothesis, when asking the question ‘could this have happened by chance?’, statistics need to play a part. As Pound et al. say: “Given the large amount of animal research being undertaken, some findings will extrapolate to humans just by chance.”
Whitelaw and Thoresen plead for the use of animal models to be retained, unless proven to not work. Just based on Pound et al.’s article there are plenty of examples where specific ‘animal models’- of stroke and amyotrophic lateral sclerosis for example- have systematically failed to deliver. These models should no longer be permitted let alone funded.
However, the point is that you don’t know you have done ‘bad research’ until you have done it. Systematically reviewing the efficacy of specific animal models takes a lot of work and has to be done in hindsight, by which point animal lives, money and time have potentially been wasted.
The use of animals is a common approach in science and it is perfectly valid to review it as a whole. In doing so, Pound et al. are doing the medical establishment a service by highlighting that the approach in general may be broken. If the evidence is that most specific ‘animal models’ suck then the approach in general must also suck.
In any other form of business a method that works 10% of the time wouldn’t even get considered, let alone used. Pound et al. are right when they say the current situation is unethical. Godlee asks what we should do instead (4). The medical community has two options: demonstrate through systematic review that using animals (in specific cases or otherwise) is in fact effective, or heed the evidence so far and invest in alternative approaches instead. Science tends to look forward rather than back, but it needs to be looking in the right direction.
1. BMJ 2014;348:g4174
2. E.g. Fernando Martins do Vale, http://www.bmj.com/content/348/bmj.g3387/rr/701664
3. BMJ 2014;348:g3387
4. BMJ 2014;348:g3719
Competing interests: No competing interests
This analysis rightly discusses the challenges associated with the evaluation and predictive validation of animal models, as well as methodological flaws in preclinical study designs that may be the reasons behind the failure of translation of animal studies to medical practice. But there are many fields of medicine in which animal research has been fruitfully translated into established treatment modalities. We have to replicate the methods of animal research that were successfully translated. History stands as evidence for the critical role that animal research has played in the enormous progress of cancer research. Exploration and characterization of disease etiopathophysiology, target identification, biomarkers, characterization and evaluation of novel therapeutic agents, and development of radiotherapy are all directly or indirectly owing to animal research. The value of animal studies, apart from their utility in efficacy studies, will remain for safety studies, particularly in oncology.
History has repeated itself in the evolution of immunotherapy in cancer. The importance of animal models in the rapid progress of this revolutionary approach to treatment in cancer is explained well in this article (1). For instance, experimental mouse models are divided into 3 categories: transplantable tumors, genetically engineered/transgenic models and humanized mouse models of cancer, each having advantages and disadvantages in evaluating different phases of therapy. This is strategically overcome by utilising information in sequence, wherein initial preclinical immune studies are conducted in transplantable tumor models and the findings then confirmed in genetically engineered mice. Such innovative methods of utilising information in continuum between different models, different phases (preclinical and phase 1) and effects of combination therapy (chemo-, radio- and immunotherapy) can serve to overcome some drawbacks in the translation of animal research to clinical practice.
Reference
1 http://dx.doi.org/10.1016/j.gde.2013.11.008
Competing interests: No competing interests
The article by Pound and Bracken1 highlights some important flaws in the current practice of basic biomedical research. An academic culture in which negative findings frequently meet with difficulty in finding a source for publication creates considerable distortion in the literature and a subsequent over-estimation of effect sizes.
The tone of the article, however, teeters towards rejecting animal-based research en masse. The authors state that
“…research is beginning to suggest that it is clinical rather than basic research that has most effect on patient care.”
and that
“…promising findings from animal research often fail in human trials…”
Much clinical research is performed by standing on the shoulders of giants. A phase III drug trial comparing two antihypertensives will have much greater direct impact on clinical decision making than any individual animal model based basic science study. However, hundreds or thousands of such “low impact” works are needed to develop the drugs in question. The authors reference Wooding et al.2 who themselves acknowledge this and conclude that clinically motivated basic biomedical research should be encouraged.
Basic biomedical research may try and may fail. Without it, however, there will be no successes to base clinical triumphs upon.
References
1. Pound P, Bracken MB. Is animal research sufficiently evidence based to be a cornerstone of biomedical research? BMJ 2014;348:g3387.
2. Wooding S, Hanney S, Pollitt A, Buxton M, Grant J. Project retrosight: understanding the returns from cardiovascular and stroke research. 2011. www.rand.org/pubs/research_briefs/RB9573.
Competing interests: The author is a Clinical Research Fellow supported by the British Heart Foundation performing basic biomedical research
Pound and Bracken’s (1) observations of the failure of animal research to translate evidence to humans primarily stem from poor design and conduct of animal research. The virtue of science is that it is self-correcting: in the event of mistakes it can, however fitfully, fix them, and this is considered one of the great strengths of science (2). With falling investment in basic and animal research, the dialogue opened by the editor of the BMJ (3) upon “Where would you place the balance of effort: investment in better animal research or a shift in funding to more clinical research?” is a timely and necessary effort to take the science of animal research forward and bolster evidence based practice in medicine.
Broadly, the limitations in translating the results of animal experiments to humans include non-availability of suitable animal models to adequately represent human disease, sloppy techniques, clinical heterogeneity and, predominantly, inadequate sample sizes leading to effect size bias. Moreover, the environments of animals under experimental conditions are far removed from the natural environment (an environmental determinant) needed to mimic the natural history of the disease, and this affects reproducibility of the results of intervention when the results from animal studies are taken forward to human trials. Hence, pragmatic and effectiveness-oriented approaches are required in animal research.
However, to build evidence, the general observations and limitations articulated so far from animal research are not applicable to zoonotic diseases, because zoonotic agents infect multiple species. Taylor et al (4) have documented that zoonoses constitute more than 60% of all known infectious diseases, and 75% of emerging infectious diseases. The etiologies/shared risks and treatments of zoonotic diseases are generally similar across multiple species. Therefore, the efficacy of therapeutic interventions in zoonoses is also believed to be similar across multiple species, and it is prudent to demand scientifically valid evidence only from animal experiments (without an alternative) that are applicable to multiple species including humans (5). Paucity of evidence from animal experiments will greatly affect not only humans but also multiple species.
Animal experiments play an important part in the chain of research evidence. To generalize the results, efforts to minimize bias and random error are important. Further, improving the precision and reliability of the results depends on repetition of the animal experiments with adequate sample size (6), although, to maximize impact factor, journals prefer publishing ground-breaking new research over dutiful replications that confirm previous findings.
To determine similarities between animal models and clinical trials, researchers engaged in animal experiments need to follow the ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines (7) and the publication checklist of SYRCLE (Systematic Review Centre for Laboratory Animal Experimentation) (8), and produce evidence through systematic reviews and meta-analyses of animal experiments for comparison with clinical trials, which may be aligned to The Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies (CAMARADES) (9).
Essentially, in the context of zoonoses, and largely in other infectious and non-communicable diseases, investment in animal research is invaluable for generating evidence, until it is replaced by a simpler and more sensitive method of demonstrating the evidence.
References
1.Pound P, Bracken MB. Is animal research sufficiently evidence based to be a cornerstone of biomedical research? BMJ 2014;348:g3387.
2.It’s Science, but Not Necessarily Right. Opinion Pages. Sunday Review. New York Times. June 25,2011.
3.Fiona Godlee How predictive and productive is animal research?. BMJ 2014;348:g3719.
4.Taylor LH, Latham SM, Woolhouse MEJ. Risk factors for human disease emergence. Philos Trans R Soc Lond B Biol Sci. 2001, 356:983–9.
5.G.V. Asokan, Z. Fedorowicz, P. Tharyan and A. Vanitha One Health: perspectives on ethical issues and evidence from animal experiments. EMHJ 2012;18 (11):1170-3.
6.Schneider B. Begründung der Wiederholung von Tierversuchen und Bestimmung der erforderlichen Tierzahl gemäss demDeutschen Tierschutzgesetz. [Justification of repeated animal experiments and determination of the required number of animals according to the German Animal Protection Act]. Arzneimittel-Forschung, 2009;59:318–25.
7.Animal Research: Reporting of In Vivo Experiments. www.nc3rs.org.uk/downloaddoc.asp?id=1206&page=1357&skin=0
8.About SYRCLE. www.radboudumc.nl/Research/Organisationofresearch/Departments/cdl/SYRCLE...
9.CAMARADES. www.dcn.ed.ac.uk/camarades//
Competing interests: No competing interests
Malcolm Watters and I compared clinical and basic research directly in a paper published 15 years ago (1) which, from what PubMed tells me, has been cited just 3 times since. We were not cited by Pound and Bracken.
This is the abstract: Tissue and cell culture (in vitro) studies reported in the 1997 issues of the British Journal of Anaesthesia, Anesthesia and Analgesia, and Anesthesiology were compared with groups of clinical studies selected at random from the same issues. Comparisons were of some basic aspects of study design and reporting that might lead to bias. The aspects examined were sample size, randomization and reporting of exclusions and withdrawals. Two groups of 53 articles were compared: sample size was smaller in in vitro than in clinical studies (median 6 vs 19); randomization was reported in five in vitro studies and in 37 clinical studies; and failures were reported in two in vitro studies and in 43 clinical studies. This hinders interpretation of reported tissue and cell culture studies. Where possible, tissue and cell culture studies should be conducted, reported and assessed for publication to standards equivalent to those for clinical studies.
At the time that we were collecting the data for this study, it seemed to me that many basic science studies stopped when n=6, because this satisfied p<0.05. Researchers almost never said whether this was 6 consecutive experiments, or 6 out of some undetermined number, the results of some being discarded. At a meeting, I asked one presenter, well known for his basic research (and for whom n was indeed 6), whether he randomised in any way, or whether he had any results that were not reported. He disdainfully ignored the question. That our paper has been cited only three times since shows that nobody really cares. Perhaps Pound and Bracken's paper will start the debate that we failed to spark.
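The danger of testing after each added experiment and stopping at n=6 once p<0.05 appears can be illustrated with a small simulation. This is my own sketch, not from our paper; the two-sided 5% Student's t critical values are standard table values, hardcoded so the snippet needs only the standard library:

```python
import math
import random

# Two-sided 5% critical values of Student's t for df = n - 1
# (standard table values, hardcoded to keep this stdlib-only).
T_CRIT = {3: 4.303, 4: 3.182, 5: 2.776, 6: 2.571}

def t_stat(xs):
    """One-sample t statistic against a true mean of zero."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return mean / math.sqrt(var / n)

def optional_stopping_rejects(rng):
    """Draw null data (true mean 0); test after each new observation
    from n=3 to n=6, stopping as soon as p < 0.05."""
    xs = [rng.gauss(0, 1) for _ in range(2)]
    for n in range(3, 7):
        xs.append(rng.gauss(0, 1))
        if abs(t_stat(xs)) > T_CRIT[n]:
            return True  # declared 'significant' under the null
    return False

rng = random.Random(1)
sims = 20000
false_pos = sum(optional_stopping_rejects(rng) for _ in range(sims)) / sims
print(f"nominal alpha 0.05, simulated false-positive rate {false_pos:.3f}")
```

Because the data are pure noise, a fixed n=6 test would reject about 5% of the time; peeking at every n well exceeds that, which is one reason an undisclosed stopping rule matters.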
1 Watters MP, Goodman NW. Comparison of basic methods in clinical studies and in vitro tissue and cell culture studies reported in three anaesthesia journals. Br J Anaesth. 1999 Feb;82(2):295-8.
Competing interests: No competing interests
The article by Dr. Pound and Professor Bracken (1) is a wake-up call to all scientists working in translational research. It is essential that all animal research is conducted in the most rigorous, considered and objective manner possible in order to maximise the potential of clinical results. Ensuring that those animals are treated ethically and with minimal waste is also fundamental. These principles should be upheld throughout the career of any preclinical scientist.
However, the aforementioned article and accompanying editorial should be taken as nothing more than a driver towards the improvement of animal research, rather than the death-knell for translational animal models. Indeed, the overriding message of the article is somewhat confusing – demanding that we optimise and streamline animal research is very different from suggesting that it is useless, but both of these ideas are presented side-by-side.
The commenting authors have provided elegant examples of animal research elucidating basic physiological processes (Professor Martins do Vale), disease aetiology (Dr. Grant), and potential therapeutic strategies (Professors Whitelaw and Thoresen). However, in a subsequent response, Professor Bracken asks us to “move beyond the citing of individual cases”. This is a convenient argument to use, as it essentially negates any future discussion on the exact areas where animal research has been beneficial. I would argue that only by citing the successes, as well as examining the failures, can we find ways to ensure the relevance of future animal research. In fact, when we were told that drug companies are reducing the use of animals, perhaps this is because we are “trimming the fat”, and focusing on models that are more likely to produce results.
Though we are asked not to, I would like to return to the example of therapeutic hypothermia in neonatal hypoxic brain injury presented by Professor Whitelaw. Any potential treatment should ideally show efficacy across a broad range of species before it enters human trials. This was the case with hypothermia. However, using multiple animal platforms presents a problem for providing preclinical systematic reviews, as the models and outcome measures will be fundamentally heterogeneous. A range of approaches must therefore be used to assess the preclinical evidence of a treatment before trials are implemented, as was done with hypothermia (2,3).
Recently, a trial of longer and deeper cooling in neonatal hypoxic brain injury, undertaken despite no preclinical studies examining whether this would be beneficial (4), stopped recruiting participants due to a trend towards futility in the experimental arms. Had more animal research been performed first, the patients involved might have seen better outcomes elsewhere.
That being said, we have clearly relied upon a number of inaccurate models of complex disease. These include the previously discussed superoxide dismutase 1 (SOD1) mouse model of amyotrophic lateral sclerosis (ALS). The SOD1 mutation accounts for only a small percentage of ALS cases, in a disease whose aetiology is often unknown (5). As such, it would be naïve to assume that any success in this model would translate to all patients with ALS. Another example is the seminal paper by Seok et al. (6), which details how poorly the mouse inflammatory response mimics that of humans. However, this should not be considered a failure of animal research, but a reason to expand, for example, porcine, ovine and non-human primate models, which may translate more faithfully to the clinical scenario.
Nobody will disagree with Dr. Parekh’s assertion that the best test species for humans is humans. In terms of dietary and lifestyle manipulations with respect to diseases such as type II diabetes and obesity, trials could almost certainly be safely administered with little more preclinical work required. However, when developing novel therapeutic agents with novel targets, or any therapy which is necessary during childhood development, we must continue to start with animal data to ensure the safety of our patients. What are the alternatives?
Cell culture is useful for identifying pathways and molecular targets, but even complex co-cultures will never accurately replicate the immunological, physiological and biochemical responses to a treatment. Nor should we forget that 90% of the cells in the human body are prokaryotic; their effect on disease processes is only beginning to be uncovered (7,8), and can only be explored in vivo. Additionally, while computer modelling of disease is attractive, we can barely replicate single organs (9) given the computational power required. Only when we are able to model the flux of electrons through the mitochondrial matrix of trillions of cells simultaneously (alongside thousands of cellular phenotypes and epigenetic processes) will in silico models become a robust option. We are decades away from that being possible.
The history and future of animal research is clearly complex. However, rather than simply presenting problems, our community should work harder to find solutions. Those that should be considered include:
1. Rigorous experimental protocols that minimise potential bias, and allow for easier systematic review of treatments, where possible.
2. Expansion of the animal platforms used to include mammalian species other than rodents, particularly in chronic disease processes.
3. Refinement and development of more accurate animal models. This will include objectively rejecting those models that are not producing clinically relevant results.
4. Encouragement to publish negative data, preventing unnecessary replication, and waste of animal resources.
5. Increased scrutiny by funding bodies of whether an applicant’s previous animal work has applied scientifically rigorous and ethical standards. Many of the commenting authors and readers will be reviewers for such boards.
6. Ongoing investment in alternative platforms to model disease aetiology and treatment.
Finally, I must agree with Dr. Pound that the education of current (and future) preclinical scientists is a key step to any of the above becoming possible. However, though the drive to create clinical results must be our main motivating factor, I would never disparage the act of pure “scientific discovery”. Curiosity and ingenuity are two of the fundamental defining characteristics of our species, and this must continue to be celebrated if we are ever to solve the great number of clinical challenges that we face.
References
1. Pound P, Bracken MB. Is animal research sufficiently evidence based to be a cornerstone of biomedical research? BMJ. 2014 May 30;348:g3387.
2. Thoresen M. Cooling the newborn after asphyxia - physiological and experimental background and its clinical use. Semin Neonatol. 2000 Feb;5(1):61-73.
3. Edwards AD, Brocklehurst P, Gunn AJ, Halliday H, Juszczak E, Levene M, Strohm B, Thoresen M, Whitelaw A, Azzopardi D. Neurological outcomes at 18 months of age after moderate hypothermia for perinatal hypoxic ischaemic encephalopathy: synthesis and meta-analysis of trial data. BMJ. 2010 Feb 9;340:c363.
4. Optimizing (Longer, Deeper) Cooling for Neonatal Hypoxic-Ischemic Encephalopathy(HIE) (http://clinicaltrials.gov/ct2/show/NCT01192776?term=deep+neonatal+hypoth...).
5. Perrin P. Make mouse studies work. Nature 2014;507:423-5.
6. Seok J, Warren HS, Cuenca AG et al. ; Inflammation and Host Response to Injury, Large Scale Collaborative Research Program. Genomic responses in mouse models poorly mimic human inflammatory diseases. Proc Natl Acad Sci U S A. 2013 Feb 26;110(9):3507-12.
7. Berg RD. The indigenous gastrointestinal microflora. Trends Microbiol. 1996 Nov;4(11):430-5.
8. Nieuwdorp M, Gilijamse PW, Pai N, Kaplan LM. Role of the microbiome in energy regulation and metabolism. Gastroenterology. 2014 May;146(6):1525-33.
9. Setty Y. In-silico models of stem cell and developmental systems. Theor Biol Med Model. 2014 Jan 8;11:1.
Competing interests: No competing interests
Stroke research and aging: an example of the discrepancy between basic and clinical research
Basic science research in animal models helps to prevent mistakes and improve translation to clinical research and clinical practice. Unfortunately, most animal models of diverse pathological phenomena do not include all the variables needed to allow adequate translation to clinical research (1). An example of this is the discrepancy between stroke research and aging.
Stroke primarily affects older adults and, over the coming decades, the number of patients in this age group will increase due to demographic change in the world population. However, the vast majority of experimental studies in animal models do not consider aging as a factor, despite the recommendation to include it in preclinical studies (2). The result is a clinical-experimental discrepancy: on one hand, this clinical condition usually occurs in people over 65 years of age (3,4); on the other, experiments are performed in rats between 3 and 6 months of age, which extrapolated to humans corresponds to individuals between 15 and 18 years of age (5). It is important to consider this factor when studying the ischemic phenomenon in the brain and translating it to potential therapeutic options.
There is evidence that the response to cerebral ischemia differs between aged and young brains (6). This difference arises mainly from the changes that occur in the brain throughout life, which are present at the molecular, cellular, tissue, and functional levels (7). The use of older animals to study the effect of aging on brain ischemia has not yet become widespread, so there is still a need to assess structural damage and functional deterioration in these animals.
Managing older animals presents some major challenges: 1) high cost, 2) high mortality, 3) variability in structural damage and functional deterioration, and 4) a lack of functional assessment protocols adapted for age. These conditions make the implementation of models and the interpretation of results more difficult and complex.
To address this clinical-experimental discrepancy, more studies of cerebral ischemia in aged animals are needed. To achieve this, studies need to document the aging process, evaluate the effect of age in animal models, and create and/or adapt functional tests appropriate for old animals. It is imperative to include older animals in preclinical models, especially those used to study diseases with a greater prevalence in older adults, such as stroke.
In responding to the need for better animal research, it would be sensible to adopt the recommendations of experts in order to improve the effective translation of knowledge (1,2). In addition to age, other variables such as sex, comorbidities, diet, and environmental exposure must be considered to better explore pathological phenomena and make the translation of diagnostic and therapeutic interventions more efficient.
References
1. Pound P, Bracken MB. Is animal research sufficiently evidence based to be a cornerstone of biomedical research? BMJ 2014;348:g3387
2. Fisher M, Feuerstein G, Howells DW, et al. Update of the stroke therapy academic industry roundtable preclinical recommendations. Stroke 2009;40(6):2244-50. doi:10.1161/STROKEAHA.108.541128
3. Warlow C, Sudlow C, Dennis M, et al. Stroke. Lancet 2003;362(9391):1211-24. doi:10.1016/S0140-6736(03)14544-8
4. Zheng ZJ, Croft JB, Giles WH, et al. Sudden cardiac death in the United States, 1989 to 1998. Circulation 2001;104(18):2158-63
5. Andreollo NA, Santos EF, Araújo MR, et al. Rat's age versus human's age: what is the relationship? Arq Bras Cir Dig 2012;25(1):49-51
6. Popa-Wagner A, Buga AM, Turner RC, et al. Cerebrovascular disorders: role of aging. J Aging Res 2012;2012:128146. doi:10.1155/2012/128146
7. Popa-Wagner A, Crăiţoiu S, Buga AM. Central nervous system aging and regeneration after injuries. A systematic approach. Rom J Morphol Embryol 2009;50(2):155-67
Competing interests: No competing interests