Translating animal research into clinical benefit

BMJ 2007;334:163 (Published 25 January 2007)
  Daniel G Hackam, clinical pharmacologist (Daniel.hackam{at}
  Cardiac Rehabilitation and Secondary Prevention Program, Toronto, ON, Canada M4G 1R7

    Poor methodological standards in animal studies mean that positive results rarely translate to the clinical domain

    Most treatments are initially tested on animals for several reasons. Firstly, animal studies provide a degree of environmental and genetic manipulation rarely feasible in humans.1 Secondly, it may not be necessary to test new treatments on humans if preliminary testing on animals shows that they are not clinically useful. Thirdly, regulatory authorities concerned with public protection require extensive animal testing to screen new treatments for toxicity and to establish safety. Finally, animal studies provide unique insights into the pathophysiology and aetiology of disease, and often reveal novel targets for directed treatments. Yet in a systematic review reported in this week's BMJ Perel and colleagues find that therapeutic efficacy in animals often does not translate to the clinical domain.2

    The authors conducted meta-analyses of all available animal data for six interventions that showed definitive proof of benefit or harm in humans. For three of the interventions—corticosteroids for brain injury, antifibrinolytics in haemorrhage, and tirilazad for acute ischaemic stroke—they found major discordance between the results of the animal experiments and human trials. Equally concerning, they found consistent methodological flaws throughout the animal data, irrespective of the intervention or disease studied. For example, only eight of the 113 animal studies on thrombolysis for stroke reported a sample size calculation, a fundamental step in ensuring an adequately powered study and a precise estimate of effect. In addition, randomisation, concealed allocation, and blinded outcome assessment—standards considered the norm when planning and reporting modern human clinical trials—were applied inconsistently in the animal studies.
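    As an illustration (not drawn from the review itself), a minimal sketch of the kind of a priori sample size calculation the review found so rarely reported, using the standard normal approximation for comparing two group means; the effect size and standard deviation below are arbitrary example values:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison
    of means (normal approximation, two-sided test)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z(power)           # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# To detect a difference of half a standard deviation with 80% power:
print(n_per_group(delta=0.5, sd=1.0))  # 63 per group
```

    An experiment run with, say, ten animals per group against such a target would be badly underpowered, which is precisely the pattern the review describes.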

    A limitation of the review is that only six interventions for six conditions were analysed; this raises questions about its applicability across the spectrum of experimental medicine. Others have found consistent results, however. In an overview of similar reviews comparing results in animal studies and human trials, Pound and colleagues found that only one—thrombolytics for acute ischaemic stroke—showed similar findings in humans and animals.3 In our systematic review of 76 highly cited (and therefore probably influential) animal studies, we found that only just over a third translated at the level of human randomised trials.4 Similar results have been reported in cancer research.5

    Why then are the results of animal studies often not replicated in the clinical domain? Several possible explanations exist. A consistent finding is the presence of methodological biases in animal experimentation; the lack of uniform requirements for reporting animal data has compounded this problem. A series of systematic reviews has shown that the effect size of animal studies is sensitive to the quality of the study and publication bias.6 7 8 A review of 290 animal experiments presented at emergency medicine meetings found that animal studies that did not use randomisation or blinding were much more likely to report a treatment effect than studies that were randomised or blinded.9

    A second explanation is that animal models may not adequately mimic human pathophysiology. Test animals are often young, rarely have comorbidities, and are not exposed to the range of competing (and interacting) interventions that humans often receive. The timing, route, and formulation of the intervention may also introduce problems. Most animal experiments have a limited sample size. Animal studies with small sample sizes are more likely to report higher estimates of effect than studies with larger numbers; this distortion usually regresses when all available studies are analysed in aggregate.10 11 To compound the problem, investigators may select positive animal data but ignore equally valid but negative work when planning clinical trials, a phenomenon known as optimism bias.12

    What can be done to remedy this situation? Firstly, uniform reporting requirements are needed urgently and would improve the quality of animal research; as in the clinical research world, this would require cooperation between investigators, editors, and funders of basic scientific research. A more immediate solution is to promote rigorous systematic reviews of experimental treatments before clinical trials begin. Many clinical trials would probably not have gone ahead if all the data had been subjected to meta-analysis. Such reviews would also provide robust estimates of effect size and variance for adequately powering randomised trials.
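    As a sketch (with hypothetical numbers, not data from any of the cited reviews), the standard inverse-variance fixed-effect combination illustrates how a systematic review turns individual study results into the pooled effect estimate and variance needed to power a subsequent trial:

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Inverse-variance fixed-effect meta-analysis: weight each study
    by the reciprocal of its variance, then pool."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical animal studies (effect estimates and standard errors):
effect, se = fixed_effect_meta([0.2, 0.4, 0.3], [0.10, 0.10, 0.05])
print(round(effect, 3), round(se, 3))  # prints: 0.3 0.041
```

    The pooled standard error from such a review is exactly the quantity a sample size calculation for the ensuing randomised trial would consume.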

    A third solution, which Perel and colleagues call for, is a system for registering animal experiments, analogous to that for clinical trials. This would help to reduce publication bias and provide a more informed view before proceeding to clinical trials. Until such improvements occur, it seems prudent to be critical and cautious about the applicability of animal data to the clinical domain.


    • Competing interests: None declared.