Editorials

Learning lessons from MVA85A, a failed booster vaccine for BCG

BMJ 2018; 360 doi: https://doi.org/10.1136/bmj.k66 (Published 10 January 2018) Cite this as: BMJ 2018;360:k66
Malcolm Macleod, professor of neurology and translational neuroscience
University of Edinburgh, Edinburgh, UK
malcolm.macleod{at}ed.ac.uk

We must review how we use animal data to underpin clinical trials in humans

The development and testing of new treatments is complicated. Many approaches are developed in the laboratory, and most involve animal studies—to give confidence in the biological concept underpinning the treatment, to show safety, or to show efficacy in animal models of the disease in question. These purposes are regulated differently: safety studies must be conducted according to established principles of good laboratory practice, but there is no such requirement for proof of concept or efficacy studies. The regulatory requirements around safety are, understandably, better developed for marketing authorisation of new treatments than they are for approval of clinical trials.

The development of MVA85A for the prevention of tuberculosis (TB) gives us an opportunity to reflect on these processes. This promising intervention was designed to boost the effectiveness of the well established BCG vaccine. It was reported to show efficacy in laboratory studies in animals but did not show benefit when tested in a large clinical trial in infants.1 In the linked feature (doi:10.1136/bmj.j5845),2 Deborah Cohen raises important questions about the development of MVA85A and, in particular, the extent to which emerging findings from animal studies were communicated to clinical trial regulators, funders, ethics committees, and potential participants.

It seems that while the human clinical trial was in the late stages of preparation, an experiment suggested that macaque monkeys given MVA85A in addition to a BCG vaccine had a higher mortality than those given BCG vaccine alone.3 In this experiment, mortality endpoints were reached when monkeys became unwell enough to require euthanasia. Like most laboratory experiments, this one was too small to say whether the increased mortality was real or occurred by chance alone, but it flagged a potential problem, and it would have been prudent at least to make the clinical trial regulators aware of this finding at the earliest opportunity. Indeed, the European Medicines Agency’s Guidelines on Good Clinical Practice recommend that the investigator brochure include “all relevant non-clinical … studies.”4

That these data were apparently not included in the initial investigator brochure (May 2008—six months after the experiment ended), while supportive animal data were included, is a cause for concern. Raw data from the macaques (but not the survival curves) and the results of one statistical test were provided in a November 2008 update to the brochure, seven months before the human clinical trial began.

Some uncertainty surrounds the purpose of the primate experiment, described in the initial research contract as an efficacy study but later characterised as the development of a new experimental model. Quite when the purpose changed is not clear, and a protocol registered before the animal study began5 is not available. Without one, we do not know if the purpose changed before or after the data were available for analysis, and that is critical to assessing the importance of the findings. The study as published is characterised as a test of the model rather than as an efficacy study. Of course, investigators have a duty to change experimental plans when circumstances change if this increases the usefulness of the work; but any such change must be transparent and justified, so that research users can interpret findings accordingly.

Weak evidence

In fact, a recent systematic review and meta-analysis of animal studies has since shown that the MVA85A booster was associated with a small (3%) and non-significant reduction in bacterial load in animals infected with TB and a non-significant increase in risk of death when compared with BCG alone (risk ratio 1.45, 95% confidence interval 0.23 to 8.98).6 Only one of the five included studies that reported risk of death—a guinea pig study with intradermal vaccination—found a statistically significant benefit from MVA85A plus BCG compared with BCG alone.
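
As a rough illustration (my own back-calculation, assuming the review pooled log risk ratios in the standard way), the width of that confidence interval implies a standard error so large that the data exclude almost nothing:

\[
\mathrm{SE}(\ln \mathrm{RR}) \approx \frac{\ln 8.98 - \ln 0.23}{2 \times 1.96} \approx 0.93,
\qquad
z = \frac{\ln 1.45}{0.93} \approx 0.40 \quad (P \approx 0.69).
\]

On these figures the pooled animal data are compatible with anything from a roughly fourfold reduction to a ninefold increase in the risk of death.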

None of the eight published experiments included in that review reported a sample size calculation or blinding of either the assessment of bacterial load or the decision to euthanise, and only one reported random allocation of animals to experimental groups. Unfortunately, this kind of weak evidence base is common in animal studies underpinning human trials. When animal data are pivotal in developing new treatments, we should insist on more robust evidence before proceeding.

The evaluation of MVA85A included data from only 192 animals, and just 40 primates. The idea that this is enough efficacy data in animals to support further large scale testing in humans is widely held but somewhat implausible and in need of review.
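
A standard two-proportion sample size calculation makes the point. This is an illustrative sketch only: the assumed mortality risks of 20% in the control arm ($p_1$) and 40% in the treated arm ($p_2$) are chosen for the example, not taken from the MVA85A studies. With $\bar{p} = 0.3$ the average risk, $q = 1 - p$, two sided $\alpha = 0.05$, and 80% power:

\[
n \;=\; \frac{\left(z_{1-\alpha/2}\sqrt{2\bar{p}\bar{q}} \;+\; z_{1-\beta}\sqrt{p_1 q_1 + p_2 q_2}\right)^2}{(p_1 - p_2)^2}
\;=\; \frac{\left(1.96\sqrt{0.42} \;+\; 0.84\sqrt{0.40}\right)^2}{0.04}
\;\approx\; 82 \text{ per group.}
\]

Even detecting a doubling of mortality would therefore need more than 160 animals in a single two arm comparison, some four times the total number of primates studied.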

Much of the animal data on MVA85A came from a single research group, but the effects observed in animal studies can differ substantially among laboratories.7 8 Much of the variation reflects true biological heterogeneity, so combining findings across several sites might increase confidence that effects observed in animals are robust enough to justify their clinical exploitation.9

Of course, it would be entirely reasonable to proceed to clinical trials on the basis of indirect evidence of potential efficacy (such as the immunogenic response to vaccination in animal models); but when the criteria by which this judgment will be made are not articulated clearly in advance there is a real danger that investigators will be seduced by the positive and rationalise away the neutral or negative, leading to premature clinical trials that may expose participants to harm. We need to develop better and more systematic ways to establish when a drug is ready for clinical trials in humans—and importantly, when it is not.10

Need for cultural change

The story of MVA85A also raises questions about how researchers and institutions respond to criticism. One approach sees criticism as a challenge that must be rebutted and resisted. An alternative characterises criticism as an opportunity to better understand where things might have gone wrong, to allow everyone to learn from what has happened, and to improve how we develop much needed new drugs.

While the second approach is at the heart of the ambition for self correcting and collegiate scientific endeavour, the first draws more from competitive environments, where “winning” isn’t the main thing, it’s the only thing. The current scientific ecosystem sees publication in high impact journals and the award of high value grants as ends in themselves. There is much discussion about how this might change, but until our institutions recognise that their core purpose is to produce research of value to society they risk a slow decline in their reputation, and possibly a faster and more serious erosion of public trust in science. In these troubled times, that public trust is more important than ever.

Footnotes

  • Feature, doi: 10.1136/bmj.j5845
  • Competing interests: I have read and understood BMJ policy on declaration of interests and have no relevant interests to declare. The BMJ declares that the systematic review of animal studies (ref 6) was co-authored by Emily Sena, who is the editor of BMJ Open Science (openscience.bmj.com).

  • Provenance and peer review: Commissioned; not externally peer reviewed.
