
Feature Investigation

Oxford TB vaccine study calls into question selective use of animal data

BMJ 2018; 360 doi: https://doi.org/10.1136/bmj.j5845 (Published 10 January 2018) Cite this as: BMJ 2018;360:j5845
Infographic available: a timeline of events, trial dates, and publications related to the MVA85A vaccine booster.

Deborah Cohen, associate editor, The BMJ
dcohen@bmj.com

Researchers were disappointed when a clinical trial of a new tuberculosis vaccine failed to show benefit. But should it have gone ahead when animal studies had already raised doubts, and what does it mean for future research? Deborah Cohen investigates

In July 2009, researchers began a clinical trial in infants in South Africa, testing the newest hope for an improved vaccine against tuberculosis.1 Nearly 2800 infants took part. Their parents consented on the basis of information that included the statement that the trial vaccine, MVA85A, “has been tested in animals and was shown to be safe and effective.”

Similar statements had been used to obtain funding and ethical and regulatory approval for the trial. In one funding application, for example, the researchers said that the MVA85A booster had been shown to be more protective than BCG alone in four animal models.

Information given to ethics committees, regulators, and trial investigators said that protective efficacy studies had been carried out in four animal models—mice, guinea pigs, cattle, and monkeys. They reported “evidence of protection” against Mycobacterium tuberculosis when MVA85A was given as a booster to BCG.

However, an investigation by The BMJ has unearthed concerns about how the researchers, from Oxford University, used the results of animal studies selectively to gain funding and approval for human trials, publicly relying on claims that animal studies had shown the new vaccine to be protective while privately playing down or dismissing unsupportive experiments as “failed” or irrelevant. Disappointment at the apparent failure of animal models to predict the outcome of human trials has, in turn, led major funders of TB research to rethink their funding priorities, with allegations that this has slowed progress in the entire field.

The investigation asks whether the parents of babies included in the South African trial were misled and raises questions about the transparency around regulatory decision making and what data are required when deciding to move from animal studies to human trials. It also asks if universities are equipped or willing to investigate allegations of scientific misconduct against high profile academics who bring in substantial funding.

People who have spoken to The BMJ about this saga consider it to be but one example of a systemic failure afflicting preclinical research. Symptoms of this failure include, they say, poor study design; lack of understanding of statistical methods; lack of clear prespecification of the purpose of animal studies; selective reporting of results driven by pressure to obtain funding and ethics approval; failure of regulatory, ethical, and funding bodies to understand or to pay sufficient attention to animal research; the inaccessibility of supporting evidence and protocols; and poorly managed financial and academic conflicts of interest.

Jonathan Kimmelman, associate professor in the Biomedical Ethics Unit at McGill University with a special interest in translational research, says that this is not an isolated case. “It’s widely recognised that animal studies intended to support drug development are often riddled with flaws in design and reporting. But it sometimes feels as if regulators and ethics committees missed the memo. Unfortunately, there are other cases where new treatments were put into human testing on animal evidence that was foreseeably flawed, incomplete, or even negative,” he says.

Animal data

MVA85A is a subunit vaccine designed as a parenteral booster to be given after BCG. Led by Helen McShane, professor of vaccinology at the Jenner Institute in Oxford, the team that developed the vaccine hoped it would confer extra protection against TB compared with BCG alone.

Before the 2009 efficacy trial in South African babies, it had been tested in mice, guinea pigs, cattle, and monkeys and had also undergone early phase safety and immunogenicity trials in humans. However, in 2015 an independent systematic review of eight studies using 192 animals published between 2003 and 2010 concluded that the data did not provide evidence to support efficacy of MVA85A as a BCG booster. It also raised questions about the design, execution, and reporting of the studies.2

Paul Garner, head of the Centre for Evidence Synthesis in Global Health at the Liverpool School of Tropical Medicine, was one of the authors. He told The BMJ that the efficacy of MVA85A in the animal studies has been overstated.

“Our meta-analysis suggests an apparent lack of evidence of efficacy in animals in data collected before the start of the recent phase 2b trial in South African children that enrolled children between 15 July 2009 and 4 May 2011,” the authors wrote.

Two studies showed significant protection, but in both MVA85A was given differently from the way it was given in humans. One was in guinea pigs, in which MVA85A was further boosted with a recombinant fowlpox virus (FP9Ag85a). The other was a mouse experiment in which both BCG and MVA85A were given intranasally rather than intradermally.

But one particular study concerned Garner and his colleagues. In November 2006, a study in rhesus macaque monkeys started at the UK government laboratories in Porton Down in collaboration with McShane’s team. The 16 macaques were given BCG alone, BCG plus MVA85A given intradermally, or neither, and then exposed to TB. The study finished in November 2007, 18 months before the start of the efficacy trial in South African infants and three months before the start of a safety trial in infants. It was published in 2010.3

In line with humane practice, the macaques were put down if they became too ill during the study or at the end of the trial. As might have been expected, none of the four unvaccinated macaques survived to the end of the experiment, whereas four of the six macaques given BCG alone did. Worryingly, however, only one of the six macaques that received BCG plus MVA85A survived (fig 1).

Fig 1

Kaplan-Meier plot of survival of vaccinated and unvaccinated macaques after challenge with M tuberculosis. Survival of BCG vaccinated animals differs significantly from no vaccine controls using the Kaplan-Meier log rank test (P=0.048) but not significantly using Cox regression (P=0.07). Redrawn with permission from Sharpe et al3

Garner told The BMJ that although the difference between the BCG and MVA85A groups wasn’t statistically significant, the Porton Down study gave a strong signal that the MVA85A vaccine was hastening the development of TB in the macaques, raising the possibility that MVA85A was actually impairing the effectiveness of BCG.
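
How such a comparison works can be seen in outline. Below is a minimal sketch, assuming the Python lifelines library, of the two tests reported in the Fig 1 caption (a Kaplan-Meier log rank test and a Cox regression), run on invented survival times with the group sizes the article reports (four unvaccinated, six given BCG alone). These placeholder values are not the Porton Down data.

```python
# Minimal sketch of the Fig 1 analysis: log rank test and Cox regression
# comparing BCG-vaccinated macaques with unvaccinated controls.
# The survival times below are hypothetical placeholders; only the group
# sizes (4 unvaccinated, 6 BCG alone) come from the article.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Weeks to humane endpoint; event=1 means the animal was put down,
# event=0 means it survived to the study cut-off (censored).
bcg_weeks = [14, 26, 52, 52, 52, 52]
bcg_events = [1, 1, 0, 0, 0, 0]       # four of six survived
control_weeks = [8, 10, 12, 15]
control_events = [1, 1, 1, 1]         # none survived

# Kaplan-Meier log rank test between the two groups
lr = logrank_test(bcg_weeks, control_weeks,
                  event_observed_A=bcg_events,
                  event_observed_B=control_events)
print(f"log rank P = {lr.p_value:.3f}")

# Cox regression on the same data: group=1 for BCG, 0 for no vaccine
df = pd.DataFrame({
    "weeks": bcg_weeks + control_weeks,
    "event": bcg_events + control_events,
    "group": [1] * 6 + [0] * 4,
})
cph = CoxPHFitter().fit(df, duration_col="weeks", event_col="event")
print(cph.summary[["coef", "p"]])
```

With groups this small, the two tests can land on opposite sides of the conventional 0.05 threshold, as they did in the published analysis (log rank P=0.048, Cox P=0.07), which is why a result like the one Garner describes is better read as a signal than as proof either way.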

Academic concerns

This is not the only controversy around the Porton Down monkey study. Some years before the systematic review, it triggered a bitter dispute between academics at Oxford University that drew in senior figures at the university and others at philanthropic organisations.

In March 2007, Peter Beverley, at that time a principal research fellow at Oxford University and now visiting professor at Imperial College’s Tuberculosis Research Unit, attended a TB conference in Vancouver: “Tuberculosis: from laboratory research to field trials.” Data detailing the macaques’ immune responses in the Porton Down study were presented at the conference. The study was still ongoing and word emerged that all was not well with the MVA85A macaques.

Beverley told The BMJ: “A colleague spoke to the poster presenters, the people who had done the work in Porton, and was told that the monkeys that had had the booster vaccine were actually reaching their humane endpoints rather rapidly and having to be killed because they had [TB], and that seemed a little bit strange.”

Beverley didn’t think much more about it, until he went to Stockholm for the seventh international conference on the pathogenesis of mycobacterial infections in June 2008. A poster once again presented data from the Porton Down macaque study, which had finished seven months before. This one showed the Kaplan-Meier curves charting the macaques’ survival over time, but only from the groups that had received either BCG or nothing. The BCG plus MVA85A group was omitted.

The BMJ has seen copies of the poster. It concluded that “long term survival is an important read out of vaccine efficacy” and that BCG afforded “66% protection against severe TB disease,” which was, according to the poster, statistically significant. “This is in line with the efficacy of BCG in humans over 12 weeks post-vaccination,” it said.

Public Health England, which oversees Porton Down, told The BMJ that this poster was primarily about BCG and that the MVA85A data were presented at an international symposium in New Delhi in December 2008. It said it did not have a copy of the poster. It also said that staff don’t recall voicing any concerns about the macaques that received MVA85A and therefore rejected Beverley’s claims about such conversations.

Nevertheless, the absence of the data in the June poster worried Beverley, particularly since excitement was building around MVA85A. A phase IIa safety trial in South African infants had started in February 2008. In June that year Oxford University struck a deal with Emergent Biosolutions to jointly develop MVA85A. The university held a 49% stake, with some of the shares held by the scientists, including McShane and her boss, Adrian Hill, director of the Jenner Institute.

Then in July 2008 the university announced on its website that a larger trial of MVA85A in infants was planned, this time testing efficacy.

Beverley contacted Peter Ratcliffe, professor of clinical medicine and head of the Nuffield Department of Medicine at Oxford University, in August 2008, to express his concerns about the Porton Down macaque study. He also questioned whether McShane had properly represented the other animal data, since his review of the published literature, which he gave to Ratcliffe, suggested that the animal studies in mice, guinea pigs, cows, and monkeys had failed to show MVA85A was effective when given as a parenteral booster.

Ratcliffe responded by inviting Beverley and a colleague to view the results of the Porton Down study. The following month (September 2008) they were shown into a data room but were not allowed to take in any pens, notebooks, or phones. Oxford University has told The BMJ that this is standard practice.

What Beverley saw didn’t assuage his worries. The Kaplan-Meier survival curves showed that the BCG plus MVA85A macaques had reached their humane endpoints at similar times to the unvaccinated macaques. In answer to questions put by The BMJ, Porton Down denies this, saying there is “no evidence to support claims that the MVA85A vaccinated group developed progressive disease more rapidly.”

Should the trial have gone ahead?

There is debate about what should have happened in light of these results. By the time the macaque study ended in November 2007, phase I safety trials in adults and infants were well under way and phase II efficacy trials in both adults and infants were planned.

Kevin Urdahl, associate professor at the Center for Infectious Disease Research in Seattle, specialises in immune responses to TB. He says the prevalent view among scientists was that a large efficacy trial in humans was needed: tuberculosis kills over a million people a year, and BCG’s efficacy varies in different regions of the world.

“[There was a sense that] we’ll never know whether this is going to be helpful or not unless we test it in people,” he said, adding: “Yes there were some people who questioned whether it should go forward and others who felt it should go forward, and I think reasonable people could differ on that.”

However, others are less convinced. One scientist who wants to remain anonymous told The BMJ: “When I found out the clinical trial [the phase IIb efficacy trial in South African infants] was still going ahead, it just completely did not make sense to me.”

The scientist said, “I would have stopped at that point and called for clinical trials to be halted while I resolved what was happening with the result in this monkey experiment. There could be lots of explanations for it: some of it could have been benign, but at least that would have been the prudent thing to do.”

In response to Beverley’s concerns, Oxford University initiated an inquiry in December 2008, led by scientists from outside the university. This was to be the first of three inquiries into the dispute between Beverley and McShane, two of which centred on Beverley’s concerns about the presentation of McShane’s research in the scientific literature and elsewhere.

The BMJ asked to see the inquiry reports under freedom of information but this was refused. However, we have seen letters relating to each of them.

The first inquiry called for more oversight of clinical trials sponsored by the university to ensure the “accuracy, completeness and timeliness of all data” submitted to the regulators. In response, the university set up a vaccines oversight committee in 2012. The inquiry scientists also said it would have been good practice for the “potentially adverse reaction” observed in the monkey experiment to be reported to the authorities sooner, “within about two weeks of the termination of the experiment, irrespective of the interpretation of its significance.”

But they concluded that scientists at the Jenner Institute had not infringed “statutory provisions” set out in good clinical practice guidelines.

Perils of whistleblowing

What happened in the aftermath of Beverley’s complaint highlights the perils facing academics who raise concerns about the scientific integrity of senior scientists.

Although the MVA85A clinical trials continued to be funded and approved, life became tougher for Beverley. In May 2009 Oxford University closed the animal house in which he had been carrying out experiments. The university was building a new special containment animal house for studies using dangerous pathogens such as TB.

Beverley expected to be allowed to use the new animal house—he had a grant for his TB research, and when a university accepts funding it commits to providing the facilities needed for the work described in the application. McShane and Hill were to use it too.

But to his surprise he was told that a “safety issue” meant he wouldn’t be allowed to use it. The university found him alternative space at Imperial College, London, some 50 miles away, meaning he had to transport animals between the two sites. This put him in breach of Home Office animal research guidelines, which recommend that animals should not be moved around.

Oxford University, however, told The BMJ that it “strictly adheres to Home Office animal research guidelines and will not have placed researchers in a position where these guidelines are breached.”

Just as he was starting his experiments at Imperial, Oxford University set up an inquiry to determine who should use the new special containment animal house. It was carried out by Peter North, a retired judge and former vice chancellor of Oxford University. Beverley has never seen the report from this inquiry.

However, in a letter seen by The BMJ, North wrote that the animal house “cannot be safely shared by these two groups” and joint use may lead to “reputational risks.” North had considered a “range of matters,” including each individual’s “contribution to the funding of the facility and equipment, research need, academic record and research profile and the consequences of access being denied.” On this basis he advised that priority be given to Hill and McShane.

All went quiet for a few years, until the systematic review of the animal studies was published in 2015, which suggested that claims of efficacy of MVA85A had been overstated. The publication fuelled Beverley’s belief that the scientific literature had continued to misrepresent the animal studies despite his 2008 complaint to Oxford University.

So he wrote to the Medical Research Council and Wellcome Trust—both of which were major funders of the Jenner Institute team. He also wrote to the UK Medicines and Healthcare products Regulatory Agency (MHRA), as some of the phase I trials had been conducted in the UK.

The MHRA told Beverley that it was interested only in safety and not efficacy. Both the MRC and the Wellcome Trust viewed his complaint as an employee matter and passed it to Oxford University, which instituted a third investigation in 2016.

This was led by two senior Oxford academics and the chair of the university’s vaccine oversight committee, who is an academic at Leeds University. The inquiry’s report was not made public but The BMJ has seen a copy.

It concluded that there was “no evidence of selective referencing, of deliberate delay in publishing, of selective use of statistics, or of lack of representation of pre-clinical data to the relevant regulatory, ethical or grant bodies.”

It said it would have been unethical for the phase II trials not to proceed on the evidence presented and that Beverley was vexatious in writing the letters.

Beverley is critical of the conflicts of interest of those leading the inquiry. “Clearly, neither of those people [at Oxford University] would wish to admit that there had been any wrongdoing in the department of medicine or the vaccine oversight committee,” he said. Oxford has denied there were any potential conflicts.

What the regulators were told

Plans for the phase II efficacy trial in infants began in earnest in 2008. Researchers seeking approval for a trial must submit an investigator’s brochure, which contains clinical and non-clinical data about a product and is updated as new data come to light. The BMJ obtained a copy of the investigator’s brochure for MVA85A dated May 2008.

This was completed before Beverley’s complaint to Ratcliffe and was sent by McShane to South Africa’s Medicines Control Council (MCC) and the human research ethics committee of the University of Cape Town to support her application to conduct the efficacy trial in infants. Both organisations suggested that this was the document on which the initial approval was based.

It states that there is “evidence of protection” against TB using MVA85A in four animal models. Although some positive unpublished data were included that showed trends towards improved protection, other less favourable data from mice, guinea pigs, and the Porton Down macaque studies, which had finished six months previously, were not.

The BMJ applied to the university for the MVA85A investigator's brochures under freedom of information, and was sent a batch dating from 2006 to 2015. The May 2008 brochure, which we had obtained by other means, was not included. When we queried this, Oxford University said it was an oversight.

The batch did include a brochure dated November 2008, which was submitted to the MCC. Like the May 2008 brochure, this said that evidence of MVA85A’s protective effect against TB had been seen in four animal models—cattle, guinea pigs, mice, and monkeys. It contained raw data and the results of one statistical test from the Porton Down study, but the Kaplan-Meier plot and statistics that Beverley had seen in September that year in the Oxford University data room were absent.

The brochure also played down the significance of the Porton Down study, saying that conclusions could not be drawn from it because BCG had only a marginal protective effect in monkeys.

The BMJ sought to understand whether this information was received before or after the MCC and ethics committee approved the trial.

The MCC told The BMJ that it finally approved the infant efficacy trial in December 2008. It said that the results from the macaque study were provided as “a supplement to the [November 2008] investigator’s brochure,” which it received after one of its committees had recommended approval. The results “were considered during the evaluations but did not change the recommendations for approval.”

The BMJ asked both the MCC and McShane for a copy of the supplement. The MCC did not respond. McShane said she didn’t know what document we were referring to.

The University of Cape Town ethics committee said it knew about the Porton Down study. It refused to say when information about the study was provided or the date that it approved the study. But it did say that when the updated November 2008 investigator’s brochure was submitted, the information was “insufficient to overturn the approval.” This suggests that the approval predated receipt of the macaque study data.

However, neither the MCC nor the ethics committee has shared original documentation, and the basis on which they approved the trial remains unclear.

McShane insists there was no problem with delay in sharing the information. “The important fact here is that the regulators and the ethics committees saw the raw data eight months before we started the infant efficacy trial. They had ample opportunity to question the data and to query them if they wanted to,” she said.

But Jimmy Volmink, dean of the medicine and health sciences faculty at Stellenbosch University in South Africa, is less convinced. After he became aware of the Porton Down study, he tried to find out what the authorities knew before the efficacy trial in infants had started.

He contacted the ethics committees at his own institution and the University of Cape Town. But his requests for more information were refused.

“[The point of] research ethics committees is the protection of patients and of the public,” he says, adding: “I found this refusal to disclose information about a matter that was so clearly of public interest both confusing and very disappointing.”

His request was referred to the MCC in 2016. It provided no extra information but reassured him that it was going to investigate.

“[I was] sent emails asking for ‘more details of the studies concerned,’ which I can only describe as puzzling as copies of the relevant publications had in fact been provided on two separate occasions,” he says. He heard nothing more.

Kimmelman shares these concerns. “When protocols are up for review, ethical committees are entitled to an unbiased description of all relevant supporting evidence—even if it is equivocal. Whereas in medicine, clinical trial registries help prevent selective presentation of evidence, there is no comparable mechanism for preclinical evidence. That means when you review a protocol you have no way of knowing whether the animal data are representative, comprehensive, or simply biased,” he says.

So what information should regulators be given and when? The guidance about this seems open to interpretation. Internationally agreed standards state that “all relevant” animal studies are expected to be provided, but what does that include?

Indeed, in the investigator’s brochures obtained by The BMJ, the animal data are presented inconsistently, with less favourable data selectively presented and references to their corresponding publications omitted (see infographic).

Malcolm Macleod, professor of translational neuroscience and neurology at Edinburgh University, is a member of both the UK Home Office animals in science committee and the UK Commission for Human Medicines. He spoke to The BMJ in his capacity as an academic and told us that regulatory authorities want to get as complete a picture as possible of what is already known about a subject and of the information held by the researchers.

“If I was the regulator who had given approval on the basis of the information available to me and then I discovered later that there was more information, I would be very concerned indeed as to the integrity of my approval process,” he said. The macaque data should have been shared within weeks if not days of the study ending—and if necessary a phone call should have been made by the investigators, Macleod said.

Oxford University told The BMJ that the data were given directly to the regulators. McShane said that the regulators “all received the raw data from the primary outcome” of the Porton Down study in an investigator’s brochure in November 2008. However, as stated above, the brochure didn’t present all the relevant statistics and it said there was “no significant difference between the three groups in terms of survival.”

It also cast doubt on the findings: “This aerosol challenge protocol has not been used in macaques before and it is difficult to draw any clear conclusion from this experiment.”

What the funders saw

McShane’s team relied on statements that the vaccine had been shown to be safe and effective in animal models when applying for funding from philanthropic and public bodies. The South African infant efficacy trial received at least £8m (€9m; $10.9m) from various parties. In a successful 2007 £4m grant application to the Wellcome Trust, details of which The BMJ obtained through freedom of information, the Oxford team said that efficacy of the MVA85A booster had been “demonstrated in 3 preclinical models: mice, guinea pigs and non-human primates.”

An application to the EU funded European and Developing Countries Clinical Trials Partnership for funding for a second efficacy trial in HIV positive adults submitted in December 2008 stated that the MVA85A booster had been shown to be more protective than BCG alone in four animal models. The application did not refer to all the available animal data—the Porton Down macaque study, for example, was not mentioned—and the study was awarded €9.47m. (This trial went ahead and also failed to show efficacy of MVA85A.4)

The Porton Down study also got scant mention in a July 2008 presentation by McShane and Hill to another key funder of the South African trial: Aeras’ Vaccine Selection Advisory Committee. In a letter to Oxford University’s Peter Ratcliffe in September 2008, they maintained that the macaque data were shared and the committee didn’t have “safety concerns.”

However, a copy of the presentation obtained by The BMJ lists studies in cows, mice, macaques, and guinea pigs as part of the preclinical data package. Although it mentions the Porton Down study, it contains no information about how many macaques in each group survived, or for how long.

When we asked McShane about the omission she said the data were not included because it was “not possible to determine a vaccine effect”: the “challenge dose was, with hindsight, too high,” making the results impossible to interpret.

Were trial participants misled?

But perhaps of greatest concern is whether parents of babies enrolled in the MVA85A trials were misled.

Parents of participants were told: “MVA85A has been tested in animals and was shown to be safe and effective.”

Volmink is critical of the information provided. “I don’t think it’s a fair reflection of the evidence that I have seen, and I have grave concerns about communicating the evidence in this particular way, especially given the context in which we work in South Africa,” he says.

The communities in which the trials were done are poor, and Volmink says people are eager to participate in studies, hoping to receive beneficial treatments that would otherwise not be available.

“If these claims are misleading about possible protective effects of a vaccine, or if investigators play down the possibility of harm from exposure to the vaccine, I think that would be particularly egregious in this context and one might consider this a form of exploitation of a vulnerable population,” he says.

McShane strongly denies misleading participants. “The information sheet was reviewed by the ethics committee and by the regulators. It is their job to determine whether or not what we say in that information sheet is an accurate representation of the facts,” she says.

McShane says that it’s unusual for patient information sheets to mention the animal data. She says her team conducted “the most cautious clinical development plan”—some say “too cautious,” she adds.

The ethics committee agreed with McShane. It told The BMJ that human safety data carried much more weight in relation to “human risk and benefits for participants.” It said the participant information leaflet contained limited information on the animal studies as well as the more relevant safety studies in humans. It said it was acceptable not to include the results of the unpublished Porton Down study.

Oxford challenged

The BMJ asked McShane about the apparent lack of evidence about MVA85A’s efficacy in the animal studies. She reiterated that there were “published data supportive of efficacy in each case” and the “overall pattern with MVA85A vaccination in animals was clear and positive.”

When asked for further clarification she said: “These data are all published and in the public domain.”

While the Oxford researchers relied on claims of protective benefit in the animal studies for funding and approval, they were less positive about these studies when speaking privately.

In a letter to Oxford University after Beverley’s complaint in 2008, Hill and McShane said that they agreed that “it is now becoming increasingly clear” that boosting with MVA85A parenterally “does not consistently enhance protection in the murine model to an extent that can be reliably detected with small group sizes.” They also dismissed the guinea pig model as being unreliable.

In responding to questions from The BMJ they have also sought to dismiss the Porton Down study. McShane told us that its purpose was not to assess efficacy of the new vaccine but to test a new aerosol challenge model. She said the study used an exceptionally high dose of TB—20 times higher than is given today by Porton Down—and called it a “failed” experiment because the BCG vaccine in the control group didn’t work as expected, although the 2008 Stockholm poster stated that BCG had worked as predicted.

She said that “it is not standard practice to put figures on animal data into investigational brochures” and “there was no clear BCG effect in this experimental study with a new TB vaccine animal model.”

The BMJ has obtained documents that dispute McShane’s claim that the study was not about efficacy.

A copy of the contract research agreement between Porton Down and Oxford University dated 8 April 2004, obtained under freedom of information, lists various studies, including one that will challenge primates with TB. The studies were intended to “provide evidence of the protective efficacy of the vaccine.” The results can then be “extrapolated to indicate the likely protective efficacy of the vaccine in humans,” it said.

A letter from a Porton Down scientist to Oxford University, in response to Beverley’s concerns, also seems to suggest it was conceived as an efficacy study. Dated 4 September 2008—almost a year after the macaque study had finished—the letter says that the study sought to determine the immunogenicity of both BCG and the MVA85A booster in rhesus macaques and “their efficacy against aerosol challenge with M tuberculosis using survival as the measure of efficacy.”

The same letter also seems to support Beverley’s view that staff at Porton Down thought the MVA85A macaques were ailing.

The author wrote that, in the conversation about how the macaques were faring during the study, the Porton Down scientists were “probably describing the likely cause of the respiratory symptoms and described the nodes as forming a ‘collar’ around the trachea, to get across the concept of the encirclement and subsequent restriction that was occurring.”

As for McShane’s view that this was a failed experiment, other scientists disagree. BCG worked as expected in the macaques, they say. In both animals and humans, it works some of the time and not others. That’s why there’s a need for a booster.

JoAnne Flynn, professor at the department of microbiology and molecular genetics at the University of Pittsburgh School of Medicine, specialises in non-human primate research into TB. She says the dose of TB given in the Porton Down experiment reflects the understanding and practice at the time.

“The Sharpe experiment [Porton Down macaque study] is not a failed experiment—it did not show improved control by the vaccine over BCG, and, just like in humans, BCG efficacy is variable in non-human primates,” she says.

The questions about the study’s purpose—to test vaccine efficacy or a new aerosol challenge model of TB—could be resolved if either Porton Down or Oxford University would share the study protocol. But Porton Down has refused twice, citing concerns about staff safety and saying that the protocol is “fully described in the paper.” McShane told The BMJ she doesn’t have a copy of the protocol. They also turned down our request to see the application for ethical approval to conduct the study.

The difficulty in obtaining such basic information is concerning, says Macleod. “While we have a clear roadmap for what a scientifically and ethically robust clinical trial programme looks like, we do not have the same thing for (non-human) animal studies. Since those animal studies often feed into clinical trials—as is the case here—this has ethical implications for both animal and human studies,” he says.

No clear route from animals to humans

Helen McShane says the move to human trials was based on evidence of improved protection in mice and guinea pigs.

Animal studies continued alongside the human trials. Fourteen early stage clinical trials of parenteral MVA85A have shown safety and immunogenicity in humans, she added.

However, it’s unclear whether the guinea pig and mouse studies were the basis for the transition from animals to humans. A copy of an investigator’s brochure from 2006—the earliest one obtained by The BMJ—does not include the statistically significant mouse study McShane was referring to, nor the guinea pig data. Both studies had been published by that time.

The South African Medicines Control Council told The BMJ that the first clinical trial it approved started in 2004. It said that in the guinea pig studies the MVA85A booster gave “equivalent protection” to BCG. The “lack of supporting non-clinical data on efficacy was acknowledged [by them],” it said.

But Jonathan Kimmelman is concerned: “Human protections policies state that, when you expose patients to risk in trials—those risks must be favourably balanced against benefit. With early phase trials like these, that evaluation is largely based on a careful vetting of animal evidence supporting the trial. It’s not clear to me that ethics committees routinely evaluate the supporting evidence. That leaves sponsors with a huge amount of discretion concerning the launching of clinical development.”

Does the route of administration matter?

According to Beverley, only one animal study showed a significant protective effect compared with BCG alone: when both BCG and MVA85A were given intranasally in mice.

JoAnne Flynn says that, though there aren’t any hard and fast rules, she thinks the route by which a vaccine or booster is given in experimental animals should be the same as the route tested in humans.

“Route matters for vaccination, and translating data from an animal model in which a vaccine was delivered by one route to humans where a vaccine is delivered by another route is not appropriate. It may be that the route won’t matter, but I consider this unlikely,” she says.

Shadow over future research

When the trial showed no evidence of added protection, it came as a surprise. The animal models had suggested potential efficacy, said McShane and her coauthors on publication of the trial in 2013, pointing again to studies in four animal species.1

It was a view that many in the TB community accepted, believing that MVA85A had been well vetted before being trialled in humans.5

“Mouse and non-human primate studies had shown that MVA85A provides increased protection against TB over BCG alone,” an article in Nature commented.5 “It is not clear why these results did not translate to humans.”

But for Beverley the episode highlights how miscommunication of research findings in animal studies can hold science back.

“It’s been claimed that because the animal models did show a suggestion of effectiveness of the vaccine, they failed to predict the outcome of the trial. This is absolutely incorrect, the animal models actually showed no statistically significant protective effect of MVA85A, which exactly parallels the outcome of the trial,” he says. Incorrect claims that the animal models failed to predict the outcome of the efficacy trial cast doubt on what he thinks are useful animal models.

“It clearly makes it more difficult to bring forward other vaccine candidates—if we don't trust the animal models, how are we going to develop new vaccines?” he asks.

Several sources have told The BMJ that another consequence of the apparent discrepancy between the animal and human studies was the decision of Aeras (the Gates Foundation TB vaccine development foundation) to pull back funding for human trials and instead shift back to a focus on basic science and animal models.

The foundation had been a major supporter of TB vaccine development and provided substantial funding for MVA85A. It reasoned that if this candidate vaccine was effective in four animal models and failed as a booster in humans, then the vaccine development community was on the wrong track and needed to regroup.

Debate about animal research

The 2015 systematic review of the animal data drew hostility from some people within the animal testing world.2 Shah Ebrahim, honorary professor of public health at the London School of Hygiene and Tropical Medicine, was coeditor of the International Journal of Epidemiology when it published the review. He says that some of the peer reviewers’ reactions were quite extreme.

“One of the reviewers said if you publish this a lot of people are going to be very unhappy with you. Another reviewer telephoned me—which was a first—to request this paper be rejected,” he says.

Ebrahim was particularly alarmed by one argument against publication—that the negative conclusions of the systematic review would mean people would be less likely to publish their animal data. “There was a staggering argument that really depressed me: that publishing the review would push negative findings underground,” he says.

Peer reviewers also said it was unfair to pick on the MVA85A researchers, on the grounds that many other groups do not publish negative preclinical efficacy data.

“I think this is really serious grounds for concern because research funders are being duped by investigators if they’re funding in good faith research that is supposedly going to help us improve human health and progress science,” Ebrahim says.

Helen McShane was one of the peer reviewers of the systematic review. She told The BMJ the role of peer reviewers is to express concerns where they have them. She was worried that “publishing of this negative article may act to inhibit others from publishing negative data.”

She also said she has misgivings about the methods of the systematic review, how the data were analysed, and the expertise of the authors. She said that no one else applies this methodology “to small numbers of vaccine studies in small numbers of animals” and implied that it was only published because one of the authors, Paul Garner, was a senior editor of the journal.

Garner isn’t listed as a senior editor on the journal’s website but as one of 44 members of the editorial board.

Garner disputes McShane’s view that he and his coauthors averaged the data on outcomes. Instead, he says they systematically collected the outcomes across all studies and displayed them individually in tables and graphs. They took into account the small number of animals in each study, he says.

“It is valid to systematically seek, describe, and appraise studies to summarise a body of evidence. All our conclusions were drawn from qualitative statements examining the summaries of each individual study,” he says.
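
The distinction Garner draws, describing each study individually rather than pooling the results, can be illustrated with a minimal sketch in Python using pandas. The entries below are hypothetical placeholders, not results from the review.

```python
# Sketch of a per-study summary table: each study keeps its own species,
# sample size, and reported outcome, and conclusions are drawn
# qualitatively from the rows rather than from a pooled average.
# All entries are hypothetical placeholders, not data from the review.
import pandas as pd

studies = pd.DataFrame(
    [
        ("study A", "mouse",      6, "no significant difference"),
        ("study B", "guinea pig", 8, "favours BCG plus MVA85A"),
        ("study C", "macaque",    6, "favours BCG alone"),
    ],
    columns=["study", "species", "animals_per_group", "reported_outcome"],
)

# No averaged effect size is computed; the table itself is the summary.
print(studies.to_string(index=False))
```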

Ebrahim thinks animal researchers have much to learn from clinical scientists.

“I’m not anti animal experiments. What I am anti is the non-reporting of negative findings, the lack of protocols, the lack of clear reporting of what was done and openness. I think if we were to talk to scientists in the 1960s and 1970s who were doing trials in humans, they were in much the same situation as animal experimenters are now,” Ebrahim adds.

But those who question the methodology or scientific quality of animal studies risk being branded as “anti-vivisectionists.” Ebrahim reports having been leant on by senior figures in medical science not to publish a paper in The BMJ in 2004 that raised questions about the quality of animal research.6

“One of the problems for the animal experimental world is not just the antivivisectionist lobby. It’s that there is a general almost tribal [feeling]—people band together to try to build an edifice against anyone looking inside who’s not an animal experimenter,” he says.

Footnotes

  • Competing interests: I have read and understood BMJ policy on declaration of interests and have no relevant interests to declare. The BMJ declares that the systematic review of animal studies (ref 2) was coauthored by Emily Sena, who is the editor of BMJ Open Science (openscience.bmj.com).

  • Provenance and peer review: Commissioned; externally peer reviewed.
