Testing times for the government’s favoured antibody kit
BMJ 2020; 371 doi: https://doi.org/10.1136/bmj.m4440 (Published 16 November 2020) Cite this as: BMJ 2020;371:m4440
In September, Boris Johnson announced Project Moonshot, an attempt to use mass testing for covid-19 “to identify people who are negative—who don’t have coronavirus and who are not infectious—so we can allow them to behave in a more normal way, in the knowledge they cannot infect anyone else.” New tests, Johnson said, “which are simple, quick, and scalable will become available.”
A consortium of businesses—including Abingdon Health, BBI Group, CIGA Healthcare, and Omega Diagnostics—was assembled in April by John Bell, professor of medicine at Oxford University and the government’s life sciences adviser, in a bid to create a test kit that could “be read by the person at home or on their mobile phone camera.” Out of this group, called the UK Rapid Testing Consortium, came the AbC-19 rapid antibody test. The test uses a small drop of blood from a finger prick and shows results in 20 minutes, without the need to send a sample to a laboratory.12
The consortium was awarded £10m (€11m; $13m) by the government to buy components to manufacture test kits in June, and towards the end of July press reports claimed that ministers were “making plans to distribute millions of free coronavirus antibody tests after a version backed by the government passed its first major trials.”
On 10 June, however, the head of the NHS test and trace programme, Dido Harding, said that not enough was known about what level of protection coronavirus antibodies provided. In July, a study by King’s College London found a significant drop in antibody potency after three months.3 Plans were scaled back and the focus adjusted towards an antibody test only for surveillance studies, to help build a picture of how the virus has spread across the country, rather than for home testing use.
Yet when, in October, the government announced its decision to purchase one million AbC-19 tests for use in surveillance studies in a £75m contract awarded without public tender, it prompted a legal action alleging maladministration of public funds. Judicial review proceedings issued by the Good Law Project, a non-profit legal organisation, claim that the absence of an open tender, the failure to evaluate the accuracy of the tests, and the government’s active financial involvement in the consortium together mean that the purchase is unlawful.
The legal action issued on 11 November4 comes as The BMJ publishes an evaluation of AbC-19 by Public Health England (PHE), finding that one in five people with positive results on the test could be wrongly told that they had covid-19.
Commenting on the significance of the findings, Dipender Gill, clinical research fellow at Imperial College, London, said: “The covid-19 pandemic represents an unprecedented challenge, and governments around the world are under pressure to take decisive action. However, even if taken with the best intentions, the wrong choice has the potential to do considerable harm. High quality evidence must be considered transparently, in consultation with experts. Such data should be sought when they are not already available.”
So how did AbC-19 end up as the government’s antibody test of choice to support nationwide surveillance studies among tens of thousands of people, and how can we be sure that it is the best test the market has to offer? Tracing the story highlights the opaqueness and lack of accountability that have become common during the pandemic.
The AbC-19 test did not have to wait for independent evaluation before getting a CE mark for professional use in July. Instead, the rules allow manufacturers to self-assess their tests; Abingdon Health’s figures were posted on its website. The Medicines and Healthcare products Regulatory Agency (MHRA) recommends that test manufacturers reach a minimum standard of 98% sensitivity (ability to correctly identify a true positive sample) and specificity (ability to correctly identify a true negative sample). Abingdon Health reported that its test had a sensitivity of 98% and specificity of 100%, so was within recommended standards.
The only published evidence on the accuracy of AbC-19 before the publication of The BMJ paper last week was a preprint, not yet peer reviewed, reporting results from a study conducted at Ulster University. The study was funded by members of the consortium and led by professors Jim McLaughlin and Tara Moore, who were recruited as consultants in April by CIGA Healthcare, a member of the UK Rapid Testing Consortium. They found the test to have 97.7% sensitivity and 100% specificity. The study’s methodology has been criticised, including by academics brought together by the Science Media Centre, because it screened out blood samples that were difficult to classify as true positives or negatives.5 This approach will tend to overestimate test accuracy.
By contrast, the large scale PHE study published in The BMJ, using 4842 blood samples, found that the test had a clinical sensitivity of 92.5% and a specificity of 97.9% when using known positive and negative samples, with sensitivity falling to 84.7% among cases with unknown infection status, as would be the case in real life settings.
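In practical terms, the share of positive results that are wrong depends on how widespread past infection is in the tested population. The sketch below computes the positive predictive value implied by the PHE figures, assuming for illustration a 10% seroprevalence (an assumed figure for this example, not one taken from the study):

```python
# Illustrative positive predictive value (PPV) calculation.
# Sensitivity and specificity are the PHE estimates for samples of
# unknown infection status; the 10% seroprevalence is an assumption.
sensitivity = 0.847   # fraction of true positives correctly identified
specificity = 0.979   # fraction of true negatives correctly identified
prevalence = 0.10     # assumed fraction of population with past infection

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)

print(f"PPV: {ppv:.1%}")                 # → PPV: 81.8%
print(f"False positives: {1 - ppv:.1%}") # → False positives: 18.2%
```

Under this assumption, just under one in five positive results would be wrong, matching the "one in five" figure reported from the PHE evaluation; at lower prevalence the proportion of false positives rises further.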
Sylvia Richardson, the Royal Statistical Society’s president elect and chair of its covid-19 taskforce, says that it is vital to get real world estimates of how a test might perform, as PHE had done. “At a time when reliable, effective testing is vital to our ability to contain the covid-19 pandemic, and when the government is proposing to spend many millions on novel tests, it is hugely important that independent assessment of the context-relevant performance of antibody or antigen tests is available prior to purchasing,” Richardson says.
In October, the Department of Health and Social Care announced the purchase of one million AbC-19 tests from the UK Rapid Testing Consortium to support national surveillance studies. In the press release, dated 6 October, health minister Lord Bethell is said to be, “thrilled by the product, both for Britain and export markets around the world.”
Emails seen by The BMJ show that officials at the department knew about the disappointing results of the PHE study before the announcement. The emails also show how the department blocked publication of a preprint of the PHE study.
The plan, according to a Department of Health and Social Care email dated 25 September, was for “minimal mention” of the PHE study in Lord Bethell’s announcement, “but we do need to mention it as we will get asked.”
PHE staff warned that there were “significant risks” in not publishing the PHE evaluation showing the low accuracy of the tests. In an email on 1 October, PHE asked whether this strategy of holding back the results had been agreed by ministers. “Is everyone aligned on the handling—ministers, spads [special advisers], etc?” it asked. The department replied, “Yes everyone is aligned as far as I know. No. 10 [Downing Street, the prime minister’s office] now aligned.”
Asked whether he was aware that the publication of the PHE preprint had been blocked, Lord Bethell did not reply before The BMJ went to press.
When questioned by The BMJ, the Department of Health and Social Care denied that it had blocked publication and said that it had planned to submit the PHE study to The BMJ for peer review with a view to publication. It added that it had bought the AbC-19 test for surveillance studies only. “They were never intended for, and have never been issued for, widespread public use. This robust evaluation was carried out by PHE at the Department’s request before any purchase was made, and PHE approved the test for use in surveillance studies.”
The PHE study says that the AbC-19 test might be accurate enough for serosurveillance to gauge the level of infection in the population, but only if the test’s accuracy improved, the relation between the test and future immunity was better understood, and the difficulty non-professionals might have accurately reading the test was accounted for. In a press release issued by the Science Media Centre on 7 October, Hayley Jones, senior lecturer in medical statistics at the University of Bristol, said: “When using antibody tests for surveillance purposes (to estimate what proportion of a population has been infected or has antibodies) errors in the accuracy of a test can in principle be corrected for. Test errors could be dangerous, however, if individuals receive their test results and change their behaviour based on this—due to a belief that they are immune from the virus. For now, any individual who receives a positive antibody test result should interpret it with caution and not change their behaviour as a result.”5
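The kind of correction Jones refers to is, in its standard form, the Rogan-Gladen estimator, which back-calculates true prevalence from observed test positivity using the test’s known sensitivity and specificity. A minimal sketch, assuming an illustrative observed positivity of 10% (not a figure from the article):

```python
# Rogan-Gladen correction: estimate true prevalence from observed
# test positivity given known sensitivity and specificity.
# The 10% observed positivity is an assumed, illustrative figure.
sensitivity = 0.847
specificity = 0.979
observed_positivity = 0.10

true_prevalence = (observed_positivity + specificity - 1) / (
    sensitivity + specificity - 1
)
print(f"Corrected prevalence estimate: {true_prevalence:.1%}")  # → 9.6%
```

This is why imperfect accuracy can be tolerable at the population level, as Jones notes, while remaining misleading for any individual reading their own result.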
Abingdon Health said in a statement to The BMJ: “Our customer, [the Department of Health and Social Care], is satisfied with the performance of the test, as is the UK Rapid Testing Consortium, and will continue to roll out the use of the product.”
The government is currently conducting nine surveillance studies, including the UK Biobank SARS-CoV-2 serology study, in which 20 000 families self-administer finger prick blood samples once a month, and the REACT 2 study, which measures antibody levels in 150 000 people using finger prick testing. If the AbC-19 test were deployed in one of these studies, one in five positive results might be incorrect.
Joshua Moon, research fellow in the Science Policy Research Unit at the University of Sussex, adds: “Although the idea of a pinprick test is very useful in measuring disease spread and assessing public health interventions, overestimating the level of covid-19 in the community by 20% makes you wonder what the use case for this test could be.”
British rapid antibody test manufacturers have been frustrated at the government’s procurement process. Over spring and summer, many of them submitted rapid lateral flow immunoassay tests to the government’s new test approvals group, set up specifically to assess covid-19 diagnostic tests. Every lateral flow test was rejected for failing to meet the MHRA recommended standard, which requires 98% sensitivity and specificity, despite British-made tests receiving approval in Europe and the US.
Yet, in early May, the government agreed to spend £13.5m buying Roche’s antibody tests three days before scientists at PHE’s Porton Down laboratory released a study that found the test was 84% sensitive—less accurate than some of the rejected lateral flow tests.
“There is no process, no tender available, no opportunity to bid,” said one manufacturer who wished to remain anonymous. “There are rapid tests from British companies being bought by governments around the world that could have been deployed here six months ago.”
“What is most incomprehensible is the price. Lateral flow is quick and easy to use, and cheap to make—that’s the beauty of it. At these volumes the government should be paying closer to £3 (compared to approximately £10)—especially as they’ve already given [the UK Rapid Testing Consortium] £10m upfront for components and are procuring directly.”
Jolyon Maugham QC, director of Good Law Project, said, “However amazed you are by this bestiary of incompetence, you’re not amazed enough.”
“This [the AbC-19 test contract] was a £75m contract, let without competition, on the basis of profoundly flawed research,” he told The BMJ, “And what we then get is a government that knows of these flaws and tries to suppress their publication.”
Sylvia Richardson of the Royal Statistical Society says that there are lessons to be taken from this case: “More transparency—undertaking robust evaluations more quickly, publishing the results in a timely manner, and acting on them—would help the government avoid costly mistakes about testing.”
UK Rapid Testing Consortium timeline
March 2020 John Bell appointed head of the National Covid Testing Scientific Advisory Panel and chair of the government’s new test approvals group, which assesses virus diagnostic tests and recommends tests for the government to procure at scale.
5 April Bell posts a blog post called Trouble in Testing Land,2 writing that
“None of the tests we have validated would meet the criteria for a good test as agreed with the MHRA. This is not a good result for test suppliers or for us.”
8 April Government announces the UK Rapid Test Consortium, a business consortium including Oxford University, Abingdon Health, BBI Solutions, and CIGA Healthcare, charged with designing and developing a new antibody test to determine whether people have developed immunity after contracting the virus.6
9 July Initial trials of Abingdon Health’s AbC-19 antibody test fill the company with “confidence.”7
30 July The AbC-19 rapid test is CE marked for professional use and is registered with the MHRA, declaring conformity with recommended requirements.
1 October A team at Ulster University8—led by two members of the UK Rapid Testing Consortium—publishes a preprint assessing antibody test performance as well as an assessment of the UK Rapid Testing Consortium’s AbC-19.
1 October PHE evaluation of AbC-19 finds that the accuracy of the antibody test might be considerably lower than previously suggested. The Department of Health and Social Care stops publication of the preprint.
6 October Government announces a £75m deal with the UK Rapid Testing Consortium for one million antibody tests.
9 October The Department of Health and Social Care confirms a £10m contract award notice to Abingdon Health regarding “components and materials” for covid-19 “lateral flow” tests.
20 October The Department of Health and Social Care confirms a £75m contract award notice to Abingdon Health for provision of AbC-19. The award notice states: “It is impossible to comply with the usual timescales: due to the urgency of the situation there was no time to run an accelerated procurement under the open, restricted or competitive procedures with negotiation as there was an urgent requirement to ensure provision of diagnostic equipment in response to the pandemic.”
12 November PHE evaluation of AbC-19 published in The BMJ.
Commissioned, not peer reviewed