Assuring the quality of diagnostic tests

BMJ 2013;346 doi: http://dx.doi.org/10.1136/bmj.f836 (Published 13 February 2013) Cite this as: BMJ 2013;346:f836
- Mark Wilcox, professor
A linked investigation by Cohen and Swift (doi:10.1136/bmj.f837) clearly shows that the system for regulating the quality of in vitro diagnostic medical devices is fallible.1 The term in vitro diagnostic medical device (IVD) refers to any reagent, reagent product, calibrator, control material, kit, instrument, apparatus, equipment, or system used for the in vitro examination of human specimens. These devices underpin most medical diagnoses, so the consequences of using suboptimal products could be far reaching.
It is perhaps surprising, then, that to be marketed across Europe most IVDs require the manufacturer just to self declare that the product complies with the essential requirements of relevant European laws. Once marketed, the manufacturer has a clear responsibility to ensure that the product performs as declared. If this is in doubt, then a regulator (in the United Kingdom, the Medicines and Healthcare products Regulatory Agency) should be informed. Cohen and Swift’s investigation shows that a system of regulating the quality of these devices that is based on trust can be undermined, especially when there is no systematic safety net to identify poor test performance. There is no evidence to suggest that many devices are flawed, but it is clear that some yield results with suboptimal clinical usefulness.2 3
CE marking is a manufacturer’s declaration that a product complies with the essential requirements of relevant European laws, permitting marketing across 30 countries. The letters CE are an abbreviation of the French phrase Conformité Européenne, meaning European conformity. When a manufacturer applies for CE marking for an IVD it is not obliged to submit performance data, but instead declares that such efficacy information exists and is robust.4 5 A notified body (a specialist company in the field) typically undertakes an annual audit of the manufacturer’s quality assurance system. Only a few diagnostic tests—including reagents and tests for the detection of HIV, hepatitis B, hepatitis C, and hepatitis D—require a notified body to review the manufacturer’s own product data.4 5
Assessment of the quality of an IVD should take clinical usefulness into account, but again there is no clear requirement within CE marking to show that such a device has good clinical utility.4 5 For example, a large evaluation of tests to detect Clostridium difficile toxin, the mainstay of diagnosing C difficile infection, found that some CE marked products had positive predictive values less than 50%, notably in (normal) non-outbreak settings.2 Such unacceptable performance levels were widespread before the implementation of NHS guidelines aimed at improving the utility of tests for C difficile infection.6 7 Users need to be alert to product data that are based on unrealistically high target frequencies: a test evaluated where the disease or marker is common (such as in inpatients with disease specific symptoms) will tend to have a worse positive predictive value when the same disease or marker is less likely (such as when samples are obtained from patients in the community with vague symptoms).
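The dependence of positive predictive value on prevalence follows directly from Bayes’ theorem, and a short calculation makes the effect concrete. The sensitivity, specificity, and prevalence figures below are hypothetical, chosen only to illustrate how the same assay can perform acceptably in a high prevalence setting yet fall below 50% PPV in a low prevalence one:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: the probability that a positive
    result reflects true disease, given test characteristics and
    the prevalence of disease in the tested population."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# The same assay (90% sensitive, 97% specific) in two settings:
high_prev = ppv(0.90, 0.97, prevalence=0.20)  # symptomatic inpatients
low_prev = ppv(0.90, 0.97, prevalence=0.02)   # community, vague symptoms

print(f"PPV at 20% prevalence: {high_prev:.0%}")  # ~88%
print(f"PPV at 2% prevalence:  {low_prev:.0%}")   # ~38%
```

With these illustrative figures, nothing about the test itself has changed between the two settings; only the prevalence has, yet most positives in the low prevalence setting are false positives.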
The US Food and Drug Administration’s process for approving IVDs is more stringent than the European system, with a greater emphasis on clinical testing.8 Although this approach adds rigour to the verification of clinical utility, the approval process is longer and more expensive, which may retard the availability of new tests and increase their cost. The FDA route to product marketing is not obligatory in the United States; laboratory developed tests and simple tests have more straightforward approval routes.
So how does a laboratory assure itself of the accuracy of an IVD? In practice it reviews the (normally self declared) product information and ensures that control (positive and negative) tests give expected results. Few laboratories have the capacity to evaluate test performance rigorously; this requires large studies, ideally across multiple settings to account for a range of disease or target prevalences. In practice, such evaluations rarely occur before a CE marked product is used in the clinical setting. Once a test is introduced, there is no set review process, and, providing controls give expected results, it is rare that suboptimal performance is detected. As Cohen and Swift’s investigation shows,1 a product’s deficiencies can go undetected. The message is clear that control tests provide a safety net, but the mesh is wide, so deficient products may pass through undetected.
There are plans to reform European IVD legislation. Increased clinical evidence will be required and assurance around batch conformity will be strengthened. However, changes will be phased in between 2015 and 2019 and will not apply retrospectively,9 which provides a perverse incentive to obtain CE marking under the current rules. How far the new requirements will improve quality and, crucially, clinical utility remains unclear.
So how can we deal with the current weaknesses regarding the assurance of quality and clinical utility? In the UK, until it was decommissioned in 2010, the Centre for Evidence-based Purchasing provided impartial and objective assessments of medical devices to help the NHS make better purchasing decisions. The establishment of standalone laboratories to test IVDs is unlikely given the cost implications. More practicably, designated centres that can do this alongside routine diagnostic testing, particularly where doubt exists about clinical utility or if changes in performance are suspected, would be a step forward. Flexibility and responsiveness would be key to making such a system work well.
Interestingly, the UK National Institute for Health Research issued a call in 2012 for proposals to establish diagnostic evidence cooperatives.10 It is intended that these new centres will provide leadership in the evaluation of IVDs to produce high quality evidence for their implementation in clinical care pathways. This is a step in the right direction, but a more comprehensive, indeed systematic, approach is needed to strengthen the current CE marking based system. Whether this should operate at the national or international (for example, European) level is for regulators to determine. European legislation would probably require that marketing regulation crosses borders. This makes sense from a patient perspective, assuming that solutions are flexible and fit for purpose.
Meanwhile, users of diagnostic tests should be particularly alert to unexplained findings and should be encouraged to report suspicious results, so that trends can be identified and investigated. The implementation of a new test should be followed by a formal review to determine its clinical utility, in a similar way that hospitals review the prescribing of a new drug. Diagnostic tests deliver huge benefits, but benefit should not be assumed.
Competing interests: I have read and understood the BMJ Group policy on declaration of interests and declare the following interests: Personal financial interests: I have received antimicrobial agent related consultancies and/or lecture honorariums in the past three years from Actelion, Astellas, Astra-Zeneca, Bayer, Cubist, Durata, J&J, Merck, Nabriva, Novacta, Novartis, Optimer, Pfizer, Roche, Sanofi-Pasteur, the Medicines Company, VH Squared, and Viropharma. None of these relates specifically to in vitro diagnostic medical devices (IVDs). Organisational financial interests: I have received antimicrobial agent related research funding in the past three years from Actelion, Astellas, Biomerieux, Cerexa, Cubist, Merck, Pfizer, Summit, and the Medicines Company. I have received IVD related research grant funding from Biomerieux and the Department of Health/Health Protection Agency (England).
Provenance and peer review: Commissioned; not externally peer reviewed.