

Commentary: How to encourage more diagnostics for infectious diseases

BMJ 2016; 354 doi: https://doi.org/10.1136/bmj.i4744 (Published 07 September 2016) Cite this as: BMJ 2016;354:i4744
John H Powers, professor of clinical medicine
George Washington University School of Medicine, Washington, DC, USA
JPowers3@aol.com

The right incentives would aid the development of tools to detect disease, says John Powers

Efforts to increase awareness of sepsis without diagnostic tests may increase inappropriate use of antibiotics, causing more drug related adverse effects and antibiotic resistance.1 Previous campaigns to decrease “time to first antibiotic dose” in pneumonia increased the odds of inappropriate diagnosis by 39%.2

Diagnostics should improve patient outcomes, not just detect organisms or disease.3 Inappropriate use can lead to patient harm and rising healthcare costs.4 But the lack of diagnostics for sepsis highlights a bigger problem: the dearth of diagnostics for common infections means that patients are often treated empirically, on the basis of assumptions about the presence of disease caused by specific pathogens.

Other specialties such as oncology have identified biomarkers that allow doctors to select patients who will benefit and avoid prescribing to those who will not.5 Developing rapid point-of-care diagnostics in acute diseases is a scientific challenge, but more than 100 years ago investigators rapidly diagnosed pneumonia and identified the specific serotype of Pneumococcus so that they could administer specific serum.6 So why not now?

Less obvious are the contradictory incentives inherent in the development and reimbursement of antibiotics. Many trials are designed with “non-inferiority” hypotheses. Their aim is not to evaluate better efficacy but to rule out lesser effectiveness than older therapies. The threshold for a positive result often allows the new drug to be as much as 10 percentage points less effective, in absolute terms, than existing treatment. Such studies do nothing to encourage appropriate rapid diagnosis, because enrolling patients without the target disease minimises differences between the test and control drugs, resulting in false conclusions of non-inferiority.
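To make the arithmetic concrete, a standard fixed-margin non-inferiority comparison can be sketched as follows; this is a schematic illustration, with the margin \(\delta\) matching the 10 percentage point threshold above, and the dilution fraction and cure rates chosen purely for arithmetic, not taken from any specific trial:

\[
H_0:\; p_{\text{old}} - p_{\text{new}} \ge \delta
\qquad \text{versus} \qquad
H_1:\; p_{\text{old}} - p_{\text{new}} < \delta,
\qquad \delta = 0.10,
\]

where \(p_{\text{old}}\) and \(p_{\text{new}}\) are the cure rates with the older and newer drugs. If a fraction \(f\) of enrolled patients do not have the target disease and recover at the same rate in either arm, the observed difference between arms is diluted:

\[
\Delta_{\text{observed}} = (1 - f)\,(p_{\text{old}} - p_{\text{new}}).
\]

With \(f = 0.5\), a true deficit of 12 percentage points among patients who actually have the disease appears as only 6 points, comfortably inside the 10 point margin, and non-inferiority is wrongly concluded.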

Researchers aim to show that the new intervention is “non-inferior” to existing safe and effective treatments. But the analyses are conducted on subgroups infected with specific pathogens, often identified by culture results that become available only days after randomisation and long after treatment decisions must be made. The drugs are then approved for patients with “limited or no alternative treatment options.”7 However, such patients were not studied, because the new drug was tested in settings where the control drug was effective. Thus patients for whom the control drug is effective are used as surrogates for a population in which the control drug is assumed to be ineffective and the new intervention is assumed to be superior. Furthermore, culture positive patients often comprise a small proportion of enrolled participants, meaning that many people who may accrue no benefit are nevertheless exposed to the potential harms of experimental interventions. This raises ethical questions about risks to trial participants and scientific questions about extrapolating results from studied to unstudied groups of patients.
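A hypothetical example makes the imbalance plain; the enrolment size and the 25% culture positivity rate below are assumptions chosen for illustration, not figures from any cited trial:

\[
n_{\text{enrolled}} = 1000, \qquad n_{\text{culture positive}} = 0.25 \times 1000 = 250,
\]

so 750 participants would be exposed to the experimental intervention while contributing nothing to the pathogen specific subgroup analysis on which the efficacy conclusion rests.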

Drug developers have no incentive to develop diagnostics, because limiting a product’s use to a population defined by diagnostic testing decreases sales. Yet the US 21st Century Cures Bill enshrines this “limited population” approval pathway in law.8 9 The bill contains no incentives for diagnostic development to aid clinical trials or to focus use of new drugs on the specific patients in whom they are effective. Rather, the bill proposes “smaller clinical data sets” to speed drug development, increasing uncertainty for patients while companies increase drug costs.10 Without diagnostics this policy will result in more patients receiving the drugs empirically, despite approval for a “limited population”, because clinicians cannot identify that population.

A STAT-Harvard poll shows that most Americans do not favour these measures when they increase the risk of approving interventions with lesser efficacy, ironically the primary research question in non-inferiority hypotheses.11

Specific diagnostics would enable trials that evaluate the superior efficacy of new interventions over current standards of care in patients who might benefit, allowing clinicians to select those patients in clinical practice. The UK has proposed a prize for the development of diagnostics to “quickly rule out an antibiotic or identify an effective antibiotic to treat a patient.”12 The US President’s action plan calls for diagnostics to detect organisms but does not mention improving patient outcomes.13 As the Institute of Medicine pointed out, “improving the diagnostic process is not only possible, but also represents a moral, professional and public health imperative.” Doing so requires putting the incentives in the right place, with adequately designed clinical research that asks the right questions in the right patients.

Footnotes

  • Feature, doi: 10.1136/bmj.i4209
  • Competing interests: I have read and understood BMJ policy on declaration of interests and have served as a consultant for AbbVie, Gilead, Johnson & Johnson, MedImmune and Otsuka.

  • Provenance and peer review: Commissioned; not externally peer reviewed.

References
