Fee for service payments are thwarting evidence based medicine in the US
BMJ 2012;345:e7019 doi: https://doi.org/10.1136/bmj.e7019 (Published 17 October 2012)
Cite this as: BMJ 2012;345:e7019
Fee for service payment is one of five major reasons why healthcare in the United States has been slower than that in some other countries to embrace comparative effectiveness research, according to new research.
A misalignment of financial incentives pushes patients and providers to disregard the evidence and pursue aggressive treatments even when they are no more effective than more conservative alternatives, said Justin Timbie, lead author of a meta-analysis. He was speaking at a forum in Washington, DC, where the paper was released; it appears in a focal issue of Health Affairs that also includes a series of interviews conducted by the RAND Corporation.1
“Fee for service payment is the real culprit in that it provides a potent incentive to adopt treatments that are well reimbursed, regardless of the evidence,” he said.
Most comparative effectiveness studies reflect real world situations of complex comorbidities and lower adherence compared with the “more pristine” conditions of randomized controlled trials. This introduces a greater level of uncertainty, Timbie said. “It doesn’t take a lot for a few stakeholders to raise doubts about the validity of some of these studies.”
There is also a cognitive bias towards rejecting evidence that does not confirm prior beliefs. As an example, he noted that it took more than a decade of findings that second generation antipsychotic drugs were not superior to first generation drugs before psychiatrists accepted that fact.
Physicians and patients in the US, particularly when compared with the UK, have a pro-intervention bias “even when the marginal benefit of doing so is small.” Timbie said this played out in cardiology with a tendency to open up even minor blockages that were not clinically meaningful.
A pro-technology bias, similarly, places faith in the new over the old.
Timbie criticized much of comparative effectiveness research for failing to deal with the needs of end users. One reason for this is a focus on “downstream decision making where new evidence might have the least leverage in changing clinical practice.”
The research tends to focus on which of a limited set of interventions is better, often after a patient has already been referred to a physician who favors one particular intervention, rather than further upstream on the question of whether an intervention is necessary at all.
Only a few integrated healthcare systems, such as the Department of Veterans Affairs and Kaiser Permanente, have decision support technology built into electronic health records that helps patients and physicians make such decisions at any stage of the process.
Timbie and his colleagues identified three principles and activities that might lead to a greater integration of comparative effectiveness findings into regular patient care.
“It is important to develop objectives and standards for interpretation” of outcomes before the start of a study, he said. This should involve both a process for making key decisions about a study and a transparent public record of those decisions.
Treatment guidelines and quality measures should be created by committees that are multidisciplinary, balanced, and free from conflicts of interest, according to standards laid out by the Institute of Medicine in 2011.2 Timbie noted that professional associations had been slow to embrace these principles.
Finally, payment and coverage policies should provide incentives for efficiency. There is hope that accountable care organizations might accomplish some of this.