
Observations: Yankee Doodling

Linking health insurance coverage to evidence

BMJ 2013; 347 doi: (Published 05 July 2013) Cite this as: BMJ 2013;347:f4270
Douglas Kamerow, chief scientist, RTI International, and associate editor, BMJ

Be careful what you wish for

Everyone agrees that we want healthcare policies of all sorts to be informed—if not governed—by the best evidence available. This applies to coverage and payment for drugs and services, whether for prevention, diagnosis, or treatment. It may be especially important for preventive services, for two reasons.

Firstly, to put it baldly, preventive care is optional. If a patient has a broken ankle or a weeping ulcer, we have to do something, whether there are randomized trials supporting the choice of treatment or not. Not so for a screening test. Before we take an asymptomatic patient and recommend mammography, say, or a cervical smear test, we want strong evidence that the benefits (better outcomes, preferably) have been proved to outweigh the harms (side effects of the test and subsequent treatment).

Secondly, there is already a lot of good quality evidence about preventive care. Many of the major screening tests, immunizations, and some counseling interventions have been examined in high quality studies. We often know the effectiveness of specific preventive services: what should be done and, on occasion, what shouldn’t.

None of this matters, of course, unless the evidence based recommendations are implemented, and even then it can be a treacherous road. A good example of this is the history of the US Preventive Services Task Force. The task force, which was based on the groundbreaking work and methodology of the Canadian Task Force on Periodic Health Examination, was convened in the early 1980s and issued its first report in 1989.

Because of the strict methodology (“evidence over consensus,” as the Canadians put it1), the first US task force report evaluated many preventive services but recommended precious few. This scandalized the medical specialty societies that had been advocating widespread screening for everything from prostate cancer to glaucoma.2 It did not matter much, though, because although it had been convened by the US federal government, the task force was kept at arm’s length, and its recommendations were not official. In the early days the task force was little known and less heeded.

Time passed and things changed.

After the second edition of the task force report was published, in 1996, the Public Health Service moved the task force sponsorship to the Agency for Healthcare Research and Quality and subsequently engineered to have Congress endorse the task force’s existence and mission. This and the trend towards evidence based medicine led to increasing uptake of task force recommendations by health plans and insurance companies, which determine what gets paid for in US healthcare. In addition, task force recommendations were adopted as quality measures by various official and quasi-official quality improvement groups.

During this time, task force recommendations had the best of both worlds: no official link to coverage policies, with the accompanying political and economic pressures; but increasing prestige and adoption throughout the US. In fact the legislation that authorized the task force specifically exempted it from having to hold public meetings, as other government advisory bodies do, so the task force and its staff could go about their business out of the limelight. When critics attacked the small number of positive task force recommendations as a denial of appropriate services to save money, task force members could honestly state that they had no direct linkage to coverage policies in the highly decentralized US healthcare “system.”

All this changed with the passage of US healthcare reform legislation in 2010. It provided that all American health plans and insurers would cover, without any copayments or deductibles, services that were recommended by the task force and a small number of other government sponsored bodies. Although it specifically did not prohibit payment for and coverage of other preventive services, it clearly linked task force recommendations with coverage policy in a way that had never been done before. It also put the task force in the same category as the other groups, which had less stringent scientific recommendation criteria.

What seemed like a desirable link between science and policy was actually a recipe for disaster for the task force's impartiality. This was clear even before the law was passed. In the fall of 2009 the task force revised its breast cancer screening guidelines, recommending that routine screening start at age 50. This unleashed a firestorm of negative public opinion. Congress responded to this pressure and amended the then draft legislation to specifically exclude the task force's 2009 mammography recommendations from the version of the law that was passed the following year. Politics trumped science.

This should not be a surprise. The task force is constituted only to review the scientific evidence, not to set health plan and insurance policies. Despite naive hopes that policies can be directly and exclusively created from evidence, experience shows that groups like the task force are better off being one step removed from policy creation. Coverage policy is a tricky business. It needs to take scientific evidence into consideration, perhaps using it as a floor for minimum acceptable coverage, but then it also has to evaluate other non-scientific issues and public opinion. There is no way that the task force can continue to function as presently constituted and remain linked directly to coverage policy. Leading experts are calling for severance of that link, and they are right.3




  • Competing interests: DK directed the staff of the US Preventive Services Task Force from 1988 to 1994 and was the editor of the first two editions of its reports. He is the author of Dissecting American Health Care.

