The true cost of pharmacological disease prevention
BMJ 2011; 342 doi: https://doi.org/10.1136/bmj.d2175 (Published 19 April 2011) Cite this as: BMJ 2011;342:d2175
- Teppo L N Järvinen, orthopaedic resident12,
- Harri Sievänen, research director3,
- Pekka Kannus, chief physician4,
- Jarkko Jokihaara, postdoctoral fellow5,
- Karim M Khan, professor5
- 1Department of Orthopaedic Surgery, University of Tampere, 33014 Tampere, Finland
- 2Department of Surgery, Central Finland Central Hospital, 40620 Jyväskylä, Finland
- 3Bone Research Group, UKK-Institute, 33500 Tampere
- 4Division of Orthopaedics and Traumatology, Tampere University Hospital, 33520 Tampere
- 5Centre for Hip Health and Mobility, University of British Columbia, Vancouver, British Columbia, Canada
- Correspondence to: T Järvinen teppo.jarvinen{at}uta.fi
- Accepted 3 February 2011
Large randomised clinical trials are considered to represent the strongest form of evidence in assessing whether a particular healthcare intervention works. However, little attention has been paid to the fact that people treated in large multicentre randomised trials may not accurately reflect the population receiving the drug in real world settings.1
Recently, van Staa and colleagues assessed the external validity of published cost effectiveness studies of selective cyclo-oxygenase-2 (COX 2) inhibitors by comparing the data used in these studies (typically from randomised trials) with observed clinical data.2 The trial data suggested that the cost of avoiding one adverse gastrointestinal event by switching patients from conventional non-steroidal anti-inflammatory drugs to COX 2 inhibitors would be about $20 000 (£12 500; €14 000). However, when the same analysis was performed using the UK’s General Practice Research Database, which holds anonymised patient records from general practices, the cost of preventing one bleed was fivefold greater ($104 000).2 The authors concluded that the published cost effectiveness analyses of COX 2 inhibitors neither had external validity nor represented the patients treated in clinical practice. They emphasised that external validity should be an explicit requirement for cost effectiveness analyses that are used to guide treatment policies and practices.
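The mechanism behind this fivefold gap is simple arithmetic: the cost of preventing one event is the incremental drug cost divided by the number of events avoided, so it rises steeply as the baseline event rate in the treated population falls. A minimal sketch, using purely illustrative numbers (not taken from van Staa and colleagues' analysis), shows the effect:

```python
# Illustrative sketch only: the parameters below are hypothetical and chosen to
# show how the cost per event avoided scales with the baseline event rate.

def cost_per_event_avoided(extra_cost_per_patient, baseline_event_rate,
                           relative_risk_reduction):
    """Incremental drug cost divided by events avoided, per patient treated."""
    events_avoided_per_patient = baseline_event_rate * relative_risk_reduction
    return extra_cost_per_patient / events_avoided_per_patient

# Trial-like population: high baseline risk of a gastrointestinal event
print(cost_per_event_avoided(300, 0.030, 0.5))  # ~$20 000 per event avoided
# Real world population: same drug cost and efficacy, lower baseline risk
print(cost_per_event_avoided(300, 0.006, 0.5))  # ~$100 000 per event avoided
```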
Efficacy versus effectiveness
This striking difference between the results of randomised trials and their real world clinical implications was anticipated by Archie Cochrane, the pioneering clinical epidemiologist. Almost 40 years ago, Professor Cochrane introduced a specific hierarchy of evidence required from any healthcare intervention before it can be applied to real life situations (table). Three simple questions summarise Cochrane’s scheme: can it work (efficacy)? does it work (effectiveness)? and is it worth it (cost effectiveness)?
Evidence of efficacy is only the first step in the process of assessing whether a healthcare intervention is appropriate for wider clinical use. Even if an intervention is successful in a study, it may not succeed similarly in usual care. This is because randomised trials select patients who are carefully diagnosed, have a carefully defined risk profile for the event being evaluated (such as cardiovascular event, stroke, or fracture), do not have other serious illnesses, and are likely to adhere to the treatment.3 Also, the study treatment is prescribed by doctors who adhere to the study protocol and participants receive special attention from dedicated staff.
It is wrong to assume that efficacy results apply faithfully in clinical practice. The effectiveness of treatment in the community is influenced by at least five factors: the clinical population treated, diagnostic accuracy, provider compliance, patient adherence, and coverage of healthcare services. Population characteristics in a randomised trial, such as age and sex, generally diverge considerably from those of the clinical population (fig 1). In the clinical setting, misdiagnosis (false positive or negative) is more likely, and this dilutes the treatment effect. Also, care providers working outside a research setting may not administer the treatment faithfully. Further, patients’ other drugs may modulate the effect of the treatment of interest. Finally, the most important factor is patients’ compliance with treatment: in real life, patients typically take less than half of prescribed treatments, whereas about 90% compliance is common in efficacy trials.4 5
Effectiveness of preventive drugs
This gap between the ideal and the clinical circumstances raises the question of how well our most widely used preventive drugs work in real life. If we consider efficacy studies (that is, randomised trials) as the bottom rung of Cochrane’s hierarchy ladder, few therapies have made the second rung, and we know of none that has reached the third. Thus, although there are claims that important preventive drugs such as statins, antihypertensives, and bisphosphonates are cost effective,6 7 8 9 there are no valid data on their effectiveness, and particularly their cost effectiveness, in usual clinical care. Despite this dearth of data, the majority of clinical guidelines and recommendations for preventive drugs rest on these claims.
How can this be the case? Where do the claims arise that preventive drugs are “cost effective”? Consider bisphosphonates to prevent hip fractures in older people. It has been claimed that treatment is as cost effective as drugs for hypercholesterolaemia or hypertension.10 Expert panels have concluded that treatment is cost effective based on estimates from post hoc Markov models.6 9 11 Unfortunately, the fundamental problem with these Markov cost effectiveness models is that the data underpinning the efficacy of the drug do not reflect clinical practice (fig 1). In essence, the models extend the highly specialised trial evidence on drug efficacy as if it were widely applicable in community practice. Thus, fracture risk reduction data derived from specific randomised trials (such as 1-2% reductions in absolute risk) are applied to a wide population largely irrespective of age, sex, comorbidity, bone status, or previous history of fracture.
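To make the critique concrete, the skeleton of such a model can be written in a few lines. The sketch below is a deliberately simplified Markov cohort model with hypothetical transition probabilities (it is not reproduced from the published models cited above); its defining feature is the one at issue here: the trial-derived relative risk reduction is applied uniformly to everyone in the cohort, whatever their age, sex, or comorbidity.

```python
# Minimal Markov cohort sketch (hypothetical inputs, illustrative only).
# The trial-derived relative risk reduction is applied to every cohort member,
# which is exactly the assumption questioned in the text.

def hip_fractures_over_horizon(cohort_size, annual_fracture_risk, annual_death_risk,
                               relative_risk_reduction, years):
    """Cumulative hip fractures in a (treated) cohort over the time horizon."""
    well, fractures = float(cohort_size), 0.0
    risk_on_treatment = annual_fracture_risk * (1 - relative_risk_reduction)
    for _ in range(years):
        new_fractures = well * risk_on_treatment
        deaths = well * annual_death_risk
        fractures += new_fractures
        well -= new_fractures + deaths   # move to the "fractured" or "dead" state
    return fractures

untreated = hip_fractures_over_horizon(10_000, 0.003, 0.02, 0.00, 10)
treated = hip_fractures_over_horizon(10_000, 0.003, 0.02, 0.35, 10)
print(round(untreated - treated))  # fractures "prevented" if trial efficacy held for all
```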
This is a far cry from reality. The evidence that bisphosphonates prevent hip fracture is very limited (fig 1).12 Significant reductions in hip fractures have been shown only in a restricted subpopulation of women (aged 65-80 with osteoporosis or previous fractures), whereas evidence of efficacy among those who most typically sustain hip fractures (people aged ≥80 and those living in nursing homes) is lacking.13 14 15 And although osteoporosis is considered a predominantly female disease, about 40% of age related fractures occur in elderly men.16 Despite this, all current analyses of the cost effectiveness of bisphosphonates assume a universal reduction in fracture risk among all older people. Figure 2 uses one year of Finnish hip fracture data to show the evidence void: what happens when efficacy data are applied only to those who would have met the inclusion criteria of the randomised trials. If we assume a 32% reduction in hip fractures with bisphosphonates (fig 1) among patients who meet these trial criteria, only 343 fractures (4.6% of the total) would have been prevented, even if bisphosphonates had been given to all citizens aged ≥50 years (1.86 million in 2003).
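The arithmetic behind this figure can be checked directly from the numbers quoted above (the yearly total of hip fractures is inferred from them rather than stated separately):

```python
# Back-of-envelope check of the Finnish example, using only figures quoted in
# the text; the total number of hip fractures that year is inferred from them.

preventable_fractures = 343       # fractures prevented under the trial assumptions
share_of_all_fractures = 0.046    # 4.6% of all hip fractures that year
relative_risk_reduction = 0.32    # trial-derived efficacy applied to eligible patients

total_fractures = preventable_fractures / share_of_all_fractures                 # ~7 460
fractures_in_trial_like_group = preventable_fractures / relative_risk_reduction  # ~1 070

print(round(total_fractures), round(fractures_in_trial_like_group))
# Roughly 7 460 hip fractures in all, of which only about 1 070 occurred in people
# resembling the trial populations; treating everyone aged 50 or over would still
# leave about 95% of the fractures unprevented.
```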
Another important form of economic evaluation is the cost-utility study, which enables decision makers to compare the costs of interventions across different health conditions. However, efficacy studies conducted under such idealised, near-laboratory conditions cannot be considered a proper basis for cost-utility calculations. Economic calculations derived from an optimistic combination of trial efficacy and epidemiological data on event rates in the population are more likely to meet national thresholds for funding (such as those used by the UK’s National Institute for Health and Clinical Excellence). We recommend that policy makers insist that cost-utility calculations are based on incidence data and evidence of effectiveness obtained in real world settings. This would provide a more conservative estimate of the value of an intervention, but it would be more accurate than one based solely on evidence from randomised trials. We acknowledge that it is difficult to obtain the quality of life data needed to calculate cost-utility in real life settings, but it is not impossible, particularly given the increasing interest in routinely collected patient reported outcome measures.19
Policy decisions
Cost effectiveness is not a straightforward concept because it encompasses elements that are not directly measurable in currency, such as morbidity, mortality, and reduced quality of life. Drug treatment to prevent morbid events is rarely cost saving or cost neutral, and thus a decision for an individual patient on whether to start preventive treatment also has political and philosophical implications. The ultimate questions are to what extent patients will benefit from the treatment and at what cost, or, more directly, how much a healthcare system is willing to pay to prevent one morbid event. Currently, the US National Osteoporosis Foundation recommends starting osteoporosis drugs if, according to a World Health Organization fracture risk calculator, a person’s 10 year probability of a hip fracture is 3% or over.11 If we presume a very optimistic 50% reduction in hip fracture risk with treatment, as suggested by some efficacy trials, about 667 patients at this 3% risk level need to be treated for one year to prevent one hip fracture. This will cost from $48 000 to $5.21m depending on whether the cheapest drug (a generic bisphosphonate) or the most expensive one (an anabolic bone forming compound) is used, and these figures do not include the other costs of screening and treating these patients. Since the average total cost of one hip fracture is about $27 500, many drug treatments that are claimed to be cost effective will become so only when available in generic form, if then.
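The arithmetic behind these figures can be laid out explicitly. In the sketch below the annual fracture risk is approximated as one tenth of the 10 year probability, and the per-patient drug costs are back-calculated from the totals quoted above, so they are approximations rather than quoted prices:

```python
# Worked arithmetic for the 3% treatment threshold example in the text.
# Assumption: annual risk ~ one tenth of the 10 year probability; drug costs
# per patient-year are back-calculated from the quoted totals (approximate).

ten_year_fracture_probability = 0.03                           # NOF/WHO treatment threshold
annual_fracture_risk = ten_year_fracture_probability / 10      # ~0.3% a year
relative_risk_reduction = 0.50                                 # the "very optimistic" assumption

absolute_risk_reduction = annual_fracture_risk * relative_risk_reduction   # 0.0015
number_needed_to_treat = 1 / absolute_risk_reduction                       # ~667

generic_cost_per_patient_year = 48_000 / number_needed_to_treat            # ~$72
anabolic_cost_per_patient_year = 5_210_000 / number_needed_to_treat        # ~$7 800
cost_of_one_hip_fracture = 27_500

print(round(number_needed_to_treat))        # 667 patient-years of treatment per fracture prevented
print(round(generic_cost_per_patient_year), round(anabolic_cost_per_patient_year))
print(round(48_000 / cost_of_one_hip_fracture, 1))  # even the cheapest regimen costs ~1.7
                                                    # fractures' worth per fracture avoided
```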
The findings of van Staa and colleagues,2 along with our example from osteoporosis, emphasise that cost effectiveness analyses have external validity only if they rest on realistic event rates and costs rather than solely on data from randomised trials. Accordingly, treatment and health technology assessments should move from analyses of carefully screened populations to actual cost effectiveness trials. In this light, we wonder at the virtual absence of empirical cost effectiveness data on preventive drugs, when drug companies stand to make millions in profit every week if their drugs are shown to reduce important clinical outcomes in the community setting. For comparison, Cochrane’s third rung on the ladder (real cost effectiveness trials) has been reached by some non-pharmacological interventions. For example, the cost effectiveness of exercise in preventing falls in older adults has been confirmed in actual trials.20
Both the European Medicines Agency and the US Food and Drug Administration require the drug industry to compare new medicines only with placebo. Thus, companies seldom conduct head to head comparisons of different drugs: they are not required, they are expensive, and the sample size needed to detect small differences between drugs would probably be enormous. Such studies might also disadvantage the company.
We need to put an end to this kind of gaming of the system and start to advocate true comparative effectiveness research.21 All relevant parties (doctors, patients, patient advocacy groups, the drug industry, and government regulatory bodies) should acknowledge that it is everyone’s responsibility to ensure that we have true cost effectiveness data on all preventive healthcare before it is approved for wider use and reimbursed by the government. This responsibility should not fall on the drug industry alone. Rather, governments and their drug approval agencies should not only ensure that true cost effectiveness trials are carried out but also fund these trials and provide legal protection for possible adverse events of the intervention. As preventive drugs are currently marketed to the entire population once they have been approved on the basis of efficacy in randomised trials, there should be no ethical or other reason for not performing true cost effectiveness trials in real life settings. Unless this is done, the important question of whether preventive pharmacotherapy is cost effective will remain unanswered.
Notes
Cite this as: BMJ 2011;342:d2175
Footnotes
We thank Antti Malmivaara, Martti Kekomäki, and Janne Leinonen for their comments on the manuscript.
Contributors and sources: The authors have long experience and research interests in the epidemiology and prevention of osteoporosis, falls, and fractures in elderly people. This article arose out of discussions at several meetings on osteoporosis and injury and disease prevention, as well as during TLNJ and JJ’s fellowships at the Centre for Hip Health and Mobility, Vancouver General Hospital, Vancouver, Canada in 2008-2009. TLNJ conceived the paper, carried out the initial critical review of the literature, and planned the rationale for the paper. JJ carried out the meta-analysis on anti-hip fracture efficacy of bisphosphonates. TLNJ and HS wrote the first draft. All authors contributed to the serial drafts and conversations and agreed to the final submission. All authors contributed to manuscript revisions after review. TLNJ is the guarantor.
Funding: The Competitive Research Funding of Pirkanmaa and Central Finland Hospital Districts, Sigrid Juselius Foundation, and the Academy of Finland.
Competing interests: All authors have completed the unified competing interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.
Provenance and peer review: Not commissioned; externally peer reviewed.