- Dan Greenberg, visiting scientist1,
- Allison B Rosen, AHRQ health services research fellow2,
- Natalia V Olchanski, research assistant1,
- Patricia W Stone, assistant professor of nursing3,
- John Nadai, instructor in medicine4,
- Peter J Neumann, associate professor of policy and decision sciences1
- 1Harvard Center for Risk Analysis, Department of Health Policy and Management, Harvard School of Public Health, 718 Huntington Avenue, Boston, MA 02115, USA
- 2Division of General Medicine and Primary Care, Beth Israel Deaconess Medical Center, 330 Brookline Avenue, Rose 130, Boston
- 3Columbia University, 617 West 168th Street, New York, NY 10032, USA
- 4Massachusetts General Hospital, 55 Fruit Street, Weight Center S50-4th floor, Boston
- Correspondence to: D Greenberg, Harvard Clinical Research Institute, 930 Commonwealth Avenue, Boston, MA 02215, USA
- Accepted 30 January 2004
Economic evaluations conducted alongside randomised controlled trials enable analysis of detailed, patient level data on efficacy, cost, and quality of life in a controlled setting. They can provide timely and reliable assessments of value for money, to inform decisions on coverage and reimbursement.1–3
The BMJ recently decided to consider trial based economic evaluations for publication only if the clinical results are submitted to the journal as well.4 We assessed the extent to which cost utility analyses are conducted alongside trials, estimated the time lag between the publication of trials' clinical and economic results, and compared the characteristics of journals publishing the clinical trial data and the cost utility analyses.
Methods and results
We conducted a systematic search for original English language cost utility analyses published between 1976 and 2001, using Medline and other electronic databases. Two readers independently reviewed each study and reached a consensus on whether the analysis was conducted alongside a trial (that is, data on both efficacy and resource use from the trial were used for the analysis). We identified the journal and publication date for each cost utility analysis and the corresponding trial. To assess each study's potential readership and dissemination, we used paired sample t tests to compare the mean impact factors of the journals in which the studies were published and the extent to which the publications were subsequently cited by other authors.
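As a rough illustration of the paired comparison described above, a minimal sketch follows. The impact factor values are invented placeholders, not the study data, and the t statistic uses the standard paired-sample formula (mean difference divided by its standard error, with n - 1 degrees of freedom).

```python
import math
import statistics

def paired_t_test(x, y):
    """Paired-sample t test for matched observations, e.g. the
    journal impact factor of a clinical trial report and of its
    companion cost utility analysis.

    Returns the t statistic and degrees of freedom."""
    assert len(x) == len(y) and len(x) > 1
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)       # sample standard deviation of differences
    t = mean_d / (sd_d / math.sqrt(n))   # t statistic with n - 1 df
    return t, n - 1

# Hypothetical impact factors for five matched pairs:
# (journal of the cost utility analysis, journal of the trial)
cua_journals = [3.1, 4.9, 2.2, 5.0, 4.3]
trial_journals = [24.8, 11.2, 7.0, 29.1, 9.8]
t, df = paired_t_test(cua_journals, trial_journals)
```

A negative t here would indicate, as in the study, that the cost utility analyses appeared in lower impact journals than the trials they accompanied.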
Of 533 cost utility analyses identified, 45 (8%) were trial based economic evaluations and covered a variety of clinical areas, particularly cardiovascular disease, cancer, and psychiatry (a full list of studies is available at www.hsph.harvard.edu/cearegistry). We could not determine the lag in publication between the trial and the economic evaluation for four studies, for which a specific trial could not be identified or trial results were published only in abstract form. In cases where the clinical trial results and economic evaluation were reported in the same article or in the same issue of the journal (n = 7), we assumed no lag.
On average, cost utility analyses were published almost two years after the publication of the corresponding trial (mean (SD) 1.8 (1.4) years; range 0-7.5 years) (see table). Journal impact factors were higher for trials than for cost utility analyses (11.0 v 4.9; t = -3.951 (df = 28); 95% confidence interval for the difference -9.25 to -2.93; P < 0.001). The mean number of citations per year (total number of citations divided by number of years since the study was published) was also higher for clinical trials than for the economic evaluations (27.4 v 3.4; t = -3.197 (df = 30); 95% confidence interval for the difference -39.24 to -8.64; P = 0.003).
We found a substantial delay in the publication of cost utility analyses, suggesting that reliable economic data are usually not available, at least in peer reviewed journals, by the time decisions on adoption and reimbursement are typically made. Moreover, compared with trial results, cost utility analyses are disseminated in journals with lower readership and influence. Several factors may contribute to this phenomenon: economic evaluations may be time consuming to construct, as they typically involve projecting trial data over time and across populations through modelling techniques and data from external sources; trial sponsors and investigators are eager to report important clinical results first, so more resources are initially allocated to interpreting and publishing those results; and, because most readers of clinical journals are physicians rather than economists or policy makers, editors more often assign manuscripts presenting important clinical results to an accelerated review and publication process.
Efforts have recently been made to keep the clinical and economic results of a trial together.4 Further steps (for example, a fast track review process) should be taken to promote timely dissemination of the results of economic evaluations, concurrent with or soon after the completion and publication of the trial.
What is already known on this topic
To identify cost effective interventions, decision makers need timely and reliable information about the clinical and economic consequences of treatments
Economic evaluations conducted alongside clinical trials enable analysis of detailed, patient level data on efficacy, cost, and quality of life in a controlled setting
What this study adds
A substantial delay in the publication of economic evaluations suggests that reliable economic data are usually not available when decisions have to be made
We thank Richard H Chapman for his contribution to the design and analysis of the Harvard School of Public Health Cost-Effectiveness Analysis Registry
Contributors DG had the original idea for the study, drafted the first version of the manuscript, did the statistical analysis, and is the guarantor. All authors extracted data, interpreted the findings, critically revised the report, and approved the final version
Funding Supported by grant number R01 HS10919 from the Agency for Healthcare Research and Quality
Competing interests None declared