Personal paper: how to get the best health outcome for a given amount of money
BMJ 1997; 315 doi: https://doi.org/10.1136/bmj.315.7099.47 (Published 05 July 1997) Cite this as: BMJ 1997;315:47
- Matthew Sutton, research fellow
- Accepted 16 January 1997
Increasingly, healthcare programmes are evaluated in terms of both their costs and outcomes. However, existing evaluation designs, in which equal numbers of participants are compared, remain a hangover from the time when outcomes were the sole concern. Outcome maximisation studies, in which costs are equalised, are an alternative way of comparing the costs and consequences of health interventions. They may be feasible in some instances and are attractive because they highlight two of the main messages of health economics.
Firstly, economists emphasise that their concern with the costs of interventions stems from opportunity cost: because the resources for health care are finite, the resources used for one intervention should be compared with the potential benefits achievable by the next best option that is thereby forgone.1 2 3 In an outcome maximisation design the opportunity costs of high cost interventions would be highlighted more clearly in the study results.
Secondly, the marginal cost of implementing a programme (the change in cost resulting from a small change in the level of provision) does not necessarily equal the average cost.4 By evaluating alternative technologies at comparable levels of expenditure, outcome maximisation studies would take account of the fact that programme costs often do not rise proportionally with the number of patients treated.
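The distinction between marginal and average cost can be made concrete with a toy cost function. The figures below are invented purely for illustration and do not come from any programme discussed in this paper:

```python
# Illustrative (invented) cost function: a fixed cost for setting up a
# programme plus a constant variable cost per patient treated.
FIXED_COST = 5000      # e.g. premises and equipment
COST_PER_PATIENT = 20  # e.g. consumables and staff time


def total_cost(n_patients):
    return FIXED_COST + COST_PER_PATIENT * n_patients


def average_cost(n_patients):
    return total_cost(n_patients) / n_patients


def marginal_cost(n_patients):
    # Change in total cost from treating one additional patient.
    return total_cost(n_patients + 1) - total_cost(n_patients)


# Average cost falls as the fixed cost is spread over more patients,
# while the marginal cost stays constant.
print(average_cost(10))    # 520.0
print(average_cost(100))   # 70.0
print(marginal_cost(100))  # 20
```

Because average cost depends on throughput in this way, comparing two programmes run at very different scales conflates the technologies with the scales at which they happen to have been evaluated; equalising expenditure removes that confound.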
- Health care evaluations used to focus solely on outcomes, but increasingly they have compared the costs of competing interventions
- Popular study designs, in which equal numbers of people are subjected to each regimen at different levels of total cost, do not reflect this innovation
- An outcome maximisation design, in which the costs allocated to each programme are equal, might be appropriate
- This design has two main advantages: (a) the opportunity costs (or next best use) of more resource intensive treatments are readily expressed in terms of units of benefit forgone and (b) the real life choice of allocating a fixed level of resources to different programmes is considered
- One likely consequence of this design is that different numbers of subjects receive the different treatments
- This is no less ethical than traditional study designs if ethical consideration is extended to those not receiving treatment
Study designs for economic evaluations
Published work in health economics lists four types of economic evaluation: cost minimisation, cost effectiveness, cost-benefit, and cost-utility analyses.1 The differences between them relate to the way in which consequences are considered. Cost minimisation studies rest on the assumption that the outcomes of the different interventions are not significantly different. As this assumption should be based on evidence, cost minimisation analysis is often undertaken after a clinical trial. The remaining three approaches apply when the outcomes are not equal. As the first box shows, the differences concern whether there are multiple outcomes of interest and how those multiple effects are to be measured on a common scale. In all cases, the aspect of the study that is controlled, if any, is that equal numbers of individuals are treated by the different options. This permits testing of a null hypothesis of no difference in outcome between the alternatives.
Current designs of economic evaluations1
Cost minimisation analysis: evidence is sufficient to assume that the outcomes from the alternatives are equal. Programmes are judged on the criterion of least cost
Cost effectiveness analysis: the alternatives are judged on a single outcome. This may be achieved to different degrees by the alternatives. Programmes are ranked using cost effectiveness ratios—for example, cost of intervention per problem free drinker
Cost-utility analysis: several outcomes are produced by the alternatives. They may be produced to different degrees and some outcomes may not apply to all alternatives. The multiple outcomes are combined using preference weights. Programmes are selected on the basis of comparisons of costs per unit of outcome—for example, per quality adjusted life year (QALY)
Cost-benefit analysis: outcomes are produced as in cost-utility analyses, but the multiple outcomes are combined using monetary values. Programmes are ranked using cost-benefit ratios. Because benefits are measured in the same units as costs, programmes can be judged to be worthwhile overall—that is, whether the value of the benefits produced exceeds the value of the resources consumed
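The four designs differ only in how consequences enter the decision rule. The ranking step common to cost effectiveness analysis can be sketched in a few lines; the programme names, costs, and effects below are invented for illustration:

```python
# Invented example data: total cost (pounds) and effect (number of
# problem free drinkers) for three hypothetical programmes.
programmes = {
    "brief advice":      {"cost": 2000,  "effect": 10},
    "outpatient course": {"cost": 9000,  "effect": 30},
    "inpatient stay":    {"cost": 20000, "effect": 40},
}

# Cost effectiveness ratio: cost per unit of outcome. Programmes are
# ranked from lowest to highest ratio.
ranking = sorted(
    programmes.items(),
    key=lambda item: item[1]["cost"] / item[1]["effect"],
)

for name, data in ranking:
    ratio = data["cost"] / data["effect"]
    print(f"{name}: {ratio:.0f} pounds per problem free drinker")
```

Cost-utility and cost-benefit analyses follow the same pattern, with the effect term replaced by preference-weighted or monetary-valued outcomes respectively.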
Broadly, economists describe the objectives of economic analysis in two ways: either as securing a given level of outcome at least cost or as maximising outcome from a fixed level of resources.5 However, the taxonomy of study designs in the box includes only those that consider securing a given level of outcome at least cost; because the outcomes of different programmes cannot be fixed as equal by design, cost minimisation studies will rarely be applicable.
Proposed outcome maximisation designs for economic evaluations
Effect maximisation analysis: equal resources are allocated to each programme. The preferred programme maximises total effectiveness—for example, number of problem free drinkers, number of disability free days
Utility maximisation analysis: equal resources are allocated to each programme. Multiple outcomes are combined using preference weights. Programmes are selected on the basis of total increase in utility—for example, the total number of QALYs produced
Benefit maximisation analysis (disbenefit minimisation analysis): equal resources are allocated to each programme. Multiple outcomes are combined using monetary values. The programme producing outcomes of most value is recommended. If the value of the benefits is less than the value of the resources used the criterion becomes disbenefit minimisation
Instead, studies maximising outcome from a fixed level of resources may be more pragmatic. Thus, it may be plausible to hold constant the total resources, or significant cost components such as staff time, devoted to each arm of the trial and adopt an outcome maximisation design. The three ways of assessing outcomes listed in the first box would generate three corresponding study types: effect maximisation, utility maximisation, and benefit maximisation analyses (when total benefits are less than total costs a benefit maximisation analysis becomes a disbenefit minimisation analysis). The second box summarises the principal features of these new study designs. Essentially, the (equalised) resources in each arm of the trial would be used to treat as many people as possible, and the alternatives would be compared on some measure of the total amount of benefit produced. This does not have to be a simple summation across people,2 3 although this is the standard approach implicit in cost effectiveness or cost-benefit ratios.
Outcome maximisation study of brief and intensive interventions for alcoholism
By way of example, consider recent reviews of brief and intensive interventions for alcohol misuse.5 6 Notwithstanding considerable debate about whether brief interventions are truly brief,7 the primary purpose of trials comparing them is to test the null hypothesis that there is no significant difference in outcome between the two alternatives. With a cost minimisation approach, if the null hypothesis is not rejected the choice can be based on the obvious cost advantage of brief interventions.6
For example, Chapman and Huygens found no significant differences in various outcome measures between alcoholic patients offered a six week outpatient programme or a two hour confrontational interview.8 At 18 month follow up, 29% of the 22 subjects in the outpatient group had problem free drinking compared with 22% of the 26 subjects in the confrontational interview arm. This difference was reported as not statistically significant.
Chapman and Huygens do not report the amount of resources used in each arm of the study.8 However, the results from an effect maximisation design may be estimated by holding staff resources constant at 1.5 full time equivalent workers over six weeks, estimating the number of subjects who could be treated by each alternative, and using the reported rates of problem free drinking (table 1).8
Clearly, more subjects can be treated in the confrontational interview programme within these fixed resources. Given the comparatively small difference in rates of problem free drinking between the programmes, the cost advantage of the confrontational interview over the outpatient programme also emerges in the number of subjects for whom the programmes produce a successful outcome (16 v 3). The opportunity cost of allocating 1.5 full time equivalent workers to the outpatient programme, when they could have been assigned to conducting the confrontational interviews, is a successful outcome for 13 subjects. This seems a more intuitive and persuasive way to present opportunity costs than highlighting the monetary value of the resources that could have been saved.9
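The effect maximisation arithmetic can be reproduced in a few lines. The original table is not reproduced here, so the numbers of subjects treatable within the fixed staffing (10 and 72 below) are hypothetical values chosen to match the figures reported in the text:

```python
# Reported rates of problem free drinking at 18 month follow up.
SUCCESS_RATE = {"outpatient": 0.29, "confrontational": 0.22}

# Hypothetical numbers treatable with 1.5 full time equivalent workers
# over six weeks, chosen to reproduce the reported results (they are
# not taken from the original table).
TREATED = {"outpatient": 10, "confrontational": 72}

# Expected successful outcomes in each arm at equal resource levels.
successes = {
    arm: round(SUCCESS_RATE[arm] * TREATED[arm]) for arm in TREATED
}
print(successes)  # {'outpatient': 3, 'confrontational': 16}

# Opportunity cost of running the outpatient programme, expressed in
# natural units of outcome: successful outcomes forgone.
print(successes["confrontational"] - successes["outpatient"])  # 13
```

The comparison is between total successes per arm, not success rates, because both arms now consume the same resources.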
This type of study design better represents the decision problems faced in health services, as it compares alternative uses of a predetermined budget. A further advantage of this approach is that it explicitly considers the production of outputs from inputs in different health technologies at a particular level of input. Therefore, if cost per subject varies with the number of subjects—for example, when savings are possible by treating many patients—the costs and consequences will be evaluated at comparable levels.
For example, the sample problem (table 1) could be reconsidered for the case in which four part time (two full time equivalent) workers are available. In the one to one intervention (confrontational interview) it may be possible to respond to this marginal increase in resources by treating more subjects, whereas additional group sessions may not be possible in the outpatient arm with only one extra part time worker. The number of problem free drinkers resulting from the confrontational interview then rises to 21, and the opportunity cost of allocating the four part time workers to the outpatient intervention would be a successful outcome for 18 patients.
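This second scenario can also be checked with a few lines of arithmetic. As before, the treated counts are hypothetical values chosen to reproduce the figures reported in the text:

```python
# Reported rates of problem free drinking at 18 month follow up.
SUCCESS_RATE = {"outpatient": 0.29, "confrontational": 0.22}

# With two full time equivalent workers the one to one confrontational
# interview can scale up, while the outpatient programme cannot add a
# group session with only one extra part time worker, so its
# throughput is unchanged. Both counts are hypothetical.
TREATED = {"outpatient": 10, "confrontational": 95}

successes = {
    arm: round(SUCCESS_RATE[arm] * TREATED[arm]) for arm in TREATED
}
print(successes["confrontational"])  # 21

# Opportunity cost, in successful outcomes forgone, of choosing the
# outpatient programme at this higher resource level.
print(successes["confrontational"] - successes["outpatient"])  # 18
```

Note that the opportunity cost grows with the resource level, which is exactly the marginal-cost effect an outcome maximisation design is meant to expose.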
Opportunity costs and ethical implications of outcome maximisation studies
The ethics of randomisation are based on a lack of evidence on the superiority of any alternative and equal chances for each subject of receiving the treatment that emerges as superior. One probable feature of an effect maximisation design is that different numbers of people would be treated in each arm of the study. Would outcome maximisation studies therefore be unethical?
An extension of ethical concerns to opportunity costs suggests not. Williams has argued that all those affected by clinical decisions (including those missing out on those resources) must be the subject of ethical considerations because the resources for health care are limited and choices must be made between potential patients.2 Table 2 shows the ethical implications of traditional study designs once the opportunity costs in the next best treatment alternative (intervention x) are considered. Owing to the different amounts of resources allocated to each study arm, a different number of people are subjected to the consequences of each alternative. Effectively, different numbers of individuals are allocated to missing out on treatment in traditional study designs. Therefore, these designs cannot be considered more ethical than ones in which the level of resources is equalised across study arms.
There may be no additional ethical problems associated with holding costs constant in outcome maximisation studies. Also, this approach may be preferable because, as with many real life decisions, the evaluation focuses on competing options for a fixed level of resources. Moreover, the level of input is explicit in the study design, so it is clear to what allocation of resources the results apply. Furthermore, it highlights the disadvantages of high cost interventions by showing opportunity costs in natural units of outcome—for example, number of problem free drinkers—and, by reducing the amount of cost analysis required, returns us to the good old days of outcome maximisation.
Initial drafts of this paper were written while I was national drug strategy research fellow at the National Drug and Alcohol Research Centre, University of New South Wales, Sydney, Australia. This temporary post was funded by the Drugs of Dependence Branch of the Commonwealth Department of Human Services and Health in Australia. I thank David Buck, Neil Craig, Neil Donnelly, Richard Mattick, and David Torgerson for helpful comments on early drafts of the manuscript.