Cost consequences: implicit, opaque and anti-scientific
At the heart of the paper by Coast is a fundamental misunderstanding
and a clear preference for the comfort of decision makers over explicit
and transparent social decision making. She really must distinguish
between those issues which are about society’s health values and those
which are about positive scientific questions regarding how to estimate
costs and effects of an intervention and how to represent the uncertainty
surrounding these estimates. Her argument confuses these issues and, if
taken seriously, suggests that both relevant evidence and the use of
appropriate scientific methods should be abandoned.
The paper claims that complex technical presentation of the results
from economic evaluation is not meaningful to decision makers and that
their knowledge of formal methods is limited. This observation, which may
well be true for many decision makers, is used as one of her reasons for
advocating cost consequences analysis and for avoiding the use of methods
which are not readily and easily understood. The clear implication of her
argument is that attempts to apply formal methods to the analysis of cost
and effects of health care technologies should be restricted to those
which are easily understood rather than those which can be demonstrated to
be appropriate.
If we are to take Coast seriously, then any discussion of methods of
analysis in health care is redundant: we should simply ask current
decision makers what type of analysis they feel comfortable with and then
deliver it to them. Thankfully, this has not generally been the
historical approach to the development of methods in the evaluation of
health care. If it had, it is unlikely that we would regard RCTs as
worthwhile, the development of meta-analysis would have been abandoned
long ago as too complex, survival analysis would have remained an after-
dinner amusement for academics, and journals such as Statistics in
Medicine would be a figment of our collective imagination, replaced by the
reporting of the disaggregated consequences of focus groups asked to
consider whether an intervention is safe and effective. The truth is
that, as methods of evaluation have developed, decision makers have had to
acquire sufficient understanding of them to interpret appropriately the
results of scientific analysis so they can then apply social health values
in making decisions. This is true historically and it is true today with
the development of methods of economic evaluation as used, for example, to
inform decision making within bodies such as NICE [1].
Cost-effectiveness acceptability curves (CEACs) are cited as an
example of technical presentation which she believes is not meaningful for
some (presumably not all) decision makers. As stated in the paper, the CEAC
is a means of summarising the uncertainty surrounding the choice between
each alternative intervention [2,3]. It simply plots the probability that
each of the alternatives is cost-effective for a given cost-effectiveness
threshold, and interpreting it requires little more than the usual
(mis)interpretation of the p-value as an error probability. Coast’s
specific arguments about CEACs must therefore rest on the claim that either
uncertainty in the estimates of cost and effect is unimportant, or that
there are alternatives to CEACs which are less complex and easier to
interpret.
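The mechanics of a CEAC are straightforward to sketch. In the minimal example below (all means, standard deviations and the `simulate`, `prob_cost_effective` names are illustrative assumptions, not figures from the paper or from any real study), simulated cost and effect pairs for two alternatives are compared on net monetary benefit, and the probability that one alternative is cost-effective is computed at several threshold values:

```python
import random

random.seed(1)

# Hypothetical inputs: simulated (cost, effect) pairs for two alternatives.
# The means and standard deviations below are invented for illustration.
def simulate(mean_cost, sd_cost, mean_effect, sd_effect, n=5000):
    return [(random.gauss(mean_cost, sd_cost), random.gauss(mean_effect, sd_effect))
            for _ in range(n)]

hospital = simulate(mean_cost=3000, sd_cost=500, mean_effect=0.70, sd_effect=0.10)
at_home  = simulate(mean_cost=2500, sd_cost=400, mean_effect=0.68, sd_effect=0.10)

def prob_cost_effective(a, b, threshold):
    """P(net monetary benefit of a exceeds that of b) at a given threshold."""
    wins = sum(1 for (ca, ea), (cb, eb) in zip(a, b)
               if threshold * ea - ca > threshold * eb - cb)
    return wins / len(a)

# Evaluating this probability over a range of thresholds traces out the CEAC.
ceac_home = {t: prob_cost_effective(at_home, hospital, t)
             for t in (0, 10_000, 25_000, 50_000)}
```

Each point on the resulting curve is just a proportion of simulations, which is why the CEAC demands no more statistical sophistication from a reader than a p-value does.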
There are strong arguments for basing decisions about resource
allocation on expected costs and effects rather than traditional rules of
inference. However, this in no way implies that the uncertainty
surrounding decisions is unimportant. Indeed, an assessment of the
implications of decision uncertainty is essential to establish whether the
current evidence is sufficient to sustain a decision about a health
technology or whether more evidence should be required [4,5]. To fail to
consider uncertainty surrounding cost and effect can only lead to
decisions based on inadequate and limited evidence.
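The distinction between the adoption decision and decision uncertainty can be made concrete. In this sketch (the incremental estimates, standard errors and threshold are hypothetical, chosen only so that the effect is not statistically significant while the expected net benefit is positive), adoption follows the expected incremental net monetary benefit, while the probability that this decision is wrong is reported separately, assuming a normal incremental net benefit for illustration:

```python
from statistics import NormalDist

# Hypothetical trial estimates (illustrative only): the incremental effect
# would not be "statistically significant" (0.02 +/- 1.96 * 0.10 crosses 0).
d_effect, se_effect = 0.02, 0.10      # incremental QALYs and standard error
d_cost,   se_cost   = -500.0, 300.0   # incremental cost (a saving) and SE
threshold = 20_000.0                  # assumed cost-effectiveness threshold

# Decision rule: adopt if expected incremental net monetary benefit > 0,
# irrespective of significance tests.
expected_inb = threshold * d_effect - d_cost
adopt = expected_inb > 0

# Decision uncertainty: probability that adoption is the wrong decision,
# treating the incremental net benefit as normal (ignoring correlation
# between costs and effects for simplicity).
se_inb = ((threshold * se_effect) ** 2 + se_cost ** 2) ** 0.5
p_wrong = NormalDist(expected_inb, se_inb).cdf(0.0)
```

Here the technology would be adopted on expected net benefit even though the effect estimate is not significant; `p_wrong` is the quantity that then informs whether more evidence should be demanded.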
The methods literature has demonstrated that the CEAC is an
appropriate way to represent the uncertainty surrounding a decision to
adopt a technology. In fact, when considering more than two alternatives
it becomes impossible to interpret appropriately other means of presenting
uncertainty (e.g., scatter plots on the cost-effectiveness plane), despite
the fact that decision makers may be more comfortable with their inevitable
misinterpretation of them [2,3,6]. Similarly, simpler and more easily
understood approaches such as one-way or multi-way sensitivity analysis
either fail to account for the combined effect of uncertainty in all the
parameters taken together or become impossible to interpret correctly [7].
It is worth considering how an assessment of the uncertainty
surrounding the choice between Hospital and Hospital at home in the cost
consequences example in the paper could be made. Standard deviations are
reported only for activity, functioning and carer burden (presumably based
on a single study), but no information is given about the type of
distribution or the correlations between these dimensions of outcome.
More generally, how can decision makers interpret the evidence from a
number of different studies, sometimes comparing different alternatives,
without formal statistical analysis? The results for patient and carer
satisfaction are not reported; it is simply noted that some are not
statistically significant. Does this mean that decision makers should
conclude that there is no effect and that this is known with certainty?
Similarly, no standard deviations are reported for costs or for mortality.
Does this also mean that the decision maker should conclude that these
parameters are known with certainty? Finally, even if more information
were provided, how could a decision maker integrate it to understand
correctly the uncertainty surrounding the choice between Hospital and
Hospital at home without formal, explicit analysis?
Coast is right about one thing: if we need to understand the
uncertainty surrounding a decision then we must be able to specify the
social values on which decisions should be based; that is, we need to be
explicit about how costs and effects should be combined to inform a
decision. This is not a problem with health economic analysis if you
believe that legitimate social decision making should be explicit and
transparent. The concern that there may be social values not captured in
current analysis should prompt a call to establish what these social
values should be and to make sure they are fully reflected in subsequent
analysis, not a call to retreat from sometimes complex scientific methods
and to allow an implicit and therefore opaque approach to the social values
required in decision making. Until quite recently, these challenging
issues could be conveniently ignored by policy makers, clinicians and
analysts alike while decision making was opaque and based on implicit
criteria and the unspecified “weighing” of the evidence implied by cost
consequences “analysis”. Coast would have us continue to ignore them and
simply be happy with the unscientific paternalism of social decision
making based on implicit criteria and analysis with which decision makers
feel comfortable.
1. National Institute for Clinical Excellence (NICE). Guide to the
Methods of Technology Appraisal. London: NICE, 2004.
2. Fenwick E, Claxton K, Sculpher M. Representing uncertainty: the role of
cost-effectiveness acceptability curves. Health Economics 2001;10:779-89.
3. Briggs AH, O'Brien BJ, Blackhouse G. Thinking outside the box: Recent
advances in the analysis and presentation of uncertainty in cost-
effectiveness studies. Annual Review of Public Health 2002;23:377-401.
4. Claxton K, Sculpher M, Drummond M. A rational framework for decision
making by the National Institute for Clinical Excellence. Lancet
2002;360:711-715.
5. Claxton K. The irrelevance of inference: a decision-making approach to
the stochastic evaluation of health care technologies. Journal of Health
Economics 1999;18:342-64.
6. Briggs AH, Goeree R, Blackhouse G, O'Brien BJ. Probabilistic analysis
of cost-effectiveness models: choosing between treatment strategies for
gastroesophageal reflux disease. Medical Decision Making 2002;22:290-308.
7. Claxton K, Sculpher M, McCabe C, Briggs A, Akehurst R, Buxton M,
Brazier J, O’Hagan A. Probabilistic sensitivity analysis for NICE
technology assessment: not an optional extra. Health Economics,
forthcoming 2005.
Competing interests:
None declared