What then is the true measure of preventive effect?
Re: The true cost of pharmacological disease prevention
Teppo Jarvinen et al. comment that "Large randomised clinical trials are considered to represent the strongest form of evidence in assessing whether a particular healthcare intervention works".[1] We agree, and are thus persuaded to prescribe statins to many of our patients at risk of strokes or heart attacks. Cost-effectiveness analysts such as NICE exhort us to treat just those at high risk (previous stroke, IHD, or 10-year CVD risk >20%).
But Jarvinen goes on to state that "little attention has been paid to the fact that people treated in large multicentre randomised trials may not accurately reflect the population receiving the drug in real world settings".[1] Seeing no evidence that north Pembrokeshire people had been included in the published RCTs, we must extrapolate. In an effort to justify our prescriptions, we examined our practice, with a view to establishing in what pertinent respects our treated patients differed from the trial patients.
Firstly, we identified the cohort of 5664 patients on our lists 5 years ago. Jarvinen states that "The effectiveness of treatment in the community is influenced by at least five factors", each of which we examined:
We feel we have made good efforts as GPs to ensure that our clinical population was treated according to NICE guidelines, but with questionable diagnostic accuracy: we were forced to use best-estimate substitution for missing parameters (cholesterol, BP), and used worst-case rather than mean parameters to compute Framingham 10-year CVD risks.
Provider compliance was mixed: our local prescribing incentive scheme encouraged low-acquisition-cost simvastatin, yet perversely also rewarded a net cost reduction. Nonetheless, we prescribed a statin (>90% simvastatin) to 1017 (18%) of our patients, achieving the highest rates in Pembrokeshire.
As to patient adherence, we found that among those offered a statin 18% stopped taking it, with 12% claiming adverse reactions. We cannot reliably estimate how many patients were not asked (coverage of healthcare services), but see no evidence of the inverse care law nor the rule of halves.
In an effort to estimate the treatment effect, we divided the patients into those with, and those without, known CVD. We further subdivided them into those taking a statin for the whole 5 years, those not taking a statin at all over the 5 years, and a middle group who started a statin at some point within the last 5 years. We analysed the 5-year rate of recorded CVD events, according to worst-case prior Framingham CVD risk as described above.
Whole population

RISK group | Mean risk % | Count | Expected events | IHD events | IHD event % | MIs | MI % | Strokes | Stroke %
<20%       |  5.8        | 3400  |  99             |   6        | 0           |  2  | 0    |  4      | 0
20%+       | 40.2        | 2264  | 455.5           | 102        | 4           | 49  | 2    | 47      | 2
Primary prevention group

Allocation       | RISK group | Mean risk % | Count | Expected events | IHD events | IHD event % | MIs | MI % | Strokes | Stroke %
5yrs no statin   | <20%       |  5.6        | 3309  |  92.5           |  1         |  0          |  0  | 0    |  0      | 0
5yrs no statin   | 20%+       | 34.5        | 1313  | 226.5           | 10         |  0          |  1  | 0    |  3      | 0
5yrs some statin | <20%       | 14.3        |   51  |   3.5           |  3         |  5          |  1  | 1    |  3      | 5
5yrs some statin | 20%+       | 45.7        |  442  | 101             | 48         | 10          | 28  | 6    | 28      | 6
5yrs statin      | <20%       | 14.5        |   29  |   2             |  0         |  0          |  0  | 0    |  0      | 0
5yrs statin      | 20%+       | 49.4        |  273  |  67.5           | 20         |  7          |  9  | 3    |  6      | 2
Secondary prevention group

Allocation       | RISK group | Mean risk % | Count | Expected events | IHD events | IHD event % | MIs | MI % | Strokes | Stroke %
5yrs no statin   | <20%       | 13.5        |    3  | 0               |  0         |  0          |  0  |  0   | 1       | 33
5yrs no statin   | 20%+       | 54.8        |   22  | 6               |  2         |  9          |  0  |  0   | 0       |  0
5yrs some statin | <20%       | 10.7        |    3  | 0               |  1         | 33          |  1  | 33   | 0       |  0
5yrs some statin | 20%+       | 47.2        |   37  | 8.5             |  4         | 10          |  4  | 10   | 3       |  8
5yrs statin      | <20%       | 12.1        |    5  | 0               |  1         | 20          |  0  |  0   | 0       |  0
5yrs statin      | 20%+       | 51.6        |  177  | 45.5            | 18         | 10          |  7  |  3   | 7       |  3
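The "expected events" columns appear consistent with multiplying each group's count by its mean 10-year Framingham risk, halved for the 5-year observation window. A minimal sketch of that arithmetic (the halving is our inference from the published figures, not stated explicitly in the text):

```python
# Expected 5-year CVD events, inferred as: count * (mean 10-year risk % / 100) / 2.
# The halving of the 10-year risk for a 5-year window is an assumption that
# reproduces the tabulated figures to within rounding.
def expected_events(count, mean_risk_pct):
    """Expected 5-year events from a group count and mean 10-year risk percentage."""
    return count * mean_risk_pct / 100 / 2

# (count, mean 10-year risk %, expected events as tabulated above)
rows = [
    (3400,  5.8,  99.0),   # whole population, <20% risk
    (2264, 40.2, 455.5),   # whole population, 20%+ risk
    (1313, 34.5, 226.5),   # primary prevention, 5yrs no statin, 20%+
    ( 273, 49.4,  67.5),   # primary prevention, 5yrs statin, 20%+
]
for count, risk, tabulated in rows:
    computed = expected_events(count, risk)
    print(f"n={count}, risk={risk}%: computed {computed:.1f}, tabulated {tabulated}")
```

Every computed value agrees with the tabulated column to within half an event, which supports the inferred formula.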
Superficial inspection gives little comfort. We might be heartened to find just 26 (39%) CVD events against an expected 67.5 amongst those identified and treated for a full five years for primary prevention at >20% 10-year risk. But there were only 89 (27%) events out of the 327.5 expected amongst the other 1313 + 442 primary prevention patients at >20% risk: proportionately more heart attacks and strokes amongst those taking a statin for five years than amongst those not! There were 11 new events in persons not taking a statin, resulting in all but 3 of those patients being started on a statin. The secondary prevention data for statins seem even more disheartening, with 25 (55%) CVD events against an expected 45.5 amongst those identified and treated for a full five years. Given our confidence in RCTs, it is not scientifically tenable to argue that statins caused more CVD events in practice. Such is the nature of a non-randomised retrospective cohort study: prone to all manner of selection, allocation, and ascertainment bias.
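The observed-to-expected percentages quoted above can be checked directly against the tables; a quick arithmetic check (observed CVD events taken as IHD events plus strokes, summed from the relevant rows):

```python
# Observed CVD events (IHD events + strokes) versus expected events,
# with counts read off the primary and secondary prevention tables above.
comparisons = {
    "primary, 5yrs statin, 20%+":       (20 + 6, 67.5),                      # 26 observed
    "primary, other statin use, 20%+":  ((10 + 3) + (48 + 28), 226.5 + 101), # 89 observed
    "secondary, 5yrs statin, 20%+":     (18 + 7, 45.5),                      # 25 observed
}
for label, (observed, expected) in comparisons.items():
    pct = 100 * observed / expected
    print(f"{label}: {observed}/{expected} = {pct:.0f}%")
```

These reproduce the 39%, 27%, and 55% figures quoted in the text.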
Could it be that people who were put on a statin had an ineffable, unquantified indication of high risk? That GPs or their patients had a canny sixth sense? Alternatively, perhaps those prescribed a delayed statin were started as a result of a new CVD event? What part did other interventions, such as anti-smoking advice, BP lowering, and lifestyle change, play? It is difficult to know what to do next. Estimating risk by averaging risk factors, rather than taking worst-case figures, would make our preventive efforts look even less successful, and using QRISK would lower our estimates by 5%.
Could Des Spence be right in his claim that 'the banana-split effect' accounts for the lion's share of preventive influence?[2] Have we really wasted the last 5 years on statins, or should we keep our faith in the RCT? Although statins appear at first sight to have contributed little if anything to the results, the fact is we have seen only 96 of the 455.5 strokes and heart attacks expected in our 2264 patients at >20% worst-case ten-year CVD risk. Further work is needed to explain this unexpected and worrying paradox.
References
1. Jarvinen T, et al. The true cost of pharmacological disease prevention. BMJ 2011;342:d2175. doi:10.1136/bmj.d2175 (published 19 April 2011)
2. Spence D. Cooking the books. BMJ 2009;338:b2647. doi:10.1136/bmj.b2647 (published 30 June 2009)
Competing interests: None declared