How to identify when a performance indicator has run its course
BMJ 2010;340:c1717 doi: https://doi.org/10.1136/bmj.c1717 (Published 06 April 2010)
- David Reeves, senior research fellow1,
- Tim Doran, clinical research fellow1,
- Jose M Valderas, clinical lecturer1,
- Evangelos Kontopantelis, research associate1,
- Paul Trueman, director, York health economics consortium2,
- Matt Sutton, professor of health economics3,
- Stephen Campbell, senior research fellow1,
- Helen Lester, professor of primary care4
- 1National Primary Care Research and Development Centre, Manchester M13 9PL
- 2York Health Economics Consortium, University of York, York YO10 5NH
- 3Health Methodology Research Group, Manchester M13 9PL
- 4National Institute for Health Research School for Primary Care Research, Manchester M13 9PL
- Correspondence to: H Lester
- Accepted 16 February 2010
Increasing numbers of countries are using indicators to evaluate the quality of clinical care, with some linking payment to achievement.1 For performance frameworks to remain effective the indicators need to be regularly reviewed. The frameworks cannot cover all clinical areas, and achievement on chosen indicators will eventually reach a ceiling beyond which further improvement is not feasible.2 3 However, there has been little work on how to select indicators for replacement. The Department of Health decided in 2008 that it would regularly replace indicators in the national primary care pay for performance scheme, the Quality and Outcomes Framework,4 making a rigorous approach to removal a priority. We draw on our previous work on pay for performance5 6 and our current work advising the National Institute for Health and Clinical Excellence (NICE) on the Quality and Outcomes Framework to suggest what should be considered when planning to remove indicators from a clinical performance framework.
First UK decisions
The Quality and Outcomes Framework currently includes 134 indicators for which general practices can earn up to a total of 1000 points. Negotiations between the Department of Health and the BMA’s General Practitioners Committee last autumn led to an agreement to remove eight clinical indicators worth 28 points in April 2011 (table 1). The eight indicators are all process measures and reward actions such as taking blood pressure or taking blood to measure cholesterol, glucose, or creatinine concentrations for people with relevant chronic diseases. The framework rewards the action itself rather than a clinically informed response to results or intermediate outcomes such as better control of blood pressure or cholesterol levels. It is therefore not surprising that achievement of these process indicators is high (median >95% and interquartile range <4.5%) with little change in rates or variation across practices since 2005-6, the second year of the Quality and Outcomes Framework.
In many schemes, including the Quality and Outcomes Framework, providers can “except” certain patients from inclusion in the denominator figures for an indicator on grounds such as extreme frailty or contraindications to a specified drug. Exception reporting rates are also low for these eight indicators (median <5% and interquartile range <3%).
If we look at one of the eight indicators in more detail—the proportion of diabetic patients who have had their blood pressure measured in the previous 15 months—the reason for removal is clear. Performance has been extremely high and stable since 2005-6 both in terms of achievement (median around 99%) and exception reporting (around 1%), with low interpractice variation (table 1). Indeed, over 99% of practices scored maximum points (21% had 100% achievement), and the average remuneration for practices on this indicator was £374.40 (£3.1m nationally each year). These results strongly suggest that the ceiling has been reached in performance for this indicator and little can be gained from continuing to reward it.
However, the associated intermediate outcome indicator (the proportion of people with diabetes with blood pressure ≤145/85 mm Hg in the previous 15 months) will remain in the 2011 framework. Although there have been moderate gains in performance since 2005-6, median achievement and exception rates in 2007-8 were 80% and 5.7% respectively.7 It may not be possible to reach the same level of performance for intermediate outcome indicators as for process indicators; however, around 10% of practices have attained achievement rates of ≥90% and exception rates of <2%, showing that higher performance is possible. It would therefore be inappropriate to remove this indicator.
Criteria for removing indicators
Indicators that are candidates for removal from a framework should be identified largely on the basis of statistical criteria, with the final decision often determined by the context. Statistical criteria cover measures of performance as well as the economics of incentives. Economic analysis considers the net benefit of incentives by quantifying the costs of the indicator relative to the health benefits accrued. If the benefits outweigh the costs it is economically justifiable to continue to reward good performance. This approach is particularly suited to indicators that are associated with a direct therapeutic benefit but is less suited to process indicators such as measuring blood pressure, where it may be difficult to attribute or quantify any resulting health benefit. We therefore suggest that economic analysis should not routinely be used for process indicators.
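The net benefit calculation can be sketched in a few lines. This is an illustrative simplification, not the model used by NICE or the authors; the figures (QALYs gained, willingness to pay per QALY) are entirely hypothetical, and only the £3.1m annual incentive cost comes from the worked example above.

```python
# Illustrative sketch of a net monetary benefit check for an incentivised
# indicator. All health-gain figures are hypothetical.

def net_benefit(qalys_gained, willingness_to_pay, incentive_cost):
    """Net monetary benefit: value of health gained minus cost of the incentive."""
    return qalys_gained * willingness_to_pay - incentive_cost

# Hypothetical example: 200 QALYs gained nationally, valued at £20,000 each,
# against £3.1m of annual incentive payments.
nmb = net_benefit(qalys_gained=200, willingness_to_pay=20_000,
                  incentive_cost=3_100_000)
print(nmb > 0)  # positive net benefit: continuing the reward is justifiable
```

For a process indicator such as blood pressure measurement, the first argument is the hard part: the health gain of the measurement itself is diffuse and difficult to attribute, which is why we suggest reserving this analysis for indicators with a direct therapeutic benefit.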
The performance of an indicator should be assessed in at least five ways:
- Average rate of achievement
- Recent trend in achievement rate
- Extent and trend in variation of achievement rate
- Average rate and trend in exception reporting
- Extent and trend in variation of exception rate.
If the rates have skewed distributions, medians and interquartile ranges may be more appropriate measures than means and standard deviations.
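The five measures above can be summarised per indicator from practice-level data. A minimal sketch, using medians and interquartile ranges because achievement rates are typically skewed (the data below are invented for illustration):

```python
# Sketch: median and interquartile range of per-practice achievement rates,
# by year. Rates are hypothetical percentages.
import statistics

def iqr(values):
    """Interquartile range from the quartiles of the data."""
    q = statistics.quantiles(values, n=4)
    return q[2] - q[0]

def summarise(rates_by_year):
    """rates_by_year: {year: [rate per practice]} -> {year: (median, IQR)}."""
    return {year: (statistics.median(r), iqr(r))
            for year, r in rates_by_year.items()}

achievement = {2006: [97.0, 98.5, 99.0, 95.5],
               2007: [98.0, 99.0, 99.5, 97.0]}
print(summarise(achievement))
```

The same summary applied to exception rates gives the remaining two measures; trends are then read off the year-on-year sequence of medians and IQRs.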
The average reported achievement rate—the percentage of eligible patients for whom the indicator target has been achieved—should be high. It is difficult to set one definition of high for all indicators because some, particularly intermediate outcome indicators such as achieving low cholesterol levels, are unlikely ever to reach rates as high as those of process indicators. One solution is to use a different empirical definition of high for each indicator—for example, by using the achievement rates of the top 10% during the first year of the indicator’s operation.
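The empirical definition of "high" amounts to taking a percentile of first-year achievement. A hedged sketch, with hypothetical first-year rates:

```python
# Sketch: define "high" for an indicator as the rate achieved by the top 10%
# of practices in the indicator's first year (i.e. the 90th percentile).
import statistics

def high_threshold(first_year_rates):
    """90th percentile of first-year per-practice achievement rates."""
    return statistics.quantiles(first_year_rates, n=100)[89]

first_year = [70, 75, 80, 82, 85, 88, 90, 92, 94, 96]  # hypothetical rates
threshold = high_threshold(first_year)
print(threshold)
```

Each indicator then carries its own threshold, so an intermediate outcome indicator is judged against what its own best performers achieved rather than against a process indicator's near-100% ceiling.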
Examination of trends in performance can help identify indicators that have reached the limits of achievement. This is signalled either by consistently high performance or by a period of growth followed by a plateauing of the curve. Indicators for which improvement shows no signs of flattening off are less likely to be candidates for replacement. A variable pattern of improvement may signal a wider problem. A lack of change when there is clear room for improvement suggests a substantial mismatch between the magnitude of the incentive and the workload required.
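The plateau signal described above can be operationalised as a check on recent year-on-year gains. A minimal sketch; the minimum-gain threshold and the data are hypothetical, not values used in the framework:

```python
# Sketch: flag an indicator as plateaued when median achievement has stopped
# rising over the most recent years. Threshold is hypothetical.

def has_plateaued(yearly_medians, min_gain=0.5):
    """True if no recent year-on-year gain exceeds min_gain percentage points."""
    recent = yearly_medians[-3:]  # last three years of median achievement
    gains = [b - a for a, b in zip(recent, recent[1:])]
    return all(g < min_gain for g in gains)

# Growth followed by flattening, as for the diabetes blood pressure indicator:
print(has_plateaued([90.0, 95.0, 98.8, 99.0, 99.1]))  # recent gains: 0.2, 0.1
```

An indicator still gaining several points a year would fail this check and, as noted above, would be a poor candidate for replacement.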
Achievement rates will depend on a range of factors, only some of which will be under the control of providers.8 When factors outside providers’ control have been allowed for, variation in achievement rates should be low. A wide variation in achievement suggests that many providers could substantially improve their performance.
Average rates of exceptions and variations in these rates should be low. It would be inappropriate to replace an indicator for which a large proportion of patients with the condition are excepted without first determining the reason for the high level of exceptions, including the possibility that the indicator had poor face validity and was not seen as useful in clinical practice. Indeed, one of the next tasks of the external contractor team employed by NICE to help develop the framework is to look in detail at indicators with high exception reporting as well as high achievement. One such indicator is the percentage of patients with newly diagnosed angina who are referred for exercise testing or specialist assessment. What constitutes a low exception rate may vary by indicator, but a good indication that a practical limit has been reached is low variation in the rate between practices.
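Putting the statistical criteria together, a removal candidate shows high, stable achievement and low exception reporting, with little variation between practices. The thresholds in this sketch are taken from the descriptive figures quoted earlier for the eight removed indicators (median achievement >95%, achievement IQR <4.5%, median exceptions <5%, exception IQR <3%); they are illustrative, not decision rules used by NICE:

```python
# Sketch: combined statistical screen for removal candidates.
# Thresholds mirror the descriptive figures in the text; they are
# illustrative, not an official decision rule.

def removal_candidate(median_achievement, achievement_iqr,
                      median_exceptions, exception_iqr):
    return (median_achievement > 95.0 and achievement_iqr < 4.5
            and median_exceptions < 5.0 and exception_iqr < 3.0)

# The diabetes blood pressure measurement indicator (approximate figures):
print(removal_candidate(99.0, 1.0, 1.0, 0.5))

# The paired intermediate outcome indicator (2007-8 figures), which stays:
print(removal_candidate(80.0, 10.0, 5.7, 4.0))
```

Even an indicator that passes this screen should then face the contextual tests below before removal.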
Even if an indicator satisfies the statistical criteria for removal, contextual factors, which consider the wider framework in which the indicator is operating, may make removal inappropriate. Contextual factors include policy considerations such as maintaining an appropriate balance of indicators across disease domains; stakeholder perspectives such as concerns of health professionals about additional workload and reliance on incentives9; and concerns of patients and user groups about perceived loss of prioritisation of their condition. Indeed, the public seems to perceive inclusion in the framework to be important for good care. When the Department of Health invited ideas for inclusion in the framework in 2007, for example, 153 were received in five weeks, 52% of which came from national disease societies or local patient groups.
Another consideration is circumstances affecting the validity of the indicator, such as changes in evidence. For example, the framework currently contains an indicator for the percentage of patients taking lithium who have a record of plasma concentrations in the therapeutic range within the past six months. However, a recent National Patient Safety Agency alert reported 567 dosage errors for lithium, five of which occurred in primary care.10 The indicator may need to be reviewed in the light of this and 2006 NICE guidance that lithium concentrations should “normally” be measured every three months.11
Consequences of replacement
We have proposed selecting indicators for replacement on the basis of each indicator’s recent history of achievement and exception reporting rates. Underlying this approach is an assumption that this provides a good guide to the future performance of that indicator. The approach, however, provides no actual estimates of future performance, nor any measure of the degree of uncertainty in the forecast. This is something that we plan to explore in the near future.
Ultimately, we need to know what will happen to performance if an indicator is replaced. Empirical evidence from the United Kingdom is limited. There is some conflicting evidence from performance on two indicators removed for contextual reasons from the Quality and Outcomes Framework in 2006. In a sample of 150 practices in which performance was tracked, immunisation of asthmatic patients against influenza showed a substantial reduction in achievement rates after it was removed from the framework, but there was no such reduction for checking lithium concentrations in patients taking the drug. This process indicator, however, was paired with an intermediate outcome indicator (the percentage of patients taking lithium with a record of lithium levels in the therapeutic range within the past six months), which remained in the framework.
Strategies to minimise the risk of harm from removal might include a gradual reduction of the payments for achieving indicators or, as in the above example, initially to remove half of a paired indicator, so that the removed process is still incorporated as part of a linked intermediate outcome indicator.
Finally, removed indicators need to be monitored within the framework. A new centrally managed tool for extracting the necessary data from practices’ clinical computing systems—the General Practice Extraction Service—is due to be operational in England by 2011.12 In the meantime, large scale general practice databases that allow interrogation of electronically captured patient consultation data, such as the General Practice Research Database, QRESEARCH, and the Health Improvement Network, could be used to identify general trends.
Contributors and sources: The ideas in this paper arose from thinking, researching, and reading around the experience of developing the Quality and Outcomes Framework. All the authors contributed initial ideas and to the serial drafts and agreed the final submission. HL is guarantor.
Competing interests All authors have completed the unified competing interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare (1) HL, TD, SC, DR, EK, MS, and PT have an external contract with NICE for work on removing indicators from the Quality and Outcomes Framework; the views expressed are those of the authors and do not necessarily represent the views of NICE or its independent QOF Advisory Committee. (2) No financial relationships with commercial entities that might have an interest in the submitted work; (3) No spouses, partners, or children with relationships with commercial entities that might have an interest in the submitted work; (4) No non-financial interests that may be relevant to the submitted work.
Provenance and peer review: Not commissioned; externally peer reviewed.