Letters

Performance indicators for primary care groups

BMJ 1999; 318 doi: https://doi.org/10.1136/bmj.318.7186.803 (Published 20 March 1999) Cite this as: BMJ 1999;318:803

Current indicators have been chosen for ease of collection rather than scientific validity

  1. Paul Myers (Pmyers1860{at}aol.com), Senior lecturer.
  1. Department of General Practice and Primary Care, Queen Mary and Westfield College, St Bartholomew's and the Royal London School of Medicine, Medical Sciences, London E1 4NS
  2. Petts Hill Surgery, Northolt UB5 4NL
  3. Department of Primary Health Care and General Practice, Imperial College School of Medicine, London W2 1PG
  4. East Sussex, Brighton and Hove Health Authority, Lewes, East Sussex BN7 2PB
  5. 17 Villiers Crescent, Eccleston, St Helens, Merseyside WA10 5HP
  6. Prescribing Research Group, Department of Pharmacology and Therapeutics, University of Liverpool, Liverpool L69 3GF
  7. Wessex Institute for Health Research and Development, University of Southampton, Southampton General Hospital, Southampton SO16 6YD
  8. Primary Medical Care, University of Southampton, Southampton SO16 5ST
  9. Three Swans Surgery, Salisbury SP1 1DX

    EDITOR—McColl et al provide a welcome alternative1 to performance indicators proposed by the NHS Executive and the Department of Health.2 They have suggested a range of evidence based interventions which are likely to produce behaviour change at practice level. The proposed indicators are very different from the performance indicators in current use, which seem to have a political role at health authority level, often being used simply to search for poorly performing doctors.

    I have looked at the performance indicators described in the literature, and in particular at the scientific evidence underpinning them. Little evidence exists for the validity of the indicators in common use.3 A consistent finding is that indicators are often chosen for ease of collection rather than scientific validity. The most commonly used include uptake of cervical cytology, immunisation rates, and various prescribing indicators, yet I have found little published research showing the importance of a high or low value of an indicator. This applies particularly when indicators have been accepted as proxy measures of individual general practitioners' clinical competence. Others who have reviewed performance indicators have identified additional areas that raise doubts about their validity.4 5

    The new indicators will need to be differentiated from the non-clinical indicators that are currently popular markers of clinical competence. In practice these often reflect historic support that has been provided for the practice rather than the competence of the individual general practitioner. Thus the proposed introduction of evidence based clinical indicators for primary care groups provides a more acceptable way forward.

    Although McColl et al's paper refers to the cost effectiveness of the proposed interventions, the timescale over which they will operate requires consideration, as it has important implications for the primary care groups at which they are targeted. Although secondary and tertiary prevention may reduce morbidity and mortality over decades, implementing the interventions will create upward pressure on costs in the short term, particularly prescribing costs. This should not obstruct the promotion of evidence based interventions at the level of primary care groups, but it must be taken into account when these groups and health authorities fund health improvement plans.

    References

    Will they discriminate against small general practices?

    1. Suresh Shah, Practice manager.
    2. Adrian Cook (a.d.cook{at}ic.ac.uk), Research analyst.

      EDITOR—We believe that the debate over performance monitoring in primary care groups and general practices1 has overlooked a combination of factors that may already be giving rise to a discriminatory effect against small general practices. One improvement to a system in which quarterly figures are used in isolation would be a rolling average over the current quarter and the three preceding quarters.

      Probability of failing to reach infant immunisation targets, by size of practice

      Indicators such as infant immunisation rates are measured by the proportional coverage of a target group. For any practice, aggregating quarterly infant vaccination figures over several years would measure the long term coverage of that practice. Quarterly coverage would vary about this figure; the higher variability in smaller practices is explained by binomial variation, in which the standard deviation of a proportion p estimated from n patients is √(p(1−p)/n), so the variability of p increases as n decreases.

      Practices slightly above target in the long term would probably fall below in some quarters, while those slightly below target in the long term would occasionally rise above. Preschool immunisation figures are above 90% in England and Wales,2 so it would follow that most practices are above target in the long term. Hence the number of practices losing out because of quarterly variations will be greater than the number gaining.

      Small practices are therefore the group most affected: the effect is to their disadvantage and results in reduced payment and an appearance of poorer performance. We have calculated the expected effect in an imaginary group of 100 small practices and 100 large practices, each achieving 95% long term coverage with a target of 90%. Quarterly coverage varies about 95%, with greater variability in the small practices. In a single quarter one large practice and nine small practices would be expected to fall below 90% and lose payment, despite identical long term coverage rates (table). If the results of four quarters were aggregated, only one small practice and no large practices would be expected to fail.
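The quarterly failure probabilities behind these expected counts follow from a simple binomial tail calculation. A minimal sketch in Python (the quarterly birth-cohort sizes of 20 and 100 infants per practice are our illustrative assumptions, not figures from the letter):

```python
from math import comb

def prob_below_target(n, p, target):
    """Probability that a practice with true long term coverage p
    falls below the target proportion in a quarterly cohort of n
    infants, under the binomial model described in the letter."""
    # The practice fails if the number immunised is below target * n
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if k < target * n)

# Illustrative (assumed) cohort sizes: 20 infants for a small
# practice, 100 for a large one; 95% true coverage, 90% target
p_small = prob_below_target(n=20, p=0.95, target=0.90)
p_large = prob_below_target(n=100, p=0.95, target=0.90)
print(f"small practice: {p_small:.3f}, large practice: {p_large:.3f}")
```

With these assumed cohort sizes the small practice fails in roughly 8% of quarters and the large practice in about 1%, reproducing the pattern described even though both have identical long term coverage.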

      Footnotes

      • Competing interest This work was funded by a small bursary from the West London Research Network.

      References

      Local consensus opinion must be reflected

      1. Thomas Scanlon (toms{at}esbhhealth.cix.co.uk), Medical adviser.
      2. Polly Tarrant, Primary care quality indicators project coordinator.

        EDITOR—McColl et al state the criteria that primary care groups should use for selecting performance indicators.1 Performance indicators, they say, should be attributable to health care, sensitive to change, based on reliable and valid information, and precisely defined and should reflect important clinical areas and include a variety of dimensions of care.

        We would add a further criterion: they should reflect local consensus opinion. The process of developing indicators of performance is as important as the evidence base behind them. There is a widespread perception in general practice that “indicators of good practice” are often developed by academics and managers who are remote and whose knowledge of clinical practice no longer includes personal experience. The same view is held about many guidelines, with the result that their impact on practice has often been negligible.

        If performance indicators are to be embraced by clinicians then ownership is essential. To that end we have adopted an inclusive approach to developing quality indicators in East Sussex, Brighton and Hove. A consensus group of doctors, nurses, and managers working in primary care is currently considering a range of indicators in the first stage of a Delphi approach. We hope by this process to establish a group of primary care quality indicators that not only fulfil McColl et al's criteria but also enjoy broad local support. Only then, we believe, will practitioners be willing to consider and adjust their own practice when they are seen to be performing differently from others.

        References

        Performance of these indicators is critical

        1. Mike Cranney (cranney{at}liv.ac.uk), General practitioner.
        2. Stuart Barton (stuart.barton{at}dial.pipex.com), Research consultant.

          EDITOR—We agree with McColl et al that performance indicators for use by primary care groups should be more evidence based,1 but the interpretation of the available evidence and the implementation of performance indicators is not as straightforward as they suggest. Of the eight primary care interventions discussed, the control of hypertension arguably has the strongest combination of evidence, potential impact, and cost effectiveness. Unfortunately, assessing control of hypertension among a group of general practitioners is difficult. The apparent performance of a practice might have more to do with digit preference, the number of available blood pressure readings per patient, or mere chance than with any underlying variation in medical practice.

          The authors give the mean level of control among hypertensive patients as 40%. In a multipractice audit we found that the figure changed from 37% to 54% according to whether control was defined as <160/90 or ≤160/90 mm Hg.2 These results were based on the mean of up to three measurements per patient. When only one reading was available the mean control changed from 26% to 62% with the different definitions. Chance also has a major role: we found that the main determinant of whether a practice performed particularly well or particularly badly was the sample size in that practice (even though we used sample sizes of 10% of elderly patients, as others have done).3

          In the 76 practices in our audit, control of treated hypertensive patients varied between zero and 86%. The figure shows a funnel plot illustrating the influence of sample size. A sample size of 200-250 per practice is necessary to obtain even minimally reliable results (a signal to noise ratio over unity). This clearly has resource implications and may increase the funding required to deliver clinical governance.

          Figure 1: Funnel plot showing the proportion of hypertensive patients with controlled blood pressure against the sample size used in each practice. The spread of the proportions is largest when the sample size is low. Detailed analysis of the variance indicates that a sample size of 200 is needed to discriminate reliably between practices.
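The shape of this threshold can be recovered from the binomial noise formula: if practices' true control rates vary with between-practice standard deviation σ, the sampling noise on a sample of n patients is √(p(1−p)/n), and a signal to noise ratio above unity requires n > p(1−p)/σ². A sketch, where σ = 0.035 is our illustrative assumption chosen to show how a threshold near 200 can arise (the letter's figure rests on its own variance analysis):

```python
from math import sqrt

def min_sample_size(p, sigma_between):
    """Smallest sample size per practice at which the between-practice
    signal (sigma_between) exceeds the binomial sampling noise,
    i.e. sqrt(p * (1 - p) / n) < sigma_between."""
    return p * (1 - p) / sigma_between**2

# Mean control rate of ~40% (from the audit); assumed SD of true
# control rates between practices of 3.5 percentage points
n_needed = min_sample_size(p=0.40, sigma_between=0.035)
print(round(n_needed))  # about 196 patients per practice
```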

          We strongly disagree with the authors that treating elderly hypertensive patients is less cost effective than treating younger patients. Treating elderly patients delivers a greater benefit in the short term because their baseline risk is so much higher. The morbidity and mortality from coronary heart disease and cerebrovascular disease are substantially reduced in elderly patients: only 18 older people need to be treated for five years to prevent one such event. More than twice as many younger patients need to be treated to prevent one death, and two to four times as many to prevent one cardiovascular event.4

          Performance indicators certainly need to reflect important clinical areas and be sensitive to change, but even those with the best evidence base may fail to deliver in routine practice. The performance of performance indicators is a critical issue.

          References

          Authors' reply

          1. Alastair McColl (a.mccoll{at}soton.ac.uk), Lecturer in public health medicine.
          2. Paul Roderick, Senior lecturer in public health medicine.
          3. John Gabbay, Professor of public health medicine.
          4. Helen Smith, Senior lecturer in primary care.
          5. Michael Moore, General practitioner.

            EDITOR—We share Myers's concerns about the lack of scientific evidence underpinning proposed performance indicators. We presented a method to identify primary care interventions of proved efficacy and suggested performance indicators that could monitor their use.

            We are evaluating our indicators in all 18 practices of a future primary care group, several of which are small practices. By presenting confidence intervals when comparing practice values with a local or an estimated national mean we have made the problems addressed by Shah and Cook more transparent. The training of those using and interpreting performance indicators should include how to understand the role of chance in the variation of indicator values.
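The confidence intervals the authors describe can be produced with standard methods; a minimal sketch using the Wilson score interval for a proportion (the practice figures are invented for illustration):

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Invented example: 34 of a practice's 50 audited hypertensive
# patients have controlled blood pressure
low, high = wilson_ci(34, 50)
print(f"control 68%, 95% CI {low:.0%} to {high:.0%}")
```

An interval this wide (roughly 54% to 79% here) makes plain how little a single quarter's figure from a small practice can say on its own.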

            Scanlon and Tarrant suggest the local development of indicators reflecting local consensus opinion, using a Delphi approach. Primary care groups should use local consensus to prioritise their actions. If they use only their local indicators, however, they will be unable to compare themselves with others outside their small locality. Nationally agreed, clearly defined indicators would enable wider comparisons and help to identify variations in practice. Consensus indicators derived from Delphi approaches are not necessarily evidence based.2

            The implementation of performance indicators is not straightforward. We stated that indicators require evaluation both before and after introduction into routine use. Our current evaluation project highlights the difficulties, many of which can be overcome. Interpretation of evidence is not straightforward either (see our table 2).1 We used our sources of evidence in an “illustrative way to demonstrate the potential for developing evidence based process indicators.”

            Cranney and Barton's data show that control of hypertension is a problem that needs to be addressed whatever definition is used. Our method emphasised the importance of having indicators that reflect the detection and control of hypertension. These are not yet part of currently proposed indicators.3 We agree that primary care groups will need to balance the accuracy of indicator values against the cost of data collection. If our indicators were widely accepted then providers of primary care software would be more likely to provide straightforward mechanisms for data collection. Evidence for the efficacy of antihypertensive treatment in elderly patients is strong.4 Our reference to its relative cost effectiveness was from a Department of Health document.5

            Our evidence based indicators could help to turn evidence into everyday practice and so have an impact on the population's health. They will be useful not only to primary care groups engaging in clinical governance but also in justifying investment in primary care interventions that should deliver clear health gains.

            References
