Assessing the quality of care
BMJ 1995;311:766 doi: https://doi.org/10.1136/bmj.311.7008.766 (Published 23 September 1995)

- Huw T O Davies, research fellow
- Iain K Crombie, reader

Department of Epidemiology and Public Health, University of Dundee, Ninewells Hospital and Medical School, Dundee DD1 9SY
Measuring well supported processes may be more enlightening than monitoring outcomes
Everyone wants information on clinical outcomes.1 These measures have an intuitive appeal: high quality care should be reflected by good outcomes. Therefore, poorer outcomes should indicate deficiencies in care, including missed opportunities or wasted resources. The hope is that data on outcomes will provide a barometer for health care, indicating the effectiveness and efficiency of service delivery.
Many purchasers are pushing to include outcome criteria in their contracts as a means of assessing effectiveness. In clinical audit, measuring outcome is generally considered superior to simply assessing the process of care.2 But perhaps this emphasis on outcomes is overplayed. Are outcomes data always so enlightening?
Outcome measures have a major weakness: interpretation. Suppose a hospital reported that patients admitted with coronary heart disease in 1994 had a 30 day mortality of 25%. This can be interpreted only by comparison with mortality elsewhere or with figures for previous years. But such comparisons are bedevilled by differences in case mix. The American experience suggests that the effects of case mix are large, and attempts to adjust for them have met with only limited success.3 The few sophisticated and successful adjustments for case mix that do exist (such as APACHE II, used in intensive care4) are rare exceptions, created and applied only with considerable effort. The problems lie in identifying the important prognostic factors and in collecting data on them routinely so that appropriate adjustments can be made.5
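To see why crude comparisons mislead, consider a toy sketch (in Python, with invented figures): two hospitals achieve identical death rates within each severity stratum, yet their crude mortality differs markedly simply because they admit different mixes of patients.

```python
# Toy illustration of the case mix problem (all figures invented):
# identical stratum specific death rates, very different crude rates.

def crude_rate(strata):
    """strata: list of (admissions, death_rate) pairs for one hospital."""
    deaths = sum(n * rate for n, rate in strata)
    admissions = sum(n for n, _ in strata)
    return deaths / admissions

# (admissions, 30 day mortality) for low and high severity patients.
hospital_a = [(800, 0.05), (200, 0.40)]  # mostly low severity admissions
hospital_b = [(200, 0.05), (800, 0.40)]  # mostly high severity admissions

print(f"hospital A crude mortality: {crude_rate(hospital_a):.0%}")  # 12%
print(f"hospital B crude mortality: {crude_rate(hospital_b):.0%}")  # 33%
```

Neither hospital treats its patients any better than the other, yet a league table of crude rates would damn hospital B.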
Interpretation is difficult enough for unambiguous outcomes such as death. But for many specialties death rates are largely inappropriate (for example, in psychiatry, rheumatology, dermatology, ophthalmology, and general practice). In most areas of health care, outcomes have to be assessed with measures such as disease status, functional ability, and quality of life.6 These measures often have less than ideal validity and reliability, and they are usually assessed unblinded. These problems combine with those of case mix to frustrate a meaningful evaluation of the outcomes achieved.
The difficulties in interpreting outcomes are increasingly being recognised. But now a paper by Mant and Hicks in this week's journal highlights a further problem: the clouding effects of the play of chance (p 793).7 The authors show that, even under ideal conditions, death rates are insensitive to quite wide variations in the quality of care. They do this by comparing two fictional hospitals with divergent practice in their use of established interventions for acute myocardial infarction (aspirin, thrombolysis, β blockers, and angiotensin converting enzyme inhibitors). Large differences between the centres in their use of these interventions lead to relatively small differences in death rates. Thus studies using outcomes measurement would need to be run for several years to detect deficiencies in care.
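The arithmetic behind this insensitivity is easy to reproduce. A minimal sketch, applying the standard sample size formula for comparing two proportions (with illustrative mortality figures of our own choosing, not those used by Mant and Hicks), shows how many patients each hospital must contribute before a two percentage point difference in 30 day mortality could be detected at the 5% significance level with 80% power:

```python
import math

def patients_per_group(p1: float, p2: float,
                       z_alpha: float = 1.96, z_beta: float = 0.8416) -> int:
    """Patients needed per group to detect p1 versus p2 at 5% significance
    (two sided) with 80% power, by the standard two proportion formula."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Outcome measure: 30 day mortality of 12% at one hospital, 10% at the other.
print(patients_per_group(0.12, 0.10))  # 3839 patients per hospital
```

A district hospital admitting a few hundred such patients a year would need most of a decade to accumulate that many, by which time staffing, treatments, and case mix would all have moved on.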
Given the problems inherent in using data on outcomes, can information on the processes of patient care be more helpful? The answer is yes. Knowing that only 30% of eligible patients receive thrombolysis within six hours is immediately interpretable (could do better) and indicates the remedial action that should be taken (greater and earlier use of thrombolysis).
The power of process measures to detect failures in quality lies in their ability to overcome or sidestep many of the problems that beset outcomes data. The process of care (what is done to patients, where, when, and how) can be measured reliably, validly, and mostly without serious bias. Interpreting this information is less hampered by problems of case mix, so long as appropriate processes of care can be clearly defined for specific patient groups.8 Furthermore, the use of process measures identifies specific shortcomings, pointing the way towards what must be changed. What Mant and Hicks show is that small but significant departures from desired practice can be readily identified over a short time, as the sketch below illustrates.
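To make the contrast concrete, here is a minimal simulation in the spirit of Mant and Hicks's fictional hospitals; the admission numbers, treatment rates, and assumed effect of thrombolysis on mortality are all invented for illustration, not taken from their paper.

```python
import math
import random

# Two fictional hospitals, each admitting 500 eligible patients a year.
# Hospital A thrombolyses 80% of them, hospital B only 40%; thrombolysis
# is assumed to cut 30 day mortality from 12% to 9%. All figures invented.

random.seed(1)
N, REPS = 500, 2000

def significant(x1: int, x2: int, n: int) -> bool:
    """Two sided z test at the 5% level for a difference in proportions."""
    p1, p2, p = x1 / n, x2 / n, (x1 + x2) / (2 * n)
    se = math.sqrt(p * (1 - p) * 2 / n)
    return se > 0 and abs(p1 - p2) / se > 1.96

def one_year(treat_rate: float) -> tuple:
    """Simulate one year's admissions; return (patients treated, deaths)."""
    treated = deaths = 0
    for _ in range(N):
        t = random.random() < treat_rate
        treated += t
        deaths += random.random() < (0.09 if t else 0.12)
    return treated, deaths

process_hits = outcome_hits = 0
for _ in range(REPS):
    treated_a, deaths_a = one_year(0.80)
    treated_b, deaths_b = one_year(0.40)
    process_hits += significant(treated_a, treated_b, N)
    outcome_hits += significant(deaths_a, deaths_b, N)

print(f"gap in thrombolysis use detected in {process_hits / REPS:.0%} of years")
print(f"gap in mortality detected in {outcome_hits / REPS:.0%} of years")
```

Under these assumptions the gap in thrombolysis use is flagged in virtually every simulated year, while the resulting gap in mortality is flagged in roughly one year in ten, little better than the 5% that chance alone would produce.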
Despite the attractions of measuring process, a word of caution is needed. Measures of process are valuable indicators of quality only when the processes in question are well supported by research evidence (as exists for thrombolysis). Much of health care lacks this support. However, initiatives such as the Cochrane Collaboration and the NHS Centre for Reviews and Dissemination, together with the proliferation of evidence based clinical guidelines, will provide the best possible information on how to achieve good outcomes. They will establish which processes work. Comparison of current practice with best practice as identified by the research evidence thus provides a sensitive, valid, and purposeful assessment of the quality of care.
In the rush to embrace outcomes, examination of the process of care should not be neglected. Process measures that are buttressed by high quality research provide an easily interpreted guide to quality.