Auditing audits: use and development of the Oxfordshire Medical Audit Advisory Group rating system
BMJ 1994; 309 doi: https://doi.org/10.1136/bmj.309.6953.513 (Published 20 August 1994) Cite this as: BMJ 1994;309:513
- M Lawrence,
- K Griew,
- J Derry,
- J Anderson,
- J Humphreys
- Oxfordshire Medical Audit Advisory Group, University Department of Public Health and Primary Care, Gibson Building, Radcliffe Infirmary, Oxford OX2 6HE
- Correspondence to: Dr Lawrence.
- Accepted 8 June 1994
Objectives : To assess the value of the Oxfordshire Medical Audit Advisory Group rating system in monitoring and stimulating audit activity, and to implement a development of the system.
Design : Use of the rating system for assessment of practice audits on three annual visits in Oxfordshire; development and use of an “audit grid” as a refinement of the system; questionnaire to all medical audit advisory groups in England and Wales.
Setting : All 85 general practices in Oxfordshire; all 95 medical audit advisory groups in England and Wales.
Main outcome measures : Level of practices' audit activity as measured by rating scale and grid. Use of the scale nationally, together with its strengths and weaknesses as perceived by chairs of medical audit advisory groups.
Results : After one year Oxfordshire practices more than attained the target standards set in 1991, with 72% doing audit involving setting target standards or implementing change; by 1993 this had risen to 78%. Most audits were confined to chronic disease management, preventive care, and appointments. 38 of 92 medical audit advisory groups used the Oxfordshire group's rating scale. Its main weaknesses were insensitivity in assessing the quality of audits and failure to measure team involvement.
Conclusions : The rating system is effective educationally in helping practices improve and summatively in providing feedback to family health services authorities. The grid revealed weaknesses in the breadth of audit topics studied.
Implications and action : Oxfordshire practices achieved targets set for 1991-2 but need to broaden the scope of their audits and the topics studied. The advisory group's targets for 1994-5 are for 50% of practices to achieve an audit in each of the areas of clinical care, access, communication, and professional values and for 80% of audits to include setting targets or implementing change.
Audit in primary care was given a major impetus with the setting up of medical audit advisory groups in 1990.1 Medical audit advisory groups have an obligation to develop audit within all the practices in their area and to report the general results of audit to the family health services authority.
The Oxfordshire Medical Audit Advisory Group developed a rating system (fig 1) for assessing audits with the intention of using it both as an educational tool for practices (by assessing and feeding back the completeness of their audits) and as a summative measure for use in professional accountability and in reporting to the family health services authority. The rating system was described in the BMJ in 1991 together with the practices' performance at the end of the first year of use, April 1991.2 At the same time we declared our target standards for the number and completeness of Oxfordshire practices' audits to be achieved by April 1992.
Over the succeeding two years we became increasingly aware of the limitations of the rating system. Firstly, the system introduced several jargon terms - full, partial, and potential audits - which did not aid general understanding. We have found it clearer and more helpful to refer to audits which involve setting targets or implementing change (previously called full or partial audits); audits which involve only collecting and analysing data (previously, potential audits); and no audit.
Secondly, although the rating system clearly evaluates the extent to which any cycle of audit has been completed, it does not say anything about the range of topics chosen by a practice or whether the topics relate to the overall quality of care provided for patients. A practice can score highly on the basis of a simple audit on a fairly minor topic - say, the monitoring of lithium levels. Finally, the system gives no indication of team involvement - one enthusiast can score points for the whole practice.
At the same time we became aware that the Oxfordshire rating system was being widely used by other medical audit advisory groups. It was therefore decided to continue to use the rating system for assessing audits in Oxfordshire practices while developing and testing a refinement of the system; and also to ascertain the extent of its adoption nationwide and to determine its strengths and weaknesses as perceived by the chairs of the groups.
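The revised three-level classification described above can be sketched in code. This is an illustrative sketch only; the dictionary keys are assumed names for the stages of the audit cycle, not part of the published scale.

```python
# Sketch of the revised three-level classification of practice audits.
# The dictionary keys (targets_set, changes_made, data_collected_and_analysed)
# are assumed names for illustration, not part of the published scale.

def classify_audit(audit):
    """Return an audit's level on the revised three-level scale."""
    if audit.get("targets_set") or audit.get("changes_made"):
        # Previously called "full" or "partial" audits
        return "setting targets or implementing change"
    if audit.get("data_collected_and_analysed"):
        # Previously called "potential" audits
        return "collecting and analysing data only"
    return "no audit"

# Example: an audit that analysed data and set target standards
print(classify_audit({"data_collected_and_analysed": True, "targets_set": True}))
# -> setting targets or implementing change
```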
Progress of audit in Oxfordshire as determined by the rating system
The four coordinators - each a general practitioner with two sessions a week - divided the 85 Oxfordshire practices among them and visited them at least once a year. At each visit details of the practice's audits were collected, and after each visit a summary of audit activity was fed back to the practice together with suggestions for improvement.2 Practices which required extra help were visited more often either by the coordinators or by the non-medical facilitator. In 1990-1, 79 out of 85 practices were visited; in 1991-2, 75; in 1992-3, 80. Only two practices have consistently refused to be visited.
Development of the audit grid
The whole medical audit advisory group held a day conference in April 1992, during which a more detailed tool was developed: the Oxfordshire Medical Audit Advisory Group audit grid. Members noted that the rating system lacked content validity in as much as it assessed the process of an audit but not the extent to which a practice's audits reflected the quality of patient care. We agreed the main areas in which a practice would need to undertake audit so as to redress this problem. The areas (box 1) fell into the four categories which were identified by the “What Sort of Doctor” working party of the Royal College of General Practitioners as encompassing the “multiple dimensions of a general practitioner's work.”3 The validation of this document has been established by its use as the basis of the two most widely recognised procedures for assessing the quality of a practice: assessment visits for vocational trainers4 and “fellowship by assessment” for the Royal College of General Practitioners.5
Box 1 - Areas of performance for audit
Clinical care:
Acute, chronic, and preventive care
Access:
Out of hours
Communication:
With patients (consultations)
With practice (team work)
Outside practice (interface)
Professional values:
Records
We then considered items which characterise the nature and quality of any given audit. They were listed under six more headings (box 2). The Oxfordshire rating system assesses compliance with the well established audit cycle.6,7 The value of distinguishing audits of structure, process, and outcome has been established by Donabedian,8 and major emphasis is being placed on outcomes by the NHS Executive. Patients' views and teamwork are key elements of continuous quality improvement, a methodology increasingly being adapted to health services and encouraged by health authorities.9 That audit should cover the interface between primary and secondary care is recognised as care in the community becomes more closely integrated with hospital services.10
Box 2 - Items that characterise the nature and quality of an audit
Oxford rating system:
Was data collected and analysed?
Were targets set?
Were changes made?
Was the cycle repeated?
Type of audit:
Was the audit numerical or descriptive, or both?
Structure, process, and outcome:
Was the audit only of structure?
Was process audited?
Did it include measures of outcome or intermediate outcome?
Patients' views:
Were they included?
Team involvement:
What combination of doctors, nurses, and clerical staff was involved?
Cross boundary involvement:
Did the audit involve the interface with secondary care or community services?
The audit grid was formed by using the areas for audit as the rows and the characteristics included in the audit as the columns. A grid was completed for each practice. For any area in which the practice had done an audit, boxes were filled with an icon or shading where the audit included the relevant characteristics (see example, fig 2). Each practice's grid could then be returned to the practice, highlighting areas that had not been the subject of audit. A similar grid produced for the whole county showed the percentage of practices satisfying each characteristic for each area of audit.
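A minimal sketch of how such a grid might be represented and aggregated, assuming each practice's grid maps an audit area to the set of characteristics its audit satisfied. The area and characteristic names here are assumptions for illustration, not the published grid headings.

```python
# Illustrative sketch of the audit grid: rows are audit areas, columns are
# audit characteristics. Each practice's grid maps an area to the set of
# characteristics satisfied; the county figure for a cell is the percentage
# of practices whose audit in that area includes that characteristic.
# (Area and characteristic names are assumed for illustration.)

def county_percentage(practice_grids, area, characteristic):
    """Percentage of practices whose audit in `area` shows `characteristic`."""
    hits = sum(1 for grid in practice_grids
               if characteristic in grid.get(area, set()))
    return round(100 * hits / len(practice_grids))

grids = [
    {"chronic illness": {"data analysed", "targets set"}},
    {"chronic illness": {"data analysed"}, "preventive care": {"data analysed"}},
    {"appointments": {"data analysed"}},
    {"chronic illness": {"data analysed", "changes made"}},
]
print(county_percentage(grids, "chronic illness", "data analysed"))  # 3 of 4 -> 75
```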
Use of the rating system by other groups
In September 1993 a questionnaire was sent to the chairs of all other medical audit advisory groups asking whether they had used the Oxfordshire rating system. If so, two open questions were asked: firstly, how had they used it, and, secondly, what strengths and drawbacks had they found. A written reminder was sent to those who did not reply within a month, and those who did not then reply were telephoned.
Progress of audit as determined by the rating system
Table I summarises the rating of the best audit undertaken within the previous year in each of the 85 practices as recorded by the coordinators on three successive yearly visits. In the first year, 1990-1, 36 (42%) practices had completed an audit involving setting targets or implementing change, and 37 (44%) had declined to be visited, had done no audit, or were only planning. The audit group set itself the target of raising the first figure to over 50% and reducing the second to under 25%. After one year this target had been exceeded, and by 1992-3, 79% of practices were achieving audits involving setting targets or implementing change and only 13% were planning, doing no audit, or declining to be visited.
Use of the rating system by other audit groups
Of the 95 medical audit advisory groups in England and Wales, information was received from 92. A response could not be elicited from three either by post or by telephone. Of these 92, 38 (41%) were using the rating system: three were assessing some projects only, two used it as a postal questionnaire, and the remainder used it on visits to all practices. Nine had adapted the rating scale; two that had used it were intending to develop their own; and the remaining 27 were continuing to use it as published. All the groups reported that they found it helpful except the two who used it as a questionnaire (one had a poor response rate and the other found the system insensitive in distinguishing levels of audit and confusing to general practitioners).
The main reported advantages of the rating system were its use in educating general practitioners to see where deficiencies in their audit technique lay; simplicity and clarity in reporting to family health services authorities; and usefulness in comparing activity between practices and over time. The main drawbacks were perceived to be that the system does not reflect teamwork, does not provide a distinction between simple audits and more extensive ones, and is rather insensitive in detecting improvement once audits get past the basic level.
Assessment of Oxfordshire practice audits with the audit grid
The first three columns of the audit grid give a measure of the audit activity in the county (table II). Most audits fell into the categories of chronic illness (75% of practices doing audits that at least included collecting and analysing data), preventive care (44%), prescribing (26%), and appointments (44%). Twenty per cent of practices had conducted an audit of workload. All other audits occurred in less than 10% of practices. The absence of any recorded audits of educational activity reflects the fact that audit coordinators did not ask specifically about appraisals, an activity increasingly conducted in practices but not usually regarded as audit.
The detailed characteristics of audits are shown in table III, which includes areas in which over 10% of practices were at least collecting data. In these areas a maximum of 49% of practices had target standards set, 41% were implementing change, 35% were including patients' views, and 29% were involving the interface of primary and secondary care. Nursing or clerical staff were involved in at least 50% of audits other than prescribing. Almost all audits concerned process, but 79%, 59%, and 30% of audits in acute, chronic, and preventive illness, respectively, included outcome measures.
The Oxford rating system for auditing audits was developed in 1990 during the first year of the existence of medical audit advisory groups and used to assess the number and completeness of audits taking place during that year. It was shown that even in Oxfordshire, usually regarded as a district with high standards of primary care, a third of practices were doing no audit beyond collecting data demanded by the family health services authority, and of those who were auditing only 40% were setting targets or implementing change.2
Over the succeeding two years the proportion of practices undertaking audit has risen to almost 80%. The rating system has been helpful both to show practices the deficiencies in their previous audits and to show the improvement to the family health services authority and other bodies. Many other audit groups have seen these advantages and used the same system. Indeed, since the only dissemination was by publication in the BMJ and presentation at occasional conferences, a 41% uptake was remarkably high.
We have, however, been forced to ask whether more and better audits imply improved quality of care - and we have to conclude that this is not necessarily the case. High quality care is accepted to require effectiveness, efficiency, acceptability, access, equity, and relevance,11 and it is also generally agreed that this cannot be delivered without good teamwork.12 But a practice could score highly without considering patients in terms of access, acceptability, or equity; without an adequate range of topics for relevance; and without teamwork in either the delivery or the audit of care. These concerns were reflected in the national survey of medical audit advisory groups, in which the main drawbacks cited were the rating system's failure to assess the breadth of care and its failure to consider teamworking. The design of the audit grid addressed these issues of content validity in particular.
In 1992-3, although 80% of practices in Oxfordshire scored well on the rating system, use of the grid identified many deficiencies. Although 75% of practices were reviewing chronic care and about half were reviewing preventive care (topics attracting payment under the health promotion rules), only 16% were looking at acute care - suggesting a neglect of critical incident analysis. Almost half of all practices reviewed appointments, but few reviewed telephone use or out of hours care, both important items for patients. There was little formal evaluation of communication. Evaluating customers' needs is the cornerstone of quality improvement, but few practices were even assessing patients' satisfaction. And there was little evidence that cross boundary cooperation is regarded as an important element of quality, with few audits involving secondary care or even associated services in the community.
Even for those practices undertaking audit - fewer than 50% in each area of care except chronic disease management - we found that, after three years of audit, setting target standards and implementing change after audit were still the exception rather than the rule. On the positive side, the number of clinical audits involving outcomes was impressive - although almost no audits of access or communication included outcomes. Practices' teams are active, with non-medical staff involved in over 50% of all categories except prescribing; and in areas which are suitable for repeated review for long term improvement - such as prevention, care in chronic disease, and access - nurses and clerical staff were involved by over 80% of practices, which seems entirely appropriate.
Modern quality improvement methods suggest that practices should be directing their efforts by developing practice strategies, setting objectives, and identifying problems which need to be solved so as to achieve these objectives.13,14 Audit then becomes one part of quality improvement, a tool for monitoring performance and improvement with regard to the stated objectives. Current health service policy is inducing practices to set up such programmes, not least because family health services authorities are increasingly requiring them so they can assess practices for reimbursement.
Until such forward planning is universal it is helpful to have a tool for evaluating current activity to help practices identify gaps. By highlighting areas where audit is not being used the grid is itself an aid to quality. In the longer term it will remain useful for continuing evaluation. Without this planning and direction, audit is liable to be carried out on topics for which methods and data are easily available, irrespective of whether they are the most important topics for improving patient care.
One problem repeatedly encountered by audit facilitators is practices' lack of time for audit in the face of all the demands of the general practitioner contract. Several of the areas where we have found audit not being undertaken - often areas that involve patients or other disciplines - are complicated and time consuming to review. If these areas are to be included then practices need educating to see the deficiency and also incentives to undertake the work. With the new health promotion rules, audits of chronic disease and preventive care will surely rise. Unless careful attention is given to encouraging practices to look at other areas of care, the neglected areas will be considered even less often than now, and the efforts of primary health care teams will become more unbalanced with regard to the components of quality care.
In 1991 the Oxfordshire Medical Audit Advisory Group published its targets for audit in Oxfordshire practices. These were more than attained by April 1992. A principle of audit is that at each cycle the target standards should be reconsidered and if necessary made more demanding. The general aim of the Oxfordshire group is to encourage practices to undertake strategic planning and, whether or not they do that, to undertake a wider range of audits using broader methods and involving more staff and patients. As each practice achieves this, the team will see more of the boxes in the audit grid completed. Specific targets for the group for 1994-5 are: to maintain at 80% the prevalence of practices doing audit that involves setting targets or implementing change, and to achieve 50% of practices doing an audit in each of the four categories of clinical care, access, communication, and professional values.