Health service accreditation: report of a pilot programme for community hospitals
BMJ 1995; 310 doi: https://doi.org/10.1136/bmj.310.6982.781 (Published 25 March 1995) Cite this as: BMJ 1995;310:781
- a CASPE Research, London SE1 1LL
- b Taunton and Somerset NHS Trust, Musgrove Park Hospital, Taunton, Somerset TA1 5DA
- Correspondence to: Dr Shaw.
- Accepted 7 February 1995
Voluntary accreditation in the United Kingdom is being used by health care providers to improve and market their services and by commissioners to define and monitor service contracts. In a three year pilot scheme in the south west of England, 43 out of 57 eligible community hospitals volunteered to be surveyed; 37 of them were ultimately accredited for up to two years by the hospital accreditation programme. The main reasons for non-accreditation related to safety, clinical records, and medical organisation. Follow up visits in 10 hospitals showed that, overall, 69% of recommendations were implemented. An independent survey of participating hospitals showed the perceived benefits to include team building, review of operational policies, improvement of data systems, and the generation of local prestige. Purchasers are increasingly influenced by accreditation status but are mostly unwilling to finance the process directly. None the less, the concept may become an important factor moderating the quality of service in the new NHS.
The concept of accrediting hospitals as suitable environments for training was developed by the American College of Surgeons from 1917, and accreditation became a national yardstick for the organisation of hospitals for many other purposes. Accreditation has subsequently been adapted and adopted in Canada and Australia and, particularly over the past five years, in many other countries, including the United Kingdom.
All accreditation systems have explicit standards for organisation against which the participating hospital assesses itself before a structured visit by outside surveyors, who submit a written report to the hospital with commendations and recommendations for development before a follow up survey.1 Accreditation may be awarded for a fixed term or may be withheld by an independent assessment board if the hospital does not meet a defined threshold of standards. Functioning national schemes in the United Kingdom include the King's Fund organisational audit programme2 and Clinical Pathology Accreditation (UK),3 but programmes in development include services for diagnostic radiology, palliative care, and health records.
We describe the origin, operation, and impact of the hospital accreditation programme, which was set up in 1990 as a pilot scheme to assess and develop community hospitals in the South Western region. This has been available throughout the United Kingdom since April 1993. The aims of the three year pilot study were to enhance the effectiveness of community hospitals, to spread good organisational practices, and to test the principles of health service accreditation in the United Kingdom with a particular model. We describe the model and discuss some of the emergent issues that may be generally relevant to quality management.
Administration—The programme was set up as a three year pilot study funded by the South Western Regional Health Authority in April 1990 on the recommendation of a working party on the future of community hospitals.4 Two staff were appointed to the programme and were based at the University of Bristol.
Board—An independent board was set up of nominees from the royal colleges and other national and regional bodies (clinical, managerial, and lay) (box 1); this board steered the programme and assessed reports on the hospitals that volunteered to join.
Box 1–Composition of the board, 1990-3
Association of General Practitioner Community Hospitals
Community health councils
English National Board for Nursing
Independent Health Care Association
Institute of Health Services Management
National Association of Health Authorities and Trusts
Royal College of Anaesthetists
Royal College of General Practitioners
Royal College of Physicians
Royal College of Surgeons
Standards—The initial standards used were published in 1988 in Towards Good Practice in Small Hospitals by the National Association of Health Authorities5 (see boxes 2 and 3). They were based on the systematic observation of organisational practice in 54 small hospitals and had been broadly endorsed by 17 national bodies, including seven royal colleges. This research into empirical standards was supported by the Nuffield Provincial Hospitals Trust and was completed in 1985.
Box 2–Where accreditation standards applied
Facilities and equipment
Accident and emergency medicine
Surgery and anaesthesia
Pastoral and voluntary
Box 3–What the standards measured
Purpose and scope of service
Equipment and facilities
Personnel and training
Hospitals—The programme was offered to the 57 community hospitals in the region. None of these hospitals had resident medical staff, and all had fewer than 50 beds, most of which were allocated to general practitioners.
Self assessment—The hospitals completed a self assessment questionnaire based on the published standards during a six month preparatory phase. This included discussing, defining, and revising policies and collating existing reports and statistics.
Survey visit—A team of at least two surveyors (one clinician or manager and one general practitioner) visited each hospital for one day. Each surveyor was based at a similar but distant hospital and had three days of theoretical and practical training in the standards, survey process, and report writing. The team's proposed report was discussed with senior staff before the team left the hospital, was completed in draft the same night, and was checked in the central office for internal consistency and for compliance with the published standards.
Board review—A copy of each report was sent to each member of the board before the next quarterly meeting for evaluation of the surveyors' recommendations. Hospitals that were considered to be operating safely (with respect to patients, staff, and visitors) and within the law were eligible for accreditation. At least one of the surveyors from the relevant team attended this discussion, both to assist the board by complementing the written report and to learn the context of the process.
Implementing change—Final reports were sent from the board to the manager of the participating hospital. They consisted of fewer than 10 pages listing commendations of good practice and recommendations for change. Hospital managers were encouraged to discuss the reports with staff, trust managers, and purchasers. The programme did not provide copies other than to the manager of the hospital visited. Worthy innovations reported to the board as well as warnings (particularly about safety of patients and staff) were published quarterly after each board meeting and distributed to all participating hospitals to disseminate good practice. In addition, staff were invited to an annual meeting to share their experience and to offer comments for development of the programme. The outcome of the pilot scheme was assessed by hospital uptake rates and the proportion of recommendations that were implemented. An independent survey of the opinions of a random sample of 17 participants (stratified according to accreditation award) was carried out by the University of Bath.6
UPTAKE AND COMPLIANCE
By the end of March 1993, 56 surveys had been completed by 28 surveyors. Forty three hospitals (75% of the 57 eligible) had volunteered to take part, and 37 were ultimately accredited. Among the 10 hospitals visited a second time, between 29% and 89% of recommendations had been implemented—overall, 89 out of 131 recommendations (69%). In most hospitals the unfulfilled recommendations related to managerial or structural issues that were difficult to resolve within 12 months.
Some practices were commended as models to other hospitals, particularly for innovative nursing, community based rehabilitation services, and well managed surgical and anaesthetic facilities (table I). Most of the reasons for non-accreditation related to safety; the board refused accreditation to any hospital which did not comply with normal good practice or legal requirements in relation to fire, radiation, hygiene, and the safety of patients and staff. Hospitals were alerted to particularly dangerous practices (box 4) through a quarterly bulletin, and the frequency of hazards found decreased in later visits to other hospitals. Recurrent issues affecting medical staff included personal contracts, availability, rotas in accident and emergency departments, record keeping, audit, input to management, and liaison by general practitioners with consultants responsible for services based in the district general hospital, particularly anaesthesia, radiology, and accident and emergency (table II). Much of the data on case mix of potential use in contracts, resource management, and audit were based on the problem causing the admission rather than on the final diagnoses, complications, and coexistent illnesses—for example, the leading diagnosis in many hospitals was recorded as “gone off legs,” a common label for unexplained debility which confines patients to bed.
Box 4–Some hazards identified
Beds blocking fire exits
Inflammable materials stored in boiler rooms
Gas cylinders unsecured in patient areas
Glutaraldehyde decanted into a spray bottle
Blood bank refrigerator at 8°C
VIEWS OF HOSPITALS
Most hospitals reported the preparation phase as time consuming but rewarding and the survey visit as nerve racking but enjoyable. Hayes and Scrivens summarised the three main benefits of the programme as reported by managers:
Development of greater team spirit within the hospitals to achieve the standards
Achievement of the standards themselves
Greater networking between the managers of community hospitals.
The challenge of an external assessment proved a powerful motive for reviewing (or even discovering) policies, reports, and data; many managers were surprised at the extent of information (such as radiation safety reports) and data (such as patient administration system reports) which related to their hospital but of which they were unaware. The explicit identification of issues such as data accuracy, drug administration, admission policies, and discharge procedures in a survey report was often the stimulus for more systematic participation of doctors in managing the hospital; several managers reported resurrecting staff liaison meetings as a direct result of the accreditation process.
Although failure to be accredited caused dismay among staff and managers, this was generally replaced by determination to succeed next time and by recognition, at least in retrospect, that the initial disappointment had been an essential and justified stimulus for the change that followed.
COSTS
The original research, drafting, and consultation on the standards was funded by a grant of £27000 from the Nuffield Provincial Hospitals Trust in 1984. The true operating costs of the programme from 1990 were initially understated, many of the charges for overheads being absorbed in the general running of the university department. When fully costed as a free standing programme at the end of 1993-4, expenditure was £47000, including 2.2 whole time equivalent staff, training of surveyors, board meetings, and other non-pay items (excluding development). Surveyors and board members gave their time free of charge; only general practitioner locums were reimbursed. The average cost of the 20 surveys that year was thus £2363. A fee of £500 (£600 in 1992-3) and the direct expenses (surveyors' travel expenses and accommodation and locums' fees), which averaged £420, were charged to hospitals. The balance was subsidised by the regional health authority during the development phase.
POPULARITY AND CREDIBILITY
The concept of voluntary accreditation by an independent agency was popular among provider units (evidenced by uptake) and effective (evidenced by adoption of recommended changes). But purchasers in the south west, though supportive of the programme, had not sought to fund it directly; this was associated with criticism that the programme focused on structure and process rather than access, appropriateness, and outcomes of care. If, however, most providers are accredited voluntarily then the pressure on the remainder to be accredited will probably increase. The willingness of hospitals to submit to external scrutiny was associated with enthusiasm for pass or fail assessments and, notably, for having a certificate of achievement to be prominently displayed in the hospital. This was reinforced as hospitals within the region renewed their participation, and they were joined by hospitals from outside the region when the programme was offered throughout the United Kingdom from April 1993. Since then 18 hospitals have joined, mostly in the midlands and Wales and including two in Scotland.
Most hospitals showed a combination of strengths and weaknesses—for example, some that were not accredited had some innovative and commendable features. It is not therefore realistic to summarise them as good or otherwise, but there is a degree of noncompliance with standards beyond which members of the board, both individually and corporately, are unwilling to endorse the hospital as accredited.
The credibility of the programme rests, firstly, on the published standards and, secondly, on the consistency of the evaluation of individual hospitals against them. Hospitals readily identified with the standards, most of them having contributed to the standards originally. Innovation in technology, legislation, and management, however, rapidly outdates standards, suggesting a need to revise them formally at least every three years and in the interim (for example, through a quarterly bulletin). Also, to maintain quality improvement among the accredited hospitals the standards need to become increasingly stringent.
In effect, the criterion for accreditation is that the survey team, on the basis of the self assessment questionnaire and their own observations, considers the hospital to comply sufficiently with the standards, and that independent assessments of the report by the permanent staff of the hospital accreditation programme and by the board endorse this. Ideally, published criteria would be related to clinical outcomes, and be robust, objective, self evident, and significant; in reality, a snapshot assessment focuses on measurable features of structure and process as proxies for outcome. If, however, these become the sole criteria for accreditation, attention may be focused on them at the expense of outcomes. So, although systematic, the final decision on accreditation remains qualitative—for example, in weighing excellent work done in poor facilities.
At the beginning of the pilot phase, all efforts to measure the success or failure of the programme led back to the response of the participants. It was therefore reassuring that hospitals were willing to enlist, to respond to recommendations, and to stay with the programme. This evidence of perceived value was consistent with the independent survey, which identified organisational development as a prime outcome.6 Similar early benefits of accreditation programmes have been noted in other countries, notably South Africa and Australia.
Charges to participating hospitals were kept low by the subsidy of the regional health authority and by the donation of time by surveyors and board members. (Employers and partners in general practice generally regard this as an educational activity of benefit to their own organisation.) The larger cost to hospitals is the staff time spent in preparation for accreditation, which has not been quantified.
For such a programme to be self financing, it needs to be operated as part of another funded activity to share costs, to charge substantially higher fees, or to provide a larger volume of surveys (which is likely to be unsustainable within a single region, especially when the programme becomes embedded and the frequency of revisits is reduced). This was a cogent argument for expanding the programme both functionally—for example, to include consultant beds for elderly patients—and geographically at the end of the pilot phase.
MEETING THE OBJECTIVES OF THE PILOT SCHEME
Two aims of the scheme were to enhance the effectiveness of community hospitals and to spread good organisational practices. By identifying practices among hospitals that it deemed to be good, disseminating them, and encouraging their wider use, the project achieved the second goal. It also made hospitals more effective in showing their strengths, presenting themselves to the public and purchasers, and even adopting new practices, but there is no evidence of benefit in terms of clinical outcomes. Indeed, failure to prove a causal link between compliance with process standards and improved clinical results has dogged the development of health service accreditation throughout the world; but if that argues against accreditation it also argues against clinical audit and continuing education.
The third objective was to test the principles of accreditation in the United Kingdom. Outside specialty training (where the term originated, describing the recognition of suitable training posts by the American College of Surgeons), neither the term nor the principles had been applied to health services in Britain. Until the early 1990s there had been consistent opposition from health ministers on the grounds that accreditation would be too prescriptive and expensive and that it undermined the right of managers to manage—despite the support it received from several national bodies before Working for Patients, which excluded it from government policy.7 Although this community hospital programme was championed by provider units before the purchaser-provider split, its potential value to commissioners in purchasing decisions and in quality monitoring has been an important factor in its development. Indeed, the growth of this programme and of others, both generic (such as the King's Fund organisational audit programme) and specialty based (such as Clinical Pathology Accreditation, UK), shows a perceived need and a willing market to adopt it. Given that such programmes offer an independent mechanism for monitoring, they are consistent with current government policy to deregulate other sectors, such as voluntary and private health care.
Given the response so far, health service accreditation is unlikely to disappear. More probably, the coming debate will be on how many such systems are appropriate, how they will be validated, and whether they should themselves be regulated nationally. When the University of Bath completes its national review (commissioned by the Department of Health) of current programmes in the NHS these issues may become more prominent.
In the meantime, the market speaks for itself. The original community hospital programme is now being offered throughout the United Kingdom and is being emulated in South Africa as a pilot in secondary care.8 9 This, with other accreditation systems, offers a development tool to providers, an assessment tool to purchasers, and a visible measure of achievement to the public.