

Surgical wound infection as a performance indicator: agreement of common definitions of wound infection in 4773 patients

BMJ 2004; 329 doi: https://doi.org/10.1136/bmj.38232.646227.DE (Published 23 September 2004) Cite this as: BMJ 2004;329:720
  1. A P R Wilson, consultant microbiologist (peter.wilson{at}uclh.nhs.uk)1,
  2. C Gibbons, research fellow2,
  3. B C Reeves, senior lecturer in epidemiology2,
  4. B Hodgson, registered general nurse1,
  5. M Liu, physicist3,
  6. D Plummer, head of medical physics3,
  7. Z H Krukowski, consultant surgeon4,
  8. J Bruce, research fellow in epidemiology5,
  9. J Wilson, SSI surveillance programme leader6,
  10. A Pearson, manager and consultant epidemiologist7
  1. Department of Clinical Microbiology, University College London Hospitals, London WC1E 6DB
  2. Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London
  3. Department of Medical Physics, University College London Hospitals
  4. Department of Surgery, Medical School, Aberdeen
  5. Department of Public Health, Medical School, Aberdeen
  6. Nosocomial Infection Surveillance Unit, HPA Central Public Health Laboratory, London
  7. Health VFM Audit, National Audit Office, London
  Correspondence to: A P R Wilson
  • Accepted 4 August 2004

Abstract

Objective To assess the level of agreement between common definitions of wound infection that might be used as performance indicators.

Design Prospective observational study.

Setting London teaching hospital group receiving emergency cases as well as tertiary referrals.

Participants 4773 surgical patients staying in hospital at least two nights.

Main outcome measures Numbers of wound infections based on purulent discharge alone, on the Centers for Disease Control (CDC) definition of wound infection, on the nosocomial infection national surveillance scheme (NINSS) version of the CDC definition, and on the ASEPSIS scoring method.

Results 5804 surgical wounds were assessed during 5028 separate hospital admissions. The mean percentage of wounds classified as infected differed substantially with different definitions: 19.2% with the CDC definition (95% confidence interval 18.1% to 20.4%), 14.6% (13.6% to 15.6%) with the NINSS version, 12.3% (11.4% to 13.2%) with pus alone, and 6.8% (6.1% to 7.5%) with an ASEPSIS score > 20. The agreement between definitions with respect to individual wounds was poor. Wounds with pus were automatically defined as infected with the CDC, NINSS, and pus alone definitions, but only 39% (283/714) of these had ASEPSIS scores > 20.

Conclusions Small changes made to the CDC definition or even in its interpretation, as with the NINSS version, caused major variation in estimated percentage of wound infection. Substantial numbers of wounds were differently classified across the grades of infection. A single definition used consistently can show changes in percentage wound infection over time at a single centre, but differences in interpretation prevent comparison between different centres.

Introduction

Surgical site infections represent a substantial burden of disease for patients and health services. Patients with such infections experience substantial morbidity, pain, discomfort, inconvenience, and cost, and occasionally die. From the perspective of health services, patients with surgical site infections stay in hospital on average about twice as long as uninfected patients, and the cost of their total care is more than doubled; inpatient costs of surgical site infections alone were estimated at about £65m in England in 1995.1

The UK government is changing the way postoperative infections are monitored in the NHS. Surveillance of surgical site infection, still commonly referred to as wound infection, became mandatory for orthopaedics in April 2004, and this requirement will soon extend to other specialties.2 Feedback of infection data to surgeons has been shown to reduce infection rates.3 4 Given that the percentage of wounds classified as infected will probably be used as a performance indicator,5 it is vital that the new surveillance system allows reliable comparisons across NHS institutions and with overseas health institutions.

Although the UK Department of Health has consulted experts, it has given little guidance on the definition of surgical site infection to be used for surveillance in England, namely the nosocomial infection national surveillance scheme (NINSS) version of the definition set out by the Centers for Disease Control (CDC) in 1992.6 There has been little or no critical evaluation of either the original or the modified definition. Moreover, the version or interpretation of the definition used varies between hospitals and regions.7 8 Choosing an appropriate definition and ensuring that it is applied consistently are necessary conditions for valid comparison of observed rates of wound infection across hospitals.

Designers of a national surveillance system must judge the available definitions by their ability to identify infections that matter most to patients and to health services. The practicability of collecting the required information must also be considered, since laborious or complex definitions are less likely to be implemented consistently across hospitals.

We therefore compared agreement between four common definitions of surgical site infection—namely (a) the CDC 1992 definition, (b) the NINSS modification of the CDC definition, (c) the presence of pus, and (d) the ASEPSIS scoring method9—applied to the same series of surgical wounds. We also compared the percentage of infection based on the CDC definition and on the NINSS modification to investigate the potential effect of subjective CDC criteria and of variation between hospitals in data collection methods.

Participants and methods

Since May 2000, surgical wound surveillance has been conducted at University College London Hospitals. Cardiac, thoracic, orthopaedic, general, obstetric, gynaecological, urological, maxillofacial, plastic, and vascular surgical specialties have participated, each for at least six months in each year. Only patients staying in hospital for at least two nights are included. Information is collected on patients and their surgical wounds, allowing us to apply the different definitions of wound infection.6 7 9 10

Definitions of surgical site infection

The 1992 CDC definition requires observation of 16 wound or patient characteristics to classify infection and includes two subjective criteria, namely a surgeon's diagnosis of infection and the culture of micro-organisms from the wound.6 The US national nosocomial infections surveillance system (NNISS) recommends that the latter criterion be based only on positive cultures of fluid or tissue rather than wound swabs,6 8 but this interpretation does not seem to be applied generally.8 The English NINSS method modified the CDC definition to remove the need for a surgeon's diagnosis and required that pus cells be present to satisfy the criterion of micro-organisms cultured from the wound.7 Another definition simply requires the presence of pus, although some infections are then missed.10 ASEPSIS is a scoring method that assigns a numerical score related to the severity of wound infection, using objective criteria based on the appearance of the wound and the clinical consequences of the infection.8 9

For the purposes of comparison, we classified wounds with ASEPSIS scores > 20 as infected. ASEPSIS scores of 10-20 (“disturbance of healing”) are known to include some infections, but most reflect wound breakdown due to other causes.11 Moderate to severe infections score > 30. The CDC definition also describes the severity of infection, classifying infections as “none,” “superficial,” or “deep or organ space” (termed “deep” in this article). Both definitions purport to describe the importance of an infection with respect to the patient's morbidity and the likely clinical consequences.
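The banding just described can be summarised as a simple classification rule. The sketch below is purely illustrative: the function names, the exact band labels, and the handling of boundary values are assumptions made for this example, not part of the ASEPSIS method or of the surveillance system itself.

```python
def asepsis_band(score: float) -> str:
    """Map an ASEPSIS score to the severity bands referred to in this article.

    Bands as described above: 10-20 = disturbance of healing, > 20 = infected
    for the purposes of comparison, 31-40 = moderate, > 40 = severe.
    Boundary handling and labels are assumptions for illustration only.
    """
    if score < 10:
        return "satisfactory healing"
    if score <= 20:
        return "disturbance of healing"
    if score <= 30:
        return "wound infection"
    if score <= 40:
        return "moderate wound infection"
    return "severe wound infection"


def asepsis_infected(score: float) -> bool:
    # The comparison in this study counted a wound as infected when its
    # ASEPSIS score exceeded 20.
    return score > 20
```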

Data collection

Surveillance staff assessed patients every two or three days by direct observation, case note review, and questioning of the nurses caring for the patients. We contacted patients by post or telephone one to two months after their operations to complete a questionnaire designed to ascertain late infections. Thus, we followed up patients either until their wounds had healed without infection or until an infection was detected, but the precise duration of follow up varied depending on patients' length of stay in hospital and when they were contacted to ascertain late infections. We therefore classified wounds as infected or not and recorded the proportion of wounds classified as infected at any time during follow up.
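To illustrate the classification step described above, the sketch below collapses repeated assessments of the same wound into a single infected or not infected outcome, counting a wound as infected if any assessment during follow up met the definition in use. The field names and the use of pandas are assumptions made for this example; they do not reflect the study's actual database structure.

```python
import pandas as pd

# Hypothetical assessment records: one row per wound assessment.
assessments = pd.DataFrame({
    "wound_id": [101, 101, 101, 102, 102],
    "infected": [False, False, True, False, False],
})

# A wound counts as infected if any assessment during follow up
# classified it as infected under the definition being applied.
wound_outcome = assessments.groupby("wound_id")["infected"].any()

# Proportion of wounds classified as infected at any time during follow up.
print(wound_outcome)
print(f"Proportion infected: {wound_outcome.mean():.2f}")
```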

Statistical analysis

Information collected was entered into an Access database; microbiological results, demographic details, and some operative information were imported directly through interfaces with other hospital computer systems. We gave quarterly reports of wound infection to surgeons.

We exported the relational Access database to Stata version 8.2, with each observation representing one wound. Counts and percentages presented are of wounds unless otherwise indicated. Confidence intervals for proportions of infection were adjusted for clustering on patient by means of the robust variance estimators from Stata's “svy” commands. We summarised agreement between the different definitions of infection by means of the κ statistic and the proportions of agreement for positive (Ppos) and negative (Pneg) diagnoses of infection.12 Confidence intervals for the agreement statistics were adjusted for clustering on patient and calculated by bootstrap methods; the values shown are bias corrected.
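For readers unfamiliar with these agreement statistics, the sketch below shows how κ and the proportions of positive and negative agreement can be computed from a 2 × 2 cross classification of two definitions. It is a plain, unclustered calculation with made-up counts; the analysis reported here additionally adjusted for clustering on patient and derived bootstrap confidence intervals in Stata, which is not reproduced in this sketch.

```python
def agreement_stats(a: int, b: int, c: int, d: int) -> dict:
    """Agreement between two binary classifications of wound infection.

    a = infected by both definitions, b = infected by the first only,
    c = infected by the second only, d = infected by neither.
    Returns Cohen's kappa and the proportions of positive (Ppos) and
    negative (Pneg) agreement.
    """
    n = a + b + c + d
    p_observed = (a + d) / n
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (p_observed - p_expected) / (1 - p_expected)
    p_pos = 2 * a / (2 * a + b + c)  # agreement on positive diagnoses
    p_neg = 2 * d / (2 * d + b + c)  # agreement on negative diagnoses
    return {"kappa": kappa, "Ppos": p_pos, "Pneg": p_neg}


# Purely illustrative counts, not the study's actual 2 x 2 table.
print(agreement_stats(a=300, b=500, c=100, d=4000))
```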

Results

A total of 5804 surgical wounds in 4773 patients were assessed during 5028 separate hospital admissions to all surgical specialties in the hospital group between May 2000 and July 2003 (table 1). The patients' median age was 53.5 years (interquartile range 37.5-69.6), and 2281 (48%) of the patients were female. The median hospital stay was 8 days (interquartile range 6-14), and the median duration of operation was 111 minutes (62-180).

Table 1

Characteristics of 4773 hospital inpatients who underwent surgery. Values are numbers (percentages) of patients unless stated otherwise


The mean percentage of wound infection differed substantially with the different definitions: 19.2% (95% confidence interval 18.1% to 20.4%) with the CDC definition, 14.6% (13.6% to 15.6%) with the NINSS version, 12.3% (11.4% to 13.2%) with pus alone, and 6.8% (6.1% to 7.5%) with an ASEPSIS score > 20. Table 2 shows the level of agreement between the ASEPSIS and CDC systems. When superficial infections (according to the CDC categories) were included, 13% (778) of all observed wounds received conflicting diagnoses and 6% were classified as infected by both definitions. When superficial infections were excluded, the two definitions estimated about the same overall percentage of infection (6.8% and 7.0% respectively), but there were almost twice as many conflicting infection diagnoses (n=371) as concordant ones (n=215).

Table 2

Comparison of crude rates of surgical site infection reported with Centers for Disease Control (CDC) 1992 definition and with ASEPSIS scoring method. Wounds were considered to be infected if they met the CDC criteria for either superficial or deep infection (top half of table) or if they met the criteria for deep infection only (bottom half of table). Values are numbers (percentages) of wounds, with 95% confidence intervals for percentages, adjusted for multiple wounds in the same patients


Wounds with pus were automatically diagnosed as infected by the CDC, NINSS, and pus alone definitions, but only 39% of these (283/714) had ASEPSIS scores > 20 (fig 1). For these wounds, the CDC scale also consistently diagnosed greater infection severity than did ASEPSIS. Most wounds with pus were classified by ASEPSIS as having a “disturbance of healing” (39%, 280/714) or as healing satisfactorily (21%, 151/714). Of these latter 151 wounds, 26% were classified as deep infections by the CDC definition.

Fig 1 Comparison of diagnoses of surgical site infection in 5804 wounds reported with Centers for Disease Control (CDC) 1992 definition and with ASEPSIS scoring method, for wounds with and without pus

In wounds without pus the relation of ASEPSIS and CDC scales was less consistent (fig 1). For example, 42% (177/421) of wounds classified only as “disturbance of healing” by ASEPSIS were classified as infected by the CDC definition, with 3.8% (16) classified as deep infections. Conversely, four of the six wounds classified as “severe wound infections” by ASEPSIS were classified as superficial by the CDC definition.

Figure 2 compares the classification of wounds with the CDC definition and with the NINSS version. Each category of infection showed its own pattern of discrepancies between the two definitions. For example, more than 30% (229/709) of wounds defined as superficially infected with the CDC definition were classified as not infected with NINSS. In the CDC “superficial infection” category, 94% (222/237) of the observed discrepancy was attributable to the NINSS modification of the CDC criterion relating to positive bacterial cultures. In the CDC “deep infection” category, the discrepancy observed was due to the exclusion of infections based solely on a surgeon's diagnosis.

Fig 2 Comparison of diagnoses of surgical site infection in 5804 wounds reported with the Centers for Disease Control (CDC) 1992 definition and with the nosocomial infection national surveillance scheme (NINSS) version of the CDC definition

Discussion

We compared four different definitions of surgical site infection and found that they varied widely in the estimated percentage of wounds infected. Comparing the 1992 CDC definition and the ASEPSIS scoring method, we found more than twice as many wounds were classified as infected by only one definition (n=778) as were classified as infected by both (n=366).

Potential limitations of this study

We made some assumptions in applying the definitions, but these are unlikely to explain the extent of the discrepancies observed. For the CDC definition, we often assumed the requirement for a surgeon's diagnosis of infection to be satisfied when a decision was made to start specific antibiotic treatment or to provide surgical treatment. For example, opening of a wound under general anaesthetic for drainage of pus was taken to indicate deep infection. In other studies, differences in results between CDC and other surveillance methods have been associated with lack of follow up, use of positive culture results, or clinical criteria.13 Although our study was conducted in a single group of hospitals, data came from multiple sites, many surgical specialties, and a large number of surgeons, so that most of the relevant sources of variation were represented.

Comparison of the different definitions

Both the CDC and ASEPSIS definitions describe the severity of wound infection: the CDC definition uses three categories (none, superficial, or deep), whereas ASEPSIS gives scores of up to 50 or more. The CDC definition consistently tended to rate wounds with pus as more severely infected than did ASEPSIS. It also tended to rate wounds without pus as more severely infected, but some wounds classified as moderately or severely infected by ASEPSIS (31-40 points and > 40 points respectively) were classified as not infected or only superficially infected by the CDC definition.

The criteria needed to satisfy the CDC definition are complicated, and some are subjective. They were modified in the English NINSS version of the CDC definition to make it practicable in a hospital setting.7 However, the equivalent Scottish surveillance system adopted the original CDC definition.8 Unfortunately, none of the methods of determining wound infection has been validated against outcomes that it would be expected to influence, such as length of stay of hospital inpatients or prescription of antibiotics after discharge.

Therefore, choosing an optimal definition is extremely difficult. A definition that is too sensitive will give rise to high estimates of infection rates and may cause public alarm. Moreover, if overall rates are influenced primarily by minor infections of relatively little consequence to patients and health services, the use of such a definition could mask important differences between institutions. In contrast, a definition that lacks sensitivity would not identify infections that are avoidable.

An agreed definition needs to capture all infections of clinical importance and be accepted by patients, doctors, and managers. Other health outcome measures have been psychometrically evaluated,14 but similar information is lacking for most definitions of wound infection.15 ASEPSIS in its original form was reported to be repeatable and related to outcome,11 16 but it has since been modified and reproducibility is currently being reassessed.

The absence of a clear pattern in the type of wounds classified as infected by CDC but not infected by NINSS supports the view that the CDC criteria responsible for the discrepancy are difficult to apply consistently. Small changes made to the CDC definition, or even to its interpretation, as with the NINSS version, cause substantial variation in the apparent percentage of infected wounds. This lack of robustness is disquieting, because the elaborate and labour intensive CDC definition would probably have to withstand similarly varied adaptations in any nationwide surveillance programme.8 Although the CDC definition has been adopted in many countries to allow international comparison, such faith in its comparability seems unwarranted.

Conclusions

Surveillance systems that monitor rates of wound infection and provide feedback to clinicians have been shown to contribute to quality improvement and are acknowledged as an important component of local programmes to prevent and control infection.3 4 10 17 Indeed, we recorded reductions in the infection rate in our own programme after giving feedback to surgeons. Provided the same definition is used consistently over time, the changes recorded should reflect real changes in the infection rate.18 However, using wound infection rates as a performance indicator to compare centres or countries is premature. Without a means of interpreting absolute rates, such comparisons will be compromised by discrepancies in the way that infections are defined. External agencies should not judge the quality of medical care on these measures.19 Comparative performance tables should be reported only once a scientifically based and agreed definition has been produced.

What is already known on this topic

Surgical site infections are a major cause of morbidity and increased costs in health care

The percentage of surgical wounds classified as infected is an obvious potential performance indicator, but common definitions have not been validated or compared

What this study adds

To assess the robustness of four common definitions of wound infection, we determined how well they agreed in classifying the same wounds

Classifications with different definitions disagreed for more than twice as many wounds as those for which they agreed, and small changes in the interpretation of a definition caused substantial variation in the percentage of wounds classified as infected

Although feedback of rates of wound infection within an institution using a consistent definition is effective in reducing infection rates, infection rates cannot be used as a performance indicator to compare hospitals without a more robust definition

We thank the members of the wound surveillance team (D Archibald, J Leach, and E O'Donnell).

Footnotes

  • Contributors APRW planned and supervised the study and drafted the paper. BH was in charge of the data collection. BCR and CG were responsible for statistical analysis and helped to write the paper. DP and ML constructed and maintained the database. ZHK and JB helped to write the paper and to apply the definitions of infection. JW and AP helped to write the paper and to apply the NINSS definition. APRW is guarantor for the study.

  • Funding Wound surveillance was supported by a start up grant from UCLH trustees for the first two years. Subsequently it has been funded directly by UCLH Trust. The analysis by CG reported in this paper has been funded by a grant from the National NHS R&D Research Methodology Programme. None of these funding sources contributed to, or influenced the interpretation of, the analyses reported.

  • Competing interests None declared.

  • Ethical approval This was not deemed necessary as the surveillance was part of the hospital audit programme.

References
