Incident reporting and patient safety
BMJ 2007;334 doi: https://doi.org/10.1136/bmj.39071.441609.80 (Published 11 January 2007) Cite this as: BMJ 2007;334:51
Charles Vincent, Smith and Nephew Foundation professor of clinical safety research (c.vincent@imperial.ac.uk)
Department of Biosurgery and Surgical Technology, Imperial College London, St Mary's Hospital, London W2 1NY
Incident reporting should ideally communicate all information relevant to patient safety. Local incident reporting systems in hospitals typically use an incident form that comprises basic clinical details and a brief description of the incident; there may be a list of designated incidents that should always be reported. Such systems are ideally used as part of an overall safety and quality improvement strategy, but in practice they may be dominated by managing claims and complaints.1
Specialty reporting systems2 and large scale systems, such as that of the UK National Patient Safety Agency (www.npsa.nhs.uk/), allow wider dissemination of lessons learnt and emphasise the need for parallel analysis and development of solutions. In this week's BMJ a case note review by Sari and colleagues finds that local reporting systems are poor at identifying patient safety incidents, particularly those involving harm,3 echoing the findings of similar studies.4 Does this mean that these reporting systems are of no value? That depends entirely on what reporting is for and what it is hoped to achieve.
The comparisons between health care and aviation are often overstated, but the experience of large scale reporting systems in aviation has proved instructive. Reflecting on 20 years of running NASA's aviation reporting system, Charles Billings made many thoughtful comments on past success and failure and the implications for health care.5 Billings stated that counting incidents is largely a waste of time, that reporting systems capture a fraction of the true number of incidents, and that the underlying population from which the reports are drawn is seldom known.
The monthly incident counts produced by healthcare risk managers are therefore largely uninformative, except as an index of staff's willingness to report: with incomplete capture and an unknown denominator, a count of reports cannot be converted into a rate of harm. The study by Sari and colleagues supports this view and is an important corrective to the widespread misunderstanding that the purpose of reporting is to provide an accurate reflection of harm to patients.
Much effort in health care is devoted to defining the incidents that should be reported and devising classification systems to capture them, in the hope that the classification itself will produce useful safety information. But Billings warns that “Too many people thought that incident reporting was the core and primary component of what was needed. These people thought that simply from the act of collecting incidents, solutions and fixes would be generated sui generis and that this would enhance safety.”5
Of course, defining some aspects of incidents is feasible and desirable. But Billings cautions that the real meaning of the incidents is apparent only in the narrative. To make real sense of an incident the story must be interpreted by someone who knows the work and knows the context. Thus, if healthcare incident reports are to be of real value they should be reviewed by clinicians and, ideally, by people who can tease out the human factors and organisational issues. Analysing a small number of incidents thoroughly is probably more valuable than a cursory overview of a large number of incidents.6 In health care we are learning slowly and painfully that safety is a tough, intractable problem that will take much more than reporting to resolve.1
With the wisdom of hindsight, given the enthusiasm for reporting and the vast amounts of money poured into it, it is hard to see why reporting was ever thought a substitute for systematic data collection. One reason, perhaps, is that aviation and other industries seemed to be using reporting to establish rates of serious incidents. In fact, aviation had already established the epidemiology of harm in the form of comprehensive databases of accidents and other systematically collected information. Reporting was always complementary to systematic data collection, providing warnings and additional safety information.
In health care we need systematic assessment of error and harm collected from a wider range of sources and, ideally, a move towards active surveillance of salient events. At local level this means a shift in emphasis from the analysis of individual cases to systematic measurement of known problems and, most importantly, to safety improvement programmes (www.health.org.uk/ourawards/service/index.cfm?id=41).7 At national level, whether in the United Kingdom or elsewhere, priority should be given to developing safety indicators and to measuring harm and other safety issues, a process already begun by the National Patient Safety Agency. Once the move to electronic medical records is achieved, records could be routinely screened to detect those with a high probability of an adverse event. If such routine monitoring could be developed, patient safety initiatives could become much more proactive, with adverse events and patient outcomes monitored in near real time, as sketched below.8
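By way of illustration only, the kind of routine screening envisaged here resembles a "trigger tool" applied to electronic records. The following is a minimal sketch in Python; the record fields, trigger rules, and thresholds are all hypothetical examples, not drawn from the editorial, the NPSA, or any validated instrument.

```python
# Hypothetical sketch of trigger-based screening of electronic records for
# possible adverse events. Field names, thresholds, and trigger rules are
# illustrative assumptions, not a validated clinical tool.

from dataclasses import dataclass, field


@dataclass
class Admission:
    patient_id: str
    drugs_given: set[str] = field(default_factory=set)
    lab_results: dict[str, float] = field(default_factory=dict)
    returned_to_theatre: bool = False


# Each trigger pairs a description with a rule that flags a record pattern
# associated with a possible adverse event.
TRIGGERS = [
    ("naloxone given (possible opiate overdose)",
     lambda a: "naloxone" in a.drugs_given),
    ("INR > 6 (possible over-anticoagulation)",
     lambda a: a.lab_results.get("inr", 0.0) > 6.0),
    ("unplanned return to theatre",
     lambda a: a.returned_to_theatre),
]


def screen(admissions):
    """Yield (patient_id, fired triggers) for records needing clinical review."""
    for admission in admissions:
        fired = [name for name, rule in TRIGGERS if rule(admission)]
        if fired:
            yield admission.patient_id, fired


if __name__ == "__main__":
    cohort = [
        Admission("A123", drugs_given={"morphine", "naloxone"}),
        Admission("B456", lab_results={"inr": 7.2}),
        Admission("C789"),
    ]
    for patient_id, fired in screen(cohort):
        print(patient_id, "->", "; ".join(fired))
```

Even with such a screen, each flagged record would still need review by a clinician who knows the work and the context, for the same reason Billings gives for incident narratives: automated monitoring concentrates attention rather than replacing judgment.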
Reporting will always be important, but it has been overemphasised as a way to enhance safety. Reporting systems can provide warnings, point to important problems, and provide some understanding of causes. They serve an important function in raising awareness and generating a culture of safety. However, a functioning reporting system should no longer be equated with meaningful patient safety activity. Organisations must move towards active measurement and improvement programmes on a scale commensurate with the human and economic costs of unsafe, poor quality care.
Footnotes
Competing interests: CV has acted as paid adviser to the National Patient Safety Agency.