Continuing to use APACHE II scores ensures consistency
BMJ 2000;321:383 doi: https://doi.org/10.1136/bmj.321.7257.383/a (Published 05 August 2000)
- William Konarzewski, clinical director of intensive care
EDITOR—Shann criticises the use of the APACHE II scoring system as an audit tool for intensive care performance.1 He has two main arguments. Firstly, he says that the system is outdated in that it reflects North American standards in the early 1980s. Secondly, he says that it can mask substandard intensive care performance by magnifying the risk of death in the poorer intensive care units, where patients will achieve higher scores through inadequate management during the first 24 hours after admission. He points out, too, that the collection of data is expensive and that the quality of data can vary between units.
These are undoubtedly fair points, but he overlooks one excellent reason why it is still appropriate to measure APACHE II scores: doing so enables an individual intensive care unit to monitor its performance against that in past years, provided it collects the APACHE II data consistently. After all, it is important for each unit to be able to answer what should be a simple question: are we performing better this year than we did 10 years ago? I doubt whether every unit can answer that question.
In the intensive care unit where I work we have noted a gradual trend over the past 10 years for patients both to die and to survive with steadily increasing APACHE II scores. We would cautiously argue that we are getting better at treating critically ill patients. Over the past five years, however, the apparent improvement in our performance seems to have reached a plateau, even though patients are in general managed more aggressively than before and are staying longer in the unit. This is disquieting, but it at least enables us to eschew complacency and ask ourselves some challenging questions in the hope of making improvements.
Would we have picked up this problem if we had changed our basic scoring system each time a new model came out? I think it unlikely.