From black bag to black box: will computers improve the NHS?

BMJ 1996;312:1371 (published 1 June 1996). doi: https://doi.org/10.1136/bmj.312.7043.1371a
- Liam J Donaldson
- Professor of applied epidemiology, Department of Epidemiology and Public Health, University of Newcastle upon Tyne, Newcastle upon Tyne NE2 4HH
Evidence must shape implementation
As information technology advances, the drumbeat of punditry is swelled by visions of social upheaval, personal enrichment and fulfilment, and even a new world order. The word most often used to describe this prospect is revolution. If our lives are to be transformed and our world remodelled, will our health services too be revolutionised by computers? Some would say they are already poised on the brink of fundamental change. Telemedicine and the Internet have already created a “virtual hospital” in Iowa and a “cyberspace hospital” in Singapore, and the time is said to be coming when it will be as easy to consult a physician across the Atlantic as across the corridor.1
Faced with such captivating images of the future of medical practice, it may seem churlish to stop and ask for evidence of the benefit of computers to health care. In this issue of the BMJ (p 1407), Lock highlights the irony of the NHS, constantly exhorted to strive for greater evidence based cost effectiveness, spending £220 million a year on information technology in hospitals, largely unsupported by evidence of benefit.2 It seems equally ironic that the output of this incompletely evaluated information technology is often the data on costs, quality, and outcome on which objective appraisal of health services themselves should be based.
Formal evaluation of major information technology investments in the public sector, including the NHS, has tended to focus on criticism of the implementation process3—factors such as lack of design planning, poor project management, inadequate staffing, and cost overruns—rather than on health care benefits or their absence. Other assessments have addressed the strategic approach taken by the NHS and have criticised lack of coordination, the diversity of systems, misapplication of equipment at local level, and a lack of properly qualified staff.4 Running alongside the more considered criticisms has been a stream of “shock-horror” allegations of money wasted on NHS computer schemes5 and statements by professional bodies distancing themselves from initiatives with which their members were cooperating.6 Many of the lessons of these early evaluations of the health service's approach to information technology have been addressed through the creation of an information strategy for the NHS.7 Despite this, the lack of evidence of specific health care benefits on which individual strategic decisions can be made remains a problem.
Why is it so difficult to document tangible benefits from the use of computers in health care? Firstly, there is the diversity of computer applications. Electronic patient records, telemedicine, knowledge bases, interactive educational packages, electronic networking, patient administration systems, decision support systems, clinic based and population based information systems, health care process re-engineering, diagnostic and treatment aids—the list is long but far from exhaustive. Secondly, the evaluation of information technology investments is hindered by the difficulty of specifying measurable benefits. Would a new computer system be judged effective if it simply freed clinicians' time, or would evidence be needed that the extra time was used to improve patient outcomes? In a systematic review of 30 studies of computers in primary care, Sullivan and Mitchell found that only three measured the impact on patient outcome.8 The studies provided evidence that computers lengthened consultation time slightly; patient initiated content and social content were reduced, while medical content was increased.
Various methods are available to evaluate information technology but they are not widely used. As a result, subjective methods (attitudes and opinions of users and system designers) tend to supplant established objective methods on the apparent premise that “the system is worthwhile, it's just difficult to show that this is so.”9 Information technology proposals in the NHS are appraised, as is any major potential investment.10 Broadly, this involves estimating the present and future costs and benefits of all available options, discounted to their present value, and taking account of risks and uncertainty. The biggest single risk is obsolescence because of the pace of technological development.
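The discounting step in such option appraisal follows the standard net present value calculation; a minimal sketch, with generic symbols that are illustrative rather than drawn from the NHS appraisal guidance cited:

```latex
% Net present value of one option over a planning horizon of T years,
% where B_t and C_t are the benefits and costs falling in year t
% and r is the annual discount rate.
\mathrm{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^{t}}
```

Options are then compared on their net present values, adjusted for the risks and uncertainty noted above; a benefit arriving in ten years counts for much less than the same benefit arriving next year, which is why rapid obsolescence weighs so heavily against long payback periods.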
Given the rapid uptake of information technology, its diversity, and the locally devolved nature of the present NHS, a purist approach to evaluation is probably inappropriate and unworkable. The randomised controlled trial will have a limited place, and other proved evaluative methods, both quantitative and qualitative, must urgently be brought into play. In other fields of health care development, there is little post-project evaluation aimed at assessing how well the benefits foreseen were realised in practice and what lessons may be learnt for the future. This phase of evaluation is particularly important with information technology projects and should be more widely used.
In future, benefits must be expressed not just in organisational, financial, or business terms. It is too much to expect all information technology investments to be appraised on health outcomes. However, taking account of factors that have more meaning for health care professionals and patients—such as improved diagnosis, more appropriate and effective treatment, fewer complications, greater protection through preventive programmes, more decisions that adhere to evidence, less inconvenience—is more likely to lead to better and more widely accepted investment. The NHS has made important strides with its information technology strategy to bring order out of chaos. It now needs to bring the cutting edge of evidence to decision making in this area.