Rational, cost effective use of investigations in clinical practiceBMJ 2002; 324 doi: https://doi.org/10.1136/bmj.324.7340.783 (Published 30 March 2002) Cite this as: BMJ 2002;324:783
- Ron Winkens, associate professor of research at the interface between primary and secondary carea,
- Geert-Jan Dinant, professor of effectiveness research in general practiceb
- a Department of General Practice and Transmural Care Unit, University Hospital, 6200 MD Maastricht, Netherlands
- b Department of General Practice, Maastricht University, 6200 MD Maastricht, Netherlands
- Correspondence to: R A G Winkens
Investigations such as blood tests and radiography are important tools for making correct diagnoses. The use of diagnostic resources is growing steadily—in the Netherlands, for example, nationwide expenditure on diagnostic tests is growing at the rate of 7% a year. Unfortunately, health status is not improving similarly, which suggests that investigations are being overused. The ordering of tests seems not to be influenced by the fact that their diagnostic accuracy is often disappointing. Considerations other than strict scientific indications seem to be involved, and we may ask whether new knowledge and research findings are adequately reflected in daily practice.
Several factors may be responsible for the increasing use of investigations, such as the increasing demand for care (due to ageing of the population and increasing numbers of chronically ill people); the fact that they are available, which in itself leads to ordering; and the urge to make use of new technology. Once an abnormal test result is found, doctors may order further investigations, not realising that on average 5% of test results are outside their reference ranges, and a cascade of testing may result. Furthermore, higher standards of care, the guidelines for which often recommend additional testing, and defensive behaviour have led to more investigations. Unfortunately, when guidelines on selective and rational ordering of investigations are introduced, numerous motives for ignoring evidence based recommendations, such as fear of litigation, or procrastination on the part of the doctor, come into play in daily practice and are difficult to influence.
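The cascade described above can be made concrete with a small sketch. Assuming each test has a 95% chance of falling within its reference range in a healthy patient, and that tests are independent (both simplifying assumptions for illustration, not claims from the article), the chance of at least one spuriously "abnormal" result grows quickly with the number of tests ordered.

```python
# Illustrative only: independence and a uniform 95% in-range probability
# are simplifying assumptions, not data from the article.

def p_spurious_abnormal(n_tests: int, in_range_prob: float = 0.95) -> float:
    """Probability that a healthy patient has at least one result
    outside the reference range when n_tests independent tests are
    ordered: 1 - in_range_prob ** n_tests."""
    return 1 - in_range_prob ** n_tests

for n in (1, 5, 12, 20):
    print(f"{n:2d} tests -> {p_spurious_abnormal(n):.0%} chance of an 'abnormal' result")
```

On these assumptions, a 20-test panel leaves nearly two thirds of healthy patients with at least one out-of-range result, which is how a single screening panel can trigger a cascade of further testing.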
Overuse of investigations—and there is reason to believe that some requests are illogical—leads to overloading of the diagnostic services and overexpenditure: more efficient usage is therefore needed. Interventions focusing on overt examples of inappropriate testing might reduce costs while simultaneously improving quality of care.
Summary points

- Intervention is needed to reduce the often quite illogical overuse of diagnostic tests
- Current evidence favours using combinations of methods to influence doctors' behaviour
- In daily practice doctors' decisions are often affected by pressure from patients
- General practitioners perhaps need more help in putting across the rationale for using, or not using, tests
What does the change involve?
Changing how clinicians order investigations involves a number of stages, shown in the implementation cycle published elsewhere.1
Guidelines, protocols, and standards are needed to formalise optimal practice. The standards developed for general practitioners by the Dutch College of General Practitioners are a good example.2 3 Since 1989 the college has produced some 70 guidelines on a variety of common clinical problems, one of which deals specifically with rational ordering of investigations.4
Simply distributing guidelines, however, does not make clinicians adopt them; strategies have to be devised to bring about actual change. Implementation involves a range of activities to stimulate the use of guidelines, such as communication and information about their contents and relevance, providing insight into the problem of inappropriate ordering of tests and the need to change, and, most importantly, interventions to achieve actual behavioural changes.
Is change feasible?
Ideal interventions would improve the rationality of ordering of investigations while at the same time leading to fewer requests being made, but identifying or formulating interventions that will do this is not easy. Some interventions by their nature cannot always be properly evaluated, especially large scale interventions, such as changes in national regulations or in reimbursement terms, for which it is difficult to obtain a concurrent control group.
Some of the strategies that have been evaluated have proved effective; others were disappointing. Several reviews have focused on the effectiveness of implementation strategies. Their conclusions vary, but there is a measure of consensus that while some strategies by and large seem to fail, others are at least promising. A few examples follow.
Changes in terms of reimbursement or regulatory steps by health insurers or government can affect ordering of investigations by acting as a stimulus to clinicians to adopt the desired changes. In several western countries the healthcare system includes a payment to doctors for investigations ordered, even if these are carried out elsewhere: under these systems ordering fewer tests affects the doctor's income. Changing this payment system could improve adherence to guidelines without the risk of reducing clinicians' income. There is a clear need for trials in this field, as at present virtually no evidence exists on the point.
Since one reason for the growing use of investigations is simply that they are so easy to request on the laboratory request forms in use, one simple strategy would be to remove them from the standard forms or to require explicit justification for ordering them. Such interventions have been effective at little extra cost and effort; Zaat and Smithuis found that they resulted in reductions of 20-50%.5 6 Extensive or unselective curtailing of the request forms, however, carries the risk of underuse of tests. Changes in request forms should therefore be designed very carefully.
A range of interventions provide both information and monitoring of the clinician's performance, such as audit, feedback, peer review, and computer reminders. Investigations the clinician has ordered are reviewed and discussed by expert peers, audit panels, or computerised systems. There is a huge variation in what is reviewed and discussed, how often and into whose performance these interventions enquire, and the ways in which the reviews are presented.
An audit represents systematic monitoring of specific aspects of care; it is somewhat formal, being set up and organised by national colleges and regional committees.7 Feedback resembles audit, although it is less formal and its development is often dependent on the spontaneous initiative of local bodies or even individuals. In peer review, performance is reviewed by expert colleagues. It is used not only to improve aspects of patient care but also to improve organisational aspects (practice management).
Audit and feedback are among the strategies most frequently employed, but the reviews available do not reach any common conclusion. Highly successful trials, such as one with nine years of feedback on the rationality of tests, have been published, but so have interventions with no effect, such as studies of feedback on the costs of tests ordered.8-10 Nevertheless, there is evidence suggesting that feedback under specific conditions is effective—for example, when the information provided is directly useful in daily practice, when doctors are addressed personally, and when they have accepted the expert peer.
Computer reminders are becoming more popular, possibly because of the increasing use of computers in health care. Immediate computer reminders try to influence the behaviour of individuals directly, with less emphasis on monitoring performance. “Anonymous” computer reminder systems may seem less threatening, and their feedback does not need to be seen by anyone but the user. They seem to be a potentially effective method with relatively little effort, and although their effects in reducing unnecessary tests are variable, they seem promising for improving adherence to guidelines.11 The number of studies on computer reminders is relatively low, but it is likely that interventions of this type will increase in the future.
It is clear that two common implementation strategies have little or no effect on ordering of investigations. For many years we have put much effort into continuing medical education (CME) and into writing books, clinical journals, and protocol manuals. Although such written material is partly meant to disseminate research findings and increase scientific knowledge, it is also meant to improve clinical competence, though whether any improvement is reflected in clinical practice is another matter. The effectiveness of these methods has proved disappointing.12 13
The effects of interventions are therefore by no means assured. To discriminate between successful and unsuccessful interventions we need evidence. However, after several decades with many studies and a large number of reviews of implementation strategies, many questions still remain and no final conclusion can be drawn. Differences in interventions, settings, environments, and many other factors impair comparability. Moreover, in a dynamic environment such as the medical profession, it is inevitable that interventions and their effects are dynamic and variable over time. Hence, there will always be a need for evaluations. Owing to their complexity, studies on implementation strategies are difficult to evaluate, and we tend to sacrifice scientific principles in the process. The quality criteria required are no different from those for other evaluations.14 The randomised controlled trial still remains the “gold standard,” but some aspects need special attention.15 The following is a striking example. In most studies on improving behaviour the doctor is the one we are trying to influence. Therefore, the unit of randomisation and, hence, the unit of analysis is the individual doctor, but the number of participating doctors is often limited, and this may affect the power of the study. Here, cluster randomisation and multilevel analyses may offer a solution.16
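The power problem with randomising a small number of doctors can be quantified with the standard design effect used for cluster randomised trials. The formula is a textbook one, and the example numbers below are illustrative assumptions, not figures from the article.

```python
# Standard design effect for cluster randomised trials:
# deff = 1 + (m - 1) * ICC, where m is the average cluster size and
# ICC the intracluster correlation coefficient. Example numbers are
# illustrative, not from the article.

def design_effect(cluster_size: float, icc: float) -> float:
    """Inflation factor applied to the sample size of an individually
    randomised trial when randomisation is by cluster (e.g. practice)."""
    return 1 + (cluster_size - 1) * icc

# If an individually randomised trial would need 200 doctors, grouping
# them 10 to a practice with an assumed ICC of 0.05 inflates that need:
n_individual = 200
deff = design_effect(cluster_size=10, icc=0.05)
print(round(deff, 2))                 # 1.45
print(round(n_individual * deff))     # 290
```

The multilevel analysis the article mentions addresses the mirror-image problem: analysing doctors as if they were independent when they share a practice understates the true variance by the same factor.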
Perpetuation and cost effective implementation
More attention should be paid to perpetuation of interventions once they have been started. It is often unclear what the long term effects are. Interventions in most studies are short, and continuing effects after the intervention has ended are usually not evaluated. Tierney is an exception: he continued observations after ending his intervention, the use of computer reminders to affect test ordering. The effects had disappeared by six months after the reminders were stopped.17 On the other hand, Winkens found that feedback was still effective after being continued over a nine year period.8 Should strategies be continued once they are started? Implementation strategies that are effective with the least effort and lowest cost are to be preferred, and we may also question whether strategies that have not proved effective should be continued. Should we continue to put effort into continuing medical education, especially into "one off" training courses or lectures with no follow up? Whom should we try to reach with scientific and didactic papers: clinicians in daily practice, or only scientists and policy makers with special interests? Should we choose the most effective intervention method, regardless of the effort and cost it requires? If we start an implementation strategy to change test ordering, must we continue it for years? There is no clear answer to these questions, although some published reviews argue in favour of combined, tailor-made interventions. How such a combination is composed depends on local needs, the availability of experts, and many other factors. General recommendations for specific combinations are not possible, but if we look at costs in the long term, computer interventions look promising.
From evidence to practice
An important objective in changing the ordering of investigations is to achieve more rational and lower use, thereby reducing costs or achieving a better cost benefit ratio. The ultimate goal is to improve quality of care for the individual patient, but effects on health status and final outcome for individual patients are difficult to assess. On the other hand, reduced use of unnecessary and inappropriate tests is not likely to have any ill effects on the patient.
Despite the increasing evidence that changes in ordering of investigations are necessary, when it comes to individual patients, the doctor's decision whether to investigate will always involve more than just scientific evidence.18 Low diagnostic accuracy or high costs of tests may conflict with patients' explicit wishes to have tests ordered, or with doctors' wish to procrastinate because of fear of missing an important diagnosis, or with feelings of insecurity and a desire for their opinions to be backed up by a positive test result. These dilemmas are influenced by many factors related to both doctor and patient. For the doctor, one important factor is failure on a previous occasion to diagnose relevant disease. Patients may have a chronic disease and question the skills of their doctor when it cannot be cured, or have recurrent vague or unexplained complaints which doctors may be tempted to over-investigate. Adequate patient education may offer a solution. Patients should be told that not all tests give reliable results and that the value of investigations, especially in primary care, is sometimes limited. But this requires, first of all, that doctors know the principles of medical decision making and their relevance to daily practice.
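One such principle of medical decision making—why a test's value is limited at the low disease prevalence typical of primary care—can be shown with a short Bayes calculation. The sensitivity, specificity, and prevalence figures below are illustrative assumptions, not data from the article.

```python
# Illustrative Bayes calculation: positive predictive value (PPV) of
# the same test at hospital versus primary care prevalence. All
# numbers are assumed for the example, not taken from the article.

def positive_predictive_value(sens: float, spec: float, prevalence: float) -> float:
    """PPV = true positives / all positives."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A test that is 90% sensitive and 90% specific:
print(round(positive_predictive_value(0.9, 0.9, 0.30), 2))  # 0.79 at 30% prevalence
print(round(positive_predictive_value(0.9, 0.9, 0.02), 2))  # 0.16 at 2% prevalence
```

At a primary care prevalence of 2%, roughly five out of six positive results from this hypothetical test would be false positives, which is the quantitative sense in which the value of investigations in primary care is limited.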
Series editor: J A Knottnerus
This is the last in a series of five articles
Funding None declared.
The Evidence Base of Clinical Diagnosis, edited by J A Knottnerus, can be purchased through the BMJ Bookshop (www.bmjbookshop.com/)