Implementing findings of research
BMJ 1994; 308 doi: https://doi.org/10.1136/bmj.308.6942.1488 (Published 04 June 1994) Cite this as: BMJ 1994;308:1488
- A Haines,
- R Jones
- Department of Primary Health Care, University College London Medical School, Whittington Hospital, London N19 5NF
- Department of General Practice, United Medical and Dental Schools of Guy's and St Thomas's Hospitals, London SE11 6SP
- Correspondence to: Professor Haines.
- Accepted 3 December 1993
There are unacceptable delays in the implementation of many findings of research. This results in suboptimal care for patients. A number of approaches may be effective in speeding up implementation, including evidence based guidelines, the influence of opinion leaders, and computer based decision support systems. An integrated approach that combines several of these mechanisms is likely to be most effective. The results of systematic reviews of the research literature should be incorporated into programmes of continuing medical education and clinical audit. Professional associations have an important role to play in ensuring that research based information is included in educational activities and clinical guidelines. Purchasers of health care could promote the uptake of research findings during contract negotiations. Improved methods of informing health care users and the public about evidence of effectiveness could also have an impact. Policy makers should take more account of the results of research when formulating recommendations. Methods of improving the implementation of research findings require further investigation, and greater resources should be devoted to them.
The NHS research and development programme has embarked on a series of major reviews of research and development priorities in the NHS, leading to commissioned research. These priorities are being derived from assessment of research need, rather than being based on vested research interests and agendas, and stand a good chance of generating new information for implementation in practice.1 However, this sequence of events cannot be taken for granted: Machiavelli warned, “The innovator makes enemies of all those who prospered under the old order and only lukewarm support is forthcoming from those who would prosper under the new.”2 The medical literature is littered with examples of research findings that have not found timely acceptance in practice, and clinical practice is characterised by wide variations in behaviour.
An early example of delayed uptake of innovations was the use of lemon juice to prevent scurvy. In 1601 James Lancaster showed that lemon juice was effective, but it was not until 1747 that James Lind repeated the experiment, and the British navy did not fully adopt this innovation until 1795 (not until 1865 in the case of the merchant marine).3 Among more recent examples are ascertainment and control of hypertension4 5; widespread failure to give steroids to women in premature labour despite evidence of their beneficial effect on fetal lung surfactant6; inadequate use of prophylactic anticoagulants in patients having orthopaedic surgery7; inadequate treatment of asthma8; and inadequate treatment of children with gastroenteritis.9 In the case of thrombolytic treatment for myocardial infarction there was a 13 year delay between the demonstration of effectiveness from cumulative meta-analysis of randomised controlled trials and the advocacy of the treatment by most authors of review articles or book chapters.10
Researchers need to give some thought to the means of implementing their findings. This paper outlines methods of achieving more effective implementation and discusses the organisational framework.
Problems of implementation
One factor working against the smooth transition from publication of research to clinical practice is probably the longstanding cultural divide between researchers, practitioners, and administrators. Communication of ideas occurs most effectively between people who share important attributes such as educational level, beliefs, social status, and networks. This has been described as homophily (heterophily being the opposite). Researchers and innovators are frequently heterophilous with the practitioners who will determine whether a specific innovation is taken up.11 Research can easily become uncoupled from clinical practice12,13 and the needs of health services and be driven by individual and institutional agendas that may operate in virtual isolation.14 Clinicians may be unable to articulate their research needs and become dependent on the traditional publication and dissemination processes controlled by the research community. Many of the literature reviews that clinicians and researchers use as a means of keeping up to date have been methodologically flawed and have resulted in inappropriate conclusions being drawn.15 Educational activities are rarely driven by evidence about effectiveness. Health policy makers may come to regard academic research as wilfully irrelevant to their needs, and important shifts in health policy often take place with little apparent regard for the research evidence.
There are dangers of uncritical acceptance of medical innovations.16 Even medical leaders have occasionally endorsed treatments that have subsequently been shown to be ineffective or even dangerous. For example, William Osler favoured blood letting for the treatment of lobar pneumonia. J Marion Sims, a prominent figure in American gynaecology, described the postcoital test, which was designed to evaluate the ability of sperm to survive and penetrate cervical mucus. This has been extensively adopted, but it lacks predictive power.17
The challenge is to promote the uptake of innovations that have been shown to be effective, to delay the spread of those that have not yet been shown to be effective, and to prevent the uptake of ineffective innovations. This will require a greater capacity on the part of clinicians, managers, and policy makers to critically analyse evidence that is presented to them. A controlled trial showed that teaching techniques of critical appraisal to final year medical students significantly improved their ability to critically analyse the clinical literature whereas the ability of the control group, who were exposed to standard educational approaches, deteriorated over the duration of the study.18 Having the ability to critically appraise evidence, however, may not necessarily result in change of behaviour.
Diffusion of innovations
There is an extensive literature on diffusion of innovations, only a minority of which concerns medicine.11 Diffusion is the process by which an innovation is communicated through certain channels over time among members of a social system. Five adopter categories have been defined on the basis of the rapidity with which they take up innovations: these are innovators, early adopters, early majority, late majority, and laggards. If the proportion of individuals taking up a new idea is plotted over time an S-shaped curve of variable slope is observed, with the innovators and early adopters being at the foot of the curve and the laggards at the asymptote (fig 1). The five categories can be partitioned by standard deviations away from the mean time of adoption: for example, innovators (2.5% of the population) are two or more standard deviations below the mean time of adoption.
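As an illustration (not part of the original article), the short sketch below assumes, as this description of diffusion implies, that times of adoption are roughly normally distributed; the five category shares then follow directly from the standard deviation cut offs, and the cumulative proportion of adopters over time is the normal distribution function, which produces the S shaped curve of fig 1. Running it reproduces the familiar shares of 2.5%, 13.5%, 34%, 34%, and 16% to within rounding.

```python
# Minimal sketch (not from the article): adopter categories under the usual
# assumption that times of adoption are normally distributed, so that the
# cumulative adoption curve is the normal distribution function -- the
# S-shaped curve of fig 1.
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Category boundaries expressed in standard deviations from the mean time
# of adoption (the rounded shares are 2.5%, 13.5%, 34%, 34%, and 16%).
categories = {
    "innovators":     (float("-inf"), -2.0),
    "early adopters": (-2.0, -1.0),
    "early majority": (-1.0,  0.0),
    "late majority":  ( 0.0,  1.0),
    "laggards":       ( 1.0, float("inf")),
}

for name, (lo, hi) in categories.items():
    share = normal_cdf(hi) - normal_cdf(lo)
    print(f"{name:15s} {share:6.1%}")

# The cumulative proportion of adopters at time t (measured in standard
# deviations from the mean) traces the S-shaped curve: half the population
# has adopted by the mean time of adoption.
print(f"adopted by the mean time: {normal_cdf(0.0):.0%}")
```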
A classic study of the spread of the use of a new antibiotic among doctors in Illinois in the early 1950s showed that innovative doctors, who were among the first to use the new antibiotic (which researchers gave the pseudonym Gammanym), had attended more medical meetings out of town and had more extensive social networks than did those who adopted the drug later.19 Studies of innovations in other fields have also reported that innovators tend to have extensive friendship networks. Those doctors defined as opinion leaders by their colleagues had almost all adopted Gammanym by the first half of the period studied. One reason for the S-shaped curve might be that once the opinion leaders in a given system adopt an innovation they influence their colleagues, who rapidly take it up.
Few resources have been devoted to testing strategies to implement research findings, and there is much to be learnt. There is growing interest in this topic, and the United States Agency for Health Care Policy and Research has prepared an annotated bibliography on dissemination of information to health care practitioners and policy makers.20 Dissemination alone, however, is not sufficient to promote change. Strategies for implementation must be sustainable and dynamic, taking into account changing evidence about effectiveness.
Priorities for implementation
Criteria for deciding priorities for implementation should include burden of disease, the potential benefit that might accrue from improvements in care, the strength and generalisability of the evidence, and the feasibility of implementation. Measures of cost effectiveness are also bound to play a part.
The strength and generalisability of evidence can be deduced from systematic reviews of the research evidence that have used the techniques of critical appraisal.21 The Cochrane Collaboration is playing a key role in coordinating systematic reviews of randomised controlled trials in the United Kingdom and internationally,22 and guidelines for reviews have been developed.23 The NHS facilities at the University of York for commissioning and disseminating reviews will complement the activities of the Cochrane Collaboration by focusing on different types of studies and ensuring wide distribution of information.
Different approaches
A top down, centralised approach to implementation is unlikely to achieve changes in behaviour. However, the implementation of research findings cannot be left solely to spontaneous local initiatives or specialty organisations, although both may have a useful part to play. The Royal College of Radiologists provides an example of how a specialty organisation can mount a sustained and effective programme through the use of guidelines derived from the findings of research.24 It has achieved success in several specific areas, including the use of x ray pictures in accident and emergency departments and preoperatively. The strategies for the incorporation of knowledge from well designed studies into clinical policies and practice have been classified, and those approaches that focus primarily on individuals have been separated from those that focus on geographical or specialty communities.25 26 In each case there are four main strategies: patient centred, educational, administrative, and economic.
Patient centred approaches include educating patients about the effectiveness of interventions in an attempt to change the behaviour of professionals. Giving patients more information about the probabilities of different outcomes of surgery and their potential impact on quality of life may influence the treatment that the patients choose. This is exemplified by the use of an interactive videodisk for patients who are being offered a prostatectomy.27
Educational approaches - Traditional didactic approaches to continuing medical education do not seem to be an effective way of changing practitioners' behaviour, though they may increase awareness of issues.28 A review of 50 randomised controlled trials of continuing education suggested that approaches that incorporate feedback of performance, involvement of learners in setting priorities, or face to face encounters between practitioners and an educator may be more effective. However, only a minority of studies have assessed the impact on patient outcomes. Strategies linked to activities that enable or reinforce practice consistently improve the performance of doctors.29 A randomised controlled trial compared the effects of audit and feedback with those of education by local opinion leaders (previously identified by a survey) on the rates of trial of labour and vaginal birth advocated in guidelines for the management of women with a previous caesarean section. Audit and feedback had little effect, whereas education by local opinion leaders substantially increased the rates.
Economic strategies are frequently motivated by the intention to contain costs, but there is little evidence that they have been used to promote effective health care based on research. An American study that compared a health maintenance organisation with “fee for service” care suggested that use of services fell at equal rates across appropriate and inappropriate areas of practice.31 In the United Kingdom the use of target payments to encourage cervical cytology and immunisation has focused the attention of primary care teams, but it is unlikely that such a mechanism could be widely used because of the potentially large numbers of topics.
There is increasing interest in the use of guidelines, and a review of their efficacy has revealed factors associated with a high probability of successful use.32 These include a sense of local ownership (that is, development of guidelines by those who are to use them), a dissemination strategy that includes specific educational interventions, and the use of patient specific reminders at the time of consultation.
An integrated system
It is clear that no single strategy is likely to be successful.33 We therefore propose an integrated system that encompasses several components (fig 2). Systematic reviews of research should be used to inform providers, purchasers, patients and the public, professional organisations, and policy makers. Feedback from these groups might also lead to new research and reviews. Updated evidence about effective and ineffective interventions should be included in undergraduate, postgraduate, and continuing medical education. Large sums of money are spent on continuing medical education; the cost of the postgraduate education allowance is over £2000 a year for each general practitioner. The system does not seem to represent good value for money as there are no clear objectives either in terms of educational impact or outcome for patients. Similarly, much clinical audit activity is of questionable effectiveness. A much closer link between continuing education and clinical audit seems essential (fig 3).34 An important element of continuing education could be the development, or more commonly adaptation, of guidelines and practice policies derived from the results of research. These guidelines could then form the basis of standards for audit. This could help to establish a sense of ownership, which appears to be important for success, while avoiding a proliferation of guidelines based on inadequate scientific evidence. Topics where evidence is insufficient for the development of guidelines could form the basis of new research projects. In the United Kingdom much of the coordination between the various components of the implementation strategy could take place at the level of regional health authorities or their successor organisations.
Evidence based guidelines
A growing number of guidelines have been developed after exhaustive reviews of the evidence; for example, those prepared under the auspices of the United States Agency for Health Care Policy and Research.35 Three forms of guideline are available on each topic: a patient guide, a quick reference guide, and a more complete guideline that summarises the research rationale behind the guidelines. The database Effective Care in Pregnancy and Childbirth is now available on disk and is regularly updated.36 It is the first of a series of specialised databases - Cochrane Updates on Disk - derived from the Cochrane Database of Systematic Reviews. Effective Health Care - a bulletin on the effectiveness of health service interventions for decision makers - systematically reviews the evidence on specific topics using three principal criteria: clinical effectiveness, cost effectiveness, and acceptability.37 The use of resources such as these to develop locally applicable guidelines should encourage a participatory, multidisciplinary approach. If priorities are set solely on the basis of local interest they may address topics in which performance is already satisfactory, but if they are based only on national and regional priorities they may be seen as an imposition. Considerable discussion and negotiation may be required to ensure that an appropriate balance is achieved.
Systematic reviews of the evidence will not always lead to clear and unambiguous recommendations - the controversy over the drug treatment of hypercholesterolaemia is one example of this38 - but they may suggest new priorities for research. Testing strategies for implementation should become a growing field. Thus, a more responsive relation between research and practice can be developed.
Other methods
Other methods of improving the implementation of research findings may include computer based decision support systems that incorporate information from research. A critical appraisal of relevant trials showed that such systems can improve the performance of doctors, but more work is needed on patient outcome.39 The use of academic detailing - pharmacists trained in educational techniques talking on a one to one basis with doctors - has been shown to affect prescribing patterns in the United States.40 Such pharmaceutical advisors could play a similar role for general practice in the United Kingdom, but further research is needed to determine their effectiveness.
Purchasers, providers, and public
Purchasers of health care could play an important role in implementing research findings, although it may be difficult to include more than a limited number of specific recommendations in contracts. Evidence of effectiveness needs to be presented to purchasers and providers in an appropriate form, including any available information on costs, acceptability, and overall impact of the intervention.41 Discussions between purchasers and providers could help to identify priorities. Each specialty could be asked to identify a specific topic for implementation over the next year. Providers would thus be responsible for the detailed implementation of research findings, and this process should be subject to audit. As advocates for a particular topic, purchasers and providers could set up local implementation projects. The Getting Research into Practice project, based in Oxford, is focusing on improving practice in a number of specific areas, including the use of steroids in premature labour (H McQuay, personal communication), as are similar activities in North East Thames region. A recent circular from the NHS Executive (EL (93) 115) has outlined available sources of information about clinical effectiveness, and district health authorities are being encouraged to use the information in the 1994-5 contracting round and to make use of clinical guidelines in local discussions.
The role of patients (consumers) in the implementation of research findings requires more investigation. The potential for “retailing” research has been reviewed with reference to the evidence in Effective Care in Pregnancy and Childbirth42; the results suggested that community organisations concerned with pregnancy and childbirth could increase the use of research based information.43 In the United Kingdom the National Childbirth Trust uses Effective Care in Pregnancy and Childbirth in training its own staff.44 Written material could be disseminated to patients through prenatal classes and by using local and national media. Clearly, working with health professionals where possible is more likely to be effective than working in isolation. In the United States news of the pain guideline of the Agency for Health Care Policy and Research made the front page of 11 newspapers and was covered by at least 44 television stations and 10 news programmes broadcast by more than 4400 radio stations.35 The impact of such wide publicity has not been rigorously evaluated, but the volume of public inquiries suggests major public interest. In Switzerland the potential power of the media was shown by the successful outcome of a campaign to reduce the high rates of hysterectomy; this included newspaper, radio, and television debates.45 Social marketing techniques, which entail the diffusion of socially beneficial ideas rather than commercial products, may have a role to play. They have been used successfully by governments to encourage the use of family planning techniques in several countries.11
Professional organisations and policy makers
Professional associations and colleges also have an important role to play through programmes of postgraduate and continuing education and in postgraduate examinations. The Royal College of General Practitioners has included a paper on critical reading of the medical literature in the membership examinations. The Royal College of Midwives promotes the sale of Effective Care in Pregnancy and Childbirth and encourages members to use the information in it to evaluate and change midwifery practice. Although some senior obstetricians initially seemed sceptical of its value, the audit committee of the Royal College of Obstetricians and Gynaecologists has recently recommended it as a source of information about effective forms of treatment.44 Professional bodies may also influence the public. For example, the “Defeat Depression” campaign organised by the Royal Colleges of Psychiatrists and General Practitioners aims to improve the knowledge of professionals, patients, and relatives about this condition and to change attitudes.46
The role of professional bodies could include ensuring that systematic reviews of literature are undertaken in areas of special interest, recruiting members to participate in the reviewing process, and consulting those who are expected to implement guidelines during their development. Guidelines that span primary and secondary care should have input from both generalists and specialists. Increasing clinicians' participation in intervention trials may be another method of improving the uptake of their findings. For example, clinicians who participated in multicentre trials of lung cancer treatment were more likely to use effective treatments, and those who entered larger numbers of patients were more likely to be influenced by the results.47
Policy makers have a key role in determining whether evidence based approaches are used. Recent examples (such as the imposition of the general practitioner contract and particularly the inclusion of inadequately tested strategies for screening and health promotion48) suggest that policy makers need greater understanding of the need for evidence from research if they are to avoid such costly gambles in the future.
Managing change
Clearly, approaches to implementation may vary according to the topic. Some (such as new surgical techniques) may require considerable training, while others may be dependent on organisational change (such as those that span primary and secondary care or affect the whole primary care team). A model for managing change in general practice has been developed based on experience in industry.49 It emphasises the need to obtain comprehensive background information, identify barriers to change, negotiate with key individuals, achieve agreement on the approach to be used, and evaluate the programme for change.
Resources
In the United Kingdom resources should be made available by the NHS research and development programme, both centrally and at the regional level, but other sources should also be tapped. As ownership of a topic is essential for success, this necessarily implies that financial commitment must be forthcoming from the participants. Thus purchasers, providers, and professional and educational bodies should all contribute because implementing research must be seen as intrinsic to improving the effectiveness of their work.
Conclusions
It is unlikely that any single approach will be effective in ensuring the implementation of the findings from research. A more evaluative culture will require a shift in attitudes among health professionals and managers to ensure that they rigorously analyse their own activities and accept the need for critical appraisal of evidence. Patients should expect a clear explanation, where evidence is available, of the effectiveness of the treatments that they are offered. Policy makers, including political leaders, should acknowledge the requirement for policy to be supported by evidence from research where available. Changes in health service management and organisation should be designed as experiments and evaluated appropriately, and methods of promoting the use of research findings require further evaluation. A systematic review of trials of methods to improve professional performance is being prepared for publication (A Oxman and M A Thomson, personal communication), and a Cochrane Collaboration group is being set up to keep trials of methods of promoting the uptake of research findings under surveillance (N Freemantle and I Watt, personal communication).
There is a danger that health professionals and patients may uncritically accept information and guidelines.50 An integrated approach incorporating techniques of critical appraisal should minimise this possibility. It might be perceived as a threat to clinical freedom, but we argue that clinical practice should be responsive to the best available evidence. When evidence is inadequate clinicians should support the need for appropriate research rather than uncontrolled use of unproved interventions. The challenge now is to build on what is currently known about successful techniques of implementation and to ensure that adequate resources are provided and that cultural change is achieved.
Dr Andy Oxman independently presented a diagram similar to figure 3 at a Cochrane Collaboration meeting in 1992. We thank Dr John Spencer for providing useful information for this article.