BMJ 2001;323:275. doi: https://doi.org/10.1136/bmj.323.7307.275 (Published 04 August 2001)
Evidence based policy: proceed with care
- Nick Black, professor of health services research
- Department of Public Health and Policy, London School of Hygiene and Tropical Medicine, London WC1E 7HT
- School of Public Policy, University College London, London
- Accepted 22 February 2001
The emergence of evidence based medicine in the early 1990s led to some clinicians challenging managers and policymakers to be equally evidence based in their policymaking. This demand was shared by some health policy analysts: “At a time when ministers are arguing that medicine should be evidence-based, is it not reasonable to suggest that this should also apply to health policy? If doctors are expected to base their decisions on the findings of research surely politicians should do the same … the case for evidence-based policymaking is difficult to refute.”1
The need to be seen to be making evidence based decisions has permeated all areas of British public policy. The government has proclaimed the need for evidence based policing, and the 1998 strategic defence review introduced evidence based defence.2 In the health sector, the concept of evidence based policy has gained ground, and a journal has been launched devoted to this challenge (Journal of Evidence Based Health Policy and Management).
Despite some groups using evidence based policy as a fig leaf, it seems difficult to argue with the idea that scientific research should drive policy. However, before accepting the argument we need to understand the implied model of policymaking.
Summary points
- Evidence based policy is being encouraged in all areas of public service, including health care
- Research currently has little direct influence on health services policy or governance policies
- The implicit assumption of a linear relation between research evidence and policy needs to be replaced with a more interactive model
- Researchers need a better understanding of the policy process, funding bodies must change their conception of how research influences policy, and policymakers should become more involved in the conceptualisation and conduct of research
- Until then, researchers should be cautious about uncritically accepting the notion of evidence based policy
What is the implied model of policymaking?
In essence, protagonists assume that the relation between research evidence and policy is linear3; a problem is defined and research provides policy options. Research is used to fill an identified gap in knowledge. This is consistent with both a positivist model of science and professional dominance, in which the views and priorities of healthcare professionals (and doctors in particular) dominate healthcare policies. It assumes research evidence can and should influence health policy. Lomas has suggested that the model is viewed as “a retail store in which researchers are busy filling shelves of a shop-front with a comprehensive set of all possible relevant studies that a decision-maker might some day drop by to purchase.”4
Discussion of the theory underlying evidence based policy might safely be consigned to an intellectual dustbin if it were not for the practical consequences. If we accept a linear relation, then the value of research will inevitably be judged in terms of its impact on policy. Few would argue with “the need to show that public investment in research results in benefits for patients,”5 but politicians and managers take it a stage further, requiring “a substantial return from investment in health services research.”6 This implies that at least some aspects of the impact of research can and should be quantifiable, even in monetary terms.
The consequences of failure to show benefit are fairly clear. So, how successful have researchers been at facilitating evidence based policy?
Is healthcare policy evidence based?
Several studies have been conducted on the relation between research and policymaking over the past five years. A useful distinction has been made between practice policies (use of resources by practitioners), service policies (resource allocation, pattern of services), and governance policies (organisational and financial structures).7
The relation between research evidence and clinical practice has been thoroughly examined by practitioners of evidence based medicine. Clinical effectiveness should clearly play a large part in determining practice policy. Concern has focused on the delays observed in implementation of research findings.8
The linear, rationalist model holds up quite well for practice policy, although it shows signs of strain in two ways. Firstly, policymakers differ in their interpretation of the evidence. For example, guidelines on cholesterol testing vary considerably both between and within countries.9 Such differences reflect variations in context (values) and in the background of the policymakers. Generally, the more that clinicians are involved, the less the policy reflects the evidence.
Secondly, there is a lack of generalisability once we move away from drugs to manual interventions. For example, difficulty in devising practice policies in surgery arises because decisions depend on the features of the particular patient (obesity, anatomy, quality of tissue), the particular surgeon, and various external factors (equipment available, competence of assistants).10
Although research has made some important contributions to support service developments, the relation between research evidence and service policies is generally weak. The box lists the six main reasons, which are discussed below.
Reasons why research evidence has little influence on service policies
- Policymakers have goals other than clinical effectiveness (social, financial, strategic development of service, terms and conditions of employees, electoral)
- Research evidence dismissed as irrelevant (from different sector or specialty, practice depends on tacit knowledge, not applicable locally)
- Lack of consensus about research evidence (complexity of evidence, scientific controversy, different interpretations)
- Other types of competing evidence (personal experience, local information, eminent colleagues' opinions, medicolegal reports)
- Social environment not conducive to policy change
- Poor quality of knowledge purveyors
Firstly, some policymakers have goals other than maximising clinical effectiveness. The goal may, for example, be social or financial. The UK government's decision to aim the safe sex campaign in the 1980s at the entire population, rather than at those at high risk, owed nothing to research; it reflected the wish to avoid a possible backlash against gay men and black people.11 And the introduction of the prenatal triple test for detecting Down's syndrome helped providers fulfil their contract with local purchasers.12 Even terms and conditions of employment of staff can justify a policy. Decisions regarding health promotion in primary care in the early 1990s were influenced by negotiations on the general practitioner contract between the profession and the department of health.13 Policy may also be shaped by electoral considerations. For example, the Changing Childbirth policy in the 1990s was politically led with no secure scientific base.14 Local policymakers are therefore under a myriad of often competing pressures, of which scientific evidence is but one.
Secondly, research evidence may be dismissed as irrelevant if it comes from a different sector or specialty. For example, general practitioners have been reluctant to extrapolate the results of randomised trials on the use of anticoagulants to primary care because the studies were carried out in hospitals.14 Evidence may also be dismissed in areas where practice often depends on tacit knowledge, such as surgery. Perceived lack of applicability can also lead to dismissal—because research on the effectiveness of interferon alfa for hepatitis C was confined to patients with no other serious health problems, the evidence has been seen as irrelevant for a population with high comorbidity.15
Thirdly, there may be a lack of consensus about the research evidence because of its complexity, scientific controversy (incomplete or inconsistent evidence), or different interpretations. Policy on preventing heart disease in primary care has suffered from widely differing interpretations of the results of the two major randomised trials.13
Fourthly, policymakers may value other types of evidence such as personal experience, local information on services, eminent colleagues' opinions, and medicolegal reports. Fifthly, the social environment may not be conducive to policy change. Attempts at introducing evidence based needs assessment have been hampered by frequent organisational changes lowering staff morale.16 And finally, the quality of the “knowledge purveyors” may be inadequate. These are the people who carry the research evidence into the policymaking forums. In central government, civil servants usually have this crucial role. In the United Kingdom, a high turnover of such staff, lack of experience in a particular field, and high workload militate against good quality advice.17
The direct influence of research on governance policies has been negligible. This is illustrated by the reorganisations of the NHS in 1974 and 1989. In both cases research evidence was ignored but for different reasons.11 In 1974, there was a consensus—unification of services was necessary, as was coterminosity with local government. Therefore, no research evidence was needed. Instead working parties were set up in which decisions were based on experiential evidence. In contrast, in 1989 policy was largely influenced by ideology and electoral considerations. Ambiguous research evidence (such as on the merits of competition in the United States) was used selectively.
A second example is the policy of introducing managed care.18 Evidence from the United States has been used both by proponents and opponents. Opponents noted that of 81 published observations of outcomes, 68 showed no significant advantage for managed care. Meanwhile, proponents pointed out that in the other 13 observations, managed care organisations achieved lower use of services and of expensive tests and procedures (where alternatives existed) without compromising quality of care. In effect, research evidence has had little influence on the policy to introduce managed care.
Clearly, research has only a limited role because governance policies are driven by ideology, value judgments, financial stringency, economic theory, political expediency, and intellectual fashion.19 It would be naive and unrealistic to expect research to provide evidence to clinch arguments about governance policies.
Several conclusions can be drawn from the above discussion of practice, service, and governance policies. Firstly, research has little direct influence on service and governance policy if we adopt those criteria set and accepted by researchers. Secondly, the relation between research and policy depends on the arena and, thus, the policymakers. Research evidence is more influential in central policy than local policy, where policymaking is marked by negotiation and uncertainty. Thirdly, the use of research depends on the degree of consensus on the policy goal. It is used if it supports the consensus and is used selectively if there is a lack of consensus. Fourthly, many researchers are politically naive. They have a poor understanding of how policy is made and have unrealistic expectations about what research can achieve. And, fifthly, policymaking is not an event but is “ethereal, diffuse, haphazard and somewhat volatile.”4 The consequences of failing to understand this are clear: “So long as researchers presume that research findings must be brought to bear upon a single event, a discrete act of decision making, they will be missing those circumstances and processes where, in fact, research can be useful.”20 In other words, we need a better model to underpin the relation.
What other models of policymaking exist?
An alternative view was proposed by Weiss in the 1970s, the enlightenment model.21 In this model, research provides a new way of conceptualising the world, mapping the decision making terrain, and challenging conventional assumptions. Research is seen as one of several knowledge sources and cannot speak for itself in policy terms. Evidence based policy is not simply an extension of evidence based medicine: it is qualitatively different. Research is considered less as problem solving than as a process of argument or debate to create concern and set the agenda. During the 1980s and 1990s this view was extended to a more interactive model based on a close dialogue between researchers and policymakers in which knowledge is considered to be inherently contestable.22
The implication of accepting this model is that policymakers have to get something out of research if they are to use it. It is necessary, therefore, to consider which arguments are likely to be useful or gratifying to which policymakers. Researchers have to accept that their work may be ignored because policymakers have to take the full complexity of any situation into account. They need to recognise that the other legitimate influences on policy (social, electoral, ethical, cultural, and economic) must be accommodated and that research is most likely to influence policymakers through an extended process of communication.
The challenge that this represents for researchers is considerable. We can see why if we look at the factors that influence policy decisions (box).4 Researchers have tended to focus on enhancing the strength of the information available, with disappointing results. For research to have an impact it is necessary to target the values of the policymakers. As ideologies and interests are almost impossible to change, researchers' main target is to challenge and change beliefs. This is difficult because they are competing with other sources of persuasion and have to counteract pressure to reject research evidence if it is incompatible with policymakers' interests and ideologies. In addition, beliefs change only slowly and under repeated exposure.
Lomas's framework for understanding policymaking4
- Institutional structure—its design, who is involved, rules of conduct
- Values—based on beliefs, ideologies, interests
- Information—research, anecdote, experience, propaganda
How can research be more influential?
Change in researchers' attitudes
Researchers need to acquire a more sophisticated understanding of the policy process. They need to understand that there are many sorts of evidence, that sensible decisions may not reflect scientific rationality, and that context is all important, particularly with policies related to services and governance. They must also resist simplistic payback models, recognise the difficulty of identifying and quantifying the contribution of research, and be prepared to defend the unmeasurable. One of the most useful roles for research is to make people review their beliefs and legitimise unorthodox views.
Change in funders' understanding
Those who fund and commission research need to review their conception of how research influences policy. Interesting new approaches are being adopted that are consistent with a more complex model—for example, the use of modelling to predict whether research is likely to have any impact23; iterative tendering to improve the dialogue between researcher and policymaker24; encouragement of policymakers to invest directly in research25; and shifting the responsibility for commissioning from scientists to the end users of the research, as is happening with the new NHS service delivery and organisation research and development programme. Funders must also recognise the limited value of single studies. Generally, the results of a single study are not worth disseminating. Syntheses of the results of studies are the appropriate product of research endeavour.
Change in the way research is conducted
Policymakers need to be more involved in the conceptualisation and conduct of research. Researchers need greater access to information on the priorities of policymakers, who in turn need to organise and communicate their needs better.26 A closer relation between the two groups needs to be sustained during the research and beyond if the work is to have any impact. A “policy community” needs to be created with the appropriate people—this might include civil servants to purvey knowledge into policymaking forums, journalists to engender wider interest, and practitioners who will translate the new knowledge into practice.27 And all of this activity needs to be cognisant of timing. Windows of opportunity to make major change open up only rarely and briefly, when policymakers' values happen to coincide with the implications of research. It's like the offside trap in football: go too soon, you're caught offside and nothing ensues; go too late and your progress will be opposed.28
I thank Maureen Dalziel, Renee Danziger, Rudolf Klein, Steven Lewis, Jonathan Lomas, and Nick Mays for enlightenment. This paper is based on the Cochrane lecture given on 6 September 2000 at the Society for Social Medicine Annual Scientific Meeting, University of East Anglia.
Funding NB is grant holder for the NHS National Coordinating Centre for Service Delivery and Organisation Research and Development.
Commentary: research must be taken seriously
- Anna Donald, clinical lecturer
- Department of Public Health and Policy, London School of Hygiene and Tropical Medicine, London WC1E 7HT
- School of Public Policy, University College London, London
Black rightly identifies the limits of evidence based practice—mainly, that it does not always work. Non-scientific factors, such as vested interests, often override the most convincing research, and decision makers cannot always agree on its merits.
These problems certainly exist. Research findings are seldom black and white; nor are practitioners perfectly trained to interpret them. Policy decisions are almost always made in the context of money, power, and precedent, and these factors will therefore usually affect the decision. The important question, though, is not whether these factors exist, but the extent to which they hold sway, especially in the face of good research. And beyond that lies a question of what evidence based practice and evidence based medicine are all about.
I do not recognise Black's portrayal of evidence based medicine as an overrationalised, mechanistic process to replace the realpolitik surrounding major policy decisions. As he intimates, the idea that research could sweep away the time consuming but necessary process of weighing up policies from all angles is naive. I can understand that researchers, whose blood, sweat, and tears pour into their work for decades, might have liked to do away with civil servants and implement their research directly into policy, but this was never a possibility.
Rather, in the United Kingdom, evidence based medicine was introduced in public health to improve not only policy outcomes but also the accountability mechanisms by which decisions were made. At that time, the NHS had undergone substantial change based largely on ideological opinion. People were fed up with the extent to which politicians' whims could change their lives—not obviously for the better. Evidence based medicine was propelled on a wave of enthusiasm for something that could reduce the negative effects of uninformed authority, just as scientific rationalism was eagerly promoted by people longing to be free of the blind authority of the Church. In this domain—shifting the politics of decision making by mandating better accountability—it can be argued that evidence based medicine and practice was successful beyond anyone's wildest dreams.
Change in perception
Black's early examples illustrate how, before the mid-1990s and before the term evidence based entered common parlance in Britain, the relation between policy and research was dismal. Since that time there has been a sea change in people's expectation that research should be taken seriously. There are many examples: screening policies, the recent House of Lords' decision on emergency contraception, the demise of general practitioner fundholding,1 the move towards more aggressive treatment for ischaemic heart disease. Perhaps more important, however, is that the change in people's expectations has been so widespread. Individual doctors, nurses, local managers, health visitors, and the public now care enough about research to be palpably frustrated when it cannot be found or implemented.
That is not to say that policy decisions perfectly reflect research findings, nor that research is easy to use in policy. But just because taking research seriously enough to create infrastructure and rules about it is difficult and imperfect does not mean that it should be viewed with caution. As a form of governance, democracy is also difficult and imperfect, and terrible things happen that shouldn't—corruption, war, poverty, and crime. Yet that does not mean that we yearn to return to less questioning and more precarious forms of governance such as oligarchies and dictatorships, however benign. Taking research seriously—being evidence based—is a discipline requiring decades of work to ensure its support through good times and bad. I do not agree that now is the time to put on the brakes.
Competing interests AD works with many governmental, commercial, and charitable organisations to implement evidence.