Systematic reviews from astronomy to zoology: myths and misconceptions

BMJ 2001; 322 doi: https://doi.org/10.1136/bmj.322.7278.98 (Published 13 January 2001) Cite this as: BMJ 2001;322:98

- Mark Petticrew, associate director
- Accepted 18 September 2000
Systematic literature reviews are widely used as an aid to evidence based decision making. For example, reviews of randomised controlled trials are regularly used to answer questions about the effectiveness of healthcare interventions. The high profile of systematic reviews as a cornerstone of evidence based medicine, however, has led to several misconceptions about their purpose and methods. Among these is the belief that systematic reviews are applicable only to randomised controlled trials and that they are incapable of dealing with other forms of evidence, such as from non-randomised studies or qualitative research.
The systematic literature review is a method of locating, appraising, and synthesising evidence. The value of regularly updated systematic reviews in the assessment of effectiveness of healthcare interventions was dramatically illustrated by Antman and colleagues, who showed that review articles failed to mention advances in treatment identified by an updated systematic review.1
It is nearly a quarter of a century since Gene Glass coined the term “meta-analysis” to refer to the quantitative synthesis of the results of primary studies.2 The importance of making explicit efforts to limit bias in the review of literature, however, has been emphasised by social scientists at least since the 1960s.3 In recent years systematic reviews have found an important role in health services research, and the growing interest in evidence based approaches to decision making makes it likely that their use will increase. Not everybody accepts that systematic reviews are necessary or desirable, and as one moves further away from the clinical applications of systematic reviews cynicism about their utility grows. Several arguments are commonly used to reject a wider role for systematic reviews, and these arguments are often based on major misconceptions about the history, purpose, methods, and uses of systematic reviews. I have examined eight common myths about systematic reviews.
Summary points

- The use of systematic reviews is growing outside health care
- There are still many common myths about their methods and utility
- Some common misconceptions are that systematic reviews can include only randomised controlled trials; that they are of value only for assessing the effectiveness of healthcare interventions; that they must adopt a biomedical model; and that they have to entail some form of statistical synthesis
- Systematic reviews have always included a wide range of study designs and study questions, have no preferred “biomedical model,” and have methodologies that are more flexible than is sometimes realised
- Many of the common criticisms of systematic reviews are fallacious
Systematic reviews are the same as ordinary reviews, only bigger
There is a common but erroneous belief that systematic reviews are just the same as traditional reviews, only bigger; in other words, you just search more databases. Systematic reviews are not just big literature reviews, and their main aim is not simply to be “comprehensive” (many biased reviews are “comprehensive”) but to answer a specific question, to reduce bias in the selection and inclusion of studies, to appraise the quality of the included studies, and to summarise them objectively. As a result, they may actually be smaller, not bigger, partly because they apply more stringent inclusion criteria to the studies they review. They also differ in the measures they typically take to reduce bias, such as using several reviewers working independently to screen papers for inclusion and assess their quality, and even “small” systematic reviews are likely to involve several reviewers screening thousands of abstracts. As a result of these measures, systematic reviews commonly require more time, staff, and money than traditional reviews. Systematic reviews are not simply “bigger,” they are qualitatively different.
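The independent dual screening described above is often summarised with a chance-corrected agreement statistic such as Cohen's kappa. A minimal sketch follows; the screening decisions shown are hypothetical, and real reviews would typically report agreement alongside a process for resolving disagreements:

```python
def cohen_kappa(reviewer_a, reviewer_b):
    """Chance-corrected agreement between two reviewers' include/exclude decisions."""
    assert len(reviewer_a) == len(reviewer_b)
    n = len(reviewer_a)
    # observed proportion of abstracts on which the reviewers agree
    observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n
    # agreement expected by chance, from each reviewer's marginal rates
    labels = set(reviewer_a) | set(reviewer_b)
    expected = sum(
        (reviewer_a.count(label) / n) * (reviewer_b.count(label) / n)
        for label in labels
    )
    return (observed - expected) / (1 - expected)

# Hypothetical decisions on ten abstracts (1 = include, 0 = exclude)
a = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]
b = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
kappa = cohen_kappa(a, b)  # about 0.58: moderate agreement
```

A kappa well below 1 signals that the inclusion criteria may need sharpening before full-text screening proceeds.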
Systematic reviews include only randomised controlled trials
There is a widespread belief that systematic reviews are capable of summarising the results only of randomised controlled trials, and that they cannot be used to synthesise studies of other designs. This belief is prevalent in subjects in which randomised controlled trials are not common and perhaps reflects a concern among some researchers that the studies they consider most relevant will not “count” as evidence. There is, however, no logical reason why systematic reviews of study designs other than randomised controlled trials cannot be carried out. Systematic reviews of non-randomised studies are common, and qualitative studies, for example, can be (and often are) included in systematic reviews. UK guidelines for carrying out systematic reviews do not exclude qualitative research,4 and criteria have been developed to aid in reviewing qualitative studies.5 Even reviews of the effectiveness of interventions do not confine themselves solely to randomised controlled trials; such reviews commonly include other study designs, including non-randomised studies, and case reports.6 In short, there is simply no basis for the belief that systematic reviews can be applied only to randomised controlled trials. The systematic review is simply a methodology that aims to limit bias, and the choice of which study designs to include is a choice that is made by the reviewers. It is not a restriction of the methodology.
Systematic reviews require the adoption of a biomedical model of health
This common myth holds that systematic reviews intrinsically adopt a biomedical model that is of relevance only to medicine and that should not be applied to other domains. Related to this is a belief that as health is more than an “absence of illness” other important outcomes of interventions (such as social impacts) need to be considered and that these are somehow inappropriate for inclusion in systematic reviews. Many health and non-health outcomes, however, are regularly defined, measured, and summarised in both qualitative and quantitative primary studies, and these studies can be (and are) included in systematic reviews. Reviews on the Cochrane Database of Systematic Reviews, for example, commonly include “quality of life” as an outcome alongside clinical indicators of the effects of interventions. The argument that it is somehow inappropriate to do systematic reviews of broader health (or non-health) outcomes is simply fallacious. Systematic reviews do not have any preferred “biomedical model,” which is why there are systematic reviews in such diverse topics as advertising, agriculture, archaeology, astronomy, biology, chemistry, criminology, ecology, education, entomology, law, manufacturing, parapsychology, psychology, public policy, and zoology.7–13 A recent paper even adopted systematic review methods to summarise eyewitness accounts of the Indian rope trick.14 In short, the systematic review is an efficient technique for hypothesis testing, for summarising the results of existing studies, and for assessing consistency among previous studies; these tasks are clearly not unique to medicine.15 16
Systematic reviews are of no relevance to the real world
Systematic reviews have been portrayed as being obsessed solely with disease outcomes and with randomised controlled clinical trials carried out in simple, closed healthcare systems, which are of no relevance to the complex social world outside evidence based medicine. In fact researchers have been carrying out systematic reviews of policy and other social interventions since the 1970s. For example, there have been at least a dozen systematic reviews investigating the effectiveness of delinquency and correctional programmes for the treatment of offenders, one of which reviewed 400 studies to detect a 10% reduction in delinquency, when previous (non-systematic) reviews had been unable to discern any positive effect of correctional treatments.17
Systematic reviews have also been widely used to examine an array of contemporary and often contentious “real world” issues. These range from reviews of the effectiveness of policy and other interventions to systematic reviews of social issues. Complex “real world” issues are not beyond the remit of systematic reviews. This is highlighted by a recent report that summarised systematic reviews of both randomised and non-randomised studies of issues such as prevention of vandalism, crime deterrence, drug misuse, domestic violence, child abuse, and many others.18 These and many other examples show that systematic reviews can provide a credible evidence base to support policymaking.
Systematic reviews necessarily involve statistical synthesis
This myth derives from a misunderstanding of the different methods used in systematic reviews. Some reviews summarise the primary studies narratively, describing their methods and results. Others take a statistical approach (meta-analysis), converting the data from each study to a common measurement scale and combining the studies statistically. The myth assumes that the statistical approach is the only legitimate one. Many systematic reviews, however, do not use meta-analytic methods, and some of those that do probably should not; for example, it is common practice to pool studies without taking account of variations in study quality, which can bias a review's conclusions. It has been pointed out that one of the allures of meta-analysis is that it gives an answer, whether or not the studies are being combined meaningfully.19 Systematic reviews should not therefore be seen as automatically involving statistical pooling: narrative synthesis of the included studies is often more appropriate and sometimes all that is possible. A recent methodological review provides clear guidance on when and how to carry out meta-analyses of randomised and non-randomised studies.19
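To make the statistical approach concrete, a minimal inverse-variance fixed-effect pooling can be sketched as follows. The effect sizes and variances here are hypothetical, and a real meta-analysis would also assess heterogeneity, study quality, and whether pooling is appropriate at all:

```python
import math

def pool_fixed_effect(effects, variances):
    """Inverse-variance fixed-effect pooling of study effect sizes.

    Each study is weighted by the reciprocal of its variance, so more
    precise studies contribute more to the pooled estimate.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    standard_error = math.sqrt(1.0 / sum(weights))
    return pooled, standard_error

# Three hypothetical studies: log odds ratios and their variances
effects = [-0.5, -0.3, -0.7]
variances = [0.04, 0.09, 0.16]

estimate, se = pool_fixed_effect(effects, variances)
ci = (estimate - 1.96 * se, estimate + 1.96 * se)  # 95% confidence interval
```

The pooled estimate sits between the individual study results, pulled towards the most precise study; this mechanical averaging is exactly why pooling dissimilar or poor quality studies can mislead.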
Systematic reviews have to be done by experts
Although expert practitioners are often involved in systematic reviews, most systematic reviewers are not expert practitioners. Even among those carrying out reviews of healthcare interventions, clinical experts are often in the minority. This is not to suggest that clinical input is irrelevant in systematic reviews of clinical interventions. Clearly, such input is invaluable in the location and interpretation of the evidence, and expert opinion is particularly valuable when evidence is sparse.20 Systematic reviews, however, are not the sole province of expert practitioners (such as clinical experts). For example, potential users of systematic reviews, such as consumers and policymakers, can be involved in the process. This can help to ensure that reviews are well focused, ask relevant questions, and are disseminated effectively to appropriate audiences.21
Systematic reviews can be done without experienced information/library support
Systematic reviews can indeed be carried out without proper information or library support, but researchers are typically not experienced in information retrieval, and their searches are likely to be less sensitive, less specific, and slower than those done by information professionals.22 Improvements in information technology are likely to ease the retrieval and filtering of information from electronic databases, but currently this remains a challenging task.23 Producing a good systematic review requires skill in the design of search strategies and benefits from professional advice on the selection of sources of published and unpublished studies.
Systematic reviews are a substitute for doing good quality individual studies
It would be comforting to think that systematic reviews were a sort of panacea, producing final definitive answers and precluding the need for further primary studies. Yet they do not always provide definitive answers and are not intended to be a substitute for primary research. Rather, they often identify the need for additional primary studies as they are an efficient method of identifying where research is currently lacking. Systematic reviews can therefore lead to more, not less, primary research. They can also prevent unnecessary new primary studies being carried out—for example, when meta-analyses show the effectiveness of an intervention by pooling many primary studies.
I have covered a selection of some of the more common myths and misunderstandings about systematic reviews. There are others (such as the myth that systematic reviews are not research but are something that researchers should be expected to do anyway without particular skills, training, or funding). Awareness of the non-clinical applications of systematic reviews is increasing, and the establishment of the Campbell Collaboration, a sibling of the Cochrane Collaboration, will contribute to this by preparing, maintaining, and disseminating systematic reviews of the effects of social and educational policies and practices.24 There are undoubtedly many methodological challenges to be faced in the application of systematic reviews outside clinical specialties. For example, there may be difficulties in incorporating appropriate contextual information and in incorporating the results of relevant qualitative research; and there may be problems of implementation and dissemination. There may also be considerable problems relating to the identification of unpublished studies and “grey” literature. These problems are also common in reviews of healthcare interventions and do not themselves preclude the use of systematic review methods.
In conclusion, I suggest that many criticisms of systematic reviews are ill founded. In particular, systematic reviews are commonly and erroneously perceived solely to be aids to clinical decision making, and this underestimates their wider uses. Despite methodological and other challenges, systematic reviews are already helping to identify “what works” beyond the world of evidence based medicine, and their potential role is more wide ranging than is often realised.
I thank Iain Chalmers, Sally Macintyre, and Trevor Sheldon for comments and for suggesting myths.
Funding: Chief Scientist Office of the Scottish Executive Department of Health.

Conflict of interest: None declared.
Extra references can be found on the BMJ's website