
Research Methods & Reporting

When to replicate systematic reviews of interventions: consensus checklist

BMJ 2020; 370 doi: (Published 15 September 2020) Cite this as: BMJ 2020;370:m2864
  1. Peter Tugwell, professor1 2 3 4,
  2. Vivian Andrea Welch, associate professor2 4,
  3. Sathya Karunananthan, postdoctoral fellow3,
  4. Lara J Maxwell, managing editor, Cochrane Musculoskeletal3,
  5. Elie A Akl, professor5,
  6. Marc T Avey, senior manager6,
  7. Zulfiqar A Bhutta, professor7,
  8. Melissa C Brouwers, professor4,
  9. Jocalyn P Clark, adjunct professor8,
  10. Sophie Cook, head of scholarly comment9,
  11. Luis Gabriel Cuervo, senior adviser for Research for Health10,
  12. Janet Agnes Curran, associate professor11,
  13. Elizabeth Tanjong Ghogomu, senior research associate2,
  14. Ian G Graham, professor3 4,
  15. Jeremy M Grimshaw, senior scientist1 3,
  16. Brian Hutton, senior scientist3,
  17. John P A Ioannidis, professor12,
  18. Zoe Jordan, professor13,
  19. Janet Elizabeth Jull, assistant professor14,
  20. Elizabeth Kristjansson, professor15,
  21. Etienne V Langlois, team lead, evidence16,
  22. Julian Little, professor4,
  23. Anne Lyddiatt, consumer17,
  24. Janet E Martin, associate professor18,
  25. Ana Marušić, professor19,
  26. Lawrence Mbuagbaw, assistant professor20,
  27. David Moher, senior scientist3,
  28. Rachael L Morton, professor21,
  29. Mona Nasser, associate professor22,
  30. Matthew J Page, senior research fellow23,
  31. Jordi Pardo Pardo, comanaging editor24,
  32. Jennifer Petkovic, research associate2,
  33. Mark Petticrew, professor25,
  34. Terri Pigott, professor26,
  35. Kevin Pottie, clinician scientist27,
  36. Gabriel Rada, associate professor28,
  37. Tamara Rader, librarian29,
  38. Alison Y Riddle, doctoral candidate2,
  39. Hannah Rothstein, professor30,
  40. Holger J Schünemann, professor31,
  41. Larissa Shamseer, doctoral candidate3 4,
  42. Beverley J Shea, adjunct professor4,
  43. Rosiane Simeon, doctoral candidate32,
  44. Konstantinos C Siontis, assistant professor33,
  45. Maureen Smith, consumer34,
  46. Karla Soares-Weiser, editor in chief, Cochrane35,
  47. Kednapa Thavorn, senior scientist3 4,
  48. David Tovey, emeritus editor in chief, Cochrane35,
  49. Brigitte Vachon, associate professor36,
  50. Jeffery Valentine, professor37,
  51. Rebecca Villemaire, student38,
  52. Peter Walker, emeritus professor2,
  53. Laura Weeks, manager, scientific affairs29,
  54. George Wells, professor39,
  55. David B Wilson, professor40,
  56. Howard White, chief executive officer41
  1. 1Department of Medicine, University of Ottawa, 501 Smyth Road, Room L1227, Ottawa, ON, K1H 8L6, Canada
  2. 2Bruyere Research Institute, Ottawa, ON, Canada
  3. 3Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
  4. 4School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
  5. 5Department of Internal Medicine, American University of Beirut, Beirut, Lebanon
  6. 6Public Health Agency of Canada, Ottawa, ON, Canada
  7. 7Hospital for Sick Children, Toronto, ON, Canada
  8. 8Department of Medicine, University of Toronto, Toronto, ON, Canada
  9. 9BMJ Editorial, London, UK
  10. 10Pan American Health Organization (PAHO/WHO), Unit of Health Services and Access, Washington, DC, USA
  11. 11Faculty of Health, Dalhousie University, Halifax, NS, Canada
  12. 12Stanford Prevention Research Center, Department of Medicine and Department of Epidemiology and Population Health, Stanford University, Stanford, CA, USA
  13. 13JBI, Faculty of Health and Medical Sciences, University of Adelaide, South Australia
  14. 14School of Rehabilitation Therapy, Queen’s University, Kingston, ON, Canada
  15. 15Centre for Research in Educational and Community Services, School of Psychology, Faculty of Social Sciences, Ottawa, ON, Canada
  16. 16World Health Organization, Partnership for Maternal, Newborn, and Child Health (PMNCH), Geneva, Switzerland
  17. 17Ingersoll, ON, Canada
  18. 18Department of Anesthesia and Perioperative Medicine, and Department of Epidemiology and Biostatistics, Western University, London, ON, Canada
  19. 19Department of Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia
  20. 20Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada
  21. 21NHMRC Clinical Trials Centre, University of Sydney, Sydney, NSW, Australia
  22. 22Faculty of Health, University of Plymouth, UK
  23. 23School of Public Health and Preventive Medicine, Monash University, Melbourne, VIC, Australia
  24. 24Cochrane Musculoskeletal Group, Ottawa Hospital Research Institute, Ottawa, ON, Canada
  25. 25Faculty of Public Health and Policy, London School of Hygiene and Tropical Medicine, London, UK
  26. 26College of Education and Human Development, Georgia State University, Atlanta, GA, USA
  27. 27Department of Family Medicine, University of Ottawa, Ottawa, ON, Canada
  28. 28Epistemonikos Foundation, Santiago, Chile
  29. 29Canadian Agency for Drugs and Technologies in Health, Ottawa, ON, Canada
  30. 30Narendra Paul Loomba Department of Management, Baruch College, New York, NY, USA
  31. 31Cochrane Canada and McMaster GRADE Centres, Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, ON, Canada
  32. 32Population Health, Interdisciplinary School of Health Sciences, University of Ottawa, Ottawa, ON, Canada
  33. 33Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN, USA
  34. 34Ottawa, ON, Canada
  35. 35Cochrane, London, UK
  36. 36School of Rehabilitation, Faculty of Medicine, Université de Montréal, Montreal, Canada
  37. 37University of Louisville, Louisville, KY, USA
  38. 38Department of Mechanical Engineering, University of Ottawa, Ottawa, ON, Canada
  39. 39Cardiovascular Research Methods Centre, University of Ottawa Heart Institute, Ottawa, ON, Canada
  40. 40George Mason University, Fairfax, VA, USA
  41. 41Campbell Collaboration, Oslo, Norway
  1. Correspondence to: P Tugwell ptugwell{at}
  • Accepted 9 June 2020

Replication is an essential part of the scientific method, yet replication of systematic reviews is too often overlooked, done unnecessarily, or done poorly. Excessive replication (doing the same study repeatedly) is unethical and a cause of research waste. This article provides consensus based guidance on when to replicate, and when not to replicate, systematic reviews.

The inability to replicate research findings in various scientific disciplines has been a concern.12 Systematic reviews are increasingly used as the basis for informing health policy and clinical practice decisions.3 However, reliance on systematic reviews for decision making assumes that review findings would not change if the review were replicated (eg, methods for the review are repeated by an independent team of reviewers). Given insufficient capacity to replicate all systematic reviews, researchers should determine which reviews warrant replication.

The need for replication of systematic reviews arises from concerns or lack of clarity about the technical or statistical methods or the judgments made, such as the subjective decisions related to defining criteria for inclusion of the population, intervention or exposure, and outcomes of interest; and data collection, synthesis, and interpretation.4

The compelling scientific case for replicating systematic reviews is complicated by research waste concerns (eg, multiple reviews of the same question outnumbering the original studies).56 The term “duplication” has been used to refer to needless, frequently unwitting or unacknowledged repetition without a clearly defined purpose for the repetition.78 With finite research resources available and the risk of potentially misleading results being introduced due to unrecognised bias, the issues surrounding replication are intensified.91011

In this article, we discuss criteria for applying replication appropriately. Recognising that the terminology and conceptual framework for replication is not standardised across the sciences, we define replication of systematic reviews of interventions as either:

  • Direct replication by purposeful repetition to verify the findings of the original research question; or

  • Conceptual replication by purposeful broadening or narrowing of the research question in existing systematic reviews (eg, across broader or more focused populations, intervention types, settings, outcomes, or study designs).

Both types of replication could include differences in systematic review methods (appendix box 1), which include, for example, whether to use traditional comprehensive reviews versus rapid reviews or living systematic reviews. They can involve either independently conducting an entire review or simply repeating a selected part, such as a reanalysis or subgroup analysis.

Systematic review updates and replications overlap, but they are distinct.12 The decision to conduct an update of a systematic review is largely driven by the availability of new data to answer the question of interest, and could occasionally be driven by the availability of new systematic review methods. An update including more data will usually increase the statistical precision of effect estimates. The update could, however, fail to resolve important differences related to previous reviews’ technical procedures or subjective decisions, which is best protected against by independent replication.

Although criteria for when to update systematic reviews have been proposed,1314 systematic review organisations and research funders or commissioners lack tools to judge the need for replicating previous systematic reviews, and thus have no way of knowing whether replication should be prioritised or when such replications would be a poor use of resources or even waste.

We have developed a checklist and guidance to help decide when to replicate or not replicate systematic reviews of studies looking at the effects of interventions to improve health and wellbeing. The checklist can be used by systematic review authors and organisations, funders, commissioners, and groups developing guidelines, consumer decision aids, and policy briefs.

Summary points

  • For systematic reviews of interventions, replication is defined as the reproduction of findings of previous systematic reviews looking at the same effectiveness question either by: purposefully repeating the same methods to verify one or more empirical findings; or purposefully extending or narrowing the systematic review to a broader or more focused question (eg, across broader or more focused populations, intervention types, settings, outcomes, or study designs)

  • Although systematic reviews are often used as the basis for informing policy and practice decisions, little evidence has been published so far on whether replication of systematic reviews is worthwhile

  • Replication of existing systematic reviews cannot be done for all topics; any unnecessary or poorly conducted replication contributes to research waste

  • The decision to replicate a systematic review should be based on the priority of the research question; the likelihood that a replication will resolve uncertainties, controversies, or the need for additional evidence; the magnitude of the benefit or harm of implementing findings of a replication; and the opportunity cost of the replication

  • Systematic review authors, commissioners, funders, and other users (including clinicians, patients, and representatives from policy making organisations) can use the guidance and checklist proposed here to assess the need for a replication


Our methods are adapted from guidance for developing research reporting guidelines.15 We formed an executive group (SK, LM, JP, JPP, PT, VAW) to gather opinions of stakeholders, review the literature, hold a face-to-face consensus meeting of key stakeholders, and draft the checklist and article (fig 1).

Fig 1

Overall research plan: checklist development and dissemination

We used an integrated approach for knowledge translation (as previously described16), with an international multidisciplinary team of methodologists and knowledge users at every stage of this research. The methodologists were experts in clinical epidemiology, consensus methods, guideline development, health economics, information management, information science, knowledge translation, preclinical research, qualitative methods, and statistics. The knowledge users included commissioners, funders, and other users of systematic reviews, including clinicians, patients, and representatives from organisations involved with policy making. This team was involved in the planning of this project and contributed to the interviews, online survey, consensus meeting, this guidance, as well as the dissemination of the output.

Following empirical and opinion gathering steps (appendices 2-5), the executive group collected data and developed a 12 item list of criteria for replication (appendix 6), for deliberation at a consensus meeting (6-8 February 2019) in Wakefield, Canada. During the meeting, each candidate criterion was presented, along with relevant data from the key informant interviews, literature review, and online survey. One designated meeting participant provided a brief commentary on the criterion and two others facilitated an interactive discussion on its value in determining when to replicate systematic reviews. At the end of a 30 minute discussion for each criterion, participants were asked whether they supported the usefulness of the item when deciding when to replicate systematic reviews. Comments were transcribed and considered in subsequent iterations of the checklist.

Following the meeting, the executive group formed the writing group. They summarised the meeting discussions, revised the checklist, and drafted this guidance document. They sought feedback on the checklist and guidance document from all team members through phone meetings and two rounds of manuscript revision. The final checklist was agreed on by all participants through email.

Patient and public involvement

Seventeen members of the Cochrane consumer group (that is, patients, their families and carers, and members of the public) were recruited as research participants to complete the online survey. Our core research team included two patient partners who are included as authors. They provided input on the grant application, participated in the key informant interviews, online survey and consensus meeting, reflected the voices of consumers who responded to the survey, and revised and approved the manuscript.


The consensus meeting was attended in person by 36 of the 54 invited research team members, who represented a range of disciplines and stakeholder groups. There was extensive discussion and debate about the overall definitions of and approaches to replication, the potential use of the checklist, and each of its proposed items. At the end of the meeting, the executive group summarised the main points and gave participants an opportunity to comment before preliminary agreement on the direction of the checklist was reached.

Participants at the consensus meeting agreed that every systematic review of a policy or practice relevant question should be seriously considered for replication. However, given limited resources, before deciding whether to replicate a systematic review, all existing systematic reviews and overviews of systematic reviews on the same question should be identified, and the added value of conducting a replication assessed. Given the absence of a robust evidence base to justify systematic review replication, the group agreed that a checklist would encourage its users to give due consideration to the pragmatic use (clinical/policy/fiscal value) of conducting a project of this nature.

Since many of the discussions before the meeting focused on the implicit trade-offs between the benefits and costs of systematic review replication, participants agreed that the value of information (VOI) framework1718 could be helpful in developing criteria for when to replicate and when not to replicate prior systematic reviews. VOI is a quantitative approach that calculates the return on investment of conducting additional research (in this case, a replication of a systematic review) to generate clearer evidence to inform a clinical or policy decision, where decision uncertainty currently exists. Most VOI studies use cost effectiveness models to quantify the value of additional research. Based on the consensus meeting, the checklist adopted a conceptual VOI framework18 and defined the value of systematic review replication as priority (question 1), potential implications (question 2), and potential impacts (question 3). Question 4 reflects the cost of a systematic review replication.
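To make the VOI logic concrete, the quantity most VOI studies start from is the expected value of perfect information (EVPI): the difference between the expected outcome if the best strategy could be chosen in every scenario and the expected outcome of choosing once, now, under uncertainty. The following is a purely illustrative sketch, not part of the original guidance; the net benefit distributions and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical decision model: simulated net monetary benefit of two
# strategies, each uncertain because the underlying effect estimates
# (eg, from a disputed systematic review) are uncertain.
nb_a = rng.normal(loc=1000, scale=400, size=n)   # eg, current practice
nb_b = rng.normal(loc=1100, scale=600, size=n)   # eg, new intervention
nb = np.column_stack([nb_a, nb_b])

# Decide now, under uncertainty: pick the strategy with the best
# expected net benefit across all simulated scenarios.
ev_current = nb.mean(axis=0).max()

# With perfect information, the best strategy could be picked in every
# individual scenario.
ev_perfect = nb.max(axis=1).mean()

# EVPI: an upper bound on what further research (eg, a replication)
# could be worth per decision. If EVPI is below the cost of the
# replication, the replication is unlikely to be a good use of resources.
evpi = ev_perfect - ev_current
print(f"EVPI per decision: {evpi:.0f}")
```

In this toy model the EVPI is positive because the better strategy switches between scenarios; multiplying it by the population affected gives the ceiling against which the cost of a replication (checklist question 4) can be weighed.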

Participants at the consensus meeting decided collectively that the checklist needed to be a high level, simple tool that linked to available detailed tools for each step. For example, participants endorsed a general item on the methodological quality of previous systematic reviews; some quality assessment tools were suggested, such as AMSTAR-2,19 which provide detailed guidance for users.

Participants also agreed that it would be important to consider the checklist items as examples of reasons for replicating rather than defining an exhaustive list, because some nuances in reasons for replication might be difficult to predict (eg, regarding concerns about conflicts of interest). Finally, participants concurred that replication methods would be driven by uncertainties expressed by users of the reviews (eg, patients and public, practitioners, press, professionals, policy makers). The specific uncertainty would drive the selection of type of replication—direct replication by purposeful repetition or conceptual replication by purposeful broadening or narrowing, using permutations of previous review methods, data, or assumptions.


After the consensus meeting, the executive group formed the writing group. Members met regularly to revise the checklist on the basis of recommendations emerging from the meeting. The 12 item checklist was condensed into four items: some of the 12 items were subsumed under the four broader items, and others were removed because they are covered by the accompanying tools. For example, although the research team generally agreed that the durability of information produced through the replication was worth mentioning, this consideration generally forms part of the item assessing the priority of a systematic review replication, and is included in priority setting tools such as the SPARK tool.20

Table 1 presents the final checklist for when a systematic review should be replicated. Checklist items are presented as questions to guide users. The checklist is aimed at systematic review authors and organisations, funders, commissioners, and groups developing guidelines, decision aids, and policy briefs; or anyone using systematic reviews of interventions as the basis of decision making. The extension statement (appendix 7) provides an explanation and example for each item.

Table 1

Systematic review replication checklist


Application of the systematic review replication checklist: example

The checklist was applied to two systematic reviews that were considered under the Campbell International Development Coordinating Group (table 2). The first review was a replication of a 2015 Cochrane review on deworming to improve developmental health and wellbeing,27 and the second was an update of a 2012 Cochrane review on strategies to increase ownership of bednets to prevent malaria.28 Both reviews involved simple, discrete, low cost interventions implemented by non-governmental organisations to manage a public health issue, with claims of a wide range of benefits. Responses to the checklist provided a strong justification for replicating the first systematic review on deworming. Replication of the second review would be of much less value, because no substantive discordance was found and the topic was less controversial.

Table 2

Application of systematic review replication checklist on two example reviews


Systematic review replication worksheet

Table 3 presents a replication comparative worksheet for appraising the body of evidence, which can include multiple systematic reviews. The worksheet was developed by team members with expertise in guideline development, and it provides an example of how the checklist can be adapted for use by decision makers. It delineates the decision making process of replication and proposes tools and instruments for each step, although some of these tools remain to be developed. The starting point is a specific question (known as the index PICO (population/intervention/comparison/outcome)) raised by a key stakeholder (eg, guideline panel). The PICOs of existing systematic reviews are then compared with the index PICO to assess their relevance. Other sources of uncertainty related to the conduct or reporting of existing reviews or conflicts of interest are also identified before determining whether a replication could reduce uncertainty related to the index PICO. The final decision to replicate a systematic review or not is based on explicit consideration of the benefits, harms, cost, feasibility, and acceptability of a given replication against other alternatives. We have plans for validating this worksheet in pilot studies with Cochrane and the Campbell Collaboration.
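The worksheet's first step, comparing each existing review's PICO against the index PICO, can be sketched programmatically. This is an illustrative sketch only; the review names and PICO values below are hypothetical, loosely echoing the deworming example.

```python
# Hypothetical index PICO raised by a stakeholder (eg, a guideline panel).
index_pico = {
    "population": "school-age children",
    "intervention": "mass deworming",
    "comparison": "no deworming",
    "outcome": "cognitive development",
}

# Hypothetical PICOs extracted from existing systematic reviews.
existing_reviews = {
    "Review A (2015)": {
        "population": "school-age children",
        "intervention": "mass deworming",
        "comparison": "no deworming",
        "outcome": "weight gain",
    },
}

# Flag each PICO element that diverges from the index question, so the
# relevance of each existing review can be judged element by element.
for name, pico in existing_reviews.items():
    mismatches = [k for k in index_pico if pico.get(k) != index_pico[k]]
    if mismatches:
        print(f"{name} -> differs on: {', '.join(mismatches)}")
    else:
        print(f"{name} -> matches index PICO")
```

A divergence on, say, the outcome element would point towards conceptual replication (broadening or narrowing the question) rather than direct replication.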

Table 3

Comparative worksheet for systematic review replication*



Replication is a cornerstone of the scientific method and is critical to best practice, transparency, and accuracy in health, healthcare, and policy decisions. Although the need for purposeful replication of systematic reviews has been noted previously,3132 we provide here a structured approach to support it. Our systematic review replication checklist aims to provide guidance to authors, commissioners, funders, and other users of systematic reviews, including clinicians, patients, and representatives from organisations involved with policy making. Systematic reviews can be subject to errors, ambiguity, and uncertainty, owing to inaccurate data collection or analysis, as well as to judgments and bias in how data are collected, analysed, and interpreted (eg, breadth of the question, criteria for inclusion, variables for adjustment, decisions about how data are combined). Replication should be optimised through appropriate selection of the systematic reviews to replicate, while eliminating the waste of resources caused by unnecessary duplication.69

Because the checklist contains guidance considerations rather than formal mandatory items with only one correct approach, users must consider the relevance of each item from the perspective of the appropriate stakeholders. Use of the checklist might also stimulate additional questions about the unique characteristics of replication in the substantive domain of interest. Users’ increased familiarity with other supporting tools (eg, for prioritisation or quality assessment) will also facilitate use of this checklist. Most importantly, a decision on whether to replicate requires a good understanding of the underlying body of evidence: that is, the extent, quantity, and quality of existing review (and primary study) results. Gaining this understanding could require preliminary work, such as a scoping review.

The purpose of replication is to reduce but not necessarily to eliminate decisional uncertainty for translation of results into policy or practice. Identifying the specific sources of uncertainty to be resolved by the replication is therefore a key component of the checklist and will drive the design of the replication. Depending on the specific uncertainty or controversy, repeating only part of the review might be warranted. For example, when critics raised concerns about a systematic review on vaccines for human papillomavirus owing to the exclusion of unpublished reports,33 the authors conducted a replication that focused mainly on a reanalysis after adding previously omitted data.34 When author influence is a source of uncertainty, it is imperative to have an independent team conduct the replication. In other instances, author overlap between the original and replication reviews might be acceptable. All these associations should be declared in the published review. When considering each checklist item, users need to articulate what concerns or uncertainties of the previous reviews should be assessed and how a replication would resolve these concerns. Designing replications to resolve specific sources of uncertainty in this way will limit the waste of research resources while contributing directly to the advancement of knowledge.

Recent guidance on when to consider updating systematic reviews13 is relevant for assessing when replication of previous systematic reviews is likely to have value, for example, when new evidence has become available. However, such guidance does not ensure that uncertainties around the many judgments or choice of technical methods are addressed.


This checklist can help researchers decide whether scarce research resources should be dedicated to purposefully replicating previous systematic reviews to resolve uncertainties in the existing body of knowledge. The checklist was based on input from multiple stakeholders, including methodological experts, following a literature review, interviews, a structured survey, and a consensus meeting. A rigorous process was followed in consultation with representatives from key related disciplines and perspectives.

A varied set of examples of useful replication and wasteful duplication was used to ensure that the checklist is pragmatic. The Campbell Collaboration, Cochrane, and others represented at the meeting are now proposing to use the checklist to guide their decisions on when replication is warranted.


The usability, acceptability, and usefulness of the checklist will need to be assessed and tailored to different users. For example, we have proposed supporting tools, but others could be more suitable for specific review questions or users. Use of the checklist for types of review question beyond interventions needs to be evaluated. Furthermore, valid responses to the items depend on input from experts both in systematic review methods and in the content area of the review question; both sources of information could be biased.

Application of the four item checklist requires time and human resources. However, our large team of stakeholders thought that the investment of resources for completing the checklist would be well justified given the potential value of replicating systematic reviews that inform policy and practice as well as the opportunity to limit waste related to unnecessary duplication.

Finally, the use of this checklist depends on stakeholders recognising the value of systematic review replication. Promoting the value of systematic review replication will be an important element in the dissemination and implementation of the checklist.

Conclusions and next steps

This work emphasises the importance of systematic review replication as a necessary and useful element in the advancement of knowledge, along with the use of updates, living systematic reviews, and open evidence synthesis. In view of the role of systematic reviews in policy making and guideline development, the validity and reliability of these updated findings should be tested. The checklist serves as an explicit prompt to carefully consider the value of replication alongside other options such as updating, new reviews, and overviews of reviews.

We will test and tailor the usability, acceptability, or usefulness of the tool with specific user groups as needed. A more detailed framework for identifying uncertainties and designing the replication response35 is also being proposed as part of the guidance on how to replicate systematic reviews. Since this work focused on systematic reviews of interventions, its application to other types of review questions and involving other types of systematic reviews (eg, qualitative, scoping, diagnosis, and prognosis) should be explored.


We thank Paul Glasziou, Simon Lewin, and Tammy Clifford for their comments on this work; and Jiaxuan Wang and Humaira Mahfuz for collecting and summarising online survey data.


  • Contributors: PT and VAW contributed equally to this paper. PT, VAW, and HW conceptualised the research question. SK, VAW, PT, LJM, JP, and JPP operationalised the study design and collected the data. SK, VAW, and PT drafted the manuscript. All authors provided input on the direction of the study, content of the checklist, and two rounds of manuscript revision. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

  • Funding: This work was funded by an operating grant from the Canadian Institutes of Health Research (PJT-148870). The funders had no role in the study design; the collection, analysis, and interpretation of data; the writing of the article; or the decision to submit it for publication.

  • Competing interests: All authors have completed the ICMJE uniform disclosure form at and declare: support from the Canadian Institutes of Health Research for the submitted work. PT holds a Canada Research Chair in health equity. LGC works for the Pan American Health Organization/World Health Organization, an intergovernmental public health organisation that is part of the United Nations system; contributions to this publication were made in his personal time and do not necessarily reflect decisions or policies of his employer. BH reports honorariums from Cornerstone Research Group, outside the submitted work, for methodological advice related to the conduct of systematic reviews and meta-analysis. JPP is a member of the Governing Board of Cochrane, a producer of systematic reviews. KSW is a full time employee of Cochrane. In addition, all coauthors have a direct or indirect interest in systematic reviews and replication as part of their job or academic career.

  • Data sharing: Background material for the consensus meeting is available on request from the corresponding author.

  • PT affirms that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned have been explained.

  • Peer review and provenance: Not commissioned; externally peer reviewed.