Systematic Reviews and Meta-Analysis
Systematic review data extraction: cross-sectional study showed that experience did not increase accuracy

https://doi.org/10.1016/j.jclinepi.2009.04.007

Abstract

Objective

This study assessed the impact of systematic review and data extraction experience on the accuracy and efficiency of data extraction in systematic reviews.

Study Design and Setting

We conducted a prospective cross-sectional study from October to December 2006. Participants were classified as having minimal, moderate, or substantial experience in systematic reviews and data extraction. Data were extracted from three studies on insomnia treatment. Our primary outcome was the accuracy of data extraction. Data sets from each experience level were analyzed for errors in data extraction and for the results of meta-analyses. Additionally, the time required to complete data extraction was compared.
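As a concrete illustration of this kind of scoring, the sketch below (Python, with hypothetical field names and values; not the study's actual scoring code) compares one participant's extraction against a reference standard, counting errors of omission (data elements left blank) separately from errors of inaccuracy (elements extracted with the wrong value).

```python
# Minimal sketch: comparing one extraction to a reference standard.
# Field names and values are hypothetical; this is not the study's code.

def score_extraction(extracted: dict, reference: dict) -> dict:
    """Count omissions and inaccuracies relative to the reference standard."""
    omissions = 0      # element in the reference but missing/blank in the extraction
    inaccuracies = 0   # element extracted, but with a value that does not match
    for field, true_value in reference.items():
        value = extracted.get(field)
        if value is None or value == "":
            omissions += 1
        elif value != true_value:
            inaccuracies += 1
    total = len(reference)
    return {
        "omission_rate": omissions / total,
        "inaccuracy_rate": inaccuracies / total,
        "error_rate": (omissions + inaccuracies) / total,
    }

# Hypothetical example: three data elements from one insomnia trial.
reference = {"n_randomized": 120, "mean_sleep_latency": 32.5, "dropouts": 14}
extracted = {"n_randomized": 120, "mean_sleep_latency": 35.2, "dropouts": None}
print(score_extraction(extracted, reference))
# one inaccuracy (mean_sleep_latency) and one omission (dropouts)
```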

Results

Error rates were similar across experience levels, ranging from 28.3% to 31.2%. Mean rates for errors of omission (11.3–16.4%) were generally lower than those for errors of inaccuracy (13.9–17.9%). There were no significant differences between groups in error rates or in the accuracy of meta-analysis results. Differences in the time required approached statistical significance, with minimally experienced participants requiring the most time.
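Comparing the "accuracy of meta-analysis results" between groups amounts to re-pooling the effect estimates from each group's extracted data and contrasting them with the pooled estimate from the reference data. The following is a generic fixed-effect inverse-variance pooling sketch in Python with hypothetical numbers; it is not the analysis code used in the study.

```python
import math

def pool_fixed_effect(effects, standard_errors):
    """Fixed-effect inverse-variance pooled estimate with a 95% CI."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical mean differences in sleep-onset latency (minutes) from three trials.
reference_effects = [-8.0, -5.5, -10.2]
ses = [2.1, 1.8, 3.0]
erroneous_effects = [-8.0, -5.5, -12.0]   # one inaccurately extracted effect

print(pool_fixed_effect(reference_effects, ses))   # pooled result without errors
print(pool_fixed_effect(erroneous_effects, ses))   # shift introduced by the error
```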

Conclusion

Overall, error rates were high among participants at all experience levels; however, the time required for extraction tended to decrease with experience. These results illustrate the need to develop strategies aimed at mastery of data extraction, rather than reliance on previous data extraction experience alone.

Section snippets

Background

There is currently no recommended standard for data extraction in systematic reviews with respect to the experience level of reviewers in systematic reviews and data extraction. To our knowledge, there is no empirical evidence regarding the types and magnitude of errors accompanying data extraction conducted by reviewers with various levels of experience in systematic reviews and data extraction, the impact of these errors on the results of meta-analysis, or the efficiency in data extraction…

Participant recruitment

The participants of this study were recruited through The Cochrane Collaboration, the Evidence-based Practice Center program of the US Agency for Healthcare Research and Quality, and relevant departments at the University of Alberta. A letter of invitation was sent to the members and students of these respective entities, which directed them to an online screening questionnaire. Individuals with prior knowledge of the systematic review process by education and/or experience were eligible for…

Participants

Two hundred and forty individuals responded to the invitation to participate in the study and completed the screening questionnaire (Fig. 1). One hundred and fifty-four individuals were eligible to participate based on their completion of the screening questionnaire and were categorized according to their level of experience in systematic reviews and data extraction. One hundred and twenty-one individuals began the data extraction process with variable completion rates across studies.

Discussion

This is one of the first studies to examine, in a controlled manner, the effect of systematic review and data extraction experience on the accuracy of data extraction in systematic reviews. Overall, we found that level of experience did not result in measurable differences in error rates. Of note is the high level of errors in general, with an overall error rate of 28.7%, which is higher than that found in previous research [4]. We found that the errors were more often because of inaccuracies…

Conclusion

We found that data extraction did not vary significantly by level of data extraction and systematic review experience in terms of error rates or results of the meta-analysis. Overall, we found high error rates by all experience level groups, which underscores the importance of adequate instruction, training, and care in data extraction in systematic reviews. The familiarity of the data extractors with the terminology and outcomes specific to a field of research may play a role in the accuracy…

Acknowledgments

The authors would like to acknowledge Joseph Lau, David Moher, and Lina Santaguida (on behalf of Parminder Raina) who provided expert input on the development of the participant experience classification scheme. We also thank Marilyn Josefsson and Kelley Bessette for their assistance in handling participant inquiries and records.

We gratefully acknowledge the Canadian Agency for Drugs and Technologies in Health, which provided the funding for this research.

Author contributions: J.H. participated in…

Cited by (49)

  • Resource use during systematic review production varies widely: a scoping review

    2021, Journal of Clinical Epidemiology
    Citation Excerpt:

    Extracting major information on study design, participants, and results took one person an average of 41 to 65 minutes per study [36,57]. Using two monitors instead of one helped reduce the time spent on data extraction [57]; experience in data extraction was also associated with less time spent on this task [41]. While single data abstraction and verification took on average 107 minutes per study, doing dual independent data abstraction took 172 minutes [46].

  • The effectiveness of psychological interventions for loneliness: A systematic review and meta-analysis

    2021, Clinical Psychology Review
    Citation Excerpt:

    Extraction was initially conducted by the primary researcher. In order to minimise the probability of errors, an independent second coder repeated the data extraction of all quantitative data (Horton et al., 2010). Several socio-demographic and clinical characteristics were extracted from the eligible studies including: (a) mean participant age; (b) gender composition; (c) country; (d) population; (e) sample size; and (f) measure of loneliness.

  • Few studies exist examining methods for selecting studies, abstracting data, and appraising quality in a systematic review

    2019, Journal of Clinical Epidemiology
    Citation Excerpt:

    Buscemi et al. suggested that single-data abstraction may be best suited to reviews with multiple outcomes and employing experienced reviewers, because the potential negative effects of single extraction may be diluted if conclusions are based on multiple outcomes. Horton et al. [36] categorized 87 reviewers’ experience with SRs and data abstraction as minimal, moderate, or substantial, based on years of SR experience (<2, 4–6, >7), number of reviews conducted (<2, 4–6, >7), and number of studies abstracted (<50, 51–300, >300). They then compared error rates among these groups in the abstraction of three studies on insomnia treatment.

  • Meta-analysis of Nutrition Studies

    2019, Analysis in Nutrition Research: Principles of Statistical Methodology and Interpretation of the Results