Treatment effects in randomised trials using routinely collected data for outcome assessment versus traditional trials: meta-research study
BMJ 2021;372:n450. doi: https://doi.org/10.1136/bmj.n450 (Published 03 March 2021)
- Kimberly A Mc Cord, researcher1
- Hannah Ewald, researcher1,2
- Arnav Agarwal, researcher3
- Dominik Glinz, researcher1
- Soheila Aghlmandi, researcher1
- John P A Ioannidis, professor4,9
- Lars G Hemkens, senior scientist1,5,9
- 1Basel Institute for Clinical Epidemiology and Biostatistics, Department of Clinical Research, University Hospital Basel, University of Basel, 4031 Basel, Switzerland
- 2University Medical Library, University of Basel, Basel, Switzerland
- 3Department of Medicine, University of Toronto, Toronto, ON, Canada
- 4Stanford Prevention Research Center, Department of Medicine, Stanford University School of Medicine, Stanford, CA, USA
- 5Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Palo Alto, CA, USA
- 6Department of Health Research and Policy, Stanford University School of Medicine, Stanford, CA, USA
- 7Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA, USA
- 8Department of Statistics, Stanford University School of Humanities and Sciences, Stanford, CA, USA
- 9Meta-Research Innovation Center Berlin (METRIC-B), Berlin Institute of Health, Berlin, Germany
- Correspondence to: L G Hemkens (@lghemkens on Twitter)
- Accepted 27 January 2021
Objective To compare effect estimates of randomised clinical trials that use routinely collected data (RCD-RCT) for outcome ascertainment with traditional trials not using routinely collected data.
Design Meta-research study.
Data source Studies included in the same meta-analysis in a Cochrane review.
Eligibility criteria for study selection Randomised clinical trials using any type of routinely collected data for outcome ascertainment, including from registries, electronic health records, and administrative databases, that were included in a meta-analysis of a Cochrane review on any clinical question and any health outcome together with traditional trials not using routinely collected data for outcome measurement.
Review methods Effect estimates from trials using or not using routinely collected data were summarised in random effects meta-analyses. Agreement of (summary) treatment effect estimates from trials using routinely collected data and those not using such data was expressed as the ratio of odds ratios. Subgroup analyses explored effects in trials based on different types of routinely collected data. Two investigators independently assessed the quality of each data source.
Results 84 RCD-RCTs and 463 traditional trials on 22 clinical questions were included. Trials using routinely collected data for outcome ascertainment showed 20% less favourable treatment effect estimates than traditional trials (ratio of odds ratios 0.80, 95% confidence interval 0.70 to 0.91, I2=14%). Results were similar across various types of outcomes (mortality outcomes: 0.92, 0.74 to 1.15, I2=12%; non-mortality outcomes: 0.71, 0.60 to 0.84, I2=8%), data sources (electronic health records: 0.81, 0.59 to 1.11, I2=28%; registries: 0.86, 0.75 to 0.99, I2=20%; administrative data: 0.84, 0.72 to 0.99, I2=0%), and data quality (high data quality: 0.82, 0.72 to 0.93, I2=0%).
Conclusions Randomised clinical trials using routinely collected data for outcome ascertainment show smaller treatment benefits than traditional trials not using routinely collected data. These differences could have implications for healthcare decision making and the application of real world evidence.
Clinical trials increasingly use health data that are not collected for the purposes of research.12 Such routinely collected data from registries, electronic health records, administrative claims, or even mobile devices might be used to identify trial participants and to assess treatment outcomes.2 Readily available data are typically more affordable than actively collected research data.3 Cost reduction might make larger and longer trials more feasible. Data collection during usual care also avoids artificial research settings, and this could increase pragmatism and applicability of trial results to routine care.4 Databases of routinely collected data include many outcomes that are relevant in practice and matter to clinicians and patients (eg, mortality, disability, hospital admission), whereas they typically lack outcomes that are more relevant for explanatory trials aiming to understand the biological processes underpinning treatment effects (eg, biomarkers).5 Cutting out research driven follow-up visits and relying only on patient interaction during usual care probably better reflects real world treatment effects, and patient adherence in such a setting might be lower than in traditional, more explanatory trials. Overall, trials embedded in existing data collection structures might provide real world evidence, being more informative for guiding treatment decisions and sharing more features of pragmatic trials than many traditional trials do.678
Data quality is a key problem of using routinely collected data for clinical research.12 On the one hand, for some outcomes the quality of routinely collected data might be lower, in particular as a result of non-uniform data collection and potential measurement errors.910111213 On the other hand, healthcare professionals collecting routine data during usual care might have more clinical expertise than research staff who often collect trial data only for a narrow time frame and scope, sometimes only for a few participants or even a single patient in each centre.14 Since routine data are collected independently of the trial from people unaware of treatment allocations, biases related to outcome ascertainment might be even less likely than in traditional trials. Moreover, quality of routinely collected data can vary enormously for different outcomes. For mortality, the quality might be high15: complete, accurate information can be achieved with proper linkage to death registries, whereas other trials not linked to routinely collected data sources might lack information on survival status for many participants. Conversely, the quality of routinely collected data might be highly insufficient for other outcomes, such as specific adverse events or some patient reported endpoints. The impact of using routinely collected data for outcome ascertainment and the impact of potential inaccuracies on trial results are unclear. Misclassification of clinical events or missing information that occurs randomly (for example, due to coding errors or problems with database linkage16) could diminish the treatment effect point estimates.17 Larger sample sizes achieved by using routinely collected data might increase precision of treatment effect estimates,18 but these could still be biased underestimations.
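The dilution argument can be made concrete with a small numerical sketch. The risks, sensitivity, and specificity below are hypothetical values chosen for illustration, not data from any included trial; the point is only that nondifferential outcome misclassification, applied equally to both arms, pulls the odds ratio toward the null.

```python
def odds_ratio(p1, p0):
    """Odds ratio for event risks p1 (treatment) and p0 (control)."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

def observed_risk(p, sens, spec):
    """Risk of a *recorded* event when the data source ascertains the
    outcome with imperfect sensitivity and specificity."""
    return sens * p + (1 - spec) * (1 - p)

p1, p0 = 0.08, 0.12                      # hypothetical true event risks
true_or = odds_ratio(p1, p0)             # ≈ 0.64

# The same measurement error in both arms (nondifferential)
obs_or = odds_ratio(observed_risk(p1, 0.90, 0.95),
                    observed_risk(p0, 0.90, 0.95))
# obs_or ≈ 0.75: closer to the null than the true odds ratio
```

With these illustrative numbers the apparent benefit shrinks from an odds ratio of about 0.64 to about 0.75, an attenuation of the point estimate rather than a reversal of its direction.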
Here, we provide empirical insights on the agreement of findings from trials using routinely collected data for measuring outcomes compared with traditional randomised clinical trials not using routinely collected data.
No protocol was published for this study. We systematically obtained a large sample of randomised clinical trials that used routinely collected data to measure study outcomes (RCD-RCTs), identified trials that explored the same clinical question but measured outcomes using traditional methods (not based on routinely collected data), and then we compared their treatment effect estimates. We assumed that studies included in the same meta-analysis in a Cochrane review would be on the same clinical question. Cochrane reviews were a main information source for this study.
RCD-RCTs were eligible if they used the data for measurement of any binary clinical outcome and were included in a Cochrane review meta-analysis together with at least one other trial not using routinely collected data for measuring the same outcome. Eligible RCD-RCTs were either identified directly by searching PubMed and subsequent citation analysis to determine if they were included in a Cochrane review (index RCD-RCTs) or indirectly by perusing the other trials that were included with the index RCD-RCTs in the same Cochrane review meta-analysis.
The other randomised clinical trials (ie, not RCD-RCTs) that were included with an eligible RCD-RCT in the same Cochrane review meta-analysis were eligible comparators.
We considered any health intervention in any population. We did not consider outcomes that were uniquely cost related, but we kept outcomes that measured uptake of interventions, such as vaccinations, drug treatments, and screening. Routinely collected data were defined as any health information not collected primarily for a specific research question.19 Trials that Cochrane reviewers described as quasi-randomised or as controlled before and after design were excluded. We considered trials reported as cluster randomised trials and crossover trials (data from first period only), but excluded them in a sensitivity analysis.
To identify the index RCD-RCTs, we searched PubMed using text words and medical subject headings focusing on terms around routine data (see appendix 1). We searched for randomised clinical trials published in English between 2000 and 2015 because of the emerging availability of electronic health records and other sources of routinely collected data in the past two decades and because more recent trials were less likely to be included in Cochrane reviews. Two reviewers independently screened titles and abstracts (KAM and AL or HE). Articles found to be potentially eligible by one reviewer were considered for further analysis. One reviewer (KAM) then identified Cochrane reviews citing any of these potentially eligible RCD-RCTs using the “cited in systematic reviews” function on PubMed. We also searched ISI Web of Science and perused the citing articles (from Web of Science Core Collection).
The last searches for RCD-RCTs in literature databases and citing Cochrane reviews were in March 2016 and September 2017 (see appendix 1 for details). We used the most recent updated version (last search January 2020) of each Cochrane review for all pertinent clinical questions, and updated our searches, classifications, and extractions using these most recent versions.
We obtained all full texts of cited randomised clinical trials and citing reviews. One reviewer (KAM) determined if the trial was an index RCD-RCT (ie, measured at least one pertinent outcome using routinely collected data and was included in a meta-analysis evaluating this outcome together with other trials). This was verified by a second reviewer (LGH).
We obtained the full texts for all other trials in the meta-analysis, and one reviewer (KAM or DG) determined if they were eligible RCD-RCTs or they were categorised as traditional trials. Whenever uncertainty occurred in these steps, a second reviewer was consulted (LGH) and the decision made was based on consensus. Eligibility of all RCD-RCTs was confirmed by a second reviewer (LGH, AA, or KAM). Any uncertainties were resolved by discussion.
Data collection process
From each Cochrane review, we selected only one clinical question addressed by one meta-analysis including the index RCD-RCT. We selected the meta-analysis with the largest number of randomised clinical trials (if multiple ones existed, we selected the one with the greatest total sample size). Some meta-analyses were reported with summary estimates for subsets of studies but without an overall summary effect. In such cases, we took the subset including the highest number of RCD-RCTs. In some cases, when the same RCD-RCTs were included in multiple subsets (eg, for different lengths of follow-up) but there was an overall summary presented, we also used only the largest subset to avoid double counting of participants or events. We preferred any primary analysis over sensitivity or subgroup analyses. Sensitivity analyses on methodological features (eg, by publication year) were always excluded. These steps were conducted by one reviewer (KAM) and verified by a second (LGH). We applied a different selection approach as secondary analysis whenever the meta-analysis selected for the main analysis was not on mortality (which was the case for 14 reviews) but there was a relevant mortality analysis included in the Cochrane review (which was the case in four of the 14 reviews); then we selected this one instead. We applied the same approach for primary outcomes, but in the three cases where the selected outcome was not a primary outcome of the Cochrane review, no eligible alternative existed.
For each included trial, one reviewer (KAM, LGH, AA, HE, or DG) extracted from the Cochrane review the treatment effects (ie, number of events and no events per study arm), trial characteristics (parallel group design, crossover design, cluster design, country, year of publication), the median age of the study population (when not reported, we used other available pertinent information (eg, means) for approximation when possible), and the Cochrane reviewer’s risk of bias assessment. A second reviewer (KAM or LGH) verified the extracted treatment effects.
For each eligible RCD-RCT, one reviewer (KAM, DG, or LGH) extracted general characteristics and the types of routinely collected data utilised. We also noted whether the routinely collected data source was the only form of outcome data source, or if a hybrid approach was reported (ie, when the routinely collected data were complemented by additional active data collection). Trials using routinely collected data within a hybrid approach were considered as RCD-RCTs but were excluded in a sensitivity analysis.
One reviewer (KAM or DG) extracted any statement on quality of the routinely collected data in the broader sense (eg, statements related to measurement errors, reliability, accuracy, or completeness) and a second reviewer (KAM or AA) verified the extractions. As a working definition, we deemed data quality to be high when the routinely collected data would be adequate to reliably measure the outcomes of interest for this clinical question.20 Two reviewers assessed this independently. We fully acknowledge that such an assessment from the outside is difficult.20 When authors provided a statement that led us to assume that the routinely collected data would adequately measure the outcome of interest, a high quality mark was given. If this was not reported, but the source was specifically designed to collect the endpoint (eg, breast cancer cases through a comprehensive national breast cancer registry), a high quality mark was still given. If a statement indicating low quality was provided (which we expected to be rare, but such statements could have been made in the limitations section of the studies) or the reviewer thought that the routinely collected data source was unlikely to specifically collect such outcome data with little missingness and little measurement error (eg, adverse events extracted from administrative databases), a low mark was given. Other cases were rated unclear. We quantified the agreement between the two reviewers (KAM versus AA or DG) using κ statistics and the total agreement.
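The interrater agreement described above can be quantified with an unweighted Cohen's κ, the chance-corrected share of matching ratings. A minimal sketch with hypothetical quality ratings (the trials and labels below are illustrative only, not the study's data):

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Unweighted Cohen's kappa: chance-corrected agreement of two raters."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Agreement expected if the two raters labelled independently
    expected = sum(c1[k] * c2.get(k, 0) for k in c1) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical data quality ratings for six trials
r1 = ["high", "high", "low", "unclear", "high", "low"]
r2 = ["high", "low", "low", "unclear", "high", "high"]
kappa = cohen_kappa(r1, r2)  # ≈ 0.45, conventionally "moderate" agreement
```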
For sensitivity analyses, we extracted the risk of bias reported for each bias domain of all individual trials. We categorised the trials as having one domain or more at high risk (if any bias domain was deemed by the Cochrane reviewers to be high risk), all domains at low risk (if all domains were deemed to be low risk), and all domains at low or unclear risk (in all other cases). We also specifically extracted the risk of bias due to the blinding status (or participant blinding when several blinding domains were presented).
Summary measures and synthesis of results
We used a two stage process to synthesise the results. Firstly, we calculated two summary odds ratios for each clinical question using random effects meta-analyses (Hartung-Knapp-Sidik-Jonkman method21): the summary odds ratios of the RCD-RCTs, and separately the summary odds ratios of all the traditional trials. In cases when only one trial was available, the summary odds ratio was actually the odds ratio of the trial. Subsequently, for each pair of summary odds ratios, we calculated their respective ratio—that is, ratio of odds ratios (summary odds ratios of the traditional trials divided by summary odds ratios of RCD-RCTs). The variance of the ratio of odds ratios was calculated as the sum of the variances of the summary odds ratios (after log transformation). We ensured that for all clinical questions odds ratios of less than 1 indicate favourable effects for the evaluated treatment. We inverted effects when necessary (ie, if a meta-analysis reported survival, we inverted the effect estimate by taking its reciprocal so that estimates <1 indicate mortality benefits). For consistency, we ensured that the second comparator was the control (that is, no intervention or usual care—in three cases when two active interventions were compared,222324 we defined the control as the older treatment; we left these cases out in a sensitivity analysis). A ratio of odds ratios of less than 1 indicated that the RCD-RCTs estimated a less favourable treatment effect for the evaluated treatment than did the traditional trials.
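The first stage can be sketched as follows. The summary odds ratios and standard errors are hypothetical inputs, and the 95% interval uses a plain normal approximation rather than the Hartung-Knapp-Sidik-Jonkman adjustment of the full analysis:

```python
import math

def ratio_of_odds_ratios(or_trad, se_log_trad, or_rcd, se_log_rcd):
    """Ratio of summary odds ratios (traditional / RCD-RCT) with 95% CI.

    The variance of the log ratio is the sum of the two log-OR variances,
    treating the two summary estimates as independent."""
    log_ror = math.log(or_trad) - math.log(or_rcd)
    se = math.sqrt(se_log_trad ** 2 + se_log_rcd ** 2)
    ci = (math.exp(log_ror - 1.96 * se), math.exp(log_ror + 1.96 * se))
    return math.exp(log_ror), ci

# Hypothetical summary estimates for one clinical question
ror, (lo, hi) = ratio_of_odds_ratios(0.70, 0.10, 0.85, 0.12)
# ror < 1: the RCD-RCTs showed the less favourable treatment effect
```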
Secondly, we combined all ratios of odds ratios across all clinical questions in a meta-analysis (random effects, Hartung-Knapp-Sidik-Jonkman) to provide an overall summary of the relation of treatment effects obtained from trials using routinely collected data versus trials not using routinely collected data.
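The second stage, pooling the log ratios across clinical questions, can be sketched with a DerSimonian-Laird random effects model (one of the study's sensitivity analyses; the main analysis uses the Hartung-Knapp-Sidik-Jonkman method, which further adjusts the confidence interval). The input ratios below are hypothetical:

```python
import math

def pool_log_rors(log_rors, ses):
    """DerSimonian-Laird random effects pooling of log ratios of odds ratios.

    Returns the pooled ratio (back-transformed) and its standard error."""
    k = len(log_rors)
    w = [1 / s ** 2 for s in ses]
    fixed = sum(wi * y for wi, y in zip(w, log_rors)) / sum(w)
    # Cochran's Q and the method-of-moments between-question variance
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rors))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_star = [1 / (s ** 2 + tau2) for s in ses]
    pooled = sum(wi * y for wi, y in zip(w_star, log_rors)) / sum(w_star)
    return math.exp(pooled), math.sqrt(1 / sum(w_star))

# Hypothetical per-question ratios of odds ratios and standard errors
rors = [0.82, 0.75, 0.90, 0.78]
ses = [0.15, 0.20, 0.12, 0.25]
summary_ror, se = pool_log_rors([math.log(r) for r in rors], ses)
```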
We conducted several sensitivity analyses: including only RCD-RCTs with low risk of bias in all domains, including only RCD-RCTs with low risk of bias related to blinding, excluding RCD-RCTs with some active data collection (hybrid approaches), excluding older RCD-RCTs (published before 2005), including only more recent RCD-RCTs (published in 2010 or later), stratified by number of participants and number of events (thirds across all RCD trials), including only RCD-RCTs when the median age of the RCD-RCT population was within 1 standard deviation of the median age of the other trials, including only clinical questions on mortality outcomes or non-mortality outcomes (subsets of main analysis), excluding clinical questions with active controls, using only clinical questions with effect estimates from RCD-RCTs and traditional randomised clinical trials that had no largely different precision (ie, ratio of summary odds ratio standard errors >0.33 and <3), excluding clinical questions with fewer than three RCD-RCTs, excluding clinical questions with more than 10 RCD-RCTs, comparing the index RCD-RCTs with all other trials in the meta-analysis (including traditional trials and misclassifying the indirectly identified RCD-RCTs) to evaluate the robustness of the classification and sampling procedure, using DerSimonian-Laird random effects meta-analyses, and using only fixed effect meta-analyses.
We conducted exploratory subgroup analyses including only RCD-RCTs using registries, electronic health records, or administrative data, and when the data quality of RCD-RCTs was assumed to be high.
Patient and public involvement
We did not involve patients or members of the public when we selected the research question, designed the study, interpreted the results, or wrote the manuscript.
Overall, 4649 publications were screened and 29 index RCD-RCTs identified (see appendices 1, 2, and 5a) from 22 Cochrane reviews. Among the corresponding trials in the selected Cochrane review analyses, 55 other RCD-RCTs were identified (see appendix 5b), along with 463 eligible traditional randomised clinical trials (see appendix 6).
The median number of participants in each of the 84 RCD-RCTs was 721 (interquartile range 275-2729), most (56/84, 67%) originating from North America, followed by Scandinavia (14/84, 17%; table 1). The trials were published between 1976 and 2017: median 2005 (interquartile range 1998-2009). The sources of routinely collected data were registries (36/84, 43%), electronic health records (30/84, 36%), and administrative databases (18/84, 21%). In 27 RCD-RCTs (32%), a hybrid approach with elements of active data collection was applied.
The quality of the data was considered high for 56 of the 84 RCD-RCTs (67%; moderate interrater agreement 77.4%; κ=0.50, weighted κ=0.48).
The 463 traditional RCTs had a median of 121 participants (interquartile range 60-359) in each trial. The trials were primarily from North America (125/463, 27%) and continental Europe (60/463, 13%) and were published between 1963 and 2016 (median 2003 (interquartile range 1997-2006); table 1 and see appendix 4).
Of the 22 clinical questions, eight (36%) were related to screening and preventive medicine, five (23%) to community medicine, five (23%) to cardiology, and four (18%) to surgery. Eleven comparisons had only one RCD-RCT, four comparisons had two RCD-RCTs, three comparisons had three RCD-RCTs, and four comparisons had four RCD-RCTs or more (table 2). Outcomes were diverse, with a large proportion related to mortality (9 of 22 in the main analysis; 41%). In 19 of 22 cases (86%) the outcomes were a primary outcome of the Cochrane review.
Agreement of treatment effects
In 19 of 22 cases (86%), treatment effect estimates from RCD-RCTs and from traditional trials were in the same direction. In 14 of 22 cases (64%), the summary point estimate of the RCD-RCT was less favourable (fig 1 and fig 2).
Overall, trials using routinely collected data for outcome ascertainment systematically showed less favourable estimates of treatment effects than traditional trials not using routinely collected data (ratio of odds ratios 0.80, 95% confidence interval 0.70 to 0.91, I2=14%) (fig 3 and table 3; see appendix 3). In four of the 22 clinical questions (individualised discharge plans on readmissions, intrauterine device for heavy menstrual bleeding, breastfeeding support for healthy women, and immunisation reminders and recalls), the 95% confidence intervals of the ratio of odds ratios excluded the null, and in all four clinical questions, trials using routinely collected data had less favourable results than traditional trials (fig 3).
The results were similar when including only any available primary outcomes of Cochrane reviews (ratio of odds ratios 0.79, 95% confidence interval 0.70 to 0.90, I2=9%) or mortality outcomes (0.92, 0.74 to 1.15, I2=12%), or studies with routinely collected data when the data quality was considered to be high (0.82, 0.72 to 0.93, I2=0%). The results were also similar when analysing electronic health records (0.81, 0.59 to 1.11, I2=28%), registries (0.86, 0.75 to 0.99, I2=20%), and administrative data sources (0.84, 0.72 to 0.99, I2=0%; table 3). All other sensitivity analyses corroborated the main findings (table 3).
In this systematic analysis of various clinical topics and outcomes, randomised clinical trials that used routinely collected data for outcome ascertainment showed less favourable treatment effects than traditional randomised clinical trials not using routinely collected data. This might be due to problems with data quality and measurement errors leading to dilution of effects by misclassified outcomes. However, the results remained similar across sensitivity analyses dealing with this possibility, including data source type and estimated data quality, or when including only mortality outcomes where misclassification is probably less likely. Thus, trials using routinely collected data for outcome ascertainment might have other features that are associated with less pronounced effect estimates.2 For example, such trials might be more pragmatic than traditional trials.251828 More natural care settings with less eagerness to artificially increase treatment adherence might result in smaller treatment effect estimates.
This interpretation agrees with empirical research indicating that procedures to standardise and increase data quality could have a smaller impact on trial effect estimates than is often assumed: a review indicated that central outcome adjudication committees used to increase data quality typically did not influence effect estimates compared with onsite assessments in the very same trial.29 Direct comparisons of treatment estimates based on separate ways of outcome ascertainment are helpful to understand better the underlying mechanisms of outcome measurements.30 In contrast with such research, we did not aim to isolate the “clean” effect of using compared with not using routinely collected data within the same trial as alternative data ascertainment methods. Rather, we aimed to empirically describe how results from trials designed to provide randomised real world evidence31 (by using real world data) agree with those from traditionally designed trials relying on their own, active data collection procedures.
Comparison with other studies
We are aware of only one other similar study that compared effects from 30 registry based trials with those from traditional trials on 12 different topics in cardiology or cancer screening.32 The reported ratios of odds ratios were 0.97 (95% confidence interval 0.92 to 1.03) for mortality and 0.95 (0.89 to 1.02) for other outcomes (reported ratios of odds ratios inverted to facilitate comparison), compatible with our findings for registry based trials.
Limitations of this study
Several limitations need to be considered. Firstly, although the outcome selected for our analysis was routinely collected in the RCD-RCTs, other outcomes within some of these RCD-RCTs were still determined traditionally, thus introducing artificial settings that deviate from routine care. Therefore, some of the RCD-RCTs might reflect the “real world” more and others less.
Secondly, we did not directly evaluate the impact of trial pragmatism on treatment effects. The applicability of research findings to real world settings can be determined by other factors, such as the representativeness of the trial population or the treatment setting, which we have not assessed. A deeper investigation of all RCD-RCTs and their comparators would be beyond the scope of this project, and a valid retrospective assessment of each trial’s pragmatism and representativeness is difficult for researchers outside of the original trial team, requiring further information such as study protocols3334 or details on the study population and target population that are typically unavailable.
Thirdly, although we individually assessed and graded data quality and expected accuracies in duplicate, assessing the quality of the sources of routinely collected data is inherently subjective and limited because of widely insufficient reporting of critical details (such as results of data validation studies). We are not aware of an established instrument that would allow the “data quality” on an outcome level to be unambiguously determined using trial reports. Thus, interpretations in this regard need to be made very cautiously.
Fourthly, although our topics were evaluated in Cochrane reviews and probably explore questions of interest for healthcare decision makers, they do not cover the full spectrum of clinical research. The statistical heterogeneity across topics was small, but issues related to data quality and trial design vary across clinical specialties. It remains uncertain how the results can be extrapolated to specific medical disciplines, and more evidence is needed to better assess the generalisability of our findings. However, our assessment covers areas of clinical research where using routinely collected data for outcome assessment is a realistic alternative, as indicated by the coexistence of trials using routinely collected data and trials using traditional outcome measurement.
Finally, some of our analyses rely on sometimes insufficiently reported details.35 Although we systematically ensured that the trials were actually measuring the analysed outcomes through routinely collected data, poor reporting of such data use in the traditional trials could have led to some misclassification, or we might have overlooked some hybrid approaches. We have no reason to believe that possible misclassifications are associated with the investigated agreement; hence, such errors would have diluted the difference between the compared study designs rather than change our overall conclusion.
Randomised clinical trials utilising any form of routinely collected data for outcome ascertainment found systematically less favourable treatment effects than randomised clinical trials utilising traditional methods. Beyond data quality problems, other differences between traditional trials and trial designs utilising routinely collected data might explain this finding. We need a better understanding of these factors, to optimise the use of such emerging designs for comparative effectiveness research and to increase the applicability of real world evidence derived from randomised trials.
What is already known on this topic
Routinely collected data are increasingly used in randomised clinical trials to measure outcomes
Data collection during usual care can reduce costs and avoid artificial research settings, which might increase pragmatism and applicability of trial results
What this study adds
Our study suggests that randomised clinical trials using routinely collected data to assess outcomes provide systematically less favourable treatment effects than trials using traditional methods
Beyond data quality issues, other differences between traditional trials and trial designs using routinely collected data might explain this finding
We thank Aviv Ladanie for contributing to the literature screening and data extraction and Julie Jacobson Vann for providing details on included trials.
Contributors: LGH and JPAI conceived the study. LGH, KAM, and HE designed the search strategy. KAM performed the literature search. KAM, LGH, HE, and DG screened the studies for eligibility. KAM, LGH, HE, DG, and AA performed the data extractions. LGH, KAM, SA, and JPAI analysed the data. KAM and LGH wrote the first draft of the manuscript. All the authors interpreted the data, critically revised the manuscript for important intellectual content, and gave final approval of the version to be published. LGH and KAM are guarantors. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.
Funding: The Basel Institute for Clinical Epidemiology and Biostatistics is supported by the Stiftung Institut für klinische Epidemiologie (KAM, LGH, HE, SA, and DG). METRICS has been supported by grants from the Laura and John Arnold Foundation (JPAI and LGH). METRIC-B has been supported by an Einstein fellowship award to JPAI from the Stiftung Charite and the Einstein Stiftung (JPAI and LGH). The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript or its submission for publication.
Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: KAM, JPAI, and LGH support the RCD for RCT initiative, which aims to explore the use of routinely collected data for clinical trials. KAM and LGH are members of the MARTA-Group, which aims to explore how to make randomised trials more affordable. Since 1 June 2020, DG has been employed by Roche Pharma (Schweiz), Basel, Switzerland. The first draft of this manuscript was submitted before his current employment and his current employer had no role in the design and conduct of the project; preparation, review, and approval of the manuscript, and decision to submit the manuscript for publication. The authors declare no other relationships or activities that could appear to have influenced the submitted work.
Ethical approval: Not required.
Data sharing: Available on request from the corresponding author and on the Open Science Framework.36
The lead author (the manuscript’s guarantor) affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any relevant discrepancies from the study as planned have been explained.
Dissemination to participants and related patient and public communities: We plan to disseminate the results through publications, conference presentations, and social media to international stakeholders and healthcare decision makers who use or generate evidence based on routinely collected data.
Provenance and peer review: Not commissioned; externally peer reviewed.
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.