Reporting bias testing framework for comparing evidence from multiple regulators, manufacturer clinical study reports, trial registries, and published trials

Each entry below states the null hypothesis, a definition of the bias, its potential effect, and a framework of questions for testing the hypothesis.

Null hypothesis: There is no under-reporting (overview hypothesis)
Definition: Under-reporting is an overall term including all types of bias in which there is an association between results and what is presented to the target audience
Potential effect: Tailoring methods and results to the target audience may be misleading. The direction of the effect could change, the statistical significance of the effect could change, or the magnitude of the effect could change from clinically worthwhile to not clinically worthwhile and vice versa
Framework to test hypothesis:
1. Is there evidence of under-reporting?
2. What types of under-reporting are apparent (list and describe them)?
3. What is the overall effect of under-reporting on the results of a meta-analysis (compare estimates of effects using (under)reported data and all data)?
4. What is the effect of under-reporting on the conclusions of a meta-analysis? Are conclusions changed when all data are reported?
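The comparison in step 3 can be sketched with a simple inverse-variance fixed-effect pooling, run once on the published subset and once on all trials. All trial names and effect estimates below are invented for illustration only; real analyses would use a dedicated meta-analysis package and extracted trial data.

```python
import math

def pool_fixed_effect(log_ors, ses):
    """Inverse-variance fixed-effect pooled log odds ratio and its standard error."""
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return pooled, se_pooled

# Hypothetical trials: (log odds ratio, standard error, published?)
trials = [
    (-0.40, 0.15, True),
    (-0.30, 0.20, True),
    (0.05, 0.25, False),   # unpublished, near-null result
    (0.10, 0.30, False),   # unpublished, near-null result
]

published = [(y, se) for y, se, pub in trials if pub]
all_data = [(y, se) for y, se, _ in trials]

for label, data in [("published only", published), ("all data", all_data)]:
    ys, ses = zip(*data)
    est, se = pool_fixed_effect(list(ys), list(ses))
    lo, hi = est - 1.96 * se, est + 1.96 * se
    # With the invented numbers above, the pooled odds ratio moves toward
    # the null once the unpublished trials are included
    print(f"{label}: OR {math.exp(est):.2f} "
          f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

Comparing the two pooled estimates (and their confidence intervals) against a threshold of clinical worthwhileness is one concrete way to answer steps 3 and 4.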
Null hypothesis: There is no difference between the analysis plan in the protocol and the final report (or the differences are listed and annotated)
Definition: Protocol violations, especially if not reported and justified, are not associated with study results
Potential effect: Post hoc analyses and changes of plan lead to manipulation of reporting and choice of what is and is not reported
Framework to test hypothesis:
1. List any discrepancies between what is prespecified in the protocol and what was actually done
2. Can these discrepancies be explained by documented changes or amendments to the protocol?
3. Were these changes made before observing the data?
4. What is the perceived effect of these changes on the results and conclusions?
Null hypothesis: There is no difference between published and unpublished conclusions of the same study
Definition: A specific bias relating to the selective reporting of data in association with the target audience
Potential effect: Results have been tailored to the intended recipient audience
Framework to test hypothesis:
1. Compare reporting of important outcomes (harms, complications) between published reports and other reports, such as those to regulatory bodies
2. Document any differences in conclusions based on separate reports of the same studies
Null hypothesis: Presentation of the same dataset is not associated with differences in spelling or with incomplete, discrepant, contradictory, or duplicate entries
Definition: Different versions of the same dataset are associated with discrepancies
Potential effect: Raises the question of whether these discrepancies are mistakes or deliberate
Framework to test hypothesis:
1. Document any differences or similarities in separate reports of important outcomes (harms, complications) based on the same studies
2. Report any discrepancies to the manufacturer and ask it to clarify and correct any errors
3. What is the effect on the evidence base of including or excluding material with similar discrepancies?
Null hypothesis: There is no evidence of publication bias
Definition: Publication status is not associated with the size and direction of results
Potential effect: Negative or positive publication bias can have a major effect on the interpretation of the data at all levels
Framework to test hypothesis:
1. Are there studies that have not been published (yes/no)?
2. How many studies have not been published (number and proportion of trials not published, and proportion of patients not published)?
3. Construct a list of all known studies indicating which are published and which are not
4. What is the impact on the evidence base of including or excluding unpublished material?
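Steps 2 and 3 amount to building a register of all known trials and tallying the unpublished fraction, by trial and by patient. A minimal sketch, with an entirely invented register:

```python
# Hypothetical register of all known trials for one drug (step 3)
trials = [
    {"id": "T1", "published": True, "patients": 480},
    {"id": "T2", "published": True, "patients": 320},
    {"id": "T3", "published": False, "patients": 650},
    {"id": "T4", "published": False, "patients": 150},
]

unpublished = [t for t in trials if not t["published"]]
total_patients = sum(t["patients"] for t in trials)
unpub_patients = sum(t["patients"] for t in unpublished)

# Step 2: number and proportion of unpublished trials and patients
print(f"Unpublished trials: {len(unpublished)}/{len(trials)} "
      f"({len(unpublished) / len(trials):.0%})")
print(f"Patients in unpublished trials: {unpub_patients}/{total_patients} "
      f"({unpub_patients / total_patients:.0%})")
```

Reporting the proportion of patients as well as of trials matters because a few large unpublished trials can hide most of the randomised evidence.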
Null hypothesis: There is no evidence of outcome emphasis bias
Definition: Overemphasis or underemphasis of outcomes is not associated with the size or direction of results
Potential effect: Can lead to wrong conclusions because of overemphasis on certain outcomes
Framework to test hypothesis:
1. Are all of the prespecified outcomes in the study protocol reported?
2. Are the outcomes reported in the same way as specified in the study protocol?
3. Are there any documented changes to outcome reporting listed in the study protocol?
4. What is the impact on the evidence base of including or excluding emphasised outcomes?
Null hypothesis: There is no evidence of relative v absolute measure bias
Definition: The choice of effect estimates is not associated with the size or direction of results
Potential effect: Can lead to wrong conclusions because of apparent underestimation or overestimation of effects (eg, use of relative instead of absolute measures of risk)
Framework to test hypothesis:
1. Are both relative and absolute measures of effect size used to report the results?
2. Is the incidence of each event reported for each treatment group?
3. What is the effect on the evidence base of including estimates of effect expressed in either relative or absolute measures?
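The contrast the framework asks for in steps 1 and 2 can be made concrete by computing both measures from the per-arm event counts. The counts below are hypothetical, chosen only to show how a modest absolute difference can be presented as a large relative reduction:

```python
# Hypothetical trial counts: events / total in each arm (step 2)
events_treat, n_treat = 8, 1000
events_ctrl, n_ctrl = 12, 1000

risk_treat = events_treat / n_treat  # 0.008
risk_ctrl = events_ctrl / n_ctrl     # 0.012

relative_risk = risk_treat / risk_ctrl       # relative measure
risk_difference = risk_ctrl - risk_treat     # absolute measure
nnt = 1 / risk_difference                    # number needed to treat

# The same data read very differently in relative and absolute terms:
# a "33% relative risk reduction" is 4 fewer events per 1000 patients
print(f"Relative risk: {relative_risk:.2f}")
print(f"Absolute risk reduction: {risk_difference * 1000:.0f} per 1000")
print(f"Number needed to treat: {nnt:.0f}")
```

Reporting only the relative measure here would overstate the apparent benefit; the framework asks that both forms, and the underlying incidences, be reported.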
Null hypothesis: There is no evidence of follow-up bias
Definition: Length of follow-up is not related to the size and direction of results
Potential effect: Can lead to wrong conclusions owing to overemphasis or underemphasis of results
Framework to test hypothesis:
1. Are reported results based on the complete follow-up of each patient?
2. Are important events (harms, complications) unreported because they occurred in the off-treatment period?
3. What is the effect on the evidence base of including or excluding material with complete follow-up?
Null hypothesis: There is no evidence of data source bias
Definition: There is no difference between the evidence base presented to regulators (for approval of an indication) and that produced by or in the possession of the manufacturer
Potential effect: Can lead to approved indications that are inconsistent with the full dataset
Framework to test hypothesis:
1. Have regulators been presented with all data from trials sponsored by the drug’s manufacturer?
2. Have all national regulatory agencies been presented with the same trial data?
3. Can differences between national regulatory agencies be explained by access to different data?