Perfect Study, Poor Evidence: Interpretation of Biases Preceding Study Design
Section snippets
Poor Scientific Relevance
A considerable proportion of research efforts are simply irrelevant: they are very unlikely to yield an outcome that is useful or insightful, or that could advance scientific knowledge, however advancement is defined and even in small increments. Indeed, the motives for their conduct are not the advancement of learning and the betterment of human health. These research efforts may reflect the need to generate publications for academic promotion, or other secondary gains (such as grants and …
Straw Man Effects
A new intervention under study will appear more promising and effective if it is compared with an ineffective intervention rather than with one already accepted as effective. Worse, a new intervention may be compared with an old intervention that is harmful; an ineffective new intervention may spuriously seem effective if it is compared against another intervention that is even more ineffective. This bias represents a straw man effect.21 The choice of straw man may …
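The straw man effect above can be made concrete with a toy simulation. The response rates below (45% for a mediocre new drug, 55% for an accepted standard, 30% for a weak "straw man" comparator) are hypothetical illustration values, not figures from the article: the same drug looks harmful against the standard and beneficial against the straw man.

```python
import random

random.seed(1)

def trial(p_new, p_comparator, n=500):
    """Simulate a two-arm trial with binary outcomes; return the
    observed risk difference (new arm minus comparator arm)."""
    new = sum(random.random() < p_new for _ in range(n)) / n
    comp = sum(random.random() < p_comparator for _ in range(n)) / n
    return new - comp

# Hypothetical response rates (illustrative only).
vs_standard = trial(0.45, 0.55)   # compared against an accepted standard
vs_straw_man = trial(0.45, 0.30)  # compared against a weak comparator

print(f"vs standard:  {vs_standard:+.2f}")   # the new drug looks inferior
print(f"vs straw man: {vs_straw_man:+.2f}")  # the same drug looks beneficial
```

The point of the sketch is that nothing about the new intervention changed between the two runs; only the comparator did.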
Multiple Hits on the Evidence
Biases that precede the study design may often coexist or overlap, and their cumulative detrimental impact may be multiplicative. Moreover, some of these biases may be associated with efforts to subvert the quality of the evidence during or after the study. We need more empirical evidence on how often such “multiple hits” on the evidence are attempted. With increasing scrutiny of the quality of clinical research, seemingly perfect studies that are nevertheless seriously flawed may become …
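A back-of-the-envelope sketch of why coexisting biases can be multiplicative rather than additive: if each bias inflates the apparent effect by a modest factor, the factors compound. The inflation factors below are invented for illustration, not estimates from the article.

```python
# Hypothetical inflation factors for three co-occurring biases
# (illustrative numbers only).
biases = {
    "straw-man comparator": 1.30,
    "selective outcome reporting": 1.20,
    "sponsorship spin": 1.15,
}

combined = 1.0
for name, factor in biases.items():
    combined *= factor  # multiplicative "multiple hits"

true_risk_ratio = 1.10                   # a genuinely small benefit
apparent = true_risk_ratio * combined    # what the literature would show

print(f"combined inflation: {combined:.2f}x")  # ~1.79x
print(f"apparent effect:    {apparent:.2f}")   # ~1.97, nearly double the truth
```

Individually modest distortions (15–30% each) are enough to turn a marginal benefit into an apparently large one.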
What to Do: Scrutinizing Evidence, Geometry, and Motives of Research
Given the subtlety of some of these biases, clinical readers and scientists perusing this literature are advised to consider them systematically and to debate whether they may be operative. Overall, a “caveat lector” approach is prudent. Table 2 shows some recommendations inherent to this approach, with a number of sample questions that the interested scientist and clinician may wish to pose.
An informed answer to many of these questions involves a systematic, unbiased …
References (68)
- From “publish or perish” to “patent and prosper”. J Biol Chem (2006)
- Why are so few randomized trials useful, and what can we do about it? J Clin Epidemiol (2006)
- The quality of medical evidence in hematology-oncology. Am J Med (1999)
- Using economic analysis to determine the resource consequences of choices made in planning clinical trials. J Chronic Dis (1985)
- Epidemiology and reporting of randomised trials published in PubMed journals. Lancet (2005)
- The influence of prior beliefs on scientific judgments of evidence quality. Organ Behav Hum Decision Processes (1993)
- Indirect comparisons: the mesh and mess of clinical trials. Lancet (2006)
- Methodologic discussions for using and interpreting composite endpoints are limited, but still identify major concerns. J Clin Epidemiol (2007)
- Risk of cardiovascular events and rofecoxib: cumulative meta-analysis. Lancet (2004)
- Availability of large-scale evidence on specific harms from systematic reviews of randomized trials. Am J Med (2004)
- Role of granulocyte and granulocyte-macrophage colony-stimulating factors in the treatment of small-cell lung cancer: a systematic review of the literature with methodological assessment and meta-analysis. Lung Cancer
- The Good Clinical Practice guideline: a bronze standard for clinical research. Lancet
- Selective discussion and transparency in microarray research findings for cancer outcomes. Eur J Cancer
- Limitations are not properly acknowledged in the scientific literature. J Clin Epidemiol
- Citation bias of hepato-biliary randomized clinical trials. J Clin Epidemiol
- Truth survival in clinical research: an evidence-based requiem? Ann Intern Med
- Rationing and Rationality in the National Health Service. Evidence-Based Healthcare
- Relation between burden of disease and randomised evidence in sub-Saharan Africa: survey of research. BMJ
- International collaboration, funding and association with burden of disease in randomized controlled trials in Africa. Bull World Health Organ
- An economic approach to clinical trial design and research priority-setting. Health Econ
- Changes in clinical trials mandated by the advent of meta-analysis. Stat Med
- Evidence-based sample size calculations based upon updated meta-analysis. Stat Med
- The role of meta-analysis in the regulatory process for foods, drugs, and devices. JAMA
- The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med
- Sample size calculations in acute stroke trials: a systematic review of their reporting, characteristics, and relationship with outcome. Stroke
- Effect of interpretive bias on research evidence. BMJ
- A randomized controlled study of reviewer bias against an unconventional therapy. J R Soc Med
- Persistence of contradicted claims in the literature. JAMA
- Association between pharmaceutical involvement and outcomes in breast cancer clinical trials. Cancer
- Why olanzapine beats risperidone, risperidone beats quetiapine, and quetiapine beats olanzapine: an exploratory analysis of head-to-head comparison studies of second-generation antipsychotics. Am J Psychiatry
- Factors associated with findings of published trials of drug-drug comparisons: why some statins appear more efficacious than others. PLoS Med
- A systematic review of the effectiveness of adalimumab, etanercept and infliximab for the treatment of rheumatoid arthritis in adults and an economic evaluation of their cost-effectiveness. Health Technol Assess
- Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ
Cited by (40)
- A users' guide to understanding therapeutic substitutions. Journal of Clinical Epidemiology (2014). Citation excerpt: “More recently, a method called the multiple treatment comparison (MTC) meta-analysis (also called network meta-analysis) has become popular as it allows the comparison of multiple interventions, including head-to-head evaluations at the same time as indirect comparisons, as long as all interventions make up a connected network of comparisons (see Fig. 1A). This is particularly relevant as fields of medicine that are rapidly evaluating new interventions may avoid head-to-head trials, and newer agents may all have a similar comparator [35]. There are several important considerations necessary to determine whether an MTC meta-analysis is valid, and we have described these previously [36,37].”
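The excerpt above rests on indirect comparisons anchored through a common comparator. A minimal sketch of the classic Bucher adjusted indirect comparison follows, using hypothetical log odds ratios (not data from any of the cited trials): if B and C were each trialled against the same comparator A, the B-versus-C effect can be estimated by subtracting effects and summing variances.

```python
import math

# Hypothetical trial results on the log-odds-ratio scale (illustrative only).
# B vs A and C vs A share the common comparator A, so B vs C can be
# estimated indirectly even without a head-to-head trial.
d_BA, se_BA = -0.40, 0.15   # log OR and standard error, B vs A
d_CA, se_CA = -0.10, 0.20   # log OR and standard error, C vs A

# Bucher adjusted indirect comparison: difference of effects,
# variance equal to the sum of the two variances.
d_BC = d_BA - d_CA
se_BC = math.sqrt(se_BA**2 + se_CA**2)

lo = d_BC - 1.96 * se_BC
hi = d_BC + 1.96 * se_BC
print(f"indirect OR (B vs C): {math.exp(d_BC):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

Note how the indirect estimate is less precise than either direct comparison (its variance is the sum of both), which is one reason the validity checks mentioned in the excerpt matter.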
- Increasing value and reducing waste in research design, conduct, and analysis. The Lancet (2014). Citation excerpt: “Typically, every study is designed, done, and discussed in isolation41 (see Chalmers and colleagues42 in this Series). Moreover, most research designs do not take account of similar studies being done at the same time.43 The total sample size of clinical trials that are in progress might exceed the total sample size of all completed trials.44”
- Interpreting and Implementing Evidence for Quality Research. Quality Improvement and Patient Safety in Orthopaedic Surgery (2022)
- The research challenges we face: Identifying and minimising research waste. Australian Occupational Therapy Journal (2021)