Perfect Study, Poor Evidence: Interpretation of Biases Preceding Study Design

https://doi.org/10.1053/j.seminhematol.2008.04.010

In the interpretation of research evidence, we typically examine the data accumulated within a single, isolated study. However, important biases may precede the study design: a study may be misleading, useless, or even harmful even though it appears perfectly designed, conducted, analyzed, and reported. Some of these biases pertain to the setting of the wider research agenda and include poor scientific relevance, minimal clinical utility, and improper handling of prior evidence (non-consideration of prior evidence, biased consideration of prior evidence, or consideration of biased prior evidence). Other biases reflect how the specific research questions are set: examples include straw man effects, avoidance of head-to-head comparisons, head-to-head comparisons that bypass the demonstration of effectiveness, overpowered studies, unilateral aims (focusing on benefits while neglecting harms), and the industry's approach to research as bulk advertisement (including ghost management of the literature). The concerted presence of several such biases may have a multiplicative, detrimental impact on the scientific literature. These issues should be considered carefully when interpreting research results.

Section snippets

Poor Scientific Relevance

A considerable proportion of research efforts are simply irrelevant: they are very unlikely to yield an outcome that is useful or insightful, or that could advance scientific knowledge, however such advancement is defined and even in small increments. Indeed, the motives for their conduct are not the advancement of learning and the betterment of human health. These research efforts may reflect the need to generate publications for academic promotion, other secondary gains (such as grants and…

Straw Man Effects

A new intervention under study will appear more promising and effective if it is compared against an intervention that is not effective rather than against one already accepted as effective. Worse, a new intervention may be compared against an old intervention that is harmful, and an ineffective new intervention may spuriously seem effective if it is compared against another intervention that is even less effective. This bias represents a straw man effect.21 The choice of straw man may…
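To make the comparator distortion concrete, here is a minimal simulation sketch (the response rates and sample sizes are hypothetical, invented for illustration and not taken from the article): the same new intervention looks clearly superior against a weak straw man comparator even though it is slightly worse than the accepted standard of care.

```python
import random

def average_risk_difference(p_new, p_comparator, n_per_arm=300, n_sims=2000, seed=42):
    """Average observed risk difference (new minus comparator) across
    simulated two-arm trials with binary outcomes."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_sims):
        new_successes = sum(rng.random() < p_new for _ in range(n_per_arm))
        cmp_successes = sum(rng.random() < p_comparator for _ in range(n_per_arm))
        diffs.append((new_successes - cmp_successes) / n_per_arm)
    return sum(diffs) / n_sims

# Hypothetical response rates: the new drug (40%) is actually slightly
# worse than the accepted standard (45%) but better than a weak
# "straw man" comparator, e.g., an underdosed older drug (25%).
P_NEW, P_STANDARD, P_STRAW = 0.40, 0.45, 0.25

print("vs standard of care:", round(average_risk_difference(P_NEW, P_STANDARD), 3))  # about -0.05
print("vs straw man:       ", round(average_risk_difference(P_NEW, P_STRAW), 3))     # about +0.15
```

The sign of the estimated effect flips with the choice of comparator alone, which is the essence of the straw man effect.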

Multiple Hits on the Evidence

Biases that precede the study design may often coexist or overlap, and their cumulative detrimental impact may be multiplicative. Moreover, some of these biases may be associated with efforts to subvert the quality of the evidence during or after the study. We need more empirical evidence on how often such “multiple hits” on the evidence are attempted. With increasing scrutiny of the quality of clinical research, seemingly perfect studies that are nevertheless seriously flawed may become…
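As a back-of-the-envelope illustration of what a multiplicative impact means (the bias magnitudes below are invented for the example, not empirical estimates from the article), suppose each of several coexisting biases independently inflates the apparent treatment effect by a modest factor; the joint distortion is then the product, not the sum, of the individual factors.

```python
from math import prod

# Hypothetical, independently acting inflation factors for three biases
# that precede study design (values are illustrative, not empirical):
biases = {
    "straw-man comparator": 1.25,  # +25% apparent effect
    "unilateral aims":      1.15,  # +15% (harms under-assessed)
    "selective agenda":     1.20,  # +20%
}

joint = prod(biases.values())
print(f"joint inflation: x{joint:.2f}")  # x1.73 -- not the naive +60%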

What to Do: Scrutinizing Evidence, Geometry, and Motives of Research

Given the subtlety of some of these biases, the clinical reader and scientist perusing this literature should systematically consider them and debate whether they may be operative. Overall, a “caveat lector” approach is prudent. Table 2 lists some recommendations consistent with this approach, together with sample questions that the interested scientist and clinician may wish to pose.

An informed answer for many of these questions involves a systematic, unbiased…
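Although the snippet is truncated here, one standard tool for a systematic, unbiased overview of prior evidence is a meta-analysis of all eligible studies. The sketch below is a generic fixed-effect, inverse-variance pooling with made-up study estimates, offered as an illustration rather than as a method prescribed by the article.

```python
from math import sqrt

# Hypothetical per-study log odds ratios and their standard errors.
studies = [(-0.30, 0.15), (-0.10, 0.20), (-0.25, 0.10)]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

print(f"pooled log OR = {pooled:.3f} (SE {pooled_se:.3f})")
print(f"95% CI: [{pooled - 1.96*pooled_se:.3f}, {pooled + 1.96*pooled_se:.3f}]")
```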

References (68)

• T. Berghmans et al. Role of granulocyte and granulocyte-macrophage colony-stimulating factors in the treatment of small-cell lung cancer: A systematic review of the literature with methodological assessment and meta-analysis. Lung Cancer (2002)
• D.A. Grimes et al. The Good Clinical Practice guideline: A bronze standard for clinical research. Lancet (2005)
• J.P. Ioannidis et al. Selective discussion and transparency in microarray research findings for cancer outcomes. Eur J Cancer (2007)
• J.P. Ioannidis. Limitations are not properly acknowledged in the scientific literature. J Clin Epidemiol (2007)
• L.L. Kjaergard et al. Citation bias of hepato-biliary randomized clinical trials. J Clin Epidemiol (2002)
• T. Poynard et al. Truth survival in clinical research: An evidence-based requiem? Ann Intern Med (2002)
• A. Frankel et al. Rationing and Rationality in the National Health Service (1993)
• J.A.M. Gray. Evidence-Based Healthcare (1997)
• P. Isaakidis et al. Relation between burden of disease and randomised evidence in sub-Saharan Africa: Survey of research. BMJ (2002)
• G.H. Swingler et al. International collaboration, funding and association with burden of disease in randomized controlled trials in Africa. Bull World Health Organ (2005)
• K. Claxton et al. An economic approach to clinical trial design and research priority-setting. Health Econ (1996)
• T.C. Chalmers et al. Changes in clinical trials mandated by the advent of meta-analysis. Stat Med (1996)
• A.J. Sutton et al. Evidence-based sample size calculations based upon updated meta-analysis. Stat Med (2007)
• J.A. Berlin et al. The role of meta-analysis in the regulatory process for foods, drugs, and devices. JAMA (1999)
• D.G. Altman et al. The revised CONSORT statement for reporting randomized trials: Explanation and elaboration. Ann Intern Med (2001)
• C.S. Weaver et al. Sample size calculations in acute stroke trials: A systematic review of their reporting, characteristics, and relationship with outcome. Stroke (2004)
• T.J. Kaptchuk. Effect of interpretive bias on research evidence. BMJ (2003)
• K.I. Resch et al. A randomized controlled study of reviewer bias against an unconventional therapy. J R Soc Med (2000)
• A. Tatsioni et al. Persistence of contradicted claims in the literature. JAMA (2007)
• J. Peppercorn et al. Association between pharmaceutical involvement and outcomes in breast cancer clinical trials. Cancer (2007)
• S. Heres et al. Why olanzapine beats risperidone, risperidone beats quetiapine, and quetiapine beats olanzapine: An exploratory analysis of head-to-head comparison studies of second-generation antipsychotics. Am J Psychiatry (2006)
• L. Bero et al. Factors associated with findings of published trials of drug-drug comparisons: Why some statins appear more efficacious than others. PLoS Med (2007)
• Y.F. Chen et al. A systematic review of the effectiveness of adalimumab, etanercept and infliximab for the treatment of rheumatoid arthritis in adults and an economic evaluation of their cost-effectiveness. Health Technol Assess (2006)
• J. Lexchin et al. Pharmaceutical industry sponsorship and research outcome and quality: Systematic review. BMJ (2003)