Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. BMJ 2007; 335 doi: https://doi.org/10.1136/bmj.39335.541782.AD (Published 18 October 2007). Cite this as: BMJ 2007;335:806
We endorse the publication of the STROBE guidelines for reporting
observational studies (1) and agree that RCTs may, under certain
circumstances, lack feasibility and external validity (2). The
unquestioning endorsement of “pure” experimental methods is challenging
for evaluation protocols of complex multiprofessional interventions, where
blinding is not feasible and the threat of resentful demoralisation is
very real (3;4). Randomisation of patients in participative intervention
trials may be self-defeating where effectiveness depends on participation,
which in turn depends on subjects' beliefs and preferences (5).
A common accusation levelled at observational studies is that they
overestimate effects. However, reviews of 19 therapies comparing the
findings of RCTs with those of observational studies found that only two
treatments showed differences according to type of study (6;7).
We support well designed protocols that select methods according to
criteria of feasibility, acceptability and robustness, which may well lead
to an RCT design but may also point to prospective quasi-experimental
designs. In addition to the challenges to RCT orthodoxy posed by complex
interventions, population-specific challenges may also dictate methods.
Our illustrative example is that of palliative care, i.e. care for
patients and families facing life-threatening, incurable and usually
advanced disease, a field that has a history of failed trials (8).
Reluctance to fund non-RCT evaluations is commonplace, despite the very
real danger of a dearth of evidence generation. Ethical committees have
concerns regarding withholding of potentially effective treatments,
waitlist controls are not possible among dying patients, and available
populations who are able to consent to enter RCTs may be relatively
small, leading to underpowered studies (9;10).
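The underpowering problem can be made concrete with a standard sample-size calculation. A minimal stdlib-only sketch follows; the effect size, alpha, and power thresholds are illustrative conventions, not figures from this letter:

```python
import math

def z_quantile(p):
    """Inverse standard normal CDF via bisection on math.erf (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for comparing two means:
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2 (normal approximation)."""
    z_a = z_quantile(1.0 - alpha / 2.0)
    z_b = z_quantile(power)
    return math.ceil(2.0 * ((z_a + z_b) / effect_size) ** 2)

# Even a moderate standardised effect (Cohen's d = 0.5) needs ~63 patients
# per arm at 80% power; with the small eligible populations and high
# attrition typical of palliative care, such accrual is often unrealistic.
print(n_per_arm(0.5))
```

With larger detectable effects the requirement shrinks (d = 0.8 needs ~25 per arm), but effects of that size are rarely plausible for complex supportive interventions.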
Criticisms of observational studies have tended to focus on
biomedical interventions (11) and do not take account of the particular
methodological considerations we face. A common challenge to results
obtained from observational studies is that they cannot prove causality,
and that confounders may not be recognised and adjusted for (12). While
confounding is a potential weakness of observational studies (13), prior
systematic reviewing can assist in identification of potentially
confounding variables, enabling adjustment for baseline non-equivalence in
subsequent analysis (10;14). Although caution should be exercised in using
simple controlling for (known) confounding variables in outcome analysis
(15), novel statistical methods such as controlling for propensity scores
(ie using a score of the individual’s propensity to have taken up an
intervention within a non-randomised controlled study) may allow data to
be analysed as if experimental conditions had been applied (16).
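The propensity-score idea can be sketched end to end on simulated data. The sketch below is an assumption-laden illustration (synthetic cohort, hand-rolled logistic regression, inverse-probability weighting as one of several ways to use the score), not the method of any study cited here:

```python
import math
import random

random.seed(0)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Hypothetical synthetic cohort: uptake of a non-randomised intervention
# depends on a baseline covariate x that also drives the outcome -- the
# confounding pattern a propensity score is meant to address.
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
t = [1 if random.random() < sigmoid(1.5 * xi) else 0 for xi in x]
# Simulated true treatment effect is +1.0; x contributes its own effect.
y = [1.0 * ti + 2.0 * xi + random.gauss(0, 1) for ti, xi in zip(t, x)]

# Naive comparison of group means is confounded (treated have higher x).
naive = (sum(yi for yi, ti in zip(y, t) if ti) / sum(t)
         - sum(yi for yi, ti in zip(y, t) if not ti) / (n - sum(t)))

# Step 1: estimate the propensity score P(treated | x) by logistic
# regression (plain gradient ascent; a real analysis would use a package).
b0 = b1 = 0.0
for _ in range(500):
    g0 = g1 = 0.0
    for xi, ti in zip(x, t):
        err = ti - sigmoid(b0 + b1 * xi)
        g0 += err
        g1 += err * xi
    b0 += g0 / n
    b1 += g1 / n

# Step 2: inverse-probability weighting by the estimated score gives an
# approximately unconfounded estimate of the treatment effect.
nt = dt = nc = dc = 0.0
for xi, ti, yi in zip(x, t, y):
    e = sigmoid(b0 + b1 * xi)
    if ti:
        nt += yi / e
        dt += 1.0 / e
    else:
        nc += yi / (1 - e)
        dc += 1.0 / (1 - e)
ipw = nt / dt - nc / dc
print(f"naive: {naive:.2f}  propensity-weighted: {ipw:.2f}  (truth: 1.00)")
```

The weighted estimate lands close to the simulated truth while the naive contrast is badly biased; matching or stratifying on the same score are common alternatives to weighting.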
A well designed and delivered prospective observational study
protocol enables the generation of outcome evidence where an RCT may fail,
and offers the same internal validity. Blind endorsement of hierarchies
risks a lack of research funding and activity among populations that,
while deserving evidence-based care, have few opportunities to have
evidence generated.
(1) von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC,
Vandenbroucke JP et al. Strengthening the Reporting of Observational
Studies in Epidemiology (STROBE) statement: guidelines for reporting
observational studies. Br Med J 2007; 335(7624):806-808.
(2) Rothwell PM, Bhatia M. Reporting of observational studies. Br
Med J 2007; 335(7624):783-784.
(3) Behi R, Nolan M. Causality and control: threats to internal
validity. Br J Nurs 1996; 5(6):374-377.
(4) Berglund G, Bolund C, Gustafsson UL, Sjoden PO. Is the wish to
participate in a cancer rehabilitation program an indicator of the need?
Comparisons of participants and non-participants in a randomized study.
Psychooncology 1997; 6(1):35-46.
(5) Black N. Why we need observational studies to evaluate the
effectiveness of health care. Br Med J 1996; 312(7040):1215-1218.
(6) Benson K, Hartz AJ. A comparison of observational studies and
randomized, controlled trials. N Engl J Med 2000; 342(25):1878-1886.
(7) Concato J, Shah N, Horwitz RI. Randomized, controlled trials,
observational studies, and the hierarchy of research designs. N Engl J Med
2000; 342(25):1887-1892.
(8) Rinck GC, van den Bos GAM, Kleijnen J, de Haes HJCJM, Schade
E, Veenhof CHN. Methodologic issues in effectiveness research on
palliative cancer care: a systematic review. J Clin Oncol 1997; 15(4):1697-1707.
(9) Grande GE, Todd CJ. Why are trials in palliative care so
difficult? Palliat Med 2000; 14:69-74.
(10) Harding R, Higginson IJ. What is the best way to help
caregivers in cancer and palliative care? A systematic literature review
of interventions and their effectiveness. Palliat Med 2002; 17(1):63-71.
(11) Laupacis A, Mamdani M. Observational studies of treatment
effectiveness: some cautions. Ann Intern Med 2004; 140(11):923-924.
(12) Brennan P, Croft P. Interpreting the results of observational
research: chance is not such a fine thing. Br Med J 1994; 309(6956):727-730.
(13) Roberts C, Torgerson D. Randomisation methods in controlled
trials. Br Med J 1998; 317:1301.
(14) Harding R, Higginson IJ, Leam C, Donaldson N, Pearce A, George
R et al. An evaluation of a short-term group intervention for informal
carers of patients attending a home palliative care service. J Pain
Symptom Manage 2004; 27(5):396-408.
(15) Concato J, Horwitz RI. Beyond randomised versus observational
studies. Lancet 2004; 363:1660-1661.
(16) Seeger JD, Kurth T, Walker AM. Use of propensity score
technique to account for exposure-related covariates: an example and
lesson. Med Care 2007; 45(10(Suppl 2)):S143-149.
Competing interests: No competing interests