Reporting of observational studies
BMJ 2007;335 doi: https://doi.org/10.1136/bmj.39351.581366.BE (Published 18 October 2007) Cite this as: BMJ 2007;335:783
- Peter M Rothwell, professor of clinical neurology,
- Meena Bhatia, research fellow
In this week's BMJ, von Elm and colleagues report the STROBE (strengthening the reporting of observational studies in epidemiology) statement, which recommends what should be included in an accurate and complete report of an analytical observational study.1
Observational epidemiology has made an immense contribution to our understanding of the causes and treatment of disease. Numerous causal associations between risk factors and disease have been identified (see box in version on bmj.com). Most of these observations have led to substantial improvements in public health by causing changes in policy or by leading to the development of effective treatments.
A few examples of important causal associations between risk factors and disease that have been identified by clinical epidemiological studies
Smoking and cancer (w1)
Radiation exposure and cancer (w2)
Lipids and coronary disease (w3)
Blood pressure and stroke (w4)
Sleeping position and sudden infant death (w5)
Folate and risk of neural tube defects (w6)
Hormone replacement therapy and breast cancer (w7)
Male circumcision and HIV infection (w8)
Aspirin use and colorectal cancer (w9)
Observational studies are also essential for effective clinical practice. Cohort studies allow us to improve the reliability of diagnosis; to understand prognosis; to develop and validate risk scores to target treatment appropriately; to monitor the safety of treatments in routine practice; to identify treatment effects (adverse or beneficial) that are not reliably detected in trials (perhaps because they are too rare, have too long a latency, or are confined to people excluded from trials); and to estimate the effects of interventions in circumstances in which randomised trials are not feasible.
To make the most of the enormous potential of observational epidemiology to transform clinical practice and improve public health, studies must be designed and reported as rigorously as possible. However, as with other areas of research, including laboratory sciences2 3 and randomised controlled clinical trials,4 the design and reporting of epidemiological studies can be poor, with consequences for the reliability of results.5 6
Quality control is unlikely to improve in the near future, given the ever increasing number of medical journals, and the consequently reduced influence of peer review on the likelihood that poor quality research will be published. The STROBE guidelines on the reporting of epidemiological studies are therefore welcome.1 The summary paper published in this week's journal will be backed up by a more detailed document, which will explain the background and justification for each guideline. Such guidelines inevitably have limitations, and there is always a risk that poorly designed studies will be made more difficult to spot by superficial improvements in the way they are reported. However, experience with similar guidelines for reporting randomised trials and systematic reviews has generally been positive.
Are there any matters that are not covered by the STROBE guidelines or that deserve particular emphasis? Firstly, the definition and prespecification of outcomes is crucial, particularly in cohort studies, where composite outcomes are often used to increase statistical power. For example, outcomes such as “coronary events” and “cardiovascular events” are often used in studies of potential new risk factors for cardiovascular disease. However, these composites have no widely accepted standard definitions. In our systematic review of published studies of seven new vascular risk factors,7 of 266 eligible studies (167 case-control studies and 99 cohort studies), 56 (21%) reported a risk association based on a composite outcome. The 23 studies reporting composites of different coronary events used 11 different terms and 21 different composites. The 33 studies reporting composites of cardiac and extracardiac events (usually termed cardiovascular events) used 25 different composites, and seven studies gave no information on what events were included in their composite outcome. Only one composite was used by two different studies, and these had the same authors. Such variation between studies undermines the potential to compare studies and perform meta-analysis. It also raises the possibility of post hoc choices of composites that are dependent on data—by far the most effective way to increase the “statistical power” of a study.
Secondly, the importance of reporting data on the precision of measurement of the exposure(s) under study also deserves particular emphasis, whether the exposure is a physiological parameter or a behavioural risk factor. For example, in a recent systematic review of case-control studies of aspirin use and the risk of colorectal cancer, only studies that collected and reported detailed exposure data stratified by dose, frequency, and duration of aspirin use identified the same strong protective effect of aspirin that was found by long term follow-up of randomised trials.8 Interestingly, smaller studies tended to have the most discriminating measures of exposure, resulting in a highly asymmetrical funnel plot, which could be misinterpreted as evidence of publication bias. The potential advantages of smaller, more rigorous epidemiological studies over larger, simpler studies have been outlined previously.5
Finally, the design of studies and the interpretation of results must have expert clinical input. Just as clinical studies can suffer from a lack of statistical and epidemiological expertise, epidemiological studies can suffer from a lack of clinical expertise. A statement about the extent of any input from people with relevant clinical expertise might be an additional future STROBE recommendation. Overall, however, the STROBE guidelines are an important and timely initiative, which researchers and journals should support and put into practice.
Web references w1-w9 are on bmj.com
Competing interests: None declared.
Provenance and peer review: Commissioned; not externally peer reviewed.