When can odds ratios mislead?
BMJ 1998; 316 doi: https://doi.org/10.1136/bmj.316.7136.989 (Published 28 March 1998) Cite this as: BMJ 1998;316:989
Dear Sir,
In a recent article, Davies et al. (1) commented on a potential problem when interpreting odds ratios (OR) as relative risks (RR) in epidemiological studies. However, their vague treatment of which effect measures suit which epidemiological study designs may itself lead to misuse and false interpretation of the OR.
Davies et al. (1) state that the odds ratio is a common measure in case-control studies, cohort studies, or clinical trials. Unfortunately, this first sentence of their article is not correct. Across study designs, the OR should be used as a measure of effect size only when the RR cannot be estimated directly. In cohort studies as well as in clinical trials, the RR (the cumulative incidence ratio or the incidence density ratio) can be estimated directly, so there is no need to use the OR to represent the effect size. In case-control studies, by contrast, incidence data are usually not available; instead, the ratio of the odds of exposure among cases to the odds of exposure among non-cases is calculated. In theory, all case-control studies can be viewed as nested case-control studies, in which both cases and controls are drawn from a well defined source population. If the controls are selected by incidence density sampling, then the OR derived from the case-control study is, apart from random error, the same as the RR in the source population; no rare disease assumption is needed (2). The rare disease assumption is needed only when cumulative sampling is used in case-control studies. Usually this assumption does not present a problem, because case-control designs are typically preferred when the outcome of interest is rare (say, less than 5%).
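How closely the OR tracks the RR under cumulative sampling can be checked numerically. The sketch below uses invented 2×2 counts (all figures hypothetical, chosen only for illustration):

```python
# Rows of the 2x2 table: exposed / unexposed; columns: cases / non-cases.
# a, b = exposed cases, exposed non-cases; c, d = unexposed cases, non-cases.

def risk_ratio(a, b, c, d):
    """RR: risk among exposed divided by risk among unexposed."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    """OR: cross-product ratio of the 2x2 table."""
    return (a * d) / (b * c)

# Rare outcome (about 1%): the OR is close to the RR.
rr_rare = risk_ratio(10, 990, 5, 995)       # 2.0
or_rare = odds_ratio(10, 990, 5, 995)       # ~2.01

# Common outcome (25-50%): the OR overstates the RR.
rr_common = risk_ratio(500, 500, 250, 750)  # 2.0
or_common = odds_ratio(500, 500, 250, 750)  # 3.0
```

With a rare outcome the two measures are practically interchangeable; with a common outcome the same true RR of 2 appears as an OR of 3.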
The situation, however, is different in cross-sectional studies (usually applied to investigate more common outcomes), where the prevalence odds ratio (POR) is used as an estimate of the prevalence ratio (PR). Since only prevalent cases are drawn in cross-sectional studies, there is no direct way of estimating the RR. In the general population, the prevalence odds is equal to the product of the incidence and the disease duration (3). If we assume that the exposure of interest has no influence on disease duration, then the POR is, theoretically, equal to the RR. However, the assumption of equal disease duration among the exposed and unexposed populations is often questionable. Therefore, some authors propose the use of the PR as a conservative estimator of the RR (4).
1 Davies HTO, Crombie IK, Tavakoli M. When can odds ratio mislead? BMJ 1998;316:989-91.
2 Greenland S, Thomas DC. On the need for the rare disease assumption in case-control studies. Am J Epidemiol 1982;116:547-53.
3 Rothman KJ, Greenland S. Modern Epidemiology. 2nd ed. Philadelphia: Lippincott-Raven, 1998.
4 Thompson ML, Myers JE, Kriebel D. Prevalence odds ratio or prevalence ratio in the analysis of cross sectional data: what is to be done? Occup Environ Med 1998;55:272-77.
Competing interests: No competing interests
The trouble with odds ratios
For the doctor or patient, the odds ratio is a difficult concept to comprehend. Surprisingly, its popularity continues. Not surprisingly, it is used wrongly in many reports, as authors mistakenly identify it as a risk ratio (1, 2). Or is it just to deceive?
By the nature of its calculation, the odds ratio tends to magnify the apparent magnitude of effect. This may strike some as an easy way of getting results published. I suspect, and hope, that the majority of people genuinely want to publish honest results, and that in their effort to discuss those results in easily understood language they cause these errors and misidentify the odds ratio as a risk ratio. Why, then, is it being used so much?
The odds ratio is useful in case-control studies, where a risk ratio cannot be calculated directly. It is also useful in meta-analysis and in cohort studies or trials that use regression methods for analysis. In cohort studies and trials, however, it is possible to calculate a risk ratio and other risk-related "doctor and patient friendly" measures, such as "absolute risk", "risk reduction", and "number needed to treat/harm".
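These patient-friendly measures are simple to compute from trial counts. The sketch below uses invented figures (15/100 events on treatment, 30/100 on control), chosen only for illustration:

```python
# Hypothetical trial counts, invented for illustration.
treated_events, treated_n = 15, 100
control_events, control_n = 30, 100

risk_t = treated_events / treated_n   # absolute risk on treatment: 0.15
risk_c = control_events / control_n   # absolute risk on control:   0.30

risk_ratio = risk_t / risk_c          # 0.5: treatment halves the risk
abs_risk_reduction = risk_c - risk_t  # 0.15
nnt = 1 / abs_risk_reduction          # ~6.7: treat 7 patients to prevent one event

# The odds ratio for the same data (~0.41) overstates the apparent
# benefit relative to the risk ratio of 0.5.
odds_ratio = (treated_events * (control_n - control_events)) / (
    (treated_n - treated_events) * control_events)
```

Each quantity here maps onto a sentence a doctor can say to a patient, which is precisely the point of the letter.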
The purpose of any publication is to communicate with the intended audience, and in the case of the medical literature the target audience is doctors and patients. Doctors and patients understand and use "risk" every day. They do not talk about the "odds" of a complication happening. Nor do they mention the "risk ratio", but they can at least discuss it in ordinary language such as "twice" or "two-fold".
Even in gambling where the term "odds" is commonly used, "odds ratio"
is not generally used. Why use something that we do not understand when
there is an alternative?
Quite often one finds terms such as "twice" or "2-fold" when the author is in fact referring to odds ratios. In their study of the published literature in two major journals, Holcomb et al (1) found that 26% of the articles that used an odds ratio interpreted it as a risk ratio. This is incorrect; it is misleading and should not be done.
Although in certain circumstances the odds ratio approximates the risk ratio, this should not be encouraged. Davies et al (3) seem to suggest that such errors are not serious. Statistics is about quantifying uncertainty, and many assumptions, reasonable or otherwise, are already involved. Why add to the problem by doing the wrong calculation and then using it as an approximation of the one you really want, when you can do the one calculation that has meaning?
In these days of evidence based medicine, with clinical trials awarded the highest honours in the “level of evidence” grading system, it is vital that reports communicate in a language that is understood and, most of all, do not mislead.
1) Holcomb WL Jr, Chaiworapongsa T, Luke DA, Burgdorf KD. An odd measure of risk: use and misuse of the odds ratio. Obstet Gynecol 2001;98:685-8.
2) Sinclair JC, Bracken MB. Clinically useful measures of effect in binary analyses of randomized trials. J Clin Epidemiol 1994;47:881-9.
3) Davies HTO, Crombie IK, Tavakoli M. When can odds ratios mislead? BMJ 1998;316:989-91.
Competing interests: No competing interests