Quantifying and monitoring overdiagnosis in cancer screening: a systematic review of methods
BMJ 2015; 350 doi: https://doi.org/10.1136/bmj.g7773 (Published 07 January 2015) Cite this as: BMJ 2015;350:g7773
All rapid responses
The aim of Carter et al (1), ‘to determine the optimal method for quantifying and monitoring overdiagnosis in cancer screening over time’, hits the spot. Their results section, however, has a different focus, and the objective is therefore not yet fully met.
In the results section, Carter et al (1) focus on the evaluation of published studies. The judgement of these individual studies leads to an overall judgement of the four methodological groups. One would, however, expect an evaluation of methods with a view to future studies, because the objective is ‘to determine the optimal method to quantify and monitor overdiagnosis’. In other words, the article needs to evaluate existing study designs rather than published studies in order to identify the optimal method for quantifying overdiagnosis in future studies. The discussion of individual study designs by Carter et al (1) is unfortunately limited.
Take, for example, the discussion of cohort and ecological studies. All cohort and ecological studies are grouped together, although they comprise a range of different study designs. Cohort and ecological studies already differ in design, and ecological studies can be further subdivided according to the method used to estimate cancer incidence in the absence of screening, including extrapolation of pre-screening trends (2) and use of control regions (3). These are different designs with different types of bias, yet their strengths and limitations are not discussed separately. As a consequence, we still do not know which design is optimal for quantifying and monitoring overdiagnosis over time.
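For readers unfamiliar with the trend-extrapolation approach, the minimal sketch below (not part of the original correspondence; all figures and variable names are hypothetical) shows how such ecological studies typically proceed: a pre-screening incidence trend is fitted and extrapolated into the screening period, and the excess of observed over expected cases is expressed as a proportion of observed cases. Real analyses must additionally account for lead time, age structure, and changes in underlying risk factors.

```python
# Minimal sketch of the trend-extrapolation approach; all data are hypothetical.
import numpy as np

# Hypothetical age-standardised incidence per 100,000 women, 1980-1989 (pre-screening era)
pre_years = np.arange(1980, 1990)
pre_rates = np.array([180, 183, 185, 188, 190, 193, 195, 198, 200, 203])

# Fit a linear pre-screening trend and extrapolate it into the screening era
slope, intercept = np.polyfit(pre_years, pre_rates, deg=1)
screen_years = np.arange(1995, 2005)
expected = intercept + slope * screen_years  # counterfactual incidence without screening

# Hypothetical observed incidence during organised screening
observed = np.array([260, 262, 258, 255, 252, 250, 249, 248, 247, 246])

# Excess incidence attributed to overdiagnosis (lead time ignored in this sketch)
excess = observed - expected
overdiagnosis_pct = 100 * excess.sum() / observed.sum()
print(f"Estimated overdiagnosis: {overdiagnosis_pct:.1f}% of observed cases")
```

A control-region design would instead replace the extrapolated `expected` series with observed incidence in a comparable unscreened region. The two approaches are vulnerable to different biases (unmodelled secular trends versus regional differences), which is precisely why their strengths and limitations deserve separate discussion.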
To conclude, Carter et al (1) give a useful overview of existing studies estimating overdiagnosis and set out criteria to judge those studies. They do not, however, discuss in depth the study designs available to quantify overdiagnosis. After reading this promising article, our question therefore remains: what is the optimal method to quantify and monitor overdiagnosis over time?
References
1. Carter JL, Coletti RJ, Harris RP. Quantifying and monitoring overdiagnosis in cancer screening: a systematic review of methods. BMJ 2015;350:g7773.
2. Jørgensen KJ, Gøtzsche PC. Overdiagnosis in publicly organised mammography screening programmes: systematic review of incidence trends. BMJ 2009;339(7714):206-09.
3. Peeters PHM, Verbeek ALM, Straatman H, Holland R, Hendriks JHCL, Mravunac M, et al. Evaluation of overdiagnosis of breast cancer in screening with mammography: results of the Nijmegen programme. Int J Epidemiol 1989;18(2):295-99.
Competing interests: No competing interests
A systematic review concluded that patients with lower health literacy were less likely to have cancer screening tests (Oldach & Katz, 2014), which further contributes to cancer disparities. If patients’ health literacy were considered as a contributing factor in cancer screening guidelines, it might help to foster clear communication between doctors and patients. To do so, doctors also need to improve their communication skills and then address health literacy (Green, Gonzaga, Cohen, & Spagnoletti, 2014).
Patients’ health literacy refers to the ability to find, understand, appraise, and apply health information, especially in disease prevention and health promotion. Health-literate patients can then make informed decisions about which test to have, how many tests are needed (sometimes more than one), when to be tested, which technique to use, and which guideline to follow, and can collaborate well with their doctors. Test results will thus be more precise, and doctors’ decisions will be more accurate. Therefore, besides new screening technology, new treatments, and interventions to reduce overdiagnosis, patients’ health literacy, together with doctors’ clear communication skills, will help reduce the rate of overdiagnosis.
References
Green, J. A., Gonzaga, A. M., Cohen, E. D., & Spagnoletti, C. L. (2014). Addressing health literacy through clear health communication: A training program for internal medicine residents. Patient Education and Counseling, 95(1), 76-82. doi: http://dx.doi.org/10.1016/j.pec.2014.01.004
Oldach, B. R., & Katz, M. L. (2014). Health literacy and cancer screening: A systematic review. Patient Education and Counseling, 94(2), 149-157. doi: http://dx.doi.org/10.1016/j.pec.2013.10.001
Competing interests: No competing interests
This is an important review of methods for quantifying overdiagnosis in cancer screening, with the objective of determining the optimal method by type of study design. Such estimation is of vital importance because the consequences of overdiagnosis are harmful and the diagnosis or treatment cannot benefit such patients. The studies considered in the systematic review (1) were cohort (n=7), ecological (n=13), RCT (n=3), cross sectional (n=8), and modelling based (n=21). The overall strength of evidence was evaluated using standard methods (USPSTF and GRADE). The authors of the systematic review concluded that well conducted ecological and cohort studies are the most appropriate for quantifying and monitoring overdiagnosis.
Quantifying appears to be the only relevant aspect here; the monitoring aspect may not fit all the settings studied. We further summarised the data provided by the authors in Table 1 to show the cancer site specific distribution across the various study settings. The major apparent limitations were the small number of studies (Table 1) and the non-comparability of the various study designs. The 52 included studies cover only four of the nine cancer sites selected for estimation of overdiagnosis in this systematic review: only results for prostate, breast, lung, and colorectal cancer were used in the analysis. Study types have a known epidemiological hierarchy, with the highest value assigned to the randomised controlled trial (RCT), followed by cohort, cross sectional, and ecological studies. The authors’ rationale for combining ecological with cohort studies is unclear. Estimation of overdiagnosis using modelling has a separate place and is not comparable, as models can be developed in various study design settings.
The authors’ conclusions do not appear to be based on the evidence of their systematic review (Table 1). The review rated the strength of evidence from RCTs as moderate, whereas all other designs were rated low. It is well known, however, that RCTs are not feasible for evaluating cervical screening programmes, for which observational studies are preferred (2). Perhaps this is why the authors concluded that observational studies were better for monitoring and quantifying overdiagnosis in cancer screening. The authors should, however, have presented the systematic review by cancer site rather than by study design. In summary, instead of searching for an optimal study design for quantifying overdiagnosis, the aim should be to compare the overdiagnosis estimates obtained through various designs in site specific studies.
References:
1. Carter JL, Coletti RJ, Harris RP. Quantifying and monitoring overdiagnosis in cancer screening: a systematic review of methods. BMJ 2015;350:g7773.
2. Arbyn M, Rebolj M, De Kok IM, Fender M, Becker N, O’Reilly M, et al. The challenges of organising cervical screening programmes in the 15 old member states of the European Union. Eur J Cancer 2009;45:2671-8.
Competing interests: No competing interests
Authors’ response to Theodora M. Ripping. Re: Quantifying and monitoring overdiagnosis in cancer screening: a systematic review of methods
We appreciate Theodora Ripping’s interest in our paper on research designs for assessing the degree of overdiagnosis in cancer screening programs. The point made seems to be that, by examining past research, we do not provide guidance for future research.
However, by developing criteria for broad categories of research – as we do in Tables 1 and 2 and discuss in Table 7 – we believe that we have given guidance for future studies. In addition, we discuss, under each design heading, the ways in which studies can fail these criteria, failures that future studies must avoid. We agree that none of the four categories specifies an exact research design, but all of the varieties should meet the criteria given in Tables 1 and 2 and discussed in Table 7.
Our assessment found that, when designed and conducted correctly, ecological and cohort studies provide the best opportunity to quantify and monitor overdiagnosis. We call for an international group of unbiased experts to develop specific, optimal designs.
We hope this clarifies our message.
Russell Harris, MD, MPH
for the authors
Competing interests: No competing interests