Rapid response to:

Letters Risk illiteracy

The real problem is the biomedical ignorance of statisticians

BMJ 2011; 342 doi: https://doi.org/10.1136/bmj.d2579 (Published 21 April 2011) Cite this as: BMJ 2011;342:d2579

Rapid Response:

Inappropriate calculations are the cause of the confusion

The use of 'specificity' and 'false positive rate' is confusing and
not applicable when reasoning about clinical risk or diagnoses. I shall
use the values that confused the group of gynaecologists [1] to illustrate
this.

A doctor (and patient participating in decisions) may wish to know
what proportion of women in a study turned out to have breast cancer. It
was about 1% [1]. When mammograms were done, about 10% were positive (and
so 90% were negative). When the mammogram result was negative, only about
0.1% had breast cancer. When the mammogram result was positive about 9.2%
had cancer [1].

If a mammogram is positive, a surgeon will consider the differential
diagnosis (e.g. breast cancer, benign adenoma, etc) and the proportion of
those with a positive result who have each of these differential diagnoses
(e.g. 9.2% have breast cancer). The surgeon will then consider one of the
possibilities (e.g. breast cancer) and look for a finding (e.g. lymph node
enlargement) that occurs commonly in that possibility and rarely in
at least one other diagnosis. The specificity and false positive rate are
not used in differential diagnostic reasoning. It is ratios of
'sensitivities' that are used [2].
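To make the point concrete, the updating between two differential diagnoses can be sketched in a few lines of Python. The figures below (frequency of lymph node enlargement in each diagnosis, and the two-diagnosis simplification of the differential) are invented for illustration and are not from the letter or its references:

```python
# Hypothetical illustration: after a positive mammogram, compare two
# differential diagnoses using the ratio of the frequency ('sensitivity')
# of a finding in each diagnosis, rather than specificity [2].
p_node_given_cancer = 0.6    # hypothetical: lymph node enlargement in breast cancer
p_node_given_adenoma = 0.05  # hypothetical: the same finding in benign adenoma

# Prior odds of cancer versus adenoma among positive mammograms
# (hypothetical simplification: treat these as the only two diagnoses)
prior_odds = 9.2 / 90.8

# The ratio of sensitivities updates the odds between the two diagnoses
posterior_odds = prior_odds * (p_node_given_cancer / p_node_given_adenoma)
p_cancer_vs_adenoma = posterior_odds / (1 + posterior_odds)
print(round(p_cancer_vs_adenoma, 2))
```

The point of the sketch is that only the two sensitivities and the prior proportions appear; specificity and the false positive rate play no part in the comparison.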

The proportion of those with a positive screening test result who
have cancer can be observed directly in the study population. It was
about 9.2% (or 0.092). There is no need to 'calculate' it. However, it
can be calculated in a roundabout way from other directly observed
proportions by using a version of Bayes theorem as follows:

1/(1 + (0.99/0.01) × (0.09/0.9)) = 0.092
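The arithmetic can be checked in a few lines of Python, using only the proportions quoted above (this sketch is an illustration, not part of the original letter):

```python
# Directly observed proportions from the study population [1]
p_cancer = 0.01                # proportion of women with breast cancer
p_pos_given_cancer = 0.9       # proportion with cancer whose mammogram was positive
p_pos_given_no_cancer = 0.09   # proportion without cancer whose mammogram was positive

# Odds form of Bayes theorem, matching the formula in the letter
p_cancer_given_pos = 1 / (1 + ((1 - p_cancer) / p_cancer)
                              * (p_pos_given_no_cancer / p_pos_given_cancer))
print(round(p_cancer_given_pos, 3))  # → 0.092
```

The result agrees with the 9.2% that can simply be read off the study population directly, which is the letter's point: the calculation adds nothing that was not already observed.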

Professor Shuster is right to question the need for doctors to
do this calculation [3]. Asking them to do so is not a test of 'risk
literacy' but merely a test of unnecessarily applying the arithmetic of
Bayes theorem.

References

1. Heath I. Dare to know: risk illiteracy and shared decision making.
BMJ 2011;342:d2075. (6 April.)

2. Llewelyn H, Ang HA, Lewis K, Al-Abdullah A. The Oxford Handbook of
Clinical Diagnosis. 2nd ed. Oxford: Oxford University Press, 2009.

3. Shuster S. The real problem is the biomedical ignorance of
statisticians. BMJ 2011;342:d2579. doi:10.1136/bmj.d2579

Competing interests: No competing interests

01 May 2011
Huw Llewelyn
Consultant Physician & Hon Fellow
Aberystwyth University