Risky business: doctors’ understanding of statistics
BMJ 2014; 349 doi: https://doi.org/10.1136/bmj.g5619 (Published 17 September 2014) Cite this as: BMJ 2014;349:g5619
- Christopher Martyn, freelance writer
Nearly 40 years ago the New England Journal of Medicine published a short survey of doctors’ understanding of the results of diagnostic tests.1 The participants, all doctors or medical students at Harvard teaching hospitals, were asked, “If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing else about the person’s symptoms or signs?” This wasn’t a very difficult question, which made the results all the more shocking. Fewer than a fifth of participants gave the correct answer, and most thought that the hypothetical patient had a 95% chance of having the disease.
Of course, this was a long time ago, and medical curriculums now contain much more in the way of statistics and probabilistic reasoning. You might expect that if the exercise were repeated today almost everyone would give the right answer. But you’d be wrong. Earlier this year a similar study was carried out, also in hospitals in the Boston area of Massachusetts, and the results were no better.2 Most doctors who were asked exactly the same question thought that the patient had a 95% chance of having the disease.
(In case you’re struggling, one way of thinking about the question is to imagine that 1000 people are given the test. Since the prevalence is one in 1000, one of these people will have the disease. But a false positive rate of 5% means that about 50 of the remaining 999 disease-free people will also test positive. So roughly 51 people receive a positive result, only one of whom actually has the disease — a chance of about one in 50, or 2%.)
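The arithmetic in the parenthesis above is just Bayes’ theorem applied to the survey question. A minimal sketch, assuming (as the question implicitly does) that the test has perfect sensitivity:

```python
# Positive predictive value for the survey's hypothetical test.
# Assumption: the test detects every true case (sensitivity = 100%),
# which the survey question implies by not stating otherwise.
prevalence = 1 / 1000          # 1 in 1000 people have the disease
false_positive_rate = 0.05     # 5% of disease-free people test positive
sensitivity = 1.0              # assumed perfect sensitivity

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * false_positive_rate
ppv = true_positives / (true_positives + false_positives)

print(f"Chance that a positive result means disease: {ppv:.1%}")
```

Running this gives a positive predictive value of about 2% — roughly one in 50, and nowhere near the 95% that most of the surveyed doctors answered.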
If you like, you can criticise these surveys for posing a question that’s not directly relevant to routine clinical practice. Doctors don’t generally work in circumstances where the background prevalence of disease is anywhere near as low as one in 1000. And they’re rarely in a position where they know nothing about a patient beyond a test result. So the predictive value of most diagnostic tests in everyday practice will be far higher than that featured in the survey question.
These criticisms may be fair, but they don’t disguise the broader point that most clinicians, whether working in primary care or hospital practice, have a poor understanding of concepts of risk and probability and that increasing exposure to statistics in undergraduate and postgraduate education hasn’t made much difference. This may not matter too much for doctors at the sharp end of clinical care, although diagnostic tests would surely be used more sparingly and more intelligently if their limitations were better appreciated. But, as was pointed out in a recent editorial in The BMJ about lowering the threshold for recommending statins, a substantial chunk of the core business of medicine is changing.3 When doctors offer a preventive drug or a screening test to large numbers of asymptomatic people they’re doing something quite different from treating a patient who has sought help because she is sick. They’re not so much doctors as life insurance salespeople, peddling deferred benefits in exchange for a small (but certainly not negligible) ongoing inconvenience and cost. In this new kind of medicine, not understanding risk is the equivalent of not knowing about the circulation of the blood.
A friend of mine, a woman in her early 60s, recently spent a night in hospital after an epistaxis that failed to stop. She was discharged the next day with advice that she should see her general practitioner, because her blood pressure readings had been high. Her GP also found her blood pressure to be high, lent her a machine so that she could check it at home herself, and, when this confirmed the earlier measurements, started antihypertensive drug treatment. Three months later, after a further series of checks at home showed continuing high blood pressure, her GP wanted to increase the dose. “Hang on,” said my friend, “these tablets aren’t working, yet you want me to take more of them. I don’t much like taking tablets, and I particularly dislike taking these, because they make me cough, which drives my husband to distraction. Is my blood pressure really so high that it has to be treated with drugs?” “Well, no,” replied the GP, “it’s only slightly raised, and since you don’t smoke and aren’t diabetic, your risk of cardiovascular disease probably isn’t very high anyway.”
This story of fuzzy thinking and muddled management isn’t unique. Whenever I’ve told it to anyone, they’ve always capped it with a similar story of their own. But it’s an unsatisfactory and wasteful way to practise preventive medicine. When patients are trying to come to an informed decision about whether a treatment is worthwhile for them, what matters is their absolute risk of the condition that we wish to prevent and the extent to which intervention reduces that risk. If we can’t equip ourselves to provide accurate numerical estimates of these risks and risk reductions, and communicate them clearly to patients, perhaps we shouldn’t be offering this sort of advice at all.
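The distinction drawn above — absolute risk and the absolute reduction an intervention delivers, rather than a relative effect quoted in isolation — can be made concrete with a small worked example. The numbers here are illustrative assumptions, not figures from the article: suppose a preventive drug reduces 10-year cardiovascular risk by 25% in relative terms, for a patient whose baseline absolute risk is 4%.

```python
# Illustrative (assumed) numbers: baseline risk and relative effect
# are hypothetical, chosen only to show the arithmetic patients need.
baseline_risk = 0.04                # assumed 10-year absolute risk: 4%
relative_risk_reduction = 0.25      # assumed relative effect of the drug

treated_risk = baseline_risk * (1 - relative_risk_reduction)
absolute_risk_reduction = baseline_risk - treated_risk
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Absolute risk falls from {baseline_risk:.0%} to {treated_risk:.0%}")
print(f"Patients treated for 10 years to prevent one event: "
      f"{number_needed_to_treat:.0f}")
```

Here a “25% risk reduction” amounts to moving from 4% to 3% absolute risk — one event prevented for every 100 patients who take the drug for a decade. Framed that way, my friend’s question to her GP becomes much easier to answer.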
Competing interests: None declared.
Patient consent obtained.
Provenance and peer review: Commissioned; not externally peer reviewed.