Rapid response to:

Education And Debate

Evidence based diagnostics

BMJ 2005; 330 doi: https://doi.org/10.1136/bmj.330.7493.724 (Published 24 March 2005) Cite this as: BMJ 2005;330:724

Rapid Response:

Multivariable approaches should not be left out in evidence based diagnostics

We congratulate Gluud and Gluud on their paper proposing a phased approach to research into diagnostic tests.[1] However, a phased approach (loosely) reflecting that used in drug research has already been covered in the literature [see, e.g., 2-4]. We agree with their proposal for phases I and IIa but have major difficulties with phases III and IV.

First, diagnostic accuracy studies that aim to be truly informative for clinical practice must include all relevant diagnostic indicators that precede the new test (usually symptoms and, for example, age). Such studies should therefore have enough power to examine the influence of these clinical indicators on test performance, conditional on what the diagnostician has already learnt about the patient’s probability of the target illness from earlier tests.[5] Consider a new test that is performed after three informative clinical history questions with a yes/no format. A cross sectional diagnostic accuracy study of this test then deals with patients having one of eight (2³) different clinical profiles (‘yes-yes-yes’ all the way down to ‘no-no-no’). The exact mixture of these profiles determines the (average) sensitivity and specificity of the new test found in that study. This average is hardly informative for clinical practice, where the exact clinical profile of the individual patient is known. Standard phase IIc studies may invoke vague concepts such as setting or patient entry criteria, but they ignore the explicit (statistical) incorporation of other clinical indicators. Once the informativeness of the new test at its specific clinical point of use has been established, its informativeness may still differ between, for example, men and women (‘interaction’). Finally, to learn about the improvement in clinical outcome attributable to a test result, diagnostic informativeness and treatment effects should be studied separately and then integrated in a medical decision analysis.
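To illustrate the case-mix point numerically, the short sketch below (in Python, with invented profile-specific sensitivities and mixture weights, purely for illustration) shows how two studies of the same test can report different ‘average’ sensitivities solely because their mixtures of clinical profiles differ.

```python
# Hypothetical illustration: the "average" sensitivity reported by a
# cross-sectional accuracy study is a case-mix weighted average of the
# profile-specific sensitivities. All numbers below are invented.

# Profile-specific sensitivities for the eight (2^3) clinical profiles
# ('yes-yes-yes' down to 'no-no-no'); assumed values for illustration only.
sensitivity_by_profile = {
    ('yes', 'yes', 'yes'): 0.95,
    ('yes', 'yes', 'no'):  0.90,
    ('yes', 'no',  'yes'): 0.85,
    ('yes', 'no',  'no'):  0.75,
    ('no',  'yes', 'yes'): 0.80,
    ('no',  'yes', 'no'):  0.70,
    ('no',  'no',  'yes'): 0.65,
    ('no',  'no',  'no'):  0.50,
}

def study_average_sensitivity(diseased_mix):
    """Weighted average sensitivity, given the mixture of clinical
    profiles among diseased patients in a particular study population."""
    return sum(diseased_mix[p] * sensitivity_by_profile[p]
               for p in diseased_mix)

# Two studies of the SAME test, differing only in case mix.
study_a = {p: 1 / 8 for p in sensitivity_by_profile}          # uniform mix
study_b = {p: (0.30 if p == ('no', 'no', 'no') else 0.10)     # skewed mix
           for p in sensitivity_by_profile}

print(f"Study A average sensitivity: {study_average_sensitivity(study_a):.3f}")
print(f"Study B average sensitivity: {study_average_sensitivity(study_b):.3f}")
```

Under these invented numbers the two studies report average sensitivities of about 0.76 and 0.71 for the identical test, even though the profile-specific sensitivities (the quantities relevant at the clinical point of use) are the same in both.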

Second, randomised trials should be reserved for situations in which an acceptable reference standard is lacking: when such a standard exists, randomised comparisons of tests are usually inefficient and may sometimes even be invalid.[6]



References

1. Gluud C, Gluud LL. Evidence based diagnostics. BMJ 2005;330:724-6.

2. Fryback DG, Thornbury JR. The efficacy of diagnostic imaging. Med Decis Making 1991;11:88-94.

3. van der Schouw YT, Verbeek AL, Ruijs SH. Guidelines for the assessment of new diagnostic tests. Invest Radiol 1995;30:334-40.

4. Moons KG, Harrell FE. Sensitivity and specificity should be de-emphasized in diagnostic accuracy studies. Acad Radiol 2003;10:670-2.

5. Eysink PE, ter Riet G, Aalberse RC, van Aalderen WM, Roos CM, van der Zee JS, Bindels PJ. Accuracy of specific IgE in the prediction of asthma: development of a scoring formula for general practice. Br J Gen Pract 2005;55:125-31.

6. Bossuyt PM, Lijmer JG, Mol BW. Randomised comparisons of medical tests: sometimes invalid, not always efficient. Lancet 2000;356:1844-7.
 

Competing interests: None declared

04 April 2005
Gerben ter Riet
Clinical Epidemiologist
Lucas M. Bachmann, Karel G.M. Moons, Patrick J. Bindels, and Alfons G.H. Kessels
Department of General Practice, Academic Medical Center, 1105 AZ Amsterdam, The Netherlands