Evaluating policy and service interventions: framework to guide selection and interpretation of study end points
BMJ 2010;341:c4413, doi: https://doi.org/10.1136/bmj.c4413 (Published 27 August 2010)
- Richard J Lilford, professor of clinical epidemiology1,
- Peter J Chilton, research associate1,
- Karla Hemming, senior research fellow1,
- Alan J Girling, senior research fellow1,
- Celia A Taylor, senior lecturer2,
- Paul Barach, visiting professor3
- 1Public Health, Epidemiology and Biostatistics, University of Birmingham, Edgbaston, West Midlands B15 2TT
- 2Department of Clinical and Experimental Medicine, University of Birmingham
- 3Patient Safety Centre, University Medical Centre Utrecht, PO Box 85500, 3508 GA Utrecht, Netherlands
- Correspondence to: R J Lilford
- Accepted 9 June 2010
There is broad consensus that clinical interventions should be compared in randomised trials measuring patient outcomes. However, methods for evaluation of policy and service interventions remain contested. This article considers one aspect of this complex issue—the selection of the primary end point (the end point used to determine sample size and given most weight in the interpretation of results). Other methodological issues affecting the design and interpretation of evaluations of policy and service interventions (including attributing effect to cause) have been discussed elsewhere,1 and we will consider them only in so far as they may affect selection of the primary end point. Our analysis begins with a classification of policy and service interventions based on an extended version of Donabedian’s causal chain.
Avedis Donabedian conceptualised a chain linking structure, process, and outcome.2 The classification we propose is based on a model in which the process level is divided into three further categories or sublevels, as shown in fig 1.3 4 5 Starting closest to the patient, these are: clinical processes (encompassing treatments such as drugs, devices, procedures, “talking” therapy, complementary therapy, and so on); targeted processes (those aimed at improving particular clinical processes, such as training in the use of a device, or a decision rule built into a computer system); and generic processes (for example, the human resource policy adopted by an organisation).