Randomised controlled trials have an important place in evaluating quality improvement initiatives

In his editorial,1 Lindenauer questions the role of randomised controlled trials in evaluating quality improvement collaboratives. The choice of research design should be informed by the question that needs to be answered. While a complex intervention is being developed, a mix of qualitative and quantitative methods should be used to define the content of the intervention, the method(s) of delivery, and the presence and impact of any important effect modifiers. Once such an intervention is regarded as both understood and stable, it should be subject to an evaluation of its effects rather than simply introduced. The risks of not evaluating are that ineffective and inefficient interventions could be introduced, wasting scarce resources, and that the opportunity to learn from current quality improvement efforts in order to optimise future activities would be lost. Auerbach et al2 argue persuasively for applying the same standards of evaluation to quality improvement interventions as are applied to biomedical ones. At the point in a process of evaluation where the question is “what is the effect of this intervention, and at what cost?”, a pragmatic (often cluster) randomised controlled trial,3 with its focus on “real world” evaluation, is the optimum design. However, such a design in no way precludes the simultaneous collection of the sort of data that Lindenauer suggests: data on the processes by which the intervention may achieve its effects or have them modified.4,5,6 Such measures, informed by the intervention development stage, can be much more informative by virtue of having been collected within the same experimental context.
Martin Eccles
Jeremy Grimshaw
References
1. Lindenauer PK. Effects of quality improvement collaboratives are
difficult to measure using traditional biomedical research methods. BMJ
2008;336:1448-9.
2. Auerbach AD, Landefeld CS, Shojania KG. The tension between needing to improve care and knowing how to do it. New England Journal of Medicine 2007;357:608-613.
3. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. Journal of Chronic Diseases 1967;20:637-648.
4. Rousseau N, McColl E, Newton J, Grimshaw J, Eccles M. Practice
based, longitudinal, qualitative interview study of computerised evidence
based guidelines in primary care. BMJ 2003;326:314-322.
5. Ramsay CR, Eccles M, Grimshaw JM, Steen N. Assessing the long term
effect of educational reminder messages on primary care radiology
referrals. Clinical Radiology 2003;58:319-321.
6. Grimshaw JM, Zwarenstein M, Tetroe JM, Godin G, Graham ID, Lemyre
L, Eccles MP, Johnston M, Francis JJ, Hux J, O'Rourke K, Legare F,
Presseau J. Looking inside the black box: a theory-based process
evaluation alongside a randomised controlled trial of printed educational
materials (the Ontario printed educational message, OPEM) to improve
referral and prescribing practices in primary care in Ontario, Canada.
Implementation Science 2007;2:38.
Competing interests: None declared
12 July 2008
Martin P Eccles
Professor of Clinical Effectiveness
Jeremy M Grimshaw
Newcastle University, 21 Claremont Place, Newcastle upon Tyne NE2 4AA