How effective is the Cancer Collaborative?
EDITOR – I read the article by Kerr et al about the work of the
Cancer Services Collaborative1 with interest. Some of the data are hard to
interpret. Figure 1 shows the number of ‘PDSA cycles’ per month over a
year and suggests a commendable amount of activity. How does this activity
relate to the reported improvements? Figure 2 shows a linear increase in
the ‘Assessment Scores’ with time, but it is not clear exactly what is
being displayed on the y axis. Is this some average of the scores for
all the teams? Since a score of 4 has to be achieved before ‘significant
progress’ is deemed to have been made, the final score of about 3.5 for
November 2000 would appear disappointing. The section ‘Our first year’s
experience’ also does not make clear that the examples of improvement
cited in various Cancer Networks usually relate to only one hospital site
within each network, not to the whole network.
The authors report that the Collaborative receives ‘central funding from
the Department of Health’ but do not say how much. It was, I believe,
approximately £0.5 million per network – in other words, about £100,000 per
project team or £4,500 for each patient in whom ‘changes were tested’.
Similar funding for the 34 cancer networks in England is involved in the
second phase. While significant improvements seem to have taken place in
some places with this investment, it is not at all certain either that this
approach will lead to durable changes or that the improvements will be
transferable, or actually transferred, to other teams and networks
without equivalent support. The experience of the NHS with similar
projects in the past does not make me optimistic.
What I take away from this paper is not how successful the Cancer
Collaborative has been but how difficult and expensive it is to achieve
quite modest, local changes in the processes of clinical care in the NHS.
I believe there are two main reasons for this. The first is that the NHS
has poor information about the detail of clinical processes compared with
the US, where the Institute for Healthcare Improvement has pioneered its methods.
The second is the attitude of doctors and medical managers to the
systematisation of clinical care, an essential part of all quality
improvement methods. A recent study in Wales (Degeling P, Kennedy J,
Macbeth F, unpublished observations) confirmed previous work in other
countries3 that most doctors and medical managers are opposed to such
systematisation because of its perceived threat to their autonomy. This
has implications for the implementation of clinical governance and for the
suggestions made by Berwick in his commentary on the study by Feachem et
al2 in the same issue of the BMJ.
The report by Kerr et al was a partisan account by those leading the
programme, not an objective, external evaluation. I hope that the
Department of Health is planning such an evaluation, looking in particular
at improvements across whole networks and their durability and also at the
cost effectiveness of this approach in the NHS.
Competing interest: FM is a member of the board of the Cancer
Services Co-ordinating Group for Wales.
1. Kerr D, Bevan H, Gowland B, Penny J, Berwick D. Redesigning
cancer care. BMJ 2002;324:164-6.
2. Feachem RGA, Sekhri NK, White KL. Getting more for their dollar: a
comparison of the NHS with California’s Kaiser Permanente. BMJ 2002;324:135-43.
3. Degeling P, Kennedy J, Hill M. Do professional subcultures set the
limits of hospital reform? Clinician in Management 1998;7:89-98.