Re: Where are we with transparency over performance of doctors and institutions?
I read with interest the views of Aniket Tavare on the perceived impact of transparency on cardiac surgery results (1). For a start, it must be pointed out that even though the "scandal" at Bristol involved paediatric cardiac surgery results, to this day no centre in the UK publishes surgeon-specific results in paediatric heart surgery. Paediatric heart surgery is too complex a field to pin every success or failure on one individual. Instead, team results are published, and this leads to a more cohesive approach from the entire team, including managers, to improving results in the unit. The publication of adult cardiac surgery results was a quick win because the data were already being collected on a voluntary basis. The fact that the subspecialties of thoracic surgery and paediatric cardiac surgery have not published surgeon-specific results over the past five years is a clear message that this is probably not the way forward.
The other unintended consequence of surgeon-specific results is a risk-averse culture among surgeons. Once appointed, a surgeon inevitably spends the first few years trying to manage their mortality figures to avoid being an outlier, and the easiest way to manage mortality is to be risk averse. There is published evidence of this even in large teaching institutions, with 12% of patients referred for cardiac surgery being turned down (2). Yet the sickest patients often benefit most from surgery. There is no scrutiny or publication of patients turned down for surgery, so there is a perverse incentive not to take on the riskier patients in an NHS that has no clear performance incentives. The only way to deal with risk-averse behaviour would be to record the denominator of surgical referrals against cases actually performed, and to make this available so that centres can be benchmarked against it, as illustrated below.
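As a purely illustrative sketch (the centre names and figures are invented, not real data), such a denominator could be reported with a few lines of Python:

# Hypothetical sketch: tracking the ratio of referrals to operations per centre,
# so that risk-averse case selection becomes visible alongside mortality.
# Centre names and numbers are illustrative only.
referrals = {"Centre A": 520, "Centre B": 480, "Centre C": 610}
operations = {"Centre A": 495, "Centre B": 422, "Centre C": 580}

for centre, referred in referrals.items():
    done = operations[centre]
    turned_down = referred - done
    print(f"{centre}: {done}/{referred} operated "
          f"({100 * turned_down / referred:.1f}% turned down)")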
Does the current system of surgeon-specific results in adult cardiac surgery prevent poor performance? I think not. The current system grades surgeons' results over the past three years; this is like driving a car with a speedometer that shows your average speed over the last few hours. Surgeons are only picked out as outliers if they are more than two standard deviations from the national mean, yet with variation in local patient and hospital factors a national mean is close to meaningless. A surgeon could have more than twice the mortality of their local peers and still continue to operate because of the extent of national variation.
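A worked example with assumed figures shows how a wide national spread can mask a local outlier; the percentages below are illustrative only and are not taken from registry data:

# Illustrative numbers only: a surgeon is flagged only if their mortality
# exceeds the national mean by more than two standard deviations.
national_mean_pct = 3.0   # assumed national mortality (%)
national_sd_pct = 1.5     # assumed between-surgeon standard deviation (%)
flag_threshold = national_mean_pct + 2 * national_sd_pct   # 6.0%

local_unit_mean_pct = 2.0        # a well-performing local unit
surgeon_mortality_pct = 4.0      # twice the local unit's mean

flagged = surgeon_mortality_pct > flag_threshold
print(f"Threshold: {flag_threshold:.1f}%  "
      f"Surgeon: {surgeon_mortality_pct:.1f}%  Flagged: {flagged}")
# Despite running twice the local mortality, this surgeon is not flagged.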
At the Lancashire Cardiac Centre in Blackpool we have been one of the few adult cardiac surgery centres to refuse, from the beginning, to publish surgeon-specific results on the public portal, on the premise that we work as a team and will be measured as a team. Over the past years this has allowed us to increase our overall activity and to introduce innovative new technologies such as endoscopic conduit harvesting and minimally invasive coronary and valve surgery, with regular audit of our activity and publication of results (3). We have also doubled the number of substantive consultants without an obvious effect on outcomes, through a programme of protecting new consultants from higher risk cases with a second opinion service from more established consultants. Our mortality over the past 10 years has dropped steadily, with most of the drop occurring in the first five years, before publication of our results, despite the mean EuroSCORE trending upwards year on year (Fig 1). As a team we have managed performance among ourselves on a real-time basis, with results distributed, anonymised, among the surgeons every three to six months. As the NHS invests in improved data monitoring, it is becoming possible to watch performance in real time. Surgical performance is best managed at the local level rather than through the cumbersome centralised system we have in adult cardiac surgery at present.
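One common approach to such real-time, risk-adjusted local monitoring is a variable life-adjusted display (VLAD): a running sum of expected minus observed deaths, with the expected risk taken from each patient's EuroSCORE. The short Python sketch below illustrates the idea; it is not a description of our own system, and the patient data are invented:

# Minimal VLAD sketch: cumulative expected-minus-observed deaths per case.
# (euroscore_predicted_mortality, died) for consecutive cases; invented data.
cases = [(0.03, 0), (0.08, 0), (0.02, 0), (0.15, 1), (0.05, 0), (0.04, 0)]

vlad = 0.0
for expected, died in cases:
    vlad += expected - died   # rises with better-than-expected outcomes
    print(f"expected={expected:.2f} died={died} cumulative VLAD={vlad:+.2f}")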
We do feel that teams need to publish results on a regular basis, but mortality is not always the best way to judge performance. A record of constant improvement is what teams should be benchmarked against. The current initiative by the National Cardiothoracic Services Benchmarking Collaborative (NCBC) is a better example of the way forward: teams are benchmarked against each other on efficiency markers such as staffing levels, costs, and length of stay, and good practice is shared twice yearly. Patient reported outcome measures (PROMs) are being piloted as part of this initiative and would also be a welcome addition.
Our experience in Blackpool suggests that improvements in mortality can be achieved without publishing surgeon-specific results. A better system would benchmark each unit against its peers and publish measures such as length of stay, transfusion requirements, readmission rates, and mortality.
We agree with the accompanying editorial (4) that only those units that constantly monitor their results will improve them, and we think that, in the current era of openness, publishing that information is inevitable. We would make a plea, based on our experience, that the individual within the team should not be the main focus of this initiative.
References:
1. Tavare A. Where are we with transparency over performance of doctors and institutions? BMJ 2012 Jul 3;345.
2. Waterworth PD, Soon SY, Govindraj R, Sivaprakasam R, Jackson M, Grayson AD. Factors which influence the cardiac surgeon's decision not to operate on patients referred for consideration of surgery. J Cardiothorac Surg 2008 Feb 26;3:9.
3. Kirmani B, Zacharias J. Endoscopic vein harvesting: does the learning curve influence outcomes? Ann Thorac Surg 2010 Nov;90(5):1743; author reply 1743.
4. Godlee F. Measure your team's performance, and publish the results. BMJ 2012 Jul 3;344.
Competing interests: No competing interests