Risk-adjusted 30- to 90-day outcome data for selected types of operations done by specific surgeons and hospitals are now being publicly posted online by England's National Health Service.

According to the site, "Any hospital or consultant [attending surgeon in the UK] identified as an outlier will be investigated and action taken to improve data quality and/or patient care."

After cardiac surgery outcomes data were made public in New York, some interesting and unexpected consequences were noted.

Surgeons and hospitals resorted to "gaming the system" by declining to operate on high-risk patients and by tinkering with patient charts to make those they did operate on seem sicker. This can be done by scouring the charts for every comorbidity and making sure none is overlooked when the charts are coded. An article from New York Magazine explains it in more detail.

Interpreting outcomes data can be tricky.

In a post three years ago about a report that nine Maryland hospitals had higher-than-average complication rates, I pointed out that whenever you have averages, some hospitals are going to be worse than average unless all hospitals perform exactly the same way or, like medical students, are all above average.

A much more sophisticated way of looking at this subject appeared in a fascinating 2010 BBC News piece by Michael Blastland, who is the Nate Silver of England [or maybe Nate Silver is the Michael Blastland of the US], called "Can chance make you a killer?"

Blastland set up a statistical chance calculator for a hypothetical set of 100 hospitals or 100 surgeons performing 100 operations each. The model assumes that every patient has the same chance of dying and that every surgeon is equally competent. The standard is that any hospital or surgeon with a mortality rate more than 60% worse than the government-set norm is deemed unacceptable.

You are assigned one hospital. Using a slider, you choose an operative mortality rate anywhere from 1% to 15%. If you do this several times, recalculating at each mortality rate, you will notice that the number of unacceptably performing hospitals or surgeons changes randomly each time, and your hospital may land in the underperforming group by chance alone.
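For readers who want to see the effect without the interactive page, here is a minimal sketch of that kind of chance calculator in Python. It assumes a simple binomial model with the parameters described above (100 hospitals, 100 operations each, identical mortality risk, a flag at 60% above the norm); the specific variable names and the random seed are my own choices, not Blastland's.

```python
import numpy as np

# Hypothetical chance calculator: every hospital is equally competent,
# yet some will be flagged as "unacceptable" purely by chance.
rng = np.random.default_rng(seed=1)

n_hospitals = 100
n_operations = 100
mortality_rate = 0.05              # the slider value, e.g. 5%
threshold = mortality_rate * 1.6   # 60% worse than the norm

# Simulate deaths at each hospital from the same underlying risk.
deaths = rng.binomial(n_operations, mortality_rate, size=n_hospitals)
observed_rates = deaths / n_operations

flagged = np.sum(observed_rates > threshold)
print(f"Hospitals flagged as outliers by chance alone: {flagged} of {n_hospitals}")
```

Rerunning the sketch with a different seed or a different mortality rate changes how many of these identical hospitals get flagged, which is exactly the point of Blastland's exercise.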

The whole concept is explained in more detail on the site. I encourage you to try it for yourself. The link is here.

So it may be difficult for the NHS to separate the true outliers from the unlucky surgeons who happened to fall outside the established norms.

What do you think about this?
