
Physician Accountability Gets a Big Push Forward

Doctors are human. Their talents and skills differ. They make mistakes. And as with every other area of human endeavor: some doctors are really good; some are pretty bad; most are average. If you are over age 50, you’ve likely met an example of all three.

In the past decade there has been more open recognition of this reality and of the need to address the failures it creates in medicine and the delivery of care. There is more willingness now to say out loud that it’s not just poor system dynamics or gaps in planning, knowledge, or training that lead to poor care and bad results; it’s also the differing skills and abilities of the people delivering care.

Despite that, a debate still rages about how vigorously to hold individual doctors accountable for the outcomes of care. That has delayed action for years, with physician groups leading the charge for a “go-slow” approach. Many posts on THCB discuss this issue, and it’s right that we hash out how best to do this in the coming years—and how best to present the results (physician ratings, etc.) to consumers. Both are complex and hard. And the stakes rise in 2019, when Medicare’s new Merit-based Incentive Payment System and Alternative Payment Models roll out. Under both, all physicians seeing Medicare enrollees will be held more accountable, and a portion of their Medicare payment (or their health system’s payment) will be based on the quality of care they deliver.

In the meantime, some groups are barreling forward—and that, too, is right and necessary. Courtesy of recent court rulings and a (sometimes) enlightened CMS, Medicare data on individual physicians is now available to researchers and authorized groups, and through them can be made public. A host of groups have gotten access to the data, and we are beginning to see the results.

ProPublica, the independent investigative journalism shop based in New York City, is one of those groups. This week, it made a significant contribution to physician accountability and healthcare transparency with the release of an analysis of the death and complication rates of 17,000 surgeons treating 2.3 million Medicare patients who underwent one of eight elective (and generally low-risk) surgical procedures from 2009 to 2013—pegging the outcomes to individual surgeons as well as to the hospitals where the procedures took place.

The report has generated a lively debate on ProPublica’s website. Here you’ll find the article (with comments and dialogue at the end), the background/methods paper, and the searchable database.

I won’t detail the findings here. Researchers/journalists Marshall Allen and Olga Pierce do an excellent job of that. Their feature article is notable for its salient (and troubling) patient anecdotes, for naming the names of poorer-performing surgeons, and for seeking comment from them. It presents an incisive discussion of the major issues in the debate over the methodological challenges of assessing treatment outcomes (surgery in this case). And their methods paper deserves praise for its clear explanation and for emphasizing the need for fairness to physicians in analyzing this kind of data.

For your convenience (until you get a chance to click on and read both pieces in full, as I’d urge you to do), I quote below from the abstract of their methods paper. (Emphasis in italics is mine.)

“Findings: We found that aggregate rates of harm were quite low. None of the procedures had an average death/readmission rate over 5 percent. However, there was substantial variation within hospitals and between surgeons. Contrary to conventional wisdom that there are ‘good’ and ‘bad’ hospitals, no hospital performed in the worst quartile by our central measure across all eight of our procedures. Only one hospital was in the best quartile across all procedures. The best-performing surgeons had risk-adjusted rates of harm about 50 percent less than average. The worst performers had risk-adjusted rates as high as three times the average. Often this variation was in-hospital: multiple surgeons performing the same type of procedure at the same hospital with widely divergent rates of harm. About 2,000 hospitals (half of those in the data) had top- and bottom-quintile surgeons performing the same procedure. Low-performing surgeons were also unexpectedly dispersed. Two out of three hospitals in the analysis had at least one bottom-quintile performing surgeon. Finally, a comparison of the standard deviations of hospital and surgeon random effects found that surgeon performance accounts for more of the variability of performance between hospitals than hospital-wide performance on a given procedure.”

“Conclusion: There is substantial variation between surgeons in the rates of harm their patients suffer resulting from surgery, which cannot be attributed to patients’ health, or differences in hospital overall performance. Identifying positive outlier surgeons can help to identify best practices. Identifying negative outliers offers an opportunity for intervention and improvement.”
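
The last finding quoted above, comparing the standard deviations of hospital and surgeon random effects, is what lets the authors attribute most of the variation to surgeons rather than to hospitals. The sketch below is not ProPublica’s model or code; it is a minimal, hypothetical illustration (linear rather than logistic, on simulated data, with made-up numbers) of how nested hospital and surgeon random effects can be fit with Python’s statsmodels and their variance estimates compared.

```python
# Illustrative sketch only -- not ProPublica's model or code. A simplified
# (linear, simulated-data) example of comparing hospital-level vs.
# surgeon-level random-effect variation; all numbers are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# 30 hypothetical hospitals, 4 surgeons each, 30 cases per surgeon
n_hosp, n_surg_per, n_cases = 30, 4, 30
surgeon = np.repeat(np.arange(n_hosp * n_surg_per), n_cases)
hospital = surgeon // n_surg_per

hosp_effect = rng.normal(0, 0.5, n_hosp)[hospital]              # modest hospital-level variation
surg_effect = rng.normal(0, 1.0, n_hosp * n_surg_per)[surgeon]  # larger surgeon-level variation
risk = rng.normal(0, 1, surgeon.size)                           # stand-in for a patient risk score
harm = 5 + 0.8 * risk + hosp_effect + surg_effect + rng.normal(0, 2, surgeon.size)

df = pd.DataFrame({"harm": harm, "risk": risk,
                   "hospital": hospital, "surgeon": surgeon})

# Hospital random intercepts, with a surgeon variance component nested inside
model = smf.mixedlm("harm ~ risk", df, groups=df["hospital"],
                    re_formula="1",
                    vc_formula={"surgeon": "0 + C(surgeon)"})
fit = model.fit()

print(fit.cov_re)   # estimated variance of hospital random intercepts
print(fit.vcomp)    # estimated variance component attributable to surgeons
```

The intuition matches the quoted conclusion: if the surgeon-level variance component comes out larger than the hospital-level one, most of the spread in outcomes across hospitals reflects differences among the surgeons operating inside them rather than hospital-wide performance.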

Steven Findlay is an independent journalist who covers medicine and healthcare policy and technology.
