
The ProPublica Report Card: A Step In the Right Direction

By Ashish Jha

Last week, Marshall Allen and Olga Pierce, two journalists at ProPublica, published a surgeon report card detailing complication rates of 17,000 individual surgeons from across the nation. A product of many years of work, it benefitted from the input of a large number of experts (as well as folks like me). The report card has received a lot of attention … and a lot of criticism. Why the attention? Because people want information about how to pick a good surgeon. Why the criticism? Because the report card has plenty of limitations.

As soon as the report was out, so were the scalpels. Smart people on Twitter and blogs took the ProPublica team to task over a range of reasonable, even necessary, concerns. For example, the report card only covered Medicare beneficiaries, which means that for many surgeries, it missed a large chunk of patients. Worse, it left out many surgeries altogether. But there was more.

The report card used readmissions as a marker of complications, which has important limitations. The best data suggest that while a large proportion of surgical readmissions are due to a complication, readmissions are also affected by other factors, such as how sick the patient was prior to surgery (the ProPublica team tried to account for this), his or her race, ethnicity, social supports—and even the education and poverty level of their community.

I have written extensively about the problems of using readmissions after medical conditions as a quality measure. Surgical readmissions are clearly better but hardly perfect. The ProPublica team even narrowed the causes of readmissions using expert input to improve the measure, but even so, it's hardly ideal. ProPublica produced an imperfect report.

How to choose a surgeon

So what to do if you need a surgeon? Should you use the ProPublica report card? You might consider doing what I did when I needed a surgeon after a shoulder injury two years ago: ask colleagues. After getting input about lots of clinicians, I homed in on two orthopedists who specialized in shoulders. I then called surgeons who had operated with these guys and got their opinions. Both were good, I was told, but one was better. Yelp? I passed. Looking them up on the Massachusetts Registry of Medicine? Seriously? Never crossed my mind.

But what if, just by chance, you are not a physician? What if you are one of the 99.7% of Americans who didn’t go to medical school? What do you do?  If your insurance covers a broad network and your primary care physician is diligent and knows a large number of surgeons, you may get referred to someone right for you. Or, you could rely on word of mouth, which means relying on a sample size of one or two.

So what do patients actually do?  They cross their fingers, pray, and hope that the system will take care of them. How good is that system at taking care of them? It turns out, not as good as it should be. We know that mortality rates vary three-fold across hospitals. Even within the same hospital, some surgeons are terrific, while others? Not so much. Which is why I needed to work hard to find the right orthopedist. Physicians can figure out how to navigate the system. But what about everyone else?

I was on service recently and took care of a guy, Bobby Johnson (name changed, but a real guy), who was admitted yet again for an ongoing complication from his lung surgery. Because he was in the hospital with a recurrent infection, he had missed key events, including his daughter's wedding. He wondered if he would have done better with a different hospital or a different surgeon. I didn't know how to advise him.

And that’s where ProPublica comes into play. The journalists spent years on their effort, getting input from methodologists, surgeons, and policy experts. In the end, they produced a report with a lot of strengths, but no shortage of weaknesses. But despite the weaknesses, I never heard them question whether the endeavor was worth it at all.  I’m glad they never did.

Because the choice wasn’t between building the perfect report card and building the one they did. The choice was between building their imperfect report card and leaving folks like Bobby with nothing. In that light, the report card looks pretty good. Maybe not for experts, but for Bobby.

A step towards intended consequences

Colleagues and friends that I admire, including the brilliant Lisa Rosenbaum, have written about the unintended consequences of report cards. And they are right. All report cards have unintended consequences. This report card will have unintended consequences. It might even make, in the words of a recent blog, "some Morbidity Hunters become Cherry Pickers" (a smart, witty, but quite critical piece on the ProPublica report card). But asking whether this report card will have unintended consequences isn't the right question. The right question is: will it leave Bobby better off? I think it will. Instead of choosing based on a sample size of one (his buddy who also had lung surgery), he might choose based on a sample size of 40 or 60 or 80. Not perfect. Large confidence intervals? Sure. Lots of noise? Yup. Inadequate risk adjustment? Absolutely. But better than nothing? Yes. A lot better.

All of this gets at a bigger point raised by Paul Levy:  is this really the best we can do? The answer, of course, is no. We can do much better, but we have chosen not to. We have this tool—it’s called the National Surgical Quality Improvement Program (NSQIP). It uses clinical data to carefully track complications across a large range of surgeries and it’s been around for about twenty years. Close to 600 hospitals use it (and about 3,000 hospitals choose not to). And no hospital that I’m aware of makes its NSQIP data publicly available in a way that is accessible and usable to patients. A few put summary data on Hospital Compare, but it’s inadequate for choosing a good surgeon. Why are the NSQIP data not collected routinely and made widely available? Because it’s hard to get hospitals to agree to mandatory data collection and public reporting. Obviously those with the power of the purse—Medicare, for instance—could make it happen. They haven’t.

Disruptive innovation, a phrase coined by Clay Christensen, usually describes a new product that, to experts, looks inadequate. Because it is. These innovations are not, initially, as good as what the experts use (in this case, their network of surgeons). Experts initially dismiss the disrupter as being of poor quality. But disruptive innovation takes hold because, for a large chunk of consumers (i.e., patients looking for surgeons), the innovation is both affordable and better than the alternative. And once it takes hold, it starts to get better. As it does, its unintended consequences will become dwarfed by its intended consequence: making the system better. That's what ProPublica has produced. And that's worth celebrating.

Ashish K Jha is a professor of public health at Harvard Medical School. His previous posts on THCB can be found here.

Categories: Uncategorized

2 replies

  1. I completely agree. And I completely disagree.

    This is a step in the right direction. Unfortunately, it may be a misstep.

    The issue here is context, something that you and I have talked about many times in terms of quality and Health IT. Unfortunately, there is none.

To make this a useful tool, ProPublica needs to provide some.

    They need to acknowledge the potential limitations of this methodology in a way that the average lay reader can understand. Pointing people to a risk adjustment methodology and saying “really smart experts told us this works” isn’t going to cut it.

    Ultimately, this is about how we use data to tell a story. The authors clearly believe the benefits of this approach outweigh the potential negatives. They may be right. They may be wrong.

    What are our obligations when we present data as fact?

    Is this science? Or is this speech? Is it somewhere between? There’s nothing wrong with either approach – but I think we need to consider these questions very carefully as we progress.

The public shaming element is, in my opinion, a mistake and one of the things that raises red flags here. I get why the authors did it: it is an article of faith that doctors are trying to hide their results. This has a lot to do with Washington politics.

    I think we really need to get beyond that.

  2. Do you think that there will be enough surgeons surviving this ratings gauntlet to later serve the community? Could these surgical skills be better assessed earlier in the Board Certification exams rather than later, in public?

Imagine that clever ways existed to assess individual airline pilots' capability in managing severe emergencies (stalls, turbulence, loss of engines, etc.) and that the results were publicly displayed on the airline's website or at the ticket counter. Would this public knowledge disrupt business?

    Should such ratings systems be applied to critical service providers in many other sectors?

I'm not being reflexively critical. I just wonder about really late unintended consequences.