A few months ago, the Centers for Medicare and Medicaid Services (CMS) put out its latest year of data on the Hospital Readmissions Reduction Program (HRRP). As a quick refresher – HRRP is the program within the Affordable Care Act (ACA) that penalizes hospitals for higher than expected readmission rates. We are now three years into the program and I thought a quick summary of where we are might be in order.
I was initially quite unenthusiastic about the HRRP (primarily feeling like we had bigger fish to fry), but over time, have come to appreciate that as a utilization measure, it has value. Anecdotally, HRRP has gotten some hospitals to think more creatively, focusing greater attention on the discharge process and ensuring that as patients transition out of the hospital, key elements of their care are managed effectively. These institutions are thinking more carefully about what happens to their patients after they leave the hospital. That is undoubtedly a good thing. Of course, there are countervailing anecdotes as well – about pressure to avoid admitting a patient who comes to the ER within 30 days of being discharged, or admitting them to “observation” status, which does not count as a readmission. All in all, a few years into the program, the evidence seems to be that the program is working – readmissions in the Medicare fee-for-service program are down about 1.1 percentage points nationally. To the extent that the drop comes from better care, we should be pleased.
HRRP penalties began 3 years ago by focusing on three medical conditions: acute myocardial infarction, congestive heart failure, and pneumonia. Hospitals that had high rates of patients coming back to the hospital after discharge for these three conditions were eligible for penalties. And the penalties in the first year (fiscal year 2013) went disproportionately to safety-net hospitals and academic institutions (note that throughout this blog, when I refer to years of penalties, I mean the fiscal years of payments to which penalties are applied. Fiscal year 2013, the first year of HRRP penalties, refers to the period beginning October 1, 2012 and ending September 30, 2013). Why? Because we know that when it comes to readmissions after medical discharges such as these, major contributors are the severity of the underlying illness and the socioeconomic status (SES) of the patient. The readmissions measure tries to adjust for severity, but the risk-adjustment for this measure is not very good. And let's not even talk about SES. The evidence that SES matters for readmissions is overwhelming – and CMS has somehow become convinced that if a wayward hospital discriminates by providing lousy care to poor people, SES adjustment would somehow give it a pass. It wouldn't. As I've written before, SES adjustment, if done right, won't give hospitals credit for providing particularly bad care to poor folks. Instead, it'll just ensure that we don't penalize a hospital simply because it cares for more poor patients.
Surgical readmissions appear to be different. A few papers now have shown, quite convincingly, that the primary driver of surgical readmissions is complications. Hospitals that do a better job with the surgery and the post-operative care have fewer complications and therefore, fewer readmissions. Clinically, this makes sense. Therefore, surgical readmissions are a pretty reasonable proxy for surgical quality.
All of this gets us to year 3 of the HRRP. In year 3, CMS expanded the conditions for which hospitals were being penalized to include COPD as well as surgical readmissions, specifically knee and hip replacements. This is an important shift, because the addition of surgical readmissions should help good hospitals that provide high-quality surgical care. I would therefore suspect that teaching hospitals, for instance, would fare better under a program that includes surgical readmissions than under one limited to medical conditions. But we don't know.
So, with the release of year 3 data on readmissions penalties by individual hospital, we were interested in answering three questions: first, how many hospitals have managed to sustain penalties across all three years? Second, which hospitals have been consistently penalized (all three years) and which have not? And finally, do the penalties appear to be targeting a different group of hospitals in year 3 (when CMS included surgical readmissions) than they did in year 1 (when CMS focused only on medical conditions)?
Our Approach
We began with the CMS data released in October 2014, which lists, for each eligible hospital, the penalty it received in each of the three years of the program. We linked these data to several databases with detailed information about hospital characteristics, including size, teaching status, Disproportionate Share Hospital (DSH) index (our proxy for safety-net status), ownership, region of the country, etc. We ran both bivariate and multivariable models. We show the bivariate results because, from a policy point of view, they are the most salient (i.e., who got the penalties versus who didn't).
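For readers curious about the mechanics, here is a minimal sketch of this kind of linkage and bivariate tabulation in Python/pandas. The file names, column names (e.g., penalty_2013, dsh_index), and the quartile cutoff for safety-net status are illustrative assumptions, not the actual datasets or variable names we used.

```python
# Sketch only: file and column names below are hypothetical placeholders.
import pandas as pd

# CMS file: one row per eligible hospital, with the penalty (as a % of
# Medicare payments) applied in each fiscal year of the HRRP.
penalties = pd.read_csv("hrrp_penalties_fy2013_2015.csv")
# Hospital characteristics: size, teaching status, DSH index, region, etc.
characteristics = pd.read_csv("hospital_characteristics.csv")

df = penalties.merge(characteristics, on="provider_id", how="inner")

# Flag hospitals penalized in all three years.
penalty_cols = ["penalty_2013", "penalty_2014", "penalty_2015"]
df["penalized_all_years"] = (df[penalty_cols] > 0).all(axis=1)

# "Safety-net" = top quartile of the DSH index, as in the post.
df["safety_net"] = df["dsh_index"] >= df["dsh_index"].quantile(0.75)

# Bivariate view: share penalized in all three years, by characteristic.
for group in ["size_category", "teaching_status", "region", "safety_net"]:
    print(df.groupby(group)["penalized_all_years"].mean().round(2))

# Average penalty by teaching status in the first and third program years.
print(df.groupby("teaching_status")[["penalty_2013", "penalty_2015"]].mean().round(2))
```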
Our Findings
Here’s what we found:
About 80% of eligible U.S. hospitals received a penalty for fiscal year 2015, and 57% of eligible hospitals were penalized in each of the three years. The penalties were not evenly distributed. While 41% of small hospitals received penalties in each of the three years, more than 70% of large hospitals did. The likelihood of being penalized every year also varied widely by region: 72% of hospitals in the Northeast versus 27% in the West. Teaching hospitals and safety-net hospitals were far more likely to be penalized consistently, as were the hospitals with the lowest financial margins (Table 1).
Table 1: Characteristics of hospitals receiving readmissions penalties
Consistent with our hypothesis, while penalties went up across the board, we found a shift in the relative level of penalties between 2013 (when the HRRP included only medical readmissions) and 2015 (when the program included both medical and surgical readmissions). This really comes out in the data on major teaching hospitals: in 2013, the average penalty for major teaching hospitals was 0.38% (compared to 0.25% for minor teaching and 0.29% for non-teaching hospitals). By 2015, that gap had disappeared: the average penalty for major teaching hospitals was 0.44% versus 0.54% for non-teaching hospitals. Teaching hospitals received relatively lower readmission penalties in 2015, presumably because of the addition of the surgical readmission measures, which tend to favor high-quality hospitals. Similarly, the gap in penalty levels between safety-net hospitals and other institutions narrowed between 2013 and 2015 (Figure).
Figure: Average Medicare payment penalty for excessive readmissions in 2013 and 2015
Note that “Safety-net” refers to hospitals in the highest quartile of disproportionate share index, and “Low DSH” refers to hospitals in the lowest quartile of disproportionate share index.
Interpretation
Your interpretation of these results may differ from mine, but here's my take. Most hospitals received penalties in 2015, and a majority have been penalized in all three years. Who is getting penalized seems to be shifting – away from a program that primarily targets teaching and safety-net hospitals towards one where the penalties are more broadly distributed, although the gap between safety-net and other hospitals remains sizeable. It is possible that this reflects teaching and safety-net hospitals improving more rapidly than others, but I suspect that the surgical readmissions measures, which benefit high-quality (i.e., low-mortality) hospitals, are balancing out the medical readmissions measures, which, at least for some conditions such as heart failure, tend to favor lower-quality (higher-mortality) hospitals. Safety-net hospitals are still getting bigger penalties, presumably because they care for more poor patients (who are more likely to come back to the hospital), but the gap has narrowed. This is good news. If we can move forward on actually adjusting the readmissions penalty for SES (I like the approach MedPAC has suggested) and continue to make headway on improving risk-adjustment for medical readmissions, we can then evaluate and penalize hospitals on how well they care for their patients. And that would be a very good thing indeed.
I agree that the HRRP is maturing, although in my mind it would be better if it covered an even broader range of conditions (perhaps even all conditions, or all conditions with a readmission rate above X%, where X is determined by a cost-benefit analysis).
The goal should be to have hospitals do whatever they can for every patient, not just those in a few targeted categories. I hypothesize (without data to back me up) that the interventions needed will be fairly similar across reasons for admission.
The lack of a good response to the strong evidence that SES is an important risk factor remains the weakest aspect of the program, and MedPAC’s response is an inadequate partial fix. Peer comparison will avoid penalties for different SES mix, but it won’t capture the fact that safety net hospitals, as a class, have accomplished a harder task and done much more work (since they are now near the same readmission rate as hospitals with easier patients).
An alternative that I find preferable to both peer comparison and risk adjustment is to pay more for accomplishing something that is harder to do. It is known to be more difficult to prevent readmission in a low-SES patient, so doing so should be more highly rewarded. (HRRP doesn't have a rewards component, but this may be a mistake; a similar payment concept would be to penalize less for a low-SES readmission.)
We did an RCT of a pay-for-performance program with the “pay more for what is harder to do” concept built into the design, although in this case “harder” included both SES and clinical complexity. We found that the approach improved outcomes that had been very difficult to budge, such as blood pressure control in diabetics (in the 10-17% range at baseline, ref: http://jama.jamanetwork.com/article.aspx?articleid=1737044). However, because many Medicaid patients joined private health plans during the study (so we could no longer separate out their results from higher SES patients), we did not have enough identifiably low SES patients to have adequate power to determine whether this approach worked for them.
MedPAC said they recommended peer leagues because of ease of implementation. I would propose the following implementation of paying more for what's harder: use whatever variable is proposed to be measured to define leagues, but apply it at the patient level. If assessing this variable at the patient level is not currently possible, use it at the hospital level for now, but signal to hospitals that we expect them to report it at the patient level in the near future. However it is implemented, there should be a plan for evaluation, preferably through a natural experiment or an RCT.
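Purely as an illustration of the patient-level version, something like the following could work. The SES marker (dual eligibility) and the weights are made-up placeholders to show the mechanics, not an actual CMS or MedPAC formula.

```python
# Toy sketch of "penalize less for a low-SES readmission", applied at the
# patient level. Dual eligibility as the SES proxy and the 0.7 weight are
# hypothetical choices for illustration only.
import pandas as pd

discharges = pd.DataFrame({
    "hospital_id":   ["A", "A", "B", "B", "B"],
    "readmitted":    [1, 0, 1, 1, 0],
    "dual_eligible": [1, 0, 0, 1, 0],   # patient-level SES marker
})

# A readmission of a dual-eligible (harder-to-prevent) patient counts for
# less toward the hospital's penalty than a readmission of a non-dual patient.
weights = {1: 0.7, 0: 1.0}
discharges["weighted_readmit"] = (
    discharges["readmitted"] * discharges["dual_eligible"].map(weights)
)

# Hospital-level weighted readmission rate, which a payer could compare
# against a benchmark to set penalties (or rewards).
rates = discharges.groupby("hospital_id").agg(
    weighted_rate=("weighted_readmit", "mean"),
    raw_rate=("readmitted", "mean"),
)
print(rates)
```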
We just did a little ditty about this: https://soundcloud.com/zdoggmd/readmission-remix-r-kelly-parody
Great question. We haven’t looked but we could. My gut says no — that the reductions are not coming from a lot of the changes that hospitals are making within their walls — but instead from what they are doing after the patient leaves. But, that’s a data-free conjecture on my part.
What? You want to think about what would be optimal for patient care? That seems a little radical to me 🙂 Yes, I agree that something like that, where you hand off care from acute to rehab to test out whether the patient is ready (with quick, seamless pickup if the patient isn't), would be ideal. It would shorten LOS, improve functional status, and almost surely leave patients much better off. But, as Bobby says below, the word "billing" does come to mind and makes this notion untenable in the current environment.
Great hypothesis, and that is possible. My suspicion is that it is the way the penalties are shifting towards more surgical conditions. But, that said, I agree with the idea that observation stays are part of the explanation (though my sense from the data is that it's only a modest part).
Hi, I like what you've done here. I think it may be interesting to ponder how many observation units these teaching hospitals have. We know that observation units, for some conditions, 1) decrease LOS and costs and, 2) more obviously, don't count towards readmissions. We've published data showing that there are more observation units in urban and academic settings. Do you think that could be a piece of the puzzle in determining the source of the improvement?
Interesting. The word “billing” comes to mind.
Just a thought experiment: If an acute care hospital were physically in the same building as, and owned, its own SNF, to which it always referred its discharges, would it not be medically good to have lots of discharges and readmissions?
The hospital would then be essentially testing or pinging the SNF by discharging patients early to see how they would do in that less intense environment. Back and forth they would go and there would be no aspersion cast on readmission at all.
Thanks for sharing that, Bobby. Exactly what I was looking for.
When a hospital improves its readmit %, shouldn't its LOSs become longer? At least for those DRGs used in calculating the %? After all, everything a hospital does to improve its readmit % takes time: waiting for healing, waiting for additional diagnostic results and treatments to bite in, waiting for co-morbidities to be found and treated, waiting for doc and patient to psychologically prepare for discharge. Otherwise a hospital could improve its stats by simply intimidating its SNFs or nursing referrals by saying "If you send these folks back to us, we are going to use other services!"
Does one find slightly longer LOSs when hospitals do improve readmit percentages?
I served 3 tenures on and off with the Nevada-Utah Medicare QIO from 1993 to 2013. My first 1993 assignment was to grind through "HCFA" data (UB-82 claims data) looking at 5- and 30-day readmits; e.g.,
http://www.bgladd.com/papers/NevadaPeerReview9193.PDF, see page 9.
Maybe we’ll finally make some headway. It was a priority QIO issue every time I worked there.