The notion that hospitals can reduce readmissions, and that punishing them for
“excess” readmissions will get them to do that, became conventional wisdom
during the 2000s on the basis of very little evidence. The Medicare Payment
Advisory Commission (MedPAC) urged Congress to enact the Hospital Readmissions
Reduction Program (HRRP) beginning in 2007, and in 2010 Congress did so. State
Medicaid programs and private insurers quickly adopted similar programs.
The rapid adoption of readmission-penalty programs, without evidence confirming they can work, has created widespread concern that these programs are inducing hospitals to increase utilization of emergency rooms and observation units in order to reduce readmissions within 30 days of discharge (the measure adopted by the Centers for Medicare and Medicaid Services [CMS] in its final rule on the HRRP), and that this in turn may be harming sicker patients. Determining whether hospitals are gaming the HRRP and other readmission-penalty schemes by diverting patients to ERs and observation units (and perhaps by other means) should be a high priority for policy-makers.
In Part I of this series I proposed to address the question of whether hospitals
are gaming the HRRP by asking (a) does research exist describing methods by
which hospitals can reduce readmissions under the HRRP and, in the event the
answer is yes, (b) does that research demonstrate that those methods cost no
more than hospitals save. If the answer to the first question is no, that would
lend credence to the argument that the HRRP and other readmission-penalty
schemes are contributing to rising rates of emergency visits and observation
stays. If the answer to the second question is also no, that would lend even more
credence to the argument that hospitals are gaming the HRRP.
In Part I, I noted that proponents of readmission penalties, including MedPAC and the Yale New Haven Health Services Corporation (hereafter the “Yale group”), have claimed or implied that hospitals have no excuse for not reducing readmission rates because research has already revealed numerous methods of reducing readmissions without gaming. I also noted many experts disagree, and quoted a 2019 statement by the Agency for Healthcare Research and Quality that “there is no consensus” on what it is hospitals are supposed to do to reduce readmissions.
In this article, I review the research MedPAC cited in its June 2007 report to
Congress, the report that the authors of the Affordable Care Act (ACA) cited in
Section 3025 (the section that instructed CMS to establish the HRRP). In Part
III of this series I will review the studies cited by the Yale group in their
2011 report to CMS recommending the algorithm by which CMS calculates “excess”
readmissions under the HRRP. We will see that the research these two groups
relied upon did not justify support for the HRRP, and did not describe
interventions hospitals could use to reduce readmissions as the HRRP defines
“readmission.” The few studies cited by these groups that did describe an
intervention that could reduce readmissions:
The notion that hospital readmission rates are a “quality” measure reached the status of conventional wisdom by the late 2000s. In their 2007 and 2008 reports to Congress, the Medicare Payment Advisory Commission (MedPAC) recommended that Congress authorize a program that would punish hospitals for “excess readmissions” of Medicare fee-for-service (FFS) enrollees. In 2010, Congress accepted MedPAC’s recommendation and, in Section 3025 of the Affordable Care Act (ACA) (p. 328), ordered the Centers for Medicare and Medicaid Services (CMS) to start the Hospital Readmissions Reduction Program (HRRP). Section 3025 instructed CMS to target heart failure (HF) and other diseases MedPAC listed in their 2007 report.  State Medicaid programs and the insurance industry followed suit.
Today, twelve years after MedPAC recommended the HRRP and seven years after CMS implemented it, it is still not clear how hospitals are supposed to reduce the readmissions targeted by the HRRP: all unplanned readmissions occurring within 30 days of the discharge of patients diagnosed with HF and five other conditions. It is not even clear that hospitals have reduced return visits to hospitals within 30 days of discharge. The ten highly respected organizations that participated in CMS’s first “accountable care organization” (ACO) demonstration, the Physician Group Practice (PGP) Demonstration (which ran from 2005 to 2010), were unable to reduce readmissions (see Table 9.3, p. 147, of the final evaluation). The research consistently shows, however, that at some point in the 2000s many hospitals began to cut 30-day readmissions of Medicare FFS patients. But research also suggests that this decline was achieved in part by diverting patients to emergency rooms and observation units, and that the rising rate of ER visits and observation stays may be putting sicker patients at risk. Responses like this to incentives imposed by regulators, employers, etc. are often called “unintended consequences” and “gaming.”
To determine whether hospitals
are gaming the HRRP, it would help to know, first of all, whether it’s possible
for hospitals to reduce readmissions, as the HRRP defines them, without gaming.
If there are few or no proven methods of reducing readmissions by improving
quality of care (as opposed to gaming), it is reasonable to assume the HRRP has
induced gaming. If, on the other hand, (a) proven interventions exist that reduce
readmissions as the HRRP defines them, and (b) those interventions cost less
than, or no more than, the savings hospitals would reap from the intervention
(in the form of avoided penalties or shared savings), then we should expect much
less gaming. (As long as risk-adjustment of readmission rates remains crude, we
cannot expect gaming to disappear completely even if both conditions are met.)
The message comes in over the office Slack line at 1:05 pm. There are four patients in rooms, one of them new, and three more in the waiting room. Really, not an ideal time to deal with this particular message.
“Kathy the home care nurse for Mrs. C called and said her weight yesterday was 185, today it is 194, she has +4 pitting edema, heart rate 120, BP 140/70 standing, 120/64 sitting”
I know Mrs. C well. She has severe COPD from smoking for 45 of the last 55 years. Every breath looks like an effort because it is. The worst part of it all is that Mrs. C returned home from the hospital just days ago.
The Hospital Readmissions Reduction Program (HRRP), one of numerous pay-for-performance (P4P) schemes authorized by the Affordable Care Act, was sprung on the Medicare fee-for-service population on October 1, 2012 without being pre-tested and with no other evidence indicating what it is hospitals are supposed to do to reduce readmissions. Research on the impact of the HRRP conducted since 2012 is limited even at this late date, but the research suggests the HRRP has harmed patients, especially those with congestive heart failure (CHF) (CHF, heart attack, and pneumonia were the first three conditions covered by the HRRP). The Medicare Payment Advisory Commission (MedPAC) disagrees. MedPAC would have us believe the HRRP has done what MedPAC hoped it would do when they recommended it in their June 2007 report to Congress (see discussion of that report in Part I of this two-part series). In Chapter 1 of their June 2018 report to Congress, MedPAC claimed the HRRP has reduced 30-day readmissions of targeted patients without raising the mortality rate.
MedPAC is almost certainly wrong about that. What is indisputable is that MedPAC’s defense of the HRRP in that report was inexcusably sloppy and, therefore, not credible. To illustrate what is wrong with the MedPAC study, I will compare it with an excellent study published by Ankur Gupta et al. in JAMA Cardiology in November 2017. Like MedPAC, Gupta et al. reported that 30-day CHF readmission rates dropped after the HRRP went into effect. Unlike MedPAC, Gupta et al. reported an increase in mortality rates among CHF patients. 
We will see that the study by Gupta et al. is more credible than MedPAC’s for several reasons, the most important of which are: (1) Gupta et al. separated in-patient from post-discharge mortality, while MedPAC collapsed those two measures into one, thus disguising any increase in mortality during the 30 days after discharge; (2) Gupta et al.’s method of controlling for differences in patient health was superior to MedPAC’s because they used medical records data plus claims data, while MedPAC used only claims data.
I will discuss as well research demonstrating that readmission rates have not fallen when the increase in observation stays and readmissions following observations stays are taken into account, and that some hospitals are more willing to substitute observation stays for admissions than others and thereby escape the HRRP penalties.
All this research taken together indicates the HRRP has given CHF patients the worst of all worlds: No reduction in readmissions but an increase in mortality, and possibly higher out-of-pocket costs for those who should have been admitted but were assigned to observation status instead.
Egged on by the Medicare Payment Advisory Commission (MedPAC), Congress has imposed multiple pay-for-performance (P4P) schemes on the fee-for-service Medicare program. MedPAC recommended most of these schemes between 2003 and 2008, and Congress subsequently imposed them on Medicare, primarily via the Affordable Care Act (ACA) of 2010 and the Medicare Access and CHIP Reauthorization Act (MACRA) of 2015.
MedPAC’s five-year P4P binge began with the endorsement of the general concept of P4P at all levels – hospital, clinic, and individual physician – in a series of reports to Congress in 2003, 2004, and 2005. This was followed by endorsements of vaguely described iterations of P4P, notably the “accountable care organization” in 2006, punishment of hospitals for “excess” readmissions in 2007, the “medical home” in 2008, and the “bundled payment” in 2008. None of these proposals was backed up by anything resembling evidence.
Congress endorsed all these schemes without asking for evidence or further details. Congress dealt with the vagueness of, and lack of evidence supporting, MedPAC’s proposals simply by ordering CMS to figure out how to make them work. CMS staff added a few more details to these proposals in the regulations they drafted, but the details were petty and arbitrarily adopted (how many primary doctors had to be in an ACO, how many patients had to sit on the advisory committee of a “patient-centered medical home,” how many days had to expire between a discharge and an admission to constitute a “readmission,” etc.).
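Stripped of the HRRP’s planned-readmission exclusions and risk adjustment, the date-window test at the core of the regulatory definition of “readmission” is a simple calculation. A minimal sketch, with an invented function name, that checks only the 30-day window between discharge and the next admission:

```python
from datetime import date

READMISSION_WINDOW_DAYS = 30  # the window CMS adopted in the HRRP final rule

def is_readmission(discharge: date, next_admission: date,
                   window_days: int = READMISSION_WINDOW_DAYS) -> bool:
    """True if the next admission falls within the window after discharge.

    Note: the real HRRP measure also excludes planned readmissions and
    applies risk adjustment; this sketch checks only the date window.
    """
    gap_days = (next_admission - discharge).days
    return 0 < gap_days <= window_days

# A return on day 30 counts as a readmission; a return on day 31 does not.
print(is_readmission(date(2012, 10, 1), date(2012, 10, 31)))  # True
print(is_readmission(date(2012, 10, 1), date(2012, 11, 1)))   # False
```

The sharp cutoff is the point: a patient who returns on day 31 is invisible to the measure, which is one reason diversion to observation status (which avoids the admission entirely) is such an effective way to game it.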
New rule, new culture
This process – invention of nebulous P4P schemes by MedPAC, unquestioning endorsement by Congress, and clumsy implementation by CMS – is not working. Every one of the proposals listed above has failed to cut costs (with the possible exception of bundled payments for hip and knee replacements) and may be doing more harm than good to patients. These proposals are failing for an obvious reason – MedPAC and Congress subscribe to the belief that health policies do not need to be tested for effectiveness and safety before they are implemented. In their view, mere opinion suffices.
This has to stop. In this two-part essay I argue for a new rule: MedPAC shall not propose, and Congress shall not authorize, any program that has not been shown by rigorously conducted experiments to be effective at lowering cost without harming patients, improving quality, or both. This will require a culture change at MedPAC. Since its formation in 1997, MedPAC has taken the attitude that it does not have to provide any evidence for its proposals, nor to think through its proposals in enough detail for them to be tested. Over the last two decades MedPAC has demonstrated repeatedly that it believes merely opining about a poorly described solution is sufficient to discharge its obligation to Congress, taxpayers, and Medicare enrollees.
When persons are admitted to a hospital, insurers’ payment rates are based on the diagnosis, not the number of days in the hospital (the “length of stay”). As a result, once the admission is triggered, the hospital has an important economic incentive to discharge the patient as quickly as possible. My physician colleagues used to refer to this as “treat, then street.”
Unfortunately, discharging patients too soon can result in readmissions. That’s why I have agreed with others that diagnosis-based payment systems and a policy of “no pay” for readmissions were working at cross purposes. Unified bundled payment approaches like this seem to be a good start.
But that’s all theoretical. What does the science have to say?
Peter Kaboli and colleagues looked at the push-pull relationship between diagnosis-based payment incentives and the likelihood of readmissions in a scientific paper just published in the Annals of Internal Medicine.
The authors used the U.S. Veterans Administration’s (VA) “Patient Treatment Files” to examine length of stay versus readmissions at 129 VA hospitals. The sample consisted of over 4 million admissions and readmissions (defined as occurring within 30 days and not involving another institution) from 1997 to 2010. The mean age started out at 63.8 years and increased to 65.5 years, while the proportion of persons aged 85 years or older increased from 2.5% to 8.8%. Over the years, admissions also grew more complicated, with a higher rate of co-morbid conditions such as kidney disease (from 5% to 16%).
As length of stay went down, readmissions should have gone up, right?
I’ve been getting emails about the NY Times piece and my quotation that the penalties for readmissions are “crazy.” It’s worth thinking about why the ACA gets hospital penalties on readmissions wrong, what we might do to fix it – and where our priorities should be.
A year ago, on a Saturday morning, I saw Mr. “Johnson” who was in the hospital with pneumonia. He was still breathing hard but tried to convince me that he was “better” and ready to go home. I looked at his oxygenation level, which was borderline, and suggested he needed another couple of days in the hospital. He looked crestfallen. After a little prodding, he told me why he was anxious to go home: his son, who had been serving in the Army in Afghanistan, was visiting for the weekend. He hadn’t seen his son in a year and probably wouldn’t again for another year. Mr. Johnson wanted to spend the weekend with his kid.
I remember sitting at his bedside, worrying that if we sent him home, there was a good chance he would need to come back. Despite my worries, I knew I needed to do what was right by him. I made clear that although he was not ready to go home, I was willing to send him home if we could make a deal. He would have to call me multiple times over the weekend and be seen by someone on Monday. Because it was Saturday, it was hard to arrange all the services he needed, but I got him a tank of oxygen to go home with, changed his antibiotics so he could be on an oral regimen (as opposed to IV) and arranged a Monday morning follow-up. I also gave him my cell number and told him to call me regularly.
The debate over pay for performance in healthcare gets progressively more interesting, and confusing. And, with Medicare’s recent launch of its value-based purchasing and readmission penalty programs, the debate is no longer theoretical.
If we weren’t talking about the central policy question of a field as important as healthcare, we could call this a draw and move on. But the stakes are too high, so it’s worth taking a moment to review what we know.
In the U.S., the main test of P4P has been Medicare’s Hospital Quality Incentive Demonstration (HQID) program. A recent analysis of this program, which offered relatively small performance-based bonuses to a sample of 252 hospitals in the large Premier network, found that, after 6 years, hospitals in the intervention group had no better outcomes than those (3363 hospitals) in the control arm. Prior papers from the HQID demonstrated mild improvements in adherence to some process measures, but – as in a disconcerting number of studies – this did not translate into meaningful improvements in hard outcomes such as mortality.
As a cardiac electrophysiologist, I’m pretty far removed from public policy. But I have to admit that I was interested in the latest move by CMS to cut their Medicare payment rates to hospitals by invoking pay cuts for hospital readmissions. The Chicago Tribune‘s article is enlightening and filled with some interesting anecdotes after the first round of pay cuts were implemented:
(1) The vast majority of Illinois hospitals were penalized (112 of 128)
(2) Heart failure, heart attack, and pneumonia patients were targeted first because they are viewed as “obvious.”
(3) “A lot of places have put a lot of work and not seen improvement,” said Dr. Kenneth Sands, senior vice president for quality at Beth Israel.
(4) Even the nation’s #1 Best Hospital (according to US News and World Report) lost out.
The tendency of government to impose crude performance metrics on hospitals is a well-known phenomenon, but the practice is growing as jurisdictions look for ways to cut their budgets. The latest example is found in Massachusetts.
As reported by the MA Hospital Association:
Governor Deval Patrick’s FY2013 state budget proposal includes $40 million in rate cuts for hospitals. A significant portion of these cuts would be made through highly questionable policy changes. One of the more troubling policies would double penalties on hospitals for re-admissions that occurred in 2010.
The 2012 MassHealth acute hospital RFA – the main contract between the state and hospitals serving Medicaid patients — introduced a new preventable readmission penalty for hospitals that MassHealth determined had higher-than-expected preventable readmission rates.
Inpatient payment rates for 24 hospitals were reduced by 2.2% in FY2012. Now the administration is proposing to double the penalty to 4.4% in FY2013. There are so many things wrong with this. First, as I have reported in the past:
Even if the readmission rate is the right metric to use for comparison purposes, we don’t have a model that would accurately compare one hospital to the others. This suggests that the time is not ripe to use this measure for financial incentives or penalties. It might give the impression of precision, but it is not, in fact, analytically rigorous enough for regulatory purposes.