Medicare Payment Advisory Commission (MedPAC) and other proponents of the
Hospital Readmissions Reduction Program (HRRP) justified their support for the
HRRP with the claim that research had already demonstrated how hospitals could
reduce readmissions for all Medicare fee-for-service patients, not just
for groups of carefully selected patients. In this three-part series, I am
reviewing the evidence for that claim.
We saw in Part I and Part II that the research MedPAC cited in its 2007 report to Congress (the report Congress relied on in authorizing the HRRP) contained no studies supporting that claim. The few studies MedPAC relied on that claimed to examine a successful intervention studied interventions administered to carefully selected patient populations. These populations were limited in two ways: patients had to be discharged with one of a handful of diagnoses (heart failure, for example), and patients had to have characteristics that raised the probability the intervention would work (for example, they had to agree to a home visit, not be admitted from a nursing home, and be able to consent to the intervention).
In this final installment, I review the research cited by the Yale New Haven Health Services Corporation (hereafter the “Yale group”) in their 2011 report to CMS in which they recommended that CMS apply readmission penalties to all Medicare patients regardless of diagnosis and regardless of the patient’s interest in or ability to respond to the intervention. MedPAC at least limited its recommendation (a) to patients discharged with one of seven conditions/procedures and (b) to patients readmitted with diagnoses “related to” the index admission. The Yale group threw even those modest restrictions out the window.
The Yale group recommended what they called a “hospital-wide (all-condition) readmission measure.” Under this measure, penalties would apply to all patients regardless of the condition for which they were admitted and regardless of whether the readmission was related to the index admission (with the exception of planned admissions). “Any readmission is eligible to be counted as an outcome except those that are considered planned,” they stated. (p. 10)  The National Quality Forum (NQF) adopted the Yale group’s recommendation almost verbatim shortly after the Yale group presented their recommendation to CMS.
In their 2007 report, MedPAC offered these examples of related and unrelated readmissions: “Admission for angina following discharge for PTCA [angioplasty]” would be an example of a related readmission, whereas “[a]dmission for appendectomy following discharge for pneumonia” would not. (p. 109) Congress also endorsed the “related” requirement (see Section 3025 of the Affordable Care Act, the section that authorized CMS to establish the HRRP). But the Yale group dispensed with the “related” requirement with an astonishing excuse: They said they just couldn’t find a way to measure “relatedness.” “[T]here is no reliable way to determine whether a readmission is related to the previous hospitalization …,” they declared. (p. 17) Rather than conclude their “hospital-wide” readmission measure was a bad idea, they plowed ahead on the basis of this rationalization: “Our guiding principle for defining the eligible population was that the measure should capture as many unplanned readmissions as possible across a maximum number of acute care hospitals.” (p. 17) Thus, to take one of MedPAC’s examples of an unrelated admission, the Yale group decided hospitals should be punished for an admission for an appendectomy within 30 days after discharge for pneumonia. 
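The mechanics of the "hospital-wide" rule the Yale group proposed are easy to state: any unplanned admission within 30 days of a prior discharge counts against the hospital, whatever the diagnoses. A minimal sketch of that counting rule (hypothetical function and field names; the actual CMS specification adds risk adjustment and a planned-readmission algorithm not modeled here):

```python
from datetime import date

def counts_as_readmission(discharge, next_admit, planned):
    """All-condition rule (simplified): any unplanned admission within
    30 days of a prior discharge counts, regardless of whether the two
    diagnoses are related."""
    if planned:
        return False
    return 0 <= (next_admit - discharge).days <= 30

# MedPAC's example of an unrelated pair: pneumonia discharge followed
# ten days later by an appendectomy admission. Under the hospital-wide
# measure it still counts.
print(counts_as_readmission(date(2011, 3, 1), date(2011, 3, 11), planned=False))  # True
```

Note that the only exclusion the rule honors is the `planned` flag; clinical relatedness never enters the logic, which is precisely the point the Yale group conceded they could not measure.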
The notion that hospitals can reduce readmissions, and that punishing them for
“excess” readmissions will get them to do that, became conventional wisdom
during the 2000s on the basis of very little evidence. The Medicare Payment
Advisory Commission (MedPAC) urged Congress to enact the Hospital Readmissions
Reduction Program (HRRP) beginning in 2007, and in 2010 Congress did so. State
Medicaid programs and private insurers quickly adopted similar programs.
The rapid adoption of readmission-penalty programs without evidence confirming they can work has created widespread concern that these programs are inducing hospitals to increase utilization of emergency rooms and observation units to reduce readmissions within 30 days of discharge (the measure adopted by the Centers for Medicare and Medicaid Services [CMS] in its final rule on the HRRP), and this in turn may be harming sicker patients. Determining whether hospitals are gaming the HRRP and other readmission-penalty schemes by diverting patients to ERs and observation units (and perhaps by other means) should be a high priority for policy-makers. 
In Part I of this series I proposed to address the question of whether hospitals
are gaming the HRRP by asking (a) does research exist describing methods by
which hospitals can reduce readmissions under the HRRP and, in the event the
answer is yes, (b) does that research demonstrate that those methods cost no
more than hospitals save. If the answer to the first question is no, that would
lend credence to the argument that the HRRP and other readmission-penalty
schemes are contributing to rising rates of emergency visits and observation
stays. If the answer to the second question is also no, that would lend even more
credence to the argument that hospitals are gaming the HRRP.
In Part I, I noted that proponents of readmission penalties, including MedPAC and the Yale New Haven Health Services Corporation (hereafter the “Yale group”), have claimed or implied that hospitals have no excuse for not reducing readmission rates because research has already revealed numerous methods of reducing readmissions without gaming. I also noted many experts disagree, and quoted a 2019 statement by the Agency for Healthcare Research and Quality that “there is no consensus” on what it is hospitals are supposed to do to reduce readmissions.
In this article, I review the research MedPAC cited in its June 2007 report to
Congress, the report that the authors of the Affordable Care Act (ACA) cited in
Section 3025 (the section that instructed CMS to establish the HRRP). In Part
III of this series I will review the studies cited by the Yale group in their
2011 report to CMS recommending the algorithm by which CMS calculates “excess”
readmissions under the HRRP. We will see that the research these two groups
relied upon did not justify support for the HRRP, and did not describe
interventions hospitals could use to reduce readmissions as the HRRP defines
“readmission.” The few studies cited by these groups that did describe an
intervention that reduced readmissions did so only for carefully selected
patient populations.
The notion that hospital readmission rates are a “quality” measure reached the status of conventional wisdom by the late 2000s. In their 2007 and 2008 reports to Congress, the Medicare Payment Advisory Commission (MedPAC) recommended that Congress authorize a program that would punish hospitals for “excess readmissions” of Medicare fee-for-service (FFS) enrollees. In 2010, Congress accepted MedPAC’s recommendation and, in Section 3025 of the Affordable Care Act (ACA) (p. 328), ordered the Centers for Medicare and Medicaid Services (CMS) to start the Hospital Readmissions Reduction Program (HRRP). Section 3025 instructed CMS to target heart failure (HF) and other diseases MedPAC listed in their 2007 report.  State Medicaid programs and the insurance industry followed suit.
Today, twelve years after MedPAC recommended the HRRP and seven years after CMS implemented it, it is still not clear how hospitals are supposed to reduce the readmissions targeted by the HRRP, that is, all unplanned readmissions occurring within 30 days of discharge for patients diagnosed with HF and five other conditions. It is not even clear that hospitals have reduced return visits to hospitals within 30 days of discharge. The ten highly respected organizations that participated in CMS’s first “accountable care organization” (ACO) demonstration, the Physician Group Practice (PGP) Demonstration (which ran from 2005 to 2010), were unable to reduce readmissions (see Table 9.3, p. 147 of the final evaluation). The research consistently shows, however, that at some point in the 2000s many hospitals began to cut 30-day readmissions of Medicare FFS patients. But research also suggests that this decline in readmissions was achieved in part by diverting patients to emergency rooms and observation units, and that the rising rate of ER visits and observation stays may be putting sicker patients at risk. Responses like this to incentives imposed by regulators, employers, etc. are often called “unintended consequences” and “gaming.”
To determine whether hospitals
are gaming the HRRP, it would help to know, first of all, whether it’s possible
for hospitals to reduce readmissions, as the HRRP defines them, without gaming.
If there are few or no proven methods of reducing readmissions by improving
quality of care (as opposed to gaming), it is reasonable to assume the HRRP has
induced gaming. If, on the other hand, (a) proven interventions exist that reduce
readmissions as the HRRP defines them, and (b) those interventions cost less
than, or no more than, the savings hospitals would reap from the intervention
(in the form of avoided penalties or shared savings), then we should expect much
less gaming. (As long as risk-adjustment of readmission rates remains crude, we
cannot expect gaming to disappear completely even if both conditions are met.)
The message comes in over the office Slack line at 1:05 pm. There are four patients in rooms, one of them new, and three patients in the waiting room. Really, not an ideal time to deal with this particular message.
“Kathy the home care nurse for Mrs. C called and said her weight yesterday was 185, today it is 194, she has +4 pitting edema, heart rate 120, BP 140/70 standing, 120/64 sitting”
I know Mrs. C well. She has severe COPD from smoking for 45 of the last 55 years. Every breath looks like an effort because it is. The worst part of it all is that Mrs. C returned home from the hospital just days ago.
The day after NBC releases a story on a ‘ground-breaking’ observational study demonstrating caramel macchiatos reduce the risk of death, everyone expects physicians to be experts on the subject. The truth is that most of us hope John Mandrola has written a smart blog on the topic so we know intelligent things to tell patients and family members.
A minority of physicians actually read the original study, and of those who do, even fewer have any real idea of the statistical ingredients used to make it. Imagine not knowing whether the sausage you just ate contained rat droppings. At least there is some hope the tongue may provide an objective measure of the horror within.
Data that emerge from statistical black boxes typically have no neutral arbiter of truth. The process is designed to reveal, from complex data sets, that which cannot be readily seen. The crisis this creates is self-evident: with no objective way of recognizing reality, the proliferation of illusions is not merely possible but inevitable.
Historically, the Centers for Medicare & Medicaid Services’ (CMS) stance on the influence that social determinants of health (SDOH) have on health outcomes has been equal parts signal and noise. In April 2016, the agency announced it would begin adjusting the Medicare Advantage star ratings for dual eligibility and other social factors, amid calls from the managed care industry for greater equity in performance determinations. At the same time, CMS continued to refuse to risk-adjust for SDOH in the Hospital Readmissions Reduction Program (HRRP), despite research supporting the influence of these factors on the readmission rates the HRRP measures.
It wasn’t until Congress interceded with the 21st Century Cures Act that CMS conceded to adjusting for dual eligibility under the new stratified approach to determining HRRP penalties beginning in fiscal year 2019. The new methodology compares hospital readmission performance to peers within the same quintile of dual-eligible payer mix. The debate surrounding the adjustment of incentive-based performance metrics for SDOH is likely to continue, as many feel stratification is a step in the right direction, albeit a small one. Importantly, the Cures Act includes the option of direct risk adjustment for SDOH, as deemed necessary by the Secretary of Health and Human Services.
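Mechanically, stratification replaces a single national benchmark with peer-group benchmarks. A minimal sketch of the idea, with hypothetical data structures (the actual FY2019 methodology is far more detailed, and these function names are invented for illustration):

```python
from statistics import median

def assign_quintiles(dual_share):
    """dual_share: hospital_id -> share of dual-eligible patients (0..1).
    Returns hospital_id -> quintile 1..5 (1 = lowest dual-eligible mix)."""
    ranked = sorted(dual_share, key=dual_share.get)
    n = len(ranked)
    return {h: (i * 5) // n + 1 for i, h in enumerate(ranked)}

def excess_vs_peers(rates, quintiles):
    """Compare each hospital's readmission rate to the median of its own
    quintile rather than to one national benchmark; positive values
    indicate worse-than-peer performance."""
    benchmarks = {
        q: median(rates[h] for h in rates if quintiles[h] == q)
        for q in set(quintiles.values())
    }
    return {h: rates[h] - benchmarks[quintiles[h]] for h in rates}
```

Under a scheme like this, a safety-net hospital with a high dual-eligible mix is judged against similar hospitals rather than the national average, softening (though not eliminating) the penalty bias against hospitals serving poorer populations.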
SDOH are defined as “the conditions in which people are born, grow, live, work and age.” The multidimensional nature of SDOH reaches far beyond poverty, requiring a systemic approach to effectively moderate their effects on health outcomes. The criteria used to identify SDOH include factors that have a defined association with health, exist before the delivery of care, are not determined by the quality of care received and are not readily modifiable by health care providers.
The question of modifiability is central to the debate. In the absence of reimbursement for treating SDOH, providers lack the resources to modify health outcomes attributable to social complexities. Therefore, statistical adjustments are needed to account for differences in these complexities to ensure risk-adjusted performance comparisons of hospitals are accurate.
The Hospital Readmissions Reduction Program (HRRP), one of numerous pay-for-performance (P4P) schemes authorized by the Affordable Care Act, was sprung on the Medicare fee-for-service population on October 1, 2012 without being pre-tested and with no other evidence indicating what it is hospitals are supposed to do to reduce readmissions. Research on the impact of the HRRP conducted since 2012 is limited even at this late date, but the research suggests the HRRP has harmed patients, especially those with congestive heart failure (CHF) (CHF, heart attack, and pneumonia were the first three conditions covered by the HRRP). The Medicare Payment Advisory Commission (MedPAC) disagrees. MedPAC would have us believe the HRRP has done what MedPAC hoped it would do when they recommended it in their June 2007 report to Congress (see discussion of that report in Part I of this two-part series). In Chapter 1 of their June 2018 report to Congress, MedPAC claimed the HRRP has reduced 30-day readmissions of targeted patients without raising the mortality rate.
MedPAC is almost certainly wrong about that. What is indisputable is that MedPAC’s defense of the HRRP in that report was inexcusably sloppy and, therefore, not credible. To illustrate what is wrong with the MedPAC study, I will compare it with an excellent study published by Ankur Gupta et al. in JAMA Cardiology in November 2017. Like MedPAC, Gupta et al. reported that 30-day CHF readmission rates dropped after the HRRP went into effect. Unlike MedPAC, Gupta et al. reported an increase in mortality rates among CHF patients. 
We will see that the study by Gupta et al. is more credible than MedPAC’s for several reasons, the most important of which are: (1) Gupta et al. separated in-patient from post-discharge mortality, while MedPAC collapsed those two measures into one, thus disguising any increase in mortality during the 30 days after discharge; (2) Gupta et al.’s method of controlling for differences in patient health was superior to MedPAC’s because they used medical records data plus claims data, while MedPAC used only claims data.
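A toy arithmetic example (numbers invented for illustration, not taken from either study) shows why collapsing the two phases into one measure can hide a post-discharge increase:

```python
def combined_mortality(inpatient, post_discharge):
    """30-day mortality measured from admission: patients who die in
    hospital, plus survivors who die within 30 days of discharge."""
    return inpatient + (1 - inpatient) * post_discharge

# Hypothetical: inpatient mortality falls from 5% to 4% while
# post-discharge mortality rises from 8% to 9%. The combined
# measure barely moves, masking the post-discharge deterioration.
before = combined_mortality(0.05, 0.08)  # 0.1260
after = combined_mortality(0.04, 0.09)   # 0.1264
print(round(before, 4), round(after, 4))  # prints 0.126 0.1264
```

In this contrived scenario a combined measure would report essentially flat mortality, while a study that separated the two phases, as Gupta et al. did, would flag the post-discharge increase.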
I will also discuss research demonstrating that readmission rates have not fallen once the increase in observation stays and readmissions following observation stays is taken into account, and that some hospitals are more willing than others to substitute observation stays for admissions and thereby escape the HRRP penalties.
All this research taken together indicates the HRRP has given CHF patients the worst of all worlds: No reduction in readmissions but an increase in mortality, and possibly higher out-of-pocket costs for those who should have been admitted but were assigned to observation status instead.
One of the main goals of the Affordable Care Act (ACA), perhaps second only to improving access, was to improve the quality of care in our health system. Now several years out, we are at a point where we can ask some difficult questions as they relate to value and equity. Did the ACA improve quality of care in the ways it intended to? Did it do so for some people, or hospitals, more than others?
How did the ACA Attempt to Improve Quality?
Three particular programs created by the ACA are worth noting in this regard. The Hospital-Acquired Condition Reduction Program (HACRP) took effect on October 1, 2014 and was created to penalize hospitals scoring in the worst quartile for rates of hospital-acquired conditions outlined by CMS. The Hospital Readmissions Reduction Program (HRRP), which began for patients discharged on October 1, 2012, required CMS to reduce payments to short-term, acute-care hospitals for readmissions within 30 days for specific conditions, including acute myocardial infarction, pneumonia, and heart failure. The Medicare Hospital Value-Based Purchasing Program (HVBP), which started in FY2013, was built to improve quality of care for Medicare patients by rewarding acute-care hospitals with incentive payments for improvements on a number of established quality measures related to clinical processes and outcomes, efficiency, safety, and patient experience.