Tag: Kip Sullivan

How are hospitals supposed to reduce readmissions? Part III

By KIP SULLIVAN, JD

The Medicare Payment Advisory Commission (MedPAC) and other proponents of the Hospital Readmissions Reduction Program (HRRP) justified their support for the HRRP with the claim that research had already demonstrated how hospitals could reduce readmissions for all Medicare fee-for-service patients, not just for groups of carefully selected patients. In this three-part series, I am reviewing the evidence for that claim.

We saw in Part I and Part II that the research MedPAC cited in its 2007 report to Congress (the report Congress relied on in authorizing the HRRP) contained no studies supporting that claim. The few studies MedPAC relied on that claimed to have identified a successful intervention studied interventions administered to carefully selected patient populations. Those populations were narrowed in two ways: patients had to be discharged with one of a handful of diagnoses (heart failure, for example), and they had to have characteristics that raised the probability the intervention would work (for example, they had to agree to a home visit, not be admitted from a nursing home, and be able to consent to the intervention).

In this final installment, I review the research cited by the Yale New Haven Health Services Corporation (hereafter the “Yale group”) in their 2011 report to CMS in which they recommended that CMS apply readmission penalties to all Medicare patients regardless of diagnosis and regardless of the patient’s interest in or ability to respond to the intervention. MedPAC at least limited its recommendation (a) to patients discharged with one of seven conditions/procedures and (b) to patients readmitted with diagnoses “related to” the index admission. The Yale group threw even those modest restrictions out the window.

The Yale group recommended what they called a “hospital-wide (all-condition) readmission measure.” Under this measure, penalties would apply to all patients regardless of the condition for which they were admitted and regardless of whether the readmission was related to the index admission (with the exception of planned readmissions). “Any readmission is eligible to be counted as an outcome except those that are considered planned,” they stated. (p. 10) [1] The National Quality Forum (NQF) adopted the Yale group’s recommendation almost verbatim shortly after the Yale group presented it to CMS.

In their 2007 report, MedPAC offered these examples of related and unrelated readmissions: “Admission for angina following discharge for PTCA [angioplasty]” would be an example of a related readmission, whereas “[a]dmission for appendectomy following discharge for pneumonia” would not. (p. 109) Congress also endorsed the “related” requirement (see Section 3025 of the Affordable Care Act, the section that authorized CMS to establish the HRRP). But the Yale group dispensed with the “related” requirement with an astonishing excuse: They said they just couldn’t find a way to measure “relatedness.” “[T]here is no reliable way to determine whether a readmission is related to the previous hospitalization …,” they declared. (p. 17) Rather than conclude their “hospital-wide” readmission measure was a bad idea, they plowed ahead on the basis of this rationalization: “Our guiding principle for defining the eligible population was that the measure should capture as many unplanned readmissions as possible across a maximum number of acute care hospitals.” (p. 17) Thus, to take one of MedPAC’s examples of an unrelated admission, the Yale group decided hospitals should be punished for an admission for an appendectomy within 30 days after discharge for pneumonia. [2]
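To make the counting rule concrete, here is a minimal sketch of the outcome logic as described above: any unplanned readmission within 30 days of the index discharge counts, no matter how unrelated its diagnosis is. This is only an illustration, not the Yale group’s full specification (which also involves risk adjustment and various cohort exclusions), and the field names and the is_planned flag are hypothetical.

```python
# Minimal illustration of the "hospital-wide (all-condition)" counting rule
# described above. Not the Yale group's actual specification; field names
# and the is_planned flag are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class Admission:
    admit_date: date
    discharge_date: date
    diagnosis: str          # e.g. "pneumonia", "appendectomy"
    is_planned: bool = False

def counts_as_readmission(index: Admission, later: Admission) -> bool:
    """True if `later` counts against the hospital under an all-condition
    measure: unplanned and within 30 days of the index discharge."""
    days_out = (later.admit_date - index.discharge_date).days
    return (not later.is_planned) and 0 <= days_out <= 30

# MedPAC's example of an *unrelated* pair still counts under this measure:
index = Admission(date(2011, 3, 1), date(2011, 3, 5), "pneumonia")
later = Admission(date(2011, 3, 20), date(2011, 3, 22), "appendectomy")
print(counts_as_readmission(index, later))  # True
```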

Continue reading…

How are hospitals supposed to reduce readmissions? Part II

By KIP SULLIVAN, JD

The notion that hospitals can reduce readmissions, and that punishing them for “excess” readmissions will get them to do that, became conventional wisdom during the 2000s on the basis of very little evidence. The Medicare Payment Advisory Commission (MedPAC) began urging Congress in 2007 to enact the Hospital Readmissions Reduction Program (HRRP), and in 2010 Congress did so. State Medicaid programs and private insurers quickly adopted similar programs.

The rapid adoption of readmission-penalty programs, without evidence confirming they can work, has created widespread concern that these programs are inducing hospitals to increase utilization of emergency rooms and observation units in order to reduce readmissions within 30 days of discharge (the measure adopted by the Centers for Medicare and Medicaid Services [CMS] in its final rule on the HRRP), and that this in turn may be harming sicker patients. Determining whether hospitals are gaming the HRRP and other readmission-penalty schemes by diverting patients to ERs and observation units (and perhaps by other means) should be a high priority for policy-makers. [1]

In Part I of this series I proposed to address the question of whether hospitals are gaming the HRRP by asking (a) does research exist describing methods by which hospitals can reduce readmissions under the HRRP and, in the event the answer is yes, (b) does that research demonstrate that those methods cost no more than hospitals save. If the answer to the first question is no, that would lend credence to the argument that the HRRP and other readmission-penalty schemes are contributing to rising rates of emergency visits and observation stays. If the answer to the second question is also no, that would lend even more credence to the argument that hospitals are gaming the HRRP.

In Part I, I noted that proponents of readmission penalties, including MedPAC and the Yale New Haven Health Services Corporation (hereafter the “Yale group”), have claimed or implied that hospitals have no excuse for not reducing readmission rates because research has already revealed numerous methods of reducing readmissions without gaming. I also noted many experts disagree, and quoted a 2019 statement by the Agency for Healthcare Research and Quality that “there is no consensus” on what it is hospitals are supposed to do to reduce readmissions.

In this article, I review the research MedPAC cited in its June 2007 report to Congress, the report that the authors of the Affordable Care Act (ACA) cited in Section 3025 (the section that instructed CMS to establish the HRRP). In Part III of this series I will review the studies cited by the Yale group in their 2011 report to CMS recommending the algorithm by which CMS calculates “excess” readmissions under the HRRP. We will see that the research these two groups relied upon did not justify support for the HRRP, and did not describe interventions hospitals could use to reduce readmissions as the HRRP defines “readmission.” The few studies cited by these groups that did describe an intervention that could reduce readmissions:

Continue reading…

How Are Hospitals Supposed to Reduce Readmissions? | Part I

By KIP SULLIVAN

The notion that hospital readmission rates are a “quality” measure reached the status of conventional wisdom by the late 2000s. In their 2007 and 2008 reports to Congress, the Medicare Payment Advisory Commission (MedPAC) recommended that Congress authorize a program that would punish hospitals for “excess readmissions” of Medicare fee-for-service (FFS) enrollees. In 2010, Congress accepted MedPAC’s recommendation and, in Section 3025 of the Affordable Care Act (ACA) (p. 328), ordered the Centers for Medicare and Medicaid Services (CMS) to start the Hospital Readmissions Reduction Program (HRRP). Section 3025 instructed CMS to target heart failure (HF) and other diseases MedPAC listed in their 2007 report. [1] State Medicaid programs and the insurance industry followed suit.

Today, twelve years after MedPAC recommended the HRRP and seven years after CMS implemented it, it is still not clear how hospitals are supposed to reduce the readmissions targeted by the HRRP, which are all unplanned readmissions occurring within 30 days of discharge for patients diagnosed with HF and five other conditions. It is not even clear that hospitals have reduced return visits to hospitals within 30 days of discharge. The ten highly respected organizations that participated in CMS’s first “accountable care organization” (ACO) demonstration, the Physician Group Practice (PGP) Demonstration (which ran from 2005 to 2010), were unable to reduce readmissions (see Table 9.3, p. 147, of the final evaluation). The research consistently shows, however, that at some point in the 2000s many hospitals began to cut 30-day readmissions of Medicare FFS patients. But research also suggests that this decline in readmissions was achieved in part by diverting patients to emergency rooms and observation units, and that the rising rate of ER visits and observation stays may be putting sicker patients at risk. [2] Responses like this to incentives imposed by regulators, employers, etc. are often called “unintended consequences” and “gaming.”

To determine whether hospitals are gaming the HRRP, it would help to know, first of all, whether it’s possible for hospitals to reduce readmissions, as the HRRP defines them, without gaming. If there are few or no proven methods of reducing readmissions by improving quality of care (as opposed to gaming), it is reasonable to assume the HRRP has induced gaming. If, on the other hand, (a) proven interventions exist that reduce readmissions as the HRRP defines them, and (b) those interventions cost less than, or no more than, the savings hospitals would reap from the intervention (in the form of avoided penalties or shared savings), then we should expect much less gaming. (As long as risk-adjustment of readmission rates remains crude, we cannot expect gaming to disappear completely even if both conditions are met.)

Continue reading…

Obsessive Measurement Disorder: Etiology of an Epidemic

By KIP SULLIVAN JD 

Review of The Tyranny of Metrics by Jerry Z. Muller, Princeton University Press, 2018

In the introduction to The Tyranny of Metrics, Jerry Muller urges readers to type “metrics” into Google’s Ngram Viewer, a program that searches through books and other material published over the last five centuries. He tells us we will find that the use of “metrics” soared after approximately 1985. I followed his instructions and confirmed his conclusion (see graph below). We see the same pattern for two other buzzwords that activate Muller’s BS antennae – “benchmarks” and “performance indicators.” [1]

Muller’s purpose in asking us to perform this little exercise is to set the stage for his sweeping review of the history of “metric fixation,” which he defines as an irresistible “aspiration to replace judgment based on personal experience with standardized measurement.” (p. 6) His book takes a long view – he takes us back to the turn of the last century – and a wide view – he examines the destructive impact of the measurement craze on the medical profession, schools and colleges, police departments, the armed forces, banks, businesses, charities, and foreign aid offices.

Foreign aid? Yes, even that profession. According to a long-time expert in that field, employees of government foreign aid agencies have “become infected with a very bad case of Obsessive Measurement Disorder, an intellectual dysfunction rooted in the notion that counting everything in government programs will produce better policy choices and improved management.” (p. 155)

Muller, a professor of history at the Catholic University of America in Washington, DC, makes it clear at the outset that measurement itself is not the problem. Measurement is helpful in developing hypotheses for further investigation, and it is essential in improving anything that is complex or requires discipline. The object of Muller’s criticism is the rampant use of crude measures of efficiency (cost and quality) to dish out rewards and punishment – bonuses and financial penalties, promotion or demotion, or greater or lesser market share. Measurement can be crude because it fails to adjust scores for factors outside the subject’s control, and because it measures only actions that are relatively easy to measure and ignores valuable but less visible behaviors (such as creative thinking and mentoring). The use of inaccurate measurement is not just a waste of money; it invites undesirable behavior in both the measurers and the “measurees.” The measurers receive misleading information and therefore make less effective decisions (for example, “body count” totals told commanders the war in Vietnam was going well), and the subjects of measurement game the measurements (teachers “teach to the test” and surgeons refuse to operate on sicker patients who would have benefited from surgery).

What puzzles Muller, and what motivated him to write this book, is why faith in the inappropriate use of measurement persists in the face of overwhelming evidence that it doesn’t work and has toxic consequences to boot. This mulish persistence in promoting measurement that doesn’t work and often causes harm (including driving good teachers and doctors out of their professions) justifies Muller’s harsh characterization of measurement mavens with phrases like “obsession,” “fixation,” and “cult.” “[A]lthough there is a large body of scholarship in the fields of psychology and economics that call into question the premises and effectiveness of pay for measured performance, that literature seems to have done little to halt the spread of metric fixation,” he writes. “That is why I wrote this book.” (p. 13)

Continue reading…

New Study: Medicare’s Readmission Penalties May Be Killing Patients

By KIP SULLIVAN JD 

On the morning of December 21, I opened my copy of the New York Times to find an op-ed that said almost exactly what I had said in a two-part article The Health Care Blog posted two weeks earlier. The op-ed criticized the Hospital Readmissions Reduction Program (HRRP), one of dozens of “value-based payment” programs imposed on the Medicare fee-for-service program by the Affordable Care Act. The HRRP punishes hospitals if their rate of readmissions within 30 days following discharge exceeds the national average. The subtitle of the op-ed was, “A well-intentioned program created by the Affordable Care Act may have led to patient deaths.”
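The mechanism criticized here can be shown with a minimal sketch of the penalty trigger as this post describes it: a hospital is flagged when its 30-day readmission rate exceeds the national average. CMS’s actual rule relies on risk-adjusted excess readmission ratios and a capped payment reduction rather than a raw comparison, and the hospital names and rates below are purely hypothetical.

```python
# Simplified sketch of the penalty trigger as described above: a hospital is
# flagged when its 30-day readmission rate exceeds the national average.
# (CMS's actual rule uses risk-adjusted excess readmission ratios and a
# capped payment reduction; all numbers here are hypothetical.)

def readmission_rate(readmissions: int, discharges: int) -> float:
    return readmissions / discharges

national_average = 0.18  # hypothetical national 30-day readmission rate

hospitals = {
    "Hospital A": readmission_rate(190, 1000),  # 0.19 -> flagged
    "Hospital B": readmission_rate(160, 1000),  # 0.16 -> not flagged
}

for name, rate in hospitals.items():
    penalized = rate > national_average
    print(f"{name}: rate={rate:.2f}, penalized={penalized}")
```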

The first half of the op-ed made three points: (1) The HRRP appears to have reduced readmissions by raising the rate of observation stays and visits to emergency rooms; (2) the penalties imposed by the Centers for Medicare and Medicaid Services (CMS) for “excessive readmissions” have fallen disproportionately on “safety net hospitals with limited resources”; and (3) “there is growing evidence that … death rates may be rising.”

That’s exactly what I said in articles published here on December 6 and December 7. In Part I, I described the cavalier manner in which the Medicare Payment Advisory Commission (MedPAC) endorsed the HRRP in its June 2007 report to Congress. In Part II, I criticized the methodology MedPAC used to defend the HRRP in its June 2018 report to Congress, and I compared that report to an excellent study of the HRRP published in JAMA Cardiology by Ankur Gupta et al., which suggested the HRRP is raising mortality rates. In its June 2018 report, MedPAC had claimed the HRRP has reduced the rate at which patients targeted by the HRRP were readmitted within 30 days after discharge without increasing mortality. Gupta et al., on the other hand, found that for one group of targeted patients – those with congestive heart failure (CHF) – mortality went up as 30-day readmissions went down.

Continue reading…

Part II | MedPAC’s Proposed “Reforms” Should Be Tested Before They’re Implemented: CMS’s Hospital Readmissions Reduction Program Is Exhibit A

By KIP SULLIVAN JD 

The Hospital Readmissions Reduction Program (HRRP), one of numerous pay-for-performance (P4P) schemes authorized by the Affordable Care Act, was sprung on the Medicare fee-for-service population on October 1, 2012 without being pre-tested and with no other evidence indicating what it is hospitals are supposed to do to reduce readmissions. Research on the impact of the HRRP conducted since 2012 is limited even at this late date [1], but the research suggests the HRRP has harmed patients, especially those with congestive heart failure (CHF). (CHF, heart attack, and pneumonia were the first three conditions covered by the HRRP.) The Medicare Payment Advisory Commission (MedPAC) disagrees. MedPAC would have us believe the HRRP has done what MedPAC hoped it would do when they recommended it in their June 2007 report to Congress (see discussion of that report in Part I of this two-part series). In Chapter 1 of their June 2018 report to Congress, MedPAC claimed the HRRP has reduced 30-day readmissions of targeted patients without raising the mortality rate.

MedPAC is almost certainly wrong about that. What is indisputable is that MedPAC’s defense of the HRRP in that report was inexcusably sloppy and, therefore, not credible. To illustrate what is wrong with the MedPAC study, I will compare it with an excellent study published by Ankur Gupta et al. in JAMA Cardiology in November 2017. Like MedPAC, Gupta et al. reported that 30-day CHF readmission rates dropped after the HRRP went into effect. Unlike MedPAC, Gupta et al. reported an increase in mortality rates among CHF patients. [2]

We will see that the study by Gupta et al. is more credible than MedPAC’s for several reasons, the most important of which are: (1) Gupta et al. separated in-patient from post-discharge mortality, while MedPAC collapsed those two measures into one, thus disguising any increase in mortality during the 30 days after discharge; (2) Gupta et al.’s method of controlling for differences in patient health was superior to MedPAC’s because they used medical records data plus claims data, while MedPAC used only claims data.
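A hypothetical numeric example makes point (1) concrete: if in-hospital deaths fall while post-discharge deaths rise by a similar amount, a combined mortality measure can stay flat, and the post-discharge increase disappears from view. The figures below are invented solely for illustration.

```python
# Hypothetical numbers, invented solely to illustrate point (1) above: a
# combined mortality measure can look flat even when post-discharge deaths rise.
cohort = 1000  # CHF index admissions in each period (hypothetical)

before = {"in_hospital_deaths": 50, "post_discharge_deaths_30d": 60}
after  = {"in_hospital_deaths": 40, "post_discharge_deaths_30d": 70}

for label, period in [("before HRRP", before), ("after HRRP", after)]:
    combined = period["in_hospital_deaths"] + period["post_discharge_deaths_30d"]
    print(f"{label}: in-hospital {period['in_hospital_deaths']/cohort:.1%}, "
          f"post-discharge {period['post_discharge_deaths_30d']/cohort:.1%}, "
          f"combined {combined/cohort:.1%}")

# The combined rate is 11.0% in both periods, yet the post-discharge rate rose
# from 6.0% to 7.0% -- invisible unless the two components are reported separately.
```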

I will also discuss research demonstrating that readmission rates have not fallen when the increase in observation stays and in readmissions following observation stays is taken into account, and that some hospitals are more willing than others to substitute observation stays for admissions and thereby escape the HRRP penalties.

All this research taken together indicates the HRRP has given CHF patients the worst of all worlds: No reduction in readmissions but an increase in mortality, and possibly higher out-of-pocket costs for those who should have been admitted but were assigned to observation status instead.

Continue reading…

MedPAC’s Proposed “Reforms” Should Be Tested Before They’re Implemented: CMS’s Hospital Readmissions Reduction Program Is Exhibit A

By KIP SULLIVAN JD 

Egged on by the Medicare Payment Advisory Commission (MedPAC), Congress has imposed multiple pay-for-performance (P4P) schemes on the fee-for-service Medicare program. MedPAC recommended most of these schemes between 2003 and 2008, and Congress subsequently imposed them on Medicare, primarily via the Affordable Care Act (ACA) of 2010 and the Medicare Access and CHIP Reauthorization Act (MACRA) of 2015.

MedPAC’s five-year P4P binge began with the endorsement of the general concept of P4P at all levels – hospital, clinic, and individual physician – in a series of reports to Congress in 2003, 2004, and 2005. This was followed by endorsements of vaguely described iterations of P4P, notably the “accountable care organization” in 2006 [1], punishment of hospitals for “excess” readmissions in 2007 [2], the “medical home” in 2008, and the “bundled payment” in 2008. None of these proposals were backed up by anything resembling evidence.

Congress endorsed all these schemes without asking for evidence or further details. Congress dealt with the vagueness of, and lack of evidence supporting, MedPAC’s proposals simply by ordering CMS to figure out how to make them work. CMS staff added a few more details to these proposals in the regulations they drafted, but the details were petty and arbitrarily adopted (how many primary doctors had to be in an ACO, how many patients had to sit on the advisory committee of a “patient-centered medical home,” how many days had to expire between a discharge and an admission to constitute a “readmission,” etc.).

New rule, new culture

This process – invention of nebulous P4P schemes by MedPAC, unquestioning endorsement by Congress, and clumsy implementation by CMS – is not working. Every one of the proposals listed above has failed to cut costs (with the possible exception of bundled payments for hip and knee replacements) and may be doing more harm than good to patients. These proposals are failing for an obvious reason – MedPAC and Congress subscribe to the belief that health policies do not need to be tested for effectiveness and safety before they are implemented. In their view, mere opinion suffices.

This has to stop. In this two-part essay I argue for a new rule: MedPAC shall not propose, and Congress shall not authorize, any program that has not been shown by rigorously conducted experiments to lower costs without harming patients, to improve quality, or both. This will require a culture change at MedPAC. Since its formation in 1997, MedPAC has taken the attitude that it does not have to provide any evidence for its proposals and does not have to think through its proposals in enough detail for them to be tested. Over the last two decades MedPAC has demonstrated repeatedly that it believes merely opining about a poorly described solution is sufficient to discharge its obligation to Congress, taxpayers, and Medicare enrollees.

Continue reading…

The verdict is in: All three of CMS’s “medical home” demonstrations have failed

Between September of 2016 and last month, CMS released “final evaluations” of all three of its “medical home” demonstrations. All three demos failed.

This spells bad news not just for the “patient-centered medical home” (PCMH) project, but for MACRA. The PCMH, along with the ACO and the bundled payment (BP), is one of the three main “alternative payment models” (APMs) within which doctors are supposed to be able to find shelter from the financial penalties inflicted by the MIPS (Merit-based Incentive Payment System) program, which was recently declared unworkable by the Medicare Payment Advisory Commission. Medicare ACOs and virtually all Medicare BP programs are also failing. Thus, we may conclude what some predicted a long time ago – that neither arm of MACRA (the toxic MIPS program and the byzantine APM program) will work.

In this post I describe each of CMS’s three PCMH demos, review the findings of the final evaluations of the three demos, and then explore the reasons why all three demos failed. I’ll conclude that the most fundamental reason is that the PCMH is so poorly defined no one, including the doctors inside the PCMHs, knows what it’s supposed to do. That’s not to say that the hopes and dreams of PCMH proponents were never clear. They have always been clear. PCMH proponents have said over and over the PCMH is supposed to lower costs and improve care. But a clear expression of hopes and dreams is not the same thing as a clear description of what it is you’re dreaming about.

Continue reading…

Why Do We Need ACOs and Insurance Companies?

Six years ago Ezekiel Emanuel and Jeffrey Liebman made the foolish prediction that ACOs would eat the insurance industry’s lunch. “By 2020, the American health insurance industry will be extinct,” they wrote. “Insurance companies will be replaced by accountable care organizations….”  This would happen, they argued, because ACOs are just so darned good at lowering costs compared with insurance companies.

The first Medicare ACO programs began in 2012. Today there are 800 to 1,000 ACOs in business. [1] But ACOs aren’t even close to displacing the insurance industry. The most obvious reason is they don’t want to be insurance companies – they don’t want to bear full insurance risk. And the reason for that is they can’t cut costs. The performance of the Medicare ACOs, which are the only ACOs for which we have reliable data, illustrates both problems: Very few want to accept “downside risk” (the risk of losing money if they can’t cut costs); and they are incapable of cutting costs.

ACO hype confronts reality: Reality wins

Anyone paying attention to the research knew even before 2012 that ACOs wouldn’t cut costs for a general population (as opposed to a small slice of the population that is very sick). The Physician Group Practice Demonstration, which was widely seen as the first test of the ACO concept, raised Medicare spending. According to the final evaluation of the demonstration, the ten participating ACOs raised Medicare’s costs by 1.2 percent over the five years the demonstration ran (2005-2010), and it might have been worse if the ACOs hadn’t upcoded. [2] This failure to cut costs occurred despite the fact that the ten participating “group practices”/ACOs were very experienced in managing risk. They had names anyone who studies health policy would recognize, including Dartmouth-Hitchcock Clinic, Geisinger Clinic, and Marshfield Clinic. According to the final report on the demo, “Seven of the ten participants had currently or previously owned a health maintenance organization ….” (p. 15)

Continue reading…

Practicing Medicine While Black (Part II)

Managed care advocates see quality problems everywhere and resource shortages nowhere. If the Leapfrog Group, the Medicare Payment Advisory Commission, or some other managed care advocate were in charge of explaining why a high school football team lost to the New England Patriots, their explanation would be “poor quality.”

If a man armed with a knife lost a fight to a man with a gun, ditto: “Poor quality.” And their solution would be more measurement of the “quality,” followed by punishment of the losers for getting low grades on the “quality” report card and rewards for the winners. The obvious problem – a mismatch in resources – and the damage done to the losers by punishing them would be studiously ignored.

This widespread, willful blindness to the role that resource disparities play in creating ethnic and income disparities and other problems, and the concomitant widespread belief that all defects in the US health care system are due to insufficient “quality,” is difficult to explain. I will attempt to lay out the rudiments of an explanation in this essay.

In my first article in this two-part series, I presented evidence demonstrating that “pay-for-performance” (P4P) and “value-based purchasing” (VBP) (rewarding and punishing providers based on crude measures of cost and quality) punish providers who treat a disproportionate share of the poor and the sick.

Continue reading…
