Pay for performance, the catchall term for policies that purport to pay doctors and hospitals based on quality and cost measures, has been taking a bashing.
Last November, University of Pittsburgh and Harvard researchers published a major study in Annals of Internal Medicine showing that a Medicare pay-for-performance program did not improve quality or reduce cost and, to make matters worse, it actually penalized doctors for caring for the poorest and sickest patients because their “quality scores” suffered. In December, Ankur Gupta and colleagues reported that a Medicare program that rewards and punishes hospitals based on arbitrary limits on the number of hospital admissions of heart failure patients may have increased death rates. On New Year’s Day, the New York Times reported that penalties for “inappropriate care” concocted by Veterans Affairs induced an Oregon hospital to deny acute medical care to its sickest patients, including an 81-year-old “malnourished and dehydrated” vet with skin ulcers and broken ribs.
And just three weeks ago, the Medicare Payment Advisory Commission recommended that Congress repeal a Medicare pay-for-performance program, imposed by Congress in 2015, because the program is costly and ineffective.
This bad news comes on top of a decade of less-publicized research indicting policies intended to reward and penalize doctors based on measures — most of them inaccurate — of their cost and quality. That research demonstrates that penalties against doctors:
- Encourage doctors and hospitals to avoid or “fire” sicker patients who drag down quality scores due to factors outside physicians’ control
- Cause some doctors to stop using lifesaving treatments if they don’t result in bonuses
- Create interruptions in needed medical care
- Reduce job satisfaction and undermine altruism and professionalism among doctors
- Cause doctors to game quality measures. For example, a Medicare program that punished hospitals for hospital-acquired infections actually induced some hospitals to characterize infections acquired after admission as “present upon admission” or to simply not report the infection rather than reduce actual infection rates.
Subjecting doctors and hospitals to carrots and sticks hasn’t worked for several reasons. The most fundamental one: Clinician skill is not the only factor that determines the quality of care. Consider one widely used performance measure: the percent of patients diagnosed with high blood pressure whose blood pressure is brought under control. Doctors who treat older, sicker, and poorer patients with high blood pressure will inevitably score worse on this so-called quality measure than doctors who treat healthier and higher-income patients.
This divergence between actual and measured skill will happen — regardless of economic incentives — because of factors outside physicians’ control. These include patients’ health, genes, income, ability and willingness to exercise, access to health insurance, and stressors at home and work. In other words, this “performance” measure is not a measure of quality but a mishmash of many factors, only one of which might be physician skill.
The use of such crude performance measures creates several destructive side effects, most notably harm to patients. This harm is inflicted in two ways. First, doctors who treat a disproportionate share of sicker and poorer patients are the most likely to be hit with penalties and therefore end up with reduced resources with which to treat their patients. Second, the certainty that sicker and poorer patients drag down doctors’ scores causes some doctors to avoid treating these patients, causing serious preventable illness and additional medical costs.
With all the bad news about pay-for-performance programs and their destructive effects, it would be easy to assume that the concept will soon die a well-deserved death. In an editorial accompanying the Annals of Internal Medicine study, Harvard’s Ashish Jha and Boston University’s Austin Frakt, both of whom had previously expressed sympathy for paying bonuses, argued that it was time to abandon pay-for-performance programs. The Annals study “should be the final nail in the coffin of the current generation of P4P [pay-for-performance],” they wrote.
Yet we aren’t celebrating the death of this policy because evidence has never mattered to its proponents. Bonus-and-penalty policies became wildly popular among policymakers and the insurance industry, even though there was no evidence supporting the fad when it took off in the early 2000s. Although research indicting pay for performance has piled up since then, policymakers and academic cheerleaders have either ignored it or argued that pay for performance only needs tweaking.
But their suggested tweaks, such as increasing payments to doctors, don’t work. A nationwide incentive and penalty program in the United Kingdom paid an extra $40,000 per year on average to family doctors and still failed to improve care.
In the early 2000s, pay for performance was endorsed by influential groups and individuals, including the Medicare Payment Advisory Commission and Donald Berwick, who was later to become President Obama’s administrator of the Centers for Medicare and Medicaid Services. These endorsements cited no research. As one review paper put it in 2006, pay-for-performance programs “are being implemented in a near-scientific vacuum.”
Despite the lack of evidence, proponents hyped the costly policy with great confidence. “There’s no question that pay for performance will work,” said Thomas Scully, CMS administrator under President George W. Bush, in 2003. Berwick, who had declared in 1995 that pay-for-performance policies are “toxic,” “naïve,” and “absolutely wrong,” asserted in 2003 that payment for performance should become “a top national priority.” Berwick’s 180-degree reversal illustrates how powerful pay-for-performance folklore had become by the early 2000s, even without a shred of good evidence.
Thanks to the groundless cheerleading by health-policy heavyweights, bonus-and-penalty programs spread like crabgrass through the American health care system. By the late 2000s, objective research on pay for performance began to trickle in. By the early 2010s, there was more than enough evidence to conclude that it does not work and even harms patients. Jha and Frakt concluded that practices caring for lower-income or sicker patients received greater penalties, “essentially creating a reverse Robin Hood effect” that may have “exacerbated existing disparities in care.”
The Medicare Payment Advisory Commission and other critics of Medicare’s current pay-for-performance program have adopted a baffling response to this research. They argue that Medicare should terminate its program but that other organizations should continue to use the same crude pay-for-performance schemes that Medicare uses. Jha and Frakt, for example, justify the abandonment of pay for performance on the ground that “alternative payment models,” most notably accountable care organizations, “have exhibited more promising performance than standard P4P programs.”
We disagree. Accountable care organizations have failed just as badly as pay for performance, in large part because they, just like Medicare, dish out rewards and penalties using the same crude “performance” measures.
Performance-based pay may improve the sales of products like dishwashers and computers. But it is irrelevant to the complexities and professionalism of good doctoring and other human services like education. The research on pay for performance in health care is now conclusive: It’s time to terminate these harmful bonus-and-penalty schemes.
Kip Sullivan, J.D., is a member of the Health Care for All Minnesota Policy Advisory Committee and the legislative strategy committee of the Minnesota Chapter of Physicians for a National Health Program. Stephen Soumerai, Sc.D., is professor of population medicine and founding and former director of the Division of Health Policy and Insurance Research at Harvard Medical School, where he teaches research methods.
This post was first published on Stat News.