I just finished reading the 962-page MACRA rule CMS released late in April. I was prepared for the mind-numbing complexity of the document. What I was not prepared for was CMS’s glib treatment of two fundamental issues: the woeful inaccuracy of the scores CMS will use to punish and reward doctors, and the cost to doctors of participating in ACOs, “medical homes,” and other “alternative payment models” (APMs).
These are not peripheral issues. If CMS dishes out financial rewards and punishments based on inaccurate data, MACRA will, at best, have no impact on cost and quality and may well have a negative effect. The second problem – the high cost of setting up and running APMs – may not be as lethal as the inaccurate-data problem, but at minimum it will reduce physician participation in APMs and, therefore, the already slim probability that APMs will reduce Medicare costs and improve quality.
In this comment and two more to come, I will review both of these problems and CMS’s what-me-worry attitude toward them. I begin with a jaw-dropping example of CMS’s reckless indifference to its inability to measure physician “merit” accurately.
Après moi, le déluge
Outside the bubble where Congress and CMS live, there is a widespread recognition that CMS cannot measure physician “performance” accurately. Here are three statements by experts to that effect:
- “[T]he practical reality is that … CMS, despite heroic efforts, cannot accurately measure any physician’s overall value, now or in the foreseeable future.” (Berenson and Kaye, “Measuring a physician’s value….” N Engl J Med, 2013)
- “We can’t estimate what the MIPS performance results will be, but the experience with other individual- or group-level assessment of clinician performance generally finds that most clinicians cannot be easily differentiated from average. For example, the value modifier results for 2015, which in Medicare applies to large groups of 100 clinicians or more, found that 80 percent could not be differentiated from average, and they received no adjustment.” (Kate Bloniarz, MedPAC staff, transcript of January 16, 2016 MedPAC meeting, p. 74)
- “The [MIPS] resource use measures are scheduled to become more important, but measures to date have a poor track record in identifying efficient physicians and practices. For example, 96 percent of physician practices were scored as ‘average cost’ using similar measures in the 2016 Value-Based Payment Modifier program.” (Clough and McClellan, “Implementing MACRA….” JAMA, 2016)
What these statements tell us is that CMS cannot accurately measure the value or merit of the vast majority of physicians. In a world where evidence guides policy-making rather than groupthink, CMS would acknowledge this fact. But CMS refuses to do that. CMS simply couldn’t find the space in its ponderous MACRA rule to make even a single comment like the three I quoted above.
To the contrary, CMS made it clear it is hell-bent on inflicting rewards and punishments on all doctors who treat Medicare patients regardless of the accuracy of their data. The proposed rule contains two tables showing how CMS’s pay-for-“merit” scheme would affect doctors in the “Merit-based Incentive Payment System” (MIPS) program (which is where most doctors will be in the early years of the MACRA regime). The tables show that 46 percent of doctors will be deemed to have unacceptable “merit” and therefore worthy of punishment, while 54 percent will be “meritorious” and therefore deserving of rewards. Worse, one of the tables shows that doctors in small clinics will suffer far more than those in large systems. Table 64 shows that 87 percent of solo doctors and 70 percent of doctors in 2-to-9-doctor clinics will be punished, while only 18 percent of doctors in clinic chains with over 100 doctors will be punished.
CMS’s failure to say a word elsewhere in the rule about the disproportionate punishment meted out to smaller clinics, and CMS’s refusal to admit it will be dishing out this punishment on the basis of crude measurement, is appalling!
Too much noise, not enough signal
The crudeness of CMS’s cost and quality measurement, and the high noise-to-signal ratio of the feedback to physicians such measurement guarantees, is due primarily to two intractable problems: CMS’s inability to determine accurately which patients “belong” to which physicians (the attribution problem), and CMS’s inability to adjust cost and quality scores for factors outside physician control (the risk adjustment problem). Either problem by itself makes accurate measurement very difficult and, at the individual doctor and clinic level, impossible for all but a few simple process measures. Together the two problems are a lethal one-two punch to the fantasy that CMS or anyone else will ever measure the “value” of the vast majority of physicians accurately. 
CMS and managed care advocates generally have very little to say about the risk-adjustment problem and almost nothing to say about the attribution problem. If they say anything at all about the crude risk adjusters in use today, it is that scientists are rapidly improving the accuracy of risk adjustment and it’s only a matter of time before the risk-adjustment problem is fixed. That is nonsense (see my next comment). As for the attribution problem, all we hear from managed care buffs is silence. I have been unable to find a single paper, peer-reviewed or otherwise, that demonstrates that any of the attribution methods used by CMS or other insurers are accurate enough to measure physician “value.”
Of the two sources of noise in CMS’s feedback I am discussing here – sloppy attribution and crude risk adjustment – the attribution problem is logically the first one we should address. I do so in the remainder of this comment. I’ll discuss the risk-adjustment problem and CMS’s indifference to the cost of participating in APMs in subsequent comments.
Phantom patients and medical hotels
The “attribution” fad is a relative newcomer as managed care fads go. It arose around 2005, which is approximately when the ACO and “medical home” concepts began their overnight journey from obscurity to conventional wisdom. CMS inaugurated its first test of the ACO concept, the Physician Group Practice (PGP) Demonstration, in 2005. In November 2006, the amorphous phrase “accountable care organization” was invented at a MedPAC meeting, and in March 2007 the amorphous phrase “medical home” was endorsed by four physician groups.
The ACO and “home” fads triggered the attribution craze because proponents of ACOs and “homes” didn’t want to force patients to enroll with ACOs and “homes” and to have to use only the providers in those entities. Apparently ACO and “home” proponents feared that an enrollment requirement would trigger the sort of patient rebellion that HMOs triggered in the 1990s.
In any event, having decided that enrollment was to be avoided and attribution required, CMS and the rest of the health policy cognoscenti then decided that attribution to ACOs and “homes” would be done with a two-step process: (1) patients would be assigned to the primary care doctors from whom they received the plurality of their primary care services (as determined by claims data); (2) patients would, unbeknownst to them, be assigned to whatever ACO or “home” their attributed doctor belonged to. That two-step method was the one CMS used to assign patients to the ten “group practices” that participated in the PGP demo. CMS continued to use this method in its ACO and “home” demos.
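For readers who find the two-step method easier to grasp in concrete terms, here is a minimal sketch of its logic. This is illustrative only: the data structures (a list of patient/doctor pairs standing in for claims data, a simple doctor-to-entity lookup) are my own simplifications, not CMS’s actual algorithm, which operates on far messier claims files and includes tiebreak and eligibility rules omitted here.

```python
from collections import Counter, defaultdict

def attribute_patients(claims, doctor_to_entity):
    """Sketch of the two-step attribution method described above.

    claims: list of (patient_id, doctor_id) pairs, one per primary
        care service billed (a hypothetical, simplified format).
    doctor_to_entity: dict mapping doctor_id to an ACO/"home" id.
    Returns a dict mapping patient_id to the entity the patient
    was attributed to.
    """
    # Step 1: count each patient's primary care services by doctor,
    # then assign the patient to the doctor with the plurality (not
    # majority) of those services. Real attribution rules also need
    # a tiebreak; Counter's ordering is arbitrary on ties.
    services = defaultdict(Counter)
    for patient, doctor in claims:
        services[patient][doctor] += 1

    attribution = {}
    for patient, counts in services.items():
        plurality_doctor = counts.most_common(1)[0][0]
        # Step 2: assign the patient, unbeknownst to them, to
        # whatever entity that doctor belongs to (if any).
        entity = doctor_to_entity.get(plurality_doctor)
        if entity is not None:
            attribution[patient] = entity
    return attribution
```

Note that a patient who sees three doctors a few times each can be “attributed” on the strength of two visits out of five, which is precisely how the method generates the tenuous doctor-patient relationships discussed below.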
There are two administrative advantages to CMS’s two-step method. First, the plurality-of-primary-care method allows CMS to assign many more patients than a majority method would. Second, the use of claims data alone (as opposed to claims data plus medical records data) makes attribution financially feasible. But the two-step method has a serious disadvantage: a substantial portion of the patients the method attributes to doctors have no relationship, or only a tenuous relationship, with the doctor and, therefore, with the ACO or “home” the doctor is part of.
The seriousness of this defect became obvious during the PGP demo. The final evaluation of that demo reported that the PGPs lost 60 percent of their assigned patients over the five-year period the demo ran.  The loss rate appears to be even higher for Pioneer ACOs. The 23 ACOs that were still in the Pioneer demo at the end of 2013 lost 38 percent of “their” patients between 2012 and 2013.  Valerie Lewis et al. reported an annual loss rate of 31 percent for simulated Medicare Shared Savings Program ACOs (see p. 590). Friedberg et al. reported a 43 percent loss rate over three years among medical home clinics participating in the Pennsylvania Chronic Care Initiative.
For those of us not steeped in the peculiar traditions of the managed care culture, it is very difficult to understand why doctors should be “held accountable” for phantom patients, or even for patients doctors see only infrequently. Similarly, it is very difficult for ordinary people to grasp why a clinic should be called a “medical home” when 43 percent of “its” patients disappear from the attribution list over a three-year period. If we must use cute labels, I suggest we call the medical home a “medical hotel” until such time as “home” populations are no longer determined by sloppy attribution methods.
CMS’s silence about sloppy attribution is not acceptable
Since CMS began tinkering with attribution schemes a decade ago, it has acted as if it has no obligation to justify its use of any method of attribution. It can pick any scheme it likes and critics can go hang themselves. That see-no-evil attitude is conceivably justifiable for demonstrations affecting small slices of the physician and patient populations. But MACRA is no demonstration.
Yet in its proposed MACRA rule, CMS continues its see-no-evil attitude about its attribution methods. Here is the most informative statement in the MACRA rule that CMS makes about its attribution method: “Commenters [responding to CMS’s 2015 request for information on MACRA] also expressed concern that current attribution methods are holding many clinicians accountable for costs they have no control over, while other clinicians have no patients attributed and no way of calculating accurate scores.” (p. 137) Does CMS care what these commenters think? Apparently not. CMS has no comment. CMS simply tells us it will use the plurality-of-primary-care method.
CMS should have said a lot more than that. CMS should have made at least these three statements about its attribution method:
- Its attribution method, while cheap, has substantially dulled the accuracy of its measurement of physician “performance” in the value-modifier program and the ACO and “home” demos;
- CMS’s two-step method has created high churn rates among patients assigned to ACOs and “homes”; and
- CMS has gotten into the habit of using the two-step approach without bothering to justify it, and CMS would now like the public to comment on whether its attribution method can be justified by any moral or logical principle.
It is obvious why CMS made no statements like these in its rule. The attribution problem isn’t fixable. CMS can cling to the plurality-of-primary-care method and tolerate phantom patients and high churn rates, or it can adopt a stricter standard and watch the number of patients it can attribute shrink drastically.
CMS should at minimum tell us which form of poison it prefers. Ideally CMS will concede neither poison should be picked and that attributing patients hither and yon was never a good idea to begin with.
 Note that I am discussing the measurement of the “value” of all physicians who treat Medicare patients using claims data, which is what CMS proposes to do under MACRA. I am not saying it is impossible to measure accurately the cost or quality of specific medical services using claims data, medical records data, and data from other sources such as information about income. I’ll have more to say about accurate measurement of specific services in my next comment.
 Here is a quote from the final evaluation of the PGP demo: “PGPs generally retained approximately 70 percent of their assigned beneficiaries from one year to the next; and … PGPs generally retained approximately 40 percent of their assigned beneficiaries after five years.” (p. 221)
 Unlike the final evaluation of the PGP demo, the evaluation of the Pioneer ACO demo by L&M Policy reported retention rates for both patients and doctors. The retention rate among Pioneer ACO doctors between 2012 and 2013 was 73 percent (see p. 96), which means the loss rate was 27 percent. I derived the 38 percent loss rate for patients in the Pioneer ACO demo from data presented in Table 23 (p. 95) using the same unweighted averaging method L&M used to derive the retention rate for doctors.
 CMS states in the MACRA rule that it will continue to use the plurality-of-primary-care method.