Between September of 2016 and last month, CMS released “final evaluations” of all three of its “medical home” demonstrations. All three demos failed.
This spells bad news not just for the “patient-centered medical home” (PCMH) project, but for MACRA. The PCMH, along with the ACO and the bundled payment (BP), is one of the three main “alternative payment models” (APMs) within which doctors are supposed to find shelter from the financial penalties inflicted by MIPS (the Merit-based Incentive Payment System), a program the Medicare Payment Advisory Commission recently declared unworkable. Medicare ACOs and virtually all Medicare BP programs are also failing. Thus, we may conclude what some predicted a long time ago – that neither arm of MACRA (the toxic MIPS program and the byzantine APM program) will work.
In this post I describe each of CMS’s three PCMH demos, review the findings of the final evaluations of the three demos, and then explore the reasons why all three demos failed. I’ll conclude that the most fundamental reason is that the PCMH is so poorly defined no one, including the doctors inside the PCMHs, knows what it’s supposed to do. That’s not to say that the hopes and dreams of PCMH proponents were never clear. They have always been clear. PCMH proponents have said over and over the PCMH is supposed to lower costs and improve care. But a clear expression of hopes and dreams is not the same thing as a clear description of what it is you’re dreaming about.
I will close with the recommendation that policy makers should eliminate sprawling “reforms” like the PCMH and the ACO which target entire populations and replace them with pilot projects that focus on specific types of patients.
Review of the three demos and their final evaluations
The first “home” demonstration out of the chute was the Federally Qualified Health Centers Advanced Primary Care Practice Demonstration (referred to hereafter as the FQHC demo). It ran from November 2011 to November 2014. CMS published the final evaluation of this demo in September 2016. That evaluation, written by scholars at the RAND Corporation, concluded that the demo raised costs, had no impact on quality, and was associated with a worsening of burnout among staff. Here is an excerpt: “Demonstration sites were associated with significant increases in total Medicare expenditures during Year 3 relative to comparison sites and cumulatively when care management fee payments were excluded from the analysis (p<.01). When the fees were included, total Medicare expenditures were significantly higher in demonstration sites (p <.05) … [B]eneficiary surveys identified few significant differences in outcomes for demonstration and comparison FQHC patients.” (p. xxii) The report also noted that PCMH staff experienced severe stress. “[C]linicians and staff reported significant reductions in overall professional satisfaction and corresponding increases in stress, burnout, chaos, and likelihood of leaving their practices,” noted the report. (p. 124)
The other two PCMH demos started in October 2012, roughly one year after the FQHC demo began. The first of the two to receive a final evaluation was the Multi-payer Advanced Primary Care Practice (MAPCP) demo. This evaluation, written by RTI International, was published in June of 2017. RTI concluded that the demo raised Medicare’s costs, but not by a statistically significant amount, and had “unimpressive effects” on quality (p. ES11). Here’s how they put it: “Medicare expenditures for the MAPCP Demonstration beneficiaries were … nearly $171 million more than the non-PCMH comparison beneficiaries” (after taking into account CMS’s payments to the PCMHs) (p. ES8), and “there were no consistent impacts by the MAPCP Demonstration on quality of care, access to care, utilization, or expenditures within or across states….” (p. ES12) RTI reported the same results for Medicaid. RTI did not attempt to assess staff morale, but the report did note, “Many practices across the MAPCP Demonstration states found it challenging to fund extended hours and to find staff to work the hours.” (p. ES6)
The last of the three demos to receive its final evaluation was the Comprehensive Primary Care Initiative (CPCI), which, like the MAPCP demo, ran from October 2012 to October 2016. The final report, written by Mathematica and published last month (May 2018), was as dismal as the evaluations of the other two demos. Mathematica concluded the CPCI had no impact on Medicare spending and had a “minimal” impact on the tiny handful of quality measures CMS used. “After including care management fees, Medicare expenditures increased by $6 PBPM [per beneficiary per month] more for CPC practices than for comparison practices,” the report stated. “The difference was not statistically significant.” (p. xviii) Moreover, the report found that the CPCI “had minimal effects on the limited claims-based quality-of-care process and outcome measures examined” and “had little impact on beneficiaries’ experience of care.” (p. xviii) Mathematica reported that the burdens associated with “transforming” clinics into PCMHs had no impact, good or bad, on burnout among PCMH staff. However, they did note that “some … care managers felt overwhelmed” (p. xxxii) and that “[R]espondents reported that participation throughout the CPC initiative was burdensome…. In addition…, several respondents reported that they had overall change fatigue.” (p. 135)
If the three final evaluations had taken into account the costs that PCMH clinics incurred in order to do whatever it is PCMHs do, all three evaluations would probably have concluded PCMHs raise total spending. A tiny body of research indicates these costs are well in excess of $100,000 per physician per year. However, only one of the contractors that CMS hired to write evaluations (RAND) attempted to determine how much PCMHs spent, and they came up empty. None of the three contractors attempted to determine how much CMS contributed in the form of services (webinars, data that allegedly helped clinics assess their “performance,” etc.). And in the case of the MAPCP and CPCI demos (which involved multiple payers, not just Medicare), Mathematica and RTI did not report what non-CMS payers contributed in the form of subsidies and services. 
It is now obvious the PCMH does not work. What went wrong?
A laundry list of aspirations is not a plan
I’m sorry to do this to you, but I must begin my exploration of why the CMS demos failed by asking you to look at the mandala below. You’re not looking at the Great Wheel of Life. You’re looking at CMS’s PCMH “change package.” According to Mathematica’s final and interim evaluations of the CPCI, this mandala – this collage of vague and manipulative phrases – expresses CMS’s muddled “understanding” of the PCMH. As Mathematica put it in the final evaluation, “The CPC [comprehensive primary care] change package describes the underlying logic of CPC, including the primary and secondary drivers to achieve the aims of CPC and the concepts and tactics that support the changes.” (footnote 1, p. xvii) Do you see any “underlying logic” in this “change package”? The title CMS chose for this mandala was, “Comprehensive Primary Care Initiative Logic Diagram.”
The wheel has four concentric rings around a circle in the middle with the words “patient and family.” Let’s ignore for now the sanctimonious assumption that patients were never the primary concern of health care professionals. Let’s concentrate on the phrases in the four rings and ask whether they’re understandable and whether any of them are evidence-based. What is “an engaged community,” an “environment to support comprehensive primary care,” “strategic use of practice revenue,” “enhanced accountable payment,” “coordination of care across the medical neighborhood,” “a culture of improvement,” or “optimal use of HIT,” to take the more oleaginous examples? How would anyone – doctors, CMS or evaluators – define and operationalize flabby phrases like that? And where is the evidence that any of the activities hinted at by these vague phrases lower costs or raise quality, much less do both simultaneously?
I have asked you to look at CMS’s “change package” because it illustrates at a glance why the PCMH has failed. The two most fundamental reasons are: The “definition” of the PCMH consists of multiple elements, nearly all of which are unsupported by evidence; and most of the elements are so vaguely defined they are impossible to operationalize. It’s that combination of (a) multiple elements unsupported by evidence and (b) multiple elements that elude clear definition that renders the PCMH concept so useless.
Note that the multiplicity of elements attributed to the PCMH is not by itself lethal. The Mediterranean diet and the lifestyle arm of the Diabetes Prevention Program are examples of things that are testable and usable even though they are defined by multiple components – whole grains, olive oil and fish in the case of the former, and exercise and diet classes in the case of the latter. All human beings know what “olive oil” and “exercise” mean, and those terms are, therefore, easy to operationalize (define clearly and in a uniform manner), and once that is done for all elements, the bundle of elements is testable. But “medical neighborhood,” “culture of change” and the other phrases in the CMS mandala are too imprecise to reduce to testable definitions. It’s the indefinability of nearly all of the multiple elements of the PCMH that dooms it. The multiplicity of elements alone is not fatal.
In fairness to CMS, the dual defects I’m discussing – multiple components vaguely defined, and lack of evidence supporting the components separately or as a package – were not created by CMS. They were invented by the four primary care groups who propelled the PCMH from esoteric idea to stardom in 2007 with the publication of a document entitled “Joint Principles of the Patient-Centered Medical Home”. This statement purported to define the PCMH with seven vague components called “principles.” Principle 1, for example, was “ongoing relationship with a personal physician.” How is CMS supposed to operationalize that notion? How would CMS employees or anyone else know when a “relationship” has progressed from “not ongoing” to “ongoing”? Principle 5, to take another example, was entitled, “Quality and safety are hallmarks.” According to this principle, doctors who wish to conjure said “hallmarks” must “support the attainment of optimal, patient-centered outcomes.” How would anyone know when Doctor X has progressed from not supporting “patient-centered outcomes” to supporting them? 
The seven principles in the Joint Statement are laudable, but they are merely vague expressions of aspiration. They do not define anything. They read like the Boy Scouts’ Law — a scout is “trustworthy, loyal, helpful, friendly, courteous, kind,” etc. The Law consists of wonderful aspirations, but do those aspirations constitute a testable program? Of course not. The seven principles in the Joint Statement don’t either.
If CMS had decided that the “home” as defined by the Joint Statement and other PCMH manifestoes is not testable, and had instead tested one or two of its more definable components, I would have no criticism of CMS. But CMS didn’t do that. They took the PCMH proponents’ lists of aspirations at face value, concocted their ridiculous “change package,” and then tried to operationalize portions of the “package” with requirements for clinics they called “milestones.” Examples of CPCI “milestones” enforced in 2016 included a requirement to “risk stratify all patients,” “[a]ssess patient experience through patient surveys or patient and family advisory council meetings…,” and “[p]articipate in regional and national learning offerings….” “Learning offerings”? As you can see, these “milestones” exhibit the defects we saw in the pious phrases in the “change package” – they are evidence-free and very vague. 
“Learnings” from the PCMH debacle
The contractors who wrote the evaluations for CMS failed to tell readers what I have just told you – that the PCMH is failing because it is so poorly defined no one, including PCMH staff, knows which activities PCMHs should engage in, and because evidence does not support the assumption that the poorly defined components of PCMHs lead to lower costs or higher quality. The evaluators instead offered a few speculative explanations.
The most common explanation was that the PCMHs just ran out of time. This excuse might be believable if we had any evidence that three to five years is not long enough for at least some of the vaguely defined components of “homes” to kick in and produce results. But as I noted above, no research indicates that the components do anything to lower costs or (with the possible exception of shared-decision-making tools, an element of some definitions of the PCMH) improve care over any period of time, and the contractors offered little or no evidence for the ran-out-of-time claim.
A second justification, offered by the authors of the MAPCP and FQHC evaluations, was that the PCMHs might have been underfunded. This may very well be true. Payments by Medicare and other insurers under the CPCI, for example, amounted to $70,045 per clinician in 2013 and $50,189 in 2016 (p. xxiv), far below the estimate of $105,000 for only partial PCMH implementation reported by Macgill et al. But the insufficient-funding diagnosis presents a brain-twister. If the subsidies to the PCMHs were, say, doubled, that would raise total costs and make net savings even less likely.
A related argument offered by the Mathematica authors in an article just posted by Health Affairs is that PCMH doctors “might need stronger value-based financial incentives.” This is becoming a common excuse for the failure of ACOs as well. No evidence supports this speculation.
No, the fundamental defect in the PCMH is its flabby definition and the resulting lack of focus. It is time to replace the sprawling PCMH experiment with focused pilot projects aimed at improving the quality of care for specific types of patients. If lower costs result, we can count our blessings. As one “lead clinician” told Mathematica during an interview about the CPCI, CMS “tried to fix everything in one program, rather than pick one high-value target area, start it, assess it, and then build from there.” (p. 135) That is precisely the right explanation for the failure of the PCMH.
Too bad Mathematica didn’t think to endorse it.
The failure of all three of CMS’s “home” demos is consistent with the failure of earlier “home” experiments, including the Southeastern Pennsylvania Chronic Care Initiative and the Veterans Health Administration’s PCMH program. It is consistent as well with the literature on PCMHs. A 2013 review of 19 studies found “no evidence for overall cost savings.”
Here is an edited version of the list of the seven “principles” enumerated by the 2007 statement on the “medical home” by the American Academy of Family Physicians and three other primary care specialty groups. Every one of the principles is vague and aspirational.
- Personal physician: each patient has an ongoing relationship with a personal physician….
- Physician directed medical practice: the personal physician leads a team of individuals at the practice level who collectively take responsibility for the ongoing care of patients.
- Whole person orientation: the personal physician is responsible for providing for all the patient’s health care needs….
- Care is coordinated and/or integrated across all elements of the complex health care system (e.g., subspecialty care, hospitals, home health agencies, nursing homes) and the patient’s community (e.g., family, public and private community-based services). Care is facilitated by registries, information technology, health information exchange and other means….
- Quality and safety are hallmarks of the medical home: Practices advocate for their patients to support the attainment of optimal, patient-centered outcomes ….
- Enhanced access to care is available through systems such as open scheduling, expanded hours and new options for communication between patients, their personal physician, and practice staff.
- Payment appropriately recognizes the added value provided to patients who have a patient-centered medical home.
Mathematica made a feeble effort to defend CMS’s “milestones.” In their first annual report on the CPCI, Mathematica admitted, “While the Milestones themselves are not evidence based, they are rooted in strong conceptual thinking about what activities a practice needs to pursue to achieve comprehensive primary care.” (p. 81) Mathematica offered no support for this statement.
Although the final evaluation of the CPCI reported that the payments to PCMHs were far below the $105,000 figure reported by Macgill et al., Mathematica stated, “More than three-quarters of practices reported on the CPC practice surveys in 2014, 2015, and 2016 that CPC payments – including care management fees and, when relevant, shared savings payments – were adequate or more than adequate relative to the costs of implementing CPC.” (p. xiv) It is not clear from Mathematica’s report that the respondents had any idea what their clinics, or the systems their clinics were part of, were actually spending on “home” activities.
Kip Sullivan, J.D., is a member of the Health Care for All Minnesota Policy Advisory Committee and the legislative strategy committee of the Minnesota Chapter of PNHP.