“[We are supposed to gather information from patients] prior to the physician going into the room. It doesn’t happen. I’m going to be honest – the reality … is … we also are responsible for telephone triage, walk-in emergencies, diabetic meter teaching, I mean, the list goes on and on.”
That is a quote from an interview with a “care coordinator” for a “medical home” in Minnesota. Minnesota is one of the eight states that participated in the Multi-Payer Advanced Primary Care Practice (MAPCP) Demonstration, which is one of three experiments CMS has conducted testing the “patient-centered medical home” (PCMH) concept. The quote appears in a report published by the University of Minnesota in February 2016. (p. 75)
In this three-part series, I am addressing the question, What can we learn from the latest report from CMS about the MAPCP demo? The report in question is the second-year evaluation of the demo, which CMS released with zero publicity on May 11, 2016. That evaluation reported that PCMHs have had virtually no effect on the cost or quality of medical care given to Medicare beneficiaries (with the possible exception of Vermont, where PCMHs lowered costs, not counting CMS subsidies to PCMHs, but had little effect on quality). Evaluations of the other two CMS “medical home” experiments have reached the same conclusion (see Table 2 of this Kaiser Family Foundation report and my comment here).
I open this essay with that quote from the Minnesota care coordinator because it captures the most important lesson we can learn from the second-year evaluation of the MAPCP demo: The fundamental flaw in the PCMH fad is that PCMHs don’t focus. They don’t focus because they are so vaguely defined. The extra resources they receive are not dedicated to any specific patients or services, but instead are spread out over all patients and all services.
The authors of the second-year evaluation of the MAPCP demo, RTI International and two subcontractors, do not say what I just said. So what do they say? How do they explain the underwhelming performance of the PCMHs?
Not enough money for what?
Based on interviews with 269 people in the eight states participating in the demo, RTI concludes the PCMHs encountered numerous obstacles, including inadequate resources, useless feedback from insurers, and clunky EMRs. However, RTI explicitly identifies only one factor in the PCMHs’ failure: Inadequate resources. RTI repeatedly quotes and paraphrases interviewees complaining about inadequate resources. Here are four examples:
- “[S]takeholders in most states [were] concerned about practices’ ability to afford to sustain their practice enhancements. One stakeholder in Maine reported that chronic underfunding of primary care left practices struggling to transform care even with the additional funds from the MAPCP Demonstration. Others questioned the initiative’s continued financial viability after the end of the demonstration…. Stakeholders in Pennsylvania and Rhode Island feared that a reduction in demonstration funds to practices might jeopardize their ability to sustain transformation efforts.” (p. 2-8)
- “Limited resources were a barrier [in Vermont]. Time and financial resources … remained stretched for many participants. Many respondents commented that they did not have enough staff to meet the needs of the program.” (p. 5-14)
- “One particularly frustrated [Minnesota] physician complained, ‘We’re supplying all the manpower and all the money to save the system money, but we get nothing in return….’ Another said, ‘Not many places would sign up to lose as much money as we have!’” (p. 7-24)
- “Some practices wished that the demonstration had been all-payer (e.g., including patients from self-insured employer plans), instead of multi-payer, so that they would have seen greater financial rewards. This also would have allowed them to offer medical home services to all of their patients, instead of having to keep track of which patients were insured by which payer. [P]ractices were worried about what would happen when the MAPCP Demonstration ended and payments stopped.” (p. 2-24)
And for good measure, here’s a quote from the Year Three evaluation indicating that frustration with insufficient resources continues:
“In Year One and Year Two, Pennsylvania struggled with retaining payers and practices in their initiative, and, in Year Three, attrition of both payers and practices continued. Enthusiasm and support for the initiative eroded as practices confronted a lack of shared savings and reduced medical home payments. Vermont also struggled with waning support, as numerous practices and CHTs [community health teams] expressed frustration that their enhanced payments under this initiative were small…. In Year Three, some practices said that staff retention was challenging because of questions about the sustainability of the medical home initiative.” (p. 2-34)
But RTI does not tell us how much money PCMHs need, what percent of PCMHs are underfunded, or which PCMH goods or services are underfunded. In fact, RTI makes no attempt to determine how much money the PCMHs receive or where the money went. That is understandable. CMS designed a very sloppy experiment. “Each state has its own payment levels and established its own methods,” RTI reports, and “each state had broad flexibility to adopt its own definition of what constitutes [a PCMH].” (p. 1-1) Not surprisingly, “Demonstration payment methods and generosity varied widely by state….” (p. 2-20). Moreover, PCMHs in all states received money and in-kind contributions from numerous sources. In short, neither CMS nor anyone else is keeping track of all the money pouring into PCMHs.
And if all that weren’t evidence enough of how poorly designed the MAPCP demo is, we also have no data on how PCMHs spend their subsidies. We have only RTI’s broad-brush impressions. RTI suggests that most of the extra money “homes” received from the various payers went into the salaries of additional staff, mainly “care coordinators,” and secondarily into electronic health records (EHRs). But RTI gives us only hints.
Here are two examples of RTI’s ambiguous language on this important issue:
- “Demonstration payments usually were used directly to offset the cost of new care coordinators’ salaries or the purchase or upgrading of EHRs. Some payments simply went towards practices’ bottom lines…. Practices owned by larger health care systems typically reported not receiving demonstration payments directly, as they were paid to their organization’s corporate headquarters….” (p. 2-24)
- “Care coordinators or care managers … were clearly a central aspect of the PCMH model, and they were viewed as the most transformative and valuable part of the model in all eight states.…” (p. 2-17)
If in fact most of the additional resources PCMHs received were used to hire “care coordinators,” and coordinators constitute “the most transformative part of the model,” then we might be able to discern which PCMH activities are underfunded by asking, What is it that “care coordinators” do?
Care coordinators coordinate care
Unfortunately, RTI can’t tell us what “care coordinators” do. The job description of the “care coordinator,” like that of the PCMH, is all over the map. The statement by the Minnesota “care coordinator” I quoted at the outset suggests coordinators are utility infielders – they do everything except empty the wastebaskets. RTI’s descriptions of coordinators reinforce this impression. Here are three examples:
- “[T]here was wide variation in every aspect of care coordination…. Care managers sometimes engaged in a variety of activities, depending on whether they were managing the care of a moderate or a complex patient.” (p. 2-17).
- “Not all [Minnesota PCMH] practices were using their care coordinators in the same way, and this led to substantial variation in the number of patients each care coordinator oversaw…. The types of individuals hired as care coordinators varied considerably across practices. Some seemed to favor using RNs in this role, while others used licensed practical nurses…, medical assistants, or social workers…. One patient advocate …. felt ‘it was a point of concern’ that ‘there is no minimum requirement for who can do the care coordination.’” (p. 7-19)
- RTI reports that in several states “community health workers” played vaguely defined roles that overlapped the vaguely defined roles of “care coordinators.” RTI states, for example, that the work being done by “community health teams” (CHTs) in Vermont is so poorly defined and documented that Vermont insurance companies, which are required to finance CHTs, are upset. Here is how RTI puts it in its third-year evaluation: “[I]n Vermont, payers expressed frustration that the CHTs did not systematically track the services they provided. According to commercial payers, this led to a lack of accountability and made it difficult to determine which of the teams’ services created a return on investment.” (p. 2-8)
One could make the identical argument about “care coordinators”: They also “lack accountability” because they don’t “systematically track the services they provide” and nobody else does either. In fact, we can make the identical argument about PCMHs: Because no one is minding the store – unknown amounts of money pour into PCMHs and unknown amounts pour out for unknown purposes – it’s impossible to know which PCMH services “create a return on investment.”
To sum up, in our quest to determine which PCMH services are underfunded, it didn’t help to ask what “care coordinators” do. RTI can’t tell us. Nobody can.
What do we do now?
So here’s what we have learned from RTI’s Year Two evaluation: PCMHs are accomplishing little or nothing that your garden-variety doctors are not already accomplishing; PCMH staff and many others feel PCMHs are woefully underfunded; but no one can say by how much PCMHs are underfunded or which services might be underfunded. So what’s the solution?
RTI doesn’t tell us. So what should readers conclude? Should we attribute the failure of PCMHs to underfunding? Or do we diagnose the problem as insufficient focus caused by the flabby definition of the “home”? I vote for the latter diagnosis. I’ll explain why in my next comment.
The second-year evaluation compares MAPCP PCMHs not just to non-PCMHs, which makes sense, but also to PCMHs not participating in the MAPCP demo, which does not make sense. In this three-part series, I discuss only the PCMH versus non-PCMH comparisons.