In my three-part series on why we know so little about ACOs, I presented three arguments:
- We have no useful information on what ACOs do for patients;
- that’s because the definition of “ACO” is not a definition but an expression of hope; and
- the ACO’s useless definition is due to dysfunctional habits of thought within the managed care movement that have spread throughout the health policy community.
Judging from the comments from THCB readers, there is no disagreement about points 1 and 3. With one exception (David Introcaso), no one took issue with point 2 either. Introcaso agreed with point 1 (we have no useful information on ACOs), but he argued that the ACO has been well defined by CMS regulations, and CMS, not the amorphous definition of “ACO,” is the reason researchers have failed to produce useful information on ACOs.
Another reply, by Michael Millenson, did not challenge any of the three points I made. Millenson’s point was that people outside the managed care movement also use manipulative labels, so what’s the problem?
I’ll reply first to Introcaso’s post, and then Millenson’s. I’ll close with a plea for more focus on specific solutions to specific problems and less tolerance for the unnecessarily abstract diagnoses and prescriptions (such as ACOs) celebrated today by far too many health policy analysts.
Summary of Introcaso’s comment and my response
I want to state at the outset I agree wholeheartedly with Introcaso’s statement that something is very wrong at CMS. I don’t agree with his rationale, but his characterization of CMS as an obfuscator is correct.
Introcaso argues that in fact the ACO is well defined. He offers this syllogism:
- Two-thirds of all ACOs are now MSSP ACOs;
- in 2011 CMS issued regulations governing the Medicare Shared Savings Program (MSSP) which defined ACOs “regulatorily in great detail”; ergo
- it just isn’t true the ACO is poorly defined.
Having explained, to his satisfaction, that in fact CMS has posted a useful definition of “ACO,” he then blames CMS for the dearth of useful research on MSSP ACOs (although the title of his article indicates he blames CMS for the lack of research on all ACOs). He argues CMS has released very little “evaluative information” on MSSP ACO spending and utilization, and this “has made it inexplicably difficult for providers and policy analysts to understand how the MSSP model works operationally.”
As I explain below, neither of Introcaso’s conclusions is correct. CMS has not defined what ACOs are, and even if CMS were to release cost and utilization data on MSSP ACOs, analysts would still be unable to produce useful research on what ACOs do for patients. I agree that CMS has provided very little data on MSSP ACOs, but the same cannot be said of CMS’s Pioneer ACOs and the “practices” that participated in the Physician Group Practice (PGP) Demonstration that CMS ran between 2005 and 2010. And yet researchers have produced no studies that tell us what those ACOs and PGPs did for their patients to generate those data.
Moral of the story: CMS is not the fundamental problem; the flabby definition of the ACO is the fundamental problem.
We don’t know how many ACOs exist
Introcaso opens his essay with the claim that two-thirds of all ACOs are MSSP ACOs. We don’t know that. Introcaso cites no document for that claim, but it doesn’t matter because estimates of the number of ACOs are all over the map and none are credible. And why is that? All together now: Because the ACO has no useful definition. That means any group of clinics, hospitals and insurers can claim they have set one up, and researchers can claim they spied any number of them, and no one can argue with them.
But let’s set this issue aside and assume, for the sake of argument, that MSSP ACOs are the only ACOs we need to concern ourselves with. Is it true that MSSP ACOs acquired a useful definition in 2011 when CMS issued rules governing ACOs?
CMS’s ACO regs do not tell us what ACOs do for patients
Introcaso asserts, “Medicare ACOs are defined regulatorily in great detail. This fact is made obvious by the … 430 relevant Federal Register pages.” No, that “fact” is not only not obvious, it’s not a fact.
The MSSP ACO regulations merely set out the rules governing entities that want to be designated an “ACO” by CMS. To say that these regs tell us what ACOs are is like saying that reciting the laws governing trucks tells us what trucks are. If I tell you a truck is something that has to stop by the side of the road to be weighed, that its drivers must be licensed, and (to take an example dear to the hearts of managed care proponents) that its owner has to pester customers to fill out “satisfaction” surveys and send them to the government, have I told you what a truck is? No, I have not.
Here are some examples of what the CMS MSSP regulations tell us:
- Contracts between ACOs and CMS will last for three years;
- Medicare beneficiaries will not choose ACOs but rather will be “attributed” to them by CMS; and
- The Office of the Inspector General will inspect the records of some ACOs.
Is there anything in those regs that tells us what ACOs do for their “attributed” patients that is different from what non-ACO providers do for their real patients? Of course not.
All of the MSSP regs are like that. If you think I exaggerate, I urge you to open this link to the 2011 Final Rule.
Since Introcaso’s first premise (MSSP ACOs equal two-thirds of all ACOs) is unprovable and probably wrong, and his second premise (CMS has defined the ACO) is unquestionably wrong, the first of his two conclusions fails. CMS has never published a useful definition of the ACO.
Cost and utilization data do not tell us what ACOs do
CMS may be slow in releasing data on the MSSP ACOs, but it has released five years of cost, utilization and quality data for the ten PGPs that participated in the PGP Demo and two years of data for the 32 entities that signed up for the Pioneer ACO program.
But despite all that data, CMS and analysts cannot tell us what the PGPs/ACOs did for patients that your run-of-the-mill clinic and hospital does not do.
Consider one of the oddest findings from the evaluation of the first two years of the Pioneer ACO program by L&M Research: Only one ACO (Dartmouth Hitchcock) increased utilization of primary care visits in both 2012 and 2013, while 29 cut primary care utilization in both years (see Table 5 pp. 17-18). As every reader of this blog knows, primary care physicians are supposed to be the foundation of ACOs. What accounts for such a surprising finding? We have no idea.
I am not the only one who has noticed this problem. When CMS employees published a paper in JAMA in May 2015 based on the L&M evaluation, Modern Healthcare noted how difficult it is to make any sense of the paper. The article, entitled “Successful Pioneer journey leaves faint trail for followers,” took note of the reduction in primary care visits. When Stuart Guterman, a vice president at the Commonwealth Fund (an ACO proponent), was asked by the reporter how he explained that reduction, he said, “It’s very hard to take apart the findings of a study like this,” and, “I’m not sure how to explain it.”
This anecdote is an excellent illustration of the problem created by the wish-based definition of the ACO. Even intelligent proponents of the ACO cannot explain what ACOs did to generate any of CMS’s data, be it cost, utilization or quality data. None of us can.
The solution to this problem is not to ask CMS to produce more data. Nor is the solution to hope the NCQA or some other organization will “certify” private-sector ACOs with requirements like those CMS has published for its ACOs. The solution is for ACO proponents to admit the ACO has no there there and to fix that problem.
Reply to Millenson
Michael Millenson praises my three-part series for being “insightful” and does not object to any of the three points I made. But he offers no indication that he cares about the problem I described. He devotes his entire comment to an obvious but irrelevant platitude: Managed care advocates are not the only people who traffic in manipulative labels. “It’s not just in health care reform where various ideas are given different names to make them more attractive,” he says.
Is that all Millenson has to say about this serious issue? Other people do it? That is a feeble defense of the managed care movement. With friends like Millenson, the managed care movement does not need enemies.
If I had accused professional wrestlers of using manipulative language and not caring about evidence, I might expect to hear the rejoinder that people in other walks of life do the same. But I wasn’t talking about professional wrestlers. I was talking about highly educated people who make a living as health policy experts, people who ostensibly place a high value on the basic rules of scientific discourse, people who routinely demand that doctors and patients practice evidence-based medicine.
Millenson is a recognized name in the field of health policy. If the best he can do in the face of the serious accusations I leveled against Paul Ellwood, the Robert Wood Johnson Foundation, and the rest of the managed care movement is to say other people do it, I rest my case.
I invite Millenson to stop trying to defend his colleagues with the “everybody does it” argument and to apply his fine intellect to solving the problem I’m writing about – the absence of useful research on ACOs and the tolerance of sloppy thinking that allowed a concept as poorly defined as the ACO to become national health policy.
Butter knives, not chain saws
I don’t have space to comment on other interesting issues raised by readers. I hope I can respond to some of them in future articles. I’ll close by raising an issue I hope we can discuss in greater detail in the future.
The early HMO proponents and their intellectual heirs sent us down the wrong path with their addiction to unnecessarily abstract diagnoses and prescriptions. The idea that all problems with our health care system, or at least the most fundamental problems, flow from the fee-for-service system is the primary example of such excessively abstract thinking. It is the original intellectual sin that has contaminated the health care reform debate ever since.
Similarly, the notion that all the evils that flow from the fee-for-service system can be eliminated or at least ameliorated by various forms of premium-splitting – shifting risk to providers – is also unnecessarily abstract and sweeping. Pushing as many primary care doctors as possible into “medical homes” and ACOs, and punishing all hospitals for all “excessive readmissions,” are other examples of policy-making at 80,000 feet.
Let’s focus first on specific problems. Do we want to reduce central-line infections? Ok, let’s study the problem and propose specific solutions. Do we want to reduce the number of nursing home residents who wind up in the ER? Let’s determine whether that’s possible and then devise a solution to that problem. Let’s not endorse overkill. Let’s not punish all hospitals for all “excess” readmissions. That’s the equivalent of buying a chain saw to cut butter. It’s unnecessarily expensive, it creates unwanted side effects, and it’s such a crude tool it might not even cut the butter.
Notes

The PGP demo is widely regarded as a test of the ACO concept.
The brief methods sections of papers that purport to count ACOs give you some idea of the problem created by the hollowness of the ACO definition. For example, in the meager “methods” section on page 5 of a non-peer-reviewed document published by Leavitt Partners, Muhlestein et al. describe their methodology as follows: “Leavitt Partners has sought to pinpoint ACOs by identifying two types of organizations: those that self-identify as ACOs and those who have been specifically identified as adopting the tenets of accountable care.” That’s it. If you call yourself an ACO, that’s good enough for Leavitt Partners. And if you don’t, Leavitt might label you an ACO anyway if you “have been identified” (note the passive voice) as “adopting the tenets of accountable care” (those tenets are not named, nor is any document cited where we might find them).
Two recent surveys of ACOs in California and Minnesota (two states which led the nation in experimenting with HMOs) produced very different estimates of the prevalence of ACO “covered lives” in those states. Fulton et al. report that only 2.4 percent of Californians were “covered by an ACO” in 2014 but that 47 percent received care from “providers that bear financial risk for professional and/or hospital services” (p. 682). How could this be, they ask, if the “definition” of ACO includes risk-bearing? Their answer: “[A] key limitation of our study stems from the difficulty of determining which payer-provider relationships should be classified as an ACO” (p. 683). Fulton et al. also reported that just 2.3 percent of California’s commercially insured are in ACOs. Meanwhile, a study of ACOs in Minnesota by IBM/KPMG concluded 41 percent of all commercially insured Minnesotans are “attributed to ACO models,” far higher than the 2.3 percent derived by Fulton et al. KPMG couldn’t be bothered to specify the year when their survey was fielded, but the study was released in the fall of 2015, so 2014 is a good guess.
The implication of Introcaso’s argument – that the ACO had no definition before 2011, when CMS issued its Final Rule – does not speak well of the US health policy establishment. Even assuming Introcaso is correct (that CMS’s 2011 rule finally defined the ACO), that would mean the ACO had no definition between November 2006, when Elliot Fisher and then-MedPAC chair Glen Hackbarth dreamed up the ACO concept, and March 2010, when the president of the United States signed a law endorsing the vaporous ACO. Even if we accept Introcaso’s argument, we’re left with a baffling question: How did a concept with so little definition and no research to back it up rocket from obscurity to law of the land? What does that say about the standards of the ACO advocates and of the journal editors, health policy analysts, reporters, and politicians who took the braggadocio of ACO proponents at face value?
For a quicker, gentler and kinder introduction to CMS’s umpteen MSSP regulations, you might read pages 3-7 of this summary of the regs produced by CMS.