
Slavitt’s “Data Paradox”

Andy Slavitt began his statement at the Datapalooza conference with encouraging words for those of us who believe that the measurement craze has been a disaster and that MACRA will make it worse.

Slavitt claimed to be in favor of electronic medical record “reform” that “works with doctors, not against them.” He seemed to say he understood MACRA could aggravate the damage that “meaningful use” and the pay-for-performance fad have already inflicted on doctors.

He even accurately summarized the lousy results to date of the measurement craze. He said doctors feel all the data entry “took time away from patients and provided nothing or little back in return.” “[P]hysicians are baffled by what feels like the ‘physician data paradox,’” he said. “They are overloaded on data entry and yet rampantly under-informed.”

But the rest of Slavitt’s statement reveals he has no idea how to solve the “data paradox.” He asserted that “technology that works for doctors and patients” is the solution. I have no idea what this means and Slavitt did not indicate that he has a clue either. What I’m sure of is that “technology” is not the solution to the “data paradox.”

The “Paradox” is not fixable with EMRs

The “data paradox” as Slavitt described it is not fixable with changes in “technology.” It’s the mindset of people like Slavitt that has to change. The “data paradox” will be fixed only when Andy Slavitt and other proponents of the measurement craze terminate the craze or, at minimum, drastically reduce measurement activities. That in turn will require that Slavitt et al. concede that they have vastly oversold what measurement and “data feedback” can accomplish and have vastly underestimated the cost of chronic measurement.

There is no “data paradox.” Physician hostility to being turned into data entry clerks so they can receive mountains of data back from CMS and other insurers can be explained very simply: The data they get back is either worthless or at best useful for generating hypotheses that physicians have neither the time, money nor training to prove or disprove. The data is not, as CMS likes to say, “actionable” by the physicians who receive it.

The data is not actionable because it is inaccurate or, at best, too abstract. Usually the data is both – grossly inaccurate and uselessly abstract. The inaccuracy is caused by two problems: the inability of CMS and other insurers to “attribute” patients accurately to the clinics that treat them (the “attribution problem”), and the inability of CMS et al. to adjust cost and quality scores to reflect factors outside physician control (the “risk adjustment problem”). Technology cannot solve either of these problems, and it cannot make the data less abstract without making it more inaccurate.
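
To make the attribution problem concrete, here is a minimal sketch in Python, assuming a simplified plurality-of-visits rule with invented TINs and visit counts (the actual CMS rules weigh allowed charges, provider specialty, and multi-step lookbacks, but the failure mode is the same): a clinic can be held “accountable” for a patient even when most of that patient’s care happens elsewhere.

```python
# A minimal sketch of plurality-based attribution (simplified to visit
# counts; the real CMS rules weigh allowed charges, provider specialty,
# and a multi-step lookback, but the failure mode is the same).
from collections import Counter

def attribute(visits):
    """Assign a patient to the TIN that provided the plurality of
    primary care visits. `visits` is one TIN per visit."""
    tin, n = Counter(visits).most_common(1)[0]
    return tin, n / len(visits)

# A hypothetical patient who splits primary care across three clinics:
visits = ["TIN-A", "TIN-A", "TIN-A", "TIN-B", "TIN-B", "TIN-C", "TIN-C"]
tin, share = attribute(visits)
print(f"Attributed to {tin}, which provided only {share:.0%} of the visits")
# -> Attributed to TIN-A, which provided only 43% of the visits.
# TIN-A is now "accountable" for costs generated mostly elsewhere, and a
# slightly different mix of out-of-TIN visits next year flips the attribution.
```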

Andy Slavitt and his allies in the managed care movement need to be deprogrammed. The deprogramming has to occur in two steps. First, Slavitt et al. must be disabused of the illusion that the data that CMS collects and throws back at doctors is “actionable,” or in the Mother Tongue, useful. Once this is achieved, the “data paradox” will disappear. Next Slavitt and his allies must be persuaded that improvements in technology (slicker EMRs, more interoperability, whatever) cannot solve the problems created by the measurement craze. During this phase of the deprogramming, Slavitt et al. must be persuaded that the problem is sloppy thinking by those who promote the measurement craze.

Conversely, they must be persuaded the problem does not lie anywhere else. The problem is not “resistance” by doctors. It is not stupid patients. It is not “bad technology.” It is cult-like, mulish, sloppy thinking on the part of people who have the power to inflict bad policy on the rest of us.

CMS’s “aerial view” of the Earthlings

The best evidence that CMS’s “data feedback” is almost totally useless is the inability of CMS and the researchers who evaluate CMS demonstrations to articulate what the feedback is good for. For the last three or four years, CMS has been giving “data feedback” to physicians who participate in the Medicare ACO programs (Pioneer and MSSP) and all three of the “medical home” demonstrations. CMS has hired researchers to evaluate these demos (it appears CMS does not intend to evaluate the MSSP demo). You can read published evaluations of these experiments from cover to cover and find no useful information on what function the data served.

The single best characterization of CMS’s data I have found is this one by an unidentified doctor quoted in the latest evaluation of the Comprehensive Primary Care Initiative, one of the three “home” experiments CMS has conducted: “[The report] leaves it up to us to try to figure out how to study that [the cause of high costs]. So it gives you an aerial view of what is going on but does not help you know where to attack the problem.” (p. 32; bracketed language in the original) The “report” referred to by this doctor is the quarterly “feedback” report that CMS makes available for downloading.

I urge readers to memorize the “aerial view” metaphor used by this doctor. It perfectly illustrates the sloppy thinking at CMS. CMS and its allies in the managed care movement develop evidence-free policy at 80,000 feet and then wonder why Earthlings have so much trouble executing their brilliant policies. When the behavior of the Earthlings fails to improve, CMS refuses to get out of its hovercraft and investigate what happened. Instead, it invents fact-free diagnoses like the “data paradox.”

CMS’s worthless “tips”

Let us examine the contents of the reports CMS hatched in its hovercraft. I call your attention to a document CMS posted last September entitled “Medicare FFS Physician Feedback/Value Based Payment Modifier: 2014 QRUR and 2016 Value Modifier.”

This document contains the latest Quality and Resource Use Reports (QRURs) published by CMS. According to CMS, these reports are prepared for every doctor who treats Medicare patients. Doctors and their clinics are identified by their Taxpayer Identification Number (TIN). These are the reports CMS expects doctors involved in the ACO and “home” demonstrations to download. It is these reports that CMS and its cheerleaders think will lead doctors to practice “smarter” medicine.

The cost data contained in the QRURs includes CMS’s estimate of the doctor’s or group’s expenditures per attributed patient, as well as per attributed patient with one of four diseases (diabetes, etc.). The “quality” measures include, among others, 30-day hospital readmission rates. [1]
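
To see how abstract these measures are, here is a minimal sketch of the per-capita calculation with invented beneficiaries and dollar amounts (the real QRUR measures are payment-standardized and risk-adjusted, but the output has the same character: one average per condition):

```python
# A minimal sketch of a QRUR-style per-capita cost measure, with invented
# numbers (the real measure is payment-standardized and risk-adjusted).
beneficiaries = [
    # (beneficiary id, has diabetes?, total annual Medicare spending)
    ("bene-1", True,  22_000),
    ("bene-2", True,   4_500),
    ("bene-3", False,  9_000),
    ("bene-4", True,  61_000),  # one catastrophic case dominates the average
]

diabetic_spend = [spend for _, has_dm, spend in beneficiaries if has_dm]
per_capita = sum(diabetic_spend) / len(diabetic_spend)
print(f"Per Capita Costs for Beneficiaries with Diabetes: ${per_capita:,.0f}")
# -> $29,167: a single number that says nothing about which services,
#    which patients, or which decisions drove the spending.
```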

If you scroll about halfway down that page, you will come upon a document with the encouraging title “How to Understand Your 2014 Annual QRUR and Supplementary Exhibits” (https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/PhysicianFeedbackProgram/Downloads/2014-UnderstandingYourQRUR.pdf). CMS says you should read this document because it “provides tips on how groups and solo practitioners can use the QRUR … to understand their performance and identify opportunities for improvement.” That word “tips” is your first hint that CMS doesn’t have a clue how doctors should make sense of their QRUR. The waffle words “opportunities for improvement” constitute your second hint.

Here are two examples of the “tips” we encounter in this “How to understand your … QRUR” document.

Example 1: Exhibit 3 of the QRUR presents “the average number of primary care services provided to [Medicare] beneficiaries attributed to your TIN.” Here is CMS’s tip: “If you observe that a large percentage of primary care services provided to your TIN’s attributed beneficiaries is provided by eligible professionals outside your TIN, you may wish to coordinate with these eligible professionals to ensure that your TIN’s attributed beneficiaries are receiving efficient, effective care.” (p. 3)

That’s it. You “may wish to coordinate,” whatever that means. “Coordinate” is one of those all-purpose managed-care buzzwords. You’re supposed to know what it means even though the word is used incessantly, rarely defined, and never defined with anything resembling precision. And don’t ask what “efficient, effective care” means. That’s for CMS to know and the doctor to figure out.

Example 2: Exhibit 10 presents “per episode” costs for four diseases. CMS’s tip (I have italicized waffle words): “The information … allows you to determine specific groups of beneficiaries for which your TIN’s costs are higher than your peers. For example, if your TIN’s Per Capita Costs for Beneficiaries with Diabetes are higher than your peers, then you could consider developing a strategy to improve the efficiency of the care of these beneficiaries, perhaps by adopting care management practices or by educating beneficiaries on self-management techniques.” (p. 7)

I quote two more examples in the footnote below. [2]

All the “tips” are like this. They are vague, and they never hint at the possibility that CMS’s data could be inaccurate. The possibility that CMS’s data might be far too inaccurate to help anyone figure out why their “performance” is above or below average is beneath discussion.

Researchers cannot identify meaningful use of CMS’s data

Since CMS cannot offer any useful advice on how to use its data, it is not surprising that researchers who write papers about CMS’s ACO and “medical home” experiments have been unable to offer any evidence demonstrating that CMS’s data is helpful. The best these researchers can do is report that “some” doctors say the feedback is useful and offer an anecdote. For example, Mathematica, the author of the second-year evaluation of the Comprehensive Primary Care Initiative, tells a story about a clinic that said it began to investigate why its ER costs were “above average.” Mathematica did not report what truths the clinic unearthed as a result of its investigation, what actions it took, what those actions cost the clinic, or what the results were. We’re simply left with the claim that one clinic told a researcher it took some time to look at the ER use of its patients.

On the other hand, researchers frequently report evidence that doctors find the reports to be not worth reading. Here is a typical example from Mathematica’s evaluation of the Comprehensive Primary Care Initiative: “Some practices considered [CMS’s] data feedback useful, but many found it challenging to understand how to use it in their improvement efforts.” (p. xviii) [3] Based on interviews with 21 of the “home” clinics, Mathematica stated that “many … practices … indicated the Medicare FFS data feedback reports lack actionable information from which to draw conclusions. Some practice leadership described the Medicare FFS reports as complicated and did not know how to reconcile the costs being reported with the clinical issues they face in their practice. Moreover, practices noted that the feedback reports do not differentiate between unnecessary and appropriate costs for care consistent with standards of care.” (pp. 32-33)

Wrong diagnosis, wrong prescription

Slavitt’s diagnosis of the problem is half right: Doctors justifiably feel they’ve been forced to take time away from patients to enter data for CMS. But it is incorrect to say doctors feel “rampantly under-informed.” They feel rampantly pestered. They feel the bumptious staff at CMS are forcing them to engage in many hours of busywork and all they’re getting in return is abstract and inaccurate data on their “performance.” This problem will get much worse under MACRA.

Slavitt’s urgent plea to the IT buffs at the Datapalooza conference to “think bigger” and come up with a technological solution to this problem is wildly misguided. Slavitt should have instead promised his audience that he will use his remaining months in office to eliminate the ready-fire-aim approach to policy-making CMS has promoted for decades and replace it with evidence-based health policy. The first policies he should subject to an evidence-based examination are CMS’s measurement and pay-for-performance policies.

[1] Here is how CMS describes the contents of the QRURs: “The cost measures included in this report, and calculated using administrative claims, are Per Capita Costs for All Attributed Beneficiaries, Per Capita Costs for Beneficiaries with Specific Conditions (Diabetes, Chronic Obstructive Pulmonary Disease (COPD), Coronary Artery Disease (CAD), and Heart Failure), and Medicare Spending per Beneficiary (MSPB). The claims-based quality outcome measures included in this report are the 30-day All Cause Hospital Readmission, Acute Ambulatory Care-Sensitive Condition (ACSC) Composite, and Chronic ACSC Composite measures. PQRS and CAHPS measures are also included, if your TIN reported these measures.”

[2] Here are two more examples of useless “tips” from CMS.

Example 3: Exhibit 6 tells doctors their “quality” scores on what Berenson and Kaye have described as a “vanishingly small part” of the services doctors provide to patients. CMS’s tip:

“A low Quality Domain Score may alert you to opportunities for improvement; review Exhibit 6 to determine the quality domains of weakest performance and to identify the quality measures on which you may wish to focus your quality improvement efforts.” (p. 4) That’s it. Let us count the waffle words in this single sentence: “Opportunities,” “weakest performance,” “focus,” and “efforts.”

Example 4: “Exhibit 7 identifies the hospitals that provided at least 5 percent of your TIN’s attributed beneficiaries’ inpatient stays over the performance period. This exhibit includes only the beneficiaries attributed to your TIN for the three claims-based outcome measures and the five per capita cost measures.” CMS’s tip (I have italicized the waffle words): “Use the data presented in the last column to better understand which hospitals most frequently admitted your TIN’s attributed beneficiaries. This information can help you target care coordination efforts more appropriately.” (p. 5)

[3] Mathematica reported an odd discrepancy in its data that suggests that the clinics that claimed to find CMS’s data useful were just brownnosing. In response to one survey question about CMS’s data feedback, 90 percent of the “home” clinics (or perhaps the hospital-clinic cartels that own the clinics) said the data was useful. But in response to another survey, 64 percent said they had never seen the reports. (See discussion pp. 30-31.) Mathematica promised to investigate this discrepancy.


Replies

  1. “The data is not actionable because it is inaccurate or, at best, too abstract. Usually the data is both – grossly inaccurate and uselessly abstract. The inaccuracy is caused by two problems: the inability of CMS and other insurers to “attribute” patients accurately to the clinics that treat them (the “attribution problem”), and the inability of CMS et al. to adjust cost and quality scores to reflect factors outside physician control (the “risk adjustment problem”). Technology cannot solve either of these problems, and it cannot make the data less abstract without making it more inaccurate.”

    This is so important.

    Every MACRA post on this site going forward should have to answer to these two points:

    1. We can’t solve the attribution problem.
    2. We can’t solve the risk adjustment problem.

  2. Slavitt talks a good game, but well-intentioned policies made at 80,000 feet translate to this


  3. Well done. Perhaps a bit cynical … my truth is somewhere in the middle


  4. To your point about costs/patient with a chronic disease (diabetes etc.), a mathematical proof is worth 1000 words. If you want to reduce your cost/person with diabetes, you simply diagnose more people with diabetes. That also makes it look like your practice serves a higher-risk population. There is no 30,000-foot CMS metric that can’t be gamed by a doctor with the desire to do it, and they’ll never be any the wiser.
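
    To make that arithmetic concrete, here is a quick sketch with hypothetical dollar figures (invented purely for illustration):

```python
# Hypothetical numbers illustrating the comment above: adding mild,
# low-cost patients to the diabetes denominator lowers the per-capita
# figure without changing anyone's care.
established = [30_000, 25_000, 35_000]      # sicker, established diabetics
before = sum(established) / len(established)

newly_coded = [2_000, 1_500, 2_500]         # borderline cases newly coded
cohort = established + newly_coded
after = sum(cohort) / len(cohort)

print(f"before: ${before:,.0f} per capita, after: ${after:,.0f} per capita")
# -> before: $30,000 per capita, after: $16,000 per capita
# A 47% "improvement" produced entirely by coding, not by better care.
```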