The Federal Comparative Effectiveness Research Coordination Council has posted its draft definition of comparative effectiveness research and the draft criteria for research prioritization at http://www.hhs.gov/recovery/programs/cer/draftdefinition.html for public review and comment.
An increased investment in comparative effectiveness research to gather additional evidence on what medical therapies and technologies work best is often cited as a fix for the nation’s rising health costs.
Unfortunately, lessons from its use abroad and in the U.S. show that this dramatically overstates its benefits as a cost-containment tool.
Comparative effectiveness research entities, such as England’s National Institute for Health and Clinical Excellence (NICE) and Germany’s Institute for Quality and Efficiency in Health Care (IQWiG), have not led to decreased national health spending on new technologies. NICE recommendations are thought to account for 10 percent of the increase in England’s health costs.
And as Tara Parker-Pope reminded us this week in her NY Times Well column, the uptake and adoption of the evidence, which is just as important as the research itself, varies widely among physicians.
While it hasn’t always been called comparative effectiveness research, the U.S. has plenty of evidence-based guidelines for physicians and has a long, sordid history with technology assessment (another name for CER).
Thanks to White House budget director Peter Orszag, a Dartmouth Atlas aficionado, $1.1 billion found its way into the stimulus piñata for “comparative effectiveness” research. Terrific, but – to paraphrase Jack Nicholson – can we handle the truth?
In other words, are we mature enough to use comparative effectiveness data to make tough decisions about what we will and won’t pay for? I worry that we’re not.
First, a bit of background. Our health care system, despite easily being the world’s most expensive, produces (by all objective measures) relatively poor-quality care. Work begun three decades ago by Dartmouth’s Jack Wennberg and augmented more recently by Elliott Fisher has made a point sound-bitey enough for even legislators to understand: cost and quality vary markedly from region to region, variations that cannot be explained by clinical evidence and do not appear to be related to health care outcomes. In other words, plotting a 2×2 table with costs on one axis and quality on the other, we see a state-by-state Buckshot-o-Gram.
Three key conclusions flow from this “variations research”:
- Lots of what we do in health care is costly and ineffective
- We must somehow goose the system to move all providers and patients into the high quality, low cost quadrant on that 2×2 table; and
- Better evidence about what works would help with such goose-ing.
Barack Obama’s health reform proposal includes creating a center for comparative effectiveness research.
John McCain also has expressed support for this research.
And the American College of Physicians would like patients and doctors to use comparative effectiveness information when making health decisions.
What the heck are they talking about?
Policymakers, pundits and journalists have begun throwing around the term “comparative effectiveness” as if people know what it means.
I haven’t seen a formal survey, but I’m confident that the general public understands neither the concept behind this jargon nor the reasons why a national center might be needed to compare different medical treatments and procedures to find out what is most effective for different patients.
The first step to helping people understand these issues is to stop using the term comparative effectiveness. Using insider terms like this will ensure the public never engages in the issue and never buys into it. And public buy-in is important — crucial actually — says Gail Wilensky, the term’s mother of sorts.