An increased investment in comparative effectiveness research to gather additional evidence on what medical therapies and technologies work best is often cited as a fix for the nation’s rising health costs.
Unfortunately, lessons from its use abroad and here in the U.S. show that its benefits as a cost-containment tool are dramatically overstated.
Comparative effectiveness research entities, such as England’s National Institute for Health and Clinical Excellence (NICE) and Germany’s Institute for Quality and Efficiency in Health Care (IQWiG), have not led to decreased national health spending on new technologies. NICE recommendations are thought to account for 10 percent of the increase in England’s health costs.
And as Tara Parker-Pope reminded us this week in her NY Times Well column, the uptake and adoption of the evidence, which is just as important as the research itself, varies widely among physicians.
While it hasn’t always been called comparative effectiveness research, the U.S. has plenty of evidence-based guidelines for physicians and a long, sordid history with technology assessment (another name for CER).
Physician organizations, such as the American College of Cardiology and the American College of Surgeons, issue guidelines. The U.S. Preventive Services Task Force uses systematic reviews and has even begun incorporating cost-effectiveness analysis into its recommendations. And insurance companies conduct technology assessments to shape their benefit packages.
Still, the Dartmouth Atlas researchers reported last week that persistent geographic variation in practice patterns remains among doctors treating Medicare patients, and presumably the rest of us, too. This geographic variation in health care utilization is partly what convinced White House budget chief Peter Orszag of the need for comparative effectiveness research. Under his leadership, the Congressional Budget Office issued several reports recommending investments in the research.
President Obama signed an economic stimulus package last month devoting $1.1 billion to comparative effectiveness research. Unlike NICE’s guidance in England, the results of this research will not be binding on Medicare or any other public program. It’s also unclear whether the research will include cost-effectiveness analysis.
NICE makes headlines when it rejects a drug or new technology, but it’s worth noting that most of the time it approves a technology for some subgroups of patients. Those patients then have a legal right to the treatment.
Washington and Oregon are among the states experimenting with health technology assessments and taking a bolder line than the national government. They are using guidelines – based on comparative clinical and cost effectiveness – to shape their state Medicaid benefits. It’s basically NICE on the West Coast.
While NICE is taking its model around the globe, there is limited evidence from abroad that accurately assesses the impact of health technology assessments. It’s next to impossible to isolate the effect of the recommendations from other health system factors that influence outcomes or spending.
How will this investment in CER change the U.S. health care system or impact physician practice? How will we know? Will it reduce spending? Will we see less geographic variation in the Dartmouth Atlas report in five years?
Spending won’t decrease unless payment reforms accompany this investment, experts say. And because the guidelines won’t be mandatory, they add, it will be difficult to infuse the evidence into practice, and even more challenging to evaluate its use.
Policy makers are scrambling to organize a scientifically sound system for carrying out this research. Lessons from abroad show that transparency and independence are crucial for it to gain legitimacy. But in addition to focusing on these important details, policy makers also should consider how this research and subsequent recommendations will be diffused to impact patient care.
Patient care, after all, is at the heart of this matter, lest we forget.
Cost effectiveness reforms that impact global spending are much more difficult than is usually assumed in public discourse. It is easy to have a rule (whether public or private payer) that, say, a PET scan is not covered for X disease. Doctors and hospitals submit coded claims, and the rule can be applied to auto-deny the claim. But the percentage of healthcare that is driven by such rule-ready decisions is not high. For example, ICUs are “technology,” but it is hard to write a rule for which patients get one; similarly for liver transplants, etc. Medicare has had exactly the same rules and payment edits in, for example, MN and IL, but the average costs per patient are drastically different. Similarly, Medicare has had exactly the same rules in Northern and Southern California since about 2002, but the average payments per beneficiary are much higher in Southern California. It is impossible to “write rules” that substantially lower global payments in our $2T system, although rules can lower payments a little.
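For readers unfamiliar with what a “rule-ready” coverage edit looks like in practice, here is a minimal sketch. The procedure codes, diagnosis codes, and rule table are hypothetical, invented purely for illustration, and do not reflect any payer’s actual system; the point is simply that this kind of lookup is trivial to automate, while ICU admission or transplant candidacy is not.

```python
# Hypothetical illustration of a rule-ready coverage edit: a coded claim is
# checked against a lookup table of non-covered procedure/diagnosis pairings.
# All codes and rules below are made up for illustration only.

# Non-coverage rules: procedure code -> diagnosis codes for which it is denied.
NON_COVERED = {
    "PET_SCAN": {"DISEASE_X"},  # e.g., PET scan not covered for disease X
}

def adjudicate(claim):
    """Auto-approve or auto-deny a coded claim based on the rule table."""
    denied_diagnoses = NON_COVERED.get(claim["procedure"], set())
    if claim["diagnosis"] in denied_diagnoses:
        return "DENY"
    return "APPROVE"

# Example coded claims
print(adjudicate({"procedure": "PET_SCAN", "diagnosis": "DISEASE_X"}))  # DENY
print(adjudicate({"procedure": "PET_SCAN", "diagnosis": "DISEASE_Y"}))  # APPROVE
```

The commenter’s argument is that most of the $2T in spending flows through decisions that cannot be reduced to a table lookup like this, which is why rules alone trim payments only at the margins.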
I think Dr. Dustin Ballard, an emergency physician practicing in Marin County, Calif., nails it. He writes his Medically Clear column every other Monday in the Marin Independent Journal.
He feels that, given the volume of information and misinformation available in the health care arena, the work of the Federal Coordinating Council for CER, if done right, would be extremely valuable, and a centralized and trusted catalog of comparisons would have far wider influence.
http://www.marinij.com:80/lifestyles/ci_11810118