An increased investment in comparative effectiveness research to gather additional evidence on what medical therapies and technologies work best is often cited as a fix for the nation’s rising health costs.
Unfortunately, lessons from its use abroad and in the U.S. show that this dramatically overstates its benefits as a cost-containment tool.
Comparative effectiveness research entities, such as England’s National Institute for Health and Clinical Excellence (NICE) and Germany’s Institute for Quality and Efficiency in Health Care (IQWiG), have not reduced national health spending on new technologies. NICE recommendations are thought to account for 10 percent of the increase in England’s health costs.
And as Tara Parker-Pope reminded us this week in her NY Times Well column, the uptake and adoption of the evidence, which is just as important as the research itself, vary widely among physicians.
While it hasn’t always been called comparative effectiveness research, the U.S. has plenty of evidence-based guidelines for physicians and has a long, sordid history with technology assessment (another name for CER).
Physician organizations, such as the American College of Cardiology and the American College of Surgeons, issue guidelines. The U.S. Preventive Services Task Force uses systematic reviews and has even begun incorporating cost-effectiveness analysis into its recommendations. And insurance companies conduct technology assessments to shape their benefit packages.
Still, the Dartmouth Atlas researchers reported last week that persistent geographic variation in practice patterns remains among doctors treating Medicare patients, and presumably the rest of us, too. This geographic variation in health care utilization is part of what convinced White House budget chief Peter Orszag of the need for comparative effectiveness research. Under his leadership, the Congressional Budget Office issued several reports recommending investments in the research.
President Obama signed an economic stimulus package last month devoting $1.1 billion to comparative effectiveness research. Unlike NICE in England, the adoption of the results of this research will not be mandatory in Medicare or any public program. It’s unclear whether the research will include cost effectiveness analysis.
NICE makes headlines when it rejects a drug or new technology, but it’s worth noting that most of the time it approves a technology for some subgroups of patients. Those patients then have a legal right to the treatment.
Washington and Oregon are among the states experimenting with health technology assessments and taking a bolder line than the national government. They are using guidelines – based on comparative clinical and cost effectiveness – to shape their state Medicaid benefits. It’s basically NICE on the West Coast.
While NICE is taking its model around the globe, there is limited evidence from abroad that accurately assesses the impact of health technology assessments. It’s next to impossible to isolate the recommendations from other health system factors that may impact health outcomes or spending.
How will this investment in CER change the U.S. health care system or impact physician practice? How will we know? Will it reduce spending? Will we see less geographic variation in the Dartmouth Atlas report in five years?
Spending won’t decrease unless payment reforms accompany this investment, experts say. And because the guidelines won’t be mandatory, they add, it will be difficult to infuse the evidence into practice, and even more challenging to evaluate its use.
Policy makers are scrambling to organize a scientifically sound system for carrying out this research. Lessons from abroad show that transparency and independence are crucial for it to gain legitimacy. But in addition to focusing on these important details, policy makers also should consider how this research and subsequent recommendations will be diffused to impact patient care.
Patient care, after all, is at the heart of this matter, lest we forget.