Are We Mature Enough to Make Use of Comparative Effectiveness Research?

Thanks to White House budget director Peter Orszag, a Dartmouth Atlas aficionado, $1.1 billion found its way into the stimulus piñata for “comparative effectiveness” research. Terrific, but – to paraphrase Jack Nicholson – can we handle the truth?

In other words, are we mature enough to use comparative effectiveness data to make tough decisions about what we will and won’t pay for? I worry that we’re not.

First, a bit of background. Our health care system, despite easily being the world's most expensive, produces (by all objective measures) relatively poor-quality care. Work begun three decades ago by Dartmouth's Jack Wennberg and augmented more recently by Elliott Fisher has made a point sound-bitey enough for even legislators to understand: cost and quality vary markedly from region to region, and these variations cannot be explained by clinical evidence and do not appear to be related to health care outcomes. In other words, plotting a 2×2 table with costs on one axis and quality on the other, we see a state-by-state Buckshot-o-Gram.

Three key conclusions flow from this “variations research”:

  • Lots of what we do in health care is costly and ineffective;
  • We must somehow goose the system to move all providers and patients into the high-quality, low-cost quadrant on that 2×2 table; and
  • Better evidence about what works would help with such goose-ing.

Since nothing can happen without the research, the new funding for comparative effectiveness is welcome and helpful. But will it be sufficient to move the needle?

Here’s where things get dicey. A chief medical officer I know was once discussing unnecessary procedures in his health care system. In a rare moment of unvarnished truth telling, one of his procedural specialists told him, “I make my living off unnecessary procedures.” Even if we stick to the correct side of the ethical fault line, doctors and companies inevitably believe in their technologies and products, making it tricky to get them to willingly lay down their arms. Robert Pear described the political challenges surrounding effectiveness research in last week’s New York Times:

[the legislation has become] a lightning rod for pharmaceutical and medical-device lobbyists, who fear the findings will be used by insurers or the government to deny coverage for more expensive procedures and, thus, to ration care. In addition, Republican lawmakers and conservative commentators complained that the legislation would allow the federal government to intrude in a person’s health care by enforcing clinical guidelines and treatment protocols.

At this moment, Medicare’s rules – yes, the same Medicare that’s slated to go broke in a decade or so – forbid it to consider cost in its coverage decisions. Rather, its mandate is to cover treatments that are “reasonable and necessary.” So if Medicare comes to believe that a new chemotherapy will offer patients an extra week of life at a cost of $100,000 per patient, it is pretty much obligated to cover it. This is insane, obviously, but such are the rules.

And, if anybody tries to put the Kibosh on the Chemo, you can count on boatloads of oncologists, patient advocates, and pharma companies to descend on Washington like teenagers with Obama inaugural tickets, hammering the authorities to “be humane” and “take the decisions out of the hands of government bureaucrats and MBAs” and “put them in the hands of doctors, where they belong.” (This is precisely what happened at Medicare’s hearings regarding cardiac CT, a technology that Medicare decided to cover despite a striking dearth of evidence of effectiveness.) And TV news magazines will be right there, telling the compelling and tragic story of the kindly grandma who will never see her grandchildren’s bar mitzvahs because of Medicare’s heartlessness.

As Stalin said, “a single death is a tragedy, a million deaths a statistic.” Such is the problem with trying to make rational, evidence-based tradeoffs (that lead some people to not get the care they want) in a media-saturated open society.

But we can’t give up. We need to get a handle on healthcare costs, and it’s far better to do it by jettisoning non-evidence-based, wasteful care than by getting rid of the good stuff.

Luckily, we’ve waited long enough that we have some models to learn from – and some cautionary tales. Let’s begin by talking NICE. Literally.

A decade ago, Britain’s National Health Service launched NICE, the National Institute for Health and Clinical Excellence. In a recent NEJM article entitled “Saying No Isn’t NICE,” Robert Steinbrook reviewed the “travails” of NICE:

Since 2002, National Health Service organizations…have been required to pay for medicines and treatments recommended in NICE “technology appraisals.” The NHS usually does not provide medicines or treatments that are not recommended by NICE… NICE can be viewed as either a heartless rationing agency or an intrepid and impartial messenger for the need to set priorities in health care…

As we look to NICE for a roadmap, it is worth remembering the differing dynamics of a closed, tax-funded system such as the NHS, and the pluralistic, chaotic hodgepodge that is American health care. NICE’s physician-chair told Steinbrook that the Institute had to

be fair to all the patients in the National Health Service… If we spend a lot of money on a few patients, we have less money to spend on everyone else. We are not trying to be unkind or cruel. We are trying to look after everybody.

NICE, with its 270-member staff and $50M budget, not only reviews whether treatments work, but explicitly analyzes cost-effectiveness (leading some drug manufacturers to cut their prices to achieve better C-E ratios and chances of NICE approval). Although the cost-effectiveness cutoff is a bit fluid, NICE generally does not recommend treatments whose cost per quality-adjusted life-year (QALY) is more than about $40,000. According to American health care mythology, our cutoff is $50,000, but in reality it is hard to find examples of practices that have been withheld based on cost-effectiveness considerations.
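The arithmetic behind these cutoffs is simple enough to sketch. Here is a minimal illustration in Python; the function and the `utility` weight are my own illustrative constructions, not anything NICE publishes, and the only numbers taken from the text are the roughly $40,000 threshold and the hypothetical $100,000-per-extra-week chemotherapy from the Medicare example above:

```python
# A minimal sketch of NICE-style cost-effectiveness arithmetic.
# Function names and the utility weight are illustrative, not official NICE methods.

def cost_per_qaly(incremental_cost: float, life_years_gained: float,
                  utility: float = 1.0) -> float:
    """Incremental cost-effectiveness ratio, in dollars per quality-adjusted life-year.

    `utility` (0 to 1) weights the added survival time by its quality of life.
    """
    return incremental_cost / (life_years_gained * utility)

# The hypothetical chemotherapy from the text: $100,000 for one extra week of life.
ratio = cost_per_qaly(100_000, 1 / 52)
print(f"${ratio:,.0f} per QALY")  # about $5,200,000 per QALY

THRESHOLD = 40_000  # the approximate NICE cutoff cited above
print("recommend" if ratio <= THRESHOLD else "do not recommend")
```

Run on the hypothetical chemotherapy, the ratio works out to roughly $5.2 million per QALY, orders of magnitude beyond any plausible threshold, which is exactly why a coverage standard that is blind to cost produces the insane results described above.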

Even in relatively non-litigious Britain, about one-third of NICE’s decisions are appealed, and several have generated impassioned pleas by patients and advocates for re-consideration. But most decisions have held up. Steinbrook praises NICE for helping to focus global attention on cost-effectiveness, but notes that

It remains to be seen… how many other countries will follow its lead. After all, saying no takes courage – and inevitably provokes outrage.

To me, NICE’s experience shows that rationing based on cost-effectiveness can be done, but we can count on it being about ten times harder in the United States (with our fragmented healthcare system, our sensationalist media, our hypertrophied legal system, and our tradition of individual benefit trumping the Good of the Commons) than it has been in the UK.

A second cautionary note: In November, the Times ran an article describing the sad case of the ALLHAT trial, the 2002 JAMA hypertension study that found that diuretics, costing pennies a day, worked better than three other classes of drugs (ACE inhibitors, calcium channel blockers, and alpha blockers) that cost up to 20 times more. The study, which took nearly a decade and cost over $100 million, was largely ignored – six years after its publication, the percentage of hypertensive patients on diuretics had risen by an underwhelming five points (from 35% to 40%).

Why the wimpy response to ALLHAT’s results? Partly resistance to change, partly new drugs that came out as the study was being conducted, and partly pharmaceutical company lobbying. As Medicare’s former CMO Sean Tunis said, “there’s a lot of magical thinking that [the application of comparative effectiveness studies] will all be science and won’t be politics.”

And if that isn’t depressing enough for anyone favoring science and rationality, here’s one last cautionary tale:

In the mid-1990s, the buzzword for encoding evidence-based practice was “practice guidelines,” and an agency called the Agency for Health Care Policy and Research (AHCPR) set out to create such guidelines using clinical evidence. Sound familiar? One of the first procedures AHCPR addressed was surgical management of back pain, bringing together a panel of national experts (led by Seattle’s Rick Deyo) to review the literature and recommend evidence-based practice standards.

You can guess the rest. The AHCPR panel found virtually no evidence supporting thousands of back surgeries each year, and recommended against them. Orthopedic surgeons worried that the guidelines were the first step to blocking insurance coverage for one of their favorite pastimes. As described by Jerome Groopman in a 2002 New Yorker article,

…almost as soon as the panel convened, it came under attack. Contending that the deliberations were not an open process and that the panelists were biased against surgery, a group of spine surgeons, led by Dr. Neil Kahanovitz, an orthopedist who was then a board member of the North American Spine Society, lobbied Congress to cut off AHCPR’s funding. Deyo recently told me [Groopman] that the line taken by the opponents of the panel was “ ‘These guys are anti-surgery, they’re anti-fusion.’ But we really had no axe to grind,” he went on. “Our aim was to critically examine the evidence and outcomes of common medical practices.”

Congress, led by then-House Speaker Newt Gingrich and in a nasty, budget-slashing mood, “zeroed-out” AHCPR’s funding. Though the Agency survived (a fraction of its budget was restored by the Senate), it did the only thing it could – ending the guideline program, re-branding itself as being about producing evidence but not recommending practice, and even changing its name to the Agency for Healthcare Research and Quality (AHRQ), a masterful series of moves by the late John Eisenberg credited with saving the agency from bureaucratic purgatory. As much as we like to blame the politicos, the drug and device companies, and the MBAs, the AHCPR fiasco demonstrated that physicians are every bit as capable of self-interested venality as any other group.

So, is it worth wasting our time and money on comparative effectiveness research? I’m hoping that this is a new day – the coming implosion of the healthcare system is now well recognized, as is the quality chasm. We simply must find ways to drive the system to produce the highest quality, safest care at the lowest cost, and we need to drag the self-interested laggards along, kicking and screaming if need be. Comparative effectiveness research is the scientific scaffolding for this revolution, so bring it on.

But let’s not be naïve about it – one person’s “cost-ineffective” procedure may be a provider’s mortgage payment, a manufacturer’s stock-levitator, and a patient’s last hope for survival.

So my hope is that we have the brains to produce the right kinds of data, and the maturity to act on it, humanely but responsibly.

Robert Wachter, MD, is widely regarded as a leading figure in the modern patient safety movement. Together with Dr. Lee Goldman, he coined the term “hospitalist” in an influential 1996 essay in The New England Journal of Medicine. His most recent book, Understanding Patient Safety (McGraw-Hill, 2008), examines the factors that have contributed to what is often described as “an epidemic” facing American hospitals. His posts appear semi-regularly on THCB and on his own blog, “Wachter’s World.”

Bob & Dan– Bob–Thanks for a very good post. I’m afraid it will be more difficult in the U.S. than in the U.K. for all the reasons you suggest. But, as Obama keeps telling us, time to grow up. Dan– Take a look at who the Obama administration has appointed to the panel that will oversee comparative research. The 16-member panel is composed mainly of physicians–people like Jim Weinstein (of Dartmouth) and Christine Cassel (head of the American Board of Internal Medicine). The only insurers represented: George Isham of Health Partners, the head of research at Kaiser and the chief… Read more »

Dan Gebow

I agree with many of your points about profit from unnecessary procedures, but there is a deeper story to comparative effectiveness (CE). I was one of the leaders in the movement to block Medicare’s attempt to eliminate reimbursement for Cardiac CT. Our action was not based upon a disregard for EBM, but instead, the methodology that was used to justify the coverage decision. This methodology and back story holds many lessons for CE: -The proposed Medicare decision was for coverage of 2 indications that were not appropriate for Cardiac CT. The medical basis for coverage (via clinical trials) of these… Read more »

Roger Williams

Here is the CATO paper I have been reading. The title is: A Better Way to Generate and Use Comparative-Effectiveness Research.


Do you have a URL or citation for the CATO paper?

John R. Graham

We don’t have a national institute to determine which automobile or house or organic pork chop is most effective. We rely on individual choice. Health insurers should be free to state “we cover any treatment that costs up to $50,000/QALY as determined by (insert name of non-profit institute)” or whatever other QALY and allow individuals to decide the value. See Michael Cannon’s nice briefing paper from CATO Institute on how the government prevents the private sector from producing CER.


A great post. I live and work in England and have been watching the development of NICE and the evidence based healthcare movement here. The going is not easy, and many of the challenges you foresee in the US remain extremely contentious here, despite 10 years of overt “rationing” by NICE. We have not avoided serious and ongoing dissent, despite the differences between our healthcare system and yours (universal health insurance, less fragmentation, less litigation and an electorate more used to – but not particularly accepting of – “greater good” arguments). Given the challenges here, in an apparently more receptive… Read more »


Excellent post. Nuanced, and understands all the issues at play, including here in the UK.

Christopher George

Because the only case which you discuss is one in which supposedly greedy doctors perform ineffective surgery for profit, one might be left with the impression that the principal problem in healthcare is restraining rapacious doctors. It is well known in certain segments of the medical community that back surgery and cardiac angioplasty are largely ineffective. It is also well known that regulators with government sponsorship have a limited grasp of statistics and science, and an uncanny tendency to target effective procedures as often as stupid ones. Don’t be surprised if you don’t like the result once a Soviet-style… Read more »

bev M.D.

Well, I hate to be radical, but I think one possible solution is to institute DRG’s for physicians for inpatient care, with or without the additional step of bundling physician and hospital DRG payments and letting them fight it out over who gets what. I cite the latter with reluctance, because I think hospital administrators would hold the whip hand the way hospitals are currently organized, but one must at least consider it. What other way can one force physicians and hospitals to pay attention to comparative effectiveness research? As I commented to Dr. E. Novak recently, there is already… Read more »

Jeff Kreisberg

Over one billion dollars of the economic stimulus package will be used to study which procedures, drugs, and devices are the most effective to treat disease and carry the least risk to the patient. We already have scientific evidence that many of our most expensive tests and treatments are unnecessary, risky and very expensive, adding over $700 billion to our healthcare costs. One of the leading culprits is coronary care; $60 billion is spent on unnecessary angioplasties and coronary artery bypass surgeries. In addition, thousands of women are having unnecessary Cesarean sections (that cost 50% more than natural delivery) simply… Read more »

Tom Leith

Well, Barry, I think what you’re getting at is that Emanuel is proposing differentiated co-pays and tiering based on significant differences in clinical efficacy. Fine and dandy by me. And to be fair, this is what’s happening right now in most pharma formularies. It just hasn’t extended so well to diagnostics and other procedures. My point is that “consumer oriented” strategies can’t help at all because only a very few patients will understand the Comparative Efficacy Research well enough to know whether to pay the higher co-pay. Last time we tried this with the “good HMOs” in the 1980s, docs… Read more »