The Case for Comparative Effectiveness Research

When I was a kid growing up in Los Angeles, there was this local TV show my dad used to enjoy watching called Fight Back! with David Horowitz. Basically, Horowitz, a TV reporter and consumer advocate, put the claims manufacturers made about their products to the test—whether it was Samsonite luggage withstanding abuse from a gorilla or Bounty really being the “quicker picker upper,” it ended up on his show and was either endorsed or debunked. It was Consumer Reports come to life, if you will—pitting products against one another to see which one was worth putting down some hard-earned dollars for.

Now, over 30 years later, we in medicine are just getting around to doing the exact same thing Horowitz was doing with retail products way back in the 1970s—comparing the claims made by drug and device makers about their products.

Being the sophisticated academics we believe we are, we’ve given this process a name: Comparative Effectiveness Research (CER). But come on, aren’t we really just asking David Horowitz to come Back to the Future and host Fight Back! for doctors and patients?

One has to wonder why this common-sense approach to figuring out how to spend our health care dollars has taken so long to become part of our efforts to fix health care. Frankly, I have no idea, except to point the finger at the usual suspects: drug companies and their lobbyists on Capitol Hill, who don’t want to invest money in R&D if their product is going to be more expensive, more dangerous, or less effective than an existing one. Or, more recently, the cliché argument that learning something about what works best is somehow a rationing of care (as if spending trillions on excessive, inefficient, and uncertain care, yet having 48 million people uninsured, isn’t).

Still, you’d think that holy triad of characters–doctors, patients, and insurers (including Medicare)–would have wanted to know just this kind of information eons ago.

It was while pondering what seems like our complete lack of practicality that I read more about CER and what it’s going to take to make it work. Among the challenges:

Where to spend our money: Remember, Obama is funding CER with stimulus funds—a one-time shot in the arm for the project (but one that will hopefully continue into the future). Still, even $1.1 billion seems limited relative to the vast array of drugs and priorities. To that end, the Institute of Medicine released a report that’s basically its Hot 100 list of topics we ought to invest in. More on that in an earlier post by Josh Seidman.

We spent the money and got some answers. Now what? In a great commentary in JAMA about the potential and pitfalls of CER, Dr. Robert Brooks asks this fundamental question not because its answer isn’t obvious—implement the findings of the research—but because history tells us that doing so is much harder than it sounds. “The history of science shows that it takes a long time for new knowledge to be incorporated into day-to-day practice. So a second requirement for work funded under the stimulus package should be that successful innovations are implemented immediately. Thus, a successful application under the comparative effectiveness initiative must include constituents, such as health care organizations, hospitals, physicians, or organized community groups, that would agree to adopt the new therapy immediately if it were shown to be as safe as the old therapy but substantially less expensive.”

Brooks has a point, and if you don’t believe him, just look at how poorly current best practices are implemented by doctors across the country.

Despite those concerns, it seems that CER is here to stay in a big way. My feeling is that it’s better to have it and hold doctors, drug makers, and device makers accountable for their health care decisions than to keep practicing in the black box we work in today.

Dr. Rahul Parikh is a Pediatrician in the San Francisco Bay Area and a frequent contributor to Salon.com and THCB. Dr. Parikh practices with the Walnut Creek Medical Center and Kaiser Permanente.


Replies

  1. I work for a company called RemedyMD, and we build registries in part for medical practitioners to conduct CER on upcoming drugs and their effects on patients. While CER may not be applicable to every situation, I do believe it can be useful, especially in the realm of drug testing and comparing the effects of drugs in different instances.

  2. Most patients have too many comorbid conditions, and the garbage-in, garbage-out process will render most results of this type flawed. Did you ever wonder why CHF is one of the most popular DRGs and is used as the code for peripheral swelling of any type? It is because it pays well, and the case managers and coders are taught to massage the findings; if that does not work, the admin bullies the doctors or just has their staff enter the dx. With user-unfriendly electronic records, most doctors just click, click, click to electronically sign without reading what they are signing (or, if they wanted to read it, the print is too small, requiring a determination of which icon should be clicked to enlarge it; this is sickness itself). Then the hospital gets paid a lot and gets great marks for treating the heart failure the patient never had, enabling the executives to continue their million-dollar salaries.

  3. One aspect that concerns me about this is how CER will deal with subgroups. Of course it’s easy to match one medication against another head-to-head, but what about the wide spectrum of people who are taking a drug? When comparing effectiveness, will we look into how the new drug functions in patients taking multiple prescriptions? What about those who just don’t respond to a drug and have to use alternatives that may be deemed less effective? Certainly this holds true for psychiatric drugs, where for no known reason some drugs can be effective in one group of patients and ineffective in another group with identical distinguishing factors. Lithium would be a great example: it is highly effective for treating personality disorders but has no action in many patients. Would all drugs that run up against it in a head-to-head be relegated to the trash pile, given that it would be very difficult to put together a clinical trial that could accurately measure all subgroup effect levels?

  4. If we are smart, the new model for health care reform will contain within it the capacity to collect patient treatment data and rapidly build the necessary information system to help effectively manage care.
    I suspect the information we need is all out there, we just can’t get at it.

  5. I remember David Horowitz!!
    > Still, you’d think that holy triad of characters–
    > doctors, patients, and insurers (including
    > Medicare)–would have wanted to know just this
    > kind of information eons ago.
    I bet they have wanted to know, and the reason they basically can’t easily get the information is another case of regulatory capture (http://en.wikipedia.org/wiki/Regulatory_capture), though maybe of a more subtle kind than is usually thought of.
    The FDA does CER right now — but the standard of comparison isn’t “existing known-to-work treatments”. No, the standard of comparison is “nothing” as in “does this new thing work better than nothing (i.e. a placebo)?” Well, who got THAT through?
    I should like to see CER extended to things besides drugs and devices, and it is to some extent (thankfully) by The Guild itself when it says things like “spinal cord stimulation doesn’t seem to work.” But they don’t take the next step and discourage members from practicing spinal cord stimulation.
    As to the difficulty of moving from knowledge to practice: yes, absolutely, you are right. Some people will never believe the CER anyway, and others will take advantage of that. Have a look at QuackWatch (http://www.quackwatch.com). These guys “earn” billions, and not all of it is a cash business.

  6. I am not sure it can be compared to vacuum testing, as here the variables are both the vacuum and the floor, analogically speaking.
    I just wrote about this on my blog, coincidentally. It just dawned on me that the key would not be to compare the understanding versus the result. Since there are no reliable analytical models available, it is not going to be easy. On a higher level, the variables are the physician’s ability to diagnose, the ability to prescribe the right medicine, the person, and the medicine.
    Now that sounds simple, till you start peeling the onion.
