Having the best evidence at hand is vitally important for making health care treatment decisions. But even when the right—or best—information is available, it isn’t always put to use in clinical practice.
Why? Although we are getting better at generating evidence, we’re still not doing a great job of using it.
Our progress in creating a robust pipeline of comparative effectiveness research (CER) is clear. By 2019, the Patient-Centered Outcomes Research Institute (PCORI) is expected to receive an estimated $3.5 billion from the PCOR Trust Fund to fund CER. CER is not new, but the investment in PCORI reflects a national appetite for a robust and reliable stream of research to overcome one of the greatest perennial challenges in health care delivery: knowing what works, for whom it works, and under what conditions.
CER offers every provider, patient and payer the promise of better care, yet its impact on patient outcomes remains on the horizon, rather than a reality in health care settings today. Why? Research published recently in the American Journal of Managed Care suggests that changes are needed in order to see more consistent translation of research findings into clinical practice. In short, at the moment, we have a hard time using what we learn from CER.
This research examined how major CER studies have affected care. For four high-profile CER studies published within the last decade, we evaluated real-world utilization trends before and after a) publication of the CER findings and b) the release of relevant clinical practice guidelines (CPGs).
The research we examined tells the story. Under the microscope were four major studies: PROVE-IT, an examination of cholesterol-lowering treatment strategies, from 2004; MAMMOGRAPHY WITH MAGNETIC RESONANCE IMAGING (MRI), a comparison of diagnostics to detect breast cancer, from 2004; SPORT, a comparison of surgical and non-operative treatments for herniated disks, from 2006; and COURAGE, a comparison of percutaneous coronary intervention (PCI) to optimal medical therapy (OMT) for people with coronary artery disease, from 2007.
These studies delved into pressing therapeutic questions, and the findings of each revealed new thinking about optimizing care for patients. But despite the shifts in care that could have—or should have—occurred, our analysis revealed no clear pattern of change in utilization in the first four quarters after publication. Even after the studies were incorporated into CPGs, we were unable to find consistent changes in utilization or clinical practice.
This lag in translating research findings into practice is reflected in stakeholders' expectations for CER. Although a recent survey shows continued optimism about CER as a tool for improving health care decision-making, most stakeholders expect its impact to arrive sometime in the future rather than today.
To realize the full potential of CER to improve every single patient encounter, we must be pragmatic about the challenges to adoption. What’s our roadmap for getting it right?
- Ask the right research questions. This means getting input from many stakeholders, including patients, at the outset. PCORI’s focus on patient-centeredness and the Food and Drug Administration’s patient-focused drug development initiative have made tremendous progress in setting the right tone.
- Generate enough of the right kind of research to avoid equivocation. Accumulating studies can make all the difference, especially when evidence conflicts with real world practice or when the science is less mature.
- Foster a learning health system to identify questions and streamline incorporation into clinical practice.
- Better align financial incentives with evidence. We’re seeing momentum to achieve this in various efforts, such as the promotion of Value-Based Insurance Design and other pilot programs.
- Get smart about how to translate knowledge into practice. Just as research on behavior change helps us understand how patients adopt new habits, the growing body of research on knowledge translation will help isolate the factors that move evidence into practice at the point of decision-making.
- Get information into clinicians' and patients' hands at the bedside. We need clinical practice guidelines as a first step, but we also need to properly disseminate and amplify them so they have maximum impact. That might mean, for example, media campaigns and targeted outreach to affected populations. Research shows that information is most effective when it is provided at the time treatment decisions are being made.
The stakes for keeping CER off the shelf and in the hands of health care decision-makers—providers, payers and patients alike—couldn't be higher. The return on investment in CER will be measured by whether, and how quickly, research results are used and translated into clinical practice, and will almost surely determine whether the nation's investment in CER extends beyond 2019.
By building a better infrastructure for CER, we've made the right investment in improving patient care. The challenges we face in using what we know shouldn't diminish that investment. Instead, the barriers to adoption and application of CER offer a clear roadmap for tackling the challenges and realizing CER's full promise. This study shows we must do more than simply create evidence: collaboration is needed to ensure that research is translated and used by patients and their care providers in practice.
Vik is so straightforward and shows courage to say what he/she says. While some may find the comments cynical, I do not. I practice only as an informed medical decision-maker consultant, and I find that when patients are exposed to the literature and studies, they are better at critical appraisal than any physician I ever knew. In addition, patients most often scoff at the information, its lack of relevance to them, and the precarious nature of evidence derived from small populations sifted through the gauntlet of the current research process (antiquated, slow, old before it is over, and possibly irrelevant at presentation).
Dr Nortin Hadler and I have written on CER in other blogs; I will not reiterate except to say that if anyone believes CER will offer useful evidence, they are mistaken. We cannot know anything from data derived from populations of patients and physicians who are free to choose from a plethora of interventions.
And, in my view, the blogger above is correct; translating evidence from research laboratories to the local environment is daunting. I have written editorials calling for a change in the definition of science: it has to be local; it has to be present and relevant to the immediate time and practice base; it has to include the systems caring for patients; it has to be continuous.
I will venture that the RCT is nearly done and CER already is done. There are better models out there; the patients will find them and demand them. Medicine is moving from us to them; let’s find a way to speed the process.
As a mere lowly healthcare consumer, here is what bugs me about clinical research and the way it is conducted and reported: we allow too many trials comparing new treatment A to placebo, instead of to current treatments B and C. Maybe all trials done in support of US drug or device approvals (first approval or label expansion) should be required to have three arms (at least in cases where placebo is ethically allowable). No doubt manufacturers and academics will complain that it is so tough to get enough people into two-armed studies that getting sufficient patients for three arms will be impossible. Fine, pay people. Pay health plans even. If the payoff is going to be huge (read: Sovaldi), let the manufacturer pay for patient recruitment. (Maybe they already do, I don’t know.)
Second, I simply no longer trust the published literature. What mechanisms can we develop so that the methods, results, and raw data of all trials are available on the web as open-access resources? Industry buries studies that produce unhelpful results (unhelpful, at least, from the perspective of labeling or promotion) but that might prove very insightful in terms of knowing how a particular treatment or test flamed out against other ones. To return to my example, if new treatment A fails when measured against existing treatments B and C, but B was shown to be far superior to C, shouldn't we know that, even if the company never files for approval of treatment A?
Finally, be careful about translating "evidence" into practice. A growing number of Americans not only distrust their government, they distrust the medical establishment. There will be a strong vein of skepticism that converting "evidence" into "practice guidelines" is just an attempt to control choices for ulterior motives.