As a physician, I know the challenge of helping patients determine which health care options might work best for them given their personal situation and preferences.
Too often they — and their clinicians — must make choices about preventing, diagnosing and treating diseases and health conditions without adequate information. The Patient-Centered Outcomes Research Institute (PCORI) was created to help solve this problem — to help patients and those who care for them make better-informed health decisions.
Established by Congress through the Patient Protection and Affordable Care Act as an independent research institute, PCORI is designed to answer real-world questions about what works best for patients based on their particular circumstances and concerns. We do this primarily by funding comparative clinical effectiveness research (CER), studies that compare multiple care options.
But more research by itself won’t improve clinical decision-making. Patients and those who care for them must be able to easily find relevant evidence they can trust. That’s why our mandate is not just to fund high-quality CER and evidence synthesis but to share the results in ways that are meaningful to patients, clinicians and others.
We’re also charged with improving the methods used in conducting those studies and enhancing our nation’s capacity to do such research.
We will be evaluated ultimately on whether the research we fund can change clinical practice and help reduce the variations and disparities that stand between patients and better outcomes. We’re confident that the work we’re funding brings us and the audiences we serve closer to that goal.
Recently, some observers in health policy circles have questioned our holistic approach to PCORI's work. Their view holds that direct comparisons of health care options — especially those involving high-priced interventions — should be the dominant, if not sole, focus of PCORI's research funding, as a path to limiting the use of expensive, less-effective options.
We agree that discovering new knowledge on how therapies compare with one another is a critical mandate of PCORI and is essential to improving the quality and effectiveness of care. However, ensuring that patients and those who care for them have timely access to and can use this knowledge, so that they can effectively apply it to improve their decisions, is also very important.
That is the reasoning behind our integrated approach, which addresses gaps in the available evidence while also studying how best to make that evidence accessible and usable.
Much of our research portfolio is in fact dedicated to the research that our critics desire. We’re funding studies to answer questions about common, serious conditions like cardiovascular disease, cancer, and mental illness, which affect millions of Americans.
And the specific topics we’re addressing are recommended and vetted by researchers and the patient and stakeholder communities to whom we’re responsible.
This work will be bolstered by a new series of large pragmatic studies that will compare outcomes between two or more approaches to high-priority clinical issues in real-world settings. We are also establishing a national clinical research infrastructure to make it easier and less expensive to conduct these kinds of large studies in diverse patient populations.
By focusing on outcomes and data from real-world settings, we’re more likely to obtain relevant answers to the many important questions patients have. Consider patients newly diagnosed with cancer. They want to know not just survival rates but the impact of different therapies on their quality of life or ability to work.
Such outcomes matter to patients, but research often fails to address them.
That’s why we’ve made engagement a cornerstone of our research. Every research study we fund must include a plan to engage patients, their clinicians and others across the health care community to ensure the research focuses on practical questions.
Because engagement of this depth is new to many researchers and patients alike, we're developing methods to improve our ability to incorporate patients' perspectives in research. Patient-centered research methods that are transparent and scientifically sound will enhance the credibility and usefulness of the studies we fund.
This integrated approach will provide patients with the right information in the right place at the right time. It recognizes that engagement, methods, infrastructure, and effective dissemination make patient-centered research timely and useful.
We've invested more than $464 million in such research to date. About 62 percent of this funding has focused on CER, with the rest spread across infrastructure (18 percent), methods (11 percent), and communication and dissemination research (8 percent). We expect to commit $1 billion over the next two years to expand this work.
With a foundation for patient-centered research in place, we’re confident our work will provide patients, caregivers, clinicians and others throughout the health care community with the information they need and help to improve health and well-being for all of us.
Joe Selby, MD, MPH, is PCORI’s Executive Director.
Selby, Joe. PCORI’s Research Will Answer Patients’ Real-World Questions, Health Affairs Blog, 25 March 2014. Copyright ©2014 Health Affairs by Project HOPE – The People-to-People Health Foundation, Inc.
Dear Dr. Selby,
Your goals of teasing clinically meaningful small effects from the observational and systematic studies you describe are certainly understandable given your charge at PCORI and the present way we conduct research with large data sets. But your expectation that you can accomplish these goals by broadening the outcomes, targeting the subsets, and/or randomizing the populations is the stuff of contemporary epidemiologic hubris, especially because research continues to use administrative data (because it is easy to get) that lacks clinical nuance (because it is hard to get). This approach stymies our ability to make progress in meaningful research. Even if you establish multiple outcomes a priori, as you must, you are limited by the indistinct and overlapping nature of clinical outcomes. The same limitations pertain to targeting subsets, confounded further by considerations of statistical power.
And even if you manage to hone definitions of disease and of outcome, RCTs are susceptible to what we call “reduction to the irreproducible.” RANDOMIZATION ERROR always comes into play when seeking small effects. There are always confounders that were not measured or are immeasurable, confounders that introduce “tiny” biases that can be of a magnitude comparable to any statistically significant tiny difference in outcome. You’ll never be the wiser until you discover the result doesn’t reproduce in a second study.
The only way you can reach your goals is to start with a clinically meaningful result from a well-designed efficacy trial in a well-defined subset. Then CER can ask whether there is a subset in a more general population that does as well, better, or worse. For us, a clinically meaningful result would be an NNT<50 for a hard outcome and <20 for a soft outcome. These cut-offs are debatable, and that debate is a prerequisite to CER and to EBM; it is long overdue. We chose our numbers to start the debate based on our experience informing patients; we find they have difficulty understanding differences in benefit and risk smaller than these. However, we – and they – don't put much stock in lesser efficacies even if the study population were 10,000 inbred mice, let alone 10,000 outbred Americans.
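To make the arithmetic behind these cut-offs concrete, here is a minimal sketch; NNT is simply the reciprocal of the absolute risk reduction (ARR), and the event rates below are invented purely for illustration.

```python
# NNT = 1 / ARR, where ARR is the absolute risk reduction between two arms.
# The event rates used here are invented for illustration only.

def nnt(control_event_rate: float, treated_event_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("no absolute risk reduction, so NNT is undefined")
    return 1.0 / arr

print(nnt(0.10, 0.08))  # ARR = 2 percentage points -> NNT ~ 50 (the hard-outcome cut-off)
print(nnt(0.10, 0.05))  # ARR = 5 percentage points -> NNT ~ 20 (the soft-outcome cut-off)
```

In other words, NNT<50 demands an absolute risk reduction of more than 2 percentage points, and NNT<20 demands more than 5; whether those are the right thresholds is exactly the debate the cut-offs are meant to start.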
The most important problem facing our patients today is the definition of evidence. Perhaps PCORI could start funding that question, and funding work that puts patients in their rightful place in a profession, not a business, of medicine. Patients will tell us what evidence is; no one else can or should.
Nortin M Hadler MD MACP MACR FACOEM
Robert A. McNutt MD
Responses to Drs. Hadler, Lewis, and McNutt:
To Dr. Hadler: I agree. Comparative effectiveness studies are often done in situations where expected differences are small or where the expectation is that there is no difference – difficult territory to operate in, but nonetheless important. We also agree that comparative effectiveness research is complementary to and usually follows on the establishment of efficacy and even effectiveness. Three points may offer some comfort. First, a major problem when differences are small is that it’s hard to discern true differences from apparent differences that are due to selection bias (confounding), especially in non-randomized studies. That’s why nearly half of all PCORI studies and nearly all of the larger studies we’re funding are randomized trials. Second, although differences for some outcomes may be small or nonexistent, they may be larger for others, like side effects or quality of life. That’s why PCORI emphasizes considering a broader range of outcomes than in typical efficacy studies. And third, “marginal” effects often result from studying heterogeneous populations that contain subgroups who benefit differentially and subgroups who don’t – so the average effect is “marginal.” PCORI emphasizes and expects funded research to pay special attention to possible differences in relative effectiveness within a study population, and to seek to identify those patients who benefit a lot from making a particular choice and those in whom the choice doesn’t matter or is reversed. This is the patient-centered approach: learning what works better for whom, for the outcomes that matter most to them.
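As a purely hypothetical illustration of that averaging effect (the subgroup shares and risk reductions below are invented, not drawn from any PCORI-funded study), a substantial benefit confined to a minority of patients can pool to a "marginal" overall result:

```python
# Invented numbers: a treatment that helps a minority of patients a lot,
# and the rest not at all, looks "marginal" when only the average is reported.

subgroups = [
    # (share of study population, absolute risk reduction in that subgroup)
    (0.25, 0.08),  # 25% of patients benefit substantially (subgroup NNT ~ 12)
    (0.75, 0.00),  # 75% of patients see no benefit at all
]

pooled_arr = sum(share * arr for share, arr in subgroups)
print(f"Pooled ARR: {pooled_arr:.3f}")       # 0.020
print(f"Pooled NNT: ~{1 / pooled_arr:.0f}")  # ~50, despite a real effect in one subgroup
```

A study that reports only the pooled result hides the subgroup for whom the choice matters a great deal, which is why we expect funded studies to examine heterogeneity of treatment effect.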
To Al Lewis: First, PCORI is not set up to save money, but to increase our understanding of the relative benefits and harms of alternative choices faced by patients and those who care for them. This evidence should lead to better choices and better outcomes. In some instances it may lead to reductions in wasted spending; in others, it may lead to greater spending, at least for subgroups shown to benefit from a newer or more costly therapy. Here it will be critical that we STOP using the option the new approach is intended to replace, because continuing it would then be wasteful. Regarding patient engagement, I think that Al Lewis underestimates the readiness of patients to engage, as individuals or organizations. At PCORI, we engage before we fund research, so that we get the questions right. If the research serves the true information needs of patients, we think they'll be more likely to use the results.
To Dr. McNutt: First, an encouraging word. The research of the last three decades, including work you participated in, has undoubtedly led to better treatments and better outcomes. Heart attacks, strokes, and cancer are three major killers whose mortality is in decline thanks to clinical and outcomes research, quality improvement research, and implementation research. Do we already know what we need to know from clinical trials? I'd say not. While we may know that outcome differences between two choices are small (or large) on average, we often don't understand how the effects may differ for patients with various demographic and clinical characteristics. To your point that research is seldom helpful to individuals, we'd say "that can change." Greater attention to treatment heterogeneity and to the wider range of outcomes that interest patients can make research more useful to individuals. Engagement with patients and other stakeholders, which PCORI does intensely, is another strategy for making research relevant. And finally, while clinical trials are complex and costly, they are often the only way to get unbiased answers, and PCORI funds many randomized trials. We agree that research, whether randomized or observational, is best done in real-world clinical settings and that clinical medicine and clinical care more broadly should be approached as research. Given that we so often make treatment decisions without good evidence, every opportunity should be taken to learn from the choices we've made. That is why PCORI has funded PCORnet, a national network that is building the infrastructure for embedding clinical research in care delivery systems for millions of Americans receiving their care in a wide range of clinical settings.
In summary, a clearer focus on questions that matter to patients and outcomes for individuals, using appropriate research methods in studies based in real world settings, and engaging all the end users of research in the process seems to us a very logical response to the concerns raised by Drs. Hadler, Lewis, and McNutt.
The comments to this blog post triggered some thoughts that I would like to have critiqued by this group, especially PCORI. My comments are informed by my experience living through massive efforts to improve research and translation. I have seen statewide research efforts; lived through the cancer cooperatives; was part of several Patient Outcomes Research Teams (PORTs), which meta-analyzed everything; served AHRQ as a grantee and reviewer; watched the translational research centers (61 nationwide and growing) and PCORI; and, last, listened to former President Bill Clinton on a golf program.
First, we already know what we need to know from the present research process. We need another research process. With the present process we are dancing around on the head of a pin in terms of "small effectology," as Dr. Hadler points out. As a practitioner of informed decision making, I rarely find data that I can share with a patient that is meaningful to the individual. Hence, our clinical research efforts are presently dead in the water, and I have to admit I resent spending more of my money on the same old approach.
Second, our government doesn't really exist in health care. Big business runs health care. Given that this is true (the Medicare governing board, for example), the goal of the government is not primarily better care (they have already seen that little differs despite marginally higher cost/outcome), but to take money from the medical field and move it to brighter pastures. Bill Clinton said it succinctly. In addition, the "government" of health care seems more worried about jobs than about better-informed patients who, once acquainted with the marginal benefits of some of the things we promulgate, begin to stop doing so much.
Third, we need a new model of research. I have simulated too many trials not to realize that any single RCT is nothing more than a flip of the coin, no matter the size. As an editor, I also find that no matter the size of the trial (bigger means looking for smaller effects), it is very difficult to balance the compared groups from a statistical standpoint. Perhaps the real issue with an RCT is that it limits choice for patients; we could learn from larger groups of patients with varied clinical characteristics if we abandoned the RCT. Why isn't the practice of clinical medicine research? It should be, and we can make it so with creative uses of registries, for example.
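To put a rough number on that coin-flip intuition, here is a small simulation sketch; every input (arm size, event rates, the small true benefit, the unmeasured risk factor) is an assumption chosen for illustration, not a result from any actual trial.

```python
# Rough sketch: replicate two-arm trials with a small true benefit (NNT = 50)
# and one unmeasured prognostic factor. All parameters are invented assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_per_arm = 2000      # assumed trial size per arm
base_rate = 0.10      # assumed event rate on the comparator
true_arr = 0.02       # assumed true absolute risk reduction (NNT = 50)
factor_prev = 0.30    # prevalence of an unmeasured risk factor
factor_risk = 0.05    # extra event probability conferred by that factor

n_sims = 1000
wins = 0
imbalance = []
for _ in range(n_sims):
    # The unmeasured factor is randomized along with everything else,
    # but chance imbalance between arms is never exactly zero.
    f_ctrl = rng.random(n_per_arm) < factor_prev
    f_trt = rng.random(n_per_arm) < factor_prev
    ev_ctrl = (rng.random(n_per_arm) < base_rate + factor_risk * f_ctrl).sum()
    ev_trt = (rng.random(n_per_arm) < base_rate - true_arr + factor_risk * f_trt).sum()

    # Two-proportion z-test (normal approximation), counting only wins in the 'right' direction.
    p_pool = (ev_ctrl + ev_trt) / (2 * n_per_arm)
    se = np.sqrt(p_pool * (1 - p_pool) * 2 / n_per_arm)
    z = (ev_ctrl - ev_trt) / n_per_arm / se
    wins += z > 1.96

    imbalance.append(abs(f_trt.mean() - f_ctrl.mean()))

print(f"Replicate trials significant in the right direction: {wins / n_sims:.0%}")
print(f"Mean chance imbalance in the unmeasured factor: {np.mean(imbalance):.3f}")
```

With these invented inputs, only about half of the replicate trials reach conventional significance, and each run is left with roughly a percentage point of chance imbalance in the unmeasured factor.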
Last, it may sound good, but I doubt Dr. Selby or PCORI really knows anything about the questions people are asking. We have moved away from individualized medicine toward "population health," as if we somehow know what that means. The best population health will come when informed patients distribute themselves by benefit and harm, and they will want proof of that. CER won't do it; Dr. Hadler is on the mark.
We are in a stalemate; I hope my diatribe triggers others to rethink this whole mess.
Joe, it seems that quite literally everything else in the ACA designed to reduce costs through engagement has either failed to reduce costs at all and probably increased them instead (PCMH, wellness), or the jury is still out; and, as Dr. Hadler says (and other researchers have pointed out), the hoped-for effect is so marginal that you need a far larger patient population to prove it.
Perhaps this is just my cynicism, but how will this be different in getting patients to engage? (I don't think engagement is an issue in cancer, but it certainly is in lifestyle diseases.)
I applaud the premise behind CER and therefore the rationale for PCORI. However, I have grave concerns that the effort is largely doomed by methodological challenges. I expressed these reservations on THCB shortly after the creation of PCORI and nothing has assuaged my concerns to date:
https://thehealthcareblog.com/the_health_care_blog/2010/01/comparative-effectiveness-research-and-kindred-delusions.html#more
In essence, I don’t think any degree of data torturing can compensate for the marginal efficacy of so many targets for CER. If the efficacy in a carefully selected subset of patients is marginal (NNT>50 or 100 or…), you run the risk of comparative ineffectiveness research. CER only makes sense if it can be anchored in a relevant example of efficacy in some subset.
Examples of marginal efficacy are legion: oral hypoglycemics, statins, many cancer screening protocols, and much more, including highly marketed fads such as the stenting of STEMIs:
https://thehealthcareblog.com/blog/2013/10/13/the-end-of-the-coronary-angioplasty-era/
and anti-coagulants for non-valvular A-fib:
https://thehealthcareblog.com/blog/2014/02/01/why-your-a-fib-diagnosis-may-not-be-as-bad-as-you-think-it-is/
Of course, this concern would not pertain to comparative cost research, but that is not a PCORI agenda.
“We will be evaluated ultimately on whether the research we fund can change clinical practice.” And, of course, on how PCORI contributes to the cost-effectiveness of American health care without ever saying those dreaded words. Not an easy mission.
Keep up the good fight, Joe.
Very glad you're taking a proactive and outward-facing role, but I think it's important that you take feedback from sites such as this one and not simply act in a marketing/representative capacity for another Congress/Administration-funded healthcare initiative.
I appreciate your points about high-quality CER, but I think you're overreaching in saying that there is currently a limited focus on outcomes and real-world data – for example, in cancer patients. That was the 1980s. The last 30 years have seen an explosion of functional and lifestyle outcome measures from both Europe and the US. Fields like orthopedic surgery are still largely focused on immediate postoperative outcomes rather than longer-term functional data, but even those areas are changing. The fields of cardiovascular medicine and oncology made those changes long ago. The focus of PCORI should be less about simply commissioning new studies (many of which are already being done at top-tier, rigorous scientific institutions) and more about integrating the evidence ('evidence synthesis,' as you call it) so that both practitioners and patients actually know how to use, in practical terms, the 10 new sets of papers that the Society of X, Y, and Z just approved, often with their own bias.
As to the debate of the other big goal of PCORI – ‘direct comparisons of health care options especially those involving high-priced interventions as a path to limiting the use of expensive, less-effective options’ – I would like to second the comments from my esteemed (yet tragically named) prior commenter…
It is important to publish negative findings. Additionally, it's important not to start with a hypothesis and focus all attention on proving it. That becomes a case of "if you're a hammer, everything in the world is a nail." The goal should not be simply to dissuade use of more expensive treatments but rather to offer a neutral assessment of the value of a treatment and an equally neutral assessment of its cost. Value is always more imputed, and that is what makes cost-benefit analyses in healthcare challenging. If all the studies are slanted to show that expensive treatments just aren't as good as cheaper ones, you lose the effectiveness and power of your data and your reasoned assessment. Like the blinded scales of justice, PCORI needs to provide unbiased and reasoned assessments, not just more skewed and agenda-driven viewpoints.
1. Let's evaluate the effectiveness of alternative therapies.
2. Let's open up the process to the American public year-round on the web, not through a form buried in a dropdown menu, but through a website dedicated to that purpose.
3. Let’s publish negative findings.