What’s So Wrong With Randomized Trials?

Often, at scientific conferences, the most important learning happens in the question and answer period.

I spoke at the American Diabetes Association conference earlier this year, presenting results of an observational study we did on medication adherence and diabetes.

We found that people who started using the online patient portal (sometimes called the personal health record) to order their medication refills were more likely to take their medications regularly. Dr. Katherine Newton of Group Health Research Institute spoke before me, describing a randomized study showing that a clinical pharmacist-led blood pressure management program did not lower blood pressure any more than usual care by an outpatient provider.

The first audience comment came from a program officer from the National Heart, Lung, and Blood Institute, part of the National Institutes of Health. Program officers are incredibly important because they help set the research priorities for the major funding mechanism for medical research. I will never forget her comment, because it was so strongly worded.

She said (close to but not exactly a direct quote): “Having listened to your talk and the one before it, I am more convinced than ever that we should focus on randomized trials,” implying that the negative results of the randomized trial were more believable than the positive results of our observational study.

This worship of randomized trials at the expense of other forms of study is understandable. If we randomly assign people to one treatment or another, we can ensure that the differences we see between the two groups are really because of the treatment and not some other factor. After all, researchers, myself included, spend a tremendous amount of time obsessing over our methods. We go to extreme lengths to make sure that we correctly interpret the data before us. Our holy grail is “causal inference” in which we can be sure that whatever risk factor or treatment we study truly causes the health outcome we are interested in. Randomization is the best way to ensure that you’re not unwittingly attributing your effect to the wrong cause.

So why did I think this comment was off-base? First, you cannot always randomize people to one treatment or another. In the case of my study, the online patient portal was available to everyone, as a part of the health system.

When healthcare systems change or offer a new service, it is important to quantify the benefit, and government and accreditation mandates often make randomization impossible. So, we use our methodological skills to try to approximate cause and effect, by choosing the population under study carefully and adjusting for all the factors that we can think of and measure that might affect the outcome.
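The adjustment idea described above can be sketched with a toy example. Suppose a confounder, here labeled "engagement," drives both portal use and adherence: a naive comparison then overstates the portal's effect, while comparing within strata of the confounder and averaging gives a more honest estimate. All of the numbers below are invented for illustration; none come from the study discussed in the post.

```python
# Toy illustration of confounder adjustment in an observational study.
# All counts are made up; "engagement" stands in for any confounder
# (motivation, health literacy, access) that affects both portal use
# and medication adherence.

# stratum -> (adherent portal users, total portal users,
#             adherent non-users,    total non-users)
strata = {
    "high engagement": (64, 80, 15, 20),
    "low engagement":  (10, 20, 36, 80),
}

def crude_difference(strata):
    """Naive comparison that ignores the confounder entirely."""
    ua = sum(s[0] for s in strata.values())  # adherent portal users
    ut = sum(s[1] for s in strata.values())  # all portal users
    na = sum(s[2] for s in strata.values())  # adherent non-users
    nt = sum(s[3] for s in strata.values())  # all non-users
    return ua / ut - na / nt

def adjusted_difference(strata):
    """Average of within-stratum differences, weighted by stratum size."""
    total = sum(s[1] + s[3] for s in strata.values())
    return sum(
        (s[0] / s[1] - s[2] / s[3]) * (s[1] + s[3]) / total
        for s in strata.values()
    )

print(f"crude difference:    {crude_difference(strata):.2f}")    # 0.23
print(f"adjusted difference: {adjusted_difference(strata):.2f}")  # 0.05
```

In this made-up data the crude comparison suggests a 23-percentage-point adherence gap, but once engagement is held fixed the gap within each stratum is only 5 points. Real analyses adjust for many such factors at once (typically with regression or propensity scores), but the logic is the same, and it only works for confounders we can think of and measure.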

Is it perfect? Nope. However, it’s better than not understanding the health effects of health system delivery changes.

A second reason to think beyond trials is that they are designed to answer a single question in a narrow group of people. Enrolling in a trial involves meeting strict criteria about what other medical conditions, medications, treatments, and history you have. In the real world, patients are people, not single-disease entities. As a primary care physician caring for medically complex, low-income, ethnically diverse patients, I often struggle with how to apply results from trials to my own practice. It bears repeating that studying real-world populations is critical to improving health in real-world populations.

If that weren’t enough, trials are expensive and time-consuming. Trial researchers need to enroll a lot of people to detect significant differences in outcomes. However, when policymakers and public health leaders are making decisions for an entire population, small differences matter. Our study showed a 6% difference in the proportion of patients who took their cholesterol medication regularly. That’s not a huge number, but over an entire group of diabetes patients, lowering cholesterol modestly is important. It would be a tall order to fund a trial large enough to detect such a difference.

Finally, many approaches that work in randomized trials don’t end up helping in real life.

A study earlier this year found no benefit to implementing surgical checklists in Canada, even though the same checklist had powerful results in other settings. No randomized trial is going to be able to explain that contradiction! We need more methods, collectively known as implementation science, in order to understand not only what works, but how it can be applied, implemented, and spread, so that new treatments and approaches translate to health benefits for all. In our study, perhaps there was something unique about the patients who used the online refill function for their medications – understanding that, rather than designing new randomized trials of new interventions, may be well worth our time.

Let’s end the tyranny of the randomized trial and advocate for good data and rigorous methods in every aspect of health care delivery. My patients, and all patients, deserve better.

Urmimala Sarkar MD, MPH is an Associate Professor of Medicine in Residence at UCSF in the Division of General Internal Medicine and a primary care physician at San Francisco General Hospital’s General Medicine Clinic. Dr. Sarkar’s research focuses on (1) patient safety in outpatient settings, including adverse drug events, missed and delayed diagnosis, and failures of treatment monitoring, (2) health information technology and social media to improve the safety and quality of outpatient care, and (3) implementation of evidence-based innovations in real-world, safety-net care settings. 

6 replies

  1. This shows that randomized trials are not applicable to every illness, because each patient has a different condition. I agree that most diseases require specialized treatment, and sometimes randomized trials are not appropriate for a specific disease.

  2. The case against RCTs is pretty strong. They can’t do a lot of things, namely answer most questions we are interested in. The case for highly biased observational methods is very weak. If you had to bet on one of these two hypotheses: (1) the patient portal improved compliance/health/whatever or (2) more motivated patients are healthier and more likely to use the portal, how many people are really going to be on #1? Probably the authors and… that’s it. So observational methods are not going to be helpful unless you put them in an explicitly Bayesian framework.

  3. This is a very thought-provoking article. Your insight makes a good point: most diseases are handled on a case-by-case basis, so randomized trials would serve as clinical tests and provide better feedback on efficacy. – Angel S

  4. Agree with Jha. There is a selection effect for the motivated in this observation. It would make sense that the motivated could be motivated further with other forms of empowerment, like a patient portal. It is unknown whether this additional effort, in this particular population, will influence outcomes. As you can see in your own population, patients who are detached from their own care, whether by preference or economics, may not benefit from a patient portal at all. RCTs have their own flaws, for sure. Much of the evidence falls apart when it hits the "real world." Because of the pervasive lack of cross-validation and reproducibility, medical science continues to make similar mistakes over and over. Many of the studies are rife with impure motivations that corrupt unbiased observation. It would be very difficult to achieve the scientific rigor of physics, since we cannot treat patients like molecules. But I think we can do better to move the needle along the spectrum of scientific inquiry. For instance: not calling a solitary positive or negative finding the "gold standard" and bludgeoning people with treatment guidelines or mass adoption on its basis. Good discussion. Thank you.

  5. Thank you for your comment. I would suggest that observational methodology has advanced considerably, especially with the advent of implementation science methods, which are linked in the post.

    In regard to your specific question about our study, we do not make any claim about the underlying mechanism by which adherence improved (it could well be internal motivation, as you suggest). Rather, we think this means that the convenience of online refills may simply have lowered a health system barrier for some patients. To me, this evidence supports helping patients with diabetes learn to use the patient portal for their refills, even if we don't know the reason that they adhere better when they use it. Of course, health systems should also track this in real time, so that they can see whether investing in training, encouraging, and promoting a patient portal continues to affect the metrics they care about.

  6. I agree that RCTs are simply not as feasible as many want. They are also so contrived that their external validity is next to nil. And even when they are groundbreaking, detractors often find something wrong with them: either patients have not been followed up long enough to judge all-cause mortality, or the sample is underpowered for the treatment effect (see the trial on checklists, NEJM).

    But as bad as RCTs are, the alternative is not much more inspiring.

    Lots of clever statistics go into observational studies. Nevertheless, there are serious problems. The directionality is far from established. Post hoc ergo propter hoc.

    I haven't read your study, but the first thing I would want to know is whether the more motivated person is the one using electronic portals: whether you are simply measuring concordance and compliance (and the underlying cultural, social, and educational factors) rather than the utility of online patient portals.

    “However, it’s better than not understanding the health effects of health system delivery changes.”

    Perhaps. But I’m not sure this is empirically correct. Ioannidis showed that most published research in medicine is false. Trying to change the world repeatedly in the face of new published research is not only disruptive but, per Ioannidis, likely harmful.

    Perhaps society needs to temper its “let’s rationally and scientifically design society” ambitions. Any attempt is likely to be wrong, statistically speaking.