It’s a lousy Saturday morning in Southeastern Pennsylvania. The 100-mile bike ride I had scheduled, the first century of the year, was cancelled at 5 AM due to inclement weather. I’ve been scanning my Twitter feed ever since.
I only joined Twitter yesterday, so I’m a bit obsessed at this point. The synapses in my prefrontal cortex are getting fresh hits of dopamine every time I land on another exciting science/political story, journal article, or blog that’s been tweeted about. Yes, I’m a nerd.
Through Twitter, I was introduced to Michel Accad less than 24 hours ago. He’s a cardiologist, philosopher, writer, and creator of the blog “Alert and Oriented”. Over last evening and this morning, I read most of his blog articles as well as a few research papers he has authored. In short, I think he’s a fantastic writer and a very intelligent guy, but I have to take issue with a recent piece titled “The devolution of evidence-based medicine”. In it, he praises Anish Koka’s recent article on this site titled “In defense of small data”. I know Anish personally; he is brilliant and, paradoxically, he positively covered my own research using big data in another recent article.
Before making my case, I need to state that I suspect Michel, Anish, and I agree more than we disagree. Also, I believe that to be an “expert” in EBM (if that’s even a desirable thing), one must fully appreciate its limitations. In reading their prior arguments, I do believe Michel and Anish would qualify as experts; however, to me, their arguments come dangerously close to throwing the baby out with the bathwater, and seem to favor a return to the “eminence-based medicine” of yesteryear. While I think it’s healthy to point out EBM’s limitations, and even to recognize that it has been hijacked by various interests, as John Ioannidis argues in a recent commentary, we must not lose all perspective.
Since Michel Accad and I are both enthusiasts of Austrian economics, I thought it best to defend EBM from a Hayekian perspective. This may seem counterintuitive because Austrian economics opposes central planning, and to some extent EBM seems like a well-orchestrated, centrally planned cabal designed to rob average physicians of their “ability to think, judge, and reason for [themselves]”, in Accad’s words. In fact, I share that view and argued it in an article I wrote for Mises Daily in 2011. But at the same time, we need to appreciate EBM, with all its shortcomings, and here’s my Austrian economist spin on it. [Disclaimer: neither Mises, Hayek, Rothbard, nor any other prominent Austrian economist ever discussed this, so I’m taking some liberty.]
The primary concern of economics is to understand the factors that determine the production, distribution, and consumption of goods within society. Without describing the various schools, Austrians believe the best social order arises spontaneously through the mechanism of prices in an unfettered capitalist system. You could call them radical capitalists, and no, not like Donald Rumsfeld, Dick Cheney, Alan Greenspan, or any other right-wing figure universally loathed by the left.
In “The Use of Knowledge in Society” (1945), F.A. Hayek explained how prices communicate information. To state his theory in an oversimplified way: there is a great deal of knowledge dispersed throughout the economy (this is sometimes referred to as the knowledge problem), and there is no way to collate all of it in a single place or central body; thus, economic calculation by central planning will usually fail to achieve its desired ends. A better economic order arises from the information carried by price signals. Only prices can communicate what is ultimately valuable to consumers and entrepreneurs. Value roughly equates to truth, and in the Austrian economic sense it is an unequivocally subjective property. Thus, prices serve to share and synchronize local and personal knowledge and values, allowing society’s members to achieve diverse and complex ends through a principle of spontaneous organization.
Take my road bike, for example, which is sitting idle in the bed of my truck. I paid a lot for it, even more for the carbon wheels; most would say I’m crazy. But I love to ride, and even more, I love to ride fast. I’ve had a handful of road bikes in my life, each one more expensive than the last. And I’m not alone; the small band of guys I ride with all have bikes equally or more expensive than my own. How did this boutique bike market come to be? Making one of these marvelous machines is incredibly complex; for a simpler example that underscores my point, I recommend reading “I, Pencil” by Leonard Read.
How does any of this apply to medicine?
Like the economy, the human body is a complex system, composed of some 100 trillion individual cells; the number of chemical reactions occurring at any one time is too numerous to count, and furthermore, each human being is different. It is the economy’s complexity multiplied hundreds of billions of times. The knowledge problem that exists in economics applies equally to the complexity of managing disease and preventing illness. In a recent journal article, Saurabh Jha did a nice job addressing medicine’s information problem and its application to overdiagnosis.
However, this doesn’t mean we can’t arrive at truth when it comes to managing disease or preventing illness; it means that arriving at the truth is not a simple process. And just as prices enable us to arrive at some rough level of truth in the economy, outcomes allow us to arrive at a rough level of truth in medicine. By outcomes, I am referring to meaningful, patient-level outcomes in response to an intervention, collected as part of a well-conducted clinical trial in which attempts are made to limit biases as much as possible. This is clinical science, or population science; it’s not Newtonian physics, as has been pointed out, but I reject the notion that there’s anything “soft” about it either.
Other forms of research serve a useful purpose for generating testable hypotheses but alone are insufficient for promoting new interventions. The medical community has been fooled too many times, and based on the work of people like Vinay Prasad and Adam Cifu, it is reasonable to estimate that the majority of what clinicians do in practice may be simply incorrect. Yes, it’s a tough pill to swallow, but we have to get over it. In my own field of cardiology, examples abound.
Arguing for a return to small data and physician judgment based on personal experience is, in my opinion, the worst thing we could be promoting. Human judgment (even that of physicians) is too frail and susceptible to cognitive biases such as confirmation, misattribution, overconfidence, and the illusion of control for us to simply concede that EBM is a lost cause and return to the days of common sense, with physicians applying their own idiosyncratic judgments or those of so-called experts in the field. Many, if not all, reversed medical practices made good sense at the time. PVC suppression with antiarrhythmics in post-MI patients is a classic example.
In the 1980s, there was a plethora of observational research showing that in patients who had suffered a major heart attack, premature ventricular contractions (PVCs) were associated with an increased risk of sudden cardiac death. Thus, it was reasoned that suppressing PVCs through the use of antiarrhythmic medications would prevent the development of a full-blown ventricular arrhythmia leading to sudden cardiac death. Patients who had suffered a heart attack and were found to have asymptomatic PVCs would be given a trial of an antiarrhythmic drug and followed with a 24-hour continuous cardiac monitor to see whether the drug significantly suppressed the PVCs. If it did, the patient would be maintained on the drug; if not, more drugs would be tried until one was found that worked.
Against this backdrop, the Cardiac Arrhythmia Suppression Trial (CAST) was undertaken to test the hypothesis that PVC suppression would reduce the risk of sudden cardiac death compared with placebo. As of March 30, 1989, 2309 patients had been recruited for the initial drug-titration phase of the study. In this phase, all patients were given antiarrhythmic therapy, but only those who had adequate PVC suppression were then randomized to receive active drug or placebo. 1727 out of 2309 patients (75%) had initial suppression of their arrhythmia and were randomized. During an average of 10 months of follow-up, the patients treated with active drug had a significantly higher rate of death than the patients assigned to placebo (7.7% vs. 3.0%). This means that for every 21 patients treated with an antiarrhythmic medication, 1 additional death was caused. The results from CAST suggest that either PVCs are not on the causal pathway of sudden cardiac death or that the negative effects of treatment with antiarrhythmic drugs far outweigh the benefits of PVC suppression. It is estimated that 50,000 Americans died due to this practice, and millions more would have if not for this clinical trial.
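For readers who want to see where the “1 additional death for every 21 patients treated” figure comes from, it is the number needed to harm (NNH): the reciprocal of the absolute risk increase. A minimal sketch in Python (the function name is my own, for illustration):

```python
def number_needed_to_harm(rate_treated: float, rate_control: float) -> float:
    """NNH = 1 / absolute risk increase.

    Rates are event rates as fractions, e.g. 0.077 for 7.7%.
    """
    absolute_risk_increase = rate_treated - rate_control
    if absolute_risk_increase <= 0:
        raise ValueError("treatment shows no excess harm over control")
    return 1 / absolute_risk_increase

# CAST mortality: 7.7% on active drug vs. 3.0% on placebo
nnh = number_needed_to_harm(0.077, 0.030)
print(round(nnh))  # 21: one extra death per ~21 patients treated
```

The same formula with the rates reversed (control minus treated) gives the number needed to treat (NNT) for a beneficial intervention.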
This doesn’t mean that clinical trials are beyond reproach. We should critique and even criticize them as John Mandrola does so well in his regular Medscape columns.
We should point out when an effect size is small and question whether it has meaning in clinical practice.
We should be critical in evaluating the harms of an intervention and realistic in expecting that harms in the real world will exceed those in the sterile settings of a clinical trial.
We should encourage data sharing since the epitome of good science lies in being able to replicate its findings.
We should point out when a meta-analysis is simply a case of garbage-in-garbage-out.
We should consider the biases of authors and funding sources, because while bias is in many cases unavoidable, that doesn’t mean it should be ignored.
We should call out “guideline-tyranny” when recommendations are based on low to mid-level evidence and the writing committee is full of conflicted members.
We should fight senseless pay-for-performance measures, which are a bastardization of EBM and, in some cases, encourage overtreatment and potentiate patient harm.
We should also encourage observational research, big and small, to generate hypotheses and allow a platform for medical students, house staff, fellows, and junior attendings to participate in the research process.
We should do all these things, but we should not even entertain the idea of abandoning EBM. Nor should we advocate for cookbook medicine. The best approach combines EBM with well-informed physician judgment. Shared decision-making requires that physicians have a good command of EBM and the ability to relay information on benefits and risks to the patient sitting in front of them. I would argue that if, as a physician, you cannot provide the NNT and NNH for a given intervention, either because you don’t know them or because insufficient evidence exists to calculate them, then recommending it should give you serious pause. It would be impossible to have this information without EBM; we’d still be in the medical stone ages.
Andrew Foy is an academic cardiologist who is taking up blogging, again, for the instant gratification it brings while his real research is under peer-review. His Twitter account is @AndrewFoy82.