Why I Don’t Believe In Science

A few days ago, cardiologist and master blogger John Mandrola wrote a piece that caught my attention. More precisely, it was the title of his blog post that grabbed me: “To Believe in Science Is To Believe in Data Sharing.”

Mandrola wrote about a proposal drafted by the International Committee of Medical Journal Editors (ICMJE) that would require authors of clinical research manuscripts to share patient-level data as a condition for publication. The data would be made available to other researchers who could then perform their own analyses, publish their own papers, etc.

The ICMJE proposal is obviously controversial, raising thorny questions about whether “data” are the kinds of things that can be subject to ownership and, if so, whether there are sufficient ethical or utilitarian grounds to demand that data be “forked over,” so to speak, for others to review and analyze.

Now all of that is of great interest, but I’d like to focus attention on the idea that conditions Mandrola’s endorsement of data sharing. And the question I have is this: Should we believe in science?

Mandrola’s belief in science must assume that medical science can reveal durable answers, truths upon which we can base our clinical decisions confidently. He comments:

I often find myself looking at a positive trial and thinking: “That’s a good result, but can I believe it?”…Are the authors, the keepers of the data sets, telling the whole story?

Presumably, science is the way to get the “whole story,” for after weighing the pros and cons of data sharing, he concludes:

Open data would make it easier to believe. And we need to believe in science.

In other words, we give people who are in the midst of a heart attack aspirin because “science has shown” that aspirin in that setting reduces the mortality rate, we screen for colorectal cancer because science has shown that cancer incidence decreases, we give the latest immune system modulator to patients with rheumatoid arthritis because science has shown that symptoms are improved, etc.

But Mandrola and most doctors are also perfectly aware that scientific truths change all the time.

John Ioannidis, a Harvard-trained physician and statistician, made a splash a few years ago when he published a paper called “Why Most Published Research Findings Are False,” arguing that the claims of most research publications are rapidly contradicted or reversed, at least in medicine. That paper is the most downloaded article from the journal PLoS Medicine to date, and it earned Ioannidis glowing press coverage in The Atlantic and The Economist.

Now, Ioannidis believes—as does Mandrola—that scientific findings are unreliable because of circumstantial or external considerations. These considerations fall into two categories: statistical and human. Statistical deficiencies usually have to do with an insufficient number of subjects (for a given effect size) or perhaps with faulty statistical models. Human deficiencies have to do primarily with bias, which comes in many guises that Ioannidis describes in his paper.

For Ioannidis, Mandrola, and many other commentators, bias is the most important “correctable” reason for scientific failures. And that’s why all are so keen on promoting some kind of data sharing to improve the reliability of a study result. If the data are open to scrutiny, different teams of researchers can examine the findings and draw their own conclusions. Presumably, the truth will emerge from such a process or, at least, we will get closer to it. Ioannidis advances the notion that while “100% certainty” may not be achieved, we can get close to the true answer if we reform the ways we conduct science.

A major problem is that it is impossible to know with 100% certainty what the truth is in any research question. In this regard, the pure “gold” standard is unattainable. However, there are several approaches to improve the post-study probability.

And he goes on, in that paper and in several others he has authored since then, to make specific recommendations about ways to reform medical science in order to improve its veracity.
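
For concreteness, the “post-study probability” Ioannidis refers to can be illustrated with the simplified, bias-free formula from his 2005 paper. The short sketch below uses that formula with purely hypothetical numbers; it is only an illustration of the arithmetic, not an analysis of any particular trial.

```python
def post_study_probability(pre_study_odds: float, alpha: float = 0.05,
                           power: float = 0.80) -> float:
    """Probability that a statistically significant ("positive") finding is true,
    using the simplified, bias-free formula in Ioannidis (2005):
    PPV = (1 - beta) * R / (R - beta * R + alpha), where R is the pre-study odds."""
    R, beta = pre_study_odds, 1.0 - power
    return (1 - beta) * R / (R - beta * R + alpha)

# Hypothetical numbers: for a long-shot hypothesis (1:10 pre-study odds),
# even a well-powered, unbiased study leaves ample room for error, and an
# underpowered one fares far worse.
print(round(post_study_probability(0.10), 2))              # ~0.62
print(round(post_study_probability(0.10, power=0.20), 2))  # ~0.29
```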

But Ioannidis may be overlooking an uncomfortable truth about the truths of science. The notion that truth can be approached by way of scientific refinement is not itself necessarily true. In fact, it is an idea that has been subject to considerable controversy over the last 70 years.

Without getting into the various debates that have taken place since the neopositivist dream of establishing science as a unified way of explaining the universe was dashed by Kurt Gödel, Willard Quine, Michael Polanyi, Thomas Kuhn, and others in the mid-twentieth century, we can at least remind ourselves that science has trouble with the concept of a “gold standard,” and that a scientific consensus, however close to 100% it may be, has led us down many blind alleys in the past. This is not very controversial, even among self-confident scientists.

Secondly, even if we confined ourselves to the modest “normal science” of clinical trials that concerns Mandrola and Ioannidis, I’m not sure what importance we should attach to the pursuit of scientific truth. In fact, when it comes to medical care, too strong a belief in science can be problematic.

As a case in point, a few weeks ago the New England Journal of Medicine published the results of the SPRINT trial, which had randomized patients with high blood pressure to one of two treatment protocols: high intensity therapy, to try to achieve a systolic blood pressure less than 120 mmHg, or low intensity treatment for a more modest reduction in blood pressure (less than 140 mmHg).

To the surprise of many, the trial was stopped early because the difference in mortality rates between the two groups was statistically strong enough to trigger the pre-specified termination rule. High intensity therapy was superior to low intensity therapy, and continuing the trial for another few years could have exposed the low intensity group to an excess risk of death.
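
For readers unfamiliar with how such termination rules work, the sketch below illustrates the general idea in rough terms. It assumes a group-sequential design with an O'Brien-Fleming-type efficacy boundary, a common choice but here purely hypothetical; it is not SPRINT's actual monitoring plan, and the interim numbers are made up.

```python
from math import sqrt

# Hypothetical illustration of a pre-specified efficacy stopping rule (not
# SPRINT's actual monitoring plan). O'Brien-Fleming-type boundaries spend very
# little alpha early on, so only a large interim effect can halt a trial early.

Z_FINAL = 2.04  # illustrative critical value reserved for the final analysis

def efficacy_boundary(information_fraction: float) -> float:
    """Approximate O'Brien-Fleming-type boundary at a given information fraction."""
    return Z_FINAL / sqrt(information_fraction)

# Hypothetical interim look with 60% of the planned information accrued:
interim_z = 3.1  # illustrative z-statistic favoring the intensive arm
if interim_z > efficacy_boundary(0.60):  # boundary is about 2.63 at this look
    print("Pre-specified efficacy boundary crossed: early termination recommended")
```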

Now, the trial results caught the medical community by complete surprise, not only because the apparent benefit of intense therapy was strong enough to trigger an early termination, but also because the results were contrary to prior clinical trial findings. Those previous trials, however, were smaller in size than SPRINT, and had not addressed the question of treatment intensity as directly and generally as SPRINT did. SPRINT was specifically designed to “put that question to rest.”

Luckily, SPRINT was an NIH-sponsored trial, so the usual suspicion that the results could have been rigged by profit-motivated pharmaceutical companies could not be raised. In many other ways (size, design, statistical methods, etc.), SPRINT seemed to follow some of the recommendations made by Ioannidis. Nevertheless, many physicians seemed upset about the results, and some of their reactions seemed to betray biases of their own.

As soon as the abrupt trial termination was announced, Eric Topol and Harlan Krumholz, two academic leaders in cardiology, wrote an Op-Ed in the New York Times in which they demanded that all patient-level data be made promptly available for review by the scientific community. Being NIH-sponsored, it seemed, was not good enough to satisfy the skeptics.

Now, imagine for a moment that the data were made available to the doubting Thomases to examine, analyze, and interpret to their heart’s content. And imagine further that the reviewers were able to scrub the data squeaky clean and purify them of any possible biasing influence. Suppose furthermore that they subjected the data to the best statistical methodology known to man. What would the new results then mean in relation to a truth we can “believe in”? How would we know that we are closer to “100% certainty”?

The answer is we wouldn’t really know, or at least not for very long. Conditions change constantly, so that new therapies, new practice patterns, new epidemiological considerations, new social, economic, and environmental factors, all invariably conspire to render clinical trial results much less relevant, if not completely obsolete within a short period of time. Trial results are short-lived not just because of design or bias. They are short-lived because the world they try to capture quickly outlives them.

And besides, in real life settings, the importance of scientific truths is vastly overstated. From a practical standpoint, how close the SPRINT trial results are to “the truth” is of secondary concern when treating patients. Even if a clinical trial were carried out with utmost probity and satisfied the most fastidious of scientists, we would still not be obligated to apply its results in practice willy-nilly, as Harlan Krumholz himself has reminded us.

Shortly after SPRINT was published, Krumholz wrote another piece in the New York Times cataloging all the reasons why we shouldn’t rush to lower blood pressure intensively in everyone. But his reasons had nothing to do with concerns about the trial design or the possible bias of the investigators. Instead, Krumholz correctly reminded us that numerous patient factors must first be taken into consideration before making a clinical decision.

A trial can tell us that on average certain types of patients may do better with this treatment than that one, but a trial can tell us nothing about how the patient at hand will fare, and this patient invariably has certain characteristics that make him or her different from those patients enrolled in the trial. We are not clones, after all.

The upshot of this inherent limit to the value of clinical trials is that Mandrola, Ioannidis, Topol, Krumholz, myself, and any physician worth the M.D. after their name have to use judgment to take care of patients, and clinical judgment is decidedly “unscientific.” After all, if we are allowed our own respective clinical judgments—and thankfully, so far, we are—there is no scientific explanation for any agreement or discrepancy among us. Judgments are decisions, not “discoveries.”

Now, of course, an immediate retort is to say “Well, Accad, I understand your point about clinical decisions per se not being scientific actions, but isn’t it better to have medical science be least tainted by potential bias, so our clinical decisions can be as efficacious as possible? Surely, you’re not advocating that we leave it up to investigators to advance whatever claims they wish? Let’s not go back to the age of snake oil salesmen!”

Well, one answer to that retort is that scientific “purity” comes at a cost, and determining whether it’s worthwhile to purify science is not itself a matter of scientific inquiry. There is a cost to implementing new rules that must be added to an already horrendously expensive and lengthy clinical trial process, and if the aim is to get ever closer to scientific certainty, there is no end to the resources that could be employed to triple-verify and vet everything that goes into a clinical trial.

In fact, clinical research is extremely costly precisely because of perennial calls to make it “more rigorous” and more believable. And these calls are not new: our healthcare system was born out of a desire to put snake oil salesmen out of business by making medicine ever more scientific. But if that desire leads to a boundless commitment of resources and still remains unsatisfied, perhaps it’s time to reconsider its pre-suppositions. Besides, I’m not sure that the academic community, whose living depends in large part on the conduct of research, is sufficiently impartial when it comes to determining the optimum cost of the scientific enterprise.

But there is another, more important cost, one that is not material in nature.

In the aftermath of the SPRINT trial, some physicians and scientists were so upset about the mini “paradigm shift” they were potentially facing that they railed against the decision taken to stop the trial early according to its pre-specified safety termination rule.

In the comment section of a Health News Review blog post discussing the SPRINT trial, Mayo Clinic physician-scientist Victor Montori—a very well-respected champion of patient-centered medicine and “shared decision-making”—expressed a wish that “new pressure be applied to prevent [early trial termination] from ever happening again.”

Alan Cassels, a health policy researcher and the author of the article, agreed with Montori and added that “we should not cheer the decision to stop the trial, and sprint to erroneous conclusions about what it all means.” I have read similar remarks made in other venues online or in print.

Now, it is quite possible that these comments were made off-the-cuff in the excitement of the surprising news. But if Montori, Cassels, and others making these remarks are willing to stand by them, then not only do they demonstrate circular reasoning with regard to scientific validity (termination rules are put in place precisely to remove human bias), but they also implicitly express the view that seeking scientific clarity is worth risking some lives.

This, in my mind, is a troubling position to take, and it points to a more general danger, which is that if we believe in science too strongly, we may end up not believing in patients anymore.

14 replies

  1. I’m late to this party as I was busy doing decidedly non scientific things. The ethics of stopping SPRINT are interesting. It seems that science makes more science unethical 😉

  2. Thanks, John, and I agree with your decision to remove the quotation marks. I hesitated on that point and decided on the more cautious choice, but your reasoning is correct.

  3. Quotation marks are almost always unnecessary and weaken the meaning of a post or confuse the reader as to the proper meaning. A well-written piece will convey the nuance in the author’s meaning effectively, as Michel’s does.

    Is the author really saying we shouldn’t believe in science? Could it be? Well, no. Is that question at the center of the argument here? Yes.

    Hopefully, we’re bright enough to work it out for ourselves.

  4. PS: Also, there may be more of an “anti-science” tone than I intended here. My original title for this piece had “Believe in Science” in quotation marks, to make a direct reference to John Mandrola’s blog post, but these disappeared when the post was imported to THCB. I am not that much of a heretic!… 🙂

  5. Hi Brad,

    While it may seem that absent randomized clinical trial data, physicians would be paralyzed and unable to make any decision regarding a new compound, that is not borne out by historical evidence (pretty good progress was made, and care rendered, before RCTs became indispensable in the last 20-30 years).

    If one pays attention to the needs of the particular patient at hand, the added information provided by a large randomized trial may not amount to much. There is safety information, of course, but that can be obtained in other ways. Furthermore, if safety information is not available, then one would presumably proceed very cautiously, weighing the expected benefits and risks.

    These two back-to-back posts I wrote a few years ago may be of interest to you, as well as my response to BobbyGvegas below.

    http://alertandoriented.com/the-clinical-trial-on-trial/

    http://alertandoriented.com/why-n-of-1-is-enough/

    I have also a series about medical decision-making which you may find pertinent to your questions. http://alertandoriented.com/tag/medical-decision-making

    Thank you for your comments, and sorry if that’s more than you were asking!

    Michel

  6. Perhaps I would understand your position better if you explained how you would use a PCSK9 inhibitor or Entresto, say, in your practice. Both are novel compounds with implications for your own cardiology practice. They also both have SPRINT-like RCTs behind them, i.e., lots of depth, but little breadth on the evidence front. Also, no real-world experience in the US population.

    I am not sure where you are going with your post. You lost me. Do you shun the above, and if not, why and how do you incorporate them into your decision making?

  7. Understood, but let me offer a couple of thoughts. First, to shake well entrenched norms is not that easy, and it is probably best done with focus and vigor (polemics). The article is pretty long as it stands, and adding another part about what to do next might be too heavy to digest.

    Second, “how we should proceed” can refer to many different questions, each worthy of its own focus. I think there is a lot of corrective work that can be done and is being done. If we are talking specifically about the question of optimizing medical decisions, you may find this post less bleakly destructive 🙂 http://alertandoriented.com/phronesis/

    Finally, there is a strong will in medicine that we “all get along,” which is certainly fine and laudable on a personal level, but frequently that means we are expected to endorse mutually exclusive ideas and approaches, which is not very healthy. Polemics is one way to keep that from happening.

    Thank you for reading!

    Michel

  8. Sorry, you have been reported. 🙂 While I am not into “scientism” (the position that ONLY reductive Western science is epistemologically sound), your post, while long on slapping down the shortcomings and limitations of science, doesn’t offer us any rational alternatives. Again, it’s disturbing.

  9. Oh, no! Don’t report me to the authorities! (The folks at that website have already heard about my heresies, I believe…)

  10. Thoughtful, provoking reflection on the limits of science in medicine. Highly recommended!