Don’t Confuse Hard Science With Bad Pharma

A key lesson of science is the importance of a control group; I worry that a lot of coverage and discussion of the biopharma industry (in which I work) neglects this lesson, and instead contrasts (implicitly or explicitly) industry behavior with an imagined, idealized standard of perfection, failing to place industry's actions in the context of medical science as a whole.

I appreciate critical coverage of the industry: reporters should always maintain high standards, approach new information skeptically, and not take anything at face value.

However, what disappoints me is the common, implicit assumption that industry science deserves to be treated as a special case, rather than considered within the broader framework of contemporary research.  I’m especially disappointed by the frequent assumption that the behavior of industry scientists should be viewed more skeptically than the behavior of academic scientists; this strikes me as a magical, often self-serving belief that has now become elevated to the status of conventional wisdom.

Take data sharing, a topic in the news today (and discussed very thoughtfully here by John Wilbanks, the guru of open science).  While most media coverage of this topic (both today and over the years) has focused on the transparency of industry research, I’ve been attending the annual Sage Commons Congress since its inception in 2010 (disclosure: I served as a founding advisor to Sage, a non-profit organization focused on open science, founded by Eric Schadt and Stephen Friend), and hearing every year about how incredibly difficult it is to get academic groups to share with each other, for a wide variety of reasons.  (See this exceptional talk from Josh Sommer of the Chordoma Foundation at the First Sage Congress).  Getting scientists (or any group of competitive human beings) to exchange data turns out to be a real problem — especially in the highly-regulated environment in which clinical data sit.

Unfortunately, as a colleague recently told me, “it’s easier to write stories about industry being bad than it is to write about science being hard.”   Consequently, most stories about transparency seem interested in exploring only how industry is behaving, rather than looking at the broader challenges associated with data sharing.

Another example is the recent discussion around the always captivating subject of post-hoc subgroup analysis, a topic experiencing significant buzz in the context of a recent industry-sponsored Alzheimer's disease study (see this Twitter exchange).  Highly suspect?  You bet.  The sole province of industry?  Not a chance.  There's a remarkable amount of statistically suspect data dredging that occurs in academia – one of the reasons so many papers aren't reproducible.

To be clear: bad statistics is a problem – but one hardly unique to pharma.

Here’s why this matters: the New England Journal of Medicine (NEJM) recently published a study showing that readers of scientific articles are significantly less likely to trust results emanating from industry than from academia.  Predictably, some pharmascolds trotted this out as a perfect example of just deserts – industry, they said, is now reaping what it sowed.

Jeff Drazen, the NEJM’s Editor-in-Chief, offered an appreciably more nuanced view: while acknowledging that industry has a financial stake in the outcome of studies, and occasionally has behaved improperly, he also pointed out that “investigators in NIH-sponsored studies also have substantial incentives, including academic promotion and recognition, to try to ensure that their studies change practice.”

Drazen went on to argue that science should be judged based on the quality of the data, and highlighted some ways this could be assured.

I agree with Drazen’s conclusions, which (sadly) must be considered courageous rather than common-sense, given the arena in which he operates.

I’m inclined to take the logic a step further – in my view, the NEJM data emphasize the extent and impact of reflexive anti-industry bias on the coverage of medicine, and demonstrate just how successful the pharmascolds have been in promulgating the message they have so diligently shaped.

The stubborn truth is that while industry is certainly not perfect, it actually conducts science in a remarkably robust fashion – I would have far more confidence in the reproducibility of an industry study – any industry study, from laboratory research to late-phase clinical trial – than I would in a similar study conducted by university scientists.

I have deep respect for my colleagues in industry (I wouldn’t have stayed here if I didn’t), and have been impressed by their dedication, integrity, and profound determination to do good and make a difference.  They have deliberately chosen to enter the arena, to pursue valiantly the incredibly difficult, risky, and uncertain mission of creating new medicines.  They deserve admiration and respect – certainly not derision and scorn.

While industry research isn’t flawless, or anything close, much of the criticism industry faces – on subjects ranging from data transparency to subgroup analysis – reflects problems facing medical science as a whole, and should responsibly be viewed in that context.

So criticize industry – please.  But let’s also be sure to properly contextualize the obstacles we face within the broader challenges scientists everywhere struggle with in the process of generating new knowledge, and – we fervently hope – delivering new cures.

David Shaywitz is co-founder of the Center for Assessment Technology and Continuous Health (CATCH) in Boston.  He is a strategist at a biopharmaceutical company in South San Francisco. You can follow him at his personal website. This post originally appeared on Forbes.
