A key lesson of science is the importance of a control group. I worry that much coverage and discussion of the biopharma industry (in which I work) neglects this lesson: it contrasts industry behavior, implicitly or explicitly, with an imagined, idealized standard of perfection, and fails to place industry's actions in the context of medical science as a whole.
I appreciate critical coverage of the industry: reporters should always maintain high standards, approach new information skeptically, and not take anything at face value.
However, what disappoints me is the common, implicit assumption that industry science deserves to be treated as a special case rather than considered within the broader framework of contemporary research. I'm especially troubled by the frequent assumption that the behavior of industry scientists should be viewed more skeptically than that of academic scientists; this strikes me as a magical, often self-serving belief that has now been elevated to the status of conventional wisdom.
Take data sharing, a topic in the news today (and discussed very thoughtfully here by John Wilbanks, the guru of open science). While most media coverage of this topic, both today and over the years, has focused on the transparency of industry research, I've been attending the annual Sage Commons Congress since its inception in 2010 (disclosure: I served as a founding advisor to Sage, a non-profit organization focused on open science, founded by Eric Schadt and Stephen Friend), and every year I hear about how incredibly difficult it is to get academic groups to share data with each other, for a wide variety of reasons. (See this exceptional talk from Josh Sommer of the Chordoma Foundation at the first Sage Congress.) Getting scientists (or any group of competitive human beings) to exchange data turns out to be a real problem, especially in the highly regulated environment in which clinical data sit.