In an unusually candid editorial in the NEJM, Longo and Drazen say that data sharing may be problematic because some researchers fear that the data could be used by others to disprove their results. The editors predicted a new class of researchers who use data created by other researchers without ever taking the trouble to generate data themselves – research parasites.
With this editorial, the NEJM has firmly established itself as descriptive (the way the world is), rather than normative (the way the world ought to be). I, for one, find this move rather refreshing. I have been pumped to a diabetic state by the saccharine naivety of the hopey-changey, “we need this and that” brigade. The editors merely said what some researchers secretly think, and how many actually behave.
Once, I asked the PI of an RCT a specific question about outcomes. I received a reply within seconds. The PI sent me a pdf of the data. The email ended with that banal academic signature “Best, MD.”
I was flattered by the promptness of her response – many PIs who publish in high impact journals don’t bother replying. Then I discovered she sent me the supplementary appendix, which was also available online. Unsurprisingly, my question was not answered. But it was not supposed to be answered. The unpublished data, which included the answer to my question, was going to be used by the PI for more papers in high impact journals, as it should be.
Another time I asked an economist to share an economic model of a technology, which I did not believe was as cost-effective as he said it was. After a few evasive responses, when it became apparent that I was not getting the message through my thick skull, he replied, “sorry I can’t show you my model. I spent my PhD developing it.” If he thought that I was a data parasite gagging to prove him wrong he was, to put it plainly, spot on.
Karl Popper, the philosopher of science, said that what sets science apart from astrology is that science is falsifiable – if it can’t be disproven, it’s not science. Replication, the key to progress in the physical sciences, is medical science’s Achilles’ heel.
We should dwell a bit longer on the hard sciences because they are instructive. Nobody had to do a meta-analysis of experimental evidence of the presence of ether (the mysterious medium once believed to carry light waves). Why? Because ether either exists or doesn’t exist, and it doesn’t exist. Since Michelson and Morley’s famous failed attempt to show the presence of ether, several physicists have tried, and failed.
There is no meta-analysis of experiments on the possible curvature of space-time, with an I² to measure heterogeneity between studies. Arthur Eddington showed, during a solar eclipse, that gravity bends light, and several others have since verified Einstein’s theories. Publication bias is not an issue in physics.
While physicists seek truths that uncover the mysteries of the universe, truth-seeking in the medical sciences amounts to settling petty quibbles probabilistically, such as:
Should we give 150 mg or 75 mg of aspirin after a myocardial infarction?
Do patient-centered ward rounds improve outcomes?
Is cardiac CT superior to SPECT in the diagnosis of obstructive coronary artery disease?
A bunch of studies root for cardiac CT. Another set root for SPECT. Then we have a meta-analysis. Then an RCT. Then more RCTs. Then a meta-analysis of RCTs. Then finally an analysis of an administrative database with dodgy risk adjustment renders all previous research obsolete.
To which one is tempted to yell – FFS it doesn’t matter, use either or neither. But it does matter. It matters because we’re rational optimizers. We cannot ever, ever, not be doing the best we can, even if the best is like adding a gnome on top of Everest and celebrating the total height.
Optimization is endless quibbling because the differences are so small. Optimization is like a train which departs on a lengthy journey but never leaves the platform. Optimization makes careers. Optimization leads to lots of publications. Lots. There is an infinite number of ethers and space-times to prove and disprove in the healthcare sciences.
But optimization is a methodological nightmare. Because when you’re dealing with such small differences, to make sure those Lilliputian differences are real, the measuring instrument has to be precise. And one cause of an imprecise instrument, other than inherent imprecision, is sloppy research.
If you tell physicists that facts have changed they’ll say “welcome to science and have a nice day.” If you tell physicians that facts change they’ll scream “research fraud.” If Einstein and Newton were doctors, Einstein would have asked Newton to retract his theory of gravity.
One reason why there is no culture of replication in the biomedical sciences is that falsification is suffused with moral outrage. Retraction should be a normal clearing house for the biomedical sciences – as routine as shedding hair or clipping nails. Instead, it has become a consumer watchdog.
I wonder if the illiteracy most urgently in need of redress in doctors is philosophy of science, rather than evidence-based medicine or statistics. Science is a provisional assumption. Facts are supposed to change. Changing facts doesn’t mean that science is broken. It means that science is happening.
Much of the angst against the NEJM editorial, mostly hilarious on Twitter, stems from a common misunderstanding of the is-ought problem – confusing a description of the way the world is with an endorsement of the way the world is.
I stand with the detractors though. Data ought to be shared. But, as I have learnt painfully, there is a way to ask for it. You must endear yourself to the researchers – a few complimentary emails, followed by a chance encounter at a national meeting where you prostrate yourself in reverence before the PI. Parasites must be charming.
It takes a lot of effort to generate data in the biomedical sciences. To expect researchers to surrender their data for the greater good is fuzzy, and lamentably boring, adolescent naivety. If we do not recognize the self-interest of researchers, data socialism, like other forms of socialism, is condemned to failure. This is what I think Longo and Drazen are warning us about in their editorial.
Saurabh Jha is a radiologist based in Philadelphia and a contributing editor for THCB.