Every day, there is another medical study in the news. There’s another newspaper or TV story telling us that X can cure depression or make you thinner or cause autism or whatever. And since it’s a medical study, we usually think that it’s true. Why wouldn’t it be?
But what most people don't realize, let alone really think about, is that there might be other studies showing that X does none of those things, and that some of those studies may never have been published.
Just this week, the journal Pediatrics released an article that perfectly demonstrates this problem. A number of studies have shown that a certain class of medications, selective serotonin reuptake inhibitors (SSRIs), can help stop the repetitive behaviors of autism, like hand-flapping or head-banging. If you were to search the medical literature, as doctors and parents and patients often do, you'd think that using SSRIs is a good idea. But when researchers dug deeper, they found just as many unpublished studies showing that SSRIs didn't help. If those studies had all been published (and they were of publishable quality), that same search of the medical literature would have shown that using SSRIs isn't a good idea.
This is bad. We rely on studies to guide our decisions. What is going on?
The journals that publish articles certainly play a role. After all, it's cooler to publish a study with a grabby headline, one that promises an answer or a cure. That's much more likely to attract readers than a study saying that something doesn't do anything at all. But it turns out that the researchers themselves play a bigger role.