In 2007 and 2008, the work of Nicholas Christakis and James Fowler revealed that human behaviors, and even states of mind, spread through social networks much like infectious diseases.
Or put another way, both obesity and happiness worm their way into connected communities just like the latest internet meme, the best Charlie Sheen rumors, or the workplace gossip about Johnny falling down piss-drunk at the company’s holiday party.
But incorrect medical facts may be no different, galloping from person to person even within the confines of the revered peer-reviewed scientific literature. By tracing how studies cite facts about the incubation periods of certain viruses, a new study in PLoS ONE has found that, quite often, data assumed to be medical fact isn’t based on evidence at all.
How many glasses of water are we supposed to drink each day? Eight – everyone knows it’s eight. But according to researchers from the schools of Public Health and Medicine at Johns Hopkins University, this has never been proven true. In fact, they argue there’s not one single piece of data that supports this claim.
Digging a little deeper, the research team combed through scientific papers looking for places where researchers quoted the incubation period of a virus, from influenza to measles. Every time a claim was made, they traced the network of citations back to the original data source (and provided a cool visualization of the path, to boot). For example, many studies will set the stage for their own research by saying it’s commonly known that the incubation period for influenza is 1-4 days, and next to that statement they’ll put a small reference in parentheses, signaling where they obtained that information.
The problem is, many articles cited another study, which cited another study, which in turn cited yet another – you get the picture. It’s like a really bad version of the “telephone game” played by kids. And 50% of the time, the researchers found no original source of incubation period data when they backtracked. Scary stuff.
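To make that backtracking concrete, here’s a minimal sketch of the idea – my own illustration, not the authors’ code, and the papers in it are entirely hypothetical. Treat the literature as a chain in which each paper points to the reference it cites for the claim, then follow the chain until it either reaches original data or dead-ends:

```python
# Minimal sketch of tracing an incubation-period claim back to original data.
# Illustration only: the papers and flags below are hypothetical, and this is
# not the method or code from the PLoS ONE study.

# Each paper maps to the reference it cites for the claim (None = no citation).
cites = {
    "Review 2010": "Study 2005",
    "Study 2005": "Study 1998",
    "Study 1998": "Textbook 1987",
    "Textbook 1987": None,  # repeats the claim but cites no source
}

# Whether each paper actually reports original incubation-period measurements.
reports_original_data = {
    "Review 2010": False,
    "Study 2005": False,
    "Study 1998": False,
    "Textbook 1987": False,
}

def trace_to_source(paper):
    """Follow citation links until we hit original data or a dead end."""
    seen = set()
    while paper is not None and paper not in seen:
        seen.add(paper)  # guard against circular citation loops
        if reports_original_data.get(paper, False):
            return paper  # found the primary data source
        paper = cites.get(paper)
    return None  # the chain ended without ever reaching original data

print(trace_to_source("Review 2010") or "No original data source found")
# -> No original data source found
```

When the chain bottoms out like this, the “commonly known” incubation period rests on no data at all – which, per the study, happened about half the time.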
Factoring in review articles, which are supposed to provide a comprehensive analysis of a field of research, the team found that 65% of viral incubation data never gets cited again after its first publication. 65%! Granted, review articles have to weigh the quality of the research done in individual experiments. So is that much crappy research being done, or is the majority of science in this particular arena simply falling into the growing chasm of “dark data”?
I’ve been chewing on this article for a while, waiting for the right time to write something about it. Today, a tweet by Nieman Lab caught my attention and spurred me into action.
The tweet pointed to a post on Doc Searls’ blog asking media outlets to do a better job of linking to original sources. (Like Searls, I get super-frustrated with the NYT when they don’t link to a source, or when I click the underlined blue text expecting to be enlightened by profound insight, only to find I’ve been swept away to some vaguely related post authored by another NYT staffer.)
Time to add scientists to your list of offenders, Doc.
Photo via Flickr / Dan Zen
Citation: Reich NG, Perl TM, Cummings DAT, Lessler J, 2011 Visualizing Clinical Evidence: Citation Networks for the Incubation Periods of Respiratory Viral Infections. PLoS ONE 6(4): e19496. doi:10.1371/journal.pone.0019496
** Update, 18 May 2011: The statistics cited in this post (50% of original data not traced back to source, 65% of studies never cited again) apply, in this case, to viral incubation data only. The authors didn’t extrapolate these findings to other medical claims. I updated the statements above to make this explicitly clear. -bjm
Brian Mossop is a freelance science writer and the Community Manager of the Public Library of Science (PLoS). He has a Ph.D. in biomedical engineering and postdoctoral training in neuroscience. He has written for Wired, Scientific American MIND, Slate, and elsewhere.
This post first appeared at Thomas Goetz’s The Decision Tree.
Medical folk keep referring to risk in terms of relative risk as opposed to absolute risk when talking about the data supporting whatever drug, product or procedure they want you to use. I can’t get the newspaper to say which kind of risk a percentage refers to: relative or absolute.

Example: I was told to take a statin drug because there was a 25% lower risk that I would have a stroke with the drug than without it, even though the drug was causing me permanent nerve damage. I looked up the original papers. It turned out that with the drug you had a 3% chance of getting a stroke and without the drug you had a 4% chance. They called that a 25% (relative) risk reduction. I call it a 1% (absolute) risk reduction.

Since the drug was causing me damage, and I had read papers from other countries which did not recommend the drug to people like me in whom it caused peripheral nerve damage, I refused to take it – not without lamentation from doctors and nurses who were angry that I was not following “procedure.”
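The commenter’s numbers make the distinction easy to check. Here’s the arithmetic spelled out, using the 3% and 4% figures quoted above (the number-needed-to-treat line is my addition, a standard companion measure not mentioned in the comment):

```python
# Worked version of the statin example from the comment above.
risk_without_drug = 0.04  # 4% chance of stroke without the drug
risk_with_drug = 0.03     # 3% chance of stroke with the drug

# Absolute risk reduction: the plain difference in risk.
arr = risk_without_drug - risk_with_drug   # 0.01 -> 1 percentage point

# Relative risk reduction: that difference as a share of the baseline risk.
rrr = arr / risk_without_drug              # 0.25 -> 25%

# Number needed to treat: patients treated to prevent one stroke (1 / ARR).
nnt = 1 / arr                              # 100

print(f"Absolute risk reduction: {arr:.0%}")   # 1%
print(f"Relative risk reduction: {rrr:.0%}")   # 25%
print(f"Number needed to treat:  {nnt:.0f}")   # 100
```

Both numbers are arithmetically true; they just answer different questions, which is exactly why the commenter wants newspapers to say which one they’re quoting.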
Eating fruit on an empty stomach helps with fighting cancer. It is best to eat fruit on an empty stomach.
Reminds me not to take for granted that responses to my questions are all going to be based on evidence. Despite that, I think it’s still imperative to ask our providers lots of questions during our appointments. I found this helpful: http://tinyurl.com/4odprtz
As a health blogger I’ve been guilty of not linking to the research I mention, mainly for the following reasons:
As the previous comment mentions, some sites have a policy of internal linking only.
If no full text is available then the abstract is often not very helpful to the average reader.
I sometimes leave just enough clues for the reader: “a paper in this month’s XYZ journal by Dr. Smith from Harvard”… unless the paper is a year or two old and linking to it makes the story seem dated.
I write blog posts in a super hurry and sometimes forget, or don’t have time, to figure out an elegant way to cite or mention where the information came from (my bad).
Also, self-citation becomes a way of making it seem like there’s evidence where there may be none, lending additional weight to an advocacy effort leveraging medical publications for greater influence. This seems increasingly common: publish something as inane as a commentary or letter to the editor in the scientific literature, then continuously cite it in other publications, creating a pyramid of evidence that in actuality doesn’t really exist.
I, too, share your frustration at the mainstream media’s unwillingness to link to the research they cite. In fact, they often don’t even give you the name of the lead author or title of the article. How often have we read something like: “Researchers at Harvard Medical School have determined that ….”?
I suspect that this is an editorial decision driven by two considerations. First, if they linked to the scholarly article, there is a likelihood that the reader would not return to finish the newspaper article. (Even if the scholarly article is gated, the abstract will likely tell you more than the newspaper article does.) That would frustrate advertisers. Second, there is the risk that the reader will read the scholarly article and come back to the newspaper article to write a comment debunking something the reporter wrote.