In 2007/08, the work of Nicholas Christakis and James Fowler revealed that human behaviors, and even states of mind, spread through social networks much like infectious diseases.

Or put another way, both obesity and happiness worm their way into connected communities just like the latest internet meme, the best Charlie Sheen rumors, or the workplace gossip about Johnny falling down piss-drunk at the company’s holiday party.

But according to a new study in PLoS ONE, incorrect medical facts may be no different, galloping from person to person even within the confines of the revered peer-reviewed scientific literature. By tracing how papers cite facts about the incubation periods of certain viruses, the researchers found that quite often, data assumed to be medical fact isn’t based on evidence at all.

How many glasses of water are we supposed to drink each day? Eight – everyone knows it’s eight. But according to researchers from the schools of Public Health and Medicine at Johns Hopkins University, this has never been proven true. In fact, they argue there’s not one single piece of data that supports this claim.

Digging a little deeper, the research team dove into scientific papers looking for places where researchers quoted the incubation period of different viruses, from influenza to measles. Every time a claim was made, they traced the network of citations back to the original data source (and provided a cool visualization of the path, to boot). For example, many studies will set the stage for their own research by saying that it’s commonly known that the incubation period for influenza is 1-4 days, and next to that statement they’ll put a small reference in parentheses, which signals where they obtained that information.

The problem is, many articles cited another study, which cited another study, which in turn cited yet another – you get the picture. It’s like a really bad version of the “telephone game” played by kids. And 50% of the time, the researchers found no original source of incubation period data when they started backtracking. Scary stuff.
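That backtracking exercise boils down to a graph traversal: start at the paper making the claim, follow its citations, and see whether any chain ends at a paper that actually reports original data. Here is a minimal sketch in Python; the paper names, the citation graph, and the set of "primary" sources are entirely made up for illustration, and this is not the study authors' actual method or code.

```python
# Hypothetical citation graph: each paper maps to the papers it cites
# for a given incubation-period claim. All names are invented.
CITES = {
    "review_2010": ["study_2005"],
    "study_2005": ["textbook_1998"],
    "textbook_1998": [],           # dead end: asserts the claim, cites nothing
    "study_2007": ["cohort_1982"],
    "cohort_1982": [],
}

# Papers known to report original incubation-period measurements.
HAS_ORIGINAL_DATA = {"cohort_1982"}

def traces_to_data(paper, seen=None):
    """Follow citations depth-first; True if any path reaches original data."""
    if seen is None:
        seen = set()
    if paper in HAS_ORIGINAL_DATA:
        return True
    seen.add(paper)  # guard against citation loops
    return any(traces_to_data(p, seen)
               for p in CITES.get(paper, []) if p not in seen)

print(traces_to_data("review_2010"))  # False: the chain dead-ends at a textbook
print(traces_to_data("study_2007"))   # True: it reaches the 1982 cohort study
```

In this toy version, half the claims check out and half evaporate on inspection, which is roughly the picture the study paints for real incubation-period citations.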

By factoring in review articles, which are supposed to be comprehensive analyses of a field of research, the team found that 65% of viral incubation data never gets cited again after its first publication. 65%! Granted, review articles have to factor in the quality of the research done in individual experiments. So is that much crappy research being done, or is the majority of science in this particular arena simply falling into the growing chasm of “dark data”?

I’ve been chewing on this article for a while, waiting for the right time to write something about it. Today, a tweet by Nieman Lab caught my attention, and spurred me into action.

The tweet pointed to a post on Doc Searls’ blog asking media outlets to do a better job of linking to original sources. (Like Searls, I get super-frustrated with the NYT when they either don’t link to a source, or the underlined blue text you click, expecting to be enlightened by profound insight, sweeps you away to some vaguely related post authored by another NYT staffer.)

Time to add scientists to your list of offenders, Doc.

Photo via Flickr / Dan Zen

Citation: Reich NG, Perl TM, Cummings DAT, Lessler J, 2011 Visualizing Clinical Evidence: Citation Networks for the Incubation Periods of Respiratory Viral Infections. PLoS ONE 6(4): e19496. doi:10.1371/journal.pone.0019496

** Update, 18 May 2011: The statistics cited in this post (50% of original data not traced back to source, 65% of studies never cited again) apply, in this case, to viral incubation data only. The authors didn’t extrapolate these findings to other medical claims. I updated the statements above to make this explicitly clear. -bjm

Brian Mossop is a freelance science writer, and the Community Manager of the Public Library of Science (PLoS).  He has a Ph.D in biomedical engineering and postdoctoral training in neuroscience.  He has written for Wired, Scientific American MIND, Slate, and elsewhere.

This post first appeared at Thomas Goetz’s The Decision Tree.


4 Responses for “Fact-checking Medical Claims”

  1. I, too, share your frustration at the mainstream media’s unwillingness to link to the research they cite. In fact, they often don’t even give you the name of the lead author or title of the article. How often have we read something like: “Researchers at Harvard Medical School have determined that ….”?

    I suspect that this is an editorial decision made for two reasons. First, if they linked to the scholarly article, there is a likelihood that the reader will not return to finish the newspaper article. (Even if the scholarly article is gated, the abstract will likely tell you more than the newspaper article does.) That would frustrate advertisers. Second, there is the risk that the reader will read the scholarly article and come back to the newspaper article to write a comment debunking something the reporter wrote.

  2. Ano Lobb says:

    As a health blogger I’ve been guilty of not linking to the research I mention, mainly for the following reasons:
    As the previous comment mentions, some sites have a policy of internal linking only.
    If no full text is available, the abstract is often not very helpful to the average reader.
    I sometimes leave enough clues for the reader: “a paper in this month’s XYZ journal by Dr. Smith from Harvard”… unless the paper is a year or two old and linking to it makes the story seem dated.
    I write blog posts in a super hurry and sometimes forget, or don’t have time, to figure out an elegant way to cite or mention where the research came from (my bad).

    Also, self-citation becomes a way of making it seem like there’s evidence where there may be none, lending additional weight to an advocacy effort that leverages medical publications for greater influence. This seems increasingly common: publish something as inane as a commentary or letter to the editor in the scientific literature, then continuously cite it in other publications, creating a pyramid of evidence that doesn’t really exist.

  3. susan says:

    Reminds me not to take for granted that responses to my questions will all be based on evidence. Despite that, I think it’s still imperative to ask our providers lots of questions during our appointments. I found this helpful: http://tinyurl.com/4odprtz

  4. John B says:

    Eating Fruit on an empty stomach helps with fighting cancer. It is best to eat fruit on an empty stomach.
