This is a request for help finding people who have had bad experiences with online health resources.
Let me first say that the internet is often a positive force in people’s lives.
My own organization’s research can paint a rather rosy picture: teens are mostly kind to each other online, technology users have more friends than those who stay offline, more people are online than ever before, etc.
But there is another side to the story.
Pew Internet has also documented that, among other groups, people living with disability and those living with chronic health conditions are disproportionately offline. Some people have only dial-up or intermittent access, like at the library or a friend’s house, and therefore miss out on important conversations or information.
The internet can also transmit false or misleading information. A 2010 survey found that 3% of all U.S. adults said they or someone they know has been harmed by following medical advice or health information found online (1% minor, 1% moderate, and 1% serious harm). Thirty percent of adults reported being helped.
There are emotional pitfalls online, too. A 2006 Pew Internet survey found that 10% of people seeking health information online said they felt frightened by the serious or graphic nature of what they found online during their last search.
Amplifying that dark side, a November 2008 Microsoft study found that “Web search engines have the potential to escalate medical concerns.” Now, I have my quibbles with that study, but I think it points to an important truth: online health information can be scary. You can’t unsee some pictures. You can’t unread some blog posts. You can’t get back that night of sleep you lost worrying, searching, wondering what’s going to happen to you, your child, your partner, your parent.
Research — Pew Internet’s and others’ — suggests that, at times, people are right to worry, to ask the scary question, and to post frightening stories. But sometimes the pain resolves on its own, the fever subsides, or the injury heals perfectly.
I would love to hear from people who fit into the small groups listed above — those who don’t go online (or who have intermittent access), the 3% who feel online health research brought harm, the 10% who were frightened, perhaps unnecessarily. This is an aspect of online life that isn’t yet fully understood, so I’m hoping to learn from people who have lived through it. My promise is to then tell your story — with or without your name attached, your choice — as part of my ongoing mission to help people understand what’s really going on with the internet and health care.
If you’d like, you can post your story in the comments. If you’ve already written about this somewhere else, just post a link. Alternatively, you can email me privately: sfox at pewinternet.org. If you know someone who is offline, please let them know that I am happy to talk on the phone: 202 419 4511. Whatever mode works — I’m listening.
Susannah Fox is the associate director of the Pew Internet & American Life Project. She blogs at susannahfox.com.
I’m glad to see that you + Pew are taking on this topic. We conducted an in-depth study this year of people’s perceptions of content credibility–both overall and for health. Why? Because so much research + decision-making happens online. And, the content ecosystem is changing. Any organization or individual can publish health content online.
I’m sharing a few links to articles and summaries based on our research. If you give me an email address, I’ll be happy to send a complimentary copy of the full report.
ACM Interactions: Will content credibility problems flatline health innovation?
Contents Magazine: (Re)consider the source
About the Study
My email is sfox at pewinternet dot org and I’d love to get a copy of that study.
And not to mention the healthcare implications of this little bombshell ..
Is That Review a Fake? / NYT
In a Race to Out-Rave, 5-Star Web Reviews Go for $5
By DAVID STREITFELD
As online retailers increasingly depend on reviews as a sales tool, an industry of fibbers and promoters has sprung up to buy and sell raves for a pittance.
“For $5, I will submit two great reviews for your business,” offered one entrepreneur on the help-for-hire site Fiverr, one of a multitude of similar pitches. On another forum, Digital Point, a poster wrote, “I will pay for positive feedback on TripAdvisor.” A Craigslist post proposed this: “If you have an active Yelp account and would like to make very easy money please respond.”
The boundless demand for positive reviews has made the review system an arms race of sorts. As more five-star reviews are handed out, even more five-star reviews are needed. Few want to risk being left behind.
Sandra Parker, a freelance writer who was hired by a review factory this spring to pump out Amazon reviews for $10 each, said her instructions were simple. “We were not asked to provide a five-star review, but would be asked to turn down an assignment if we could not give one,” said Ms. Parker, whose brief notices for a dozen memoirs are stuffed with superlatives like “a must-read” and “a lifetime’s worth of wisdom.”
Determining the number of fake reviews on the Web is difficult. But it is enough of a problem to attract a team of Cornell researchers, who recently published a paper about creating a computer algorithm for detecting fake reviewers. They were instantly approached by a dozen companies, including Amazon, Hilton, TripAdvisor and several specialist travel sites, all of which have a strong interest in limiting the spread of bogus reviews.
“The whole system falls apart if made-up reviews are given the same weight as honest ones,” said one of the researchers, Myle Ott. Among those seeking out Mr. Ott, a 22-year-old Ph.D. candidate in computer science, after the study was published was Google, which asked for his résumé, he said.
Linchi Kwok, an assistant professor at Syracuse University who is researching social media and the hospitality industry, explained that as Internet shopping has become more “social,” with customer reviews an essential part of the sales pitch, marketers are realizing they must watch over those opinions as much as they manage any other marketing campaign.
“Everyone’s trying to do something to make themselves look better,” he said. “Some of them, if they cannot generate authentic reviews, may hire somebody to do it.”
Web retailers are aware of the widespread mood of celebration among their reviewers, even if they are reluctant to discuss it. Amazon, like other review sites, says it has a preponderance of positive reviews because of a feedback loop: Products with high-star ratings sell more, so they get more reviews than products with poor ratings.
But they are concerned about the integrity of those reviews. “Any one review could be someone’s best friend, and it’s impossible to tell that in every case,” said Russell Dicker, Amazon’s director of community. “We are continuing to invest in our ability to detect these problems.”
The Cornell researchers tackled what they call deceptive opinion spam by commissioning freelance writers on Mechanical Turk, an Amazon-owned marketplace for workers, to produce 400 positive but fake reviews of Chicago hotels. Then they mixed in 400 positive TripAdvisor reviews that they believed were genuine, and asked three human judges to tell them apart. They could not.
“We evolved over 60,000 years by talking to each other face to face,” said Jeffrey T. Hancock, a Cornell professor of communication and information science who worked on the project. “Now we’re communicating in these virtual ways. It feels like it is much harder to pick up clues about deception.”
So the team developed an algorithm to distinguish fake from real, which worked about 90 percent of the time. The fakes tended to be narratives recounting an experience at the hotel in a pile of superlatives, but they were weak on description. Naturally: the writers had never been there. Instead, they talked about why they were in Chicago. They also used words like “I” and “me” more frequently, as if to underline their own credibility.
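To make those cues concrete, here is a toy sketch of how such signals might be scored. This is not the Cornell model (which used machine-learned text classification); the word lists and the threshold are invented for illustration only.

```python
# Toy illustration of the deception cues described above (NOT the Cornell
# classifier): fake reviews reportedly over-use first-person pronouns and
# superlatives, while genuine reviews contain more concrete spatial detail.
# All three word lists and the scoring rule are invented for this sketch.
import re

FIRST_PERSON = {"i", "me", "my", "myself", "we", "us", "our"}
SUPERLATIVES = {"amazing", "best", "incredible", "perfect", "wonderful", "fantastic"}
SPATIAL = {"bathroom", "lobby", "floor", "room", "elevator", "window", "bed"}

def deception_cues(review: str) -> dict:
    """Return each cue class as a fraction of the review's tokens."""
    tokens = re.findall(r"[a-z']+", review.lower())
    n = max(len(tokens), 1)
    return {
        "first_person_rate": sum(t in FIRST_PERSON for t in tokens) / n,
        "superlative_rate": sum(t in SUPERLATIVES for t in tokens) / n,
        "spatial_rate": sum(t in SPATIAL for t in tokens) / n,
    }

def looks_deceptive(review: str) -> bool:
    """Crude heuristic: heavy self-reference and hype, little concrete detail."""
    c = deception_cues(review)
    return (c["first_person_rate"] + c["superlative_rate"]) > 2 * c["spatial_rate"]

fake = "I had the best, most amazing stay of my life! My trip to Chicago was perfect."
real = "The room was clean, the bed comfortable, and the lobby elevator was fast."
print(looks_deceptive(fake), looks_deceptive(real))  # → True False
```

A real detector would learn weights for thousands of n-gram features from labeled examples, which is how the researchers reached roughly 90 percent accuracy; this sketch only shows why first-person pronouns and missing spatial detail are usable signals at all.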
How far a business can go to get a good review is a blurry line. A high-end English hotel, The Cove in Cornwall, was recently accused in the British media of soliciting guests to post an “honest but positive review” on TripAdvisor in exchange for a future discount of 10 percent. Nearly all the recent reviews of the Cove are glowing except for the one headlined, “Sadly let down by overhyped reviews.”
The hotel said it was a loyalty scheme that was being misconstrued. TripAdvisor, though, posted a warning about the Cove’s favorable notices on its page for the hotel. The site declined to say how often it has had to post such caveats.
Founded 11 years ago, TripAdvisor never expected to see so many positive reviews. “We were worried it was going to be a gripe site,” said the chief executive, Stephen Kaufer. “Who the heck would bother to write a review except to complain?” Instead, the average of the 50 million reviews is 3.7 stars out of five, bordering on exceptional but typical of review sites.
Negative reviews also abound on the Web; they are often posted on restaurant and hotel sites by business rivals. But as Trevor J. Pinch, a sociologist at Cornell who has just published a study of Amazon reviewers, said, “There is definitely a bias toward positive comments.”
Mr. Pinch’s interviews with more than a hundred of Amazon’s highest-ranked reviewers found that only a few ever wrote anything critical. As one reviewer put it, “I prefer to praise the ones I love, not damn the ones I did not!”
The fact that just about all the top reviewers in his study said they got free books and other material from publishers and others soliciting good notices may have also had something to do with it.
Even if you get a failing grade or two, all is not lost. Dot-coms like Main Street Hub manage the reputations of small businesses for a fixed fee.
“A courteous response to a negative review can persuade the reviewer to change their reviews from two to three or four stars,” said Main Street’s chief executive, Andrew Allison. “That’s one of the highest victories a local business can aspire to with respect to their critics.”
The result, he said: “It’s like Lake Wobegon. Everyone is above average.”
I get the value of the research Pew is doing here. But the numbers you’re reporting seem a little bit (maybe horribly) skewed.
Only ten percent report being frightened?
That suggests to me that you’re not talking to the right people or asking the wrong questions. (To your credit, you are now attempting to find the right people. So points for that.) Define frightened. Do you mean moderately or seriously freaked out? Do you mean vague feelings of discomfort? Do you mean panic attacks and obsessive web surfing?
Only three percent report being harmed? Ouch. Sorry.
Something is clearly seriously wrong with the way you’re framing the question.
What you mean to say is that three percent BELIEVE that information they found online caused them harm, which suggests that a lot of people using the Internet to look up this kind of information believe what they’re finding no matter how ridiculous or obviously self-serving it is: leeches are a great cure for vapors, immunizations for your kiddo are the root of all evil, the best cure for your cancer is an innovative herb-based diet and a trip to Palm Springs, surgery is better than aggressive chemo (or vice versa), your blood pressure and heart function will be fine if you eat a cheeseburger a day and follow the steps outlined in Dr. Smart Bottle’s home health newsletter, etc.
Let’s ask new questions.
Thanks, John. I welcome the skeptical questions! Pew Internet’s role is often to put numbers on what people *think* is happening — sometimes it matches perception, sometimes it doesn’t. And we are always looking to improve.
The question wording may provide a clue for the “frightening” data point — we focused on the LAST time someone did a health-related search online (not “have you ever felt…”). In that way we are able to get the respondent to focus on a specific moment and capture a portrait of a “typical” search. We do the same thing when we ask if the respondent did a certain activity “yesterday.”
For example, here’s our list of activities and what percentage of internet users have ever done them:
And the “typical day” list, for comparison:
As for the low percentage reporting harm from online health info, it’s been consistently 2-3% since we started asking about it 10 years ago.
I think my wording in the post is clear that it’s 3% who “said” that — and clicking through to the report supports that. We don’t make any claims about what exactly happened, good or bad, or whether the respondents’ beliefs are off-base.
I will add, though, that I’ve looked for medical journal articles documenting harm coming from online health info. I’ve found some — few, really — but I’d love to build my collection if anyone knows of a good lit review.
Also, if you want to dig in, here’s the full data set for the September 2010 survey:
We are currently in the field with the 2012 survey. We didn’t repeat the helped/harmed question, mostly because we felt it had outlived its usefulness as a measure. I thought it was still worth highlighting in this post, but it is riding off into the sunset soon.
I developed a possible cancer. I got the call by voicemail from my doctor on a Wednesday night but didn’t actually pick up the message until Friday night when it was too late to reach the doctor’s office. Between Friday night and Monday morning, I researched enough on the internet to terrify me beyond words.
I’m not a lay person. I should know enough to put context to the volumes of information and I do. But there were too many unknowns and so many people’s horror stories and awful possible outcomes. I ended up taking an alternative therapy that I read about that had a lot of supporting data. I found that information on a patient blog and I believe it was a good decision. That being said, I ended up having a complication that I believe was directly related to it. In the end, I could live with the trade off if it helped. I have no idea if the alternative medication did or did not but I appreciated the level of rigor in the patient blog and hearing other people’s stories, though probably unnecessarily terrifying, did give me a feeling of not being so alone.
I eventually saw my doctor and I continue to follow up with him, with some trepidation and fear at each visit. I gained a lot from the information I found, but I was surprised that even for a person in healthcare, it was hard for me to sort through what was appropriate and/or relevant for me. I have no doubt that this kind of mass access to information helps a lot of people, emotionally as well as in terms of specific and directed therapy (not to mention possible connection to research trials, etc). But I also have no doubt that it sends many into a panic or leads them to less than ideal or expected outcomes.
Sometimes I wish I could field a survey question along the lines of “Would you rather know, or not know?” But of course the answer would be, “It depends.” And even if you answer yes, you might take it back later, when you are terrified beyond words. I was so sorry to hear that part of your story.
Maybe we can explore your point about sorting through what is relevant or appropriate — what strategies do people employ? I’m sure other research has been done on this — if anyone knows of some, please share.
The faulty information issue is a huge problem with no clear answer.
For many pharma companies and providers, the temptation to use the Internet and, increasingly, social media to spin products and services is proving irresistible. Sophisticated users know the warning signs. Novices? Not so much. Farmed content. Salesy language. A lack of contact information. Suspect endorsements. An obvious commercial pitch.
Let’s put it this way. I run into this stuff each and every time I go online to research a health problem. It’s cyberkudzu. Is the answer to look only at supposedly “credible” sources like Cleveland Clinic or Mayo?
I don’t think so. Too much valuable information is out there, living outside the current system. Alternative therapies. First hand experiences. I think the answer is to give people tools to help evaluate information.
A registry for suspect information might not be a bad idea. As might some attempt at regulation …
“Cyberkudzu” resonates with me. Early in our research at Pew Internet (like, 2001) we asked people how they judge the credibility of online health information. One answer: if the advice appears on more than one site, that increases its credibility. Of course that was just the beginning of the age of syndicated content — it’s everywhere now.