“False Hopes, Unwarranted Fears: The Trouble with Medical News Stories.” If you find the headline alarming, you should read the editorial, published just last week in PLoS Medicine. There, the journal’s editors summarize what the Health News Review has discovered over the past two years while evaluating medical stories about new products and procedures throughout the mainstream media.
“It’s not a pretty picture,” says Gary Schwitzer, the University of Minnesota School of Journalism professor who publishes the online project.
In a video linked to the Health News Review website, Schwitzer
points out that “about 65% of the time” major news organizations are not
telling viewers and readers how “big the potential harms” of new
treatments are–or “how small the potential benefits.”
Meanwhile, about three-quarters of the stories about a new product
or procedure fail to talk about how much the idea costs. “At a time
when the U.S. is spending 16 percent of GDP on healthcare, I find this
unfathomable,” says Schwitzer. “No one is asking: ‘How are we going to
pay for it?’; ‘Who will have access to these things?’; ‘Who’s to say that we even need some of these things?’ This is what we need to discuss.”
Ultimately, “these stories are painting a ‘kid in the candy-store’
picture of US health care,” Schwitzer charges, “whereby everything is
made to look terrific, risk-free, and without a price tag. Nothing
could be further from the truth.”
Health News Review is supported by a grant from the nonprofit
Foundation for Informed Decision Making, which was founded in 1989 by
Dr. Jack Wennberg and colleagues. Its mission is to assure “that people
understand their choices and have the information they need to make
sound decisions affecting their health and wellbeing.”
But rather than helping people understand that they have choices,
news stories trumpeting a new product often fail to compare it to
existing alternatives. Schwitzer explains: “We expect that a story
would put the new approach being discussed into the context of
alternatives, with some discussion of the possible advantages or
disadvantages of the new approach compared with” treatments already on
the market.
Instead, says Schwitzer, good-news stories about “medical
breakthroughs” are “feeding people” who believe there is “a pill for
every ill, creating unrealistic expectations and undue demand for
unproven ideas. This may help explain why we are spending 16 percent of
GDP on healthcare–and not getting the value for our dollars.”
In addition to the editorial, the May issue of PLoS Medicine includes an article by Schwitzer detailing the shortcomings of the 500 medical stories that Health News Review has evaluated over the past two years: stories published in the top 50 U.S. newspapers and the three major newsweeklies, carried on the Associated Press’ wires, and aired on morning and evening news at ABC, NBC and CBS.
Too often, reporters rely on sources that have an axe to grind: “Of
170 stories that cited an expert or a scientific study,” Schwitzer observes, “85 (50%) cited at least one with a financial tie to the
manufacturer of the drug, a tie that was disclosed in only 33 of the 85
stories.”
For example, a story that ran on ABC World News in April of 2007
heralding a new test for prostate cancer “did not disclose what was
abundantly clear even in a Johns Hopkins news release: the principal
investigator receives a share of the royalties received on sales of the
test. He is also a paid consultant to the manufacturer of the test.
There were no quotes from anyone expressing skepticism about the
development.”
Stories that hype hope can also spread fear. The reviewers, who gave
the ABC piece a “2” on a 10-point scale, criticized it for leading with
a dramatic graphic that stated: “Prostate cancer in the U.S.: 1.6 million men undergo prostate biopsies each year.”
“That graphic, setting the stage for the story, can be misleading
and confusing to viewers,” the reviewers noted. “It could easily be
inferred that 1.6 million men each year develop prostate cancer. And
therefore we rate it as disease-mongering. The American Cancer Society
estimates that during 2007 about 218,890 new cases of prostate cancer
will be diagnosed in the United States – a number not provided in the
story… it [also] would have been helpful to simply show the number of
men diagnosed and the number of men who actually die to…help men to
understand that this cancer isn’t always a killer.” Finally, “at the
least, the story could have included one line saying that screening is
controversial regardless of method chosen, because it isn’t yet clear
if treatment saves lives.”
But it isn’t just television news that falls short by relying too
heavily on sources who have a financial interest in the product.
Top-tier newspapers fall into the same trap. A 2006 New York Times
story headlined “Drug Doubles Endurance” also received a “2” from
reviewers, in part because it failed to provide “more sources
[expressing] healthy skepticism to balance the overwhelming enthusiasm
from other sources, several of whom had ties to the drug companies
promoting the substance.” Then, too, the story failed to note that “there
is an important difference between the results from a few research
studies in animals and demonstration of efficacy in people.” (The old
mice vs. men problem.)
Who Does the Reviews? How Do the Reporters Respond?
Health News Review uses a team of reviewers from around the country.
“Some have a Masters in public health; some are RNs; some are MDs from
places like Duke, UCSF, Harvard and Dartmouth,” Schwitzer explains.
“There are people trained in evidence-based medicine. In a sense we are
trying to promote evidence-based health journalism.”
Three reviewers analyze each article. (All reviewers are listed
online). As the publisher of the project, Schwitzer is always the third
reviewer of each story. “I’ll mediate any disagreements between the
first two reviewers,” he explains, “gaining consensus before publishing
the final review.”
The rating instrument includes ten criteria, also used by similar websites in Australia and Canada. All of the criteria, which range from “Adequately explains and quantifies potential harms” to “Compares the new idea with existing alternatives,” are addressed in the Association of Health Care Journalists’ “Statement of Principles.” For each of the ten criteria, the story is given a rating of “satisfactory,” “unsatisfactory,” or “not applicable.”
The goal of the exercise is “not media-bashing,” says Schwitzer.
“It’s outreach. When we evaluate a health news story, we e-mail the
evaluation to the journalist who wrote that story. We’re saying: ‘Come
see how we have reviewed your story; learn from it, engage us—and the
public—in a discussion of where things could be done better.’”
“And their responses have been overwhelmingly positive,” he reports.
“It’s quite sobering to read the reviews,” wrote one journalist. “I
imagine you’ve heard all the laments from reporters, but the lack of
both space and research time is enormously frustrating (and will
probably drive me out of journalism in the end).”
Cutbacks at many newspapers plus a lack of training also make it
difficult for journalists to do the job that they would like to do. One
week they’re reporting on crime; the next week they’re covering cancer.
Yet the public does not understand that, even at our leading
newspapers, a reporter may be writing about something that he or she
does not fully understand.
Editors and publishers also can get in the way of telling the true
story. As Schwitzer observes: “Reporters and writers have been
receptive to the feedback; editors and managers must be reached if
change is to occur.”
As the PLoS editorial points out: “There is also a broader context
in which medical stories get exaggerated—the 24-hour news cycle means
that media organizations are battling for audience share, which in turn
means that the press has moved towards sensationalism, entertainment,
and opinion. Headlines are often written by news editors, rather than
the article’s reporter, and are particularly prone to exaggeration. All
of this sensationalism strays far from the reality of biomedical
research, a slow process that yields small, incremental results based
on long-term studies that always have weaknesses.”
I know, from experience, that publishers and editors are sometimes
more concerned about ratings and circulation than they are about the
facts. While working as a journalist, I was told on more than one
occasion: “Our readers don’t like negative stories. They want to hear
good news.”
Headlines about medical miracles sell newspapers. Articles that
explain that the breakthrough fizzled do not. Unless the bad news is
truly sensational (“400 women felled by Botox treatments in L.A.”), readers and viewers may not be terribly interested in tales of side-effects, risks, and complications. Nevertheless, while some editors worry about what their customers “want to hear,” good
journalists know that it’s their job to inform people—to tell them what
they “need to know.”
Schwitzer’s project should open up some much-needed dialogue about that difference, especially when the topic is so important.
Thanks, Tom. That sounds interesting. I put it in my file.
Another good resource, with a slightly different slant to that of the Health News Review, is the Behind The Headlines service provided by the NHS on the new nhs.uk website.
See: http://www.nhs.uk/News/Pages/NewsIndex.aspx
Every weekday, the research that underpins healthcare stories in the media is retrieved, appraised, and posted by noon to provide an objective (or as close as you can get) POV.
The Guardian recently praised the service here: http://www.guardian.co.uk/media/2008/jun/02/pressandpublishing.healthandwellbeing
Add it to your bookmarks.