Tag: Studies

The Problem Is Relative


Numerous studies have shown that the general public has exaggerated perceptions of the health risks they face — as well as exaggerated expectations of the benefit of medical care.

Is it because they’re stupid? No. Instead, the problem relates to how various sources of health information — researchers, doctors, reporters, web designers, advertisers, etc. — frequently frame their messages: using relative change.

“Forty percent higher” and “50 percent lower” are statements of relative change. While they are easy to understand, they are also incomplete. Relative change can dramatically exaggerate the underlying effect. It’s a great way to scare people.

For example, research earlier this year found that women with migraines had a 40 percent higher chance of developing multiple sclerosis. That sounds scary.

But the researchers were careful to add some important context: Multiple sclerosis is a rare disease. In fact, for women with migraines, the chance of developing multiple sclerosis over 15 years was considerably less than 1 in 100 — only 0.47 percent. To be sure, that is about 40 percent higher than the analogous risk for women without migraines — 0.32 percent — but it’s a lot less scary. More importantly, it’s a much more complete piece of information.

What makes it more complete is the context of two additional numbers: the risk of developing multiple sclerosis in women with and without migraines. Epidemiologists call these “absolute risks.” You and I might call them the real numbers.
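The two framings can be sketched in a few lines of code using the figures reported above. This is purely illustrative arithmetic, not part of the study; note that the exact relative increase works out to roughly 47 percent, which the article rounds to “about 40 percent.”

```python
# 15-year risk of multiple sclerosis, as reported in the study above.
risk_with_migraine = 0.0047     # 0.47 percent
risk_without_migraine = 0.0032  # 0.32 percent

# Relative change: the headline-friendly framing.
relative_increase = (risk_with_migraine - risk_without_migraine) / risk_without_migraine

# Absolute change: the "real numbers" framing.
absolute_increase = risk_with_migraine - risk_without_migraine

print(f"Relative increase: {relative_increase:.0%}")                 # 47%
print(f"Absolute increase: {absolute_increase:.2%} of women")        # 0.15% of women
```

The same 0.15-percentage-point difference sounds alarming when expressed as a relative change and reassuringly small when expressed as an absolute one.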

Relative change also exaggerates effects in the other direction. It’s a great way to make people believe there has been a real medical breakthrough.

A few years ago a study of a cholesterol-lowering statin drug was hailed for big reductions in heart attacks in people with so-called healthy cholesterol levels. The drug led to about a 50 percent reduction in the risk of heart attack. That sounds like a breakthrough.

But the absolute risks — the real numbers — tell a different story. Why? Because in people with healthy cholesterol levels, heart attacks are rare. To get that context, you need the two additional numbers: the risk of heart attack in people taking and not taking the drug.

For people taking the drug, the chance of having a heart attack over five years was less than 1 percent. To be sure, that is about 50 percent lower than the analogous risk for those not taking the drug — less than 2 percent — but it sounds a lot less like a breakthrough.

These absolute risks suggest that 100 apparently healthy people have to take the medication for five years for one to avoid a heart attack. And it’s not even clear from the research — or the federal registry of clinical trials — what kind of heart attack: the kind that patients experience (the bad kind) or the kind that is diagnosed by detecting less than a billionth of a gram of a protein in the blood (the not-so-important kind). Add in all the hassle factors of being on another drug (filling scripts, blood tests, insurance forms) and the legitimate concerns about side effects, and the use of relative change might now strike you as more than a little misleading.
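The “number needed to treat” arithmetic behind that claim can be sketched as follows. This is a minimal illustration using round numbers consistent with the figures above (the article says only “less than 2 percent” and “less than 1 percent,” so 2 and 1 percent are assumptions for the sake of the example):

```python
# Assumed five-year heart-attack risks, rounded from "less than 2 percent"
# and "less than 1 percent" in the text above.
risk_without_drug = 0.02
risk_with_drug = 0.01

# Relative risk reduction: the "breakthrough" framing.
relative_reduction = (risk_without_drug - risk_with_drug) / risk_without_drug

# Absolute risk reduction, and the number needed to treat (NNT = 1 / ARR).
absolute_reduction = risk_without_drug - risk_with_drug
number_needed_to_treat = 1 / absolute_reduction

print(f"Relative risk reduction: {relative_reduction:.0%}")       # 50%
print(f"Absolute risk reduction: {absolute_reduction:.1%}")       # 1.0%
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")   # 100
```

A 50 percent relative reduction and a 1-percentage-point absolute reduction describe the same result; only the second tells you that 99 of the 100 people treated get no heart-attack benefit.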

Whatever the finding — harm or benefit — relative change exaggerates it.

Upon learning this, one of my students likened relative change to funhouse mirrors. If you are thin, there is a mirror that can make you look too thin; if you are heavy, there is a mirror that can make you look too heavy.

In the case of relative change, it all happens in the same mirror. It provides a potent combo to promote medical care: exaggerated perceptions of risk and exaggerated perceptions of benefit. Can you imagine a more powerful marketing strategy?

Relative change is not the only culprit in misleading health information, but it is an important one. The good news is that more and more researchers, reporters and editors are on to this game. The bad news is that there is an awful lot of information to police and sometimes it can be hard to even find the real numbers.

That’s where a skeptical, numerate public comes in — one that knows to ask for the real numbers. And, if they can’t be found, one that knows to move on.

H. Gilbert Welch is a professor of medicine at the Dartmouth Institute for Health Policy and Clinical Practice. He is the coauthor of Overdiagnosed: Making People Sick in the Pursuit of Health. This post originally appeared on The Huffington Post.

Inside the New Data on ADHD Diagnosis Rates

The New York Times had a cover story recently reporting on the estimated prevalence of Attention-Deficit/Hyperactivity Disorder from the 2011-2012 National Survey of Children’s Health (they don’t identify the survey by name).

The story is going to get a lot of people interested in what is happening to children — every new datapoint on ADHD is noteworthy because it allows journalists to reopen the black box on childhood behavioral health disorders, and to raise the perennial alarm bells about over-diagnosis of children.

All of the issues raised in the article are valid. Many children with very mild impairments are getting a diagnosis, and enterprising drug companies are increasing demand for their product by implying that ADHD medications are a cure for generalized social impairments.

But — and this is critical — we have little systematic population-level data to compare the reported prevalence of a diagnosis with underlying data on ADHD symptoms in children.

Less Research Is Needed

The most over-used and under-analyzed statement in the academic vocabulary is surely “more research is needed”.

These four words, occasionally justified when they appear as the last sentence in a Master’s dissertation, are as often to be found as the coda for a mega-trial that consumed the lion’s share of a national research budget, or that of a Cochrane review which began with dozens or even hundreds of primary studies and progressively excluded most of them on the grounds that they were “methodologically flawed”.

Yet however large the trial or however comprehensive the review, the answer always seems to lie just around the next empirical corner.

With due respect to all those who have used “more research is needed” to sum up months or years of their own work on a topic, this ultimate academic cliché is usually an indicator that serious scholarly thinking on the topic has ceased. It is almost never the only logical conclusion that can be drawn from a set of negative, ambiguous, incomplete or contradictory data.

Recall the classic cartoon sketch from your childhood. Kitty-cat, who seeks to trap little bird Tweety Pie, tries to fly through the air.  After a pregnant mid-air pause reflecting the cartoon laws of physics, he falls to the ground and lies with eyes askew and stars circling round his silly head, to the evident amusement of his prey. But next frame, we see Kitty-cat launching himself into the air from an even greater height.  “More attempts at flight are needed”, he implicitly concludes.


Is There Something Wrong With the Scientific Method?

A recurring theme on this blog is the need for empowered, engaged patients to understand what they read about science. It’s true when researching treatments for one’s condition, it’s true when considering government policy proposals, it’s true when reading advice based on statistics. If you take any journal article at face value, you may be severely misled; you need to think critically.

Sometimes there’s corruption (e.g. the fraudulent vaccine/autism data reported this month, or “Dr. Reuben regrets this happened“), sometimes articles are retracted due to errors (see the new Retraction Watch blog), sometimes scientists simply can’t reproduce a result that looked good in the early trials.

But an article a month ago in the New Yorker sent a chill down my spine tonight. (I wish I could remember which Twitter friend cited it.) It’ll chill you, too, if you believe the scientific method leads to certainty. This sums it up:

Many results that are rigorously proved and accepted start shrinking in later studies.

This is disturbing. The whole idea of science is that once you’ve established a truth, it stays put: you don’t combine hydrogen and oxygen in a particular way and sometimes you get water, and other times chocolate cake.

Reliable findings are how we’re able to shoot a rocket and have it land on the moon, or step on the gas and make a car move (predictably), or flick a switch and turn on the lights. Things that were true yesterday don’t just become untrue. Right??

Bad news: sometimes the most rigorous published findings erode over time. That’s what the New Yorker article is about.

I won’t try to teach here everything in the article; if you want to understand research and certainty, read it. (It’s longish, but great writing.) I’ll just paste in some quotes. All emphasis is added, and my comments are in [brackets].
