The New York Times recently ran a cover story reporting the estimated prevalence of Attention-Deficit/Hyperactivity Disorder from the 2011-2012 National Survey of Children’s Health (the story doesn’t identify the survey by name).
The story is going to get a lot of people interested in what is happening to children: every new data point on ADHD is noteworthy because it allows journalists to reopen the black box of childhood behavioral health disorders and to sound the perennial alarm about over-diagnosis of children.
All of the issues raised in the article are valid. Many children with very mild impairments are getting a diagnosis, and enterprising drug companies are increasing demand for their product by implying that ADHD medications are a cure for generalized social impairments.
But, and this is critical, we have little systematic population-level data with which to compare the reported prevalence of an ADHD diagnosis against underlying data on ADHD symptoms in children.
The most over-used and under-analyzed statement in the academic vocabulary is surely “more research is needed”.
These four words, occasionally justified when they appear as the last sentence of a Master’s dissertation, are just as often found as the coda to a mega-trial that consumed the lion’s share of a national research budget, or to a Cochrane review that began with dozens or even hundreds of primary studies and progressively excluded most of them on the grounds that they were “methodologically flawed”.
Yet however large the trial or however comprehensive the review, the answer always seems to lie just around the next empirical corner.
With due respect to all those who have used “more research is needed” to sum up months or years of their own work on a topic, this ultimate academic cliché is usually an indicator that serious scholarly thinking on the topic has ceased. It is almost never the only logical conclusion that can be drawn from a set of negative, ambiguous, incomplete or contradictory data.
Recall the classic cartoon sketch from your childhood. Kitty-cat, who seeks to trap little bird Tweety Pie, tries to fly through the air. After a pregnant mid-air pause reflecting the cartoon laws of physics, he falls to the ground and lies with eyes askew and stars circling round his silly head, to the evident amusement of his prey. But in the next frame, we see Kitty-cat launching himself into the air from an even greater height. “More attempts at flight are needed”, he implicitly concludes.
A recurring theme on this blog is the need for empowered, engaged patients to understand what they read about science. It’s true when researching treatments for one’s condition, it’s true when considering government policy proposals, it’s true when reading advice based on statistics. If you take any journal article at face value, you may be severely misled; you need to think critically.
Sometimes there’s corruption (e.g. the fraudulent vaccine/autism data reported this month, or “Dr. Reuben regrets this happened”), sometimes articles are retracted due to errors (see the new Retraction Watch blog), sometimes scientists simply can’t reproduce a result that looked good in the early trials.
But an article a month ago in the New Yorker sent a chill down my spine tonight. (I wish I could remember which Twitter friend cited it.) It’ll chill you, too, if you believe the scientific method leads to certainty. This sums it up:
Many results that are rigorously proved and accepted start shrinking in later studies.
This is disturbing. The whole idea of science is that once you’ve established a truth, it stays put: you don’t combine hydrogen and oxygen in a particular way and sometimes get water, other times chocolate cake.
Reliable findings are how we’re able to shoot a rocket and have it land on the moon, or step on the gas and make a car move (predictably), or flick a switch and turn on the lights. Things that were true yesterday don’t just become untrue. Right??
Bad news: sometimes the most rigorous published findings erode over time. That’s what the New Yorker article is about.
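One commonly cited mechanism behind this shrinkage (among several the article discusses) is publication bias combined with regression to the mean: if journals mostly publish studies with large, statistically significant effects, the published literature overestimates the true effect, and later replications, free of that filter, drift back down toward it. Here is a minimal simulation sketch of that idea; all the numbers (true effect, sample size, significance cutoff) are illustrative assumptions of mine, not figures from the article:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # assumed true standardized mean difference
N = 30              # assumed participants per study arm
STUDIES = 2000      # number of simulated studies

def run_study():
    """Simulate one two-arm study and return the observed mean difference."""
    treatment = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    return statistics.mean(treatment) - statistics.mean(control)

effects = [run_study() for _ in range(STUDIES)]

# Crude "publication filter": with n = 30 per arm and unit variance, an
# observed difference above ~0.52 is roughly what p < .05 requires, so
# only those headline-worthy results get "published".
published = [e for e in effects if e > 0.52]

print(f"true effect:            {TRUE_EFFECT:.2f}")
print(f"mean of all studies:    {statistics.mean(effects):.2f}")
print(f"mean of published only: {statistics.mean(published):.2f}")
```

The mean across all simulated studies sits near the true effect, while the mean of the "published" subset is far larger; replications drawn without the filter would look like the effect had declined, even though nothing about the underlying phenomenon changed.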
I won’t try to teach everything in the article here; if you want to understand research and certainty, read it. (It’s longish, but great writing.) I’ll just paste in some quotes. All emphasis is added, and my comments are in [brackets].