What if policymakers, science reporters and even scientists can’t distinguish between weak and trustworthy research studies that underlie our health care decisions?
Many studies of health care treatments and policies cannot establish cause-and-effect relationships because they rely on faulty research designs. The result is a pattern of mistakes and corrections: early studies of new treatments tend to show dramatic positive health effects, which diminish or disappear as more rigorous studies are conducted.
Indeed, when experts in research evidence conduct systematic reviews, they commonly exclude 50%-75% of the studies they examine because those studies do not meet the basic design standards required to yield trustworthy conclusions.
In many such studies, researchers use statistical techniques to ‘adjust for’ irreconcilable differences between intervention and control groups. Yet it is these very differences that often produce the reported, but invalid, effects of the treatments or policies studied.
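To see why adjustment can fail, consider a small simulation (a hypothetical sketch in Python, not drawn from the article): suppose patients who receive a treatment are sicker in a way the available data do not fully capture. Both the naive comparison and the covariate-adjusted estimate then show a “treatment effect” even though the true effect is zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Unmeasured severity: sicker patients are more likely to receive the treatment
severity = rng.normal(size=n)
treated = (severity + rng.normal(size=n) > 0).astype(float)

# An observed covariate (e.g., age) that reflects severity only partially
age = 0.3 * severity + rng.normal(size=n)

# Outcome worsens with severity but does NOT depend on the treatment (true effect = 0)
outcome = -1.0 * severity + rng.normal(size=n)

# Naive comparison of treated vs. untreated groups
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# "Adjusted" estimate: regress outcome on treatment and the observed covariate
X = np.column_stack([np.ones(n), treated, age])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print("True treatment effect:       0.00")
print(f"Naive difference in means:  {naive:+.2f}")
print(f"Adjusted estimate:          {beta[1]:+.2f}")
```

Because the observed covariate captures only part of the underlying difference between groups, the adjusted estimate remains biased away from zero; this is the residual confounding problem the paragraph above describes.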
In this accessible, graph-filled article recently published by the US Centers for Disease Control and Prevention, we describe five case examples of how some of the most common biases and flawed study designs affect research on important health policies and interventions, such as the comparative effectiveness of medical treatments, cost-containment policies, and health information technology.