
Consumer-Generated Clinical Trials? Research Minus Science = Gossip

Val Jones

My readers know how passionate I am about protecting the public from misleading health information. I have witnessed first-hand many well-meaning attempts to “empower consumers” with Web 2.0 tools. Unfortunately, they were designed without a clear understanding of the scientific method, basic statistics, or in some cases, common sense.

Let me first say that I desperately want my patients to be
knowledgeable about their disease or condition. The quality of their
self-care depends on that, and I regularly point each of my patients to
trusted sources of health information so that they can be fully
informed about all aspects of their health. Informed decisions are
founded upon good information. But when the foundation is corrupt, consumer empowerment collapses like a house of cards.

In a recent lecture on Health 2.0, it was suggested that websites
that enable patients to “conduct their own clinical trials” are the
bold new frontier of research. This assertion betrays a lack of
understanding of basic scientific principles. In healthcare we often
say, “the plural of anecdote is not data” and I would translate that to
“research minus science equals gossip.” Let me give you some examples
of Health 2.0 gone wild:

1. A rating tool was created to “empower” patients to score their
medications (and user-generated treatment options) based on their
perceived efficacy for their disease/condition. The treatments with the
highest average scores would surely reflect the best option for a given
disease/condition, right? Wrong. For every single pain syndrome (from headache to low back pain), a narcotic emerged as the most popular (and therefore “best”) treatment. If patients used this system to choose their treatments, we’d be swatting flies with cannonballs, not to mention putting people at risk for drug dependency and even abuse. Treatments must be carefully customized to the individual: genetic differences, allergy profiles, comorbid conditions, and psychosocial and financial considerations all play an important role in choosing the best treatment. Removing those subtleties from the decision-making process is a backwards step for healthcare. (A toy sketch after these two examples shows why the treatment with the highest average score need not be the right one for any given patient.)

2. An online tracker tool was created without the input of a
clinician. The tool purported to “empower women” to manage menopause
more effectively online. What on earth would a woman want to do to
manage her menopause online, you might ask? Well, apparently, these young software developers strongly believed that a “hot flash tracker” would be just what women were looking for. The tool provided a graphical representation of the frequency and duration of hot flashes, so that the user could present it to her doctor. One small problem: hot flash management is a binary decision. Either the hot flashes are so personally bothersome that a woman opts for hormone therapy to reduce them, or they are not bothersome enough to warrant treatment. It doesn’t matter how frequently they occur or how long they last. Another ill-conceived Health 2.0 tool.
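
To make the point about averages concrete, here is a toy sketch in Python. The treatment labels, the “comorbidity” subgroup, and every number in it are illustrative assumptions of mine, not data from the rating tool described above; the sketch only shows the arithmetic of why the highest average score need not identify the right treatment for a given patient.

```python
# Toy illustration (made-up numbers): the treatment with the highest *average*
# rating is not necessarily the right treatment for a given individual.
# Two hypothetical treatments (A and B) and two patient subgroups, e.g. with or
# without a comorbidity that makes treatment A a poor choice.
groups = {
    # subgroup: (share of raters, mean rating for A, mean rating for B)
    "no_comorbidity": (0.8, 8.5, 7.0),
    "comorbidity":    (0.2, 3.0, 7.5),
}

avg_a = sum(share * a for share, a, _ in groups.values())
avg_b = sum(share * b for share, _, b in groups.values())

print(f"Average rating: A = {avg_a:.1f}, B = {avg_b:.1f}")   # A "wins" overall
print("For the comorbidity subgroup, though, B beats A:",
      groups["comorbidity"][2], "vs", groups["comorbidity"][1])
```

The aggregate “winner” simply reflects whichever option pleases the largest group of raters; it says nothing about what is safe or appropriate for the individual patient actually making the decision.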

When it comes to interpreting data, Barker Bausell
does an admirable job of reviewing the most common reasons why people
are misled into believing that there is a cause-and-effect relationship
between a given intervention and outcome. In fact, the deck is stacked
in favor of a perceived effect in any trial, so it’s important to be
aware of these potential biases when interpreting results. Health 2.0
enthusiasts would do well to consider the following factors that create
the potential for “false positives” in any clinical trial:

1. Natural History: most medical conditions have
fluctuating symptoms and many improve on their own over time.
Therefore, for many conditions, one would expect improvement during the
course of study, regardless of treatment.

2. Regression to the Mean: people are more likely to join a research study when their illness/problem is at its worst point in its natural history. Because symptoms at their worst tend to drift back toward their usual level, participants in any given study will tend to improve after joining, regardless of what is done to them. (The simulation after this list shows how this effect, combined with natural history, produces an apparent benefit all on its own.)

3.  The Hawthorne Effect: people behave differently
and experience treatment differently when they’re being studied. So for
example, if people know they’re being observed regarding their work
productivity, they’re likely to work harder during the research study.
The enhanced results, therefore, do not reflect typical behavior.

4. Limitations of Memory: studies have shown that people report greater improvement in their symptoms when asked in retrospect. Research that relies on patient recall therefore carries a higher risk of false positives.

5. Experimenter Bias: it is difficult for
researchers to treat all study subjects in an identical manner if they
know which patient is receiving an experimental treatment versus a
placebo. Their gestures and the way that they question the subjects may
set up expectations of benefit. Also, scientists are eager to
demonstrate positive results for publication purposes.

6. Experimental Attrition: people generally join
research studies because they expect that they may benefit from the
treatment they receive. If they suspect that they are in the placebo
group, they are more likely to drop out of the study. This can distort the results: the sicker patients who find no benefit from the placebo drop out, leaving the milder cases behind, which makes it hard to tease out the true response to the intervention.

7. The Placebo Effect: I saved the most important
artifact for last. The natural tendency for study subjects is to
perceive that a treatment is effective. Previous research has shown
that about 33% of study subjects will report that the placebo has a
positive therapeutic effect of some sort.
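
To see how powerful the first two artifacts are on their own, here is a minimal simulation in Python. The symptom scale, the amount of day-to-day fluctuation, and the enrollment rule are assumptions I have made up for illustration, not data from any real study, and no treatment effect of any kind is built in:

```python
# Minimal simulation (illustrative, assumed parameters): natural history plus
# regression to the mean produce an apparent benefit even though the
# "intervention" does nothing at all.
import random

random.seed(42)

def symptom_score(baseline):
    """One day's self-reported symptom severity on a 0-10 scale: a stable
    personal baseline plus ordinary day-to-day fluctuation."""
    return max(0.0, min(10.0, random.gauss(baseline, 1.5)))

improvements = []
for _ in range(10_000):
    baseline = random.uniform(3, 7)       # this patient's typical severity
    # People tend to enroll (or start a remedy) when symptoms flare,
    # i.e. on a worse-than-usual day.
    enrollment = symptom_score(baseline)
    while enrollment < baseline + 1.0:
        enrollment = symptom_score(baseline)
    # Follow-up weeks later is just another ordinary day: no treatment effect.
    follow_up = symptom_score(baseline)
    improvements.append(enrollment - follow_up)

mean_change = sum(improvements) / len(improvements)
share_better = sum(change > 0 for change in improvements) / len(improvements)
print(f"Average 'improvement' with a do-nothing intervention: {mean_change:.2f} points")
print(f"Share of patients who report feeling better: {share_better:.0%}")
```

In this sketch the large majority of simulated patients “improve” even though they received nothing at all, which is exactly why uncontrolled, self-reported trials tend to look like successes, and why properly controlled comparisons are not an optional nicety.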

In my opinion, the often-missing ingredient in Health 2.0 is the
medical expert. Without our critical review and educated guidance,
there is a greater risk of making irrelevant tools or perhaps even
doing more harm than good. Let’s all work closely together to harness
the power of the Internet for our common good. While research minus
science = gossip, science minus consumers = inaction.

Comments

Al Musella, DPM

I think you are missing the middle ground: patient-reported data under more scientific conditions. I reported on our project, the brain tumor virtual trial, at the 1999 AACR conference. See: #264 at http://virtualtrials.com/pdf/virtualtrialsabstract99.pdf We set up a patient-reported registry, but require patients to send us pathology reports and MRI reports. We had our team of MDs evaluate the reports and compare them to what the patients reported, and found that the patients were capable of reporting their own information correctly. These people are highly motivated because we deal with brain tumors. Our endpoint is death, which is…

Dr. Val

Thank you all for your thoughtful comments. John Grohol raises an excellent point: no data is often better than bad data. False assumptions can be more harmful than no assumptions, so we must be very careful in drawing conclusions from aggregates of subjective opinions. Steve also raises the issue of trust – it is very sad that some people have come to distrust scientists and physicians – the very people who are committed to finding the objective truth about diseases and treatments. Emotions certainly play a role in this – one bad apple can spoil the bunch. I would ask people…

Greg Pawelski

There should be more effective online patient networks to aggregate clinical data and effectively learn and research in real-time. Useful observational data can inform science and medicine and hopefully at a faster pace. Clinical efficacy trials don’t do “real world” studies under “real world” conditions. Patient outcomes need to be reported in real-time, so patients and physicians can learn immediately if and how patients are benefiting from new drug therapies. There is a need for quality peer-review, but there is also unlimited space on the Web to publish. It should be easier to publish well-designed studies that show if there…

Lisa Emrich

Thank you, Dr. Val, for an excellent article. I agree with the valid points you have made. In fact, I’ve been having discussions loosely related to this topic with a number of friends. “Patient empowerment” using Health 2.0 is a new interest of mine. I began blogging to discuss health policy issues and how they affect my ability to afford care. Then my blogging began to incorporate discussion of multiple sclerosis and living with chronic illness. In researching topics, it is evident that poorly written misinformation abounds on the web. I understand why a physician might ask a newly-diagnosed patient…

Steve Davis

Dr. Jones, first of all, thank you for a well-written, thoughtful article. And there’s no doubt that you are totally correct on the science. What’s at work throughout much of the consumer literature and Web 2.0, though, is an abandonment of trust in science, in scientists and in the scientific method. Consumers trust none of the above to act in their own best interest, with honesty, openness and without adverse financial incentives. Until those issues are fixed (if they’re even fixable at all) consumers will trust their own instincts, sometimes to their detriment.

John M. Grohol, PsyD

While this is true for the obvious low-hanging Health 2.0 fruit that excites some VCs and naive technologists, I don’t think it holds at all true for more thoughtful designs such as PatientsLikeMe.com. Their efforts are backed by experienced researchers who understand statistics. In addition, I would point out that single-case experimental designs are a completely valid research tool. It’s just that the data gained from such experiments is of limited value. I have written about the invalidity of rating sites previously, the first time nearly 2 years ago: Reliability and Validity in a Web 2.0 World How Good Are…

Greg Pawelski

This posting doesn’t take into consideration how major drug manufacturers hide data about the safety and efficacy of their drugs, and produce data of scant clinical value. Conflicts of interest have thoroughly corrupted American medical research. There is dangerous potential for conflicts of interest when pharmaceutical and other for-profit businesses control the dissemination of findings generated by medical research. The ability of drug companies to pick and choose the research they provide in support of their products is an outrageous conflict of interest and puts all patients in harm’s way. It can undermine public trust in and support for…