My readers know how passionate I am about
protecting the public from misleading health information. I
have
witnessed first-hand
many well-meaning attempts to “empower consumers” with Web 2.0 tools.
Unfortunately, they were designed without a clear understanding of the
scientific method, basic statistics, or in some cases, common sense.
Let me first say that I desperately want my patients to be
knowledgeable about their disease or condition. The quality of their
self-care depends on that, and I regularly point each of my patients to
trusted sources of health information so that they can be fully
informed about all aspects of their health. Informed decisions are
founded upon good information. But when the foundation is corrupt, consumer empowerment collapses like a house of cards.
In a recent lecture on Health 2.0, it was suggested that websites
that enable patients to “conduct their own clinical trials” are the
bold new frontier of research. This assertion betrays a lack of
understanding of basic scientific principles. In healthcare we often
say, “the plural of anecdote is not data” and I would translate that to
“research minus science equals gossip.” Let me give you some examples
of Health 2.0 gone wild:
1. A rating tool was created to “empower” patients to score their
medications (and user-generated treatment options) based on their
perceived efficacy for their disease/condition. The treatments with the
highest average scores would surely reflect the best option for a given
disease/condition, right? Wrong. For every pain syndrome (from headache to low back pain), a narcotic emerged as the most popular (and therefore “best”) treatment. If patients followed this system for determining their treatment options, we’d be swatting flies with cannonballs, not to mention risking drug dependency and even abuse. Treatments must be carefully customized to the individual –
genetic differences, allergy profiles, comorbid conditions, and
psychosocial and financial considerations all play an important role in
choosing the best treatment. Removing those subtleties from the
decision-making process is a backwards step for healthcare.
2. An online tracker tool was created without the input of a
clinician. The tool purported to “empower women” to manage menopause
more effectively online. What on earth would a woman want to do to
manage her menopause online, you might ask? Well, apparently these young
software developers strongly believed that a “hot flash tracker” would
be just what women were looking for. The tool provided a graphical
representation of the frequency and duration of hot flashes, so that
the user could present this to her doctor. One small problem: hot flash
management is a binary decision. Either hot flashes are so personally bothersome that a woman decides to receive hormone therapy to reduce their effects, or they are not bothersome enough to warrant treatment. It doesn’t matter how frequently they occur or how long they last.
long they last. Another ill-conceived Health 2.0 tool.
When it comes to interpreting data, Barker Bausell
does an admirable job of reviewing the most common reasons why people
are misled to believe that there is a cause and effect relationship
between a given intervention and outcome. In fact, the deck is stacked
in favor of a perceived effect in any trial, so it’s important to be
the potential for “false positives”in any clinical trial:
1. Natural History: most medical conditions have
fluctuating symptoms and many improve on their own over time.
Therefore, for many conditions, one would expect improvement during the
course of study, regardless of treatment.
2. Regression to the Mean: people are more likely
to join a research study when their illness/problem is at its worst
during its natural history. Symptoms are therefore more likely to improve during the study than they would be if subjects had enrolled when symptoms were less troublesome. In any given study, then, participants tend to improve after joining, regardless of treatment.
3. The Hawthorne Effect: people behave differently
and experience treatment differently when they’re being studied. So for
example, if people know they’re being observed regarding their work
productivity, they’re likely to work harder during the research study.
The enhanced results, therefore, do not reflect typical behavior.
4. Limitations of Memory: studies have shown that
people ascribe greater improvement to their symptoms in retrospect. Research
that relies on patient recall is in danger of increased false positive
rates.
5. Experimenter Bias: it is difficult for
researchers to treat all study subjects in an identical manner if they
know which patient is receiving an experimental treatment versus a
placebo. Their gestures and the way that they question the subjects may
set up expectations of benefit. Also, scientists are eager to
demonstrate positive results for publication purposes.
6. Experimental Attrition: people generally join
research studies because they expect that they may benefit from the
treatment they receive. If they suspect that they are in the placebo
group, they are more likely to drop out of the study. This can skew the results: the sicker patients who find no benefit from the placebo drop out, leaving mostly milder cases from which to tease out a response to the intervention.
7. The Placebo Effect: I saved the most important
artifact for last. The natural tendency for study subjects is to
perceive that a treatment is effective. Previous research has shown
that about 33% of study subjects will report that the placebo has a
positive therapeutic effect of some sort.
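Several of these artifacts can be demonstrated with a quick simulation. The sketch below is illustrative only: the 0–10 severity scale, the enrollment threshold, and the variances are my own assumptions, not figures from any real trial. It shows how natural symptom fluctuation plus regression to the mean produces an apparent “improvement” even when the treatment does absolutely nothing:

```python
import random

random.seed(42)

def simulate_untreated_trial(n_patients=10_000, enroll_threshold=7.0):
    """Simulate patients whose symptom severity fluctuates randomly around a
    personal baseline, with NO treatment effect at all. Patients enroll only
    on days when severity crosses the threshold (i.e., when they feel worst),
    mimicking the enrollment pattern behind regression to the mean."""
    improvements = []
    while len(improvements) < n_patients:
        baseline = random.gauss(5.0, 1.0)              # typical severity, 0-10 scale
        today = baseline + random.gauss(0.0, 2.0)      # day-to-day fluctuation
        if today < enroll_threshold:
            continue                                   # feels okay; doesn't join
        followup = baseline + random.gauss(0.0, 2.0)   # independent later measurement
        improvements.append(today - followup)          # apparent "improvement"
    return sum(improvements) / len(improvements)

mean_improvement = simulate_untreated_trial()
print(f"average 'improvement' with zero treatment effect: {mean_improvement:.2f} points")
```

With these assumed numbers the average recorded “improvement” comes out well above zero, even though no intervention was applied, which is exactly why uncontrolled self-reports so often look like a treatment worked.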
In my opinion, the often-missing ingredient in Health 2.0 is the
medical expert. Without our critical review and educated guidance,
there is a greater risk of making irrelevant tools or perhaps even
doing more harm than good. Let’s all work closely together to harness
the power of the Internet for our common good. While research minus
science = gossip, science minus consumers = inaction.
I think you are missing the middle ground: patient-reported data collected under more scientific conditions. I reported on our project, the brain tumor virtual trial, at the 1999 AACR conference. See:
#264 at http://virtualtrials.com/pdf/virtualtrialsabstract99.pdf
We set up a patient-reported registry, but require patients to send us pathology reports and MRI reports. Our team of MDs evaluated the reports, compared them to what the patients reported, and found that the patients were capable of reporting their own information correctly. These people are highly motivated because we deal with brain tumors. Our endpoint is death, which is relatively easy to record accurately.
We were able to spot some trends early – such as using Temodar at the same time as radiation – about a year before it was widely reported, possibly saving or extending the lives of thousands of our members. We also picked up on the early success of Avastin for brain tumors before it became popular. On the flip side, we can also see trends of some things that don’t work.
Thank you all for your thoughtful comments. John Grohol raises an excellent point: no data is often better than bad data. False assumptions can be more harmful than no assumptions, so we must be very careful in drawing conclusions from aggregates of subjective opinions.
Steve also raises the issue of trust – it is very sad that some people have come to distrust scientists and physicians, the very people who are committed to finding the objective truth about diseases and treatments. Emotions certainly play a role in this; one bad apple can spoil the bunch. I would ask people to remember that understandable frustrations with our healthcare “system” are no reason to throw the baby out with the bathwater. Please don’t give up on careful analysis and the scientific method just because a pharmaceutical company did something unethical. The solution is better science and fuller transparency, not “giving up” on medicine.
And Lisa is right – the Patients Like Me approach has serious limitations. I believe that patients can and should support one another online – but they should be careful not to cross over into offering medical advice. I know that they have the best intentions at heart, but disease complexity and genetic differences make every patient unique. It takes a long and thoughtful analysis of all the details to arrive at the best care decisions. Without the full story, errors are sure to arise.
If your doctor is not taking the time to analyze the whole picture, then seek out another doctor. Don’t try to solve things yourself or with other patients. Good healthcare is based on a partnership with a knowledgeable provider – not a solo mission that excludes healthcare professionals.
There should be more effective online patient networks that aggregate clinical data and enable learning and research in real time. Useful observational data can inform science and medicine, hopefully at a faster pace.
Clinical efficacy trials are not “real world” studies conducted under “real world” conditions. Patient outcomes need to be reported in real time, so patients and physicians can learn immediately if and how patients are benefiting from new drug therapies.
There is a need for quality peer review, but there is also unlimited space on the Web to publish. It should be easier to publish well-designed studies that find no treatment effect, thereby minimizing publication bias.
Thank you Dr. Val for an excellent article. I agree with the valid points you have made. In fact, I’ve been having discussions loosely related to this topic with a number of friends.
“Patient empowerment” using Health 2.0 is a new interest of mine. I began blogging to discuss health policy issues and how they affect my ability to afford care. Then my blogging began to incorporate discussion of multiple sclerosis and living with chronic illness.
In researching topics, it is evident that poorly written misinformation abounds on the web. I understand why a physician might ask a newly-diagnosed patient NOT to consult “Dr. Google.” I would, however, expect that some trusted sources of information might be recommended to the patient.
It’s true that patients understand more clearly what other patients might experience and are able to provide welcome moral support. They may even be able to share anecdotes that could help a fellow patient who is frustrated with a symptom or deficiency, and maybe even spark an enhanced interaction with the patient’s physician.
You said: “In a recent lecture on Health 2.0, it was suggested that websites that enable patients to “conduct their own clinical trials” are the bold new frontier of research.”
I believe this refers to the ALS/Lithium project ongoing at PatientsLikeMe. I’m not a member of this particular community, but I am a member of the MS community at PatientsLikeMe.
It is apparent to me that the information collected could not be reliable, especially if you consider EVERY registered user. PLM now claims over 9000 MS patients in the community.
How many of them are individuals who have registered multiple times with different user IDs? How many have completed their self-reported disease history and medication use accurately? Weight is measured in the statistics, but I’ll tell you now that I’ve not filled in that category carefully.
Then there are patients who had self-reported details in their profiles, but then deleted it all when they became frustrated with the website’s administrators.
All of this affects the validity of the experiment. Never mind the patients who may report much more severe disease activity simply because they are new and have no comparable experiences.
And how much does this truly help the patient who follows the advice of untrained peers who recommend treatment that counters the physician’s plan of action?
I read on PLM too much about how bad the doctors are and how the patients know more than they do. That viewpoint grows like weeds and doesn’t help patients who are looking for trustworthy advice.
Now I realize that this comment has turned into a full-out rant. I apologize for that. I just believe that more of the kind of discussion Dr. Val brings forward needs to happen as Health 2.0 ventures into improving the quality of health care in America.
Dr. Jones, First of all thank you for a well-written, thoughtful article. And there’s no doubt that you are totally correct on the science. What’s at work throughout much of the consumer literature and Web 2.0, though, is an abandonment of trust in science, in scientists and in the scientific method. Consumers trust none of the above to act in their own best interest, with honesty, openness and without adverse financial incentives. Until those issues are fixed (if they’re even fixable at all) consumers will trust their own instincts, sometimes to their detriment.
While this is true for the obvious low-hanging Health 2.0 fruit that excite some VCs and naive technologists, I don’t think it holds at all true for more thoughtful designs such as PatientsLikeMe.com. Their efforts are backed by experienced researchers who understand statistics.
In addition, I would point out that single-case experimental designs are a completely valid research tool. It’s just that the data gained from such experiments is of limited value.
I have written about the invalidity of rating sites previously, the first time nearly 2 years ago:
Reliability and Validity in a Web 2.0 World
How Good Are Doctor Rating Sites?
But I wonder if we’re not just yelling into a desert here, since most people believe these kinds of ratings are “good enough.”
I’ve said this before, but bad data is far worse than no data, because we have no idea in what way it is invalid, while it gives us the illusion of having additional information on which to base a decision. With no data, you have no additional false information, and so your decision has to be based upon more traditional factors (such as personal or professional recommendations from others you know and trust).
This posting doesn’t take into consideration how major drug manufacturers hide data about the safety and efficacy of their drugs and produce data of scant clinical value. Conflicts of interest have thoroughly corrupted American medical research. There is dangerous potential for conflicts of interest when pharmaceutical and other for-profit businesses control the dissemination of findings generated by medical research.
The ability of drug companies to pick and choose the research they provide in support of their products is an outrageous conflict of interest and puts all patients in harm’s way. It can undermine public trust in and support for scientific research, endanger research subjects and patients, and boost medical costs by encouraging doctors and patients to use new treatments that are no better than cheaper alternatives.
Studies with positive findings are more likely to be published than studies with negative results. Even negative results can provide useful information about the effectiveness of treatments. Any tendency to put negative results into a file drawer and forget them can bias reviews of treatments reported in medical literature, making them look more effective than they really are.
With most clinical trials, investigators never give out information as to how people are doing. Most trials are failures with respect to actually improving things. The world doesn’t find out what happened until after 100 or 500 or 2,000 patients are treated, and then only 24 hours before the New England Journal of Medicine publication date.
Giving participants and investigators all the information that can be gathered is essential to maintaining the good doctor-patient communication that benefits cancer patients.
Dangerous drugs have been allowed to reach the market because conflicts of interest have become so endemic in the system of drug evaluation, a trend that has been exacerbated by the rise of for-profit clinical trials, fast-tracking drug approvals, government-industry partnerships, direct consumer advertising and industry-funded salaries for FDA regulators.
The collaborations between academia and industry have clearly exerted a discernible influence on clinicians, bringing with them erroneous results, suppressed data, and harmful side effects from these drugs.
There is an inherent conflict of interest when organizations that provide guidelines for treating disease receive funding from corporations that benefit financially from the recommended treatments. There is no proof beyond reasonable doubt for any approach to treating cancer today. There is only the bias of clinical investigators, as a group and as individuals.
The use of clinical trials to establish prescribing guidelines for evidence-based medicine is heavily criticized because such trials have little relevance for the individual patient in the real world, given the individuality and uniqueness of each patient. The choice of physicians to integrate promising insights and methods remains an essential component of quality cancer care.