
Research Bites Dog

We live in a headline/hyperlinked world. A couple of years back, I learned through happenstance that my most popular blog posts all had catchy titles. I'm pretty confident that people who read this blog do more than scan the titles, but there is so much information coming at us these days that it's often difficult to get much beyond the headline. Another phenomenon of information overload is that we naturally apply heuristics, or shortcuts, in our thinking to avoid dealing with a high degree of complexity. Let's face it: it's work to think!

In this context, I thought it would be worth talking about two recent headlines that seem to be setbacks for the inexorable forward march of connected health. Both come in the form of peer-reviewed studies, so our instinct is to pay close attention.

In fact, one comes from an undisputed leader in the field, Dr. Eric Topol.  His group recently published a paper where they examined the utility of a series of medical/health tracking devices as tools for health improvement in a cohort of folks with chronic illness.  In our parlance, they put a feedback loop into these patients’ lives.  It’s hard to say for sure from the study description, but it sounds like the intervention was mostly about giving patients insights from their own data.  I don’t see much in the paper about coaching, motivation, etc.

If it is true that the interactivity/coaching/motivation component was light, that may explain the lackluster results. We find that feedback loops alone are relatively weak motivators. It is also possible that, because the sample included a mix of chronic illnesses, it was harder to see a positive effect. One principle of clinical trial design is to minimize all variables between the comparison groups except the intervention itself. Having a group with varying diseases makes it harder to say for sure that any effects (or lack of effects) were due to the intervention.

Dr. Topol is an experienced researcher and academician. When his group designed the study, I am confident they had the right intentions in mind. My guess is they felt they were studying the effect of mobile health and wearable technology on health (more on that at the end of the post). But you can see that, in retrospect, the likelihood of teasing out a positive effect was relatively low.

The other paper, from JAMA Internal Medicine, reported on a high-profile trial for congestive heart failure, which involved telemonitoring and a nurse call center intervention after discharge. This trial included a large sample size and was published in a well-respected and well-read journal. On initial reading, it was less clear to me why they did not see an effect. I had to read thoroughly, well beyond the headline, to get an idea. The authors, in the discussion section, provide several thoughtful possibilities.

One that jumps out to me is that the intervention was not integrated into the physician practices caring for the patients.  In our experience with CHF telemonitoring, it is crucial that the telemonitoring nurses have both access to the physician practices and the trust of the patients’ MDs.  Sometimes a simple medication change can prevent a readmission if administered in a timely manner. This requires speedy communication between the telemonitoring nurse and the prescribing physician.  If that connection can’t be made, the patient may wind up in the emergency room and the telemonitoring is for naught.

It is also fascinating that the authors point out that adherence to the intervention was only about 60%. This reminds me of another high-profile paper from 2010 that came to the conclusion that telemonitoring for CHF 'doesn't work.' I blogged on that at the time, pointing out that their adherence rate was 50%. In both cases, with such low adherence, it is not surprising that no effect was seen.

In our heart failure program, adherence is close to 100%. As a result, our readmission rate is consistently about 50% (both all-cause and CHF-related) and we showed that our intervention is correlated with a 40% improvement in mortality over six months. The telemonitoring nurses from Partners HealthCare at Home cajole the patients in the most caring way, and patients are therefore quite good at sending in their daily vitals. If they don't, the nurses call to find out why. Our program is also tightly aligned with the patients' referring practitioners. I suspect these two features are important in explaining our outcomes.

A prime example of how these study headlines can derail the advancement of connected health was captured in an email I received the other day from my good friend Chris Wasden. Referring to the JAMA Internal Medicine study, he said, "Our docs are using this research to indicate they should not waste their time on digital health."

Perhaps a spirited discussion over some of these nuances may change some minds.

And that leads me again to the concept of headlines and heuristics.  How could ‘telemonitoring’ in CHF lead to such disparate results?  Is our work wrong? Spurious?  I don’t believe so.  Rather, I think we’ve collectively fallen into a trap of treating ‘mobile health’ and ‘telemonitoring’ as monolithic things when, as you can see, these interventions are designed quite differently.

I believe we are susceptible to this sort of confusion because we apply a heuristic. We are used to reading about clinical trials for new therapeutics or devices. A chemical is a chemical and a device is a device. In a pure setting, when applied to a uniform population of individuals, a chemical either has an effect or not. Connected health interventions are multifaceted and complex. Thus the apparent contradiction that telemonitoring works in our hands but not in the recent JAMA Internal Medicine paper.

My conclusion is that the next phase of research in this area should move away from testing technologies. Instead, we should focus on teasing out those design aspects of interventions that predict intervention success. Now I think that’s a good headline!

I’ll start out by offering two hypotheses:

  1. mHealth interventions that are separate and distinct from the patient's ongoing care process are less likely to be successful than those that are integrated.
  2. If adherence to a program is low, it will not be successful. Early phase, pre-clinical trial testing of interventions should include work to fine tune design features that promote adherence. Chapter 8 of The Internet of Healthy Things offers some ideas on this.

As I said five years ago, I'm not sure intention-to-treat analysis is the right way to evaluate connected health interventions. If patients are non-adherent to the intervention, is it any surprise that they don't respond? I'm having trouble wrapping my head around that one.
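To make the adherence point concrete, here is a minimal simulation sketch. The readmission risks, sample size, and 60% adherence figure below are hypothetical stand-ins (none are taken from the trials discussed above); the sketch simply shows how an intention-to-treat estimate gets diluted when a real effect reaches only the adherent fraction of the intervention arm.

```python
# Minimal sketch with hypothetical numbers (not from any of the trials discussed):
# how low adherence dilutes an intention-to-treat (ITT) estimate of effect.
import random

random.seed(42)

N = 10_000            # patients per arm (hypothetical)
BASE_RISK = 0.25      # assumed readmission risk with usual care
TREAT_RISK = 0.15     # assumed risk if the patient actually uses telemonitoring
ADHERENCE = 0.60      # roughly the adherence rate quoted above

# Control arm: everyone faces the baseline risk.
control_events = sum(random.random() < BASE_RISK for _ in range(N))

# Intervention arm: only adherent patients get the lower risk.
interv_events = 0
adherent_events = 0
adherent_n = 0
for _ in range(N):
    adherent = random.random() < ADHERENCE
    risk = TREAT_RISK if adherent else BASE_RISK
    event = random.random() < risk
    interv_events += event
    if adherent:
        adherent_n += 1
        adherent_events += event

control_rate = control_events / N
itt_rate = interv_events / N                  # everyone, as randomized
adherent_rate = adherent_events / adherent_n  # adherent patients only

print(f"Control readmission rate:          {control_rate:.3f}")
print(f"Intention-to-treat rate:           {itt_rate:.3f}")
print(f"Adherent-only (per-protocol) rate: {adherent_rate:.3f}")
# With only 60% adherence, the intention-to-treat rate sits well above the
# adherent-only rate, so a real effect can look much smaller in the headline.
```

Of course, comparing only the adherent patients has its own selection-bias problem, as one of the commenters below points out; the sketch is meant to illustrate dilution, not to argue for a per-protocol analysis.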

Joseph Kvedar is the Director of the Center for Connected Health.

8 replies

  1. You do need an intent-to-treat design because if you just look at "adherent" patients you don't capture state of mind. And in the one situation (once again, right here on THCB) where a study was done both ways (intent-to-treat and RCT), the RCT showed no savings when the study group was measured against the control. But when the study group self-divided into participants and non-participants, the difference between the two was massive. And this was despite the fact that the people in this study had nothing wrong with them to begin with. I can't make this stuff up. https://thehealthcareblog.com/blog/2015/12/16/genetic-testing-the-new-frontier-of-wellness-madness/

  2. Continuing. Vivify tried the same strategy as Propeller. This time the P-I spelled the company’s name right (it is, after all, phonetic) but misspelled “principal investigator.” I suspect all of their figures were made up but I know some of them had to be, because they contradicted the other ones. http://theysaidwhat.net/2016/02/28/could-reading-our-website-have-saved-upmc-17-million/

    And yet they raised $17 million. Is this a great country or what? One more comment…

  3. Hi Joe, there is one solution that has worked well in digital health research: lying. It worked really well for Propeller Health, as described right here in THCB. https://thehealthcareblog.com/blog/2014/03/21/meet-propeller-health-digital-healths-poster-child-for-invalid-savings-reporting/

    They raised money and really snookered a bunch of people. For real fun, look at the comments; the P-I is sniping at them but can't spell their name right.

    I have two other observations; I'll put them in separate comments.

  4. I wonder if there are ways to monitor, continuously, ejection fraction or cardiac output or oxygen consumption by tissues. Then we might be able to see throughout the day the effects of fluid intake, medications, foods, and other activities. In other words, maybe what we are monitoring in these studies is not tightly enough correlated with what we really need to know…which probably is oxygen delivery…especially to the brain and myocardium.

  5. “we should focus on teasing out those design aspects of interventions that predict intervention success”

    Via TweetBot

  6. I just think this speaks to the fact that someone they trust needs to explain to pts: very time consuming endeavour

    Via TweetBot

  7. Blaming the media for writing the inevitable critical headlines isn’t going to change anything.

    Pointing fingers at the way the trials are managed is fair but ultimately misguided.

    Tech is not a cheap fix. It’s a blockbuster drug, but only when properly prescribed and managed.

    Digital health must police itself.

    Better research please!