
7 Ways We’re Screwing Up AI in Healthcare

The healthcare AI space is frothy. Billions in venture capital are flowing, nearly every writer on the healthcare beat has published an article or two on the topic, and there isn’t a medical conference without at least a panel, if not a dedicated day, devoted to it. The promise and potential are very real.

And yet, we seem to be blowing it.

The latest example is an investigation in STAT News detailing the stumbles of IBM Watson, followed inevitably by the ‘is AI ready for prime time?’ debate. Of course, IBM isn’t the only one making things hard on itself; its marketing budget and approach simply make it a convenient target. Many of us – from vendors to journalists to consumers – are unintentionally adding degrees of difficulty to an already uphill climb.

If our mistakes led only to financial loss, no big deal. But the stakes are higher. Medical error is blamed for killing between 210,000 and 400,000 annually. These technologies are important because they help us learn from our data – something healthcare is notoriously bad at. Finally using our data to improve really is a matter of life and death.

In that spirit, here’s a short but relevant list of mistakes we’d all benefit from avoiding. It’s curated from a much longer list of sometimes costly, usually embarrassing mistakes I’ve made during my dozen years of trying to make these technologies work for healthcare.

  1. Inconsistent references to…whatever we’re calling it. I had a hard time settling on the title of this piece. I had plenty of choices to describe the topic of interest: machine learning, big data, data mining, data science, and cognitive computing, to name a few. Within certain circles there are meaningful distinctions among these terms. For the vast majority of those we hope to help, using ten ways to describe the same thing is confusing at best and misleading at worst.

I’d prefer the term ‘machine learning’, since that’s usually what we’re talking about, but I’ll trade my vote for consensus on any name. Except ‘artificial intelligence’. The math involved is neither artificial nor intelligent. Which brings us to mistake 2.

  2. Machine learning is a tool, not a sentient being. It’s a really powerful tool that can help with detection of disease, early prediction of progression, and pairing individuals with interventions. The tool metaphor has real repercussions – not just for cooling off the “AI as doctor” hype but for how we actually put it to use.

For example, the hammer is a great tool. If you know how to use it. If you have a plan to create something of value with it. If you are working with wood. If the job, ultimately, is to bang nails. If not, it’s useless. The second we claim otherwise, we’re setting ourselves up for disappointment.
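To make the tool framing concrete, here’s a minimal sketch of what the tool usually is in practice: a model trained on historical records that produces risk scores for humans to review. It assumes scikit-learn and pandas; the file name, feature columns, and outcome label are hypothetical, for illustration only.

```python
# A minimal sketch of machine learning as a tool, not a sentient being.
# The dataset, column names, and outcome label below are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("patients.csv")               # hypothetical historical data
X = df[["age", "bmi", "a1c", "prior_admits"]]  # illustrative features
y = df["readmitted_within_30d"]                # illustrative outcome (0/1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The output is a risk score per patient -- a suggestion, not a decision.
# Everything hard happens before this step (getting trustworthy data) and
# after it (getting clinicians to act on the scores in their workflow).
risk_scores = model.predict_proba(X_test)[:, 1]
```

Like the hammer, this snippet is only useful given the right job, the right materials, and a plan.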

  3. Ridiculously unhelpful graphics. On a related note, the images accompanying articles on the topic aren’t helping matters. I sympathize with the challenge of visually representing a somewhat intangible approach. However, robotic Terminator arms presenting magical pills (or glowing brains) are hilarious* but not helpful.
  4. People don’t get excited about being replaced. Yet our references to artificial intelligence, our graphics, and our headlines keep steering audiences back toward this one inevitable conclusion. I get it. Fear sells. But it doesn’t get us to better care faster.
  5. Outrageous promises of (and beliefs about) what these tools can do. For some reason people seem to be upset that IBM Watson hasn’t revolutionized cancer care yet. If I sold you a hammer based on the promise that it could build a house on its own, would you be disappointed if it didn’t?

For that matter, who deserves the blame? Me for selling you the hammer or you for believing it?

No one in their right mind would blame the hammer. And unlike the hammer, the tools comprising AI have thousands of studies from the past three decades demonstrating their effectiveness. Yet inappropriate use, over-promising, and poor project management are causing many to question AI itself.

Why is it so easy to blame the tool? See above.

  6. Measure (and talk about) what matters. Hint: it’s not the predictive performance of an algorithm, the terabytes of data amassed, or grandiose introductions of your data scientists’ degrees. It’s dollars saved or earned, lives improved, time saved, etc.

If you must describe value in terms of accuracy or statistical performance, do so responsibly. Claiming “90% accurate!” means nothing without additional context. Accurate at what? Measured how? On what data? Details matter in healthcare.
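To see why, consider a minimal, hypothetical example: in a screening population where only 10% of patients have the condition, a “model” that never flags anyone is 90% accurate and clinically useless. The prevalence and counts below are assumptions for illustration, not real data.

```python
# Why "90% accurate!" means nothing without context: class imbalance.
# Hypothetical numbers: 1,000 patients, 10% of whom have the disease.
n_patients = 1000
n_with_disease = 100  # assumed 10% prevalence

# A "model" that simply predicts "no disease" for every patient:
true_positives = 0
false_negatives = n_with_disease             # every sick patient is missed
true_negatives = n_patients - n_with_disease
false_positives = 0

accuracy = (true_positives + true_negatives) / n_patients
sensitivity = true_positives / n_with_disease

print(f"Accuracy:    {accuracy:.0%}")     # 90% -- sounds impressive
print(f"Sensitivity: {sensitivity:.0%}")  # 0%  -- it catches no one
```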

  7. Technology is great. But people & process improve care. The best predictions are merely suggestions until they’re put into action, and in healthcare, that’s the hard part. Success requires talking to people and spending time learning context and workflows – no matter how badly vendors or investors would like to believe otherwise. It would be fantastic if healthcare could be transformed simply by installing software that came ready-made for your workflows and priorities. Just ask those dealing with the aftermath of electronic medical record installations (i.e., most practicing clinicians). Until certain fundamental realities change, invest in understanding, process, and workflow.

I share this partial list of lessons learned not out of frustration but with incredible enthusiasm for what’s to come. These technologies will become an integral part of how we identify patients in need of attention, reduce wasteful administrative overhead, and recommend more appropriate pathways of care. I see it happening in small steps, in real healthcare organizations, every day. The sooner we reframe the way we speak about and apply these tools, the sooner we can begin using our data to get better.

*Not helpful, but hilarious. I started collecting them and tweeting out one new wildly unhelpful AI graphic every Friday. Feel free to send great specimens my way.

Leonard D’Avolio (@ldavolio) is CEO and co-founder of Cyft Inc. and an assistant professor at Harvard Medical School and Brigham and Women’s Hospital.



8 replies

  1. On that we are in agreement…

    BUT that doesn’t mean an app or a supercomputer or a geek with a Mac can’t do interesting things. The problem is the pitch…

  2. Grab almost any issue of Science or Nature and you will see research on the proteomics of disease. There will be big graphs and diagrams that will make your head spin. Molecular biology came along and is showing us that what we are dealing with in biology is astonishingly complex. More than anyone imagined. It used to be said that the most complex thing made by man was a nuclear submarine, and that no one really understood all of its systems. Alas, the biology of the human body is orders of magnitude more complex.

    Thus, a plea to everyone working on AI: keep working…hard…make all the mistakes you need to make…get all the money you can in the budget. Keep screwing up! Don’t get depressed. Don’t give up.

    We need you.

    P.S. For an example, try having your doc explain the differential diagnosis of periodic fevers (as in Familial Mediterranean Fever, et al.) the next time your child has an unexplained fever.

    Our brains cannot keep all this data on tap in a useful place! It is too much. HELP!

  3. John, solving the high cost of health care is not going to be done with an app or a supercomputer – fixed or not.

  4. “Medical error is blamed for killing between 210,000 and 400,000 annually. These technologies are important because they help us learn from our data – something healthcare is notoriously bad at. Finally using our data to improve really is a matter of life and death.”

    Won’t argue the stats, but can you tell me how many die from our high-fat, high-calorie, high-sugar, sedentary lifestyle? Get Watson to fix that.

  5. For many years, a person with the intractable onset of an acute migraine could be relieved of pain, nearly completely and durably, with an odd combination of two generic medications given IM: meperidine and hydroxyzine. The pain was resolved within one hour following the injections. If this appointment was scheduled by the nurses, they always required that someone drive the patient to the office and home again after treatment. A similar visit would occur, virtually never by the same person, about every 2-3 years. Given the range of medications usually prescribed, excluding narcotics, most people had good control of their migraine problems. If not easily controlled, they benefited from a referral to a neurologist. Still no lateralizing signs or symptoms.

    One day, a very healthy person presented in the office with an acute migraine but with no improvement after one or two hours. At two hours, I gave her a second injection and sent her to the nearest ED. Having called the ED physician about her arrival, I didn’t hear back for 2-3 hours. He eventually called to tell me about her CT scan. With those results, the patient had been sent to the local university ED for the care of a ruptured aneurysm.

    The story is about pattern recognition. There are many issues with this story. BUT, it is unlikely that ‘AI’ could have been of any help with the trusting character of the interaction involving the patient, myself, and my referral source in a timely process of evolving health care. Eventually, another event occurred several years later, again followed by a return to full employment. The second time she went directly to the university ED.