Today on Episode 58 of Health in 2 Point 00, Jess and I have more to share from Exponential Medicine, but this time we’re at the Health Innovation Lab checking out all of the startups. In this episode, Jess and I talk to Meghan Conroy from CaptureProof about decoupling medical care from time and location, Care Angel‘s Wolf Shlagman about the world’s first AI and voice powered virtual nursing assistant, and highlight Humm’s brain band, which is designed to improve working memory, concentration, and visual attention. We leave you with some parting words from Godfrey Nazareth: “Let’s set the world on fire. Let’s change the world, with love.” – Matthew Holt
On Episode 57 of Health in 2 Point 00, Jess and I report from Exponential Medicine. In this episode, Jess and I talk about digital surgery and how Shafi Ahmed and Stefano Bini are transforming surgical training. She also asks me about my favorite session, one by Anita Ravi on health care for those who have been sex trafficked. Other highlights include ePatient Dave’s talk about access to data for patients and letting patients help, and Leerom Segal’s overview of why voice matters. – Matthew Holt
WTF Health – ‘What’s the Future’ Health? is a new interview series about the future of the health industry and how we love to hate WTF is wrong with it right now. Can’t get enough? Check out more interviews at www.wtf.health.
What can you find diving into the black hole of healthcare’s unstructured data? Natural Language Processing (NLP) seems to be the ‘tech du jour’ this year, so I spoke to early-entrant Simon Beaulah of Linguamatics about the big picture of NLP-plus-AI and the tech’s evolving role in improving care by putting together a more complete ‘patient narrative’ in the EMR.
Wanna hear his thoughts on what’s next for NLP in terms of scaling? Jump in at the 2:15 mark.
Jessica DaMassa asks me all about health & technology, in just 2 minutes, featuring venture rounds for Kyruus, Parsley Health, Livongo buying RetroFit, the RWJF AI challenge from Catalyst @ Health 2.0, and a ridiculously long explanation of where the @boltyboy twitter name came from… – Matthew Holt
They just raised another $10M and you should find out why…
I met up with Kyruus co-founder and chief product officer Julie Yoo at #HIMSS18 to hear about the #AI magic behind their ‘intelligent routing engine.’ Apparently, it does such an incredible job driving business into health systems by better matching patients to docs that some more funding is in order to help them expand!
So, where does Kyruus fit into the ‘big picture’ of health’s ‘big data’ movement? Julie’s take on how AI implementation in healthcare is going gives you a pretty good idea.
Artificial intelligence requires data. Ideally, that data should be clean, trustworthy and, above all, accurate. Unfortunately, medical data is far from it. In fact, medical data is sometimes so far removed from being clean that it’s positively dirty.
Consider the simple chest X-ray, the good old-fashioned posterior-anterior radiograph of the thorax. One of the longest-standing radiological techniques in the medical diagnostic armoury, it is performed across the world in the billions. So many, in fact, that radiologists struggle to keep up with the sheer volume, and sometimes forget to read the odd 23,000 of them. Oops.
Surely, such a popular, tried and tested medical test should provide great data for training AI? There’s clearly more than enough data to have a decent attempt, and the technique is so well standardised and robust that surely it’s just crying out for automation?
Currently, three South Korean medical institutions – Gachon University Gil Medical Center, Pusan National University Hospital and Konyang University Hospital – have implemented IBM’s Watson for Oncology artificial intelligence (AI) system. As IBM touts Watson for Oncology’s ability to “[i]dentify, evaluate and compare treatment options” by understanding the longitudinal medical record and applying its training to each unique patient, questions regarding the status and liability of these AI machines have arisen.
Given its ability to interpret data and present treatment options (along with relevant justifications), AI represents an interim step between a diagnostic tool and a colleague in medical settings. Using philosophical and legal concepts, this article explores whether AI’s ability to adapt and learn means that it has the capacity to reason and whether this means that AI should be considered a legal person.
Through this exploration, the authors conclude that medical AI such as Watson for Oncology should be given a unique legal status akin to personhood to reflect its current and potential role in the medical decision-making process. They analogize the role of IBM’s AI to that of medical residents and argue that liability for wrongful diagnoses should generally be assessed under medical malpractice rather than products liability or vicarious liability. Finally, they differentiate medical AI from AI used in other products, such as self-driving cars.
“We built it and we just let it run. We’re a few dudes in an office and our goal is to keep it running. It does everything we could do, except it’s significantly more powerful and it has completely automated how our work is being done,” the hedge fund manager said casually as he described the process by which nearly $1 billion was being managed within his fund.
The ‘it’ is an artificial intelligence (AI) based algorithm that uses complex statistics to analyze the variables behind successful decisions and advanced computer programs to keep replicating those decisions. All this, while it continuously learns from – and improves upon – its mistakes as it encounters new variables.
These machine intelligent systems are applying the many different forms of AI and fundamentally changing the financial industry. From applying Natural Language Processing to detect money laundering and fraudulent financial activity, to applying Cognitive Computing to analyze wide varieties of variables in building better trading algorithms, to leveraging Deep Learning to study consumer decision patterns and provide personalized ‘chatbots,’ AI is transforming the financial sector.
One of the most noticeable areas where this disruption is taking place is within hedge funds: funds that have transitioned their trading desks to AI-backed systems are already beginning to outperform hedge funds backed by humans alone. What’s really quite astonishing is how far-reaching the results have been in the short span of a few years.
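The learning loop described above – score the inputs that go into a decision, act, then nudge the model toward decisions that paid off – can be sketched in toy form. This is a deliberately simplified illustration with hypothetical names; a real trading system would be vastly more complex.

```python
# Toy sketch of a self-updating decision system: score decision variables,
# act on the score, and adjust weights as outcomes arrive. Illustrative only.

def predict(weights, features):
    """Score a candidate decision from its feature values: 1 = act, 0 = pass."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

def learn_from_outcome(weights, features, outcome, lr=0.1):
    """Nudge weights toward decisions that worked, away from ones that didn't."""
    decision = predict(weights, features)
    error = outcome - decision  # outcome: 1 if acting was right, 0 if not
    return [w + lr * error * x for w, x in zip(weights, features)]

# The loop "just runs": each new observation refines the weights.
weights = [0.0, 0.0, 0.0]
history = [
    ([1.0, 0.2, -0.5], 1),   # features observed, and whether acting paid off
    ([0.4, -1.0, 0.3], 0),
    ([0.9, 0.1, -0.2], 1),
]
for features, outcome in history:
    weights = learn_from_outcome(weights, features, outcome)
# weights is now roughly [0.06, 0.12, -0.08]
```

The key property is the last loop: the system never stops training, so every new variable it encounters reshapes the decisions it will replicate next.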
Hearing about hedgies working with AI researchers to make even more money doesn’t inspire the rest of us to greatness. However, it may be valuable to take a brief historical look at how the financial industry reached this juncture.
We’ve all heard the big philosophical arguments and debate between rockstar entrepreneurs and genius academics – but have we stopped to think exactly how the AI revolution will play out on our own turf?
At RSNA this year I posed the same question to everyone I spoke to: What if radiology AI gets into the wrong hands? Judging by the way the crowds voted with their feet by packing out every lecture on AI, radiologists would certainly seem to be very aware of the looming seismic shift in the profession – but I wanted to know if anyone was considering the potential side effects, the unintended consequences of unleashing such a disruptive technology into the clinical realm.
While I’m very excited about the prospect and potential of algorithmic augmentation in radiological practice, I’m also a little nervous about more malevolent parties using it for predatory financial gains.
The healthcare AI space is frothy. Billions in venture capital are flowing, nearly every writer on the healthcare beat has at least an article or two on the topic, and there isn’t a medical conference without at least a panel, if not a dedicated day, devoted to it. The promise and potential are very real.
And yet, we seem to be blowing it.
The latest example is an investigation in STAT News pointing out the stumbles of IBM Watson, followed inevitably by the ‘is AI ready for prime time’ debate. Of course, IBM isn’t the only one making things hard on itself. Its marketing budget and approach make it a convenient target. Many of us – from vendors to journalists to consumers – are unintentionally adding degrees to an already uphill climb.
If our mistakes led only to financial loss, it would be no big deal. But the stakes are higher. Medical error is blamed for killing between 210,000 and 400,000 people annually. These technologies are important because they help us learn from our data – something healthcare is notoriously bad at. Finally using our data to improve really is a matter of life and death.