Everyone seems to be amazed by artificial intelligence (AI) and machine learning in healthcare, but Enrico Coiera, Professor of Medical Informatics at Macquarie University, is not impressed — yet. Instead of designing algorithms, he advocates for designing “human-machine systems” that work with the best parts of the health system: the people. An interesting anecdote about how AI can go wrong? Diagnoses of thyroid cancer in South Korea have increased 15-fold, but not because of a higher prevalence of the disease…it’s because more sensitive AI diagnostics are over-diagnosing people, subjecting many to chemo and other treatments they don’t need. So, what should technologists do to ensure that tech doesn’t fail patient outcomes? Enrico gives his best advice for a healthcare industry that’s “in love with technology and can’t often see the simple solution for the sexy tech one.”
Filmed in the HISA Studio at HIC 2019 in Melbourne, Australia, August 2019.
Jessica DaMassa is the host of the WTF Health show & stars in Health in 2 Point 00 with Matthew Holt. Get a glimpse of the future of healthcare by meeting the people who are going to change it. Find more WTF Health interviews here or check out www.wtf.health.
On Episode 3 of HardCore Health, Jess & I start off by discussing all of the health tech companies IPOing (Livongo, Phreesia, Health Catalyst) and talk about what that means for the industry as a whole. Zoya Khan discusses the newest series on THCB, “The Health Data Goldilocks Dilemma: Sharing? Privacy? Both?”, which follows and discusses the data privacy and protection legislation before Congress today. We also have a great interview with Paul Johnson, CEO of Lemonaid Health, an up-and-coming telehealth platform that works as a one-stop shop for a virtual doctor’s office, a virtual pharmacy, and lab testing for patients accessing their platform. In her WTF Health segment, Jess speaks to Jen Horonjeff, Founder & CEO of Savvy Cooperative, the first patient-owned public benefit co-op that provides an online marketplace for patient insights. And last but not least, Dr. Saurabh Jha directly addresses AI vendors in health care, stating that their predictive tools are useless and will not replace doctors just yet - Matthew Holt
Matthew Holt is the founder and publisher of The Health Care Blog and still writes regularly for the site.
Leave your bias aside and take a look into the healthcare future with me. No, artificial intelligence, augmented intelligence, and machine learning will not replace the radiologist. They will allow clinicians to.
The year is 2035 (plus or minus 5 years), and the world is waking up after a few years of economic hardship, perhaps even some dreaded stagflation. That hardship is an important accelerant for where we are headed, because it will destroy most of the radiology AI startups that have thrived on the quantitative easing policies and excessive liquidity of the last decade, which created a bubble in this space. When the bubble pops, few small to midsize AI companies will survive, but those that remain will consolidate and reap the rewards. The survivors will almost certainly be big tech companies, which can purchase assets and algorithms across a wide breadth of radiology and integrate and standardize them better than anyone. When the burst happens, some of the best algorithms for pulmonary embolism, stroke, knee MRI, intracranial hemorrhage, etc. will become available to consolidate, on the “cheap”.
Hospitals can now purchase AI equipment that is highly effective in both cost and function, and it’s only getting better for them. It doesn’t make sense to do so now, but soon it will. Consolidation in healthcare has given groups and hospitals greater purchasing power. The “roads and bridges” needed to connect such systems are being built, and deals will soon be struck with GE, Google, IBM, and other hundred-billion-dollar powerhouses that will provide AI cloud-based services. RadPartners is already starting to provide natural language processing and imaging data to partners; that’s right, you speak into the Dictaphone, your words are recorded, synced with the image you dictated, and processed along with everyone else’s to find all the commonalities in descriptors, to eventually replace you. It is as if the transcriptionist’s ghost of the past has come back to haunt us, and no one cried for them. Prices will be competitive, and adoption will be fast, much faster than most believe.
Now patients arrive for imaging as outpatients, ER visits, or inpatients; the setting does not matter, the premise is the same. Ms. Jones has chest pain, an elevated d-dimer, and a history of lupus anticoagulant and left femoral DVT. Her chart has likely already been analyzed by a cloud-based AI (merlonintelligence.com/intelligent-screening/), the probability of her having a PE is high, this is relayed to the clinician (PA, NP, MD, DO), and the study is ordered. She’s sent for a CT angiogram PE-protocol imaging study. This is important to understand, because there will be no role for the radiologist at this level. The recommendation for imaging will come from a machine learning algorithm based on more data and papers than any one radiologist could ever read; and it will be instantaneous and fluid. Correct studies will be recommended, and “incorrectly” ordered studies will need justification, all without radiologist validation.
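To make the scenario concrete, here is a minimal toy sketch of the kind of triage step described above: chart findings are mapped to a pulmonary-embolism probability, and a study is recommended when the probability crosses a threshold. The feature names, weights, and threshold are all invented for illustration; a real system would be a validated model trained on clinical data, not a hand-weighted score.

```python
import math

# Hypothetical feature weights (log-odds contributions) - illustrative only,
# not a validated clinical model such as a Wells score.
WEIGHTS = {
    "elevated_d_dimer": 1.8,
    "history_dvt": 1.5,
    "lupus_anticoagulant": 1.2,
    "chest_pain": 0.9,
}
BIAS = -3.0  # baseline log-odds for a patient with no listed risk factors


def pe_probability(chart: dict) -> float:
    """Logistic model mapping chart findings to a PE probability."""
    logit = BIAS + sum(w for k, w in WEIGHTS.items() if chart.get(k))
    return 1 / (1 + math.exp(-logit))


def recommend_study(chart: dict, threshold: float = 0.5) -> str:
    """Relay a recommendation to the ordering clinician."""
    if pe_probability(chart) >= threshold:
        return "CT angiogram, PE protocol"
    return "no imaging indicated"


# Ms. Jones: chest pain, elevated d-dimer, lupus anticoagulant, prior DVT
ms_jones = {
    "chest_pain": True,
    "elevated_d_dimer": True,
    "history_dvt": True,
    "lupus_anticoagulant": True,
}
print(recommend_study(ms_jones))  # high probability -> CT angiogram recommended
```

The point of the sketch is the workflow, not the arithmetic: the model sits between the chart and the order, and the clinician sees only the recommendation.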
The year is 2019 and Imaging By Machines have fulfilled their prophecy and control all Radiology Departments, making their organic predecessors obsolete.
One such lost soul tries to decide how he might reprovision the diagnostic equipment he has set up on his narrow boat on the Manchester Ship Canal, musing on the extent of the digital takeover during his supper (cod, of course).
What I seek to do in this short paper is not to revisit the well-trodden road of what artificial intelligence, deep learning, machine learning, or natural language processing might be, or the data science that underpins them, nor to limit myself to the specific products or algorithms currently available or pending. Instead, I look to share my views on what and where in the patient journey I perceive there may be uses for “AI” in the pathway.
I’ve been talking in recent posts about how our typical methods of testing AI systems are inadequate and potentially unsafe. In particular, I’ve complained that all of the headline-grabbing papers so far only do controlled experiments, so we don’t know how the AI systems will perform on real patients.
Today I am going to highlight a piece of work that has not received much attention, but actually went “all the way” and tested an AI system in clinical practice, assessing clinical outcomes. They did an actual clinical trial!
Big news … so why haven’t you heard about it?
The Great Wall of the West
Tragically, this paper has been mostly ignored: 89 tweets*, which is pretty sad compared to the hundreds or thousands of tweets and news articles many other papers receive. There is an obvious reason why, though: the article I will be talking about today comes from China (there are a few US co-authors too; I am not sure what their relative contributions were, but the study was performed in China).
China is interesting. They appear to be rapidly becoming the world leader in applied AI, including in medicine, but we rarely hear anything about what is happening there in the media. When I go to conferences and talk to people working in China, they always tell me about numerous companies applying mature AI products to patients, but in the media we mostly see headline grabbing news stories about Western research projects that are still years away from clinical practice.
This shouldn’t be unexpected. Western journalists have very little access to China**, and Chinese medical AI companies have no need to solicit Western media coverage. They already have access to a large market, expertise, data, funding, and strong support both from medical governance and from the government more broadly. They don’t need us. But for us in the West, this means that our view of medical AI is narrow, like a frog looking at the sky from the bottom of a well^.
With the application deadline for Bayer’s G4A Partnerships program coming up on Friday, I thought I’d throw out a little inspiration to would-be applicants by featuring an interview I did with one of last year’s program participants at the grand-finale Launch Event.
Not only was this a great party, but a microcosm of the G4A program experience itself: a way to meet Bayer execs en-masse, an opportunity to sell directly to key decision-makers across Bayer’s various global business units, and a chance to feed off the energy of like-minded innovators eager to see ‘big health care’ change for the better.
While the G4A program itself has changed a bit this year to be more streamlined and to allow for bespoke deal-making that may or may not involve giving up equity (my favorite new feature), startups questioning whether or not they have what it takes should take a look at some alums.
There’s a playlist with nearly two dozen interviews waiting for you here if you’re REALLY up for some procrastinating, or you can click through and just check out my chat with Joe Curcio, CEO of KinAptic, a healthtech startup taking wearables to the bleeding edge. Joe shows us a mock-up of the KinAptic ‘smart shirt’, which features their real innovation: printed-ink electronics that look and feel like screen-printing ink, but work bi-directionally to both collect data from the body AND apply signals back to it. Is it AI-enabled? Did you have to ask? Listen in for a mindblowing chat about how this tech can change diagnostic analysis and treatment and completely redefine our current limitations when it comes to healthcare wearables. Once you’re inspired, don’t forget to head over to www.g4a.health and fill out your own application for this year’s partnership program.
Jessica DaMassa is the host of the WTF Health show & stars in Health in 2 Point 00 with Matthew Holt
Two years ago we wouldn’t have believed it — the U.S. Congress is considering broad privacy and data protection legislation in 2019. There is some bipartisan support and a strong possibility that legislation will be passed. Two recent articles in The Washington Post and AP News will help you get up to speed.
Federal privacy legislation would have a huge impact on all healthcare stakeholders, including patients. Here’s an overview of the ground we’ll cover in this post:
Six Key Issues for Healthcare
We are aware of at least 5 proposed Congressional bills and 16 Privacy Frameworks/Principles. These are listed in the Appendix below; please feel free to update these lists in your comments. In this post we’ll focus on providing background and describing issues. In a future post we will compare and contrast specific legislative proposals.
Today, we are featuring Dr. Jesse Ehrenfeld from the American Medical Association (AMA) on THCB Spotlight. Matthew Holt interviews Dr. Ehrenfeld, Chair-elect of the AMA Board of Trustees and an anesthesiologist with the Vanderbilt University School of Medicine. The AMA has recently released their Digital Health Implementation Playbook, which is a guide to adopting digital health solutions. They also launched a new online platform called the Physician Innovation Network to help connect physicians with entrepreneurs and developers. Watch the interview to find out more about how the AMA is supporting health innovation, as well as why the AMA thinks the CVS-Aetna merger is not a good idea and how the AMA views the role of AI in the future of health care.
Zoya Khan is the Editor-in-Chief of THCB as well as an Associate at SMACK.health, a health-tech advisory service for early-stage startups.
I have seen the light. I now, finally, see a clear role for artificial intelligence in health care. And, no, I don’t want it to replace me. I want it to complement me.
I want AI to take over the mandated, mundane tasks of what I call Metamedicine, so I can concentrate on the healing.
In primary care visits in the U.S., doctors and clinics are buried in government mandates. We have to screen for depression and alcohol use, document weight counseling for every overweight patient (the vast majority of Americans), make sure we probe about gender at birth and current gender identification, offer screening and/or immunizations for a host of diseases, and on and on and on. All this in 15 minutes most of the time.
Never mind reconciling medications (or at least double checking the work of medical assistants without pharmacology training), connecting with the patient, taking a history, doing an examination, arriving at a diagnosis, and formulating and explaining a patient-focused treatment plan.
At long last, we seem to be on the threshold of departing the earliest phases of AI, defined by the always tedious “will AI replace doctors/drug developers/occupation X?” discussion, and are poised to enter the more considered conversation of “Where will AI be useful?” and “What are the key barriers to implementation?”
As I’ve watched this evolution in both drug discovery and medicine, I’ve come to appreciate that in addition to the many technical barriers often considered, there’s a critical conceptual barrier as well – the threat some AI-based approaches can pose to our “explanatory models” (a construct developed by physician-anthropologist Arthur Kleinman, and nicely explained by Dr. Namratha Kandula here): our need to ground so much of our thinking in models that mechanistically connect tangible observation and outcome. In contrast, AI often relates imperceptible observations to outcome in a fashion that’s unapologetically oblivious to mechanism, which challenges physicians and drug developers by explicitly severing utility from foundational scientific understanding.