Category: Artificial Intelligence

THCB Spotlight: Jesse Ehrenfeld, AMA

By ZOYA KHAN

Today, we are featuring Dr. Jesse Ehrenfeld from the American Medical Association (AMA) on THCB Spotlight. Matthew Holt interviews Dr. Ehrenfeld, Chair-elect of the AMA Board of Trustees and an anesthesiologist with the Vanderbilt University School of Medicine. The AMA has recently released its Digital Health Implementation Playbook, a guide to adopting digital health solutions. It has also launched a new online platform called the Physician Innovation Network to help connect physicians with entrepreneurs and developers. Watch the interview to find out more about how the AMA is supporting health innovation, why it thinks the CVS-Aetna merger is not a good idea, and how it views the role of AI in the future of health care.

Zoya Khan is the Editor-in-Chief of THCB as well as an Associate at SMACK.health, a health-tech advisory service for early-stage startups.

Where to Apply Artificial Intelligence in Health Care

By HANS DUVEFELT, MD

I have seen the light. I now, finally, see a clear role for artificial intelligence in health care. And, no, I don’t want it to replace me. I want it to complement me.

I want AI to take over the mandated, mundane tasks of what I call Metamedicine, so I can concentrate on the healing.

In primary care visits in the U.S., doctors and clinics are buried in government mandates. We have to screen for depression and alcohol use, document weight counseling for every overweight patient (the vast majority of Americans), make sure we probe about gender at birth and current gender identification, offer screening and/or immunizations for a host of diseases, and on and on and on. All this in 15 minutes most of the time.

Never mind reconciling medications (or at least double checking the work of medical assistants without pharmacology training), connecting with the patient, taking a history, doing an examination, arriving at a diagnosis, and formulating and explaining a patient-focused treatment plan.

Continue reading…

AI Doesn’t Ask Why — But Physicians And Drug Developers Want To Know

By DAVID SHAYWITZ, MD

At long last, we seem to be on the threshold of departing the earliest phases of AI, defined by the always tedious “will AI replace doctors/drug developers/occupation X?” discussion, and are poised to enter the more considered conversation of “Where will AI be useful?” and “What are the key barriers to implementation?”

As I’ve watched this evolution in both drug discovery and medicine, I’ve come to appreciate that in addition to the many technical barriers often considered, there’s a critical conceptual barrier as well – the threat some AI-based approaches can pose to our “explanatory models” (a construct developed by physician-anthropologist Arthur Kleinman, and nicely explained by Dr. Namratha Kandula here): our need to ground so much of our thinking in models that mechanistically connect tangible observation and outcome. In contrast, AI relates often imperceptible observations to outcome in a fashion that’s unapologetically oblivious to mechanism, which challenges physicians and drug developers by explicitly severing utility from foundational scientific understanding.

Continue reading…

Looking Back at the RWJF Challenges

SPONSORED POST

By JOHN EL-MARAGHY

Catalyst @ Health 2.0 is proud to have worked with the Robert Wood Johnson Foundation to address issues in substance misuse and artificial intelligence through two exciting innovation challenges. Following the finalists’ live pitches at the Health 2.0 Annual Conference, Matthew Holt and Indu Subaiya had the pleasure of interviewing leaders from the six companies that placed in the top spots across both competitions.

First Place Winners

RWJF Opioid Challenge: the Grand Prize award went to Sober Grid, a social network designed to support, assist, and educate those suffering from addiction and substance misuse. The Sober Grid platform incorporates geolocated support features, a “burning desire” distress beacon, and coaching tools. For those looking to get help and support, the Sober Grid platform is a fantastic free utility.

Watch the pitch: https://www.youtube.com/watch?v=sBBBD9Shpz8

RWJF AI Challenge: the Grand Prize award went to Buoy, a virtual triage chatbot designed to work in any browser. All too often we rely on quick online searches for health information and sometimes receive inaccurate or unreliable results. The Buoy system takes a more conversational approach, emulating the techniques a doctor would use when diagnosing symptoms and speaking with a patient.

[ytp_video source=”FHn1WdwEBig”]

Second- and third-place prizes were also awarded to the following organizations:

Continue reading…

Will Apple Track Your Mind, Not Just Your Heart?

By MICHAEL MILLENSON

If your heart throbs with desire for the new Apple Watch, the Series 4 itself can track that pitter-pat through its much-publicized ability to provide continuous heart rate readings.

On the other hand, if you’re depressed that you didn’t buy Apple stock years ago, your iPhone’s Face ID might be able to discover your dismay and connect you to a therapist.

In its recent rollout of the Apple Watch, company chief operating officer Jeff Williams enthused that the device could become “an intelligent guardian for your health.” Apple watching over your health, however, might involve much more than a watch.

The iPhone models introduced at the same time as the Series 4 all deploy facial analysis software. The feature works in part by projecting a grid of more than 30,000 infrared dots on the user’s face in order to create a three-dimensional map for user recognition.

Continue reading…

Will Computers Really Replace Radiologists?

By SAURABH JHA

There is hope, hype and hysteria about artificial intelligence (AI). How will AI change how radiology is practiced?  I discuss this with Stephen Borstelmann, a radiologist in Florida and a scholar in machine learning.

Listen to our discussion on the Radiology Firing Line Series, hosted by the Journal of the American College of Radiology and sponsored by Healthcare Administrative Partners.

About the author:

Saurabh Jha is a radiologist and contributing editor to THCB. He hosts the Radiology Firing Line Podcasts.

AI to the Rescue: 5 Semi-Finalists Advancing Through the RWJF AI and the Healthcare Consumer Challenge!

Decision making is a daunting task. Combined with navigating health insurance jargon, scattered health information, and feeling crummy as you rush to find care during the onset of a cold, making decisions can be an absolute nightmare. However, artificial intelligence (AI) enabled tools have the potential to change the way we interact with and consume healthcare for the better. AI’s ability to comprehend, learn, optimize and act is key to organizing the varying nuances of the healthcare experience.

In a 2018 survey by Accenture, healthcare consumers indicated they would be likely to use AI for after-hours care, support in navigating healthcare services, lifestyle advice, post-diagnosis management, and more. While AI in health is not limited to these functions, the report highlights consumers’ trouble in making informed healthcare decisions, suggesting this may be an area where AI can truly help.

Continue reading…

Mudit Garg, Qventus on the $30m raise

Another day, another $30m round in health tech. On Monday Qventus raised that amount from Bessemer Partners, with Mayfield, Norwest and NY Presbyterian kicking in too. That brings their total raised to $43m so far, which is not bad for a 75-person company operating in the somewhat obscure space of using AI to improve hospital operations. Qventus sucks in data and delivers operational suggestions to front-line managers. Of course, given that somewhere between $1 trillion and $1.5 trillion goes through America’s hospitals each year, there’s huge potential for saving money. And given that most hospitals are paid a fixed cost per case, anything that can be done to improve throughput and increase productivity drops to the bottom line and is thus likely to find interested buyers. I talked to CEO Mudit Garg about the problem, his company’s solution and what they were going to do next.

Artificial Intelligence & How Doctors Think: An Interview with Thomas Jefferson’s Stephen Klasko

As I walk into the building, the sheer grandeur of the room is something to behold; it’s as if I’m walking into Grand Central Station. There’s a small army of people, all busy at their desks, working to carry out the next wave of innovations serving more than a million lives within the Greater Philadelphia region. However, I’m not here to catch a train or enjoy the sights. I’m at the office of the President and CEO of Thomas Jefferson University, Dr. Stephen Klasko, currently at the helm of one of the largest healthcare systems in the U.S.

Let me back up a little.

Nearly every conversation about the future of technology now revolves around Artificial Intelligence (AI). Much weight is placed on AI’s potential to disrupt industries and change them to their very core. This pressure has been felt in nearly every aspect of healthcare, where AI is projected to improve patient care delivery while saving billions of dollars.

Unfortunately, most discussions exploring the implications of AI look only superficially at either the product or the algorithm that powers it. That short-sightedness is not easy to fix. Yes, clinical studies validating AI-backed products are vital, but AI cannot be viewed like just any other drug or medical device. There is much more to consider when we examine the broader role of this technology, because it can shape the entire healthcare system. To gauge the impact of a far-reaching technology, you need an even farther-reaching vision. It is a rare breed of person who has lived through the tumultuous history of change within medicine and can still call upon the lessons learned to execute innovations and deliver meaningful results.

Continue reading…

Hey Watson, Can I Sue You?

Currently, three South Korean medical institutions – Gachon University Gil Medical Center, Pusan National University Hospital and Konyang University Hospital – have implemented IBM’s Watson for Oncology artificial intelligence (AI) system. As IBM touts Watson for Oncology’s ability to “[i]dentify, evaluate and compare treatment options” by understanding the longitudinal medical record and applying its training to each unique patient, questions regarding the status and liability of these AI machines have arisen.

Given its ability to interpret data and present treatment options (along with relevant justifications), AI represents an interim step between a diagnostic tool and a colleague in medical settings. Using philosophical and legal concepts, this article explores whether AI’s ability to adapt and learn means that it has the capacity to reason, and whether this means that AI should be considered a legal person.

Through this exploration, the authors conclude that medical AI such as Watson for Oncology should be given a unique legal status akin to personhood to reflect its current and potential role in the medical decision-making process. They analogize the role of IBM’s AI to that of medical residents and argue that liability for wrongful diagnoses should generally be assessed under medical malpractice principles rather than through products liability or vicarious liability. Finally, they differentiate medical AI from AI used in other products, such as self-driving cars.

Continue reading…
