
Tech VC Answers ‘Will Computers Replace Doctors’ – I Mean VCs


A constant frustration for many in healthcare is the cognitive dissonance between the elegant, highly anticipated promise of technology solutions and the messy, lived complexity of clinical practice.

In this context, I was fascinated by–and feel compelled to share–this unexpectedly revealing excerpt from a recent (and, as always, captivating) a16z podcast, featuring a conversation between a16z founder Marc Andreessen and board partner Balaji Srinivasan, recorded at Stanford.

Following an extensive conversation about the factors associated with startup success and VC success, as well as about emerging (or re-, re-emerging) trends such as artificial intelligence (AI), an audience member asked whether AI might not select investments better than actual VCs–a VC version of the “will computers replace doctors?” gauntlet that tech VCs have thrown down before the medical establishment.

Andreessen’s response (at around the 40-minute mark) speaks for itself–but also, I’d argue, for most in healthcare (emphasis added):

The computer scientist in me and engineer in me would like to believe this is possible, and I’d like to be able to figure this out–frankly, I’d like us to figure it out.

The thing I keep running up against–the cognitive dissonance in my head I keep struggling with, is what I keep seeing in practice (and talk about in theory vs. in practice)–like in theory, you should be able to get the signals–founder background, progress against goals, customer satisfaction, whatever, you should be able to measure all these things.

What we just find is that what we just deal with every day is not numbers, is nothing you can quantify; it’s idiosyncrasies of people, and under the pressure of a startup, idiosyncrasies of people get magnified out to like a thousand fold. People will become like the most extreme versions of themselves under the pressure they get under at a startup, and then that’s either to the good or to the bad or both.

People have their own issues, have interpersonal conflicts between people, so the day job is so much dealing with people that you’d have to have an AI bot that could, like, sit down and do founder therapy.

My guess is we’re still a ways off.

Who knew that developing data-driven tech solutions could be challenging in a profession that at its core is focused on human idiosyncrasies, especially under conditions of stress?



  1. I am currently halfway through chapter 7 of “Snowball in a Blizzard.” Fabulous. Should be required reading for clinicians and patients alike.

    https://www.sciencebasedmedicine.org/uncertainty-in-medicine/

    One hopes that “IA” (it will likely be way more “Intelligence Augmentation” than “AI”) will help reduce dx uncertainties, but I would not underestimate the difficulties.

  2. The question is not “if” but “when” for most things


  3. Point Millenson. I’ll use this one from now on when the topic comes up.

  4. I hope so–for diagnosis. But we are going to have to tame costs. Look up the differential diagnoses of FUO in Medscape. You haven’t seen costs until you see them ordered by a computer. Anyone here think of Mediterranean Fever as a cause of shoulder pain? Oh, btw, please r/o acute intermittent porphyria with appendicitis symptoms?

    But, for therapy, the patient and maybe the society are going to have to weigh all kinds of cost/benefit/ethical dilemmas. No one else can do this, surely not the computer. The society and the government will not be able to do this–they never have (look at how insurance coverage benefits were handled by the ACA)–so the individual doc-patient pair will have to do this and face ex post facto criticism and money constraints from society after the deed is done.

  5. The best comment on this prospect comes from Dr. Warner Slack, a pioneering informatician at Harvard. When computers in medicine started becoming mainstream in the 1980s, “being replaced” was a common worry. His laconic advice: “Any doctor who can be replaced by a computer should be.”

  6. Doctors are licensed professionals. The doctor’s license has a regulatory role as part of a regulatory triad that includes vendors (via FDA) and the patient herself (via independent action and informed consent).

    Computers, including AI, are just technology and technology can serve all three components of the triad. Vendors will sell AI, some of it regulated by the FDA, to doctors and to patients. IBM Watson Health has already announced collaboration with the American Cancer Society to do this. Patients and doctors will also build and own AI for themselves because AI technology already costs much less than the personal data it takes to train it.

    Computers will replace doctors and patients if they remain passive. To the extent people cede control of technology to vendors and hospitals (as physicians and patients have done with EHRs, so far) the regulatory triad becomes meaningless and the dignity of both the medical profession and the patient is lost.