Healthcare and the Second Machine Age: An Interview with Andy McAfee



Andy McAfee is the associate director of the Center for Digital Business at MIT’s Sloan School of Management. He is also coauthor (with his MIT colleague Erik Brynjolfsson) of the 2014 book The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, one of my favorite books on technology. While he sits squarely in the camp of “technology optimists,” he is thoughtful, appreciates the downsides of IT, and isn’t overawed by the hype. In the continuing series of interviews I conducted for my forthcoming book on health IT, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, I spoke to McAfee on August 13, 2014 in a restaurant in Cambridge, Massachusetts. I began by asking about some of the general lessons from today’s world of technology and business that have implications for healthcare.

McAfee: Our devices are going to continue to amaze us. My iPhone – it’s a supercomputer by the standards of 20 or 30 years ago. Right now, hundreds of millions of people carry a device that is about this powerful. Wait a little while. That number will become billions. And those devices will spit out ridiculous amounts of data of all forms, so this big data world that we’re already in – that’s going to accelerate.

Since data is the lifeblood of science, we’re going to get a lot smarter about some pretty fundamental things, whether it’s genomics or self-diagnosis or how errors happen. Then, because we’re putting all this power into the hands of so many people all around the world, it seems certain that the scale, pace, and scope of innovation are going to increase.

So I’m truly optimistic for the medium- to long-term. But the short-term is going to be a really interesting, really rocky time.

RW: When you say medium- to long-term, how many years before we get to this wonderful place?

AM: Don’t hold me to it. But within a decade.

RW: We always like to think we’re special in medicine. We’re so different. It’s so complicated. Do you see any fundamental differences between healthcare and other industries that will shape our technology path?

AM: There are two main things that might retard progress in medicine. The first is healthcare’s payment system, particularly how messed up it is trying to match who benefits versus who pays. The other thing is the culture of medicine. I understand that it’s changing, but there’s still this idea that “how dare you second-guess me, I’m the doctor.”

RW: But we can’t be alone in that. I’m sure many industries have their stars – supported by their guilds – who think, “We’re at the top of the heap, with high income and stature. We’re going to fight this technology thing, since it could erode our franchise.”

AM: Sure, but in the rest of the world eroding the franchise is what it’s all about. It’s Schumpeterian creative destruction [the theory advanced by Austrian economist Joseph Schumpeter – it is, in essence, economic Darwinism, and forms the core of today’s popular notion of “disruptive innovation”], so if you’re behind the times and I’m not, I’m going to come along and displace you and the market will speak to that.

I asked McAfee about some of the negative consequences of technology I explore in my book, particularly the issues of human “deskilling” and the changes in relationships – for example, the demise of radiology rounds because we don’t have to go to the radiology department to see our films anymore.

AM: Technology always changes social relationships and it often leads to the erosion of some skills. The example I always use is that I can’t use a slide rule. I was never trained to do that. Whereas engineers at MIT a generation before me were really, really good with their slide rules.

RW: Are there other industries in which people are now smart enough to say, “This is likely to be the impact of this new technology on social relationships, and here is how we should mitigate the harm”? Or do they just implement, see what happens, and then ask, “What have we lost and how do we deal with that?”

AM: Much more the latter. I haven’t seen a good playbook for “here’s what is going to happen when you put in this technology, and therefore do these three things in advance.” It’s much more that you have some thoughtful people saying, “Wait a minute. We used to do X and we kind of liked that and now we do less of X, so it’s turned into Y. We need to put some Z in place.”

RW: Does Z tend to be some high-tech relationship connector?

AM: In some cases, yeah. But there’s the story about the call center that was unhappy about some aspects of its social relationships. They just moved the break room and the break times so that people literally would just come and hang out a lot more. That made people a lot happier and it made the outcomes better. Sometimes the fix has a tech component, and sometimes it doesn’t.

As in many of my interviews, we turned to the question of whether computers would ultimately replace humans in medicine. I described a few situations in which physicians use “the eyeball test” – their intuition, drawn from subtle cues that are not (currently) captured in the data – to make a clinical judgment.

AM: The great [human] diagnosticians are amazing. But we still pat ourselves on the back about them far too much, and we ignore or downplay or think we are exceptions to the really well-identified problems of this particular computer [McAfee points to his brain]. The biases, the inconsistencies, the fact that if I’m going through a divorce or have a hangover or have a sick kid, my wiring is all messed up.

Have you ever met anyone who thought they had below-average intuition, or were a below-average judge of people, or were below average at recognizing sick patients? You’ll never meet that person. We have a serious problem with overconfidence in our own computers.

While severing the human link would be a deeply bad idea, much of what we currently think of as this uniquely human thing is in fact a data problem. The technology field called machine learning – and a special branch of it called deep learning – is just blowing the doors off the competition. We’re getting weirdly good at it very, very quickly.

In addition, my geekiest colleagues would say, “Okay. You think you’ve started data collection for this situation? You haven’t even begun. Why don’t we put a high def camera on the patient? For every encounter, we can assess skin tone. We can code for their body language. Let’s put a microphone in there. We’ll code for their speech tones.”

And then we’ll see which patterns are associated with schizophrenia, diabetes, Alzheimer’s. We’ll do pattern-matching on a scale that humans can never, never equal. In other words, our IT systems don’t care if the guy went to the intensive care unit two hours later or was diagnosed with Parkinson’s 20 years later. Just give us the data.
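The pattern-matching McAfee describes is, at bottom, supervised learning: pair features recorded during an encounter with an outcome observed later, then classify new encounters by their resemblance to old ones. Here is a toy sketch of that idea using a nearest-neighbor vote; the feature names, scores, and labels are entirely hypothetical, and a real system would use far richer data and models.

```python
# Toy sketch of outcome-labeled pattern matching (illustrative only):
# classify a new encounter by its k nearest recorded encounters.
import math

# Hypothetical training data: (speech_pitch_variance, skin_pallor_score)
# paired with whether the patient later deteriorated.
encounters = [
    ((0.9, 0.8), "deteriorated"),
    ((0.8, 0.9), "deteriorated"),
    ((0.2, 0.1), "stable"),
    ((0.1, 0.3), "stable"),
]

def predict(features, k=3):
    """Vote among the k encounters closest to the new one."""
    dists = sorted(
        (math.dist(features, feats), label) for feats, label in encounters
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

print(predict((0.85, 0.7)))  # resembles the "deteriorated" encounters
```

The point of the sketch is the shape of the problem, not the algorithm: the label can arrive hours or decades after the features, which is exactly why McAfee says the IT system “doesn’t care” when the outcome occurred.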

RW: How much of healthcare will be in the hands of patients and their technology? How much are they going to be monitoring themselves, independent of doctors or hospitals or other traditional healthcare organizations?

AM: It’s hard to imagine how that won’t come to pass. They’ll monitor the hell out of themselves. They’re going to have peer communities that they probably rely on a lot and they’re going to have algorithms guiding their treatment or their path.

I turned to the question of diagnosis, and particularly the issue of probabilistic thinking. The context was the 40-year history of predictions that computers would ultimately replace the diagnostic work of clinicians, predictions that, by and large, did not pan out.

RW: In medicine, there’s no unambiguously correct answer a lot of the time. It’s a probabilistic notion. I call something “lung cancer” or “pneumonia” when the probability is above a certain threshold, and I say I’ve “ruled out” a diagnosis when the probability is below a certain threshold. Setting these thresholds depends on the context, the patient’s risk factors, and the patient’s preferences. I also need to know how accurate the tests are, how expensive they are, and how risky they are. And often the best test is time – you decide to reassure the patient, not do anything, and then see how things go.

AM: Yeah. That complicates the work of the engineers. Not immeasurably, but it does make it a lot more complicated. But I imagine that there are a bunch of really smart geeks at IBM’s Watson eagerly taking notes as guys like you describe these kinds of situations. In their head they’re thinking, “How do I model all of that?”
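The threshold logic RW describes is, in essence, Bayesian updating: start from a pretest probability, revise it with a test result, and compare the posttest probability to rule-in and rule-out cutoffs. A minimal sketch, with purely illustrative numbers (the sensitivities, specificities, and thresholds are assumptions, not clinical parameters):

```python
# Hedged sketch of threshold-based diagnosis via Bayes' rule.
# All numbers are illustrative, not real clinical values.

def posttest_probability(pretest, sensitivity, specificity, positive=True):
    """Posterior P(disease) after one test result."""
    if positive:
        true_pos = sensitivity * pretest
        false_pos = (1 - specificity) * (1 - pretest)
        return true_pos / (true_pos + false_pos)
    false_neg = (1 - sensitivity) * pretest
    true_neg = specificity * (1 - pretest)
    return false_neg / (false_neg + true_neg)

RULE_IN, RULE_OUT = 0.85, 0.05  # context-dependent thresholds

p = posttest_probability(pretest=0.30, sensitivity=0.90,
                         specificity=0.95, positive=True)
print(round(p, 3))  # about 0.885 -- above RULE_IN, so "call" the diagnosis
```

What makes the engineering hard, as RW notes, is not the arithmetic but that the pretest probability, the thresholds, and even the decision to test at all shift with context and patient preferences.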



5 replies

  1. platon20:

    I don’t think McAfee ever claimed that technology will completely replace the physician.

    But he is right to point out that what we value as necessary skills/abilities in our doctors, abilities that we believe are uniquely human, will be easily supplanted by computer technology. Memorization and playing giant elimination games (essentially what differential diagnosis boils down to) are things that computers will be able to do much better and much faster.

    So the interesting question is, if a physician can rely on computer technology to do a lot of that diagnostic work and data crunching, how will the physician’s job change? I think for the better:

    For one, physician training could be less grueling and could focus more on the conceptual, as opposed to the rote memorization and years of experience necessary to become a good diagnostician. This would allow us to train more high-quality physicians in less time.

    Also, physicians will be able to focus more of their time on medical research and innovation. The types of machine learning systems that McAfee describes would be great at identifying patterns, but human minds will still be needed to process those patterns and place them in a framework that allows us to better understand human bodies and diseases. In fact, such systems will be revolutionary for medical research.
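The commenter’s framing of differential diagnosis as a “giant elimination game” can be sketched as set filtering: keep only the diagnoses consistent with every observed finding. The disease-finding table below is a toy illustration, not a clinical knowledge base.

```python
# Toy sketch of differential diagnosis as elimination
# (hypothetical disease-finding associations, illustrative only).
DIFFERENTIAL = {
    "pneumonia": {"fever", "cough", "infiltrate"},
    "heart failure": {"dyspnea", "edema", "orthopnea"},
    "pulmonary embolism": {"dyspnea", "chest pain", "tachycardia"},
}

def narrow(findings):
    """Keep only diagnoses consistent with every observed finding."""
    return [dx for dx, feats in DIFFERENTIAL.items()
            if findings <= feats]

print(narrow({"fever", "cough"}))  # ['pneumonia']
```

Real differential diagnosis weighs probabilities rather than eliminating outright, but the sketch shows why brute enumeration is the kind of work computers do easily.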

  2. Slow down a minute.

    The tech guy didn’t say that computer + human > human. He said that computer > human.

    He claims that a computer by itself without a human “helper” will drive human doctors out of business.

    I think he’s wrong. HAL 9000 is a lot smarter than I am. But if he thinks that human patients pick their doctors based on how smart they are, he is sorely mistaken. Anybody who has actually worked in healthcare knows this.

    Computers are a great adjunct to human doctors. But they won’t replace me anytime soon.

  3. “I’ll bet you any amount of money that I have a longer waiting list of patients than the ‘tech clinic’ next door has after 1 month, 3 months, 6 months, 5 years, 25 years, etc.”

    “What these tech guys don’t get is that patients don’t come to us for just data, they come to us for CONTEXT. Computers can’t provide context.”

    Platon, you’re mostly right, but definitely for the wrong reasons.

    In your wager scenario, within 5 years, if you aren’t sitting next door armed with formidable technical support for your wetware wizardry, you may not be beaten by the “no humans” shop, but you’ll be CREAMED by every clinician thusly armed. And you know that, and sure, it’s scary. But you know that.

    Information technologies can and will provide context you literally can’t even conceive of.

  4. I always laugh at these “technology is god” articles. While data and technology can HELP, they are certainly not human replacements.

    To anyone who doubts this, I propose a challenge.

    You build a state-of-the-art clinic next to mine with any computer technology you like. You can put 50 IBM Dr. Watsons in there if you want. But you don’t get to have any humans.

    I’ll sit by myself and treat patients in a clinic next door with no computers.

    I’ll bet you any amount of money that I have a longer waiting list of patients than the “tech clinic” next door has after 1 month, 3 months, 6 months, 5 years, 25 years, etc.

    What these tech guys don’t get is that patients don’t come to us for just data, they come to us for CONTEXT. Computers can’t provide context.

    So while the computer might be more “accurate” than I am with diagnoses, human patients will prefer me to the computer lab because I provide context that the computer can’t.

  5. Fascinating discussion with an expert from my alma mater. The challenge has been how to capture enough of the data for the models to be developed and validated. As more data goes digital rather than onto paper, it’s easy to envision this threshold being hit within a decade, with highly disruptive results in health care.