Last week we all watched in awe as the IBM computer, Watson, trounced two of Jeopardy’s finest. The event has been much heralded, but it is worth pausing for a minute to reflect on the experience of watching Jeopardy those three nights. I had no trouble rooting for Watson, feeling disappointed or embarrassed when he missed a question, and chuckling when he displayed any behavior that seemed the least bit human. On one level, I knew the whole time that Watson is a computer. On another level, though, I bonded with him and felt a good deal of emotion about his success.
MIT Prof. Sherry Turkle recently released a book entitled Alone Together, and she was also recently interviewed on TechCrunch. Turkle puts forth the view that technology is a poor substitute for interaction with a human being. She notes, however, that when technologies (robots, relational agents and the like) respond to us, they push “Darwinian buttons,” prompting us to create a mental construct that we are interacting with a sentient being. This brings a host of emotions to the communication, including affection. Turkle argues that, in the realm of human relationships, this phenomenon is unhealthy for our species.
I’d like to bring in principles from behavioral psychologist Robert Cialdini, who has authored several books on the psychology of persuasion. Cialdini offers simple tools that can be used in everyday life to persuade others to adopt one’s point of view, and he lays out solid experimental evidence that these tools are effective, in most cases without the recipient being aware.
A terrific article in The New York Times Magazine this summer described the decade-long effort on the part of IBM artificial intelligence researchers to build a computer that can beat humans in the game of “Jeopardy!” Since I’m not a computer scientist, their pursuit struck me at first as, well, trivial. But as I read the story, I came to understand that the advance may herald the birth of truly usable artificial intelligence for clinical decision-making.
And that is a big deal.
I’ve lamented, including in an article in this month’s Health Affairs, the curious omission of diagnostic errors from the patient safety radar screen. Part of the problem is that diagnostic errors are awfully hard to fix. The best we’ve been able to do is improve information flow to try to prevent handoff errors, and teach ourselves to perform meta-cognition: that is, to think about our own thinking, so that we are aware of common pitfalls and catch them before we pull our diagnostic trigger.
These solutions are fine, but they go only so far. In the age of Google, you’d think we’d be on the cusp of developing a computer that is a better diagnostician than the average doctor. Unfortunately, computer scientists have thought we were close to this same breakthrough for the past 40 years, and both they and practicing clinicians have always come away disappointed. Before getting to the Jeopardy-playing computer, I’ll start by recounting the generally sad history of artificial intelligence (AI) in medicine, some of it drawn from our chapter on diagnostic errors in Internal Bleeding: