Arthur C. Clarke and Stanley Kubrick predicted supercomputers more intelligent than humans. In 2001: A Space Odyssey, HAL states, with typical human immodesty, “The 9000 series is the most reliable computer ever made… We are all, by any practical definition of the words, foolproof and incapable of error.” Forty years later, IBM’s Watson pummeled humans at Jeopardy! – a distinctly human game.
Watson is now a big-shot oncology fellow at MD Anderson – already impressing the nurses and the attendings. The supercomputer presented patients on morning rounds, parsed data within seconds, and made few mistakes. The real oncology fellow – the human, I mean – flabbergasted by the efficiency of his binary colleague, told the Washington Post, “Even if you work all night, it would be impossible to be able to put this much information together like that.” Watson doesn’t have to worry about duty hour restrictions.
IBM CEO Ginni Rometty claims that Watson 2.0 will interpret medical imaging like a radiologist. In its third iteration, the supercomputer will “debate and reason.” Why hire radiologists who sap productivity with lunch breaks and sleep? Watson will never complain about the dearth of vegan food in the cafeteria, never get tired, and – best of all – never whine about Medicare reimbursement cuts.
But forgive me for snoring at night without fear of the Robo-Radiologist. The reasons are simple.
There are tasks a toddler can do that have no easy computational solution, like recognizing moms, dads and aunts. “Aunt Minnies” are diagnoses that can be identified instantly, the same way you might recognize the face of your aunt in a crowd. These are the easiest diagnoses for a radiologist but so difficult for a computer that computer scientists have invented heuristics – shortcuts that trade accuracy for speed. Heuristics are “good enough” algorithms, but may not be good enough for the high stakes of medicine.
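To make the accuracy-for-speed trade concrete, here is a minimal sketch with toy data and hypothetical names (nothing here is Watson’s actual method): an exhaustive nearest-neighbor search that is always correct, next to a grid-bucketing shortcut that inspects only a handful of candidates and can therefore miss the true match.

```python
import random

random.seed(0)
# 1,000 toy "cases" as 2-D feature points in the unit square
points = [(random.random(), random.random()) for _ in range(1000)]

def exact_nn(q):
    """Exhaustive search: always finds the true nearest point, checks all 1,000."""
    return min(points, key=lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

# Heuristic: pre-bucket points on a coarse 10x10 grid,
# then search only the query's own cell.
grid = {}
for p in points:
    grid.setdefault((int(p[0] * 10), int(p[1] * 10)), []).append(p)

def heuristic_nn(q):
    """Fast search: checks ~10 points, but a neighbor just across a
    cell boundary is invisible, so the answer may be wrong or missing."""
    cell = grid.get((int(q[0] * 10), int(q[1] * 10)), [])
    if not cell:
        return None
    return min(cell, key=lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
```

The heuristic inspects roughly 10 candidates instead of 1,000 – exactly the kind of “good enough” shortcut the essay describes, and exactly why it gets uncomfortable when the stakes are a diagnosis rather than a tagged photo.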
Facebook’s facial recognition might pick your face from a group picture most of the time, but it also makes laughable mistakes. Before Watson replaces radiologists, it must meet a higher bar than Facebook. “Might” is not good enough.
Suppose that Watson can spot Aunt Minnies. It must still understand what it is being asked, which it does using natural language processing. But medical lingo is anything but natural. A helpful radiology request might read, “75 yo M w/ MM, AAA s/p TEVAR c/b EL on 2/2013 p/w CP r/t back.” A less helpful one: “Unspecified.” Medical lingo is riddled with typographical errors, missing punctuation and ambiguous acronyms.
MM – is that Multiple myeloma? Mediastinal mass? Malignant mesothelioma? Metastatic melanoma? Or Mr. Mean?
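The ambiguity is easy to demonstrate in a few lines. This is a toy sketch – the expansion table is illustrative, not a real clinical vocabulary: a lookup can enumerate candidate expansions, but nothing in the token itself says which one is meant.

```python
# Toy expansion table – illustrative only, not a real clinical vocabulary.
ABBREVIATIONS = {
    "MM": ["multiple myeloma", "mediastinal mass",
           "malignant mesothelioma", "metastatic melanoma"],
    "CP": ["chest pain", "cerebral palsy"],
    "AAA": ["abdominal aortic aneurysm"],
}

def expand(token):
    """Return every known expansion of a shorthand token.

    The lookup can only enumerate candidates; choosing among them
    requires clinical context the token itself does not carry.
    """
    key = token.upper().strip(",.")
    return ABBREVIATIONS.get(key, [token])

print(expand("MM"))   # four candidate diagnoses – ambiguous without context
print(expand("AAA"))  # a single candidate – unambiguous here
```

Resolving “MM” is not a dictionary problem; it is a context problem, which is precisely where natural language processing struggles.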
During the 2011 Jeopardy! competition, a clue asked for the “anatomic oddity of US gymnast George Eyser.” Watson answered, “What is leg?” The correct response: “What is missing a leg?” Watson misinterpreted the clue because it understood “anatomic” but not “oddity.”
Misunderstandings in medicine do not arise from bad grammar or split infinitives; they arise from the wrong context. Radiologists do not get partial credit for “pulmonary embolism” when the right diagnosis is “no pulmonary embolism.”
Watson does not need vacations, reimbursement or oxygen. It may not need to physically exist – Watson already works from the cloud. But its only way of maximizing utility is through a radiologist, not instead of one. The computer can be a decision support tool, not a doctor.
Dr. Chen is a radiology resident at the University of Pennsylvania, a programmer and an optimistic futurist.
You make an excellent point. If I interpret it correctly, humans are better than computers at some things (like contextual processing), and vice versa. In other words, humans and artificial intelligence are inherently complementary – not competitive. Perhaps we agree on more than is initially apparent.
Watson makes contextual mistakes (the man with one leg named Bob) and stumbles on ambiguous terminology like “MM” and “biweekly,” but it excels at Bayesian diagnostics untainted by human cognitive biases. Human radiologists are excellent at contextual interpretation but must manage their all-too-human biases to avoid cognitive errors.
All else being equal, AI will have an entirely different profile of diagnostic successes and failures than a human.
I spend much of my time at work doing what Watson should be doing and not enough time providing clinically relevant advice that impacts care. The goal of Watson – as you elegantly stated – is to free a physician for value-added care, not to replace one.
In response to this article, many tell me I’m mistakenly optimistic about the continued relevance of human radiologists – that replacement is the inevitable future. In a world of “crank out as many reports as possible,” that may be true. The question: are faster reports the best radiology can do?
Frankly, the objections you raise here are a bit silly.
First, the issue highlighted in the title: nobody is asking Watson to “replace” you. (At least not yet…) I believe the short-term objective is to offload the “Aunt Minnies” you describe, allowing the human doctor to focus his/her attention on the tough things. If a machine is able to say “I’m 92% sure I’m looking at xyz,” that’s a pretty good level of confidence. Yes, the stakes in medicine are high, but every diagnosis – whether made by a machine or a human – comes with a certain level of uncertainty. Generally speaking, the tougher the diagnosis is to make, the bigger the uncertainty, right?
Second, the inability of a machine to decipher what you mean by “MM”? As a patient who recently had a standing order for blood work with the word “biweekly” at the top of it, I was told by the lab that only a week had gone by, to come back in another week. When informed of this, the endocrine doc who had written the order told me, “No, my standing order was for a draw twice a week.” So yes, once again, communication issues are a serious ongoing problem, regardless of whether we are talking about humans or machines. Frankly, if the use of terms like “MM” and “biweekly” is the source of the problem, perhaps the solution should reside at the source: maybe what’s needed is the introduction of tools to clarify what a physician really means to say, while they’re saying it?
Again, while I fully agree that typographical errors, missing punctuation and ambiguous acronyms are a HUGE problem, I don’t think the issue stems from a machine trying to make sense of something in a radiology request like the example you gave; it is a much larger problem. Yes, natural language processing is a bit of a beast, with some of the stickiest problems showing up at the border of language and common sense – for instance, when I say “I know a man with one leg named Bob,” humans tend to understand that it’s not the leg that’s named Bob, where a machine might be somewhat confused. That’s a much tougher problem to solve than getting a machine to know what you meant when you typed “MM” (or checking whether or not you really meant to type “MM” at all…).
Agree – for the obvious diagnoses, adding value means rapid interpretation, and the doc at the bedside is well suited. The radiologist’s value in those studies then lies not in the diagnosis but in excluding everything else – the left Pancoast tumor, the lytic lesion in the right 8th rib, the strangely prominent hilar lymph nodes.
Maybe on only 5% of the films will the radiologist have something to show for their work, making a marginal contribution over that of the clinician. But which 5%?
The biggest threat to radiologists is not a robot; it is a doctor. Doctors are trained to read x-rays. Just this a.m. I performed a stat portable chest x-ray on a patient with dyspnea. A quick look at the film 60 seconds after it was taken showed a huge right pneumothorax.
By the time I had inserted the chest tube, and re-inflated the lung, I received a dictated report from the radiologist, advising me there was a “50% pneumothorax on the right.” The radiologist also wisely “advised a chest tube placement.” Thanks doc, it’s already done.
By the time the patient was transported to the tertiary facility, we got another nice dictated report. This one said, “In the interval, there has been a right tube thoracostomy placed. There is full expansion of the right lung.” Thanks again, doc. I already saved the patient’s life.
My point being that radiology should, for lack of a better term, be a “consult service.” There are many times when I need the assistance and advice of a radiologist. I am not good at reading MRIs of the knee and don’t really want to take the time to learn. But I can spot a damn head bleed on a simple uncontrasted CT and send the patient to the neurosurgeon before I get the official read back. In the old days (15-20 yrs ago) the scenario above would also play out, but it would be Monday morning when I got the report for the patient who was stabbed in the chest on Friday night, and the patient would already have been discharged home. Now, with virtual services and with the ability of radiologists to read films from the comfort of their homes (all of which I think is fantastic, BTW!), we do get official “reads” much more quickly, and I’m glad.
But I think maybe the future will be a scenario like this: “Ma’am, your chest x-ray is fine. You do not have pneumonia and you do not need a Z-pak and a Rocephin shot. Do you want the radiologist to also read your chest x-ray and send you a bill for his services?” (Of course, those with Medicaid say, “Sure doc, go ahead, since it don’t cost me nuthin!”) Those with a $7,500/yr Obamacare deductible will say, “HECK NO, thanks doc! See you in a week if I’m not better!”
Can Watson deal with extravasation?
“But its only way of maximizing utility is through a radiologist, not instead of one.”
Also through a radiologist in India for a much better price.
“The Robot will see you now — assuming you can pay”
http://regionalextensioncenter.blogspot.com/2015/05/the-robot-will-see-you-now-assuming-you.html