
By SAURABH JHA MD
In 2014, a jury in Massachusetts awarded $16.7 million in damages to the daughter of a Bostonian lady who died from lung cancer at 47, for a missed cancer on a chest x-ray. The verdict reminds me of the words of John Bradford, the heretic who was burnt at the stake: “There, but for the grace of God, go I.” Many radiologists will sympathize with both the patient, who died prematurely, and the radiologist, who missed a 15-mm nodule on her chest x-ray when she presented to the emergency department with a cough a few months earlier.
The damages are instructive of the tension between the Affordable Care Act’s twin demands of resource stewardship and patient-centeredness, which is to say between missed diagnosis and waste. But the verdict also speaks to the ineffectualness of evidence-based medicine (EBM) in court. If EBM is a science, then it is a science that is least helpful when most needed, i.e. when trying to influence public opinion.
EBM tells us that had the patient’s cancer been detected thirteen months before it actually was, it would have made little difference to her survival, statistically speaking. Researchers from the Mayo Clinic, examining the impact of frequent chest x-rays in screening a large number of smokers for lung cancer, found that the intensively screened group knew about their cancers earlier and had more cancers removed, but did not live longer as a result. This is known as lead time bias: early detection means more time knowing that one has the cancer, not more time actually being alive. This means that had the nodule been seen on the patient’s initial chest x-ray, she probably, though not certainly, would not have survived much beyond 47.
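For the curious, lead time bias is easy to demonstrate with a toy simulation. The Python sketch below uses entirely made-up numbers (a fixed 30-month course from detectability to death, and hypothetical detection windows); it illustrates the artifact in general, not the data of this case or of the Mayo trial.

import random

# A minimal sketch of lead time bias, with made-up numbers for illustration.
# Assume every patient's cancer is biologically fated to cause death 30 months
# after it becomes detectable, regardless of when it is found.
random.seed(0)

def simulate(n=1000, death_after_onset=30):
    screened, unscreened = [], []
    for _ in range(n):
        onset = random.uniform(0, 60)                    # month the tumour becomes detectable
        death = onset + death_after_onset                # death date is fixed by biology
        detect_screen = onset + random.uniform(0, 6)     # found early, by screening
        detect_symptom = onset + random.uniform(12, 24)  # found later, when symptomatic
        screened.append(death - detect_screen)           # "survival" measured from diagnosis
        unscreened.append(death - detect_symptom)
    return sum(screened) / n, sum(unscreened) / n

early, late = simulate()
print(f"Mean survival after diagnosis, screened:   {early:.1f} months")
print(f"Mean survival after diagnosis, unscreened: {late:.1f} months")
# The screened group appears to "survive" about 15 months longer after diagnosis,
# yet every simulated patient dies at exactly the same time.

Both groups die on the same schedule; the stopwatch simply starts earlier for the screened group.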
Lead time bias is a basic concept in the statistics of screening. Physicians have it drilled into them. Recognition of this artifact curbs therapeutic and screening optimism. Why does lead time bias not enter the courtroom? Why did no one successfully convince the jury that early detection would not have made a difference? Why is a fact that can be demonstrated in 1,000 people not applicable to any one of those 1,000 people in court?
There are a few factors. The arrow of time flies in only one direction in a court: chest x-ray, then presentation with cancer, then death. Lead time bias is an inverse problem, and inverse problems are difficult to think about, even for academics who spend their time thinking about inverse problems. You’re turning the clock back: death, then presentation with cancer, then chest x-ray. You’re asking, “would early detection have made a difference?” And you have to answer this question with the knowledge that the radiologist missed a finding. This tests our rationality. You’re saying to the plaintiff, “shit happens.” Shit is not supposed to happen in the United States, which has three times as many lawyers per capita as Britain. And Britain is where William Blackstone codified the common law.
The plaintiff’s lawyer need only entertain the possibility that early detection could have cured the cancer to prevail in a suit. Though cure in this case was improbable, it was not impossible. Statistics predict; statistics don’t prophesy. Evidence-based medicine speaks of the probable. The plural of anecdote is not evidence, but the singular of evidence can still be an anecdote, a possibility. And possibility beats probability in court. Or, as a medical malpractice defense attorney confided, “the last thing I want is to call the plaintiff a statistic. That’s a guaranteed turn-off for the jury.”
The burden of proof in medical malpractice is not as high as the burden of proof in suspected homicide. In the latter, guilt must be proven beyond reasonable doubt. In medical malpractice, the evidence for negligence need only be marginally greater than the doubt that there was negligence, a 52:48 margin, rather like presidential elections. This is for a sensible reason. If you’re wrong about the guilt of a person suspected of homicide, that could cost an innocent person his life. If you’re wrong about the negligence of a doctor, big deal. The insurer pays the money and society readjusts the costs. The doctor is a bit emotionally roughed up, but he or she will get over it.
In tort, there is a concept known as the Calabresian principle, named after Guido Calabresi, who analyzed the costs of accidents. According to Calabresi, the costs of an accident should be internalized by the lowest-cost avoider, i.e. the person who could have avoided the accident with the smallest amount of effort. In this case, assuming the patient was a non-smoker, the avoiders of the accident were fate (or unlucky genes) and the radiologist. Now, you can’t do much about cosmic injustice. So it is clearly the radiologist who was the lowest-cost avoider. He could so easily have recommended a CT. What’s another CT if it can save someone’s life? He should have been in more doubt about the chest x-ray than he actually was. What’s a bit of doubt if it can save someone’s life?
This case, and others like it, has implications. Physicians are asked to be mindful of the population, not just the individual patient. The message from policymakers is that we must be sensitive to limited resources. We’re chastised for overutilization of imaging. Yet it’s hard to see how excess utilization can be curbed unless courts respect evidence-based medicine. It’s at times like this that meaningful tort reform is painfully conspicuous by its absence from the Affordable Care Act.
You might counter that had the radiologist picked up the lung nodule, the lawsuit would never have occurred; that a miss is a miss, regardless of the outcome. Alas, that, too, is not so simple. Perceptual errors are common in interpreting chest x-rays. Researchers have found that a large number of cancers can be seen, in hindsight, on chest x-rays. So much so that an appeals court once ruled that a perceptual error is not necessarily negligence.
This case reminds me of the wisdom of one of my professors during residency. A leading radiologist of his time, he cautioned that “the first ten thousand chest x-rays are easy.” Puzzled then, but having now interpreted well over ten thousand chest x-rays, I understand what he meant. I’m more aware than before of what I might miss. This has created an odd combination of expertise and insecurity. When the insecurity gets the better of me, I imagine shadows on x-rays and recommend CAT scans to “rule out a mass.” Sometimes I catch a cancer. But more often than not I raise false alarms, inconveniencing the patient and wasting resources.
Option traders can avoid trades that are low on the upside and high on the downside. Radiologists do not have that luxury. Paid barely $10 for a chest x-ray, with a multi-million-dollar bounty for missing a cancer and a game of Russian roulette with perceptual errors, radiologists will simply recommend more CAT scans rather than take the risk of declaring that the x-ray is “normal.” The net cost of such verdicts to society is more than $16.7 million.
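To make the asymmetry concrete, here is a crude expected-cost sketch. The miss-and-lawsuit rate below is an assumption chosen purely for illustration; only the $10 fee and the $16.7 million award come from the discussion above.

# A crude expected-cost sketch of the radiologist's asymmetric payoff.
# The miss_probability is an illustrative assumption, not an actual figure.
fee_per_cxr = 10            # roughly what the radiologist is paid to read one chest x-ray
miss_probability = 1e-4     # assumed chance a read contains a missed cancer that leads to a suit
judgment = 16_700_000       # the award in the case discussed above

expected_liability = miss_probability * judgment
print(f"Fee for reading the x-ray:    ${fee_per_cxr}")
print(f"Expected liability per x-ray: ${expected_liability:,.0f}")
# Even at a 1-in-10,000 miss-and-suit rate, the expected liability (about $1,670)
# dwarfs the $10 fee, so recommending a follow-up CT on any doubt is the
# individually rational move, whatever it costs the system.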
Saurabh Jha is an opinionated radiologist who is wrong about nearly everything. He can be reached on Twitter @RogueRad
Nice post. Cited it on my new post about Michael Lewis’ new Kahneman-Tversky bio “The Undoing Project.”
http://regionalextensioncenter.blogspot.com/2016/12/kahneman-and-tversky-clinical-judgment.html
I suspect that the underlying issue is related to a legal tradition known as “res ipsa loquitur.” For a malpractice judgment, a specific deficient decision must be the direct cause of a bad outcome. Under res ipsa loquitur, establishing the connection with the person’s death doesn’t require expert opinion. Accordingly, it doesn’t require an expert to establish that it might have been possible to cure the person with early recognition, in spite of the grim realities. Any rational person knows that delayed diagnosis is always bad. Miracles happen, and the person who died didn’t have the chance to find a miracle, right?
Think about making resources dirt cheap. Think about making them physiologically safe. Think about making interpretation accurate enough that true positives come in under $50,000 per QALY and a false positive costs under $10. Then think: use gobs of resources, because they all add information, and the more information, the better.