A Vox.com piece about decision-making caught my attention this morning.
The story was compelling. A 12-year-old boy had intractable seizures from a leaking vascular malformation in the brain. A first neurosurgeon would not operate and recommended radiation therapy instead. The patient’s mother sought another opinion from a Mayo Clinic neurosurgeon who was adamant that an operation should be undertaken. The second surgeon was undeniably right. The patient is now a bright, fully functional researcher at the University of California San Francisco.
So far, so good? Not so, according to Vox. That there should be a smart mom making a smart decision, and a smart doctor carrying out a successful surgery is apparently a problem.
Why? Because the more cautious surgeon had a different opinion and, had the mom compliantly accepted his recommendation, the child could have been worse off. Variability in judgment, as always, is the enemy.
Vox quotes a recent Annals of Surgery study that “supports” the concern. When surgeons are given clinical vignettes, they vary widely in how they estimate the risks and benefits of an operation. Consequently, the decision to operate also varies widely—at least, on vignette paper.
This variation seemed to come down to surgeons’ perceptions of risks and benefits, the researchers wrote: “Surgeons were less likely to operate as their perceptions of operative risk increased and their perceptions of nonoperative benefit increased.”
And those risk perceptions were very predictive of whether or not a surgeon would recommend an operation: “Surgeons were more likely to operate as their perceptions of operative benefit increased and their perceptions of nonoperative risk increased.”
The remedy? Risk calculators that take “high-quality data from millions of patients” who have had similar operations and produce estimates of the risks of surgery.
Dr. Ashish Jha, a Harvard professor of health policy, concurs: “It’s clear we need to develop more resources like this to be additional input beyond personal experience for surgical decision-making.” Accordingly, had the first surgeon truly known the actual risk of the operation, he might have acted differently and saved the mother and her child the delay of care and any additional expense.
Imagine the conversation:
Mom: my child is not doing well. Will you operate?
Surgeon: Too risky, I don’t want to.
Health Policy Expert: Hey, check out this cool risk calculator!
Surgeon: What was I thinking? Of course, I’ll open the skull!
What strange minds doctors seem to have! As it turns out, this peculiar understanding of how clinical decisions are made has been around for decades. Quoted in the Annals’ article is a famous JAMA paper from 1990 in which author David Eddy, a Harvard physician, mathematician, and healthcare analyst, describes clinical decision-making according to a two-step process of inputs and outputs:
In other words, the doctor’s mind is a rather simple computer. Again, from the Annals article:
According to normative decision theory, treatment decisions under uncertainty should be based on an evaluation for each available treatment option of: (i) the probabilities of possible outcomes; and (ii) the relative attractiveness or unattractiveness (ie, the utilities) of these outcomes.
Of course, the computational theory of mind is not unique to the medical field. But it has been criticized widely and, in my opinion, quite effectively, notably by John Searle and by the late Hilary Putnam. As Robert Epstein put it just a couple of days ago: “Your brain does not process information, retrieve knowledge, or store memories. In short, your brain is not a computer.” But the memo, it seems, hasn’t reached the ivory towers of healthcare analysis.
I have previously pointed to the work of psychologist Gary Klein (see here and here), who studies decision-making in real-life settings. Klein shows that decision-making is clearly unlike what the computer model would lead one to believe. In fact, excellent decisions are made, at times, on the basis of inexpressible knowledge which, to an outside objective observer, could seem like purely subjective idiosyncrasies.
Health policy experts, however, remain wedded to the computer model of the surgical mind and view the objective risk of an operation and its quantified benefit as essential “inputs” for decision-making. Predictive analyses are then carried out with implacable logic: Add the probability of benefit, subtract the probability of harm…Uncertainty is almost conquered.
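To make concrete the model the analysts have in mind: the “add the probability of benefit, subtract the probability of harm” calculation is essentially an expected-utility comparison. Here is a minimal sketch of that computation; all probabilities and utility values are hypothetical, chosen only to illustrate the mechanics of the model being critiqued, not drawn from any real surgical data:

```python
# Expected-utility model of the "surgical mind" as the analysts imagine it.
# Every number below is a hypothetical illustration, not clinical data.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one option."""
    return sum(p * u for p, u in outcomes)

# Hypothetical inputs for a single vignette
operate = [(0.95, 1.0),    # success: seizure-free
           (0.05, -10.0)]  # operative death or major harm
observe = [(0.70, -2.0),   # continued seizures
           (0.30, 0.5)]    # spontaneous improvement

eu_operate = expected_utility(operate)  # 0.95*1.0 + 0.05*(-10.0) = 0.45
eu_observe = expected_utility(observe)  # 0.70*(-2.0) + 0.30*0.5 = -1.25

decision = "operate" if eu_operate > eu_observe else "observe"
print(decision)  # operate
```

On this view, the surgeon’s entire judgment reduces to filling in the probability slots and comparing two numbers, which is precisely the reduction the rest of this post takes issue with.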
Unfortunately, healthcare analysts seem to forget that a 5% mortality risk for resecting an aneurysm is only an average risk. That number may be called into question when applied to a particular surgeon. What if the doctor at hand had a drinking problem? Or what if she or he “sensed” they had a personal technical limitation and preferred not to operate in a given case? Should we use the average risk number to convince them otherwise?
Of course, the analysts always concede that calculation is not the end-all-be-all. “Yes,” they say, “use your judgment along with the risk calculation.” The risk calculator is simply offered as a tool to help “lift the bottom up,” so to speak, and should not interfere with sound judgment.
But here’s the rub about lifting the bottom up. How does a doctor at the bottom know that his judgment is bad? By definition, bad surgeons misjudge risk or benefit unknowingly. If they knew they were misjudging, they would judge otherwise. And even if bad surgeons were to agree to employ the calculator, could we still trust them to use judgment along with calculation—as if that concept had any real meaning, anyway?
Faced with the grim reality of unequal surgical talent, health policy will inevitably lean on doctors to use more and more calculation and less and less judgment. That’s the natural history of these managerial devices. After all, how else can we expect to have a “standard of care?” At that point, top surgeons, too, will necessarily end up making decisions by rote, and meld with the mass of undifferentiated surgical care providers.
The irony is that the boy in the story who was saved by a heroic surgeon is now a PhD student in epidemiology and seeks to develop “decision support systems…[which] can supersede the personal biases and subjectivity of physicians.” Shouldn’t he know that getting rid of the “personal biases and subjectivity of surgeons” cuts both ways? If variability is to be overcome, so will excellence be defeated.
We have to be careful not to critique if the science is weak. We should only say the science is weak. Then, perhaps, we can push others to do a better job of telling stories that matter to us. This story is potentially dangerous if the judgment being touted was wrong in the first place. I love your take on things, but “errors” in diagnosis and decision making is a field in its infancy. Let’s make sure we push the field to do better.
I agree that the case report that formed the basis of the Vox story was very shaky. That said, I gave the reporter the benefit of the doubt. I critique these decision tools regardless of the reliability of the risk estimates they provide. Thank you.
Thank you, Anish. I may write another blurb at some point to address your question. Stay tuned 🙂
Thanks, Dr. Holm. I suppose there is theoretical value in knowing that a treatment has an x% chance of causing y, but in practice I would argue that that information is never all that essential. Furthermore, the percentages and risks are simply estimates that may vary widely depending on the designs of the studies from which they are derived, yet they give the illusion of precision.
Thank you for the post. I went to the Vox site and was struck by the report and your follow-up in this way: I really could not learn from the report or the analysis of good and bad decision making. It is a case report. I could not find an RCT of value on the topic. One of the biggest series of patients was about 14 in number. What was the radiation therapy proposed? Gamma Knife? We are missing way too many specifics to really know how to assess the psychology, or general learning points for decision making, aren’t we? Risk calculators for rare events? Really? To make a decision, some reliable evidence of benefit to trade off against some reliable evidence of harm is needed. I am not a neurosurgeon but know this is a tough area for them. Do we really know enough to even comment on such a case report? I wonder your thoughts. Thanks.
Great post, Michel. Couldn’t agree more that the current move to reduce all decision making to calculators is flawed for many reasons. This is in part a response, however, to physicians not having a good mechanism to deal with our outliers. Is there any variance in medical decision making that makes you uncomfortable?
Good post, but decision support tools are not applied the same way in other circumstances. Let’s say a cancer drug has a 1% chance of helping someone survive. The FDA will approve it. The physician will offer it. The patient may ask for it. Patients will take chances where there is hope. Physicians will avoid risk where there is liability. The difference between a heroic save and a botched job is highly circumstantial. Not to discount support tools, I do think they are useful to guide a discussion. If there were a bad outcome in the above scenario, we could have a completely different discussion.
Thank you, Dr. Palmer. You’re right. We’ve been conditioned to think about patients according to a handful of “baseline characteristics” and fail to recognize the complexity and richness actually there.
I agree, of course.
On the other hand, I suspect it may be helpful to note the p.o.v. of those doing the science.
If you ask me a question about hep C infection (which killed my dad) or Lyme disease (which arguably caused the cardiac issue that killed my mom), my personal experiences will profoundly influence my interpretation of the science and the data.
And s/b Jones not James
I think I was thinking of baseball
/ j
All decisions occur in a reference frame of the patient’s highly individualistic biochemistry and genetics and age and sex and epigenetic markers, and psychology and dozens of other frames that may affect outcomes, including the temperature and the barometric pressure and weather.
When patients appear roughly the same and we compare output results of decisions made by two physicians, we don’t know much unless we are studying the simplest things imaginable. E.g., now we know that CMV and HHV-6 often are found latently together in film array assays of CSF and are not causing disease! How many times do people have pre-clinical viral myocarditis? How often are the elderly protein deficient? Or low in magnesium? Or B-12 or D?
It may be true that all decisions in medicine have errors or are incomplete. This would have been true in Galen’s time. Why not now? If so, righteousness is not exactly the attitude we need now. Isn’t it better to just try to help docs who are getting bad results?
I wasn’t aware of these other stories, John. I suspect it’s a natural reaction to want to use one’s experience to help avoid repeating a tragedy or a near miss. And there’s nothing wrong with that, of course.
Thank you, Laurie.
Thank you, Dr. Centor. Yes, Dr. Klein has a lot to offer to our healthcare managers…
We already did. And this is what they gave us.
via e-mail
…or you could comment on the notice of proposed rulemaking.
via TweetBot
I believe the only answer to this MACRA ridiculousness is a national strike by doctors and nurses. Following Jeremy Hunt and the NHS business in the UK, where similar stupidity is underway. Washington needs to feel the pain.
via e-mail
Great post – even cites Gary Klein – one of my favorites
via TweetBot
Very smart anti widgetry essay @michelaccad – doctor’s brain is not a computer
via TweetBot
I find it interesting how many people involved in the field are driven by, or at least have, a personal history. James entered the field after the death of his son, Millenson’s story is well-documented, and we see the same in the case of the UCSF researcher here.
I don’t know what it all means, but I think it’s noteworthy.