There’s a growing movement in medicine in general and imaging in particular which wishes to attach a number to everything.
It no longer suffices to say: “you’re at moderate risk for pulmonary embolism (PE).”
We must quantify our qualification.
Either by an interval. “Your chances of PE are between 15 and 45 %.”
Or, preferably, a point estimate. “You have a 15 % chance of PE.”
If we can throw a decimal point, even better. “You have a 15.2 % chance of PE.”
The rationale is that numbers empower patients to make a more informed choice, optimizing patient-centered medicine and improving outcomes.
Sounds reasonable enough. Although I find it difficult to believe that patients will have this conversation with their physicians.
“Thank god doctor my risk of PE is 15.1 % not 15.2 %. Otherwise I’d be in real trouble.”
What’s the allure of precision? Let’s understand certain terms: risk and uncertainty; prediction and prophecy.
By certainty I mean one hundred percent certainty. The opposite of certainty is uncertainty. Frank Knight, the economist, divided uncertainty into Knightian risk and Knightian uncertainty (1).
What’s Knightian risk?
If you toss a double-headed coin you’re certain of heads. If you toss a coin with a head on one side and a tail on the other, the chance of a head is 50 %, assuming it’s a fair toss. Although you don’t know for certain whether the toss will yield head or tail, you do know for certain that the chance of a head is 50 %. This can be verified by multiple tosses.
When uncertainty can be quantified with certainty this is known as Knightian risk.
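The verifiability of Knightian risk is easy to demonstrate. Here is a minimal simulation sketch (the function name and seed are illustrative assumptions, not anything from the original argument) showing the empirical frequency of heads converging toward the known 50 %:

```python
import random

def empirical_head_rate(n_tosses: int, seed: int = 42) -> float:
    """Estimate the chance of heads by tossing a simulated fair coin n times."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The more tosses, the closer the estimate sits to the true 50 %.
for n in (10, 1_000, 100_000):
    print(n, empirical_head_rate(n))
```

With enough tosses the estimate pins down the risk with as much certainty as we like; the river, by contrast, offers no such repeated trials.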
What’s Knightian uncertainty?
You’re fishing for the first time in a river. What are the chances you’ll catch a fish? You don’t have the luxury of repeated coin tosses to give you a universe of possibilities, a numerator and a denominator.
When uncertainty can’t be quantified with certainty, meaning there’s no meaningful numerator and/or denominator, this is known as Knightian uncertainty.
Predictions and prophecies deal with the likelihood of future events. But there’s a difference.
Prophecies were certainties about the future if people did not change their moral trajectory. Often people did not change their ways, and a true prophet was able to predict the resolute nature of their moral failings.
Predictions or prognostications also tell us about the future, but in a probabilistic way (Knightian risk). For example, reduced strength of, and scar in, a failing heart confer a certain probability of five-year survival.
Jeremiah never spoke about doom probabilistically (2).
The value of a prediction lies in being precise about the timing of the event. John Maynard Keynes (JMK) quipped, “in the long run we are all dead” (3). As a general statement, this is 100 % correct but not terribly predictive or useful.
Research in behavioral economics shows we prefer risk over uncertainty (4). Meaning, when we are unsure of the future we prefer our uncertainty to be quantified. Or, to borrow a Rumsfeldian aphorism, we prefer known unknowns to unknown unknowns.
Physicians, in particular, don’t like uncertainty. Attaching a number to a possible event enhances our expertise.
Think about it. Which physician would you think knew what he or she was talking about?
Dr. Jha: You probably have PE.
Dr. Smith: You have a 10-90 % likelihood of PE.
Dr. Singh: You have a 24.21 % likelihood of PE.
The great philosopher of science, Karl Popper, cautioned against precision (5). According to Popper, precision and certitude are not only unscientific (why so is beyond the scope of this discussion) but very likely to be wrong. The more precisely we assert, the more likely we are to be incorrect in our certitude.
The more general our statement, the more likely it is correct but the less likely it is useful. Recall JMK’s quip.
Precision is balderdash. Sorry, let me restate this more scientifically. Precision is unverifiable. Why so?
During quantification, many variables are measured, assumptions are made about the distributions of those variables, and complex statistics are used to join disparate numbers from disparate studies.
Thus we give a number for John Doe’s chances of a heart attack.
John Doe is a 45-year-old Caucasian with well-controlled type 1 diabetes, mildly elevated LDL cholesterol, a sedentary lifestyle and a history of myocardial infarction in a great uncle twice removed, who has vague pain over his left upper chest.
You can give John Doe a number for his chances of death from an untreated heart attack, a number derived from elegant statistics. But this number can’t be verified.
Because John Doe is not a coin which can be tossed a thousand times to get an idea of the universe of possibilities. He is unique.
Unique? Wait that’s the reason we want to be precise. To practice John Doe-centered medicine.
Here we have an under-appreciated trade-off in medicine: between usefulness and accuracy. We can be beautifully precise but precisely wrong. Or we can be generally accurate but specifically useless.
But there are more problems with our quixotic quest for precision. One is the Ludic fallacy, coined by Nassim Taleb: the belief that one knows the distribution of a variable, or the variables used to derive a risk. Taleb explains the danger of this fallacy in his popular tome The Black Swan (6).
The danger of a false computation of risk is that it leads to a false sense that we’re in control when, in fact, we’re not.
Some exponents of quantification in medicine are lulled into treating numbers as if they are a thermometer scale. This can be dangerous.
I never tire of explaining to patients who undergo coronary CT for chest pain in the emergency department that there’s nothing magical about 70 % stenosis. This is the cut-off diameter stenosis of the coronary arteries for proceeding to coronary catheterization. Meaning, the patient receives a catheterization if the stenosis is greater than 70 % and a stress test if it is less than 70 %.
But it’s not as if when crossing from 68 % to 72 % diameter stenosis one falls off a cliff. One still must treat the patient not the percent stenosis.
Why do we need cut-offs?
Numbers are continuous. Decision-making is dichotomous. One can be at 15.1 %, 30.2 % or 45.3 % risk of sudden cardiac death. But one either receives an implantable cardioverter defibrillator (ICD) or does not. Not a 15.1 % ICD.
A line has to be drawn somewhere. An arbitrary line. Precision and arbitrariness are inseparable.
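The arbitrariness is easy to make concrete. A sketch, assuming the 70 % catheterization cut-off described above (the function name and labels are illustrative, not a clinical tool):

```python
def triage(stenosis_pct: float, cutoff: float = 70.0) -> str:
    """Dichotomize a continuous stenosis measurement at an arbitrary cut-off."""
    return "catheterization" if stenosis_pct > cutoff else "stress test"

# A clinically trivial difference in measurement flips the decision entirely.
print(triage(68.0))  # stress test
print(triage(72.0))  # catheterization
```

The function is continuous in its input and discontinuous in its output; all the clinical action happens at one arbitrary point.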
The belief that numbers empower patients is overstated. Whilst the difference between a 90 % and a 2 % chance of PE may be real for the patient, and brings out different value systems, the difference between a 28 % and a 22 % likelihood of PE is noise, and contributes very little to informed decision-making.
And being precise enough to distinguish between a 15.1 % and a 15.2 % likelihood of PE would do little other than accelerate the academic tenure of the researcher who developed the precise but likely wrong scale.
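One way to see why a difference like 22 % versus 28 % is noise: the sampling uncertainty of such estimates often spans the whole gap. A sketch using the standard normal-approximation confidence interval for a proportion (the sample size of 100 is an assumption for illustration):

```python
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96) -> tuple:
    """95 % normal-approximation confidence interval for an estimated proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the estimate
    return p_hat - z * se, p_hat + z * se

# An estimate of 25 % derived from 100 patients is compatible with both 22 % and 28 %.
lo, hi = proportion_ci(0.25, 100)
print(f"{lo:.3f} to {hi:.3f}")  # 0.165 to 0.335
```

Both 22 % and 28 % sit comfortably inside that interval, so the distinction between them conveys no usable information.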
The dynamic range of numbers far exceeds the dynamic range of patient preferences. Precision is a recipe for information overload, where numbers confuse rather than empower.
Most importantly, precision and quantification do not absolve physicians from using their judgment. This is just as well. GK Chesterton once remarked: “do not free a camel of the burden of his hump; you may be freeing him from being a camel.”
To paraphrase the irreverent Chesterton: you can remove the burden of judgment from a physician but then you will no longer have a physician.
1. Knight FH. The meaning of risk and uncertainty. In: Risk, Uncertainty and Profit. Dover Publications, 2006.
2. Jeremiah 29:11.
3. Keynes JM. A Tract on Monetary Reform, 1924.
4. Ellsberg D. Risk, ambiguity and the Savage axioms. Quarterly Journal of Economics 1961;75:643-669.
5. Popper K. The Logic of Scientific Discovery. Routledge Classics. Routledge, 2002.
6. Taleb NN. The Ludic Fallacy, or the Uncertainty of the Nerd. In: The Black Swan, 2nd edition. Random House, 2010.