88.2 % of all statistics are made up on the spot

– Vic Reeves

There’s a growing movement in medicine in general, and imaging in particular, that wishes to attach a number to everything.

It no longer suffices to say: “you’re at moderate risk for pulmonary embolism (PE).”

We must quantify our qualification.

Either by an interval. “Your chances of PE are between 15 and 45 %.”

Or, preferably, a point estimate. “You have a 15 % chance of PE.”

If we can throw a decimal point, even better. “You have a 15.2 % chance of PE.”

The rationale is that numbers empower patients to make a more informed choice, optimizing patient-centered medicine and improving outcomes.

Sounds reasonable enough. Although I find it difficult to believe that patients will have this conversation with their physicians.

“Thank God, doctor, my risk of PE is 15.1 %, not 15.2 %. Otherwise I’d be in real trouble.”

What’s the allure of precision? Let’s understand certain terms: risk and uncertainty; prediction and prophecy.

By certainty I mean one hundred percent certainty. The opposite of certainty is uncertainty. Frank Knight, the economist, divided uncertainty into Knightian risk and Knightian uncertainty (1).

What’s Knightian risk?

If you toss a double-headed coin, you’re certain of heads. If you toss a coin with heads on one side and tails on the other, the chance of heads is 50 %, assuming a fair toss. Although you don’t know for certain whether the toss will yield heads or tails, you do know for certain that the chance of heads is 50 %. This can be verified by multiple tosses.

When uncertainty can be quantified with certainty this is known as Knightian risk.
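Knight’s point, that repeated tosses let us verify the 50 % figure, can be sketched in a few lines of Python (a hypothetical simulation; the toss count and seed are arbitrary choices):

```python
import random

def estimate_heads_probability(n_tosses, seed=0):
    """Estimate P(heads) for a fair coin by repeated simulated tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# With enough tosses the estimate settles near 0.5 --
# the uncertainty of a single toss is quantified with certainty.
print(estimate_heads_probability(100_000))
```

This is exactly what the river-fishing example below lacks: there is no experiment we can repeat to pin the probability down.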

What’s Knightian uncertainty?

You’re fishing for the first time in a river. What are the chances you’ll catch a fish? You don’t have the luxury of repeated coin tosses to give you a universe of possibilities, a numerator and a denominator.

When uncertainty can’t be quantified with certainty, meaning there’s no meaningful numerator and/or denominator, this is known as Knightian uncertainty.

Predictions and prophecies both deal with the likelihood of future events. But there’s a difference.

Prophecies were certainties about the future if people did not change their moral trajectory. Often people did not change their ways, and a true prophet was able to predict the resolute nature of their moral failings.

Predictions, or prognostications, also tell us about the future, but in a probabilistic way (Knightian risk). For example, reduced contractile strength and scarring in a failing heart confer a certain probability of five-year survival.

Jeremiah never spoke about doom probabilistically (2).

The value of a prediction lies in being precise about the timing of the event. John Maynard Keynes (JMK) quipped, “in the long run we are all dead” (3). As a general statement this is 100 % correct, but not terribly predictive or useful.

Research in behavioral economics shows we prefer risk over uncertainty (4). Meaning, when we are unsure of the future we prefer our uncertainty to be quantified. Or, to borrow a Rumsfeldian aphorism, we prefer known unknowns to unknown unknowns.

Physicians, in particular, don’t like uncertainty. Attaching a number to a possible event enhances the appearance of expertise.

Think about it. Which physician would you think knew what he or she was talking about?

Dr. Jha: You probably have PE.

Dr. Smith: You have a 10-90 % likelihood of PE.

Dr. Singh: You have a 24.21 % likelihood of PE.

The great philosopher of science, Karl Popper, cautioned against precision (5). According to Popper, precision and certitude are not only unscientific (why is beyond the scope of this discussion) but very likely to be wrong. The more precisely we assert, the more likely we are to be wrong in our certitude.

The more general our statement, the more likely it is to be correct, but the less useful it is. Recall JMK’s quip.

Precision is balderdash. Sorry, let me restate this more scientifically. Precision is unverifiable. Why so?

Quantification takes many variables, makes assumptions about their distributions, and uses complex statistics to join disparate numbers from disparate studies.

Thus we give a number for John Doe’s chances of a heart attack.

John Doe is a 45-year-old Caucasian with well-controlled type 1 diabetes, mildly elevated LDL cholesterol, a sedentary lifestyle, a history of myocardial infarction in a great uncle twice removed, and vague pain over his left upper chest.

You can give John Doe a number for his chances of death from an untreated heart attack, a number derived from elegant statistics. But this number can’t be verified.

Because John Doe is not a coin that can be tossed a thousand times to get an idea of the universe of possibilities. He is unique.

Unique? Wait, that’s the reason we want to be precise: to practice John Doe-centered medicine.

Here we have an under-appreciated trade-off in medicine: between usefulness and accuracy. We can be beautifully precise but precisely wrong. Or we can be generally accurate but specifically useless.

But there are more problems with our quixotic quest for precision. One is the ludic fallacy, coined by Nassim Taleb: the belief that we know the distribution of a variable, or even which variables to use to derive risk. Taleb explains the danger of this fallacy in his popular tome The Black Swan (6).

The danger of falsely computed risk is that it lulls us into a sense that we’re in control when, in fact, we’re not.

Some exponents of quantification in medicine are lulled into treating numbers as if they were readings on a thermometer. This can be dangerous.

I never tire of explaining to patients who undergo coronary CT for chest pain in the emergency department that there’s nothing magical about 70 % stenosis. This is the diameter-stenosis cut-off for proceeding to coronary catheterization: the patient receives a catheterization if the stenosis is greater than 70 % and a stress test if it is less than 70 %.

But it’s not as if, in crossing from 68 % to 72 % diameter stenosis, one falls off a cliff. One must still treat the patient, not the percent stenosis.

Why do we need cut-offs?

Numbers are continuous. Decision-making is dichotomous. One can be at 15.1 %, 30.2 % or 45.3 % risk of sudden cardiac death. But one either receives an implantable cardioverter defibrillator (ICD) or does not. Not a 15.1 % ICD.

A line has to be drawn somewhere. An arbitrary line. Precision and arbitrariness are inseparable.
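The dichotomy is easy to make concrete in a toy sketch (hypothetical function and threshold names; the 70 % cut-off mirrors the stenosis example above):

```python
def triage_stenosis(percent_stenosis, cutoff=70.0):
    """Dichotomize a continuous stenosis measurement into a binary decision.

    The cutoff is an arbitrary line: 68 and 72 are clinically similar,
    yet they route the patient down different pathways.
    """
    return "catheterization" if percent_stenosis > cutoff else "stress test"

print(triage_stenosis(68.0))  # stress test
print(triage_stenosis(72.0))  # catheterization
```

The continuous input carries far more gradations than the two-valued output can express, which is exactly why a line must be drawn somewhere.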

The belief that numbers empower patients is overstated. Whilst the difference between a 90 % and a 2 % chance of PE may be real for the patient, and brings out different value systems, the difference between a 28 % and a 22 % likelihood of PE is noise, and contributes very little to informed decision-making.

And being precise enough to distinguish between a 15.1 % and a 15.2 % likelihood of PE would do little other than accelerate the academic tenure of the researcher who developed the precise, but likely wrong, scale.
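How unverifiable is a 15.1 % versus 15.2 % distinction? A back-of-the-envelope sketch using the standard normal-approximation sample-size formula for comparing two proportions (the 5 % two-sided alpha and 80 % power are illustrative assumptions):

```python
def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate patients needed per arm to distinguish two proportions.

    Standard formula: n = (z_alpha + z_beta)^2 * [p1(1-p1) + p2(1-p2)] / (p1-p2)^2
    """
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Telling 15.1 % apart from 15.2 % requires on the order of
# two million patients per arm.
print(round(n_per_group(0.151, 0.152)))
```

No validation cohort of that size exists for most risk scores, so the decimal point is decoration, not measurement.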

The dynamic range of numbers far exceeds the dynamic range of patient preferences. Precision is a recipe for information overload, where numbers confuse rather than empower.

Most importantly, precision and quantification do not absolve physicians from using their judgment. This is just as well. GK Chesterton once remarked: “Do not free a camel of the burden of his hump; you may be freeing him from being a camel.”

To paraphrase the irreverent Chesterton: you can remove the burden of judgment from a physician but then you will no longer have a physician.

**References**


1. Knight FH. The meaning of risk and uncertainty. In: Risk, Uncertainty and Profit. Dover Publications, 2006.
2. Jeremiah 29:11.
3. Keynes JM. A Tract on Monetary Reform, 1923.
4. Ellsberg D. Risk, ambiguity and the Savage axioms. Quarterly Journal of Economics 1961;75:643-669.
5. Popper K. The Logic of Scientific Discovery. Routledge Classics. Routledge, 2002.
6. Taleb NN. The ludic fallacy, or the uncertainty of the nerd. In: The Black Swan, 2nd ed. Random House, 2010.

There is a 94.2% chance that I will not be seeing patients for much longer, due to the 97.36% chance that the federal govt is going to find a way to screw me, along with the 75.89% chance that I will face a lawsuit because the lady who had the 15.2% chance of the PE actually had one.

Just simply rely on the Farmer’s Almanac to feel empowered!

Patients want to make decisions and doctors foolishly encourage that to reduce their own accountability, but patients will never have the skill set or data set to do it.

If they did, they would have MD after their name.

This is why weathermen are never wrong. “There’s a 10% chance of showers.” So what does an “empowered” person do with a forecast like that?

What we need is more weathermen becoming doctors.

Interesting comparison to weather.

An 80 % probability of rain versus a 20 % probability would change my plans to go to the beach.

A 92 % probability of rain versus a 78 % probability wouldn’t make any difference.

In fact, numbers would have little incremental decision-making influence over “slight chance of rain” versus “moderate chance.”

Decision-making, again, is dichotomous: I either go to the beach or I don’t.

thnx..it’s quite good..

“The rationale is that numbers empower patients to make a more informed choice, optimizing patient-centered medicine and improving outcomes.” I think you miss the point, here. It is not that numbers empower patients, it is that understandable and usable information empowers patients. Terms like ‘moderate’ and ‘slight’ and ‘likely’ and ‘rare’ are fuzzy and mean vastly different things to different people. Nonetheless, I agree that numbers can make things worse rather than better. Some basic principles I find useful: try to stick to whole numbers less than 1000, never use percents, fractions, decimals, percentiles or relative risk. NNT and NNH…

Thanks for reading. “Usable information empowers patients.” Which is another way of saying “quantification!” I never cease to be amazed by the capacious lexicon available to restate concepts that can be said more simply! “Terms like ‘moderate’ and ‘slight’ and ‘likely’ and ‘rare’ are fuzzy and mean vastly different things to different people.” These terms are, indeed, fuzzy. Particularly when one physician’s “moderate” is another physician’s “mild.” But I’m not sure offering precise numbers improves decision-making. Largely because decision-making is, for the most part, dichotomous. Yes, there are instances where numbers tailor treatment, such as desferrioxamine for iron chelation based…

I relish the ambiguity of the person and the signs and symptoms of that person. It takes art and creativity to get to the right diagnosis in that patient at the right time, and to start appropriate treatment.

I abhor the physician who runs along with an encyclopedia of stats and detail. That doc is not thinking but spouting. Stay away.

That is where EHRs come in. They are binary devices in a non-binary world of medical care.

“You have a 15 % chance of PE.” If we can throw a decimal point, even better. “You have a 15.2 % chance of PE.”

I began my white collar career in a forensic-level environmental radiation lab in Oak Ridge. Our findings (and my apps-development rounding algorithms) had to hew to contractual and regulatory “significant figures rounding.” Were you to put “15.2 pCi/kg” in a client report, you had better be able to demonstrate empirically to clients and authorities your ability to discriminate between 15.1 and 15.3 (via blind QC replicate assays), otherwise you had to lose…

“Were you to put ‘15.2 pCi/kg’ in a client report, you had better be able to demonstrate empirically to clients and authorities your ability to discriminate between 15.1 and 15.3 (via blind QC replicate assays), otherwise you had to lose the decimal point.”

Exactly (or precisely).

Physical science is verifiable. So this degree of precision is justified.

I like the phrase used by Worker’s Compensation in my state:

“Within a reasonable degree of medical certainty”.

That’s right.

And quantifying that degree of certainty (or uncertainty) to infinite precision does not aid in medical decision making.