I gave a keynote yesterday to the first-ever meeting on "Diagnostic Error in Medicine." I hope the confab helps put diagnostic errors on the safety map. But, as Ricky Ricardo said, the experts and advocates in the audience have some ‘splainin’ to do.
I date the origin of the patient safety field to the publication of the IOM report on medical errors (To Err is Human). It is the field’s equivalent of the Birth of Christ (as in, there was before, and there is after). But from the get-go, diagnostic errors were the ugly stepchild of the safety family. I searched the text of To Err… and found that the term “medication errors” is mentioned 70 times, while “diagnostic errors” appears twice. This is interesting because diagnostic errors comprised 17 percent of the adverse events in the Harvard Medical Practice Study (from which the IOM’s 44,000 to 98,000 deaths numbers were drawn), and account for twice as many malpractice suits as medication errors.
What I call “Diagnostic Errors Exceptionalism” has persisted ever since. Think about the patient safety issues that are on today’s public radar screen (i.e., they are subject to public reporting, included in “no pay for errors,” examined during Joint Commission visits, etc.). It’s a pretty diverse group, including medication mistakes, falls, decubitus ulcers, wrong-site surgery, and hospital-acquired infections. But not diagnostic errors. Funny, huh?
There are lots of reasons for this. Here are just a few:
The Problem of Visceral, Accessible Dread
Ask any horror movie producer – certain calamities cause visceral
dread. They tend to be “bolt out of the blue” events – ones that lack
both forewarning and opportunities for post-strike redemption. (Think
sharks, plane crashes, tsunamis, and earthquakes.) But diagnostic
errors often have complex causal pathways, take time to play out, and
may not kill for hours (missed MI), days (missed meningitis) or even
years (missed cancers). They don’t pack the same visceral wallop as
wrong-site surgery, the “shark bit off a guy’s leg” of the safety field.
Iconic, Mediagenic Examples
Think about the errors that have made 60 Minutes in the past decade or
so: the chemotherapy error that killed Boston Globe health columnist
Betsy Lehman, the Duke transplant mix-up involving failure to check ABO
type, the amputation of Willie King’s wrong leg, even Dennis Quaid’s
twins’ heparin OD. How about diagnostic errors? Personally, I can’t
think of one that ended up in the spotlight. The one mediagenic
error that was (at least in part) due to a diagnostic error – the death
of Libby Zion at New York Hospital in the 1980s – was framed as a death
caused by long residency work hours and poor supervision, not as one
caused by a diagnostic error.
Data Suitable to Create Sound Bites
Figures like the IOM’s 44,000 to 98,000 annual deaths make great sound bites (I’ve used them many times myself).
We have no comparable data for diagnostic errors, and so they don’t
compete very well for attention. In fact, this measurement problem
(diagnostic errors are very hard to measure, particularly through
retrospective chart review) is a huge issue. How are we to convince
policymakers and hospital executives, who are now obsessing about
lowering the rates of hospital-acquired infections and falls, to focus
on diagnostic errors when their toll is so vague?
Some Research (or at Least Common Sense) Points to Solutions
Many traditional types of errors can be paired with well-understood
solutions, some of which even have data demonstrating that they work.
Just consider these:
* Prescribing errors: computerized order entry
* Drug administration errors: bar coding and smart pumps
* Failure in rote processes: double checks, checklists
* Wrong-site surgery: sign the site
* Retained sponges in surgery: count ‘em up
The solutions for diagnostic errors generally fall into two big
buckets. One might be thought of as “better thinking” – appreciating the
risks of certain heuristics (anchoring, premature closure),
correctly applying Bayesian reasoning and iterative hypothesis testing,
and so on. This group of activities, while fascinating (building on the
groundbreaking work of brilliant cognitive psychologists like Amos
Tversky and Daniel Kahneman), is a bit too arcane for real people to
get their arms around. It seems like inside baseball.
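For readers who want a taste of the inside baseball, here is a minimal sketch (my own illustration, with hypothetical numbers, not anything presented at the meeting) of the Bayesian arithmetic at stake: even a quite accurate test for a rare disease leaves the post-test probability low, so a clinician who anchors on the positive result alone will badly overestimate the odds of disease.

```python
# Toy Bayesian diagnostic reasoning: why base rates matter.
# All prevalence and test-accuracy numbers are hypothetical.

def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = sensitivity * prior              # diseased and test positive
    false_pos = (1 - specificity) * (1 - prior) # healthy but test positive
    return true_pos / (true_pos + false_pos)

# A rare disease (1 in 1,000) with a seemingly excellent test:
p = posterior(prior=0.001, sensitivity=0.99, specificity=0.95)
print(f"P(disease | positive test) = {p:.1%}")  # roughly 2%, not 99%
```

The same test applied to a patient whose story already makes the disease likely (say, a pre-test probability of 50 percent) yields a post-test probability above 90 percent; the evidence is the same, but the base rate changes everything.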
The other broad bucket of proposed solutions to diagnostic errors
involves various forms of computerized decision support. Providing
computerized diagnostic support – and perhaps even some artificial
intelligence (AI) – at the point-of-care makes all the sense in the
world. But remember the Technology Hype Cycle from a few blogs back.
Diagnostic AI was wildly overhyped in the 1970s and 80s, with much of
the hype focused on programs that titillated the IT wonks of the day
(such as QMR and Iliad) but now sit in the dustbin of IT history. Turns
out that replacing a doctor’s diagnostic abilities with a computer is
an incredibly knotty problem (partly because the symptoms, signs, and
initial labs in flu and plague have about 95 percent overlap). The
disappointment over the ineffectiveness of early AI programs led to
widespread skepticism that any decision support programs could help
physicians be better diagnosticians. This skepticism is getting in the
way of today’s markedly improved systems, such as Isabel, from gaining
the traction they deserve.
So solutions for diagnostic errors – whether new ways of training
people to think or computerized decision support – do not compete very
effectively in the battle for resources and attention against the far
more easily implemented and better researched solutions to other safety
problems, such as “bundles” to prevent catheter infections or bar
coding to prevent medication administration errors.
The Problem of the Accountable Entity
One final problem is the absence of an accountable entity with deep
pockets. Because the patient safety field focused its attention on
hospital errors (at least initially), the hospital – rather than
individual physicians – could be held accountable (by the Joint
Commission, CMS, the state, and the media) for creating safer systems.
And hospitals have stepped up to this particular plate by putting
safety atop their strategic plans, and by implementing incident
reporting systems, patient safety officers, CPOE, root cause analysis,
teamwork training, and more. They had no choice.
But if diagnostic errors are seen as individual physician cognitive
problems, then the hospital is unlikely to contribute to their
solution, or even to pay much attention to them. And so they haven’t,
and they don’t.
What Can Be Done?
Is there any hope of getting diagnostic errors included under the broad
umbrella of patient safety, where they can garner the attention and
resources they deserve? Sure. But we need to solve a chicken-or-egg
problem: if there is no interest in and funding for the topic, we won’t
generate the research we need to measure the problem’s toll or to come
up with effective solutions. And without that research, the interest
and funding won’t materialize.
That’s why AHRQ’s sponsorship of the Diagnostic Errors Conference, and
the agency’s overall interest in the topic, is so crucial. Having
allies in high places, beginning with AHRQ and other funders, but
extending to malpractice carriers, accreditation boards, med schools
and residencies, and even the Joint Commission, will be essential.
Judging by the robust sales of Jerome Groopman’s book, How Doctors Think, the
public is interested in this topic. Passionate and effective leaders
and advocates, most of whom were in Phoenix yesterday, are emerging. If
we can find some support for their work, better data and solutions
cannot be too far behind. And then the problem of diagnostic errors
will get the attention it deserves.
As the quality and safety movements gallop along, the need to fix
Diagnostic Errors Exceptionalism grows more pressing. For until we do,
we face a fundamental problem: a hospital can be seen as a high-quality
hospital – receiving awards for being a great performer and oodles of
cash from P4P programs – if all of its “pneumonia” patients receive the
correct antibiotics, all its “CHF” patients are prescribed ACE
inhibitors, and all its “MI” patients get aspirin and beta blockers.
Even if every one of the diagnoses was wrong.