On the occasion of last year’s tenth anniversary of the IOM Report on medical mistakes, I was asked one question far more than any other: after all this effort, are patients any safer today than they were a decade ago?
Basing my answer more on gestalt than hard data, I gave our patient safety efforts a grade of B-, up a smidge from C+ five years earlier. Some commentators found that far too generous, blasting the safety field for the absence of measurable progress, their arguments bolstered by “data” demonstrating static or even increasing numbers of adverse events. I largely swatted that one away, noting that metrics such as self-reported incidents or patient safety indicators drawn from billing data were deeply flawed. Just look at all the new safety-oriented activity in the average U.S. hospital, I countered. How could we not be making patients safer?
I may have been overly charitable. This week, in an echo of the Harvard Medical Practice Study (the source of the estimate of 44,000–98,000 deaths per year from medical mistakes, which launched the safety movement), a different group of Harvard investigators, led by pediatric hospitalist and work-hours guru Chris Landrigan, published a depressing study in the New England Journal of Medicine. The study used the Institute for Healthcare Improvement’s Global Trigger Tool, which looks for signals that an error or adverse event may have occurred, such as the use of an antidote for an overdose of narcotics or blood thinners. Each trigger prompts a detailed chart review to confirm the presence of an error and to assess the degree of patient harm and its preventability. While the tool isn’t perfect, prior studies (such as this and this) have shown that it is a reasonably accurate way to search for errors and harm – better than voluntary reports by providers, malpractice cases, or methods that rely on administrative data.
Using this method in a stratified random sample of ten North Carolina hospitals, the authors found no evidence of improved safety over the 2002–2007 study period.
Before taking out the defibrillator paddles and placing them on our collective temples, it’s worth considering the possibility that the findings are wrong. We know that the Trigger Tool misses certain types of errors, such as diagnostic or handoff glitches (it’s worth looking at this recent paper by Kaveh Shojania, which emphasizes the importance of using multiple methods to get a complete picture of an organization’s safety), and perhaps the study overlooked major improvements in these blind-spot areas. That said, the tool does capture a sizable swath of safety activities – and the lack of improvement in those areas is still disappointing.
I guess it’s also possible that these ten North Carolina hospitals are unrepresentative laggards. But North Carolina has been relatively proactive in the safety world, and these hospitals volunteered to participate in the study, an indication that they were proud of their safety efforts. While I would have liked a bit more information about the state of the safety enterprise at each hospital (did they have computerized provider order entry, or CPOE, during the period in question, for example?), I think the findings are generalizable.
Another slight caveat surrounds measurement and ascertainment bias. Because safety is far harder to measure than quality (the latter can be captured with measures like door-to-balloon time and aspirin administration after MI – and, as Joint Commission CEO Mark Chassin notes in Denise Grady’s NY Times article reviewing the Landrigan piece, these types of publicly reported quality measures have been improving briskly), there is always the risk that things will look worse when people begin looking for harms more closely… which, of course, they must do to make progress. This is the fatal flaw in using provider-supplied incident reports to measure safety. While the Trigger Tool is more resistant to this concern, it is not completely immune. For example, the hospital that is more attuned to preventing decubitus ulcers will undoubtedly examine patients more carefully during their hospitalization for signs of early bedsores. The Trigger Tool might mistakenly read these “extra cases” as evidence of declining safety. The same holds for falls: our new attention to fall prevention may cause us to chronicle patient falls more carefully in the chart. But such issues raise concerns for only a minority of the triggers; I can’t see how measuring administration of antidotes for oversedation and overanticoagulation, or 30-day readmission or return-to-OR rates, would be biased by a hospital’s greater focus on safety.
So, despite my best efforts at nitpicking, I’m left largely believing the results of the Landrigan study. Lots of good people and institutions have spent countless hours and dollars trying to improve safety. Why isn’t it working better?
I think the study tells us something we’ve already figured out: that improving safety is damn hard. Sure, we can ask patients their names before an invasive procedure, or require a time out before surgery. But we’re coming to understand that to make a real, enduring difference in safety, we have to transform the culture of our healthcare world – to get providers to develop new ways of talking to each other and new instincts when they spot errors and unsafe conditions. They, and healthcare leaders, need to instinctively think “system” when they see an adverse event, and embrace openness over secrecy, even when that’s hard to do. Organizations need to learn the right mix of sharing stories and sharing data. They need to embrace evidence-based improvement practices, while being skeptical of ones that seem like good ideas but haven’t been fully tested. And policymakers and payers need to create an environment that promotes all of this work – policies that don’t tolerate the status quo but steer clear of overly burdensome regulations that strangle innovation and enthusiasm.
In other words, the fact that we haven’t sorted all this out only seven years after the launch of the Good Ship Safety shouldn’t be too surprising. And my sense – although I can’t prove it – is that things are starting to improve more rapidly. Remember that the observation period in the North Carolina study ended in 2007. The first several years of the safety field involved skill building and paradigm changing. Some of the big advances in safety – the embrace of checklists, more widespread implementation of less clunky IT systems, mandatory reporting of certain errors to states, widespread use of root cause analysis to investigate errors – all began in the 2005-2008 period (and some, like IT, are really only cresting now). It will be crucial to follow up this study over time to see if there are signs of progress. I suspect the results will be more heartening.
What now? As I’ve noted many times before, I worry that a harmful orthodoxy has crept into the safety field. We need to figure out ways to ensure that we do the things that we know work, like checklists to prevent central line infections and surgical errors, fall reduction programs, and teamwork training. We need to develop new models for those areas that haven’t worked as well as we’d hoped, like widespread incident reporting and CPOE. We must do the courageous and nuanced work of blending our “no blame” model with accountability when caregivers don’t clean their hands or perform a pre-op time out. And we must allocate the resources, at the institutional and federal level, to do these things and study them to be sure they’re working.
The study by Landrigan and colleagues is a wake-up call. Let’s figure out what’s working, and do more of it. Let’s figure out what’s not working, and do something different. And let’s not stop until we can prove that we have made our patients safer.
Happy Thanksgiving to you and yours.
Robert Wachter, MD, is widely regarded as a leading figure in the modern patient safety movement. Together with Dr. Lee Goldman, he coined the term “hospitalist” in an influential 1996 essay in The New England Journal of Medicine. His most recent book, Understanding Patient Safety (McGraw-Hill, 2008), examines the factors that have contributed to what is often described as “an epidemic” facing American hospitals. His posts appear semi-regularly on THCB and on his own blog, Wachter’s World.