On a snowy night in February 2001, Josie King, an adorable 18-month-old girl who looked hauntingly like my daughter, was taken off life support and died in her mother’s arms at Johns Hopkins. Josie died from a cascade of errors that started with a central line-associated bloodstream infection, a type of infection that kills nearly as many people as breast cancer or prostate cancer.
Shortly after her death, her mother, Sorrel, asked if Josie would be less likely to die now. She wanted to know whether care was safer. We could not give her an answer; she deserved one. At the time, our rates of infections, like most of the country’s, were sky high. I was one of the doctors putting in these catheters and harming patients. No clinician wants to harm patients, but we were.
So we set out to change this. We developed a program that included a checklist of best practices, an intervention called CUSP [the Comprehensive Unit-based Safety Program] to help change culture and engage frontline clinicians, and performance measures so we could be accountable for results. It worked. We virtually eliminated these infections.
Then, from 2003 through 2005, with funding from AHRQ, we partnered with the Michigan Health & Hospital Association. Within six months, these infections were reduced by 66 percent across more than 100 ICUs. Over 65 percent of ICUs went one year without an infection; 25 percent went two years. The results were sustained, and the program saved lives and money, all from a two-year, $500,000 investment by AHRQ.
In our rush to establish a national electronic medical record (EMR) system as part of the American Recovery and Reinvestment Act of 2009, powerful silos of independent EMR systems have sprung up nationwide.
While most systems are being developed responsibly, many, as in the Wild, Wild West, have been developed without an objective eye toward quality or toward the potential harm they may be causing our patients.
As most readers of this blog are aware, since 2005 the medical device industry in which I work has seen widely publicized defibrillator malfunctions, responsible for just a few patient deaths, splashed across the New York Times and other mainstream media outlets.
The backlash in response to these deaths was significant: device registries were developed, software improvements to devices were created, and billions of dollars in legal fees and damages were paid to patients and their families on the path to improvement. We also learned about the limits of corporate responsibility for these deaths, thanks to the legal precedent established by Riegel v. Medtronic.
There was a night during my training when all the decisions, disasters, and chaos that are the practice of medicine caught up with me. In those dark hours, I felt practically despondent. What I had seen left me in tears and overwhelmed by the tasks in front of me.
At that moment, a wise attending physician took the time to sit with me. Rather than tell me how wonderful a doctor I might someday become or brush away my errors, he validated my feelings. He said the best doctors cared, worked hard, and sacrificed, but that their basic driving forces were fear and guilt: fear of the mistakes you might make, and guilt for the mistakes you had already made. How I handled those feelings would determine how good a doctor I became.
I have reflected on those words over the years and tried to use that sage advice to learn and grow. Focused properly, guilt gives one the incentive to re-evaluate patient care that has not been ideal. It drives the study and dissection of past decisions. Excessive guilt, however, can cause a doctor to avoid certain types of cases entirely and to refuse even to discuss those medical issues.
Fear of error drives compulsive and exact care. It helps doctors study and constantly improve. Taken too far, it can result in overtesting, overtreatment, and avoidance. The art of medicine requires the practitioner to open his heart to criticism and be strong enough to build from failure.
Some years ago, I saw a patient who had leukemia. I concluded that the patient’s low blood count was because of this blood cancer. This was correct. But I missed that, in addition to the leukemia, she was bleeding from a stomach ulcer. By the time another doctor spotted the ulcer, the patient was sicker than she might have been had I made that diagnosis earlier.
Every day, a 727 jetliner crashes and kills all the people on board.
Not really. But every day, the same number of people die in American hospitals because of preventable errors. They don’t die from their diseases. They are killed by hospital-acquired infections, medication errors, procedural errors, or other problems that reflect the poor design of how work is done and care is delivered.
Imagine what we as a society would do if three 727s crashed three days in a row. We would shut down the airports and totally revamp our way of delivering passengers. But the 100,000 people a year killed in hospitals are essentially ignored, and hospitals remain one of the major public health hazards in our country.
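The arithmetic behind the jetliner analogy is easy to check. A minimal sketch, assuming a 727 seats roughly 170 passengers (the seat count is my assumption; the 100,000 figure is the one cited above):

```python
# Back-of-the-envelope check of the jetliner analogy.
# Assumed figures: 100,000 preventable hospital deaths per year (cited
# above) and ~170 seats on a Boeing 727 (an assumption; real
# configurations ranged from roughly 130 to 190 seats).
deaths_per_year = 100_000
deaths_per_day = deaths_per_year / 365
seats_per_727 = 170

planes_per_day = deaths_per_day / seats_per_727
print(f"{deaths_per_day:.0f} deaths per day, about {planes_per_day:.1f} full 727s")
```

At that rate the daily toll is actually somewhat more than one full aircraft, which only strengthens the point.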
There are a lot of reasons for this, but I’d like to suggest that one reason is a terrible burden that is put upon doctors during their training and throughout their careers. They are told that they cannot and should not make mistakes. It is hard to imagine another profession in which people are told they cannot make mistakes. Indeed, in most professions, you are taught to recognize and acknowledge your mistakes and learn from them. The best-run corporations actually make a science of studying their mistakes. They go even further and study what we usually call near-misses (but which perhaps should be called “near-hits”). Near-misses are very valuable in the learning process because they often indicate underlying systemic problems in how work is done.
If you are trained to be perfect, it is very hard to improve.
Here’s a quiz for Patient Safety Awareness Week (and after): The number of Americans who die annually from preventable medical errors is:
A) 44,000-98,000, according to the Institute of Medicine
B) None, thanks to the Institute for Healthcare Improvement’s “100,000 Lives Campaign”
D) No one’s really counting
The correct answer is “D,” but I confess it’s a trick question. With a slight twist in wording, the right answer could also be “C,” from an as-yet-unpublished new estimate with a unique methodology. (More below.) The main point of this quiz, however, is to explore what we actually know about the toll taken by medical mistakes and to dispel some of the confusion about the magnitude of harm.
Answer “A” refers to a figure in the oft-quoted (and often incorrectly quoted) 1999 IOM report, To Err is Human. The IOM estimate of 44,000-98,000 deaths and more than 1 million injuries each year refers only to preventable errors, and then just in hospitals. The quiz asked about all preventable harm. As the sophistication and intensity of outpatient care has increased, so, too, have the potential dangers.
For example, the Centers for Disease Control and Prevention (CDC) reported in 2011 that the majority of central line-associated bloodstream infections (CLABSIs) “are now occurring outside of ICUs, many outside of hospitals altogether, especially in outpatient dialysis clinics.” CLABSIs are highly expensive, and they kill up to 25 percent of those who get them. Even in garden-variety primary care, one analysis found a harm rate of one per 35 consultations, with medication errors the most common problem. To Err is Human was silent about those types of hazards.
My wife was lying in the back of an ambulance, dazed and bloody, while I sat in the front, distraught and distracted. We had been bicycling in a quiet neighborhood in southern Maine when she hit the handbrakes too hard and catapulted over the handlebars, turning our first day of vacation into a race to the nearest hospital.
The anxiety when a loved one is injured is compounded when you know just how risky making things better can get. As a long-time advocate for patient safety, my interest in the topic has always been passionate, but never personal. Now, as Susan was being rushed into the emergency room, I wanted to keep it that way. “Wife of patient safety expert is victim” was a headline I deeply hoped to avoid.
In the weeks after the accident, we spent time at a 50-bed hospital in Maine; a Boston teaching hospital where Susan was transferred with a small vertebral fracture at the base of her neck and broken bones in her left elbow and hand; and a large community hospital near our suburban Chicago home. There were plenty of opportunities for bad things to happen – but nothing did. As far as I could tell, we didn’t even experience any near misses.
What went right? After all, though our health care system knows how to prevent errors that kill 44,000 to 98,000 people in hospitals each year, that death toll has remained stubbornly constant. Based on personal and professional observations, I’d simplify the formula that kept Susan safe into three variables: consciousness, culture and cash.
This article by John Tierney in the New York Times suggests that humans suffer from decision fatigue: the tendency to make worse decisions as a day full of hard choices wears on. Here are some pertinent excerpts:
No matter how rational and high-minded you try to be, you can’t make decision after decision without paying a biological price. It’s different from ordinary physical fatigue — you’re not consciously aware of being tired — but you’re low on mental energy. The more choices you make throughout the day, the harder each one becomes for your brain, and eventually it looks for shortcuts, usually in either of two very different ways.

One shortcut is to become reckless: to act impulsively instead of expending the energy to first think through the consequences. The other shortcut is the ultimate energy saver: do nothing. Instead of agonizing over decisions, avoid any choice. Ducking a decision often creates bigger problems in the long run, but for the moment, it eases the mental strain.

You start to resist any change, any potentially risky move. Once you’re mentally depleted, you become reluctant to make trade-offs, which involve a particularly advanced and taxing form of decision making.
Earlier today, Secretary of Health and Human Services Kathleen Sebelius and Medicare chief Don Berwick announced the “Partnership for Patients,” a far-reaching federal initiative designed to take a big bite out of adverse events in American hospitals. The program – which aims to decrease preventable harm in U.S. hospitals by 40 percent and preventable readmissions by 20 percent by 2013 – marks a watershed moment in the patient safety movement. Here’s the scoop, along with a bit of back story (which includes a gratifying bit part for yours truly).
Last July, I attended the American Board of Internal Medicine’s Summer Forum in Vancouver. This confab has turned into medicine’s version of Davos, drawing a who’s who in healthcare policy. One of the attendees was an old friend, Peter Lee, a San Francisco lawyer and healthcare consumer advocate who had just been asked to lead a new Office of Delivery System Reform within the U.S. Department of Health and Human Services. Peter’s charge was to figure out how to transform the delivery of healthcare in America, challenging under any circumstances but Sisyphean given that he’d be pushing the rock up a mountain strewn with the legal and political landmines laid for the recently passed Affordable Care Act.
Fueled by the enthusiasm of being a new guy with a crucial task, Peter took advantage of some conference downtime to convene a small group – about 20 of us – to advise him on what he should focus on in his new role. After soliciting ideas from many of the participants around the table, he turned to me. I decided not to be shy.
On March 28, 1979, the Three Mile Island Unit 2 nuclear power plant experienced a feed system failure that prevented the steam generators from removing heat from the plant. The reactor automatically shut down, but without the feed system to cool the primary, the pressure in the primary system (the nuclear portion of the plant) began to increase. To prevent that pressure from becoming excessive, a relief valve opened. The valve should have re-closed once the pressure dropped by a small amount, but it didn’t. The only indication available in the control room showed the valve in the closed position, but that indication was erroneous: it represented only that the signal to close the valve (pressure below a set value) had been sent. Nothing in the system verified the actual valve position. The stuck-open valve caused the pressure in the system to continue to decrease (and ultimately provided a path for spewing thousands of curies of radioactive material into the atmosphere), but the false shut indication prevented the operators from taking action to mitigate their severe loss-of-coolant accident.
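The underlying design flaw, an indicator wired to the command signal rather than to the valve’s actual position, is a general one in instrumentation. A minimal sketch of the difference, using hypothetical names rather than the actual TMI control logic:

```python
# Illustration of the TMI indication flaw: the control-room light
# reflected the *commanded* valve state, not the *sensed* one.
# All names here are hypothetical; this is a sketch, not plant logic.

def indicator_flawed(close_signal_sent: bool, valve_seated: bool) -> str:
    # TMI-style indication: reports only that the close signal was sent.
    return "CLOSED" if close_signal_sent else "OPEN"

def indicator_verified(close_signal_sent: bool, valve_seated: bool) -> str:
    # Safer indication: reads back the valve's sensed mechanical position.
    return "CLOSED" if valve_seated else "OPEN"

# A stuck-open relief valve: close was commanded, but the valve never seated.
signal_sent, seated = True, False
print(indicator_flawed(signal_sent, seated))    # prints CLOSED (misleading)
print(indicator_verified(signal_sent, seated))  # prints OPEN (the true state)
```

Post-TMI regulatory actions required direct position indication on these relief valves for exactly this reason: an operator cannot diagnose a stuck valve from a light that only echoes the command.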
The relief valve design had a history of sticking: the same valve design had been involved in at least nine other minor incidents prior to TMI. Most notably, eighteen months before TMI, a similar incident had occurred at another nuclear plant, involving a loss of feedwater and rising temperatures that shut down the plant. In that incident, the plant was just starting up after a maintenance shutdown, so the power level and temperature of the system were not as dangerously high as at Three Mile Island.
This story about a kidney transplant mix-up in California is bound to get lots of coverage. It is these extraordinary cases that get public attention. I am sure it will lead to a whole new set of national rules designed to keep such a thing from happening.
Of course, such rules already exist, and it was likely a lapse in them that led to this result.
Nonetheless, we will “bolt on” a new set of requirements that, in themselves, will likely create the possibility for yet a new form of error to occur.
This kind of coverage and response is a spin-off of the “rule of rescue” that dominates decisions about medical treatment. We find the one-off, extreme case and devote excessive energy to solving it. In the meantime, we leave untreated the fact that tens of thousands of people are killed and maimed in hospitals every year.
Those numbers are constantly disputed by the profession. To this day, many doctors do not believe the Institute of Medicine’s studies that documented the number of unnecessary deaths per year.
And you never hear anyone talking about this 2010 report by the Office of the Inspector General, which concluded:
An estimated 1.5 percent of Medicare beneficiaries experienced an event that contributed to their deaths, which projects to 15,000 patients in a single month.
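The projection quoted above implies its own denominator, which can be backed out with one division (the denominator is inferred from the quoted figures, not taken from the report itself):

```python
# Back out the number of hospitalizations implied by the OIG projection:
# 1.5 percent of hospitalized beneficiaries ≈ 15,000 deaths in one month.
rate = 0.015
projected_deaths = 15_000

implied_hospitalizations = projected_deaths / rate
print(f"Implied Medicare hospitalizations that month: {implied_hospitalizations:,.0f}")
```

That is on the order of a million Medicare hospital stays in a single month, which is why a 1.5 percent rate translates into such a large absolute toll.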
As the IOM notes, “Between the health care we have and the care we could have lies not just a gap, but a chasm.”
There is an underlying belief on the part of policy makers and public and private payers that the focus on quality is best addressed through payment reform. Let me state as clearly as I possibly can: That is wrong. It is a classic example of the old expression: “When you have a hammer, everything looks like a nail.” Changes in payment rate structures, penalties for “never events,” and the like can cause some changes to occur. Their main political advantage is that they give the impression of action, and their major financial advantage is a shift in risk from government and private payers to health care providers.