My Patient’s Keeper

Six years ago, my husband saved my life.

I had a severe allergic reaction to a medicine in the hospital in the middle of the night; he ran for the nurse. As for me, despite being a doctor myself, I couldn’t even breathe, let alone call for help. And so, even before that night and certainly since, I have advised my patients not to be alone in the hospital if they can help it. I don’t think anyone should be alone even for office visits. There is too much opportunity to misunderstand the doctor, forget to ask the right questions, or misremember the answers.

National organizations like the American Cancer Society give the same advice: when possible, bring a friend.

As a patient safety researcher and an advocate for high-quality healthcare, however, I find giving this advice distasteful. Is a permanent sidekick really the best we can do to keep patients safe? What about those who are already vulnerable because they don’t have such a superhero in their lives, or whose superhero has to punch in at some inflexible job?

Let’s take another look at the circumstances that led to my husband shouting, panic-stricken, in the hallway. The medicine I was given is known to cause severe allergic reactions. This risk is so well established, in fact, that the standard protocol for giving the medication is to give a small test dose first. It was the test dose that nearly did me in. The hospital followed standard procedure by giving me the test dose, but it chose to do so at midnight, when the hospital is staffed by a skeleton crew, even though the medicine wasn’t urgent. Strike one for safety.

The nurse turned on the infusion and then left the room, politely closing the door to keep the room quiet and turning out the light—even though the whole intent of the test dose is to watch to be sure the patient tolerates the medication. Strike two.

So when I started to have difficulty breathing, it was in a dark and closed room. Had I been on my own, I would have had to shout for help, impossible when I felt as though an iron band were squeezing inexorably tighter around my chest, or push the call button for the nurse. That button? It was on a little remote control that continually fell to the floor, and it had no night illumination, making it impossible to find in the dark. Strike three.

Perhaps the most famous graphic representation of medical error is a block of Swiss cheese. The “Swiss cheese model,” proposed by James Reason in 1990, illustrates the fact that it typically takes many failures (holes) lining up to allow harm to come to a patient. Conversely, plugging a hole anywhere along the line keeps the patient safe. The basic goal of patient safety science is to convert the Swiss cheese into Cheddar: no holes. My adverse reaction to the medication was unavoidable, and unpredictable in my particular case. But the fact that eventually someone would have had that bad reaction? Inevitable. Therefore, the environment in which it occurred should have been designed differently: the nurse call button was virtually set up for failure, nighttime staffing was thin, and the protocol was inadequately specified. Fixing any one of them would have eliminated the need for an attentive husband.

What makes safety science so challenging is that bad things happen to patients all the time without there being any flaw in the system, while conversely, patients routinely survive catastrophic or cascading system failures. Distinguishing the unavoidable from the avoidable, finding the hidden system failures, and designing for safety is increasingly what hospitals are trying to do.

My own institution, Yale-New Haven Hospital, cut obstetric adverse outcome rates by a third by standardizing protocols, increasing nighttime coverage, putting every staff member through team training, and hiring a patient safety nurse. Cincinnati Children’s Hospital has cut hospital deaths by a third over the past decade by avoiding hospital-acquired infections, implementing an electronic medical record, identifying worsening patients faster, and improving staff training.

We can design for safety outside the hospital, too. In the UK in the 1990s, a leading method of suicide was overdose of paracetamol (the British equivalent of acetaminophen, or Tylenol), which caused 150 to 200 deaths per year. In 1998, legislation limited the total number of tablets that could be sold at a time, and pharmacies began packaging them in blister packs instead of loosely in bottles. It takes long enough to push dozens of tablets out of blister packs that many impulsive would-be suicides reconsider. Sure enough, since 1998, overdoses have plummeted and the annual death rate has been cut in half, saving hundreds of lives.

The potential impact for patients is enormous. Yet nationally, for every dollar taxpayers spend on basic science research and developing new treatments, we spend just one penny studying how to deliver those extraordinary new treatments safely, effectively, to the right patients, at the right time, and according to their preferences. We need basic science and clinical research to remain strong and vibrant; they are the only way to dramatically change healthcare and outcomes in the long run. But we need to pay more attention to implementation science, too: it’s the only way to dramatically change healthcare right now. Because not every patient has a husband who can run for help at midnight.

Leora Horwitz, a primary care internist, is an assistant professor at the Yale School of Medicine. This post was written through The Op-Ed Project’s Public Voices Fellowship Program at Yale University.