Here’s a quiz for Patient Safety Awareness Week (and after): The number of Americans who die annually from preventable medical errors is:
A) 44,000-98,000, according to the Institute of Medicine
B) None, thanks to the Institute for Healthcare Improvement’s “100,000 Lives Campaign”
D) No one’s really counting
The correct answer is “D,” but I confess it’s a trick question. With a slight twist in wording, the right answer could also be “C,” from an as-yet-unpublished new estimate with a unique methodology. (More below.) The main point of this quiz, however, is to explore what we actually know about the toll taken by medical mistakes and to dispel some of the confusion about the magnitude of harm.
Answer “A” refers to a figure in the oft-quoted (and often incorrectly quoted) 1999 IOM report, To Err is Human. The IOM estimate of 44,000-98,000 deaths and more than 1 million injuries each year refers only to preventable errors, and then just in hospitals. The quiz asked about all preventable harm. As the sophistication and intensity of outpatient care has increased, so, too, have the potential dangers.
For example, the Centers for Disease Control and Prevention (CDC) reported in 2011 that the majority of central-line associated bloodstream infections (CLABSIs) “are now occurring outside of ICUs, many outside of hospitals altogether, especially in outpatient dialysis clinics.” CLABSIs are both highly expensive and deadly, killing up to 25 percent of those who contract them. Even in garden-variety primary care, one analysis found a harm rate of one per 35 consultations, with medication errors the most common problem. To Err is Human was silent about those types of hazards.
The Medicare program is betting on a new course of action to curb what one medical journal has dubbed an “epidemic” of uncontrolled patient harm.
The effort is pegged to the success of a little-known entity called a “hospital engagement network” (HEN). In December, the government selected 26 HENs and charged them with preventing more than 60,000 deaths and 1.8 million injuries from so-called “hospital-acquired conditions” over the next three years. That would be the equivalent of eliminating all deaths from HIV/AIDS or homicide over the same period.
Despite those big numbers, and an initial price tag of $218 million, it’s unclear whether the HENs are adequately ambitious or still only pecking away at the patient safety problem. While this is by far the most comprehensive public or private patient safety effort ever attempted in this country, it still aims to eliminate less than half the documented, preventable patient harm.
The inspiration for these networks comes from similar collaborative projects run by the Institute for Healthcare Improvement and other groups. Dr. Donald Berwick, IHI’s founder and president, headed up the Centers for Medicare & Medicaid Services for two years and launched a larger Partnership for Patients that includes the HENs.
In December, the government chose a mix of national and local groups — primarily health systems and hospital organizations — to run individual HENs. Each HEN is charged with spreading safety-improvement innovations that have been proven to work in leading hospitals to others through intensive training programs and technical assistance. Although the program lasts three years, initial HEN contracts are for two years, with an “option year” dependent upon performance.
From the start of the patient safety movement, the field of commercial aviation has been our true north, and rightly so. God willing, 2011 will go down tomorrow as yet another year in which none of the 10 million trips flown by US commercial airlines ended in a fatal crash. In the galaxy of so-called “high reliability organizations,” none shines as brightly as aviation.
How do the airlines achieve this miraculous record? The answer: a mix of dazzling technology, highly trained personnel, widespread standardization, rigorous use of checklists, strict work-hours regulations, and well-functioning systems designed to help the cockpit crew and the industry learn from errors and near misses.
In healthcare, we’ve made some progress in replicating these practices. Thousands of caregivers have been schooled in aviation-style crew resource management, learning to communicate more clearly in crises and tamp down overly steep hierarchies. Many have also gone through simulation training. The use of checklists is increasingly popular. Some hospitals have standardized their ORs and hospital rooms, and new technologies are beginning to catch some errors before they happen. While no one would claim that healthcare is even close to aviation in its approach to (or results in) safety, an optimist can envision a day when it might be.
The tragic story of Air France flight 447 teaches us that even ultra-safe industries are still capable of breathtaking errors, and that the work of learning from mistakes and near misses is never done.
Ms. Madeline Loftus, 24, was just one of the 50 individuals who lost their lives on February 12, 2009 when Continental Flight 3407 crashed in a neighborhood near Buffalo, NY. The NTSB investigation and a frightening PBS Frontline investigation called “Flying Cheap” identified airline industry practices that compromise pilots’ fitness for duty, including severe fatigue, as contributors to the disaster.
The February 2009 Pinnacle/Colgan/Continental airline disaster was not the first in which fatigue was identified as a contributing factor in pilots’ errors and poor performance. Following an October 19, 2004 crash at the Kirksville, Missouri Regional Airport that killed 15, the NTSB noted that the pilots had inadequate overnight rest periods, early report-for-duty times, and too many consecutive flight legs. In response, the NTSB recommended in 2006 that the FAA amend its regulations related to crew hours of service and require the airlines to develop fatigue management programs. The FAA responded in September 2010 by proposing comprehensive improvements and answering thousands of comments on them. The final result is what was announced this week by the FAA.
My wife was lying in the back of an ambulance, dazed and bloody, while I sat in the front, distraught and distracted. We had been bicycling in a quiet neighborhood in southern Maine when she hit the handbrakes too hard and catapulted over the handlebars, turning our first day of vacation into a race to the nearest hospital.
The anxiety when a loved one is injured is compounded when you know just how risky making things better can get. As a long-time advocate for patient safety, my interest in the topic has always been passionate, but never personal. Now, as Susan was being rushed into the emergency room, I wanted to keep it that way. “Wife of patient safety expert is victim” was a headline I deeply hoped to avoid.
In the weeks after the accident, we spent time at a 50-bed hospital in Maine; a Boston teaching hospital where Susan was transferred with a small vertebra fracture at the base of her neck and broken bones in her left elbow and hand; and a large community hospital near our suburban Chicago home. There were plenty of opportunities for bad things to happen – but nothing did. As far as I could tell, we didn’t even experience any near misses.
What went right? After all, though our health care system knows how to prevent errors that kill 44,000 to 98,000 people in hospitals each year, that death toll has remained stubbornly constant. Based on personal and professional observations, I’d simplify the formula that kept Susan safe into three variables: consciousness, culture and cash.
Pat Mastors, a patient safety advocate, has written a clever blog post called, “A Few More Minutes with Andy Rooney.” Channeling the curmudgeonly tones of a 60 Minutes commentary, it begins:
I died last week, just a month after I said goodbye to you all from this very desk. I had a long and happy life – well, as happy as a cranky old guy could ever be. 92. Not bad. And gotta say, seeing my Margie, and Walter, and all my old friends again is great.
But then I read what killed me: “serious complications following minor surgery.”
Now what the heck is that?
The blog goes on to have Rooney ask for someone to find out what actually killed him. This has offended some respondents who, blinded by their own biases, equate a writer using a celebrity’s death to push for information that could improve care with accusing his physicians of negligence or hauling Rooney’s family into court to publicly disclose private details.
Don’t you hate people like that?
OK, that was a cheap Andy Rooney imitation. But as it happens, I did have a phone conversation with Rooney about patient safety. It came right after the Institute of Medicine released its landmark report, To Err is Human, in November, 1999. The appalling toll of medical errors wasn’t exactly a secret back then, but doctors and hospitals had gotten used to publicly tut-tutting about the “price we pay” for medical progress every time a new study came out and then going back to doing exactly what they’d been doing before.
In my last post, I discussed the role of physicians in patient safety in the US and UK. Today, I’m going to widen the lens to consider how the culture and structure of the two healthcare systems have influenced their safety efforts. What I’ve discovered since arriving in London in June has surprised me, and helped me understand what has and hasn’t worked in America.
Before I arrived here, I assumed that the UK had a major advantage when it came to improving patient safety and quality. After all, a single-payer system means less chaos and fragmentation—one payer, one regulator; no muss, no fuss. But this can be more curse than blessing, because it creates a tendency to favor top-down solutions that—as we keep learning in patient safety—simply don’t work very well.
To understand why, let’s start with a short riff on complexity, one of the hottest topics in healthcare policy.
Complexity R Us
Complexity theory is the branch of management thinking that holds that large organizations don’t operate like predictable and static machines, in which Inputs A and B predictably lead to Result C. Rather, organizations operate as “complex adaptive systems,” with unpredictability and non-linearity the rule, not the exception. It’s more Italy (without the wild parties) than Switzerland.
Complexity theory divides decisions and problems into three general categories: simple, complicated, and complex. Simple problems are ones in which the inputs and outputs are known; they can be managed by following a recipe or a set of rules. Baking a cake is a simple problem; so is choosing the right antibiotics to treat pneumonia. Complicated problems involve substantial uncertainties: the solutions may not be known, but they are potentially knowable. An example is designing a rocket ship to fly to the moon—if you were working for NASA in 1962 and heard President Kennedy declare a moon landing as a national goal, you probably believed it was not going to be easy but, with enough brainpower and resources, it could be achieved. Finally, complex problems are often likened to raising a child. While we may have a general sense of what works, the actual formula for success is, alas, unknowable (if you’re not a parent, trust me on this).
According to a CMS spokesperson, two violations relating to infection control and emergency care issues were “so serious that they triggered ‘immediate jeopardy’” for the hospital. In fact, the reasons for the citation were so heinous that CMS won’t even disclose them to the public until Parkland submits plans on how to fix those super secret problems. That’s the subject of another WTF discussion, but we’ll save that one for later.
The event triggering the CMS investigation involved a psychiatric patient with schizophrenia and a heart condition who died while in the emergency department. The report states that the technicians who subdued the man did not have “effective training” and that the patient was not closely monitored before his death.
According to the article and an interview with Parkland’s Chief Medical Officer, Parkland was cited for several reasons. Based on what I can gather from the article, two of the hospital’s citations were for:
– Moving patients with less serious symptoms to a separate urgent care center for medical screening
– Staff touching a patient and then touching other surfaces that people would come into contact with
When I completed my overnight shift and left the Medical ICU the morning of July 1, I raised my arms victoriously. I uttered, “Finally, internship is done!” I may have been one of the last to speak such words.
As of July 1, 2011, intern year forever changed. In the world of medicine the first year of residency, or intern year, is when doctors earn their stripes. Traditionally it is the most demanding year in a decade-long quest to become a practicing physician. But this year, the Accreditation Council for Graduate Medical Education (ACGME) mandated that interns can no longer work more than 16 hours straight, and must have 10 hours off between shifts. Second- and third-year residents can still work 28-hour shifts, but the 30-hour shift is gone for interns.
To the outsider, this may seem like a common sense change that would only improve patient safety. Within the medical field, however, this change is arguably the most controversial in the history of medical education.
Advocates believe these duty-hour modifications will decrease medical errors and improve unacceptable working conditions for residents. ACGME officials still believe that residents should be able to handle the rigorous hours and workload, but that launching the least experienced physicians — new interns — into those demanding conditions just days after medical school is inappropriate and unsafe. The general public, too, favors the new changes.
Around the world and now in the United States, there is a broadening discussion of how best to proceed down the path of approving and getting to market medicines called biosimilars. Biosimilars are non-identical copies of next-generation medicines known as biologics. As the U.S. begins establishing new guidance for biosimilars, regulators and legislators should look to the European Union model on guidance policy and approve these important, often life-saving drugs when they are proven to be safe for the patients they are intended to heal.
There is justified debate and concern, both here in the EU and in other nations, about how best to introduce biosimilars into the marketplace. We know from the science that it’s immensely more difficult to produce a biosimilar than a generic version of a traditional drug. And with this increased difficulty come increased risks to patients in terms of efficacy and drug-to-drug interactions. However, by adding biosimilars to the treatment regimen, we can hope to see long-term therapy at the lower costs that biosimilars may be able to provide. This is important to every country struggling to meet the demands of an aging population and rising health care costs.
As policymakers, we find this dilemma made easier because our focus must always be on patient safety. Citizens trust that their nation’s regulatory bodies are looking out for their best interests and doing their due diligence to ensure a safe drug supply. So patient safety is our starting point, our ending point, and our path along the way.