
The Crash of Air France 447: Lessons for Patient Safety

From the start of the patient safety movement, the field of commercial aviation has been our true north, and rightly so. God willing, 2011 will go down as yet another year in which none of the roughly 10 million flights by US commercial airlines ended in a fatal crash. In the galaxy of so-called “high reliability organizations,” none shines as brightly as aviation.

How do the airlines achieve this miraculous record? The answer: a mix of dazzling technology, highly trained personnel, widespread standardization, rigorous use of checklists, strict work-hours regulations, and well-functioning systems designed to help cockpit crews and the industry learn from errors and near misses.

In healthcare, we’ve made some progress in replicating these practices. Thousands of caregivers have been schooled in aviation-style crew resource management, learning to communicate more clearly in crises and tamp down overly steep hierarchies. Many have also gone through simulation training. The use of checklists is increasingly popular. Some hospitals have standardized their ORs and hospital rooms, and new technologies are beginning to catch some errors before they happen. While no one would claim that healthcare is even close to aviation in its approach to (or results in) safety, an optimist can envision a day when it might be.

The tragic story of Air France flight 447 teaches us that even ultra-safe industries remain capable of breathtaking errors, and that the work of learning from mistakes and near misses is never done.

Air France 447 was the Rio de Janeiro to Paris flight that disappeared over the Atlantic Ocean on June 1, 2009. Because the “black box” was not recovered during the initial searches, the only clues to how an Airbus A330 could plummet into the sea were 24 automatic messages sent by the plane’s flight computer to a maintenance computer system in Paris. The messages showed that the plane’s airspeed sensors had malfunctioned and that the autopilot had disengaged. With the black box seemingly unrecoverable (its acoustic pinger stopped transmitting after a few months, and the seabed near the crash site lies more than two miles deep), the aviation industry resigned itself to the likelihood that the crash would remain a mystery forever.

Miraculously, in April 2011, a salvage boat recovered the plane’s black boxes, and their contents reveal precisely what happened on the night Flight 447 vanished, killing all 228 people on board. The most gripping article I read this year – in Popular Mechanics, which seemed like an unlikely place for high drama – reconstructs the events, with most of the narrative coming from the pilots themselves.

We now know that AF 447 was doomed by a series of events and decisions that hew perfectly to James Reason’s famous “Swiss cheese” model of error causation, in which no single mistake is enough to cause a catastrophic failure. Rather, catastrophe results when multiple errors slip through the holes in a series of imperfect protections (the “layers of Swiss cheese”), combining to cause terrible harm.

In a nutshell, the first problem – which began the tragic chain of errors – was the crew’s decision to fly straight into a mammoth thunderstorm, in an equatorial region known as the “intertropical convergence zone,” where such storms are common. This may have been an example of what safety expert Edward Tenner calls a “revenge effect”: safer systems cause people to become complacent about risks and engage in more dangerous acts (the usual example is that safer cars lead people to drive faster). (My interview with Tenner is here.) We’ll never know why the pilots chose that route, but we do know that several other planes flew around the worst of the storm that night.

Since commercial pilots are not permitted to fly more than eight hours consecutively, on this 13-hour flight the captain, Marc Dubois, left the cockpit for a nap about two hours into the flight. (The most chilling passage of the Popular Mechanics article: “At 2:02 am, the captain leaves the flight deck to take a nap. Within 15 minutes, everyone aboard the plane will be dead.”) This left the plane in the hands of the two co-pilots, David Robert, 37, and Pierre-Cédric Bonin, 32. Bonin, the least experienced of the three, was the “pilot flying,” in control of the aircraft.

The plane was now in the middle of the thunderstorm, and its pitot tubes – four-inch probes that sit outside the fuselage beneath the cockpit and measure airspeed – iced over, causing the airspeed indicators to fail. Even worse, robbed of its usual inputs, the plane’s autopilot disengaged, leaving the co-pilots to fly the old-fashioned way, but in the dark and without airspeed information.

Moments later, Bonin made a terrible – and, to many experts, inexplicable – decision: he pulled back on the controls, lifting the plane’s nose and causing it to stall in the thin air six miles up. (Note that the term “stall” is misleading here. As James Fallows points out in the Atlantic, the Airbus’s engines continued to work just fine; Bonin’s inappropriate climb created an “aerodynamic stall,” in which the angle of the wings to the oncoming air no longer generated enough lift to keep the plane airborne.) We will never know exactly why Bonin did this, or why he kept doing it until it was too late; experts speculate that he may have been overwhelmed by the storm, the sound of ice crystals pelting the fuselage, and the two-second alarm that signaled the disengagement of the autopilot, leaving him to do something that today’s pilots rarely do: fly a plane by hand outside of takeoff and landing.

The technology on modern planes is so sophisticated that these aircraft have become virtually crash-proof – assuming, that is, that the pilots don’t mess things up. There’s even a joke that a modern aircraft should have a pilot and a dog in the cockpit: the pilot to watch the controls, and the dog to bite the pilot if he tries to touch them. But while today’s jetliners can nearly fly themselves, these sophisticated technologies can have unintended consequences, just as they do in healthcare. As the Popular Mechanics and Atlantic pieces both explain, Air France 447 fell victim to several of them, each a layer of Swiss cheese.

First, the pilots may have assumed that 447 could not stall, because the Airbus’s computers are designed to prevent this from happening. The crew may not have realized that most of the built-in protections were bypassed when the plane flipped out of autopilot.

Second, on most commercial airliners, the right- and left-seat controls are linked; on such a plane, Robert would have been able to detect Bonin’s mistaken decision to lift the plane’s nose and correct it. For unclear reasons, Airbus’s designers left the 330’s controls unlinked, which allowed Robert, and later the captain, Dubois (who returned to the cockpit as the plane was falling), to remain unaware of Bonin’s error until it was too late to fix it.

Third, the technological sophistication of modern aircraft means that new pilots are no longer well trained in flying without the assistance of modern gadgetry. When the computers break down, many young pilots are at a loss. “Some people have a messianic view of software, meaning that it will save us from all our problems,” aviation safety expert Michael Holloway told PBS’s NOVA. “And that’s not rational, or at least it’s not supported by existing evidence.” Many older commercial airline pilots first earned their wings in the military, where they gained experience in flying manually, sometimes without power or while dodging hazards like mountains and missiles. Bonin may have erred because he hadn’t received sufficient training to ensure the correct response.

Even these problems might not have been enough to send an intact modern jetliner into the ocean. But the interaction between the two co-pilots during the moments of crisis demonstrated remarkably poor communication, despite their training in crew resource management. Moreover, there was a marked lack of situational awareness, with everyone focusing on a few small details while ignoring a blaring cockpit alarm that repeated the word “stall” 75 times before the plane crashed.

Bob Helmreich of the University of Texas is probably the aviation safety expert who has done the most to translate aviation’s lessons to healthcare. As he puts it, “we’ve seen accidents where people were actually too busy trying to reprogram the computer when they should have been looking out the window or doing other things.”

You can bet that since the discovery of the black boxes every commercial airline pilot in the world now knows what happened to Flight 447, and airlines and regulators such as the FAA have instituted new mandatory training requirements. A worldwide directive to replace the pitot tubes with more reliable sensors was quickly issued, and other technological fixes will be put in place as well. Aviation crashes are now so rare that those that do occur lead to rapid analysis and mandatory changes in procedures, technologies and training. Thankfully, this will make a repeat of AF 447 unlikely.

What are the lessons of this terrible tragedy for healthcare? It certainly isn’t that we should abandon aviation as a safety model. But the crash is a cautionary tale of the highest order. We need to ensure that our personnel have the skills to manage crises caused by the malfunction of technologies that they’ve come to rely on. We should continue to push crew resource management training and work on strategies to bolster situational awareness (I haven’t found anything better than the old House of God rule: “In a Code Blue, the first procedure is to take your own pulse.”). We need to redouble our efforts to promote realistic simulation training, and to build systems that allow us to learn from our mistakes and near misses so we don’t repeat them.

Those of us working in patient safety can only hope that one day our system approaches aviation’s safety record. When we do, we will congratulate ourselves for the lives we’ve saved, but the hard work will be far from over. James Reason, who calls safety a “dynamic non-event,” has pointed to the risks of complacency even in very safe systems. “If eternal vigilance is the price of liberty,” writes Reason, “then chronic unease is the price of safety.” The tragedy of Air France 447 teaches us that the quest for safety is never ending.

Robert Wachter, MD, is widely regarded as a leading figure in the modern patient safety movement. Together with Dr. Lee Goldman, he coined the term “hospitalist” in an influential 1996 essay in The New England Journal of Medicine. His posts appear semi-regularly on THCB and on his own blog, Wachter’s World.

7 replies

  1. I think it is interesting that the beginning of the article holds the aviation industry up on a pedestal, saying how great it is at avoiding tragedies, but it finishes by chastising pilots for not being able to deal with flying a plane in unusual circumstances. Obviously pilots do know how to fly planes in unusual circumstances, given how rarely planes crash horribly. While it does happen occasionally, aviation is an industry that is so safe we rarely even think about a plane crashing. What other industry can we say that about?

  2. I googled Blumenthal and litigation or malpractice, and I did not find a link explaining what you are referring to – could you please provide a link?

  3. Groopman and Hartzband decry the EHR as a game of Where’s Waldo when doctors are trying to find key information. The modern medical record, an electronic medical device, is an impediment to fast and slow thinking alike and is the cause of the increased incidence of missed diagnoses. You merely have to look at the med mal lawsuit involving Dr. David Blumenthal, former ONC chief, to witness how the EHR burned him; or rather, did not save the patient from catastrophe.

  4. Nice post, Bob (scary though). The other rule that pilots learn in a crisis is “First of all, keep flying the plane…” The analogy in health care is to “keep track of the patient.” Too many easily avoided “crashes” in patient care occur because the patient’s progress, or lack of it, goes unnoticed, especially post-discharge and in fragmented outpatient care.

  5. ” We need to ensure that our personnel have the skills to manage crises caused by the malfunction of technologies that they’ve come to rely on. We should continue to push crew resource management training and work on strategies to bolster situational awareness (I haven’t found anything better than the old House of God rule: “In a Code Blue, the first procedure is to take your own pulse.”) We need to redouble our efforts to promote realistic simulation training, and to build systems that allow us to learn from our mistakes and near misses so we don’t repeat them.”
    __

    This is precisely a core takeaway point from Drs. Weed & Weed’s excellent book “Medicine in Denial” (now available on Amazon; they sent me a pre-pub proof, which is now loaded down with yellow marker, red-pen margin notes and stickies; it’s excellent. Highly recommended).

    to wit:

    “The minds of physicians do not have command of all the medical knowledge involved. Nor do physicians have the time to carry out the intricate matching of hundreds of findings on the patient with all the medical knowledge relevant to interpreting those findings. External tools are thus essential. But the tools are trustworthy only when their design and use conform to rigorous standards of care for managing clinical information.

    Without the necessary standards and tools, the matching process is fatally compromised. Physicians resort to a shortcut process of highly educated guesswork…

    …We use the term “guesses” because these key initial judgments are made on the fly, during the patient encounter, based on whatever enters the physician’s mind at the time. That mind may be highly informed and intelligent, but inevitably its judgments reflect limited personal knowledge and experience, and limited time for thought. Euphemistically termed “clinical judgment,” physician thought processes cause a fatal voltage drop in transmitting complex knowledge and applying it to patient data. The outcome is that the entire health care enterprise lacks a secure foundation.

    Equally insecure are the complex processes built on that foundation: decision making, execution, feedback and corrective action over time. Responsibility for all these processes falls on the mind of the physician. Here again the mind lacks external tools and accounting standards for managing clinical information.” [pp 2-3]
    _

    I’m now triangulating all this stuff with Kahneman’s “Thinking, Fast and Slow,” Groopman’s “How Doctors Think,” Sperber & Mercier’s “Why Do Humans Reason?” etc. Fascinating.

    I’ll be citing your article on my REC blog. Fits right in with my latest topics.

  6. The technology of the aircraft befuddled the pilots. This compares to the confusion of healthcare professionals in hospitals trying to use non-approved medical electronic record devices that remain poorly usable and errorgenic. Think fast or think slow?