I’m well aware that a good fraction of the people in this country – let’s call them Rush fans – spend their lives furious at the New York Times. I am not one of them. I love the Grey Lady; it would be high on my list of things to bring to a desert island. But every now and then, the paper screws up, and it did so in a big way in its recent piece on the federal program to promote healthcare information technology (HIT).
Let’s stipulate that the Federal government’s $20 billion incentive program (called “HITECH”), designed to drive the adoption of electronic health records, is not perfect. Medicare’s “Meaningful Use” rules – the standards that hospitals’ and clinics’ EHRs must meet to qualify for bonus payments – have been criticized as both too soft and too restrictive. (You know the rules are probably about right when the critiques come from both directions.) Interoperability remains a Holy Grail. And everybody appreciates that today’s HIT systems remain clunky and relatively user-unfriendly. Even Epic, the Golden Child among electronic medical record systems, has been characterized as the “Cream of the Crap.”
These should be the best of times for the patient safety movement. After all, it was concerns over medical mistakes that launched the transformation of our delivery and payment models, from one focused on volume to one that rewards performance. The new system (currently a work-in-progress) promises to put skin in the patient safety game as never before.
Yet I’ve never been more worried about the safety movement than I am today. My fear is that we will look back on the years between 2000 and 2012 as the Golden Era of Patient Safety, which would be okay if we’d fixed all the problems. But we have not.
A little history will help illuminate my concerns. The modern patient safety movement began with the December 1999 publication of the IOM report on medical errors, which famously documented 44,000-98,000 deaths per year in the U.S. from medical mistakes, the equivalent of a large airplane crash each day. (To illustrate the contrast, we just passed the four-year mark since the last death in a U.S. commercial airline accident.) The IOM report sparked dozens of initiatives designed to improve safety: changes in accreditation standards, new educational requirements, public reporting, promotion of healthcare information technology, and more. It also spawned parallel movements focused on improving quality and patient experience.
As I walk around UCSF Medical Center today, I see an organization transformed by this new focus on improvement. In the patient safety arena, we deeply dissect 2-3 cases per month using a technique called Root Cause Analysis that I first heard about in 1999. The results of these analyses fuel “system changes” – also a foreign concept to clinicians until recently. We document and deliver care via a state-of-the-art computerized system. Our students and residents learn about QI and safety, and most complete a meaningful improvement project during their training. We no longer receive two years’ notice of a Joint Commission accreditation visit; we receive 20 minutes’ notice. While the national evidence of improvement is mixed, our experience at UCSF reassures me: we’ve seen lower infection rates, fewer falls, fewer medication errors, fewer readmissions, better-trained clinicians, and better systems. In short, we have an organization that is much better at getting better than it was a decade ago.
The debate over pay for performance in healthcare gets progressively more interesting, and confusing. And, with Medicare’s recent launch of its value-based purchasing and readmission penalty programs, the debate is no longer theoretical.
If we weren’t talking about the central policy question of a field as important as healthcare, we could call this a draw and move on. But the stakes are too high, so it’s worth taking a moment to review what we know.
In the U.S., the main test of P4P has been Medicare’s Hospital Quality Incentive Demonstration (HQID) program. A recent analysis of this program, which offered relatively small performance-based bonuses to a sample of 252 hospitals in the large Premier network, found that, after 6 years, hospitals in the intervention group had no better outcomes than those (3363 hospitals) in the control arm. Prior papers from the HQID demonstrated mild improvements in adherence to some process measures, but – as in a disconcerting number of studies – this did not translate into meaningful improvements in hard outcomes such as mortality.
The human capacity to deny reality is one of our defining characteristics. Evolutionarily, it has often served us well, inspiring us to press onward against long odds. Without denial, the American settlers might have aborted their westward trek somewhere around Pittsburgh; Steve Jobs might have thrown up his hands after the demise of the Lisa; and Martin Luther King’s famous speech might have been entitled, “I Have a Strategic Plan and a Draft Budget.”
Yet when danger or failure is just around the corner, denial can be dysfunctional (see Karl Rove on Fox News), even suicidal (see climate change and Superstorm Sandy).
Healthcare is no exception. Emerging evidence suggests that patients and their surrogates frequently engage in massive denial when it comes to prognosis near the end of life. While understandable – denial is often the way that people remove the “less” from “hopeless” – it can lead to terrible decisions, with bad consequences for both the individual patient and society.
First, there is evidence that individuals charged with making decisions for their loved ones (“surrogate decision-makers”) simply don’t believe that physicians can prognosticate accurately. In a 2009 study, UCSF’s Lucas Zier found that nearly two-thirds of surrogates gave little credence to their physicians’ predictions of futility. Driven by this skepticism, one in three would elect continued life-sustaining treatments even after the doctor offered their loved one a less than 1% chance of survival.
In a more recent study by Zier and colleagues, 80 surrogates of critically ill patients were given hypothetical prognostic statements regarding their loved ones. The statements ranged from “he will definitely survive” to “he will definitely not survive,” with 14 statements in between (including some that offered percentages, such as “he has a [10%, or a 50%, or a 90%] chance of survival”). After hearing these statements, surrogates were asked to interpret them and offer their own survival estimates.
When the prognosis was optimistic (“definitely survive” or “90%” survival odds), surrogates’ estimates were in sync with those of the physicians. But when the prognosis was pessimistic (“definitely not survive” or “he has a 5% chance of surviving”), surrogates’ interpretations took a sharp turn toward optimism. For example, surrogates believed that when the doctor offered a 5% survival chance, the patient’s true chance of living was at least three times that; some thought it was as high as 40%. Remarkably, when asked later to explain this discordance, many surrogates struggled. Said one, “I’m not coming up with good words to explain this [trend] because I was not aware I was doing this.” The authors identified two main themes to explain their findings: surrogates’ need to be optimistic in the face of serious illness (either as a coping mechanism for themselves or to buck up their loved one), and surrogates’ beliefs that their loved one possessed attributes unknown to the physician, attributes that would result in better-than-predicted survival (the “he’s a fighter” argument).
I knew it would happen sooner or later, and earlier this week it finally did.
In 2003 US News & World Report pronounced my hospital, UCSF Medical Center, the 7th best in the nation. That same year, Medicare launched its Hospital Compare website. For the first time, quality measures for patients with pneumonia, heart failure, and heart attack were now instantly available on the Internet. While we performed well on many of the Medicare measures, we were mediocre on some. And on one of them – the percent of hospitalized pneumonia patients who received pneumococcal vaccination prior to discharge – we were abysmal, getting it right only 10% of the time.
Here we were, a billion-dollar university hospital, one of healthcare’s true Meccas, and we couldn’t figure out how to give patients a simple vaccine. Trying to inspire my colleagues to tackle this and other QI projects with the passion they require, I appealed to both physicians’ duty to patients and our innate competitiveness. US News & World Report might now consider us one of the top ten hospitals in the country, I said, but that was largely a reputational contest. How long do you think it’ll be before these publicly reported quality measures factor heavily into the US News rankings? Or before our reputation is actually determined by real performance data?
It’s been said that losing weight is much harder than kicking cigarettes or alcohol. After all, because one doesn’t need to smoke or drink, the offending substances can simply be kept out of sight (if not out of mind). Dieting, on the other hand, involves changing the way a person does something we all must do every day.
It’s no surprise, then, that reports of problematic doctor interactions with social media are popping up with metronomic regularity. When it comes to the smorgasbord of information coursing through those Internet tubes, increasingly, we all have to eat. And that makes drawing boundaries a challenge.
While most early reports on the perils of social media concerned inappropriate postings by physicians, a new hazard has emerged recently: digital distraction. On WebM&M, the AHRQ-sponsored online patient safety journal that I edit, we recently presented a case in which a resident was asked by her attending to discontinue a patient’s Coumadin. As she turned to her smart phone to enter the order, she was pinged with an invitation to a party. By the time she had RSVPed, she had forgotten about the blood thinner – and neglected to stop it. The patient suffered a near-fatal pericardial hemorrhage.
In a commentary accompanying the case, the impossibly energetic John Halamka, ED doctor and Harvard’s Chief Information Officer, described all of the things that his hospital, Beth Israel Deaconess Medical Center, is considering to address this issue. It’s not easy: whereas the hospital owns the Electronic Health Record and can manage access to it, the vast majority of mobile devices in the hospital today – at BI and everywhere else – are the personal property of the users. So Halamka is testing various policies to place some digital distance between the personal and professional, including blocking personal email and certain social networking sites while on duty. He’s even investigating the possibility of issuing docs and nurses hospital-owned mobile devices at the start of shifts, collecting them at the end.
From the start of the patient safety movement, the field of commercial aviation has been our true north, and rightly so. God willing, 2011 will go down tomorrow as yet another year in which none of the 10 million trips flown by US commercial airlines ended in a fatal crash. In the galaxy of so-called “high reliability organizations,” none shines as brightly as aviation.
How do the airlines achieve this miraculous record? The answer: a mix of dazzling technology, highly trained personnel, widespread standardization, rigorous use of checklists, strict work-hours regulations, and well-functioning systems designed to help the cockpit crew and the industry learn from errors and near misses.
In healthcare, we’ve made some progress in replicating these practices. Thousands of caregivers have been schooled in aviation-style crew resource management, learning to communicate more clearly in crises and tamp down overly steep hierarchies. Many have also gone through simulation training. The use of checklists is increasingly popular. Some hospitals have standardized their ORs and hospital rooms, and new technologies are beginning to catch some errors before they happen. While no one would claim that healthcare is even close to aviation in its approach to (or results in) safety, an optimist can envision a day when it might be.
The tragic story of Air France flight 447 teaches us that even ultra-safe industries are still capable of breathtaking errors, and that the work of learning from mistakes and near misses is never done.
I’ve heard a lot of shocking things since arriving in England five months ago on my sabbatical. But nothing has had me more gobsmacked than when, earlier this month, I was chatting with James Morrow, a Cambridge-area general practitioner. We were talking about physicians’ salaries in the UK and he casually mentioned that he was the primary breadwinner in his family.
His wife, you see, is a surgeon.
This more than any other factoid captures the Alice in Wonderland world of GPs here in England. Yes—and it’s a good thing you’re sitting down—the average GP makes about 20% more than the average subspecialist (though the specialists sometimes earn more through private practice—more on this in a later blog). This is important in and of itself, but the pay is also a metaphor for a well-considered decision by the National Health Service (NHS) nearly a decade ago to nurture a contented, surprisingly independent primary care workforce with strong incentives to improve quality.
Appreciating the magnitude of this decision and its relevance to the US healthcare system requires a little historical perspective.
As I mentioned in a previous blog, the British system cleaves the world of primary care and everything else much more starkly than we do in the States. All the specialists (the “ologists,” as they like to call them) are based in hospitals, where they have their outpatient practices, perform their procedures, and staff their specialty wards. Primary care in the community is delivered by GPs, who resemble our family practitioners in training and disposition, but also differ from them in many ways.
In my last post, I discussed the role of physicians in patient safety in the US and UK. Today, I’m going to widen the lens to consider how the culture and structure of the two healthcare systems have influenced their safety efforts. What I’ve discovered since arriving in London in June has surprised me, and helped me understand what has and hasn’t worked in America.
Before I arrived here, I assumed that the UK had a major advantage when it came to improving patient safety and quality. After all, a single-payer system means less chaos and fragmentation—one payer, one regulator; no muss, no fuss. But this can be more curse than blessing, because it creates a tendency to favor top-down solutions that—as we keep learning in patient safety—simply don’t work very well.
To understand why, let’s start with a short riff on complexity, one of the hottest topics in healthcare policy.
Complexity R Us
Complexity theory is the branch of management thinking that holds that large organizations don’t operate like predictable and static machines, in which Inputs A and B predictably lead to Result C. Rather, organizations operate as “complex adaptive systems,” with unpredictability and non-linearity the rule, not the exception. It’s more Italy (without the wild parties) than Switzerland.
Complexity theory divides decisions and problems into three general categories: simple, complicated, and complex. Simple problems are ones in which the inputs and outputs are known; they can be managed by following a recipe or a set of rules. Baking a cake is a simple problem; so is choosing the right antibiotics to treat pneumonia. Complicated problems involve substantial uncertainties: the solutions may not be known, but they are potentially knowable. An example is designing a rocket ship to fly to the moon—if you were working for NASA in 1962 and heard President Kennedy declare a moon landing as a national goal, you probably believed it was not going to be easy but, with enough brainpower and resources, it could be achieved. Finally, complex problems are often likened to raising a child. While we may have a general sense of what works, the actual formula for success is, alas, unknowable (if you’re not a parent, trust me on this).
“Don’t get sick in July!” We’ve all heard patients and family members say this – part declaration, part wishful thinking – in reference to the perceived summertime risks of teaching hospitals. When I hear it, I usually respond with comforting bromides like “robust supervision” and “cream of the crop.” But deep down, if I had the choice of entering a teaching hospital in July or April, I’d choose the latter.
This preference comes partly from my recollections of my own training experience. The day before I began my residency at UCSF, my entire intern class gathered to meet our new bosses. We were on pins and needles – laughing at jokes that weren’t really funny, suspiciously eyeing our colleagues, whose admission to the program (unlike our own) could not be explained by clerical error. The chief of service, Greg Fitz, a brilliant gastroenterologist with a disarming “aw shucks” manner and a Southern drawl (he’s now Dean at UT Southwestern), stood to address us.
“I know you’re all nervous,” he said, catching our collective mental drift. “But don’t worry. If we can turn bread mold into penicillin, we can turn you guys into doctors.”
I was only partly reassured.
The next day, I began my internship on the wards at San Francisco General Hospital. I picked up the service from a graduating intern. His sign out to me was pithy. “Sucker!” he shouted gleefully, as he jammed his beeper into my abdomen like a football. Panicked, I managed to survive my first few weeks on the wards without killing any patients.