Here’s a quiz for Patient Safety Awareness Week (and after): The number of Americans who die annually from preventable medical errors is:
A) 44,000-98,000, according to the Institute of Medicine
B) None, thanks to the Institute for Healthcare Improvement’s “100,000 Lives Campaign”
C) 90,000, according to a new federal estimate
D) No one’s really counting
The correct answer is “D,” but I confess it’s a trick question. With a slight twist in wording, the right answer could also be “C,” from an as-yet-unpublished new estimate with a unique methodology. (More below.) The main point of this quiz, however, is to explore what we actually know about the toll taken by medical mistakes and to dispel some of the confusion about the magnitude of harm.
Answer “A” refers to a figure in the oft-quoted (and often incorrectly quoted) 1999 IOM report, To Err is Human. The IOM estimate of 44,000-98,000 deaths and more than 1 million injuries each year refers only to preventable errors, and then just in hospitals. The quiz asked about all preventable harm. As the sophistication and intensity of outpatient care has increased, so, too, have the potential dangers.
For example, the Centers for Disease Control and Prevention (CDC) reported in 2011 that the majority of central-line associated bloodstream infections (CLABSIs) “are now occurring outside of ICUs, many outside of hospitals altogether, especially in outpatient dialysis clinics.” CLABSIs are both highly expensive to treat and deadly, killing up to 25 percent of those who contract them. Even in garden-variety primary care, one analysis found a harm rate of one per 35 consultations, with medication errors the most common problem. To Err is Human was silent about those types of hazards.
Answer “B” refers to an initiative led by the IHI, whose founder, Dr. Donald Berwick, was one of the driving forces behind the IOM report. However, safe care and best-quality care are not synonymous, even if the title of the IHI campaign blurred the distinction. The results announced in 2006 by IHI did not match up with the IOM either temporally (the campaign lasted just 18 months) or in measures (only three of its six measures were safety-specific outcomes). A successor effort, “5 Million Lives,” similarly aimed at improving care broadly rather than at safety alone.
The lack of significant impact of those efforts, however, can be seen in a 2010 study examining hospitals’ error-reduction progress since the IOM report. It found that “harms remain common, with little evidence of widespread improvement,” despite data showing that focused efforts “can significantly improve safety.”
Answer “C” would be correct if the question referred to preventable deaths in hospitals. The 90,000 figure may yet become one of the most important in health care because of how it was calculated by staff at the Department of Health and Human Services and how it is being used.
Last April, HHS launched a safety improvement campaign called Partnership for Patients. It was an idea promoted by Berwick as acting head of the Centers for Medicare & Medicaid Services, but this campaign is more focused than its IHI cousin. HHS wants to reduce preventable “hospital-acquired conditions” (HACs) by 40 percent by the end of 2013 from the 2010 level. By way of perspective, the 1999 IOM report called for errors to be cut in half over five years and had no impact whatsoever. However, while the IOM relied on do-gooder declamations, HHS is deploying dollars.
In December, the department contracted with 26 “hospital engagement networks” to be safety improvement contractors for individual hospitals that join their group. The HENs will be paid $218 million during the first two years of contracts that contain specific improvement goals and measurable activities to reach them. HHS has budgeted $500 million over three years for the entire project, including efforts to reduce readmissions. To date, 3,835 of the nation’s 5,000 acute-care hospitals have joined a HEN, a step which at a minimum implies acknowledgement of a problem.
Before launching the program, though, the objective of hacking the frequency of HACs had to be expressed as a target related to a measurable numerical starting point. Just taking the middle point of the To Err is Human estimate wouldn’t work. The studies the IOM relied upon are old, the expert chart review methodology used is controversial and the individual types of harm are not adequately detailed. However, aggregating newer studies of individual HACs with different methodologies and different definitions into one credible figure poses a formidable challenge.
For example, another oft-cited study is a CDC report in 2007 that 99,000 Americans die annually from hospital-acquired infections. While the methodology is clear, it’s unclear what percentage of the deaths are preventable, much less how the 2002 data that was analyzed applies to current hospital admissions. More recently, a “global trigger tool” developed by David Classen and colleagues has been used to find “all-cause harm” in hospitals. Using that tool, the HHS Office of the Inspector General found that a hospitalized Medicare patient has a one-in-seven chance of suffering harm, a risk about four-to-seven times greater than in the IOM report. Still, the OIG looked at Medicare beneficiaries (not all patients) and did not estimate how much harm was preventable.
To produce a current estimate of all hospital-related preventable harm, the Agency for Healthcare Research and Quality and other HHS staff harmonized research on nine specific, high-frequency HACs and some severe but less-frequent problems grouped into an “other” category. The bottom line, said agency staff in interviews, is that about 90,000 hospital patients die each year from preventable, treatment-caused injuries.
The Partnership for Patients and the HENs want to reduce that total by 40 percent – or 36,000 lives saved – by the program’s third year. Add in the lives saved during years one and two when hospitals are making incremental progress, and the total is “more than 60,000.” The Partnership also wants to eliminate 1.8 million injuries from HACs and 1.6 million hospital readmissions during that same period.
It’s an ambitious goal, but answer “D” in the quiz – no one’s really counting the number of dead patients – sadly remains the correct one. When the HENs are fully operational, they will be asking member hospitals to measure HACs in a standardized manner. However, different HENs may still have somewhat different measurement methods. Moreover, the extent of patient harm in primary care and in other, far more hazardous outpatient environments remains almost entirely a mystery.
If patients’ lives count, it’s long past time to count, and counter, every type of preventable harm.
Michael Millenson is a Highland Park, IL-based consultant, a visiting scholar at the Kellogg School of Management and the author of “Demanding Medical Excellence: Doctors and Accountability in the Information Age.”
To your point, how do we measure HACs and standardize that process across different systems? A lot of ratings organizations measure complications like renal failure after surgery, but there is huge variability in how different institutions capture this kind of data. This includes everything from preoperative processes that capture Present on Admission (POA) data to how different practitioners define and capture coding of clinical conditions postoperatively.
The second pitfall you bring up is one of correlation. Great, so we figure out that someone has renal failure, but does decreasing that particular complication reduce patient deaths? Or is the latter too unsophisticated a measure now? For example, in the old days, anesthesia complications were measured by how many lives were lost or saved. But now, anesthesia processes are so sophisticated that we see much less actual mortality. So how do we measure success or performance in a way that acknowledges the bar has been raised and we need more sophisticated outcome measures?
Moreover, is it lives saved that matters or quality of life (i.e., renal failure may lead to dependence on dialysis for a percentage of patients who suffer the complication)? One could argue that managing complications and quality ultimately leads to saving lives, but that may not always be true.
Lastly, there’s your great point – so much attention has been focused on surgery and in-hospital care, but that’s because it’s easiest to take on (how many people wash their hands, what’s the 30-day postop complication rate, etc.). But how do we assess quality in outpatient centers and in primary care? And as with teachers, what are the measures we use to assess provider performance when so much of the result rests with the patients and how they manage their lives?