Lotta $$ flowing around health tech services this week. Jessica DaMassa asks me about Alphabet/Google putting $375m into Oscar, Best Buy $800m for GreatCall, no money for med school at NYU & pain for patients in a Netflix movie. All in Health in 2 point 00 minutes!–Matthew Holt
In large part due to $35 billion in Health Information Technology for Economic and Clinical Health (HITECH) Act incentives, more than 80% of acute care hospitals now use EHRs, up from under 10% just 7 years ago. Despite considerable progress, we have not achieved all that was originally envisioned from this transformation, and there have been numerous unexpected adverse consequences (UACs), i.e., unpredictable, emergent problems associated with health IT implementation, use, and maintenance. In 2006, we described a set of UACs associated with use of computer-based provider order entry (CPOE) (see Table 1). Many of these originally identified UACs have not been completely addressed or alleviated, and some have evolved over time (e.g., more/new work, overdependence on technology, and workflow issues). Additionally, new UACs related not just to CPOE but to all aspects of EHR use have emerged over the last decade. We describe six new categories of UACs in this blog and then conclude with three concrete policy recommendations to achieve the promised, transformative effects of health IT.
1. Complete clinical information unavailable at the point of care
Adoption of EHRs was supposed to stimulate a tremendous increase in the availability of patients’ clinical data, anytime, anywhere. This ubiquitous availability depended heavily on the assumption that once clinical data were routinely maintained in a computable format, they could seamlessly be transmitted between, integrated into, and displayed within health care systems’ EHRs, regardless of which developer built each EHR. However, complete clinical information on all patients is not yet available everywhere it is needed.
Patient safety should be a major priority for the United States, and that requires designating a centralized entity or coordinating body to oversee efforts to ensure it. Such centralized oversight is one of the key recommendations of “Free from Harm,” a report published in December by the National Patient Safety Foundation. The report highlights the need to create a safety culture, since preventable medical errors in hospitals are estimated to result in as many as 440,000 deaths annually. That would make it the third leading cause of death – after heart disease and cancer.
A new report by the U.S. Government Accountability Office illuminates the challenges that hospitals face in implementing evidence-based safety practices. One of those challenges – determining which patient safety practices should be implemented – underscores the need for a coordinating entity and resource. The report states: “(Hospital) Officials noted that they face challenges identifying which evidence-based patient safety practices should be implemented in their own hospitals, such as when only limited evidence exists on which practices are effective. For example, officials from one hospital told GAO that the hospital tried several different practices in an effort to reduce patient falls without knowing which, if any, would prove effective.”
What’s more, preventing medical errors in hospitals is only part of the national challenge, as most health care is provided outside of hospital settings: in physicians’ offices and clinics; in outpatient surgical, medical, and imaging centers; and in long-term, hospice, and home-care settings, among others. There are about 1 billion ambulatory visits each year in the United States, compared to 35 million hospital admissions. Those ambulatory settings are subject to medical errors as well. According to studies cited in “Free from Harm,” more than half of annual paid medical malpractice claims were for events in the outpatient setting.
In my opinion, the title of Dr. Koka’s post (“Very Bad Numbers”) is far too inflammatory for a subject that needs to be taken seriously. Dr. Koka’s summary of the approach I took in my JPS study is a reasonable summary, minus a few key points. Preventability of lethal errors is the problematic issue. The nine authors of the Classen paper did postulate that virtually all serious adverse events they found are preventable; I did not pull this out of the air. Preventability is a highly subjective area. A few years ago everyone assumed that hospital-acquired infections were simply the cost of doing business. Now we know that the majority of infections can be prevented. The major difference Dr. Koka and I have is that he wants to rely exclusively on the Landrigan study, which is an excellent and large study, but it is not representative of the nation. It represented hospitals in North Carolina. That state was chosen because it was much more aggressive in efforts to reduce medical harm than the average state in the nation. The OIG study (2010) was in fact an attempt to be representative of the Medicare population across the country, but it covers only Medicare beneficiaries. As I noted in my paper, none of the four studies can stand alone, not even the Landrigan paper.
Hospitals can get overwhelmed by the array of ratings, rankings and scorecards that gauge the quality of care that they provide. Yet when those reports come out, we still scrutinize them, seeking to understand how to improve. This work is only worthwhile, of course, when these rankings are based on valid measures.
Certainly, few rankings receive as much attention as U.S. News & World Report’s annual Best Hospitals list. This year, as we pored over the data, we made a startling discovery: As a whole, Maryland hospitals performed significantly worse on a patient safety metric that counts toward 10 percent of a hospital’s overall score. Just 3 percent of the state’s hospitals received the highest U.S. News score in patient safety — 5 out of 5 — compared to 12 percent of the remaining U.S. hospitals. Similarly, nearly 68 percent of Maryland hospitals, including The Johns Hopkins Hospital, received the worst possible mark — 1 out of 5 — while nationally just 21 percent did. This had been a trend for a few years.
The eminent physicians Martin Samuels and Nortin Hadler have piled onto the patient safety movement, wielding a deft verbal knife along with a questionable command of the facts.
They are the defenders of the “nobility” of medicine against the algorithm-driven “fellow travelers” of the safety movement. On the one side, apparatchiks; on the other, Captain America.
They are the fierce guardians of physician autonomy, albeit mostly against imaginary initiatives to turn doctors into automatons. By sounding a shrill alarm about straw men, however, they duck any need to define appropriate physician accountability.
Finally, as befits nobility, they condescend to their inferiors. How else to explain the tone of their response to the former chief executive officer of Beth Israel Deaconess Medical Center, Paul Levy? As for patients, Samuels and Hadler defend our “humanity.” How…noble.
To me, healing the sick is an act of holiness, not noblesse oblige. Fortunately, we Jews cherish a long tradition of arguing even with God Himself. A famous Talmudic story ends with God acknowledging that even Divine opinion isn’t enough to override the rule of law. Let’s take a closer look at Samuels’s and Hadler’s opinions in relation to the rules of medical evidence.
There are more than 50 in-flight medical emergencies a day on commercial airlines — or one for every 604 flights, according to a study published in 2013.
What are the odds that two emergencies would occur on the exact same flight, above the Atlantic Ocean and hours from the nearest airport?
My colleague Mark, a critical care physician with whom I’d worked as an ICU nurse, and I were traveling to the Middle East for a patient safety conference. We were comfortably tucked into our seats, as he snored next to me.
It must have been about 3 a.m. when I was awakened by an overhead announcement asking for a medical doctor. I nudged Mark, asking him to press his call light.
As the flight attendant approached, I told her that Mark was a doctor.
“And she’s an ICU nurse, and we work together,” he said, gesturing toward me.
Twenty years ago this month, the Boston Globe disclosed that health columnist Betsy Lehman, a 39-year-old mother of two, had been killed by a drug overdose during treatment for breast cancer at the Dana-Farber Cancer Institute. In laying out a grim trail of preventable mistakes at a renowned institution, the Globe prompted local soul searching and a new focus on patient safety nationally.
Although I didn’t know Betsy personally, we were about the same age, had two kids about the same ages and were in the same profession. (I, too, was a health care journalist.) That’s why I was particularly disappointed by a recent conference celebrating the reopening of the Betsy Lehman Center for Patient Safety and Medical Error Reduction. It was heavy on statistics and poll results; e.g., one in four Massachusetts adults say they’ve seen an error in their own care or the care of someone close to them.
While it’s true that Boston is the epicenter of thinking, writing and speaking about patient safety, words do not always translate into deeds.
The story of Chesley “Sully” Sullenberger – the “Miracle on the Hudson” pilot – is a modern American legend. I’ve gotten to know Captain Sullenberger over the past several years, and he is a warm, caring, and thoughtful person who saw, in the aftermath of his feat, an opportunity to promote safety in many industries, including healthcare.
In my continuing series of interviews I conducted for my upcoming book, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, here are excerpts of my interview with Sully, conducted at his house in San Francisco’s East Bay, on May 12, 2014.
Bob Wachter: How did people think about automation in the early days of aviation?
Sully Sullenberger: When automation became possible in aviation, people thought, “We can eliminate human error by automating everything.” We’ve learned that automation does not eliminate errors. Rather, it changes the nature of the errors that are made, and it makes possible new kinds of errors. The paradox of cockpit automation is that it can lower the pilot’s workload in phases of flight when the workload is already low, and it can increase the workload when the workload is already high.
While your humble correspondent continues to delight in the emerging science of “mHealth” as a newly minted start-up Chief Medical Officer, he ran across this interesting article on risk and patient safety.
Authors Thomas Lewis and Jeremy Wyatt worry that “apps” can lead to patient harm.
They posit that the risk of harm is mainly a function of 1) the nature of the mistake itself (miscalculating a body mass index is far less problematic than miscalculating a drug dose) and 2) its severity (overdosing on a cupcake versus a narcotic). When you include other “inherent and external variables” — the display, the user interface, network issues, information storage, informational complexity, and the number of patients using the app — the risks can grow from a simple case of developer embarrassment to catastrophic loss of patient life.
In response, they propose that app developers think about this “two-dimensional app space,” which relies on a risk assessment coupled to a staggered regulation model. That regulation can range from simple clinical self-assessment to a more complex and formal approval process.
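A staggered, risk-tiered scheme of the sort they describe could be sketched in a few lines of code. This is a hypothetical illustration only: the 1–3 scoring scale, the multiplicative risk score, and the tier names are my assumptions, not the model Lewis and Wyatt actually propose.

```python
def regulation_tier(likelihood: int, severity: int) -> str:
    """Map hypothetical 1-3 likelihood and severity scores for a health app
    to an illustrative oversight tier (low score = light-touch regulation)."""
    if not (1 <= likelihood <= 3 and 1 <= severity <= 3):
        raise ValueError("scores must be 1 (low) to 3 (high)")
    risk = likelihood * severity  # simple multiplicative risk score (assumption)
    if risk <= 2:
        return "clinical self-assessment"       # e.g., a BMI calculator
    elif risk <= 4:
        return "independent expert review"
    else:
        return "formal regulatory approval"     # e.g., a drug-dosing app

print(regulation_tier(1, 1))  # low-likelihood, low-severity app
print(regulation_tier(3, 3))  # high-likelihood, high-severity app
```

The point of the sketch is simply that once an app is placed on the two risk axes, the appropriate level of scrutiny falls out mechanically, rather than being argued case by case.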