Today, an intensive care unit patient room contains anywhere from 50 to 100 pieces of medical equipment made by dozens of manufacturers, and these products rarely, if ever, talk to one another. This means that clinicians must painstakingly review and piece together information from individual devices—for instance, to make a diagnosis of sepsis or to recognize that a patient’s condition is deteriorating. Such a system leaves too much room for error and requires clinicians to be heroes, rising above the flawed environment in which they work. We need a health care system that partners with patients, their families and others to eliminate all harms, optimize patient outcomes and experience, and reduce waste. Technology could do so much more to help clinicians achieve those goals if it started from them and worked backwards.
This week marks a step that holds tremendous promise for patients and clinicians. On Monday the Masimo Foundation hosted the Patient Safety Science & Technology Summit in Laguna Niguel, California, an inaugural event convening hospital administrators, medical technology companies, patient advocates and clinicians to identify solutions to some of today’s most pressing patient safety issues. In response to a call by keynote speaker former President Bill Clinton, the heads of nine leading medical device companies pledged to open their systems and share their data.
Lack of interoperability between medical devices plays no small role in the 200,000 American deaths caused by preventable patient harm each year. Consider the case of 11-year-old Leah Coufal, who, after undergoing elective surgery, received narcotics intended to ease her pain.
When Leah received too much medication, it suppressed her breathing, eventually causing it to stop altogether. Had she been monitored, a device could have alerted clinicians when Leah’s breathing slowed to a dangerous level.
But as we know, clinicians are busy and unfortunately don’t always respond to alarms from bedside machines. If a machine measuring her breathing had been linked with the device delivering her medication, it could have automatically stopped the drugs from infusing into her blue, oxygen-deprived veins.
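The interlock described here can be sketched in a few lines. This is a minimal, illustrative model only: the class names, thresholds, and method signatures are assumptions for the sake of the example, not any real device's interface, and a clinical implementation would involve regulated safety engineering far beyond this.

```python
# Minimal sketch of a closed-loop safety interlock: a respiratory
# monitor shares data with an infusion pump, and the pump pauses
# automatically when breathing slows below a safe threshold.
# All names and thresholds are illustrative, not a real device API.

DANGER_RESP_RATE = 8  # breaths/min; illustrative threshold only

class InfusionPump:
    def __init__(self):
        self.running = True

    def pause(self, reason):
        self.running = False
        print(f"Infusion paused: {reason}")

class RespiratoryMonitor:
    def __init__(self, pump):
        self.pump = pump

    def on_reading(self, breaths_per_min):
        # Alarm AND act: don't rely on a busy clinician hearing a beep.
        if breaths_per_min < DANGER_RESP_RATE and self.pump.running:
            self.pump.pause(
                f"respiratory rate {breaths_per_min}/min is below "
                f"{DANGER_RESP_RATE}/min"
            )

pump = InfusionPump()
monitor = RespiratoryMonitor(pump)
for rate in (14, 12, 9, 6):  # simulated readings, slowing over time
    monitor.on_reading(rate)
print("pump running:", pump.running)
```

The point of the sketch is the design choice: the monitor does not merely raise an alarm for a human to notice, it closes the loop by acting on the pump directly, which is exactly what siloed, non-interoperable devices cannot do.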
All of this is possible today; technology is not a barrier. Until now, the only things standing in the way have been a lack of leadership and a lack of willingness among device manufacturers to cooperate.
When you or a loved one enters a hospital, it is easy to feel powerless. The hospital has its own protocols and procedures. It is a “system” and now you find yourself part of that system.
The people around you want to help, but they are busy—extraordinarily busy. Nurses are multi-tasking. Residents are doing their best to learn on the job. Doctors are trying to supervise residents, care for patients, follow up on lab results, enter notes in patients’ medical records and consult with a dozen other doctors.
Whether you are the patient or a patient advocate trying to help a loved one through the process, you are likely to feel intimidated—and scared.
Hospitals can be dangerous places, in part because doctors and nurses are fallible human beings, but largely because the “systems” in our hospitals just aren’t very efficient. In the vast majority of this nation’s hospitals, a hectic workplace undermines the productivity of nurses and doctors who dearly want to provide coordinated patient-centered care.
At this point, many hospitals understand that they must streamline and redesign how care is delivered and how information is shared so that doctors and nurses can work together as teams. But this will take time. In the meantime, patients and their advocates can help improve patient safety.
A little more than 13 years ago, the Institute of Medicine (IOM) released its seminal report on patient safety, To Err is Human.
You can say that again. We humans sure do err. It seems to be in our very nature. We err individually and in groups — with or without technology. We also do some incredible things together. Like flying jets across continents and building vast networks of communication and learning — and like devising and delivering nothing-short-of-miraculous health care that can embrace the ill and fragile among us, cure them, and send them back to their loved ones. Those same amazing, complex accomplishments, though, are, at their core, human endeavors. As such, they are inherently vulnerable to our errors and mistakes. As we know, in high-stakes fields, like aviation and health care, those mistakes can compound into catastrophically horrible results.
The IOM report highlighted how human error in health care adds up to mind-boggling numbers of injured and dead patients—obviously a monstrous result that nobody intends.
The IOM safety report also didn’t just sound the alarm; it recommended a number of sensible things the nation should do to help manage human error. It included things like urging leaders to foster a national focus on patient safety, develop a public mandatory reporting system for medical errors, encourage complementary voluntary reporting systems, raise performance expectations and standards, and, importantly, promote a culture of safety in the health care workforce.
How are we doing with those sensible recommendations? Apparently to delay is human too.
I’ve been getting emails about the NY Times piece and my quote calling the penalties for readmissions “crazy.” It’s worth thinking about why the ACA gets hospital penalties on readmissions wrong, what we might do to fix it – and where our priorities should be.
A year ago, on a Saturday morning, I saw Mr. “Johnson,” who was in the hospital with pneumonia. He was still breathing hard but tried to convince me that he was “better” and ready to go home. I looked at his oxygenation level, which was borderline, and suggested he needed another couple of days in the hospital. He looked crestfallen. After a little prodding, he told me why he was anxious to go home: his son, who had been serving in the Army in Afghanistan, was visiting for the weekend. He hadn’t seen his son in a year and probably wouldn’t again for another year. Mr. Johnson wanted to spend the weekend with his kid.
I remember sitting at his bedside, worrying that if we sent him home, there was a good chance he would need to come back. Despite my worries, I knew I needed to do right by him. I made clear that although he was not ready to go home, I was willing to send him home if we could make a deal. He would have to call me multiple times over the weekend and be seen by someone on Monday. Because it was Saturday, it was hard to arrange all the services he needed, but I got him a tank of oxygen to go home with, changed his antibiotics to an oral regimen (as opposed to IV) and arranged a Monday morning follow-up. I also gave him my cell number and told him to call me regularly.
We expect a level of perfection from our doctors, nurses, surgeons and care providers that we do not demand of our heroes, our friends, our families or ourselves. We demand this level of perfection because the stakes in medicine are the highest of any field — outcomes of medical decisions hold our very lives in the balance.
It is precisely this inconsistent recognition of the human condition that has created our broken health care system. The all-consuming fear of losing loved ones makes us believe that the fragile human condition does not apply to those with the knowledge to save us. A deep understanding of that same fragility forces us to trust our doctors — to believe that they can fix us when all else in the world has failed us.
I am always surprised when people say someone is a good doctor. To me, that phrase just means that they visited a doctor and were made well. It is uncomfortable and unsettling — even terrifying — to admit that our doctors are merely human — that they, like us, are fallible and prone to bias.
They too must learn empirically, through experience, becoming better at what they do over time. A well-trained, experienced physician can, by instinct, identify problems that younger ones can’t catch — even with the newest methods and latest technologies. And it is this combination of instinct and expertise that holds the key to providing better care.
We must acknowledge that our health care system is composed of people — it doesn’t just take care of people. Those people — our cardiologists, nurse practitioners, X-ray technicians, and surgeons — work better when they work together.
Working together doesn’t just mean being polite in the halls and handing over scalpels. It means supporting one another, communicating honestly about difficulties, sharing breakthroughs to adopt better practices, and truly dedicating ourselves to a culture of medicine that follows the same advice it dispenses.
In a time of EHR naysayers, mean-spirited election year politics, and press misinterpretation (ONC and CMS do not intend to relax patient engagement provisions), it’s important that we all send a unified message about our progress on the national priorities we’ve developed by consensus.
1. Query-based exchange – every country in the world that I’ve advised (Japan, China, New Zealand, Scotland/UK, Norway, Sweden, Canada, and Singapore) has started with push-based exchange, replacing paper and fax machines with standards-based technology and policy. Once “push” is done and builds confidence with stakeholders, “pull” or query-response exchange is the obvious next step. Although there are gaps to be filled, we can and should make progress on this next phase of exchange. The naysayers need to realize that there is a process for advancing interoperability and we’re all working as fast as we can. Query-based exchange will be built on top of the foundation created by Meaningful Use Stage 1 and 2.
2. Billing – although several reports have linked EHRs to billing fraud/abuse and the recent OIG survey seeks to explore the connection between EHR implementation and increased reimbursement, the real issue is that EHRs, when implemented properly, can enhance clinical documentation. The work of the next two years as we prepare for ICD-10 is to embrace emerging natural language processing technologies and structured data entry to create highly reproducible/auditable clinical documentation that supports the billing process. Meaningful Use Stage 1 and 2 have added content and vocabulary standards that will ensure future documentation is much more codified.
3. Safety – some have argued that electronic health records introduce new errors and safety concerns. Although it is true that bad software implemented badly can cause harm, the vast majority of certified EHR technology enhances workflow and reduces error. Meaningful Use Stage 1 and 2 enhance medication accuracy and create a foundation for improved decision support. The Health eDecisions initiative will bring us guidelines/protocols that add substantial safety to today’s EHRs.
The thesis is important, the honesty is admirable, and the timing seems right. Yet I found the book disappointing, sometimes maddeningly so. My hopes were high, and my letdown was large. If your political leanings are like mine, think Obama and the first debate.
Makary hits the ground running, with the memorable tales of two surgeons he encountered during his training: the charming but utterly incompetent Dr. Westchester (known as HODAD, for “Hands of Death and Destruction”) and the misanthropic “Raptor,” a technical virtuoso who was a horse’s ass. Of course, all the clinicians at their hospital knew which of these doctors they would see if they needed surgery, but none of the patients did. (Of HODAD, Makary writes, “His patients absolutely worshipped him… They had no way of connecting their extended hospitalizations, excessive surgery time, or preventable complications with the bungling, amateurish, borderline malpractice moves we on the staff all witnessed.”)
This is compelling stuff, and through stories like these Makary introduces several themes that echo throughout the book:
1) There are lots of bad apples out there.
2) Patients have no way of knowing who these bad apples are.
3) Clinicians do know, but are too intimidated to speak up.
4) If patients simply had more data, particularly the results of patient safety culture surveys, things would get much better.
Most tools used in medicine require knowledge and skill from both those who develop them and those who use them. Even tools that are themselves innocuous can lead to patient harm.
For example, while it is difficult to directly harm a patient with a stethoscope, patients can be harmed when improper use of the stethoscope leads to them having tests and/or treatments they do not need (or not having tests and treatments they do need). More directly harmful interventions, such as invasive tests and treatments, can harm patients through their use as well.
To this end, health information technology (HIT) can harm patients. The direct harm from computer use in the care of patients is minimal, but the indirect harm can be extraordinary. An electronic health record (EHR), for example, may store results incompletely or incorrectly. Clinical decision support may lead clinicians astray or distract them with excessive, unnecessary information. Medical imaging may improperly render findings.
Search engines may lead clinicians or patients to incorrect information. The informatics professionals who oversee implementation of HIT may not follow best practices to maximize successful use and minimize negative consequences. All of these harms and more were well-documented in the Institute of Medicine (IOM) report published last year on HIT and patient safety.
One aspect of HIT safety was brought to our attention when a critical care physician at our medical center, Dr. Jeffery Gold, noted that clinical trainees were increasingly failing to see the big picture of a patient’s care because information was “hidden in plain sight,” i.e., spread across a myriad of computer screens and not easily aggregated into a single picture. This is especially problematic where he works, in the intensive care unit (ICU), where the volume of data is vast, averaging about 1,300 data points per 24 hours. This led us to perform an experiment in which physicians in training were given a sample ICU case and asked to review it for sign-out to another physician. Across 14 clinical issues, participants uncovered an average of only 41% (range, 16-68% for individual issues).
I recently had the great fortune of attending Health 2.0 in San Francisco. The conference was abuzz with new medical technologies harnessing the power of innovation to solve healthcare problems, including many new mobile medical application companies showcasing their potential. As I walked and talked around the exhibit floor, one thing caught my ear, or I should say one thing didn’t catch my ear. Among the chatter about these products, concern about FDA regulation of this product segment, or even FDA regulation in general, was noticeably absent. While many of the application developers are well aware of potential FDA involvement, most would be hard-pressed to outline the impact it would have on their companies and products.
Being labeled a medical device, which is the direction the FDA is leaning, could have a significant impact on business model organization, top-line revenue, and product deployment. For unprepared start-ups, FDA regulation could signal the end of their company. This is in stark contrast to well-informed developers who are preparing themselves for the change and would most likely be able to leverage these regulations to their advantage.
A time-and-technology-challenged FDA, the proliferation of software-controlled medical devices in and outside of hospitals, and the growth of hacking have resulted in medical technology that’s riddled with malware. Furthermore, the lack of security built into the devices makes them ripe for hacking and malfeasance.
Scenario: a famous figure (say, a politician with an implantable cardioverter-defibrillator, or a young rock star with an insulin pump) is targeted by a hacker, who industriously works his way into the ICD’s software and delivers a shock so strong it’s akin to electrocution.
Got the picture?
Welcome to the dark side of health IT and connected health. Without strong and consistently adopted security technology and policies, this scenario isn’t a wild card: it’s in the realm of possibility. Nor is it new news: back in 2008, a research team figured out how to program a common pacemaker-defibrillator to transmit a “deadly 830-volt jolt,” according to security expert Barnaby Jack.