In my opinion, the title of Dr. Koka’s post (“Very Bad Numbers”) is far too inflammatory for a subject that needs to be taken seriously. Dr. Koka’s summary of the approach I took in my JPS study is reasonable, minus a few key points. Preventability of lethal errors is the problematic issue. The nine authors of the Classen paper did postulate that virtually all of the serious adverse events they found were preventable; I did not pull this out of the air. Preventability is a highly subjective area. A few years ago everyone assumed that hospital-acquired infections were simply the cost of doing business. Now we know that the majority of infections can be prevented. The major difference Dr. Koka and I have is that he wants to rely exclusively on the Landrigan study, which is an excellent and large study, but it is not representative of the nation. It examined hospitals in North Carolina, a state chosen because it was much more aggressive in efforts to reduce medical harm than the average state. The OIG study (2010) was in fact an attempt to be representative of the Medicare population across the country, but it covers only Medicare beneficiaries. As I noted in my paper, none of the four studies can stand alone, not even the Landrigan paper.
The only one of the papers from which one can glean a rate of lethal preventable adverse events is Landrigan, and this is 9/14 = 64%, little different from the rate for all serious, preventable events (63%). Suppose I had ignored the postulate in the Classen paper (100%) and just used the lethal percentage from the Landrigan paper. The result would have been 34,400,000 × 0.64 × 0.0089 ≈ 196,000, where 0.0089 is the weighted average (across all four studies) of the rate of lethal adverse events per hospitalization. With the additional factors I used originally, this comes to (196,000 × 2) + 20,000 ≈ 410,000, which is hardly different from my original number of 440,000. The 20,000 comes from a deliberately low estimate of the number of lethal diagnostic errors, which is generally put in the range of 40,000 to 80,000 per year.
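For readers who want to check the arithmetic, the calculation above can be reproduced in a few lines. This is only a sketch: the variable names are mine, and every figure comes from the surrounding text, not from any independent data.

```python
# Reproducing the alternative estimate described above.
# All figures come from the surrounding text; variable names are illustrative.

annual_admissions = 34_400_000  # US hospital admissions per year
lethal_fraction = 0.64          # Landrigan: 9 of 14 preventable serious events were lethal
lethal_event_rate = 0.0089      # weighted average (all four studies) of lethal
                                # adverse events per hospitalization

base_estimate = annual_admissions * lethal_fraction * lethal_event_rate
print(round(base_estimate))     # ~196,000

# Apply the factor of 2 (for events the trigger tool misses) and add
# ~20,000 lethal diagnostic errors, as described in the text.
adjusted_estimate = base_estimate * 2 + 20_000
print(round(adjusted_estimate)) # ~410,000
```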
What of the factor of 2? As Dr. Koka notes, this was to compensate for the known limitations of the global trigger tool in detecting errors of omission, context, and communication. Additionally, this factor was to compensate for the absence of information in the medical records that a physician reviewer could identify as a serious adverse event originating during hospitalization. The Weissman study of 2008 looked at the hospital records of 1,000 heart patients and found that patients could identify (as confirmed by the investigating team) three times as many serious adverse events as were evident during review of the medical records. So, should I have used a factor of 3, or maybe even more? To my knowledge, there is no other study like Weissman’s, so it is a single data point. With that in mind, I simply settled for a factor of 2, which, in my opinion, is likely to be low.
I don’t like the idea of likening the results of my study to the crashing of 2-3 jumbo jets per day. The truth is that folks were in the hospital because they had a serious illness or condition. If a preventable adverse event hastened their death, then the cause of death in most cases was the original illness combined with a preventable event. For example, much attention is finally being given to sepsis. If I arrive at the hospital with sepsis and it takes the hospital a day to diagnose that and start the sepsis protocol, and I die in a few days, my death is due to sepsis and a delayed diagnosis, which is a preventable, lethal adverse event. Many premature “deaths” occur after hospital discharge. I cite the example of Rep. John Murtha who died of an infection because his bowel was nicked during surgery.
Early in his blog post Dr. Koka notes that I came to the cause of patient safety because of the care my son received in his college town, care that led to his death. According to my book, A Sea of Broken Hearts (the source of some of his information), four cardiologists had deemed my son’s care to have met the standard of care. A careful review of my son’s medical records, and intelligent questions from a cognizant, unbiased cardiologist, would have shown two lethal technical errors, lethal errors of communication, and a remarkably unethical pathway while he was hospitalized. Below I summarize those failures.
Failure to follow guidelines: Patient’s cardiologists failed to apply a guideline published by the National Council on Potassium in Clinical Practice in the Archives of Internal Medicine two years before the patient died. That guideline called for potassium replacement in patients with a serum potassium below 4 mEq/L and heart arrhythmia. The patient had both after his initial, non-fatal syncope.
Failure to make a diagnosis: Patient’s cardiologists failed to make a diagnosis of acquired long QT syndrome. Diagnostic criteria were developed in the 1980s by Peter Schwartz and colleagues. Since then the criteria have been published in monographs, cardiology textbooks and were published as a “Curriculum in Cardiology” in the American Heart Journal just 8 months before patient died. Patient’s score in that diagnostic system was 5.5, with anything over 4 being highly likely for that diagnosis.
Failure to communicate with patient: At the end of patient’s last invasive test (electrophysiology), he was warned not to resume running, but this warning came only when he was heavily sedated with Versed, a drug known to cause amnesia. His written instructions at discharge from the hospital were only “Do not drive for 24 hours.” He died 3 weeks later while running.
Failure to communicate doctor to doctor: Five days after hospital discharge, patient had an office visit with a physician in training in family medicine. She told him that there was nothing more his doctors could do for him. Any precaution against running is absent from her office visit record. Clearly, the cardiologists had not communicated with her about the restriction on running.
Missing Cardiac MRI: Patient’s cardiac MRI was the last non-invasive test before going to invasive tests. It was recommended by the cardiology consultant. A smart cardiologist would ask, “Why were the results of this test totally missing from the patient’s records?” They were missing because the test was never done properly. Why is that? Technicians had not been trained on the new software for their machine. This is critically important because the patient was never told this before he was “consented” for invasive tests.
Record falsification: Inspection of the records shows that the patient was offered a loop monitor at the conclusion of his electrophysiology test, which is what the consulting cardiologist recommended in a letter to the lead cardiologist. Why, then, do his medical records, written after he was returned to the hospital in a deep coma following his fatal collapse, state that he had been offered and then refused a pacemaker? There is no record of him ever being offered a pacemaker. This suggests falsification of records.
While four cardiologists may have deemed that my son’s care met the standard of care, they were dishonest in their assessment. This dishonesty stems from the tendency of any closed community to protect its members. It may also come from a lack of time to evaluate my son’s medical records thoroughly or it may come from a lack of knowledge of cardiology. I’m certainly no cardiologist, but I can read and understand common medical journals and compare what I read with medical records. My son was betrayed by uninformed and unethical practice of medicine.
I point out that, to my knowledge, the global trigger tools in common use would not have detected any of the above events. They cannot know medical guidelines or diagnoses, they cannot detect errors of communication, they make no attempt to assess informed consent, and they do not look for falsification of medical records.
Having written all this, my opinion is that physicians, nurses, and patients are going to have to work together to fix a non-system that is not making anyone very happy. Physicians are burning out, nurses are being grossly overworked in many cases, and patients do not know where to turn to get safe, cost-effective medical care. I appreciate the challenge Dr. Koka presented, but in the end we are both on the same side, I believe.
John T. James is the Chief Toxicologist for the National Aeronautics & Space Administration. Dr. James leads the Space Toxicology Office located at Lyndon B. Johnson Space Center in Houston, Texas.
Of course preventable errors are a serious problem whether it is one person or a million. I do believe, though, that throwing out big numbers that grossly exceed our experience achieves the opposite of the intended effect. In my 20 years of nursing in positions with some oversight of all inpatient cases, and in rather poor-quality hospitals at that, I can still count fewer than a dozen fatal errors.
I appreciate the response. I have corresponded with Dr. James as well, and as the southern gentleman he no doubt is, he notes that we should let our issues rest in the realm of respectful disagreement. I do appreciate his efforts to align our interests and take some of the inflammatory rhetoric out of this issue.
The difference we have is one of assumptions. They are as follows.
1. The preventable harm rate is not accurate
The Classen paper is not, and was not intended to be, a study of preventable harms. Just because the authors say all harms may be preventable (I can’t find where) doesn’t mean all harms are. This disagreement, however, would result in only a very minor difference in the numbers. Of course, I don’t think much of the number anyway, because I take issue with the assumption that the total preventable harm rate is equal to the lethal harm rate.
2. The total preventable harm rate is equivalent to the lethal preventable harm rate
The only evidence we have that the rates may be equal comes from the Landrigan paper. Elsewhere, Dr. James takes issue with using just one paper to represent the whole population, but here he doesn’t seem to mind doing that. If he doesn’t mind, why not take the only evidence-based data we do have for lethal preventable harms (Landrigan), roughly 0.4% (4 per 1,000 admissions, i.e., 0.004), and multiply it by the total number of admissions to arrive at the number we seek?
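The calculation being proposed here is a one-liner. A minimal sketch, assuming the ~34.4 million annual admissions figure used earlier in this exchange (variable names are illustrative):

```python
# Sketch of the alternative estimate proposed above: apply the directly
# observed lethal preventable harm rate from Landrigan to all admissions.
# The admissions figure is taken from Dr. James's reply earlier in this post.

annual_admissions = 34_400_000
lethal_preventable_rate = 0.004  # ~4 lethal preventable harms per 1,000 admissions

estimate = annual_admissions * lethal_preventable_rate
print(round(estimate))           # ~138,000 lethal preventable harms per year
```

Note how far this falls below the 440,000 figure: the gap comes almost entirely from the contested assumptions (the 100% preventability postulate and the factor of 2), not from the underlying harm data.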
3. The factor of 2.
This is the biggest leap of faith to take. I don’t see an evidence-based estimate for the factor of 2. I read the Weissman paper. It used post-discharge interviews of patients to find events not in the medical record, so necessarily these were all patients who survived. I still don’t see how he came up with the factor of 2 based on this. Again, I could say it’s 10, and nobody (by Dr. James’s logic) could say I was wrong.
I also reviewed the serious adverse events deemed preventable in the Weissman paper. The first one is a DVT that develops after open-heart surgery. If the patient was receiving appropriate DVT prophylaxis after open-heart surgery, was it really preventable? I suppose, if the open-heart surgery was not indicated? But who knows? After I read this paper, I emailed Dr. Landrigan to ask whether the clinical summaries on the 9 deaths in his study were available somewhere. He has not responded.
Lastly, Dr. James goes into the care of his son. I’m trying to step gingerly here. His examples of what went wrong do highlight some places where most cardiologists would disagree with him (as did the four cardiologists who reviewed the case). For instance, the electrolyte-repletion paper referenced here would probably not support electrolyte repletion in his son’s case. But I think there is little to be gained by me disputing the facts of a tragic case line by line. I’ll just say that it is not as simple as Dr. James makes it out to be. Regardless of our disagreement about which harms are preventable, there can be no doubt that there are many errors and broken parts of the system that everyone can agree on. I echo Dr. James’s final points. We are on the same side here.
Let me add one small item to the issue of “the patients were already sick.” To my knowledge, there has been only one systematic examination of whether the patients who were victims of medical error were likely to have died within one year anyway, and that was in the pioneering Don Harper Mills study of California hospitals in the 1970s. Back then, the number was 25%. A lot has changed since then; nonetheless, it’s worth noting that when I extrapolated his findings on the frequency of death from medical error in California to the number of hospital admissions nationwide in 1996, I came up with 180,000 preventable deaths annually in hospitals. That suggests the 25% estimate may still be in the ballpark.