
Half full. Half empty.

To support this point, he presented the chart above from the AHRQ Center for Delivery, Organizations and Markets (full study here) that demonstrates improvement in hospital risk-adjusted mortality for important diagnoses and procedures. Whether you have a heart attack or pneumonia, or whether you have an aneurysm repair or a hip replacement, your chance of dying in a hospital has gone down over the years. (I know this data ends in 2004, but I would be confident that the trends have held.)

I hope you, like me, are impressed with these numbers. They are a story worth telling and retelling.

But there is another story that has to be retold, too. It remains a bit of a paradox for me, one I discuss in my speeches. The paradox is how this group of extremely able and well-intentioned clinicians, while accomplishing these great things, also constitutes an important public health hazard, in terms of the number of people who are killed or otherwise harmed while in hospitals.

The famous Institute of Medicine Report, To Err is Human, was published in 2000. It documented, in a way that many people find uncomfortable, the number of unnecessary deaths that occur in hospitals. We now understand that much of this harm is caused by the systems of care, by how work is organized in hospitals, by excessive levels of variation, or, to put it another way, by insufficient levels of standardization based on process improvement principles. I summarized Brent James on this point in a post below:

We continue to rely on the “craft of medicine”, in which each physician practices as an independent expert — in the face of huge clinical uncertainty (lack of clinical knowledge; a rapidly increasing amount of medical knowledge; continued reliance on subjective judgment; and the limitations of the expert mind when making complex decisions).

And, as noted below, we also often do not draw on our greatest resource, patients, in the design of care delivery. And finally, many hospitals and doctors are held back by a fear or reluctance to publish clinical outcomes in real time so that organizations can hold themselves accountable.

Is the glass half full, or half empty? As in most such cases, it is probably both. Let’s give tremendous credit to the medical profession for what it has accomplished. But let’s hope that members of the profession also take to heart the fact that the job of reducing harm is not nearly done.

Paul Levy is the President and CEO of Beth Israel Deaconess Medical Center in Boston. Paul recently became the focus of much media attention when he decided to publish infection rates at his hospital, despite the fact that under Massachusetts law he is not yet required to do so. For the past three years, he has blogged about his experiences in an online journal, Running a Hospital, one of the few blogs we know of maintained by a senior hospital executive.

7 replies

  1. bev MD,
    Re. the IOM report, the JAMA study was the one I could remember off the top of my head. I think it delivers on its major point. As a side note, you may (or may not) be aware of research indicating that peer reviewers are subject to both hindsight bias (rating care as poor because one knows about the poor outcome) and outcome bias (rating the same care worse when damage to the patient is severe and permanent as opposed to mild/transient). This makes retrospective peer review of diagnostic errors completely nonsensical.
    Re. hospital peer review, you are right, although I believe that peer review triggered by a legal threat (e.g. an unexpected poor outcome, complaint, or litigation threat) is far more common than the interesting case Dr. Wachter describes. And as a side note, even though I have witnessed docs being hard but fair, or even very harsh (see the biases above), towards colleagues from their own institution, I think that peer review should be done blinded, and in a different area of the country. Thanks for the interesting link.

  2. rbar,
    I am aware of the criticism of the numbers used in the IOM report, but I think a broader examination of the literature following that report might yield a more convincing study than the JAMA one. If the study’s point is simply to document interobserver variability, then point made. However, I do not find it particularly compelling otherwise.
    I wish to correct any misapprehension that peer review is conducted only to review possible malpractice cases. In the hospital setting, peer review should be, and sometimes is, a proactive routine examination of several red-flag indicators to detect poor practice patterns, or even system defects leading to poor practice patterns. For more detail, one may see the post on Dr. Robert Wachter’s blog at UCSF, which, parenthetically, is about an actual fraudulent physician:
    http://community.the-hospitalist.org/blogs/wachters_world/archive/2010/05/11/can-peer-review-catch-a-rogue-doctor.aspx

  3. I would venture to say that any thoughtful, reasonable physician familiar with peer review (usually done for malpractice cases) would agree that peer review is a subjective undertaking and that most problems lie in the grey zone between appropriate and inappropriate care. Are there cases of egregious negligence (failure to examine the patient, failure to react to a concerning test result)? For sure, but that’s not the question here.
    bae, it is always nice to have high numbers (BTW, isn’t that part of statistics: knowing with what probability you can draw conclusions from smaller samples?), but do you have any idea how hard and time-consuming it is to reconstruct a case from a medical record (electronic or otherwise)? That is what the study shows: peer review is subjective. Unless you can really fault their statistics (for instance, using parametric instead of nonparametric tests), I think your criticism is misguided. The quick sketch below makes the sample-size point concrete.
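    A minimal back-of-the-envelope sketch (my own arithmetic, not the study’s; only the 383-review/111-death/14-reviewer figures come from the JAMA abstract quoted in the next reply, and the 23% preventability rate is purely hypothetical) shows why the review count exceeds the death count, and how wide the uncertainty from n = 111 really is:

    ```python
    import math

    # Figures from the JAMA abstract quoted in the next reply.
    reviews, deaths, reviewers = 383, 111, 14

    # Each death was rated independently by several of the 14 internists,
    # which is why there are more reviews than deaths.
    print(f"reviews per death:    {reviews / deaths:.2f}")     # ~3.45
    print(f"reviews per reviewer: {reviews / reviewers:.1f}")  # ~27.4

    # Rough 95% confidence interval (normal approximation) for a proportion
    # estimated from n = 111 deaths. p = 0.23 is a hypothetical preventability
    # rate chosen purely for illustration, not a result reported by the study.
    p = 0.23
    half_width = 1.96 * math.sqrt(p * (1 - p) / deaths)
    print(f"95% CI: {p - half_width:.2f} to {p + half_width:.2f}")  # ~0.15 to 0.31
    ```

    With a sample of that size, even a simple preventability proportion carries a confidence interval roughly 16 percentage points wide, which is worth keeping in mind before reading too much into any single estimate.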

  4. Reading this post and the one about how statistics can be manipulated offers a curious juxtaposition. The JAMA article cited by rbar uses this as its basis: “Fourteen board-certified, trained internists used a previously tested structured implicit review instrument to conduct 383 reviews of 111 hospital deaths at 7 Department of Veterans Affairs medical centers, oversampling for markers previously found to be associated with high rates of preventable deaths.” Fourteen people looking at 111 deaths (I’m not sure how you get 383 reviews out of that; sorry, but the math doesn’t work for me) seems like a really small sample, reviewed by very few people, based on results from only one care environment (the VA). I’m not sure the IOM study’s basis is any better, because I don’t have the specifics. But if these are the kinds of studies that get published and are said to have validity, then I’m truly scared.

  5. I wholeheartedly agree that “harm is caused by the systems of care, by how work is organized in hospitals, by excessive levels of variation, or, to put it another way, by insufficient levels of standardization based on process improvement principles.”
    But why always cite the IOM study, which, with its excessive numbers, runs against common sense? This very debatable report has become a mindless mantra for many, some of whom are very smart and well-intentioned people (compare http://jama.ama-assn.org/cgi/content/abstract/286/4/415).