
Say it Ain’t So, Joe by Paul Levy

[Chart omitted: observed 3-year hospital caseloads vs. operative mortality rates, from Joe Newhouse's slide]

I heard a great presentation this morning by Joe Newhouse,
from the Department of Health Policy and Management at Harvard Medical
School. One point he made really caught my attention: he cited a 2004
article in the Journal of the American Medical Association
(Dimick et al., JAMA 2004; 292: 849) that examined how
many cases of a given clinical procedure you would need to collect to
determine that a hospital's mortality rate for
that procedure was twice the national average. It turns out that only
for CABGs (coronary artery bypass grafts) are there enough cases
performed to have statistical confidence that a hospital has that poor
a record compared to the national average. For other procedures (hip
replacements, abdominal aortic aneurysm repairs, pediatric heart
surgery, and the like) there are just not enough cases done to make
this assessment. (By the way, if you just want to know whether a hospital is,
say, 20% worse on relative mortality, you need an even bigger sample
size.)
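To get a feel for the arithmetic behind this point, here is a rough back-of-the-envelope sketch (my own, not the method Dimick et al. actually used): a standard sample-size formula for testing a hospital's observed rate against a known national rate. The significance level and power are my assumptions.

```python
from statistics import NormalDist

def cases_needed(p0, ratio=2.0, alpha=0.05, power=0.8):
    """Approximate caseload needed to detect that a hospital's mortality
    is `ratio` times the national rate `p0` (one-sample test of a
    proportion, two-sided, normal approximation).

    The defaults (5% significance, 80% power) are my assumptions, not
    necessarily those used in the JAMA article.
    """
    p1 = ratio * p0                            # elevated rate we want to detect
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_b = NormalDist().inv_cdf(power)          # quantile for desired power
    n = (z_a * (p0 * (1 - p0)) ** 0.5 + z_b * (p1 * (1 - p1)) ** 0.5) ** 2 \
        / (p1 - p0) ** 2
    return int(n) + 1

# With a national mortality rate of 3.5% (a plausible CABG-scale figure),
# spotting a hospital at double that rate takes a few hundred cases:
print(cases_needed(0.035))                # roughly 270 cases

# Detecting a hospital that is only 20% worse takes far more cases:
print(cases_needed(0.035, ratio=1.2))     # several thousand cases
```

Note that a rarer outcome (smaller baseline rate) pushes the required caseload up sharply, which is exactly the shape of the downward-curving frontier in the chart below: for most procedures, hospitals simply never accumulate enough cases.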

I have copied the basic chart above. Sorry, but I
couldn’t nab the whole slide. The vertical axis is "Observed 3 year
hospital case loads", or the number of cases performed over three
years. The horizontal axis is "Operative mortality rates". The line
curving down through the graph shows the frontier at which statistical
significance can be determined. As you see, only CABGs are above the
line.

And, as Joe pointed out, this chart is based on three
years of data for each hospital. With only a year’s worth from each
hospital, you surely don’t have enough cases to draw statistically
interesting conclusions about relative mortality. And remember, too,
that this is hospital-wide data. No one doctor does enough cases to
cross the statistical threshold.

So, this would suggest that
publication of hospital mortality rates for many procedures would not
be helpful to consumers or to referring physicians.

Meanwhile, though, you might recall a post
I wrote on surgical results as calculated by the American College of
Surgeons in their NSQIP project. This program produces an accurate
calculation of a hospital’s actual versus expected outcomes for a
variety of surgical procedures. Unfortunately, the ACS does not permit
these data to be made public.

Where does this leave us?  Well, as I noted in a Business Week article,
the main value of transparency is not necessarily to enable easier
consumer choice or to give a hospital a competitive edge. It is to
provide creative tension within hospitals so that they hold themselves
accountable. This accountability is what will drive doctors, nurses,
and administrators to seek constant improvements in the quality and
safety of patient care. So, even if we can’t compare hospital to
hospital on several types of surgical procedures, we can still commend
hospitals that publish their results as a sign that they are serious
about self-improvement.



1 reply

  1. Good points. As you suggest, there are a range of inferences that you COULD draw from Newhouse’s presentation…
    …that since it will be difficult to satisfy the statisticians with sufficient sample sizes and valid methodologies to compare specific surgical mortality rates, we ought not bother with ANY of this transparency and consumer empowerment stuff? I hope that’s not the inference he suggests.
    …that (as you point out) the measurement process is helpful to encourage internal improvement?
    …or that as a patient you need to look at a RANGE of measures to size up your choice of hospital, e.g., infection rates, error rates, satisfaction ratings?
    …or other inferences?
    Newhouse’s point is akin to a Consumer Reports rating of one aspect of a product’s features. I just bought a flat panel TV, and in the process learned that comparing lines of resolution (720 vs. 1080) is just one aspect of selection.
    Consumers will easily grasp that surgical mortality rates are just one aspect of comparison.