
QUALITY: Performance measures only have a little of the answer

(Hat-tip to Modern Healthcare for spotting this one.) While there was lots of fuss recently about the IHI 100K Lives Campaign and whether it did or didn’t meet its target (the NY Times gave it a pat on the back this morning in the Editorial section), there’s perhaps even more important news from a study published in JAMA today. A large multi-center team looked at Medicare performance-measure data for post-heart attack patients to see how improved processes related to outcomes. These measures are the bedrock of the “we know what to do, but we don’t know how to do it” meme of IHI and the quality movement. In other words, the theory is that if we just did it all as well as the literature says we should, there is potential for vast improvement. Unfortunately, the results are sobering for those of us who believe that applying relatively simple industrial processes to medicine can markedly improve outcomes (and lower costs too).

We found moderately strong correlations (correlation coefficients ≥0.40; P values <.001) for all pairwise comparisons between beta-blocker use at admission and discharge, aspirin use at admission and discharge, and angiotensin-converting enzyme inhibitor use, and weaker, but statistically significant, correlations between these medication measures and smoking cessation counseling and time to reperfusion therapy measures (correlation coefficients <0.40; P values <.001). Some process measures were significantly correlated with risk-standardized, 30-day mortality rates (P values <.001) but together explained only 6.0% of hospital-level variation in risk-standardized, 30-day mortality rates for patients with AMI.

In other words, even when hospitals did well on the performance measures, that performance explained only a small fraction of the overall variation in outcomes. So there are, to my mind, only two possible conclusions: either performance measurement and controlling process variation don’t matter much, or we actually, in this case at least, don’t know what works. Neither is a particularly satisfying explanation.
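To make the “explained only 6.0% of hospital-level variation” figure concrete: it is essentially the R-squared you would get from regressing hospitals’ risk-standardized 30-day mortality rates on their process-measure scores. The sketch below is illustrative only; it uses entirely made-up numbers and hypothetical variable names (it is not the JAMA data or the study’s actual model), and just shows how such an R-squared is computed.

```python
# Illustrative sketch only -- synthetic numbers, not the JAMA data.
# Shows what "process measures explained X% of hospital-level variation"
# means: the R-squared from regressing risk-standardized 30-day mortality
# on hospitals' process-measure scores.
import numpy as np

rng = np.random.default_rng(0)
n_hospitals = 500

# Hypothetical adherence rates for five process measures (aspirin,
# beta-blocker, ACE inhibitor, smoking counseling, time to reperfusion),
# one row per hospital.
process_scores = rng.uniform(0.6, 1.0, size=(n_hospitals, 5))

# Hypothetical risk-standardized 30-day mortality: only weakly related to
# the process scores, plus a lot of unexplained hospital-level noise.
mortality = 0.18 - 0.02 * process_scores.mean(axis=1) \
    + rng.normal(0, 0.02, n_hospitals)

# Ordinary least squares fit of mortality on the process measures.
X = np.column_stack([np.ones(n_hospitals), process_scores])
beta, *_ = np.linalg.lstsq(X, mortality, rcond=None)
fitted = X @ beta

# R-squared = share of between-hospital variance the measures explain.
ss_res = np.sum((mortality - fitted) ** 2)
ss_tot = np.sum((mortality - mortality.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"Share of hospital-level variation explained: {r_squared:.1%}")
```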




Michael Eliastam MD

I like this idea of avoiding what we know does not work. Can we have some dialogue on that? What is a good example that we can examine now? (Not gastric freezing!)

Eric Novack

I feel a bit like a broken record, but since the discussion has strayed somewhat from Matthew’s original post, I would like to try to get Maggie to agree with me on one point. Maggie- I agree with your concept of a bell curve (in fact, I regularly talk with patients about ‘the great bell curve of life’ that explains variations in outcomes, intelligence, running speed, etc.). The way to move the curve ‘to the left’ as you say, is much easier if we focus on what SHOULD NOT be done for certain conditions, rather than trying to get agreement… Read more »

Steve Beller, Ph.D

“In other words, if ‘the professionals’ can’t measure how to choose a ‘good’ healthcare provider, then how can we expect consumers to make wise choices…”
That’s one way to say it!

Matt

“Isn’t consumer-directed healthcare based on the notion of having providers compete on outcomes and cost through transparency? The problem is, we need the data and information systems to generate truly useful outcomes and cost data before we can rationally expect consumers to select providers intelligently.”
In other words, if “the professionals” can’t measure how to choose a “good” healthcare provider, then how can we expect consumers to make wise choices…

Steve Beller, Ph.D

I agree that EMRs (EHRs) are an essential piece. But they must be applicable to all specialties and all clinical data sets, and, ideally, they should be designed to transmit the diagnostic, process, and outcomes data (stripped of identifiers) to researchers working in collaboration with the practitioners. In addition, they should be integrated with decision-support tools (including diagnostic assessment, medication management, basic alerts and reminders, and plan-of-care generation and execution management) if they are to help improve outcomes significantly. This means we need radical and affordable HIT innovation to evolve current day applications into transformational tools. Concerning Porter: Isn’t consumer-directed… Read more »

Maggie Mahar

Matt and Steve–
I agree with both of you. If we want to measure quality, we need to be collecting and analyzing a much greater depth and breadth of data, and that is impossible without electronic medical records.
This reminds me of Michael Porter’s new book on healthcare and his assertion that all we need to do is “make outcomes transparent” and ask health care providers to compete on quality.
Does he address the lack of EMRs needed to collect the data, or the difficulty of “making outcomes transparent”?

Matt

The current constraining factor for outcomes measurement is the ease/burden of the measurement itself.
We know that the measures we use are crude, but the burden of forcing MORE chart review, etc., in a paper-based healthcare world in order to measure P4P indicators can make the cost of measurement greater than the (economic) reward.
I see lowering the burden/cost of indicator measurement (both for P4P and – God forbid – internal performance improvement efforts) as a key benefit of EMRs (of the not-so-distant future).

Steve Beller, Ph.D

Maggie’s response is excellent. I’ve always argued that it is foolish to measure care quality using only process measures, and this study validates my position. I’ve also argued that using only claims (administrative) data to measure care quality isn’t wise since comprehensive clinical (encounter) data is crucial. I am not at all surprised, therefore, that a weak correlation exists between a few generic processes and a few short-term claims-type outcomes measures (e.g., risk-adjusted 30-day mortality rates). The healthcare industry has to start collecting and analyzing lots of comprehensive, detailed, clinically-relevant data, including: • Diagnostic data about patients’ physical and psychological… Read more »

Tom Leith

Matthew writes: “So there are to my mind only two possible conclusions.” One of my former boss/mentors taught me: “If you didn’t measure it, you don’t know it.” Bureaucracy-driven process control and reporting has enabled the measurements outlined in the abstract. This is science, and is reason enough to continue bureaucracy-driven process control and reporting. Apparently, relatively simple industrial processes explain 6% of the hospital-level variation in 30-day mortality rates for AMI, in spite of the other apparent fact that “we” don’t know much about what drives those rates. This is nothing to sneeze at. Billion dollar drugs are… Read more »

john

I swear I hadn’t read Maggie’s comments before I posted this. But now I remember who posted the article a few weeks ago! Thanks, Maggie!

john

Disclosure: I haven’t read this study in full yet. But a few quick comments:
- How can anyone knock Pay for Performance while it is still in its infancy? We’ve finally aligned hospitals’ incentives with care that should in theory result in less business for the hospital… aligning hospitals’ incentives with patients’ is the Fermat’s Last Theorem of health care, and at least CMS is trying.
- Someone on this blog referenced Atul Gawande’s New Yorker article entitled “The Bell Curve” (http://www.newyorker.com/fact/content/?041206fa_fact) about a month ago. I think this article is a great illustration of the (in the words of nacho… Read more »

Maggie Mahar

The JAMA study shows that outcomes research is still an infant science, which makes “pay for performance” premature, at best. Today, the measures that we use to rate performance are generally either too crude (did the patient live or die over the next six months?) or too narrow (did he receive an aspirin?). Measuring the quality of medical outcomes is far more complex than judging the quality of Toyotas as they roll off the assembly line. We need far subtler measures of quality. For example, if the patient died, was he in extreme pain? Did he undergo an unnecessary, stressful operation before… Read more »

Matthew Holt

Maybe, Eric, but the truth is that the lack of a system now has led to the tremendous variation in outcomes, and I find it more likely that we need more investigation of what works rather than just accepting the status quo. Of course, that assumes that we care about outcomes. As you know, I’m more interested in constraining inputs.

Eric Novack

Barry, that is a fabulous question… and one that I would like to answer with a whole posting. I will try to find time today or tonight to give your question its due.

Barry Carol

Could it be that the most important variable here is the skill of the surgeon or interventional cardiologist, at least with respect to those patients who required an intervention or surgery?