If Americans judged the quality of hospital care the way Newsweek judges high schools, we would soon be inundated with “charter hospitals” that only treat healthy patients.
As reported in The New York Times, thirty-seven of Newsweek’s top 50 high schools have selective admission standards, thereby enrolling the cream of the eighth-grade crop. That means that when these high-scoring eighth graders reach eleventh grade, they’ll be high-scoring eleventh graders, helping the school move up the Newsweek rankings. These selective admission schools simply have to avoid screwing up their talented students.
That’s no way to determine how good a school is. A better measure of educational quality would assess how well students actually performed at a school compared with how they would have been predicted to perform had they attended other schools.
Imagine two liver transplant programs, one whose patients experience 90% survival in the year following their transplant and the other whose patients experience only a 75% survival rate. Based on that information, the former hospital looks like the place to go when your liver fails. But aren’t you curious about the kind of patients that receive care in these two hospitals? Wouldn’t you want to know whether that first hospital was padding its statistics by selectively transplanting relatively healthy patients?
When hospitals are judged by patient outcomes, savvy hospital administrators find ways to bolster their statistics. That’s why, according to a 2005 JAMA article, when New York State began reporting mortality rates and complication rates for patients undergoing cardiac surgery, hospitals in that state began to game the system. They found ways to avoid patients who were less likely to survive after surgery, minority patients, for example, or patients with lots of other illnesses, what doctors call co-morbidities.
New transplant programs, eager to qualify for Medicare reimbursement, work hard to bolster their transplant survival statistics, because Medicare looks for proof of success before deeming a program eligible for reimbursement. The best way to achieve good survival statistics, of course, is to transplant the healthiest candidates. Heck, if you really want good survival rates, you should transplant me—I’m perfectly healthy (aside from a few worn-out joints)!
For the last two years, I have taught an undergraduate health policy course at Duke University with the assistance of Public Policy teaching assistants who have interests in education policy. These TAs often remark on the parallels between education policy and health policy. The challenges of measuring health care quality, for example, closely parallel the difficulties of assessing the quality of an educational institution. In healthcare, we “risk adjust” our measures, to even out the playing field between hospitals that care for otherwise different populations. These risk adjustments are not perfect by any means. But researchers are slowly improving their statistical measures.
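To make the idea concrete, here is a minimal sketch of one common risk-adjustment technique: comparing a hospital’s observed deaths to the deaths a risk model would have predicted given its particular mix of patients. The numbers below are entirely hypothetical (they are not drawn from any real hospital or from the JAMA study), and the flat per-patient risk scores stand in for what would normally come from a statistical model with many co-morbidity variables.

```python
# A toy illustration of risk adjustment via observed-to-expected (O/E) ratios.
# Hospital A transplants healthier patients, so its raw survival rate looks
# better -- but after adjusting for patient risk, the picture reverses.

def oe_ratio(outcomes, expected_risks):
    """Observed deaths divided by the deaths a risk model predicted.
    outcomes: 1 = died within a year, 0 = survived.
    expected_risks: each patient's model-predicted probability of death.
    A ratio above 1.0 means worse than expected; below 1.0, better."""
    observed = sum(outcomes)
    expected = sum(expected_risks)
    return observed / expected

# Hospital A: low-risk patients, 90% raw survival (2 deaths in 20).
a_outcomes = [0] * 18 + [1] * 2
a_risks    = [0.05] * 20   # model expects only 5% mortality per patient

# Hospital B: high-risk patients, 75% raw survival (5 deaths in 20).
b_outcomes = [0] * 15 + [1] * 5
b_risks    = [0.30] * 20   # model expects 30% mortality per patient

print(round(oe_ratio(a_outcomes, a_risks), 2))  # 2.0  -> worse than expected
print(round(oe_ratio(b_outcomes, b_risks), 2))  # 0.83 -> better than expected
```

Despite its higher raw survival rate, hospital A killed twice as many patients as its case mix predicted, while hospital B did better than predicted — exactly the distinction raw rankings obscure.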
Education policy needs to aggressively adopt the same kind of risk adjustment measures, if we hope to identify which high schools are truly doing the best job of educating their students and preparing them for the future.
Peter Ubel is a physician, behavioral scientist and author of Pricing Life: Why It’s Time for Health Care Rationing and Free Market Madness. He teaches business and public policy at Duke University. Peter’s new book, Critical Decisions will be available in the fall of 2012. You can follow him on his personal blog.
“What if”
Yes Shimonoseki, what if indeed. What if patients designed health care and not corporate America.
Peter, Margalit, Brian, …
What if we re-framed the question regarding hospital quality and transparency?
What if rather than focusing on comparing hospitals based on various measures, we challenge ourselves and ask what can we do to make all hospitals top hospitals?
What if rather than listing all the barriers, we accept public reporting as The Society of Thoracic Surgeons does, which “believes the public has a right to know the quality of surgical outcomes and considers public reporting an ethical responsibility of the specialty”?
http://www.sts.org/quality-research-patient-safety/sts-public-reporting-online
What if rather than spending millions of dollars advertising their US News rankings, hospitals built on the “Philadelphia Plan” that Chas. Scott Miller, M.D., proposed in 1917, in which he laid out the foundation for the standardization of hospital statistics?
http://www.researchgate.net/publication/5836965_HOSPITAL_STATISTICS_AS_AN_AID_TO_PUBLIC_HEALTH_ADMINISTRATION-THE_PHILADELPHIA_PLAN
What if rather than fighting ideologically and special interest defined battles about healthcare, we spent our collective energies trying to figure out how to achieve quality, accessible, affordable evidence care for all Americans?
At citizens4health we are developing a plan that asks these and similar questions. We focus the conversation on one simple question: How can we all—patients, doctors and health care institutions; citizens, not-for-profit organizations and government; consumers and the “private” sector corporations—achieve effective solutions to the challenges facing the healthcare system?
Would love to expand on this but off to work…
You can learn more about our How Safe is Your Hospital? initiative. It’s in the development stage, but you can get an idea of what our answers to the above questions look like.
http://www.citizens4health.org/How-safe-is-your-hospital
sm2012 – My point wasn’t to discount the value of certification. It was to say that competence is a complex thing to measure past a certain point, especially as we start to narrow the focus of our assessment (be it certification for health care providers of specific specialties, health care providers who are entrenched in the system for a certain number of years, etc.). I’m aware that there are systems in place to encourage professional education and growth over the years… we might do well to better understand the ‘soundness’ of these particular systems… the incentive to make them sound across the board doesn’t seem to be there currently. Although, as you point out and know more about, there are organizations getting it right and using this information productively, in a ‘data savvy manner.’ I hope the incentive will remain or increase for this to continue…
“There is a huge amount of testing and recredentialing and clinical skills maintenance and ongoing knowledge assessment that is mandated as part of these various certification processes.”
sm2012, are you saying all docs are created equal? Not sure if you’ve ever needed a doc for a complicated operation but would you or how would you seek out the best qualified surgeon? And if this “certification” process is so flawless then why is it so hard to get rid of incompetent docs?
Teachers go through certification and re-evaluation as well, what’s your opinion of the teaching profession?
“Just like airline pilots requiring regular testing and updating of abilities, so do physicians and nurses have continuing medical education and a variety of professional development systems.”
The difference is all these other groups are employees and their relationship to patients and the institution is not hierarchical. And the skill of airline pilots also protects their lives, so their stake in being proficient is based in part on self survival.
Maithri – if certification is useless, then get rid of the board certification process and all of the accompanying organizations. Go back to the days when anyone could hang up a shingle and call themselves whatever they wanted.
But that would be counter-productive to say the least. There is value in all of these systems. There is a huge amount of testing and recredentialing and clinical skills maintenance and ongoing knowledge assessment that is mandated as part of these various certification processes. Just like airline pilots requiring regular testing and updating of abilities, so do physicians and nurses have continuing medical education and a variety of professional development systems. All the various societies either have or are implementing MOC (maintenance of certificate) programs that require annual reassessment.
But that doesn’t fully answer the question that Peter1 is asking. For that, I won’t give the rote answer of peer review. There is much improvement needed in that system. I will instead point to Margalit’s comment, which I think is exactly correct. Data and transparency are both fine, but the obsessive attempt to quantify everything leaves the door open to manipulation (in whichever direction is desired). There is a place where data can improve the system and lead to a culture where the right questions are asked. This, btw, is the fundamental benefit of data – not to solve all the problems, but to help us ask the right questions and lead to an assessment of the deeper processes or individual performance issues underlying the results.
When this process is done well – and I have seen it done so by well run organizations – a data savvy culture emerges. Not one that is obsessed, but one where providers and administrators work with PI and Quality teams to assess data and use this information to help make system level changes that benefit everyone.
“What internal system exists to evaluate doctor competency?”
Good point–past the MCAT, or the boards, dare I say-how sound is certification? Who is putting forth the effort to make certification (which I suppose is a measure of ‘competence’) sound?
I think this obsessive-compulsive need to score and rate everything, as in top 10 this or 10 worse that, is becoming quite ridiculous, and counter productive.
Data is fine, transparency is fine, but manipulating and processing everything to death, does not necessarily shed any more light on the situation, and with a little bit of skill, may be used for the exact opposite.
What internal system exists to evaluate doctor competency?
I’m not clear why some people are objecting to this post. What is Peter saying really? Providers and administrators are sometimes gaming the system because there are challenges to risk adjusting accurately and the education system needs similar risk adjustment to be able to make fair comparisons. Those both seem like very fair points.
Brian, yes, analytics tools do incorporate for risk adjustment but as an analyst and consultant to hospitals changing their processes to accommodate public reporting, there is no question in my mind that accurately capturing co-morbidities on the front end is challenging and many people will just manage this issue by taking care of less sick patients on the back end.
Re the ‘shameful’ issue of the lack of data transparency, most professionals struggle to interpret and manage the data. There is a very punitive culture when insurers and the public assess data, rather than one of implementing change. Yes, data should be available, but why any more so in medicine than in any other industry such as venture capital, finance, politics, law, business, entertainment and, of course, education?
The provider culture of everyone feeling they are excellent is a fair point. But to attack a surgeon over a high rate of, say, postoperative renal failure is not meaningful feedback. The vast majority of complications for a surgeon are medical and related to care that usually falls under the purview of the medical providers involved in a patient’s care. Showing this type of assessment simply convinces surgeons that the data is useless and not reflective of their professional ability, and demotivates them toward change. That is not the goal.
The point is to conduct a fair and careful analysis and to use the data in a meaningful way. When systems can learn to work together towards the correct goals and the data is not used punitively to ‘show which doctors suck’, but rather to genuinely improve care, that will happen.
“But would the good doctor really choose the wretched school for his own children, provided that on a risk adjusted basis it is no worse than the charter school across the street?”
No he wouldn’t, because the public school is not necessary to the success of his children who have everything they need to succeed via their doctor parent. The analogy may not be a good one but if you’re evaluating teachers, as opposed to doctors, the public ones should be getting paid twice what the charter ones are – too bad that doesn’t happen.
” would you go there?”
Good point, but if I was judging (without prejudice) on lots of experience treating difficult diagnoses, then it may be my best option if I was a difficult case.
Schools are more about the right associations, prestige and bragging rights. Even poor parents want to send their kids to the rich school with already motivated and easily taught kids – less bad influences.
What a remarkable specimen of progressive intellectual virtuosity: take a perfectly pedestrian and noncontroversial staple of statistical technique (“Hey, let’s risk adjust those mortality statistics—brilliant!”) and then tendentiously extend the application to a witheringly more complex social experiment—inner city charter schools—as a means of impugning the latter. Yep, if you “risk adjust” in just the right way you’ll definitely be able to “prove” that that elitist charter school is actually no better than the wretched public school across the street, awash as it is in a sea of dysfunction and disadvantage. (Indeed, on a properly risk adjusted basis, that wretch is every bit the equal of Sidwell Friends or Phillips Exeter. Right?) Now, we all know how the good doctor will select the transplant surgeon for his beloved family member. But would the good doctor really choose the wretched school for his own children, provided that on a risk adjusted basis it is no worse than the charter school across the street?
Brian,
In primary care, at least, there are so few quality measures that hold up to any kind of rigorous analysis. Any thoughts on how to separate the good apples from the bad?
Let’s be honest for a minute….
If the data were adjusted for risk and socio-economic status and a variety of other things, and a school where 2% of kids go on to college, ended up being scored as excellent because of those adjustments, would you send your kids to that school?
If a hospital had an absolute transplant survival rate of 10%, but after all the adjustments for treating very large numbers of homeless and poor people who are very sick and mostly hopeless, was rated as high quality, would you go there?
Peter,
The failure to understand and adjust for risk levels has been around certainly for as long as I’ve been an analyst. All credible analytics tools now incorporate them.
The larger and more shameful problem, though, is the failure of physicians, hospitals, health plans and everyone else in health care to make data available. Medicare physician data is still locked from public view, even though physicians taking Medicare assignment are vendors taking public dollars. Nearly all health plans treat their claims data as proprietary, and require clients receiving their own data to sign agreements that they won’t use it to conduct comparative provider analyses.
Further, the lack of meaningful feedback has led to a provider culture in which everyone believes him/herself an excellent clinician, without being forced to confront the facts. When those facts are made available – e.g., generating comparative episodic cost rates for top quality performers – many physicians are outraged at the data rather than appalled at their own performance.
Our wounds are self-inflicted. We are delusional about our own performances in health care, have not cultivated a quality improvement culture, and our quality suffers as a result.
I agree wholeheartedly with your point that we need to be careful about how we conduct analyses. What I was expecting you to say, but somehow missed, was that we need to move forward with those safeguards and get the analyses done, so we have a clue which doctors/services are doing a good job and which ones suck.
great post.
short. sweet. and to the point.
also correct.