Not All Ratings Are Equal

Earlier this month U.S. News & World Report released its annual list of America’s Best Hospitals. In my opinion, this list is terribly misleading and a disservice to the magazine’s readers. The fine print is revealing:

“Central to understanding the rankings is that they were developed and the specialties chosen to help consumers determine which hospitals provide the best care for the most serious or complicated medical conditions and procedures—pancreatic cancer or replacement of a heart valve in an elderly patient with co-morbidities, for example. Medical centers that excel in relatively commonplace conditions and procedures, such as noninvasive breast cancer or uncomplicated knee replacement, are not the focus.”

Since when did breast cancer and knee replacements become so commonplace that they don’t matter? On July 19, The New York Times published “Doubt About Pathology Opinions for Early Breast Cancer,” suggesting that even diagnosing Stage 0 breast cancer is fairly difficult. And what is the bright-line test between “uncomplicated” and “complicated” knee surgery?

Research by Dr. Ash Sehgal, recently published in the Annals of Internal Medicine, has shown that the U.S. News results are most closely tied to the opinions of the doctors surveyed over a three-year period. About 100 doctors influence each of the 16 specialties annually, and four of the 16 specialties are based entirely on doctor opinion. In addition, only 1,892 of the 4,852 hospitals are even considered for ranking, and only 152 get ranked (Read: that’s about 3%)! Really, what if my pick-up truck won’t get there from here? What about the other 4,700 hospitals? Don’t they matter, or doesn’t the magazine care?

Full disclosure: I create health care ratings for a living, and that is my bias! I do it to help consumers find the most appropriate and best care for themselves and their families. I also do it to challenge the industry to improve: to reduce mortality, reduce complication rates, improve patient safety, increase efficiency and reduce costs, improve patient experience and, finally, improve functional-status outcomes…the end result of recovering from illness. Once again, this all benefits people like you and me. I do this commercially for a fee, as I prefer to see foundation money used for medical innovation. I have been doing it longer than U.S. News, Thomson-Reuters, HealthGrades, WebMD, The Dartmouth Atlas, JDPower Healthcare, NCQA, Leapfrog Group, and dozens of quangos and other self-proclaimed experts. With another round of rankings and ratings, it is time again to comment on our progress and purpose.

The U.S. News “most serious or complicated” rating is itself a serious error. Now, if the magazine cover or name were America’s Best Hospitals for Most Serious or Complicated Care, then we could pause here. There is a huge difference (Read: statistically significant) between looking only at serious and complicated care and studying how hospitals do with the high volume of less serious, routine care! The vast majority of care delivered by the roughly 4,852 Medicare-certified hospitals in this country is routine! AND plenty of things go wrong with routine care that we should all be concerned with. But does that make for good headlines?

Similarly, many policymakers and all members of the press other than Reed Abelson and Gardiner Harris of The New York Times are apparently unaware that The Dartmouth Atlas studies only Medicare Part A & B (i.e., no Medicare Advantage or Part D) beneficiaries’ costs during the last two years of life, and only for those who ultimately died (Read: it doesn’t look at the success of those who survived). Perhaps calling it The Dartmouth Atlas of the Cost of Mortality would make the statistical relevance of the researchers’ important work more evident. That, of course, would probably make The Dartmouth Atlas less popular in policy-making circles in Washington, D.C., if last summer’s talk of “death panels” is any indication. At least we would all know what bias there is in the denominator of the study group, without having to reference an antiquated footnote about prior research here, here and here. Note to researchers: if you are only going to look at high-cost outliers at the end of life, you should not ignore the high-cost outliers at the beginning of life.

And then there is the just-released Hospitals & Health Networks Magazine 100 Most Wired Hospitals list. I guess if you are an HIT vendor you might be proud of a few clients who made it here or here. The problem is, from my own analysis of the hospitals on the list (of which there are more than 100), fewer than 40% of them fall in the top quartile of performance on a variety of metrics that those same information systems are supposed to help, such as patient experience, affordability and efficiency, patient safety, CMS Core Measures, 30-day re-admissions and 30-day mortality rates! So, if fewer than 40% of the “Most Wired” are in the top quartile on the most basic metrics of quality and efficiency, what does that suggest for the likely success of HITECH, Meaningful Use and EMRs? If Hospitals & Health Networks Magazine and the CHIME board that selected these hospitals are adept at identifying the best technology, then the curious and the skeptics might ask what that technology is accomplishing for those institutions. If 10% of the “Most Wired” rank worse than the top 2,000 hospitals nationally (out of the same 4,852) on the most important metrics of quality, safety, experience, efficiency and outcomes, the folks who think HITECH is a key part of reform may be in for a big surprise!

Certainly every researcher or magazine publisher would like to defend their work, but every researcher, magazine publisher and licensor of academically funded research methodology should also embrace transparency, welcome constructive criticism and work to improve…else they aren’t really objective researchers seeking a solution to the problems we face.

In my opinion, the commercial world has done a better job of creating rating, ranking and review systems for hospitals and doctors than those in research or those affiliated with media outlets. WebMD, HealthGrades, DataAdvantage, Vitals.com and Thomson-Reuters, to name just a few, are all productively contributing to moving the needle on provider performance and, in the end, helping consumers. Pick your poison; selecting healthcare isn’t an easy choice, but if you stick with the ratings that are most transparent and functional, then at least you will know what you are reading. We have a long way to go to get it right…but just as in healthcare, we need to weed out the hype of those who mislead and misinterpret the data to sell a magazine, to influence health policy or to export a methodology to foreign corporations.

Let’s get with the guidelines, folks; the consumer world is anxiously awaiting our accurate help.

John R. Morrow has founded, created and contributed to a variety of national ratings programs, including 100 Top Hospitals: Benchmarks for Success℠ (a Thomson-Reuters product), The Patient Satisfaction Index™ (a National Research Corporation product) and The Hospital Value Index™ (a Press Ganey & Associates property), and is currently in beta with Distinguished Doctor™, a new doctor-profiling initiative. Morrow was a Principal at HCIA/Solucient, CEO of CHKS Ltd and SVP at HealthGrades, and is Principal at The Ratings Guy LLC. John welcomes all comments.
