Reputation versus quality: U.S. News Hospital Ranking

Each year, U.S. News & World Report publishes its list of the top 50 hospitals in various specialties (example here). Now, an article has been published suggesting that one aspect of the magazine's methodology is flawed.

“The Role of Reputation in U.S. News & World Report’s Rankings of the Top 50 American Hospitals,” by Ashwini R. Sehgal, MD, is in the current edition of the Annals of Internal Medicine. (You can find an abstract here, and you can obtain a single copy for review from Dr. Sehgal by sending an email to axs81 [at] cwru [dot] edu.)

Dr. Sehgal finds that the portion of the U.S. News ranking based on reputation is problematic because reputation does not correlate with established indicators of quality:

The relative standings of the top 50 hospitals largely reflect the subjective reputations of those hospitals. Moreover, little relationship exists between subjective reputation and objective measures of hospital quality among the top 50 hospitals.

More detail is provided in the article:

The predominant role of reputation is caused by an extremely high variation in reputation score compared with objective quality measures among the 50 top-ranked hospitals in each specialty. As a result, reputation score contributes disproportionately to variation in total U.S. News score and therefore to the relative standings of the top 50 hospitals.

Because reputation score is determined by asking approximately 250 specialists to identify the 5 best hospitals in their specialty, only nationally recognized hospitals are likely to be named frequently. High rankings also may enhance reputation, which in turn sustains or enhances rankings in subsequent years.

Given the importance attributed to the U.S. News ranking, this article is bound to raise concerns. I know that the folks at the magazine have worked hard over the years to make their rankings as objective as possible, and it will be interesting to see their response to Dr. Sehgal’s critique.

Paul Levy is the President and CEO of Beth Israel Deaconess Medical Center in Boston. Paul recently became the focus of much media attention when he decided to publish infection rates at his hospital, despite the fact that under Massachusetts law he is not yet required to do so. For the past three years he has blogged about his experiences in an online journal, Running a Hospital, one of the few blogs we know of maintained by a senior hospital executive.


9 replies

  1. A lot of interesting comments. Even though the U.S. News rankings are well known, at the end of the day I wonder how much influence they have when individuals make care decisions. Even with a stellar reputation, I think folks are somewhat skeptical. Most people will seek care locally or regionally. I would bet that Facebook rankings – if such a thing existed or were developed – would be more influential. The impact of these social networks will be more important than expert rankings. It sounds simplistic, but if you treat patients and families with respect, strive to provide quality care (everybody does, but mistakes and bad outcomes happen even when everything is done right), and are honest every step of the way, the quality ratings will take care of themselves to a large degree, because the driver will be patients sharing their experiences, not expert/survey opinions.

  2. It is no surprise that the so-called Top 50 hospitals have closets full of deadly secrets and hidden agendas: a mutual admiration society of who’s who, and of those with the best fundraising and lobbying machines.
    It is a known fact that hospital administrators play the media like a fine violin, maneuvering to improve their public-relations persona and exploiting patients with a false perception. Hospitals are protected from disclosing medical errors and HAIs.
    So if you believe that these places are safe, think again!

  3. The rankings are gamed. At least one prominent hospital alerts its staff as to whom to vote for if they get a ballot. The hospitals brag as if U.S. News and World Report were the New York Times, which it ain’t. The hospitals cover up their HIT debacles, game their case mix, game their door-to-balloon time and their time to treat pneumonia, so why not game the best hospital or best doctor contest? One best doctor refused to return a phone call from a patient; one best hospital puts patients into bankruptcy at a high rate. Paul Levy should remove his hospital from consideration in this sham.

  4. I certainly welcome the thrust of Dr. Sehgal’s article but … duh. Is the Pope Catholic?
    The problems with quality-of-care rankings:
    1) For most patients and a lot of docs, it’s entirely subjective. Did they do a fancy scan, or talk about research? Then it’s certainly a top-notch facility.
    2) The objective measures are problematic in that mortality and morbidity are hard to adjust for severity and comorbidity … and moreover, outcome differences will not be great (rather, very subtle) in most conditions where you have a high enough n. For routine work, a well-run community hospital working with checklists is probably impossible to beat (in a statistically significant way). For the very rare stuff that is not only treatable but also somewhat complex in its management, however, you are better off at a regional or national referral center.

  5. I’m very thankful to Dr. Sehgal. Anybody with a big enough budget can have a good reputation; it’s called marketing. I wrote to the editors of USNWR several years ago, having taken great exception to their rankings because my husband was hospitalized in one of those facilities, and it couldn’t possibly be in the Top 50 unless it was a list of the worst hospitals. The USNWR editors never replied. This is just another reason I wrote it all down in a book instead.

  6. These findings are hardly surprising, since a decent reputation can be marketed to create the impression of a great one. This tends to favor larger hospitals, which have larger budgets and are also able to attract the stars of a particular field to further boost their rankings. Meanwhile, many medical practitioners who are just as proficient have not achieved the same visibility.

  7. I have to say that this doesn’t surprise me at all, and I have always viewed these “soft” kinds of ratings with skepticism. It is another example of the mismatch in our health care system between what people — even professionals — “think” is true and what data actually can show to be true. One can only imagine how much more disjointed from reality similar ratings of “best doctor” must be.
    We really need to get to a health system that is much, much better about collecting, evaluating, and publishing data on quality & effectiveness. Otherwise it’s just magic or hand-waving, so to speak.