
For America’s “Best Hospitals,” Reputation Doesn’t Hold as Much Weight

U.S. News and World Report has released its annual lists of the best hospitals in America, but this year the rankings were based more on performance data and less on reputation.

U.S. News and World Report began rating hospitals in 1990 when clinical data comparing hospital performance didn’t exist, according to a blog post written by Avery Comarow, senior writer and health rankings editor for U.S. News. As a result, the first editions of the list were solely based on the hospitals’ reputations. The media outlet began turning away from reputation-based rankings in 1993 when it added mortality, nurse staffing and other objective measures that reflected patient care.

That focus on performance data has continued to grow. In fact, for 12 of the 16 specialties in the latest edition of Best Hospitals, more than 65 percent of a hospital’s ranking depends on clinical data, most of it from the federal government. Hospitals in the four remaining specialties — ophthalmology, psychiatry, rehabilitation and rheumatology — are ranked solely by their reputation among specialists.

U.S. News says it took steps to strengthen its reputational rankings this year, including a modification that reduced the likelihood that hospitals with the most physician nominations would “bob toward the top” of the rankings. The change “took some of the juice out of high reputational scores” and placed more emphasis on objective clinical data. The media outlet said some hospitals that made it to the top may not have any reputational score at all; their inclusion is based wholly on clinical performance.

Many of this year’s top contenders are familiar, although there have been some shifts. Most noticeably, Johns Hopkins Hospital in Baltimore fell from the No. 1 spot it had held for 21 years to second place. Massachusetts General Hospital in Boston replaced it.

Below is the breakdown of the U.S. News methodology for the 2012-2013 list. It applies only to the 12 specialties ranked on objective data, not the four that still rely on reputational scores. A rough illustration of how the weights might combine into a single score follows the list.

Survival score (32.5 percent): Based on a comparison of the number of Medicare inpatients with certain conditions who died within 30 days of admission in 2008, 2009 and 2010 — the latest years for which data is available.

Patient safety score (5 percent): Reflects how hard a hospital works to prevent six harmful patient errors, such as injuries during surgery and bleeding after surgery.

Reputation (32.5 percent): Each year, 200 physicians per specialty are randomly selected and asked to list the hospitals they consider the best in their specialty. U.S. News bases reputational scores on the combined results of three years of surveys to reduce the chance that individual physicians’ perspectives skew the rankings.

Other care-related indicators (30 percent): Includes nurse staffing, technology and other measures related to patient care. This data was largely from the American Hospital Association’s 2010 nationwide survey.
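The weights above sum to 100 percent, so the overall score for these 12 specialties behaves like a weighted average of the four components. The sketch below is purely illustrative: the weights are the ones listed in the article, but the 0-100 component scores, the normalization, and the simple weighted-sum formula are assumptions, since the post does not describe how U.S. News actually computes the composite.

```python
# Illustrative only: the weights come from the article; the 0-100 component
# scores and the weighted-sum formula are assumptions, not the actual
# U.S. News methodology.

WEIGHTS = {
    "survival": 0.325,               # 30-day Medicare mortality comparison
    "patient_safety": 0.05,          # six harmful patient errors
    "reputation": 0.325,             # three-year physician survey average
    "other_care_indicators": 0.30,   # nurse staffing, technology, etc.
}

def composite_score(components):
    """Weighted sum of component scores, each assumed to be on a 0-100 scale."""
    return sum(WEIGHTS[name] * components[name] for name in WEIGHTS)

# Hypothetical hospital with strong clinical results but no reputational
# score, illustrating the article's point that a hospital can rank on
# clinical performance alone.
example = {
    "survival": 90.0,
    "patient_safety": 75.0,
    "reputation": 0.0,
    "other_care_indicators": 85.0,
}

print(round(composite_score(example), 1))  # 58.5
```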

This year, U.S. News also ranked hospitals at three geographic levels: states, metro areas, and other regions. Methodology for those designations can be found here.

Molly Gamble is a writer and lists editor for Becker’s Hospital Review, where this post originally appeared.

3 replies

  1. David,

    Totally agree with you. There are all kinds of perverse incentives and outcomes that come from the various objective measures.

    As with anything, when you give people grades or goals, they will do whatever they need to do to get there.

    Having consulted in hospitals on quality outcomes work, I’ve seen that the challenge, of course, is actually using the measures as proxies to improve the system.

    The data is far from perfect, and it’s important that people understand the limits of the data while organizations use it to continue to drive change.

  2. One concern I have is that some of the new ranking criteria (nurse staffing ratios, advanced technologies) may bias the rankings toward the highest-cost providers, not necessarily the ones with the best outcomes. If my hospital has a 9:1 patient-to-nurse ratio but I recruit the best nurses, pay top dollar and run a really efficient unit, while the hospital down the street has a 7:1 ratio but is slow as molasses, I lose points in the rankings.
    It seems like the data and analytics on outcomes are often missing, so the “rankers” out there try to tiptoe around them and find proxies. That’s well and good, but there could be some perverse effects.
    What you measure is what you’re going to get.

  3. I think there is an assumption on the part of readers that the data used in rankings like this is relatively up to date.

    “Based on a comparison of the number of Medicare inpatients with certain conditions who died within 30 days of admission in 2008, 2009 and 2010 — the latest years for which data is available.”

    That’s insane.

    If I’m going to travel back in time to 2008 and pick a hospital, this is extremely helpful information.

    Otherwise, not so much.