
Tag: Hospital rankings

Why Transparency Doesn’t Work.

The Cleveland Clinic is by far the best provider of cardiac care in the nation. If you have cancer, there is no better place to be than Texas. Johns Hopkins is the greatest hospital in America.

Why? Because US News and World Report suggests as much in its hospital rankings.

But which doctors at the Cleveland Clinic have the highest success rates in aortic valve repair surgeries? What are the standardized mortality rates due to cancer at University of Texas MD Anderson Cancer Center? Why exactly is Johns Hopkins the best?

We don’t have answers to these types of questions because in the United States, unlike in the United Kingdom, data is not readily available to healthcare consumers.

The truth is, the rankings with which most patients are familiar provide users with little of substance. Instead, hospitals are evaluated largely by “reputation,” while details that would actually be useful to patients seeking to get the most out of their care are omitted.

Of course, the lack of data available about US healthcare is not US News and World Report’s fault – it is indicative of a much larger issue. Lacking a centralized healthcare system, patients, news sources, and policy makers are left without the information necessary for proper decision-making.

While the United Kingdom’s National Health Service may have its own issues, one benefit of a system overseen by a single governmental entity is proper data gathering and reporting. If you’re a patient in the United Kingdom, you can look up everything from waiting times for both diagnostic procedures and referral-to-treatment all the way to mortality and outcome data by individual physician.

This stands in contrast to the US healthcare system, where the best sources of data rely on the voluntary reporting of information from one private entity to another.

Besides being riddled with issues, including a lack of standardization and oversight, this voluntary reporting limits the data available to patients. What remains surfaces mostly in profit-driven endeavors like US News and World Report, or in far less well-known initiatives like The Leapfrog Group that contain too few indicators to be of real use.

The availability of data in the United Kingdom pays dividends. For example, greater understanding of performance has allowed policy makers to consolidate care centers that perform well and close those that hemorrhage money, cutting costs while improving outcomes.  Even at the individual hospital level, the availability of patient data keeps groups on their toes.

Continue reading…

Getting Quality Right: Exercise Due Caution When Grading Hospitals, Schools and Doctors

If Americans judged the quality of hospital care the way Newsweek judges high schools, we would soon be inundated with “charter hospitals” that only treat healthy patients.

As reported in The New York Times, thirty-seven of Newsweek’s top 50 high schools have selective admission standards, thereby enrolling the cream of the eighth grade crop. That means that when these high scoring eighth graders reach eleventh grade, they’ll be high scoring eleventh graders, helping the school move up the Newsweek rankings. These selective admission schools simply have to avoid screwing up their talented students.

That’s no way to determine how good a school is. The measure of a good education should be to assess how well students did in that school compared to how they would have been predicted to do if they had gone to other schools.

Imagine two liver transplant programs, one whose patients experience 90% survival in the year following their transplant and the other whose patients experience only a 75% survival rate. Based on that information, the former hospital looks like the place to go when your liver fails. But aren’t you curious about the kind of patients that receive care in these two hospitals? Wouldn’t you want to know whether that first hospital was padding its statistics by selectively transplanting relatively healthy patients?
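To make that concern concrete, here is a minimal illustrative sketch, in Python, of how an observed-versus-expected comparison can reverse a raw survival ranking. Every number below is hypothetical and invented purely for illustration:

```python
# Hypothetical illustration: raw survival vs. risk-adjusted (observed/expected) survival.
# All figures are invented; they are not data from any real transplant program.

hospitals = {
    # name: (observed one-year survival, expected survival given case mix)
    "Hospital A (selects healthier patients)": (0.90, 0.95),
    "Hospital B (accepts sicker patients)":    (0.75, 0.70),
}

for name, (observed, expected) in hospitals.items():
    # A ratio above 1.0 means patients did better than their case mix predicted;
    # a ratio below 1.0 means they did worse, however high the raw number looks.
    ratio = observed / expected
    print(f"{name}: raw survival {observed:.0%}, observed/expected ratio {ratio:.2f}")
```

Under these made-up case mixes, the hospital with 90% raw survival actually falls short of expectations (ratio 0.95), while the one with 75% raw survival exceeds them (ratio roughly 1.07).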

Continue reading…

Hospital Rankings Get Serious

After years of breaking down, my sedan recently died.  Finding myself in the market for a new car, I did what most Americans would do – went to the web.  Reading reviews and checking rankings, it quickly became clear that each website emphasized something different: Some valued fuel-efficiency and reliability, while others made safety the primary concern.  Others clearly put a premium on style and performance.  It was enough to make my head spin, until I stopped to consider: What really mattered to me?  I decided that safety and reliability were my primary concerns and how fun a car was to drive was an important, if somewhat distant, third consideration.

For years, many of us have complained about the lack of similarly accessible, reliable information about healthcare.  These issues are particularly salient when we consider hospital care. Despite a long-standing belief that all hospitals are the same, the truth is startlingly different:  where you go has a profound impact on whether you live or die, whether you are harmed or not.  There is an urgent need for better information, especially as consumers spend more money out of pocket on healthcare.  Until recently, this type of transparent, consumer-focused information simply didn’t exist.

Over the past couple of months, things have begun to change. Three major organizations recently released groundbreaking hospital rankings. The Leapfrog Group, a well-respected organization focused on promoting safer hospital care, assigned hospitals grades (“A” through “F”) based on how well they cared for patients without harming them*.

Continue reading…

In God We Trust. All Others Must Bring Data.

I knew it would happen sooner or later, and earlier this week it finally did.

In 2003 US News & World Report pronounced my hospital, UCSF Medical Center, the 7th best in the nation. That same year, Medicare launched its Hospital Compare website. For the first time, quality measures for patients with pneumonia, heart failure, and heart attack were now instantly available on the Internet. While we performed well on many of the Medicare measures, we were mediocre on some. And on one of them – the percent of hospitalized pneumonia patients who received pneumococcal vaccination prior to discharge – we were abysmal, getting it right only 10% of the time.

Here we were, a billion dollar university hospital, one of healthcare’s true Meccas, and we couldn’t figure out how to give patients a simple vaccine. Trying to inspire my colleagues to tackle this and other QI projects with the passion they require, I appealed to both physicians’ duty to patients and our innate competitiveness. US News & World Report might now consider us one of the top ten hospitals in the country, I said, but that was largely a reputational contest. How long do you think it’ll be before these publicly reported quality measures factor heavily into the US News rankings? Or that our reputation will actually be determined by real performance data?

Continue reading…

For America’s “Best Hospitals,” Reputation Doesn’t Hold as Much Weight

U.S. News and World Report has released its annual lists of the best hospitals in America, but this year the rankings were based more on performance data and less on reputation.

U.S. News and World Report began rating hospitals in 1990 when clinical data comparing hospital performance didn’t exist, according to a blog post written by Avery Comarow, senior writer and health rankings editor for U.S. News. As a result, the first editions of the list were solely based on the hospitals’ reputations. The media outlet began turning away from reputation-based rankings in 1993 when it added mortality, nurse staffing and other objective measures that reflected patient care.

That focus on performance data has continued to grow. In fact, for 12 of the 16 specialties in the latest edition of Best Hospitals, more than 65 percent of a hospital’s ranking depends on clinical data, most of which comes from the federal government. Hospitals in the four remaining specialties — ophthalmology, psychiatry, rehabilitation and rheumatology — are ranked solely by their reputation among specialists.

U.S. News says it took steps to strengthen its reputational rankings this year, including a modification that made it less likely that hospitals with the highest number of physician nominations would “bob toward the top” of the rankings. As a result, this “took some of the juice out of high reputational scores” and placed more emphasis on objective clinical data. The media outlet said some hospitals that made the top of the list may not have any reputational score at all — their inclusion is based wholly on clinical performance.

Continue reading…

US Rumor and Hospital Report

It has been almost four years since I commented on the annual hospital ranking prepared by US News and World Report. I have to confess now that I was relatively gentle on the magazine back then. After all, when you run a hospital, there is little to be gained by critiquing someone who publishes a ranking that is read by millions. But now it is time to take off the gloves.

All I can say is, are you guys serious?  Let’s look at the methodology used for the 2011-12 rankings:

In 12 of the 16 [specialty] areas, whether and how high a hospital is ranked depended largely on hard data, much of which comes from the federal government. Many categories of data went into the rankings. Some are self-evident, such as death rates. Others, such as the number of patients and the balance of nurses and patients, are less obvious. A survey of physicians, who are asked to name hospitals they consider tops in their specialty, produces a reputation score that is also factored in.

Here are the details:

Survival score (32.5 percent). A hospital’s success at keeping patients alive was judged by comparing the number of Medicare inpatients with certain conditions who died within 30 days of admission in 2007, 2008, and 2009 with the number expected to die given the severity of illness. Hospitals were scored from 1 to 10, with 10 indicating the highest survival rate relative to other hospitals and 1 the lowest rate. Medicare Severity Grouper, a software program from 3M Health Information Systems used by many researchers in the field, made adjustments to take each patient’s condition into account.
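For readers who want a feel for what such a score might look like computationally, here is a rough sketch in Python. The severity adjustment in the real rankings comes from 3M’s proprietary grouper, and US News does not publish its exact scoring formula, so the expected-death inputs and the decile mapping below are assumptions made purely for illustration:

```python
# Rough sketch of an observed-vs-expected survival score, loosely modeled on the
# description above. Expected deaths would come from a severity-adjustment model
# (e.g., a grouper); here they are simply supplied as hypothetical inputs.

def survival_score(observed_deaths, expected_deaths, peer_ratios):
    """Return a 1-10 score: 10 = best survival relative to peers, 1 = worst."""
    ratio = observed_deaths / expected_deaths  # below 1.0 = fewer deaths than expected
    # Rank this hospital's ratio against the peer group and map the rank to deciles.
    better_than = sum(1 for r in peer_ratios if ratio < r)
    decile = better_than * 10 // len(peer_ratios)  # 0..10
    return max(1, min(decile, 10))

# Hypothetical peer group of observed/expected mortality ratios.
peers = [0.80, 0.85, 0.90, 0.95, 1.00, 1.05, 1.10, 1.15, 1.20, 1.30]
print(survival_score(observed_deaths=45, expected_deaths=50, peer_ratios=peers))  # prints 7
```

The point of the sketch is only the shape of the calculation: the ratio captures survival relative to severity of illness, and the decile step turns relative standing among peers into the 1-to-10 score described above.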

Continue reading…

Not All Ratings Are Equal: Part II

Read Part I here.

Why are all ratings not equal? Because they are designed for different purposes!

Herein lies the underlying truth behind the many objections posed by organizations being rated. Rightfully so, the Three R’s (ratings, rankings and reviews) of providers must be kept in the context of their overall purpose. This is one of the challenges to getting the Three R’s accepted and to making report cards right.

Health care is a big industry to rate and it is going to take more than one blog entry to develop a clearer picture of how best to move forward and embrace ratings systems, but let’s put down some context and history, as it is important to our current day objections and it is instructive to our future direction.

In the Beginning… in a fee-for-service market, before we had enough data to understand the enormous variability of clinical care, and before HCFA first contemplated releasing mortality data, performance measurement was all about financial performance measures. The ratings and rankings were quite simply all about financial and operating ratios, and hospitals were the institutional providers who were rated, with the CFO taking the bullet. Thanks to the public debt markets of the municipal bond industry, the hospital industry’s bricks, mortar and technology were mostly financed by long-term tax-exempt municipal bonds. Like most other financial instruments, these bonds are purchased and sold in the secondary markets long after the initial raising of capital, in some cases decades later. Because this is a predominantly not-for-profit industry, there is no statutory reporting of a hospital’s financial results, and thus the Bloomberg terminals used by traders were void of hospital performance data, making the secondary bond market and the portfolio surveillance by large bond funds and bond insurers a real challenge! No current data, no timely ratios, no real-time analysis…plenty of risk for those trading bonds. Sound familiar?

Continue reading…

Not All Ratings Are Equal

Earlier this month US News and World Report released its annual list of America’s Best Hospitals. This list is terribly misleading and is a disservice to the readers of that magazine, in my opinion. The fine print is revealing:

“Central to understanding the rankings is that they were developed and the specialties chosen to help consumers determine which hospitals provide the best care for the most serious or complicated medical conditions and procedures—pancreatic cancer or replacement of a heart valve in an elderly patient with co-morbidities, for example. Medical centers that excel in relatively commonplace conditions and procedures, such as noninvasive breast cancer or uncomplicated knee replacement, are not the focus.”

Since when did breast cancer and knee replacements become so commonplace that they didn’t matter? On July 19, The New York Times published Doubt About Pathology Opinions for Early Breast Cancer, suggesting that diagnosing Stage 0 breast cancer was fairly difficult. And what is the bright-line test between “uncomplicated” and “complicated” knee surgery?

Continue reading…

Rating or Narrating, that is the question.

This April 6–7, the Health 2.0 Europe conference will feature the many ways in which Web 2.0 tools are providing innovative solutions to, among other things, our fundamental need for self-expression, known more recently as “user-generated content”.

Several panels will refer to these issues, but we will focus in this post on the Hospital and Payers’ panel. Payers want to ensure that their patients are being oriented to good care. Hospitals want to know that they are being considered “justly”. The Health 2.0 panel will include demonstrations by Guide Santé (France) and Patient Opinion (UK), both web 2.0 sites created by physicians concerned by patient satisfaction with hospitals and clinics. Payers like the UK NHS and Big-Direkt from Germany will participate in the conversation and Big-Direkt will also demo their new online tools.

Rating sites in health are high profile in France, especially amongst those who are rated, and some early entrants have bitten the dust for methodological reasons. Rating sites, however, are not all identical, and they are certainly not alone in capturing the patient experience. They live alongside online storytelling or narrative tools, deployed in a variety of ways on sites from a dozen countries that will be featured in Paris.

How did all of this come about?

A quick review of the world of hospital ratings will remind us that consumers and professionals have long been seeking comparative guides to the quality of hospitals. Twenty years ago, US News and World Report launched its “best hospitals” special issue, and so the concept of comparative hospital ratings for consumers was born. Such “best of” lists quickly became popular, despite the lack of consensus on the choice of quality indicators. In France, so many of the major national dailies and weeklies provide “best of” lists that new ones come out throughout the year and create a certain level of confusion, since the institutions listed are never quite the same.

In the US, the HealthGrades Annual Hospital Quality and Clinical Excellence study examines patient outcomes at all 5,000 nonfederal hospitals in the United States, based on 40 million hospitalization records obtained from the Centers for Medicare and Medicaid Services. According to the most recent HealthGrades study, released on Jan 26, 2010, “hospitals rated in the top 5% in the nation by HealthGrades have a 29% lower risk-adjusted mortality rate and are improving their clinical quality at a faster pace than other hospitals.”

With the arrival of Web 2.0 technologies, the first generation of hospital comparison tools took the form of rating sites; consumers would express their opinions essentially through responses to multiple-choice questions about their degree of satisfaction. At the same time, other tools made it possible to pursue the narrative approach via the posting of the “patient story.”

According to Wikipedia, Narrative Medicine is “a practice of medicine, with narrative competence and marked with an understanding of the highly complex narrative situations among doctors, patients, colleagues, and the public.” Narrative Medicine aims not only to validate the experience of the patient, but also to encourage creativity and self-reflection in the physician. Patient narrative, of course, does not necessarily imply the contribution of anyone other than the patient!

Dr Paul Hodgkin, the founder of Patient Opinion, is an NHS physician who still practices part-time. He wanted to give patients a place to express their personal stories and to enable each story to reach the managers of the establishment it concerns. According to Dr Hodgkin,

“We now understand that the experience of being a patient, far from being peripheral to health care, is actually central to understanding the effectiveness and efficiency of services, and how they can be improved. Because the author is unconstrained by pre-set questions, they may tell their story in ways that suit them, and address whatever they see as important. Sometimes a single story will motivate staff and managers to take immediate action to put something right. And it is often the case that the patient themselves, through their experience, sees clearly how a problem could be avoided or put right. We can now make a contribution – small or large – towards co-creating, with professionals and other patients, better care, better services, and perhaps even better professionals and better policy. And as we do this, we will see the health care system itself slowly shift to becoming more transparent, more responsive.”

As the narrative approach grows in popularity, does this mean that the end is in sight for rating sites? Not really. There are several well-known rating sites in the US, and many sites include a rating feature. In France, Guide Santé, which sits firmly in the “rating” category although it still includes commentary, is the only such site to have experienced significant development to date. Drs Del Bano and Bach of Marseilles, the founders, are former directors of a clinic and public health specialists. Their past experience has kept them from falling into the many pitfalls of rating methodology and policy.

Drs Del Bano and Bach’s goal was to launch a successful hospital comparison web site, based on a mix of user-generated content and government data. They cite three problems that explain the attraction of le Guide Santé.

“The French national health system’s evaluation data on hospitals is not accessible to consumers. It does not allow the comparison of establishments on the same criterion. Up until the launch of Le Guide Santé, there was no French survey site where patients could anonymously report on hospital quality. We offer both the right to rate the establishment and to comment on it.”

Le Guide Santé is launching its V2 in the near future and has become the exclusive supplier of benchmarking information for one of France’s key digital and paper properties, “Le Figaro”.

Oh yes, and when asked the question, both sites, Patient Opinion and Guide Santé, report having published nearly all stories and comments that have been submitted.

We hope you’ll join us for the conversation at Health 2.0 Europe.

Denise Silber of Basil Strategies is Health 2.0’s European partner. Basil Strategies is based in Paris, where the Health 2.0 Europe Conference will be held on April 6–7.
