Until recently, there was little patient-generated information about doctors, practices or hospitals to help inform patient decisions. But that is rapidly changing, and the results may be every bit as transformative as they have been in traditionally consumer-centric industries like hospitality. Medicine has never thought much of the wisdom of crowds, but the times, as the song goes, they are a-changin’.
Even if one embraces the value of listening to the patient, several questions arise. Should we care about the patient’s voice because of its inherent value, or because it can tell us something important about other dimensions of quality? How best should patient judgments be collected and disseminated – through formal surveys or that electronic scrum known as the Internet? And what are some of the unanticipated or negative consequences of measuring patient satisfaction and experience? All of these questions are being debated actively, and some newly published data adds to the mix.
For the past few years, Medicare has been administering the HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems) survey to a random sample of 300-1000 patients discharged from every U.S. hospital. Results are now posted on Medicare’s Hospital Compare website. Starting in late 2012, hospital payments will be on the line, as part of Medicare’s pay-for-performance program, known as “Value-based Purchasing” (VBP).
When I lecture about VBP, I often ask audiences what weights they believe should be given to clinical quality data (process and outcome measures such as appropriate antibiotics for pneumonia or readmission rates) vs. HCAHPS survey results. Physicians invariably give answers like 80-20 or 90-10. I’ve even heard some say 100-0; namely, the patient’s voice should carry no weight. Such responses are usually accompanied by grumbling about how unfair it is to be dinged because of a hospital’s disastrous parking or inedible food.
Medicare has chosen to use a 70-30 ratio. In other words, fully 30% of a hospital’s bonus – or cut – under VBP will be determined by patient survey responses. For a large hospital like mine, our score on a single item (“rate the quality of nurse communication”) could be worth over $60,000 a year.
I’ve written before about how “patient-centeredness” has become nearly meaningless since it means so many different things to different people. But the knowledge that patient experience scores now carry real weight has provided tangible focus to efforts to promote patient-centeredness. For example, UCSF Medical Center now pays employees bonuses based on patient satisfaction scores – and these scores have improved markedly since this practice began. On my own medical service, the patient satisfaction committee now scours our results and has launched a program to observe our physicians as they interact with patients, then provide feedback. There is even a communication checklist that offers something of a script, with items such as Knock/Ask (“Hi, is it OK if I come in?”), Concerns (“I’d like to review a few things with you, but first, is there anything you’d like to be sure we talk about today?” … “I see. So you’re concerned the headache may be due to a tumor?”) and Check Understanding (“To be sure I’ve been clear, can you just repeat back to me your understanding of the plan?”).
While the idea of scripting can seem inauthentic – such as when the bank teller asks you if you are having a great day or have plans for the weekend – it can be extremely useful. I now use a script of my own when introducing myself to a hospitalized patient. Since many patients and families still don’t quite understand what a hospitalist is or does, I often say something like, “You may get a survey after you leave, asking, ‘Did you have any sense that someone was in charge of your care in the hospital?’ I hope you’ll answer yes, because that is precisely my job… to be your orchestra conductor while you’re here.” Patients seem to get it.
While today’s HCAHPS survey focuses on the hospital, another survey – currently being pilot tested in two states – will roll out soon, asking about individual doctors. Medicare plans to publish the results of these MD surveys in the next few years. Don’t be surprised if a physician-level VBP plan, incorporating these data, follows in short order.
While many traditionalists object to the very notion of using patient experience ratings as part of transparency and payment initiatives, these objections were muted when the data were gathered via a well-validated survey, professionally constructed and administered. But that orderly world is being rapidly supplanted by one that centers on web-based ratings, in all their über-democratic, Yelpy glory. Predictably, the squawking is getting louder.
Enough Internet physician ratings sites have popped up to fill a large bubble, perhaps of the dot-com variety. For example, RateMDs, started by the same guy who started the popular RateMyProfessors.com site (where profs are rated by their quality, clarity, helpfulness, and “hotness”), now hosts reviews on more than 1 million docs in the US and Canada. Other sites in this “space” include Vimo, RevolutionHealth, Vitals.com, HealthGrades, and Angie’s List.
Attempting to bring order to this world, in 2008 the UK’s National Health Service launched its own patient ratings portal. Called “NHS Choices,” it allows patients to rate practices and hospitals, but not individual doctors. Comments are screened (“inflammatory” comments are blocked) and practices are encouraged to post responses. A 2010 JAMA article by Lagu and Lindenauer praised NHS Choices and encouraged Medicare to begin experimenting with a similar site.
It would be an understatement to say that the physician community has not been enthusiastic about on-line reviews and ratings. One concern relates to the possibility that the most disgruntled patients would be the ones likeliest to complete surveys or enter comments. This concern is exacerbated by the relatively small number of responses per physician on many of the websites.
While these concerns are understandable, emerging data suggests that most reviews, of both practices and doctors, are positive. For example, a recent study of 386,000 physician ratings on RateMDs found that nearly 50% were a perfect 5 out of 5, and only 12% were below 2 out of 5. Similarly, two-thirds of patients posting on NHS Choices said that they would recommend the practice or hospital to a friend.
A second objection is that ratings would be frivolous, capturing the “hotel” aspects of hospital care but not the substance. In fact, a recent New York Times article, written by an oncology nurse, argued that “we hurt people because it’s the only way we know to make them better… which is why the growing focus on measuring ‘patient satisfaction’ as a way to judge the quality of a hospital’s care is worrisomely off the mark.” I found this argument specious. Yes, there are times we do have to hurt people to help them (invasive procedures or surgery, for example), but that’s true for all hospitals and physicians. Some are undoubtedly better than others at helping patients prepare for the discomfort, minimizing it, and empathizing with and supporting the patient who experiences it. I’d like to know who they are.
In any case, the argument that patients focus on thread counts and arugula is increasingly being poked full of holes. In the recent study of RateMDs, physicians who were board certified, went to highly rated medical schools, and had never been sued for malpractice received better ratings. While disentangling cause and effect is challenging, these results support the notion that patient ratings are capturing other important elements of care.
An even more persuasive study was recently published by a group of researchers at Imperial College London led by Dr. Felix Greaves (I had the privilege of working with this group during my recent sabbatical, and am a co-author). We examined more than 10,000 patient ratings of hospitals (the average hospital received 62 ratings) on NHS Choices. We found that positive ratings correlated with lower overall mortality and readmission rates. Moreover, hospitals rated by patients as cleaner had 42% lower MRSA rates than those with poorer ratings. Clearly, patients are clued into some central truths about clinical aspects of their care.
Another objection is that ratings might be submitted by individuals – who may not even be patients – with axes to grind. After finding one horrid rating of himself, and few other ratings, on DrScore.com, Dr. Kent Sepkowitz, a Memorial Sloan Kettering ID specialist, gleefully confessed to entering his own ratings on the site. Writing in Slate, he says that after reading the nasty review:
… I did what any normal American male under e-assault would do. I stuffed the ballot box. I pretended to be a patient of mine… and talked up my friendly attitude and thoroughness, gushed over the oodles of time I spent examining me, and declared my overall treatment a success. Not to limit the kudos, I also gave high marks to parking availability by my office. [A quick editorial aside: We’re talking about parking on Manhattan’s Upper East Side, so now we’re getting into some really serious fiction.] …. With my unceasing selfishness campaign, I was able to hike my scores to levels that would make my mother and even my mother-in-law proud.
Concerns about fraudulent entries can cut in both directions. I’m reminded of the mini-scandal that hit Amazon.com in 2004, when the Times reported that a glitch in Amazon’s Canadian site briefly revealed the true identity behind some anonymous book reviews. Turns out some, like Sepkowitz, had praised their own work. Others – including several prominent authors – had trashed the books of competitors.
While these concerns are real, they are similarly real for reviews of hotels and restaurants. My sense is that – with large enough numbers – the truth generally wins out. And there are ways to mitigate this potential hazard. Amazon, for its part, now allows readers to vote on reviews (“Was this review helpful to you?”) and to “report abuse.” The solution to problems with voting, it seems, is more voting.
Personally, my greatest concern relates to the potential tension between patient ratings and appropriate care. There will be times when giving a patient with a viral URI an unnecessary antibiotic is the surest path to a happy patient and a good review. One hopes that future quality measures will include not only patient experiences but also other measures of appropriateness and evidence-based care designed to counteract this perverse incentive.
The Bottom Line On Patient Ratings
Several years ago, I needed to see a dermatologist for a skin lesion. I was referred to a doctor in a downtown San Francisco medical office building. I decided to not play the “I’m a doctor” card, but rather to simply take in the experience. After entering his shabby office, I was ignored by the receptionist for about 10 minutes before she brusquely shoved a clipboard in my direction and told me to fill out a form. I was ushered in to see the doctor about 30 minutes after my scheduled appointment. The doctor, an elderly man in a white coat, was clearly in a rush. He barely looked at me while taking my history with staccato, closed-ended questions, leaving no room for nuance or embellishment. He then spent about 45 seconds looking at the lesion in question, looking up to offer a monotonic (and indecipherable, to a lay person) diagnosis and some vague recommendations. He scribbled a prescription, offering no explanation as to its purpose or its risks. Before I could say a word, and after a visit that couldn’t have lasted more than 5 minutes, he turned for the door and was gone. I was pissed.
At the time, there were no surveys to complete and no websites on which to rate his care. I would have drawn great satisfaction from writing a damning review, and suspect that a few of them might have led to a change in his behavior. At least, I hope so.
As we work our way through this new world of patient surveys and ratings, there will be some hazards to overcome and some unfair results to contend with. We’ll need to do all we can to anticipate these problems and mitigate them, and to try to bring some order to a chaotic marketplace. These seem like surmountable issues, and I am confident that capturing the patient’s voice – and giving it some real weight – will lead to better care.
Robert Wachter, MD, professor of medicine at UCSF, is widely regarded as a leading figure in the patient safety and quality movements. He edits the federal government’s two leading safety websites, and the second edition of his book, “Understanding Patient Safety,” was recently published by McGraw-Hill. In addition, he coined the term “hospitalist” in an influential 1996 essay in The New England Journal of Medicine and is chair-elect of the American Board of Internal Medicine. His posts appear semi-regularly on THCB and on his own blog, Wachter’s World.