
Satisfaction Scores: How I Almost Made a Hotel Manager Lose His Job

Recently, I was asked to fill out a questionnaire during check-out at a hotel in India. I was very pleased with my stay, so I agreed to provide feedback. It is worth pointing out that had I been only mildly satisfied, I would not have agreed. Had I been disappointed with my stay, I would have filled out the form even more enthusiastically.

When I offer feedback I am in one of two extreme emotional states: I either love the service or, more commonly, loathe it. There is no time to talk about the average. And I have given up on Comcast.

The form had about twenty questions asking how satisfied I was with various components of their hospitality. I had to choose a number between one and ten, with higher numbers indicating greater satisfaction. I decided to set a record for the fastest completion of the questionnaire. I quickly chose ‘9’ and ‘10’. To appear objective, I gave a ‘7’ to one service, chosen at random. Seven meant “above average”. Nine and ten meant “outstanding” – that is, satisfaction could not be measurably higher.

In the section that asked “how can we do better?” I wrote “put some more trees.” I didn’t really think the hotel premises needed more trees, but I was on a roll of objectivity. I had to say something.

I was about to leave the hotel when the manager stopped me. He looked worried. “Sir, what can we do to make the fitness center better?”

“Nothing,” I replied, puzzled.

“The fitness center and swimming pool are the pride of our hotel. We have the latest ellipticals and treadmills,” he protested. I nodded in agreement, still confused.

“Sir, you gave the fitness center 7 out of 10. If we get less than 8 out of 10, I look bad,” he said. I glanced at the form. Just above the question about the fitness center was one about in-room dining.

“Sorry, I picked the wrong one. You are right, the fitness center is excellent. I meant to give in-room dining 7 out of 10. The masala chai is nice but has too much masala.”

I was lying. I never stepped inside the fitness center. The masala chai was excellent, which is why I had six cups a day. But the manager seemed relieved that it wasn’t the fitness center.

“Sir, there is no space for more trees in the hotel.”

“But you must plant a couple of trees in the drive, or somewhere.” I wasn’t relenting on that one. My credibility depended on something about the hotel remaining pseudo-unsatisfactory. Since my flippant remark about the paucity of trees did not have a score – that is, it was qualitative rather than quantitative – the manager let it go. But he did offer a free ride to the airport, and masala chai with less masala.

I don’t know who designs multi-attribute surveys: whether they are over-intellectualizers, whether they have any friends, or whether they are the type who fill out a 30-question feedback form about an espresso from Starbucks. These surveys have become ubiquitous. A variant, the Press Ganey survey, helps determine hospital quality ratings and provider bonuses.

The multiple dimensions have an aura of scientific precision. But these surveys are, to put it gently, useless. They have as much granularity as the words in our contracted vocabulary. Awesome is the new mean. Average means below average. Super nice means nice. Thus, 7 out of 10 is bad because 9, outstanding, is the new normal – not because people are uniformly outstanding, but because we can’t be bothered to discern. Yet some will be persuaded that a hotel which scores 7.27 is meaningfully better than one which scores 7.22.
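For the statistically inclined, here is a minimal sketch in Python of why that last comparison is dubious. The hotel names, sample sizes and score distributions below are invented for illustration; the point is only that when a couple of hundred ratings pile up at the top of the scale, the uncertainty on each hotel’s mean is larger than the tiny gap being celebrated.

```python
import statistics

# Hypothetical illustration (numbers invented): two hotels, each rated
# 1-10 by 200 guests, with scores clustered at 9s and 10s and the odd 7,
# mimicking the "outstanding is the new normal" pattern described above.
hotel_a = [10] * 90 + [9] * 80 + [7] * 30
hotel_b = [10] * 85 + [9] * 82 + [7] * 33

for name, scores in [("A", hotel_a), ("B", hotel_b)]:
    mean = statistics.mean(scores)
    # Standard error of the mean: sample standard deviation / sqrt(n)
    sem = statistics.stdev(scores) / len(scores) ** 0.5
    print(f"Hotel {name}: mean {mean:.2f} +/- {sem:.2f}")

# The gap between the two means (about 0.06 here) is smaller than the
# +/- 0.07 uncertainty on either mean, so the ranking is noise.
```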

I was once filling out such a form for a resident applying for a job, who had listed me as a referee. There were over forty questions about various components that measured the resident’s critical thinking, work ethic and professionalism. It looked like a psychometric test that people joining the Central Intelligence Agency take.

I marked the attributes with fours and fives, the highest scores. Of course, the resident’s strengths were not uniform in all respects. But he was an affable resident, had an impeccable work ethic and was a competent radiologist. That was all that mattered, so I paid scant attention to “he actively seeks new knowledge and applies evidence-based medicine.” Do these questions really add more signal than “he is a jolly good fellow and so say all of us”?

Though I find answering questions about multiple dimensions a nuisance, that does not mean such attributes do not exist. But it is a leap of faith on the part of those writing these questions to think that responders will cognitively burden themselves by thinking about each attribute separately.

Despite our grand attempts at objectifying subjectivity, we remain subjectively objective. Not only is the pursuit of such objectivity an utter waste of time, it is faux precision. To think that all sorts of statistics are run on these scores, as if they were continuous variables such as temperature and length, reminds me of the quip attributed to Mark Twain: lies, damned lies and statistics.

In the meantime, I would like to apologize to the guests of that hotel if the masala chai tastes odd or if they have to fight through the bushes to get inside. Just remember to visit their gym.

Saurabh Jha is a radiologist who studies the value of imaging and believes in arguing for pedagogic benefits. His opinions do not reflect the opinions of his employer. In fact, his opinions do not reflect his own opinion. Follow him on Twitter @RogueRad

10 replies

  1. I’m one of the few who take the time and think through each response, but then again I studied sociology at university, and a big chunk of that is qualitative and quantitative analysis. We even had to create our own surveys, learn how best to formulate questions, analyze results and even decide whether there was any error in the findings. Unfortunately for today’s society (Yelp can attest), the public is only ever going to submit a survey if they are extremely dissatisfied (Yelpers are often a bit idiotic in their reasoning) or extremely satisfied and have come to “love” that business and its employees.

    When it comes to patient satisfaction surveys, it’s going to work the same way. The difference between those and Yelp reviews, however, is that the businesses on Yelp aren’t being punished for poor reviews (unless they get shut down because, yes, you did have rats in the freezer). I’m not sure this is the best way to measure quality. We are going to be more inclined to give honest answers to the nurses who see us every day, not through an anonymous survey that gets emailed or handed to us at discharge.

  2. “Trying to turn unstructured data (narrative) into structured data is equally, if not more dangerous.”

    Yes, you nailed it.

    That is exactly the problem.

  3. This not only goes for patient satisfaction scores, but for patient history questionnaires that are set up in this same way. Trying to turn unstructured data (narrative) into structured data is equally, if not more dangerous. What we want to hear from our patients are narratives of how they feel about our care and how we could do better. In the same way, we need narratives of their medical history, not boxes checked. If the goal is “patient engagement” (the blockbuster drug of the century, I am told), then having an interactive narrative is what constitutes true engagement. Engaging customers is more than just giving out a survey, and engaging patients is more than filling out a template.

    Sigh. Health care will end up in a sarcophagus of structured data. We need health care that is more like soylent green: good health care is people.

  4. Yes, but how can we divvy up bonuses based on a real voice?

    John, we need to measure, measure, measure

  5. Patient satisfaction scores are at best a clumsy tool, useful for rating wait times at the pharmacy counter or the walk-in clinic in your local superstore. The idea that they would be used to rate surgical oncologists and trauma surgeons should tell you something about what is going on.

    Note this does not imply that patients should not have a voice. Rather, they should be given a real voice, not a card with check boxes on it and a token delegate at every health care conference.