Understanding Cultural Differences in Customer Satisfaction Ratings

Sorry I have been neglectful of my blog for the last week; I have been heads-down working on a report detailing best practices for attracting, developing and retaining talent in India.  The SSPA has a committee of members who have been working on this project for the last year, surveying and interviewing hiring managers in Indian tech support organizations.  The amount of data they uncovered, and the in-depth findings and recommendations from the members, is amazing, and my job was to merge all the information into a research report.  I completed a draft of the report…all 46 pages…today, so I am coming up for air.

The report will be released at our Spring Best Practices Conference, and the head of the committee, Microsoft’s Dheeraj Prasad, will lead a session highlighting the findings. I hope you will attend his session; any company with either owned or outsourced support resources in India should leverage these findings.

While I was immersed in writing the report, I came across a fascinating article I wanted to share with everyone.  Over the years I have received inquiries from support managers who were perplexed about how the same service organization could receive such different ratings in post-interaction surveys from different regions of the world.  I have waxed poetic on this topic many times…but never in writing, for fear of being politically incorrect.

Last week I had a briefing with CustomerSat, an SSPA partner providing customer satisfaction analytics for the support industry, and we talked about how people in different cultures rate the same experience differently. Marya Darabont, Research Consultant for CustomerSat Professional Services in Europe, wrote an excellent article on this. If you service customers outside of the US, I encourage you to read it. Some highlights discussing propensity for selecting certain scores on a numbered scale:

  • Anglo (USA, Canada, Australia), Nordic and Western European cultures tend…to use all points on the scale, as they are less concerned with any social consequences of strong opinions.
  • In countries such as China, Hong Kong and Japan…a middle response style is more common. Another observation is that such cultures tend to think dialectically in that they see both good and bad attributes in their interactions, i.e., they don’t see everything in black and white, hence the use of the middle of the scale.
  • Latin cultures are…high in uncertainty avoidance, which emphasizes rules and intolerance for ambiguity. Research suggests that cultures that are high in uncertainty avoidance prefer the endpoints of a scale because they are more definite and clear.

No, I don’t think this gives you an excuse for poor sat scores in other geographic regions, but it does help explain ongoing trends toward certain groups scoring similarly.  I hope you will pass this article along to the owner of your customer sat program.
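
If you do want to compare regions despite these response-style differences, one common correction (my own illustration, not something from the CustomerSat article) is to standardize scores within each region before comparing, so you look at relative standing rather than raw scale usage. A minimal Python sketch with made-up ratings:

```python
from statistics import mean, stdev

def standardize_by_region(scores_by_region):
    """Convert raw 1-5 ratings to z-scores within each region,
    so comparisons reflect relative standing rather than
    region-specific response styles."""
    standardized = {}
    for region, scores in scores_by_region.items():
        mu, sigma = mean(scores), stdev(scores)
        standardized[region] = [(s - mu) / sigma for s in scores]
    return standardized

# Hypothetical ratings: the US sample uses the full scale,
# the Japan sample clusters toward the middle of the scale.
ratings = {
    "US": [5, 5, 4, 1, 5, 2],
    "Japan": [3, 4, 3, 3, 4, 3],
}
z = standardize_by_region(ratings)
```

After standardizing, each region's scores have mean 0 and standard deviation 1, so a "4" from a middle-response culture and a "5" from a full-scale culture can land at a similar relative position.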

Ok everyone, back to work!  Thanks for taking a moment to check my blog, and if you have any input, please add a comment or drop me an email.

Explore posts in the same categories: Best Practices, Technology


11 Comments on “Understanding Cultural Differences in Customer Satisfaction Ratings”

  1. Haim Toeg Says:

    John,

    This is an interesting article, but I think there is more to interpreting international satisfaction results. Ratings given by Americans are usually higher than almost anybody else’s for the same level of service, including the countries you conveniently bundled as ‘Anglo and Western Europe’. There are two reasons for that:
    1. American response rates tend to be higher, and Americans are more likely to respond when the service met expectations, whereas many Europeans would only respond if the service was poor
    2. Americans would rate service within expectations as ‘excellent’ whereas Europeans would rate it as ‘fine’

    I’d be interested in hearing your perspectives on that.

    Haim

  2. jragsdale Says:

    Hi Haim,
    I’m surprised to hear Americans have a high response rate. We were so late to the table with ‘Do Not Call’ and spam legislation (at least compared to Europe) that I figured we had one of the worst response rates by now!

    I don’t have much real data on this issue so I defer to the experts. At some point I should compare the average sat scores in the SSPA benchmark by geography and see if there are obvious differences.

    A related issue I’ve always been curious about: I am sure there are differences in how different US regions approach surveys. Having lived all over the US, I’ve seen such varying attitudes about confrontation and providing negative feedback in the South vs. the Northeast vs. the Midwest vs. us laid-back Californians.

    –J


  3. Esteban Says:

    John,

    A couple of comments about response rates. Haim makes an interesting point that I have noticed in cross-cultural responses. Americans, almost exclusively, have no hesitation in bestowing an ‘excellent’ in response to a customer satisfaction question, even when the experience was not excellent. Most of the reasoning behind this is psychological: a need to feel accepted or liked. That need, in fact, underlies many of the responses you will get to any survey.

    One thing I discovered a long time ago doing feedback management is that psychology plays a far more important role than most people assume. It is all about how you write the questions, phrase the potential answers, and what words you use (on the organization’s part), and what your relationship is with the inquiring party (on the respondent’s part). The only fool-proof method for collecting true feedback (beyond writing neutral questions) is to make the survey completely anonymous. In the same way that surveyors offer rewards or the potential to win money or prizes to get higher ratings (if I say you are great, I am more likely to win the prize, the flawed reasoning goes), truly anonymous surveys will open up the bottom 60% of the scale for responses.

    Of course, I could just be wrong, but many years working with feedback management and surveys taught me that you can just as easily prepare a survey to get the results you want as you can collect good feedback if you want to. Cultural differences notwithstanding (and, yes, there are real differences across cultures), you have to incorporate that into your survey planning and execution.

    This is indeed a fascinating topic…

  4. Haim Toeg Says:

    Esteban, John,

    I think you are spot-on in describing the impact that the way a survey is written has on the results. Political polling is the most obvious example of the application of this art (“80% of Americans support —— “). I think the difference between US responses and European responses lies in the assignment of excellence relative to the customer’s expectations, as well as an aversion to anything that may have negative connotations (how did the word “issue” ever become a negative term?). Whereas in the US, if you meet expectations the service is usually considered ‘excellent’, Europeans would usually say it was ‘fine’: you did what you were supposed to do. I never did analyze regional responses in the US, so I can’t comment on that specific point.

    A good illustration is the response to the question ‘how are you today?’ – if an American tells you they are ‘OK’, your next question would be ‘why, what’s going on?’, whereas elsewhere OK means that everything is fine; it is OK. Many years ago, when I had just moved to the US, I had a colleague who would always ask for clarification: “so, is this an American OK or an Israeli OK?”

    Haim

  5. jragsdale Says:

    I just checked our benchmark database for some related metrics. I see that when asked to rate their overall satisfaction with support, North America averages 55.4% of customers selecting 5 on a 5 point scale, compared to 43.0% in EMEA. (Unfortunately I don’t have a large enough sample size on LA or APAC to give a meaningful number for those regions.) Is this because NA support is pleasing customers more, or because NA customers are more likely to select 5 out of 5? Your comments are making me realize the answer likely is: a little bit of both.
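
For readers curious how those benchmark figures are built: they are "top-box" percentages, i.e., the share of respondents choosing the highest point on the scale. A quick sketch of the calculation (the samples here are hypothetical, not SSPA benchmark data):

```python
def top_box_pct(scores, top=5):
    """Percentage of respondents selecting the top rating on a 1-5 scale."""
    return 100.0 * sum(1 for s in scores if s == top) / len(scores)

# Hypothetical samples echoing the NA vs. EMEA gap described above.
na_scores = [5, 5, 5, 4, 5, 3, 5, 4, 5, 2]     # 6 of 10 select 5
emea_scores = [4, 5, 3, 4, 5, 4, 3, 5, 4, 5]   # 4 of 10 select 5

print(top_box_pct(na_scores))    # 60.0
print(top_box_pct(emea_scores))  # 40.0
```

Note that a top-box metric is especially sensitive to the response-style differences discussed here, since it only counts the extreme end of the scale.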

    My takeaway from this ‘conversation’ is:
    * Global averages for customer satisfaction aren’t very meaningful due to geographic differences in the way customers approach survey responses.

    * Satisfaction benchmarks are a useful tool to track how satisfaction rises and falls when you make program/staffing/tool changes, but sat scores alone don’t tell the whole story.

    * Our industry’s obsession with sat scores (which drive bonus payouts for everyone from front-line agents up to CEOs) is only one tool in the customer satisfaction toolbox. Companies need to be sure they are having meaningful dialogues with customers to understand what they like, what they don’t, and what their requirements are; if you assume you are collecting everything you need to know with surveys on a 1-5 scale, you are fooling yourself.

    This is all very enlightening to me, as I hear of so many “Voice of the Customer” initiatives involving nothing more than surveys.

    Esteban and Haim, do you have any other advice you would offer companies using surveys as the primary means to assess satisfaction?

    Thanks so much for joining this discussion!
    –John

  6. Haim Toeg Says:

    John,

    I think you make some very valid points, but I am not sure I can answer your question about the difference in service level between the US and Europe with the same confidence you have. Also, I differ on the usefulness of a global number. On its own it may not seem very useful, but it has the advantage of being the single number that represents the results of the collective effort made by the support organization, and it is a good internal and external communication tool. You are right, though, that operational decisions would most likely be based on numbers representing segments of the entire operation (region / team / product line).

    Customer sat metrics are an excellent measurement tool; however, managers must ensure that they understand their weaknesses as well as their strengths, and that transactional customer surveys are not the only metric they and their teams are measured on. Some points from my experience:

    * The best goal is continuous, steady improvement, so that every measurement period the organization does a little better than before. Rapid improvements are not always sustainable, create expectations of more rapid improvement to follow, and may cause a counter-reaction in subsequent surveys if the rapid improvement does not continue. This, in turn, will lead to questioning the survey and the quality of information it provides.

    * Use benchmarking smartly – teams, regions and the global organization need to be tracked against their own past performance and given improvement targets based on that; comparing regions, product lines or teams is futile and will inevitably lead to poor decisions. Generally, I am not a big fan of compensating individual technicians on their own customer sat performance, as that may discourage teamwork and drive undesirable behavior.

    * Surveys track the past; any improvement made to processes now will only be reflected somewhere down the line, sometimes as far off as six months or more. Declines, however, are reflected much faster. Therefore:
    - Ensure you understand very well which process or functionality needs to improve; professional statistical analysis of the survey results will provide very useful information. Remember that improvements take time to be reflected in surveys, and hence the organization needs to trust its initiatives and continue refining them until they bear the desired results
    - Continuous improvement requires that new initiatives are introduced to maintain momentum, so once a certain behavior or activity becomes embedded in the processes, managers will need to address the next challenge
    - Measure improvement on metrics representing your improvement initiatives as well as customer sat. Operational metrics provide the immediate feedback for tuning the processes and finding their strengths and weaknesses, whereas customer sat measurement provides the opportunity to enjoy the fruits of the hard work done
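
The advice to benchmark each team against its own past performance, rather than against other teams, can be sketched in a few lines (team names and numbers are invented for illustration):

```python
def improvement_vs_self(history):
    """Delta between each team's latest sat score and its own
    previous period -- benchmarking against past self, not peers."""
    return {team: scores[-1] - scores[-2] for team, scores in history.items()}

# Hypothetical quarterly average sat scores per region.
history = {
    "EMEA": [4.00, 4.10, 4.15],
    "NA":   [4.50, 4.50, 4.55],
}
deltas = improvement_vs_self(history)
```

Both hypothetical regions show the same small, steady gain against their own baselines, even though their absolute scores differ, which is the point: improvement targets come from each team's own history.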

    Transactional surveys are fine, but companies need to implement relationship surveys as well, and have conversations with customers for qualitative feedback on what they are doing right and what they are not, as well as competitive information and initiatives implemented by other vendors.

    Despite the lengthy response I am not sure if I fully answered your questions — it’s a fascinating discussion with many different perspectives and I am happy to continue it.

    Haim


  7. Esteban Says:

    Without turning this into a debate over the accuracy of a survey, let me chime in to complement the excellent comments made by Haim.

    The most difficult part of doing a multi-cultural survey is making sure the answers match each other. There is a marked difference in the meaning of words across languages, and even across regions of the same country (as is the case in the US). A question that, for example, asks about customer satisfaction will get different results in Atlanta, GA, in Winston-Salem, NC, in Seattle, WA, and in Los Angeles, CA, regardless of the level of service. There was a study in which four similar people were asked the same question about an experience they had shared. They had all gone through the same experience at the same time, and they had similar backgrounds. Yet their answers were different – not dramatically, but enough to sway a much larger sample base into a biased result.

    Why? Two reasons: the meaning of the words to different cultural sections of the country, and the way they feel about satisfaction. The way you put words together in a survey can make a significant difference (political polls are the best example – how can you have 40% support and 60% support for the same issue, at the same time?) in how a survey turns out. Asking “Do you THINK…” vs. “Do you BELIEVE…” vs. “Do you FEEL…” will completely change the results of a survey, based on what you are asking customers to do.

    Now, take this to a larger context. Take the question you phrased in English and translate it into several different languages. Chances are extremely high that you are going to get really skewed results. You won’t be able to match the way people understand a word from one language to another. The word FEEL in American English has a different meaning than the word SIENTE in Spanish… yet that is the way it would be translated. Even beyond that, the reaction of the respondent to each word is going to be different. Using the same question across different cultures will not yield comparable results.

    So, in closing, what to do? Change your survey questions. I have been advocating for a long time that surveys should not be about SATISFACTION, FEELINGS, or THOUGHTS, but rather about met expectations. You are more likely to get a good idea of what your customers feel if you ask whether they got what they needed, in the time they expected, than if you ask whether they are satisfied. One deals with facts, the other with FEELINGS.

    Sorry for the long rant, signing off now.

  8. Paul Cline Says:

    When it comes to customer service, all that matters from a cultural perspective is that the customer knows you CARE about them. Be friendly, be humble, come to serve… and this will translate across any barrier.

    – Paul Cline – Advanced Training Seminars

  9. Haim Toeg Says:

    Paul,

    You are absolutely right that these qualities are necessary for the delivery of good service, but even those are expressed differently across cultures. However, every business needs a feedback loop to assess the impact their initiatives are making, and surveys provide ongoing measurement and benchmarking of how the organization’s activities are perceived by its customers.

    Esteban,

    First, thanks for the compliments.

    I mostly agree with what you say, and definitely, when building a survey it is critical to get the translation done as well as you possibly can, including multiple reviews by people native to the culture in which the survey is performed (you may end up having several versions of Spanish, two of French, etc.). However, what I would advise against is ditching your existing survey solely to implement a better translation. The gains are minimal, the ability to benchmark is lost for some time, and still, in my opinion, the organization will not be able to, and should not, measure regions against each other.

  10. John Says:

    I thought that was surprising as well. It indicates some readers feel a strong affinity and trust for their online tribes of fellow readers.


  11. Thank you for the good writeup. It was in fact an amusing
    account. Looking forward to more agreeable posts from you! By the way,
    how can we communicate?

