(Time: approx. 2.5-minute read)
Why do we recommend this KPI?
The Customer Effort Score (CES) is a measurement of customer experience (like the CSAT score), but instead of asking how satisfied customers are with a particular product, service or interaction, you ask them to rate the ease of their experience, e.g. "How much effort did you have to put in to solve your problem?" This way, CES measures how much effort a customer personally has to put forth to get the job done, i.e. to get an issue resolved, a question answered, or a product purchased/returned. The assumption is that a low-effort experience is a good experience.
CES is a strong predictor of future purchase behavior and highly actionable. The metric correlates with business outcomes, since customer effort is an indicator of loyalty and customer churn is a key business driver. According to research from CEB presented in the Harvard Business Review article "Stop Trying to Delight Your Customers", the easiest way to increase customer loyalty is not to "wow" your customers, but rather to make it easier for them to get their job done. In fact, the study showed that 94% of customers reporting low effort said they would repurchase, and 88% said they would increase their spending.
How do you measure and interpret the results?
CES is best measured immediately after an interaction with a product or service that led to a purchase or subscription, or after an interaction with customer service. It is typically measured through an automated post-interaction survey on a website, or sent out via email, asking customers to rate their experience on a seven-point scale ranging from 1 "Very Difficult" to 7 "Very Easy". This way it determines how much effort was required of the customer to use the product or service, and how likely they are to continue paying for it.
Typically, the collected answers are averaged to give you an idea of how much effort a certain interaction required of customers. There is no industry standard for CES, but since it is recorded on a numeric scale, a higher score represents a better user experience. On a seven-point scale, responses of five or higher are generally considered good.
However, the average does not tell you the whole story. To find potential for improvement, you have to look at the variation in your results and see how large the group of respondents is that does not yet have the easiest possible experience. For example, almost 50% of respondents may give the highest score, "Very Easy", producing a high average, while the other 50% still do not have the best experience. That is a strong indication that there is room for improvement.
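The averaging and "room for improvement" interpretation above can be sketched in a few lines of code. This is a minimal illustration, assuming a seven-point scale where 1 = "Very Difficult" and 7 = "Very Easy"; the function name and sample responses are hypothetical:

```python
def ces_summary(responses):
    """Return the average CES score and the share of respondents
    who did not give the top ("Very Easy") rating."""
    if not responses:
        raise ValueError("no responses collected")
    average = sum(responses) / len(responses)
    # Share of respondents who did not have the easiest experience:
    not_top = sum(1 for r in responses if r < 7) / len(responses)
    return round(average, 2), round(not_top, 2)

# Example: half the respondents give the top score, which yields a
# high average, yet half still report a less-than-easiest experience.
responses = [7, 7, 7, 7, 7, 4, 5, 3, 6, 5]
avg, room_for_improvement = ces_summary(responses)
```

Here the average is 5.8 (a "good" score on the seven-point scale), but half of the respondents did not answer "Very Easy", which is exactly the pattern the paragraph above warns about.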
4 additional factors to consider when you build your survey:
- Think ‘Mobile first’: Since more than half of digital customer interactions occur on mobile, you simply must optimize your survey for mobile. Further, trim any extra content, and place the positive options at the top and the negative ones at the bottom.
- Make sure it’s automated: Surveys are best sent out automatically after an interaction or specific touch point.
- Less is more: Stick to one or two questions and avoid leading questions.
- Make it accessible: The survey is not worth much if the results are not shared with the people who can follow up, take action, and close the loop on the interaction.
In addition, don’t forget to: