I have written a lot about the decline of self-service success, based on SSPA benchmark data, and one of the most common inquiry questions I receive is how to track this metric.

[Chart: Self-Service Success, 2003-2009]
The truth is, accurate tracking of self-service success isn’t easy, and it doesn’t help when knowledgebase vendors make ridiculous claims like “80% of consumers who access one of our self-service sites find what they need.” How do they figure that? I’ll tell you: anyone who views content and then leaves the website is counted as a successful visit, even if they left because they were disgusted by the poor quality content. If this is how you calculate self-service effectiveness, your numbers are meaningless.
Here is how I recommend calculating self-service success, and how SSPA members calculate the metric to enter in our benchmark. There are three approaches, each more detailed than the last:
- The easiest and most direct way is to give customers a prompt on every knowledgebase article that says, “This article solved my problem. Yes/No.” The problem with this approach is response rate: my research tells me that the average response rate for these prompts is under 5%, and some members tell me their response rate for article prompts is less than 1%. If you can’t capture enough responses for a good sample size, go to step 2. (The underlying math is the simple ratio shown in the first sketch after this list.)
- The next approach is to send a survey to every customer who accesses your self-service site, asking whether they were successful. The response rate for these surveys can average as high as 30% (according to members), though the current benchmark average response rate for self-service experience surveys is only 7%. The success calculation is the same ratio as in step 1. If you still need more detail, go to step 3.
- The most detailed approach is to use cross-application reporting to see how many customers who accessed self-service had an assisted support interaction afterward. In other words, did any of the self-service customers call or email you, or create a support incident online, within 24 hours of the self-service attempt? If so, those attempts can be counted as unsuccessful. (See the second sketch after this list.)
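To make the first two approaches concrete, here is a minimal sketch of the math behind them, in Python. The function name and the 5% minimum response rate are my own illustrative choices, not an SSPA standard:

```python
def self_service_success_rate(yes_votes, no_votes, total_visits,
                              min_response_rate=0.05):
    """Success rate from article prompts or surveys (steps 1 and 2).

    Returns None if the response rate is too low to trust the sample,
    per the caveat above about sub-5% prompt response rates.
    """
    responses = yes_votes + no_votes
    if total_visits == 0 or responses == 0:
        return None
    response_rate = responses / total_visits
    if response_rate < min_response_rate:
        return None  # too few responses; fall back to the next approach
    return yes_votes / responses

# Example: 400 yes and 100 no votes across 8,000 article views
# response rate = 500 / 8000 = 6.25%, success rate = 400 / 500 = 80%
print(self_service_success_rate(400, 100, 8000))  # 0.8
```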
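And here is a rough sketch of the cross-application check in step 3, assuming you can export self-service visits and assisted support contacts as timestamped records keyed by customer ID. The data shapes are hypothetical, and the lookback window is a parameter, so the 48-hour variant mentioned below is just window_hours=48:

```python
from datetime import datetime, timedelta

def success_rate_from_logs(self_service_visits, support_contacts,
                           window_hours=24):
    """Step 3: a visit counts as unsuccessful if the same customer opened
    an assisted support interaction (call, email, or web incident)
    within window_hours after the self-service attempt.

    Both arguments are lists of (customer_id, datetime) tuples.
    """
    window = timedelta(hours=window_hours)

    # Index support contacts by customer for quick lookup.
    contacts_by_customer = {}
    for customer_id, ts in support_contacts:
        contacts_by_customer.setdefault(customer_id, []).append(ts)

    successes = 0
    for customer_id, visit_ts in self_service_visits:
        followed_up = any(
            visit_ts <= contact_ts <= visit_ts + window
            for contact_ts in contacts_by_customer.get(customer_id, [])
        )
        if not followed_up:
            successes += 1
    return successes / len(self_service_visits) if self_service_visits else None

# Example: one customer self-serves, then calls three hours later,
# so the visit is counted as unsuccessful.
visits = [("acme-001", datetime(2010, 5, 1, 9, 0))]
contacts = [("acme-001", datetime(2010, 5, 1, 12, 0))]
print(success_rate_from_logs(visits, contacts))  # 0.0
```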
There are obviously challenges. For consumer companies, which don’t require a logon to access self-service, steps 2 and 3 may not be possible. Other companies think the 24-hour rule in step 3 isn’t long enough, and look at the following 48 hours instead. In other words, your mileage may vary.
How do you calculate self-service success? If there are other accurate approaches I’d love to know! As we speak, Michael Israel, our field service research expert, is working on the overhaul of the SSPA benchmark questionnaire, including beefing up many of the definitions of metrics and how to calculate them. This would be a great time to identify any emerging best practices for calculating self-service effectiveness. If you have any thoughts, please add a comment or drop me an email. And as always, thanks for reading!