Quality Monitoring Best Practices: How often should agents be monitored?
As those of you who know me can attest, I have an opinion on just about anything. And after spending 5.5 years as an analyst for Giga and Forrester, where I was encouraged to have strong opinions on everything, I’m usually happy to add my 2 cents on any topic or question. But a man has to know his limits, and when pressed for information in an area where I happen to know an expert, I’m quick to reach out for help.
Having been around for the birth pangs of CRM, I’ve amassed a pretty good network of experts over the years. So, when I was asked last week for some ‘best practice’ advice on quality monitoring (QM), I turned to the man I see as the ‘best practice’ guru for all things QM, Oscar Alban. Oscar is Principal Global Market Consultant for Witness Systems (recently acquired by Verint), which means he provides best practice advice to Witness customers, including ongoing ‘tune-up’ calls to make sure customers are getting the most out of the software to achieve their quality goals. Like me, Oscar had a long career managing call center operations before stepping over to the vendor side of the house.
Here is the question I received from a member: “What are the best practices for how frequently to monitor agent interactions?” And here is Oscar’s reply:
Divide the agents into one of three groups:
- New. Within 30 days of completing initial training
- Vets. Anyone more than 30 days out of training and in good standing
- Problems. Any agent who is performing below minimum performance standards and is now on a performance warning. Track the agent for 30 days, or whatever the ‘probationary’ period may be. At the end of the period, they either move back up to the Vet group or may be counseled out of the organization.
Number of monitorings/evaluations:
- New. 10 per agent per month
- Vets. 6 per agent per month
- Problems. 10 per agent per month for the duration of the probationary period
This is a recommended starting point. The key here is not only to record a call and evaluate it but to spend time coaching the agent! This is the part of the process that is most often missed. If you need to decrease the number of calls evaluated in order to conduct the critical coaching phase, then do it. The goal here isn’t to see how many evaluation forms can be filled out but to help agents get better at what they do. It is quality over quantity.
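For teams that want to bake Oscar's starting point into a workforce tool or a spreadsheet macro, the grouping and cadence rules above reduce to a simple lookup. This Python sketch is purely illustrative: the `Agent` fields, the 30-day threshold, and the per-group counts reflect the numbers in this article, not any QM product's actual API.

```python
from dataclasses import dataclass

# Recommended evaluations per agent per month, by group (from the article).
EVALS_PER_MONTH = {"New": 10, "Vet": 6, "Problem": 10}

@dataclass
class Agent:
    name: str
    days_since_training: int   # days since completing initial training
    on_warning: bool = False   # below minimum standards, on probation

def group(agent: Agent) -> str:
    """Classify an agent into one of the three monitoring groups."""
    if agent.on_warning:
        return "Problem"       # monitored heavily for the probationary period
    if agent.days_since_training <= 30:
        return "New"           # within 30 days of initial training
    return "Vet"               # over 30 days and in good standing

def monthly_evaluations(agent: Agent) -> int:
    """How many interactions to evaluate for this agent this month."""
    return EVALS_PER_MONTH[group(agent)]

if __name__ == "__main__":
    roster = [
        Agent("Ana", days_since_training=12),
        Agent("Ben", days_since_training=90),
        Agent("Cal", days_since_training=200, on_warning=True),
    ]
    for a in roster:
        print(a.name, group(a), monthly_evaluations(a))
```

As Oscar stresses, these counts are a starting target, not a quota; the numbers should flex downward whenever the coaching time isn't there to back them up.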
According to the 2006 SSPA Technology survey, 60% of members report using some kind of quality monitoring software, either home grown or packaged technology, and a sizeable percentage, 22%, planned to make an investment in QM in 2006-2007. (For more info, see the October 25, 2006 SSPA Accelerator, Deriving BI from Recorded Interactions: Trends in Quality Monitoring.) In the article, I predicted that QM will find greater adoption within technical support, and in B2B environments in general, for a number of reasons, including:
- Increased focus on the customer experience. With more organizations understanding the importance of service interactions in maximizing long term customer value, assessing the performance of technical support agents means more than reading case notes and testing technical knowledge. Monitoring recorded interactions is an easy way to identify abrasive or arrogant behavior that prevents highly skilled technicians from being successful.
- Maximizing technology investments. With the ability to record agent screen activities as well as phone calls, management can identify usability issues with applications and agents needing additional training on product features. With lack of user adoption typically among the top three reasons for CRM and eService project failures, verifying that employees are leveraging available technology is important to make certain that new support products achieve the maximum return on investment.
- Soft sales skills important for upsell/cross-sell. Consumer companies learned that inbound contact center agents, though highly skilled in responding to customer problems, did not necessarily have the skills for extending offers successfully. For companies interested in upsell/cross-sell where appropriate, helping agents transition a conversation toward a sales question, and handling the offer extension correctly, requires ongoing coaching.
- Interaction volumes too great for manual QA. Manually reviewing cases, email and chat transcripts is not a realistic way to assess quality with interaction volumes exploding. QM software allows aspects of desired agent behavior to be modeled, so exemplary and problem interactions are automatically identified and routed to a supervisor for a quality review.
What QM software is your company using? How often do you monitor your agents? If you have any questions, or perhaps some best practices to share, please post a comment!