Sometimes the best coaching in the contact center comes from folks who don’t even work there.
As experienced and proficient as your supervisors and team leads might be at providing feedback on how agents can improve, it’s your customers’ direct comments that often have the biggest impact on agent development.
This is not to suggest that agents don’t require and value feedback from their superiors as well as from experienced peers, but there’s something about hearing things straight from the customer’s mouth that causes agents to really stand up and take notice. (Just make sure they don’t stand up for too long – they might end up out of adherence.) Having your supervisor tell you that you need to work on your empathy doesn’t hit you quite the same way as reading “The agent I spoke to had all the charm of a morgue attendant” on a survey completed by a customer you recently interacted with. Where agents may occasionally feel a supervisor’s or QA specialist’s take on their performance is subjective and unfair, there’s no arguing with the “voice of the customer”.
Some contact centers have modeled their entire quality program around the “customer as coach” concept. The North Texas Tollway Authority (NTTA) is one such center. The NTTA uses a VOC/performance management tool that enables the contact center to efficiently capture agent-specific customer feedback across all contact channels. Supervisors then share this feedback with agents to identify behaviors and skills that need improvement as well as those worthy of positive recognition. The center’s agents can access the system themselves whenever they want to view direct customer feedback on recently handled contacts. As much as 50% of the feedback received by agents following a monitoring session and during annual reviews comes directly from customers.
The NTTA’s agents wouldn’t have it any other way.
“Agents love the initiative,” says John Bannerman, Assistant Director of the NTTA’s Contact Center. “They get far more positive feedback from customers than a supervisor would have time to provide for their entire team on a daily basis. This provides encouragement and motivation for agents to continue doing things well, and makes them more willing to accept suggestions for improvement.”
Whether you share customer comments taken from post-contact surveys, emails and letters from customers, customers’ direct conversations with supervisors/managers (following an escalated call, etc.), or gas station bathroom walls, those words can do a whole lot to engage (or wake up) agents and drive them to overcome challenging performance barriers.
Your customers are much more than just potential revenue sources lined up in a virtual queue; they are viable contact center coaches. It doesn’t matter if they know this or not – what matters is that you do.
Quality monitoring is as old a practice in contact centers as sending electric shocks through agents’ headsets to help keep handle time down. But just because centers have been conducting quality monitoring forever doesn’t mean they have been doing it right.
Effective quality monitoring is so important, I’m going to do two successive blog posts on the topic. This week and next my posts will highlight the quality monitoring tactics and strategies shared by contact centers that are better than yours. Here we go:
Gain agent understanding of and buy-in to monitoring from the get-go. In top contact centers, managers introduce the concept of monitoring during the “job preview” phase of the hiring process. Agent candidates learn (or, if experienced, are reminded of) the reasons behind and the value of monitoring, as well as how much monitoring will occur should they be offered and accept a job in the center. Managers clarify that monitoring isn’t intended to catch agents doing something wrong, it just often works out that way. They explain how monitoring is not only the best way to gauge an agent’s strengths and where they can improve, but also to pinpoint why the people who designed the center’s workflows and IVR system should be fired.
Gaining agent buy-in to monitoring goes beyond mere explanations and definitions. The best contact centers show new hires – and sometimes even job applicants – how quality monitoring actually works by having them listen to recorded calls with a quality specialist. The specialist goes over the center’s monitoring form/criteria, shows how each call was rated, and lets the newbies decide on a fitting punishment for the agent evaluated.
Use a dedicated quality monitoring team/specialist. In many contact centers, quality monitoring is carried out by busy frontline managers and supervisors. In the best contact centers, the process is carried out by dedicated quality assurance nerds – folks whose sole responsibility is making sure that the center’s agents and systems aren’t making customers nauseous.
I’m not saying that frontline managers/supervisors don’t know how to monitor; rather I’m saying that they typically don’t have time to do so effectively and provide timely coaching. With a dedicated quality monitoring team (or, in smaller/less wealthy centers, a single quality specialist) in place, there is time to carefully evaluate several customer contacts per month for each agent, and to provide prompt and comprehensive feedback to those agents about why they should have stayed in school.
Develop a comprehensive and fair monitoring form. A good quality monitoring form contains not only all of the criteria that drive the customer experience, but also all the company- and industry-based compliance items that keep your organization from facing any indictments.
In top contact centers, the monitoring form is broken into several key categories (e.g., Greeting, Accuracy, Professionalism/Courtesy, Efficiency, Resolution), with each category – and the specific criteria contained within it – assigned a different weighting depending on its perceived impact on customer satisfaction. For example, “Agent provided accurate/relevant information” and “Agent tactfully attempted to up-sell after resolving customer issue” would likely be weighted more heavily than “Agent didn’t spit while saying ‘thank you for calling’” or “Agent remained conscious during after-call wrap-up”.
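For the spreadsheet-inclined, here’s a minimal sketch of how that kind of weighted scoring might be tallied for a single monitored contact. The category names, weights, and pass/fail marks below are purely illustrative assumptions – they aren’t pulled from any actual center’s form or vendor’s tool.

```python
# Hypothetical illustration of a weighted quality monitoring form.
# Category names and weights are assumptions for the sake of example;
# weights reflect each category's assumed impact on customer satisfaction
# and should sum to 1.0.
CATEGORY_WEIGHTS = {
    "Greeting": 0.10,
    "Accuracy": 0.30,
    "Professionalism/Courtesy": 0.20,
    "Efficiency": 0.15,
    "Resolution": 0.25,
}

def score_contact(marks: dict[str, list[bool]]) -> float:
    """Return a 0-100 quality score for one monitored contact.

    `marks` maps each category to a list of pass/fail results,
    one per criterion on the form (True = criterion met).
    """
    total = 0.0
    for category, weight in CATEGORY_WEIGHTS.items():
        results = marks.get(category, [])
        # A category with no criteria scored is treated as fully met here;
        # a real form would more likely flag it for manual review.
        category_score = sum(results) / len(results) if results else 1.0
        total += weight * category_score
    return round(total * 100, 1)

# Example evaluation: the agent nailed accuracy and resolution
# but fumbled one of the two courtesy criteria.
example = {
    "Greeting": [True],
    "Accuracy": [True, True, True],
    "Professionalism/Courtesy": [True, False],
    "Efficiency": [True],
    "Resolution": [True, True],
}
print(score_contact(example))  # -> 90.0
```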
In developing an effective monitoring form that agents deem fair and objective, smart managers solicit agent input and recommendations regarding what criteria should or should not be included, and how agents feel each should be weighted. Showing agents such respect and esteem is a great way for you to foster engagement and a great way for me to make money if I ever write a book aimed at agents.
Invest in an automated quality monitoring system. There are contact centers that still rely mainly on real-time remote listening to evaluate agent-customer interactions. There are also doctors that still use leeches for bloodletting.
If your center is staffed with more than 20 agents and you want a shot at lasting customer satisfaction, continuous agent improvement, and an invitation to private vendor cocktail parties at conferences, you must invest in an automated quality monitoring system. There is simply no better and faster way to capture customer data, evaluate performance and spot key trends in caller behavior and agent incompetence.
I’m certainly not saying that other monitoring methods are not useful. Real-time remote observations, side-by-side live monitoring, mystery shopper calls, hiding beneath agents’ workstations – these are all excellent supplementary practices in any quality monitoring program. But they should do just that – supplement, not drive the program.
Monitor ALL contact channels, not just phone calls. As a researcher, I’m always amazed by how many multichannel contact centers have a formal monitoring process in place only for live agent phone calls. According to a study by ICMI, fewer than two thirds of contact centers that handle email contacts monitor customer email transactions, and fewer than half of centers monitor customers’ interactions with IVR or web self-service applications.
By virtually ignoring quality outside of the traditional phone channel, contact centers allow poor online and automated service to continue, creating a breeding ground for customer ire and high operating costs. Failure to monitor the email and chat channels will not only lead to agents’ errors and poor service going unnoticed, it can actually propagate bad service. Agents who see that the center is so focused on the phones but not on email or chat are likely to give it their all during customer calls but let quality slip a bit when tackling contacts via text. They may even use…gulp…emoticons. :0
The best contact centers have a formal process in place for evaluating agents’ email and chat transcripts for information accuracy, grammar/spelling, professionalism, and contact resolution. In addition, these centers continually test their IVR- and web-based self-service apps to ensure optimal functionality, as well as monitor those apps during actual interactions to make sure that customers aren’t being thrown into IVR dungeons or abandoning web pages to rip the company a new one on Twitter.
That’s it for Part 1. I’ll share several more key quality monitoring practices in Part 2 next week. If you simply cannot wait that long, you have no other choice but to purchase a copy of my ebook immediately: http://www.greglevin.com/full-contact-ebook.html.