Last week in Part 1 of this post, I cited several quality monitoring practices commonly embraced by the world’s best contact centers, then stopped midway through in a desperate attempt to make you come back to my website this week.
Here we go with Part 2. I hope the wild anticipation didn’t cause you to lose too much sleep.
Incorporate customer satisfaction ratings and feedback into monitoring scores. Here is where quality monitoring is really changing. This shift in quality monitoring procedure is so important, it’s underlined here – and just missed getting typed out in ALL CAPS.
Quality is no longer viewed as a purely internal measure. Many contact centers have started incorporating a “Voice of the Customer” component into their quality monitoring programs – tying direct customer feedback from post-contact surveys into agents’ overall monitoring scores. The center’s internal QA staff rate agents only on the most objective call criteria and requirements – like whether or not the agent used the correct greeting, provided accurate product information, and didn’t call the customer a putz. That internal score typically accounts for anywhere from 40%-60% of the agent’s quality score, with the remaining points based on how badly the customer said they wanted to punch the agent following the interaction.
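If you like to see the math spelled out, here is a minimal sketch of how such a blended score might be calculated. The 50/50 split, the 1-to-5 post-call survey scale, and the function itself are purely hypothetical; your center’s actual weighting (and survey design) will differ.

```python
def blended_quality_score(internal_qa_score, csat_rating,
                          internal_weight=0.5, csat_scale_max=5):
    """Combine an internal QA score (0-100) with a post-contact customer
    satisfaction rating to produce a single quality score (0-100).

    An internal_weight of 0.5 means a 50/50 split; most centers land
    somewhere between 0.4 and 0.6 (these defaults are for illustration only).
    """
    # Normalize the survey rating (e.g., 3 out of 5) to a 0-100 scale
    csat_component = (csat_rating / csat_scale_max) * 100
    return (internal_qa_score * internal_weight
            + csat_component * (1 - internal_weight))

# Example: agent scores 92 on the internal form; customer rates the call 3 of 5
print(blended_quality_score(92, 3))  # 76.0
```

In other words, an agent can nail every item on the internal form and still take a hit if the customer leaves the call wanting to punch someone.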
Add a self-monitoring component to the mix. The best contact centers usually give agents the opportunity to express how much they stink before the center goes and does it for them. Self-evaluation in monitoring is highly therapeutic and empowering. When you ask agents to rate their own performance before they are rated by a quality specialist (and the customer), it shows agents that the company values their input and experience, and it helps to soothe the sting of second- or third-party feedback, especially in instances when a call was truly flubbed.
Agents are typically quite critical of their own performance, often pointing out mistakes they made that QA staff might have otherwise overlooked. Of course, the intent of self-monitoring sessions is not to sit and watch as agents eviscerate themselves – as much fun as that can be – but rather to ensure that they understand their true strengths and where they might improve, as well as to make sure they and your quality personnel are on the same page. Self-evaluations should cease if agents begin to slap themselves during the process, unless it is an agent you yourself had been thinking about slapping anyway.
Provide positive coaching soon after the evaluated contact. Even if you incorporate all of the above tactics into your monitoring program, it will have little impact on overall quality, agent performance or the customer experience if agents don’t receive timely and positive coaching on what they did well and where they need to improve. Notice I said “timely” AND “positive” – this is no either/or scenario: Giving agents immediate feedback is great, but not if that feedback comes in the form of verbal abuse and a kick to the shin; by the same token, positive praise and constructive comments are wonderful, but not if the praise and comments refer to an agent-customer interaction that took place during the previous President’s administration.
At the end of each coaching session during which a key area for improvement is identified, the best centers typically have the coach and the agent work together to come up with a clear and concise action plan aimed at getting the agent up to speed. The action plan may call for the agent to receive additional one-on-one coaching/training offline, complete one or more e-learning modules, work with a peer mentor, and/or undergo a lobotomy.
Reward and recognize agents who consistently deliver high quality service. While positive coaching is certainly critical, high-performing agents want more than just a couple of pats on the back for consistently kicking butt on calls. Top contact centers realize they must reward quality to receive quality, thus most have some form of rewards and recognition tied directly to quality monitoring results. Agents in these centers can earn extra cash, gift certificates, preferred shifts and plenty of public recognition for achieving high ratings on all their monitored calls during a set month or quarter. In some centers, if an agent nails their quality score during an even longer period (six months or a year), they may earn a spot on the center’s “Wall of Fame”, and perhaps even the opportunity to serve as a quality coach who can boss around their inferior peers.
To foster a strong sense of teamwork and to motivate more than just a select few agents, many centers have built team rewards/recognition into the mix. Entire groups of agents – not just the center’s stars – can earn cash and kudos for consistently meeting and exceeding the team’s quality objective over a set period of time. Such collective, team-friendly incentives not only help drive high quality center-wide, they help protect the center’s elite agents from being bludgeoned with their own “#1 in Quality” trophy by co-workers.
If you have some other key quality monitoring practices you’d like to share, please do so in the comment box below. If you’d like to take serious issue with the practices I’ve highlighted, get your own blog.
Quality monitoring is as old a practice in contact centers as sending electric shocks through agents’ headsets to help keep handle time down. But just because centers have been conducting quality monitoring forever doesn’t mean they have been doing it right.
Effective quality monitoring is so important, I’m going to do two successive blog posts on the topic. This week and next my posts will highlight the quality monitoring tactics and strategies shared by contact centers that are better than yours. Here we go:
Gain agent understanding of and buy-in to monitoring from the get-go. In top contact centers, managers introduce the concept of monitoring during the “job preview” phase of the hiring process. Agent candidates learn about (or, if experienced, are reminded of) the reasons behind and the value of monitoring, as well as how much monitoring will occur should they be offered and accept a job in the center. Managers clarify that monitoring isn’t intended to catch agents doing something wrong; it just often works out that way. They explain how monitoring is not only the best way to gauge an agent’s strengths and where they can improve, but also to pinpoint why the people who designed the center’s workflows and IVR system should be fired.
Gaining agent buy-in to monitoring goes beyond mere explanations and definitions. The best contact centers show new-hires and sometimes even job applicants how quality monitoring actually works by having them listen to recorded calls with a quality specialist. The specialist goes over the center’s monitoring form/criteria, shows how each call was rated, and lets the newbies decide on a fitting punishment for the agent evaluated.
Use a dedicated quality monitoring team/specialist. In many contact centers, quality monitoring is carried out by busy frontline managers and supervisors. In the best contact centers, the process is carried out by dedicated quality assurance nerds – folks whose sole responsibility is making sure that the center’s agents and systems aren’t making customers nauseous.
I’m not saying that frontline managers/supervisors don’t know how to monitor; rather I’m saying that they typically don’t have time to do so effectively and provide timely coaching. With a dedicated quality monitoring team (or, in smaller/less wealthy centers, a single quality specialist) in place, there is time to carefully evaluate several customer contacts per month for each agent, and to provide prompt and comprehensive feedback to those agents about why they should have stayed in school.
Develop a comprehensive and fair monitoring form. A good quality monitoring form contains not only all of the criteria that drive the customer experience, but also all the company- and industry-based compliance items that keep your organization from facing any indictments.
In top contact centers, the monitoring form is broken into several key categories (e.g., Greeting, Accuracy, Professionalism/Courtesy, Efficiency, Resolution, etc.), with each category – and the specific criteria contained within – assigned a different weighting depending on its perceived impact on customer satisfaction. For example, “Agent provided accurate/relevant information” and “Agent tactfully attempted to up-sell after resolving customer issue” would likely be weighted more heavily than “Agent didn’t spit while saying ‘thank you for calling’” or “Agent remained conscious during after-call wrap-up”.
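To make the weighting concrete, here is a rough sketch of how a weighted form like that might be scored. The categories, weights, and ratings below are invented for illustration only; your weights should come from your own analysis of customer impact (and, as noted next, from your agents).

```python
# Illustrative category weights; each reflects perceived impact on customer
# satisfaction, and the whole set must sum to 1.0.
FORM_WEIGHTS = {
    "Greeting": 0.10,
    "Accuracy": 0.30,
    "Professionalism/Courtesy": 0.20,
    "Efficiency": 0.15,
    "Resolution": 0.25,
}

def weighted_form_score(ratings, weights=FORM_WEIGHTS):
    """ratings maps each category to a score from 0.0 (flubbed) to 1.0
    (nailed it). Returns an overall score on a 0-100 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return 100 * sum(weights[cat] * ratings[cat] for cat in weights)

# Example: strong on accuracy and resolution, shaky on the greeting
ratings = {"Greeting": 0.5, "Accuracy": 1.0, "Professionalism/Courtesy": 0.9,
           "Efficiency": 0.8, "Resolution": 1.0}
print(round(weighted_form_score(ratings), 1))  # 90.0
```

In this made-up example, a weak greeting costs the agent only five points, while a completely botched resolution would cost twenty-five; that is the weighting doing its job.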
In developing an effective monitoring form that agents deem fair and objective, smart managers solicit agent input and recommendations regarding what criteria should or should not be included, and how agents feel each should be weighted. Showing agents such respect and esteem is a great way for you to foster engagement and a great way for me to make money if I ever write a book aimed at agents.
Invest in an automated quality monitoring system. There are contact centers that still rely mainly on real-time remote listening to evaluate agent-customer interactions. There are also doctors who still use leeches for bloodletting.
If your center is staffed with more than 20 agents and you want a shot at lasting customer satisfaction, continuous agent improvement, and an invitation to private vendor cocktail parties at conferences, you must invest in an automated quality monitoring system. There is simply no better and faster way to capture customer data, evaluate performance and spot key trends in caller behavior and agent incompetence.
I’m certainly not saying that other monitoring methods are not useful. Real-time remote observations, side-by-side live monitoring, mystery shopper calls, hiding beneath agents’ workstations – these are all excellent supplementary practices in any quality monitoring program. But they should do just that – supplement, not drive the program.
Monitor ALL contact channels, not just phone calls. As a researcher, I’m always amazed by how many multichannel contact centers have a formal monitoring process in place only for live agent phone calls. According to a study by ICMI, fewer than two-thirds of contact centers that handle email contacts monitor customer email transactions, and fewer than half of centers monitor customers’ interactions with IVR or web self-service applications.
By virtually ignoring quality outside of the traditional phone channel, contact centers allow poor online and automated service to continue, creating a breeding ground for customer ire and high operating costs. Failure to monitor the email and chat channels will not only lead to agents’ errors and poor service going unnoticed, it can actually propagate bad service. Agents who see that the center is so focused on the phones but not on email or chat are likely to give it their all during customer calls but let quality slip a bit when tackling contacts via text. They may even use…gulp…emoticons. :0
The best contact centers have a formal process in place for evaluating agents’ email and chat transcripts for information accuracy, grammar/spelling, professionalism, and contact resolution. In addition, these centers continually test their IVR- and web-based self-service apps to ensure optimal functionality, as well as monitor those apps during actual interactions to make sure that customers aren’t being thrown into IVR dungeons or abandoning web pages to rip the company a new one on Twitter.
That’s it for Part 1. I’ll share several more key quality monitoring practices in Part 2 next week. If you simply cannot wait that long, you have no other choice but to purchase a copy of my ebook immediately: http://www.greglevin.com/full-contact-ebook.html.
Contact center managers have been clamoring for more surefire hiring methods for years. They have lost faith in traditional hiring tactics like telephone pre-screenings, personality tests and live interviews – complaining that such tactics provide little insight into whether or not a candidate will remain committed to customer care and a life of poverty.
Great news: A team of top-notch doctors and psychiatrists recently developed a contact center-specific medical exam that promises to revolutionize agent hiring and retention. Following is a detailed description of each test that makes up the exam:
disStress Test. This is somewhat similar to the traditional stress test used by many physicians, but instead of placing agent candidates on a treadmill to evaluate their cardiovascular condition, they are put in a room with a phone and then sent 100 customer calls in 60 minutes.
Candidates who handle between 70 and 100 calls before losing consciousness should be hired by the contact center immediately. Those who handle between 40 and 70 calls before losing consciousness should be kept for further testing. Those who handle between 1 and 40 calls should be rejected immediately. And those who refuse to take even a single call should be placed on the company’s “executive training” track.
Electro-mail-ogram. This test is similar to the more familiar electromyogram, but where the latter features the sticking of painful electric needles into the candidate’s muscles to test for degenerative tissue/nerves, the former features the sticking of painful electric needles into the candidate’s frontal lobe to test for degenerative spelling/grammar. After each EMG, managers receive a full diagnostic report on the candidate’s written communication skills – including a ranking of each candidate from 1-10, with 10 being “masterful wordsmith” and 1 being “college graduate.”
The test is absolutely essential for contact centers in need of e-support agents who will be able to effectively handle customer email. It’s also good for contact centers that enjoy making their applicants cry.
CHAT scan. Not to be confused with a CAT scan, which provides a highly detailed computerized image of a subject’s brain and inter-cranial fascia, a CHAT scan provides a highly detailed computerized image of a subject’s wrist and fingers. The latter test determines whether or not an agent candidate has the proper carpal/metacarpal makeup to succeed in the physically demanding and fast-paced web chat environment. Specifically, the test reveals whether there are any existing or potential weaknesses/abnormalities in any of the muscles and tendons needed for rapid typing or for flicking off managers when their backs are turned.
A thorough CHAT scan will also identify whether a candidate’s wrist/hand strength is excessive. Such brute strength can be a detriment to e-support efficiency, as the agent will be less likely to focus on chat sessions and more likely to focus on trying to remove the shackles that confine him to his workstation.
Rep-lex Test. Just like a reflex test, only completely different. Where a reflex test features the tapping of the patient’s patellar tendon to see if they respond with an involuntary kick, a Rep-lex test features the flashing of the phrase “200 calls in queue” across a readerboard to see if the agent candidate responds with a panic attack. Such a traumatic response shows that the candidate truly takes customer care to heart. If, instead of the desired panic attack, a candidate responds by yawning or taking a book out and reading calmly, it’s best to eliminate the candidate from the running, or, if yours is a software support contact center, to hire them as a senior agent.
Flex-ray. This is like an X-ray, but focuses only on the patient’s spinal column. A typical Flex-ray test measures the flexibility of the spine and determines whether or not the candidate is likely to bend over completely backward for the contact center.
Candidates with abnormally rigid vertebrae should not be considered for contact center work, unless of course the company is in need of a scheduler. The ideal is to find candidates with virtually no backbone to speak of, as such individuals are not only easy to boss around, they are able to scrunch up enough to work in cubicles as small as 2’ x 2’, thus saving the company thousands of dollars in facility expenses.
NOTE: No contact center agents were harmed in the making of this blog post. The same will not be said if you actually end up using the medical exam Greg has described.
One of the keys to success in managing a call center is being prepared for the chaos and destruction that lurks around every corner in this dangerous world. Despite the constant threat of total catastrophe and mayhem, many call centers don’t have a formal disaster recovery plan in place.
It isn’t clear whether the leaders in such centers are unaware that complete calamity could strike at any second, or if they are aware of it but simply don’t know how to deal with it. Assuming the latter is the more common case, I’ve decided to share several tips on developing an effective disaster recovery plan. Follow this list of “do’s” and “don’ts” to ensure that your call center has at least a snowball’s chance of surviving the cataclysm that will likely befall it soon.
DO organize a Disaster Recovery Task Force to develop and oversee the implementation of your call center contingency plan. The task force should be composed of managers, supervisors, agents, IT specialists and a good bartender, and should come up with recovery guidelines for a variety of disaster scenarios, including an earthquake, extensive system failure, or center-wide hangover following a staff barbeque. Be sure to select a leader of the task force, as having a single individual in charge helps to keep everybody focused and ensures that there is somebody to blame outright should the recovery plan fail miserably.
DON’T select an outsourcer located in the same zip code as your call center to handle customer contacts during disasters. If your center is scooped up by a tornado, you’re going to need a company that still has a roof and walls to pick up your slack. Agents at outsourcing firms are under enough pressure as it is without having to worry about dodging shards of flying glass during calls.
DO take advantage of your own resources if your organization has more than one call center site. Having multiple call center locations is the best way to minimize damage during a crisis situation. Meet with managers of each call center site to discuss such important disaster planning issues as emergency call-routing processes, data security, customer notification, and how to have a panic attack without your agents knowing it. If your company currently has only one call center, start looking on craigslist for good deals on some used ones.
DON’T forget to put the plan in writing, and make sure it contains short and concise sentences that clearly state each step because if it contains long and complex sentences then it becomes harder to understand and thus increases the likelihood that your call center will be unable to react quickly in the event of a disaster and stuff like that. Clarity and brevity are key.
DO educate the entire frontline regarding the disaster recovery plan. It’s important not to leave your agents in the dark – there will be plenty of time for that when a hurricane or flood destroys all the power lines in your area. Make sure your agents fully understand the plan and their specific role in it. Occasionally test the plan by screaming “Fire! Fire!” and observing how agents react under pressure. Such drills are most effective if you are able to hide your giggling.
To assist you even more with your disaster recovery efforts, buy a copy of my Full Contact ebook (http://goo.gl/kVMhf). I actually don’t cover that topic, but you can print out the pages of the ebook and use them to prop up any table or desk that might get damaged during a disaster.