When it comes to social customer care (providing service and support via social media channels), there are two key practices that contact centers must embrace: 1) monitoring; and 2) monitoring.
No, I haven’t been drinking, and no, there isn’t an echo embedded in my blog. The truth is, I didn’t actually repeat myself in the statement above.
Now, before you recommend that I seek inpatient mental health/substance abuse treatment, allow me to explain.
Monitoring in social customer care takes two distinctly different though equally important forms. The first entails the contact center monitoring the social landscape to see what’s being said to and about the brand (and then deciding who to engage with). The second entails the contact center’s Quality Assurance team/specialist monitoring agents’ social media interactions to make sure the agents are engaging with the right people and providing the right responses.
The first type of monitoring is essentially a radar screen; the second type of monitoring is essentially a safety net. The first type picks up on which customers (or anti-customers) require attention and assistance; the second type makes sure the attention and assistance provided doesn’t suck.
Having a powerful social media monitoring tool that enables agents to quickly spot and respond to customers via Twitter and Facebook is great, but it doesn’t mean much if those agents, when responding…
- misspell every other word
- misuse or ignore most punctuation
- provide incomplete – or completely incorrect – information
- show about as much tact and empathy as a Kardashian
- fail to invite the customer to continue his/her verbal evisceration of the company and the agent offline and out of public view.
All of those scary bullet items above can be avoided – or at least minimized – when there’s a formal QA process in place for social media customer contacts. Now, if you’re thinking your QA and supervisory staff are too busy to carefully monitor and evaluate agents’ Twitter/Facebook interactions with customers (and provide follow-up coaching), then what the Zuckerberg are you thinking even offering such channels as contact options? I’ve said it before and I’ll say it again (and again, and again): If your contact center isn’t ready to monitor a particular contact channel, then it isn’t ready to HANDLE that channel.
Customers don’t applaud organizations for merely being progressive. If Toyota came out with a new automobile that ran on garbage but that had a 20% chance of exploding when you put the key in the ignition, customers’ response wouldn’t be, “Deadly, yes, but I might make it across the country on just banana peels!”
Social customer care is still new enough that organizations offering it are considered progressive. If your contact center is one such organization, are your customers applauding the strong and consistent social service and support your agents are providing, or is your center overlooking the quality component and losing too many customers to explosions?
For more insights (and some irreverence) on Social Customer Care, be sure to check out my blog post, “Beginner’s Guide to Social Customer Care”. Also, my book, Full Contact, contains a chapter in which best (or at least pretty good) practices in Social Customer Care are covered.
In the eyes of many customers, self-service is not a compound word but rather a four-letter one. It’s not that there’s anything inherently bad about IVR or web self-service applications – it’s that there’s something bad about most contact centers’ efforts to make such apps good.
Relatively few contact centers extend their quality assurance (QA) practices to self-service applications. Most centers tend to monitor and evaluate only those contacts that involve an interaction with a live agent – i.e., customer contacts in the form of live phone calls or email, chat or social media interactions. Meanwhile, no small percentage of customers try to complete transactions on their own via the IVR or online (or, more recently, via mobile apps) and end up tearing their hair out in the process. In fact, poorly designed and poorly looked-after self-service apps account for roughly 10% of all adult baldness, according to research I might one day conduct.
When contact center pros hear or read “QA”, they need to think not only “Quality Assurance” but also “Quality Automation.” The latter is very much part of the former.
To ensure that customers who go the self-service route have a positive experience and maintain their hair, the best contact centers frequently conduct comprehensive internal testing of IVR systems and online applications, regularly monitor customers' actual self-service interactions, and gather customer feedback on their experiences. Let's take a closer look at each of these critical practices.
Testing Self-Service Performance
Testing the IVR involves calling the contact center and interacting with the IVR system just as a customer would, only with much less groaning and swearing. Evaluate such things as menu logic, awkward silences, speech recognition performance and – to gauge the experience of callers who choose to opt out of the IVR – hold times and call-routing precision.
Testing of web self-service apps is similar, but takes place online rather than via calls. Carefully check site and account security; the accuracy and relevance of FAQ responses; and the performance of search engines, knowledge bases and automated agent bots. Resist the urge to see whether you can get the automated bot to say dirty words. There’s no time for such shenanigans. Testing should also include evaluating how easy it is for customers to access personal accounts online and complete transactions.
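If you’d like to automate a first pass at such checks without spending a dime, even a crude script helps. Here’s a minimal sketch in Python using the requests library – the URLs and expected phrases are hypothetical, so swap in your own site’s pages:

```python
import requests

# Hypothetical URLs and expected phrases -- swap in your own site's pages.
CHECKS = [
    ("https://example.com/faq/returns", "30 days"),
    ("https://example.com/search?q=reset+password", "Reset your password"),
]

def smoke_test() -> None:
    for url, expected_text in CHECKS:
        response = requests.get(url, timeout=10)
        passed = response.status_code == 200 and expected_text in response.text
        print(f"{'PASS' if passed else 'FAIL'}: {url}")

if __name__ == "__main__":
    smoke_test()
```

It won’t catch a bot swearing at customers, but it will catch a dead FAQ page before your customers do.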
Some of the richest and laziest contact centers have invested in products that automate the testing process. Today's powerful end-to-end IVR monitoring and diagnostic tools are able to dial in and navigate through an interactive voice transaction just as a real caller would, and can track and report on key quality and efficiency issues. Other centers achieve testing success by contracting with a third-party vendor that specializes in testing voice and web self-service systems and taking your money.
Monitoring Customers’ Self-Service Interactions
Advancements in quality monitoring technologies are making things easier for contact centers looking to spy on actual customers who attempt self-service transactions. All the major quality monitoring vendors provide customer interaction recording applications that capture how easy it is for callers to navigate the IVR and complete transactions without agent assistance, as well as how effectively such front-end systems route each call after the caller opts out to speak to an actual human being.
As for monitoring the online customer experience, top contact centers have taken advantage of multichannel customer interaction-recording solutions. Such solutions enable contact centers to find out first-hand such things as: how well customers navigate the website; what information they are looking for and how easy it is to find; what actions or issues lead most online customers to abandon their shopping carts; and what causes customers to call, email or request a chat session with an agent rather than continue to cry while attempting to serve themselves.
As with internal testing of self-service apps, some centers – rather than deploying advanced monitoring systems in-house – have contracted with a third-party specialist to conduct comprehensive monitoring of their customers’ IVR and/or web self-service experiences.
Capturing the Customer Experience
In the end, the customer is the real judge of quality. As important as self-service testing and monitoring is, even more vital is asking customers directly just how bad their recent self-service experience was.
The best centers have a post-contact C-Sat survey process in place for self-service, just as they do for traditional phone, email and chat contacts. Typically, these centers conduct said surveys via the same channel the customer used to interact with the company. That is, customers who complete (or at least attempt to complete) a transaction via the center’s IVR system are invited to complete a concise automated survey via the IVR (immediately following their interaction). Those who served themselves via the company’s website are soon sent a web-based survey form via email. Customers, you see, like it when you pay attention to their channel preferences, and thus are more likely to complete surveys that show you’ve done just that. Calling a web self-service customer and asking them to complete a survey over the phone is akin to finding out somebody is vegetarian and then offering them a steak.
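For centers that automate this, the channel-matching logic can be as simple as a lookup table. Here’s a minimal sketch; the sender functions are hypothetical stand-ins for whatever your survey platform actually provides:

```python
# A minimal sketch of channel-matched survey dispatch. The sender functions
# below are hypothetical stand-ins for a real survey platform's API.
def send_ivr_survey(customer_id: str) -> None:
    print(f"Queueing automated IVR survey for {customer_id}")

def send_web_survey_email(customer_id: str) -> None:
    print(f"Emailing web survey link to {customer_id}")

SURVEY_ROUTES = {
    "ivr": send_ivr_survey,
    "web": send_web_survey_email,
}

def dispatch_survey(contact_channel: str, customer_id: str) -> None:
    # Match the survey to the channel the customer actually used.
    sender = SURVEY_ROUTES.get(contact_channel)
    if sender is None:
        raise ValueError(f"No survey route for channel: {contact_channel}")
    sender(customer_id)

dispatch_survey("web", "cust-1138")  # no phone calls to vegetarians
```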
It’s Your Call
Whether you decide to do self-service QA manually, invest in special technology, or contract with third-party specialists is entirely up to you and your organization. But if you don’t do any of these things and continue to ignore quality and the customer experience on the self-service side, don’t act surprised if your customers eventually start ignoring you – and start imploring others to do the same.
True contact center success comes when organizations make the critical switch from a “Measure everything that moves” mindset to one of “Measure what matters most.” Given that we are now living in the Age of Customer Influence, “what matters most” is that which most increases the likelihood of the customer not telling the world how evil you are via Twitter.
No longer can companies coast on Average Handle Time (AHT) and Number of Calls Handled per Hour. Such metrics may have ruled the roost back when contact centers were back-office torture chambers, but the customer care landscape has since changed dramatically. Today, customers expect and demand service that is not only swift but stellar. A speedy response is appreciated, but only when it’s personalized, professional and accurate – and when what’s promised is actually carried out.
AHT and other straight productivity measurements still have a place in the contact center (e.g. for workforce management purposes as well as identifying workflow and training issues). However, in the best centers – those that understand that the customer experience is paramount – the focus is on a set of five far more qualitative and holistic metrics.
1) Service Level. How accessible your contact center is sets the tone for every customer interaction and determines how much vulgarity agents will have to endure on each call. Service level (SL) is still the ideal accessibility metric, revealing what percentage of calls (or chat sessions) were answered within “Y” seconds. A common example of an SL objective (but NOT an industry standard!) is 80/20 – 80% of calls answered within 20 seconds.
The “X percent in Y seconds” attribute of SL is why it’s a more precise accessibility metric than its close cousin, Average Speed of Answer (ASA). ASA is a straight average, which can cause managers to make faulty assumptions about customers’ ability to reach an agent promptly. A reported ASA of, say, 30 seconds doesn’t mean that all or even most callers reached an agent in that time; many callers likely got connected more quickly while many others may not have reached an agent until after they perished.
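To see the difference in action, here’s a quick sketch with made-up answer times. The ASA looks perfectly respectable while a fifth of callers quietly perish in the queue:

```python
# Made-up answer times (in seconds) for ten calls.
answer_times = [5, 5, 5, 8, 10, 10, 12, 15, 120, 150]

asa = sum(answer_times) / len(answer_times)
within_target = sum(t <= 20 for t in answer_times) / len(answer_times)

print(f"ASA: {asa:.0f} seconds")                    # 34 -- sounds tolerable
print(f"Answered within 20s: {within_target:.0%}")  # 80% -- yet two callers
                                                    # waited two minutes or more
```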
2) First-Call Resolution (FCR). No other metric has as big an impact on customer satisfaction and costs (as well as agent morale) as FCR does. Research has shown that customer satisfaction (C-Sat) ratings will be 35-45 percent lower when a second call is made for the same issue.
Trouble is, accurately measuring FCR is something that can stump even the best and brightest scientists at NASA. (I discussed the complexity of FCR tracking in a previous post.) Still and all, contact centers must strive to gauge this critical metric as best they can and, more importantly, equip agents with the tools and techniques they need to drive continuous (and appropriate) FCR improvement.
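For illustration only, here’s one common (and admittedly imperfect) approximation: count a contact as resolved on the first call if no repeat contact for the same issue arrives within some window. The records and the seven-day window below are assumptions, not a standard:

```python
from datetime import datetime, timedelta

# Hypothetical contact records; the 7-day window is an assumption, not a rule.
contacts = [
    {"customer": "A", "issue": "billing", "time": datetime(2024, 3, 1)},
    {"customer": "A", "issue": "billing", "time": datetime(2024, 3, 3)},  # repeat
    {"customer": "B", "issue": "returns", "time": datetime(2024, 3, 2)},
]

WINDOW = timedelta(days=7)

def resolved_first_time(contact: dict) -> bool:
    """True if no later contact for the same customer/issue lands in WINDOW."""
    return not any(
        other["customer"] == contact["customer"]
        and other["issue"] == contact["issue"]
        and timedelta(0) < other["time"] - contact["time"] <= WINDOW
        for other in contacts)

fcr = sum(resolved_first_time(c) for c in contacts) / len(contacts)
print(f"FCR: {fcr:.0%}")  # 67%
```

Note how even this simple rule gets murky: the repeat call itself counts as “resolved.” This is exactly the sort of thing that stumps those NASA scientists.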
3) Contact Quality and 4) C-Sat. Contact Quality and C-Sat are intrinsically linked – and in the best contact centers, so are the processes for measuring them. To get a true account of Quality, the customer’s perspective must be incorporated into the equation. Thus, in world-class customer care organizations, agents’ Quality scores are a combination of internal compliance results (as judged by internal QA monitoring staff using a formal evaluation form) and customer ratings (and berating) gleaned from post-contact transactional C-Sat surveys.
Through such a comprehensive approach to monitoring, the contact center gains a much more holistic view of Contact Quality than internal monitoring alone can provide, while simultaneously capturing critical C-Sat data that can be used not only by the QA department but enterprise-wide as well.
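What might “a combination” look like in practice? Here’s a minimal sketch assuming an illustrative 60/40 weighting between internal QA scores and survey ratings – the weights and scales are examples, not an industry standard:

```python
# Illustrative 60/40 blend of internal QA score (0-100) and post-contact
# survey rating (1-5). The weights and scales are examples, not a standard.
def blended_quality(internal_qa_pct: float, csat_1_to_5: float,
                    qa_weight: float = 0.6) -> float:
    csat_pct = (csat_1_to_5 - 1) / 4 * 100  # rescale 1-5 survey onto 0-100
    return qa_weight * internal_qa_pct + (1 - qa_weight) * csat_pct

# An agent who aces internal compliance (92) but gets middling customer
# ratings (4.2 of 5) lands at 87.2 -- the customer's voice moves the score.
print(f"{blended_quality(92.0, 4.2):.1f}")
```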
5) Employee Satisfaction (E-Sat). Those who shun E-Sat as a key metric because they see it as “soft” soon find that achieving customer loyalty and cost containment is hard. There is a direct and irrefutable correlation between how unhappy agents are and how miserable they make customers. Failure to keep tabs on E-Sat – and to take action to continuously improve it – leads not only to bad customer experiences but also high levels of employee attrition and knife-fighting, which costs contact centers an arm and a leg in terms of agent re-recruitment, re-assessment, re-training, and first-aid.
Smart centers formally survey staff via a third-party surveying specialist at least twice a year to find out what agents like about the job, what they’d like to see change, and how likely they are to cut somebody or themselves.
For much more on these and other common contact center metrics, be sure to check out my FULL CONTACT ebook at https://offcenterinsight.com/full-contact-book.html.
Show me a call center that does not bother to measure Service Level – and do so correctly – and I’ll show you a call center that likely struggles in practically every key area of customer contact management. Service Level is THE metric for gauging accessibility, and as such it is tied to and has an immense impact on customer satisfaction, workforce management decisions, call center budgeting/costs, and agent sanity.
Service Level (SL) is defined as X% of calls (or chat sessions) answered in Y seconds. A common (but NOT an industry standard!) SL objective is to answer 80% of all customer calls in 20 seconds – typically stated as “80/20”. This means that out of every 100 calls, the call center aims to route at least 80 of them to a live agent within 20 seconds. If the agent to whom a call is routed is not alive, it’s best to dispose of the body immediately before it affects the health and/or morale of others on the team.
So why doesn’t every call center strive to answer 100% of calls in 20 seconds (or 15 seconds, or 10 seconds)? Well, while doing so would positively delight customers, they would not remain delighted for very long, as the company they are calling would likely go out of business. To deliver on a 100/20 or 100/15 SL objective, a call center would require a daily staffing budget bigger than the CEO’s country club dues. (The exception, of course, is emergency services call centers – e.g. 911 centers – which must answer 100% of calls in a very short period of time by law, and are thus staffed/funded accordingly.)
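If you’d like to see for yourself why 100/20 breaks the bank, the classic Erlang C queuing formula makes the point nicely. Here’s a rough sketch with made-up numbers (1,000 calls per hour, 300-second AHT, 20-second target); note the diminishing returns as you chase those last few percentage points:

```python
from math import exp, factorial

def erlang_c(agents: int, load: float) -> float:
    """Probability an arriving call has to wait (classic Erlang C)."""
    if load >= agents:
        return 1.0  # queue grows without bound; everybody waits
    top = load ** agents / factorial(agents)
    bottom = top + (1 - load / agents) * sum(
        load ** k / factorial(k) for k in range(agents))
    return top / bottom

def service_level(agents: int, calls_per_sec: float, aht_sec: float,
                  target_sec: float) -> float:
    """Fraction of calls answered within target_sec."""
    load = calls_per_sec * aht_sec  # offered load in Erlangs
    return 1 - erlang_c(agents, load) * exp(
        -(agents - load) * target_sec / aht_sec)

# Made-up volume: 1,000 calls per hour, 300-second AHT, 20-second target.
for n in (85, 90, 93, 100, 110, 130):
    print(f"{n} agents -> {service_level(n, 1000 / 3600, 300, 20):.1%}")
# Each point of SL past ~90% costs more agents than the last,
# and 100% is never actually reached.
```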
Such infeasible SL objectives aren’t even necessary; most customers don’t mind waiting 20 or 30 seconds or even a little more before reaching a live agent – especially if they are informed beforehand of the expected wait. That’s why many of the best call centers implement a “visible queue” tool – an automated attendant that tells callers the estimated time until an agent will be available. Studies have shown that call centers with visible queues are 74% less likely to be burned to the ground by a disgruntled customer, and 26% more likely to not be burned to the ground by a disgruntled customer.
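The announcement itself is typically driven by a back-of-the-envelope estimate. Here’s a minimal sketch of one common approximation – callers ahead of you divided by the rate at which agents free up – which real systems refine with live queue data:

```python
def estimated_wait_seconds(callers_ahead: int, staffed_agents: int,
                           aht_sec: float) -> float:
    """Rough announcement math: with all agents busy, one frees up about
    every aht_sec / staffed_agents seconds; the caller is served on the
    (callers_ahead + 1)th such release."""
    return (callers_ahead + 1) * aht_sec / staffed_agents

# e.g., 12 callers ahead of you, 90 busy agents, 300-second AHT:
print(f"Estimated wait: {estimated_wait_seconds(12, 90, 300):.0f} seconds")  # ~43
```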
Now, an 80/20 SL objective does not indicate that the center ignores or doesn’t care about what happens to the 20% of calls not answered in 20 seconds; it simply means that those callers may experience longer wait times and be given a chance to sing or hum along with the call center’s on-hold music. Top call centers focus on side metrics such as Longest Current Wait Time and # of Customer Curse Words per Hour to ensure that no callers are being avoided like the plague, and to stay abreast of call volume trends that may require real-time action to evade an accessibility crisis and/or customer revolt.
Selecting an SL Objective
So, what is the right SL objective for your call center? I have no idea – and neither does anybody else outside of your organization (except perhaps for an experienced consultant familiar with the ins and outs of your operation). The best SL objective depends on several variables specific to your center: your average call volume; your customers’ expectations and tolerance levels; your staffing budget; and the SL objectives of competing call centers in your industry (though, still, you don’t want to just play copy-cat, as other key variables may differ).
That is not to say that there aren’t some common SL objectives shared by many centers – e.g., 80/20, 80/30, 90/20, 90/30. However, just arbitrarily picking one of these objectives without first conducting careful analysis of your call center’s resources and your customers’ expectations will often lead to either very angry callers (and agents) or very angry executives (and stockholders) – or both.
Keep Quality in the Equation
Of course, no conversation about SL is complete without mentioning quality. You could take the time to carefully select a solid SL objective and consistently achieve that objective (without going too far over, as that indicates costly over-staffing), but it won’t mean jack squat if those calls that are routed quickly are being handled sloppily. Accessibility means nothing without quality. Getting seated immediately at a trendy, popular restaurant is great, but not if the maitre d' laughs at your tie, the waiter spills your wine, and the cook burns your steak.
Leading call centers understand this, and therefore never let efficiency supersede proficiency and professionalism. From the moment new agents are hired, these centers indoctrinate them into a customer-centric service culture where things like empathy, accuracy and not comparing customers unfavorably to microorganisms are strongly emphasized and coached to. When such behaviors and values are encouraged and embodied, fewer mistakes are made, fewer call-backs are required, and fewer agents and customers burst into flames – thus making it much more likely that the call center (assuming good forecasting and scheduling has occurred) will meet or even exceed its SL objective.