Problem Statement

Customer care operations have little insight into the topics and volume of incoming support calls.

By understanding why customers reach out to support staff, product teams can identify bugs, issues, and potential improvements to their systems.

Background

In February 2017, LogMeIn acquired the products division of Citrix Online, resulting in the layoff of the customer care team. Tier one care operations were outsourced to a call center in Costa Rica, causing turmoil as the new team struggled to handle customer issues.

The switch to outsourced care teams led to a push to reduce care calls and save costs. However, the call center's operating procedures allowed agents only 30 seconds between calls to document the subject, making it difficult to determine call topics accurately. Agents were provided with a drop-down list of possible reasons to select post-call, but analysis revealed that over 70% of calls were being funneled into the top three categories, indicating that agents were choosing whichever category was fastest rather than the correct one in order to keep their post-call time down.
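As a rough illustration, that concentration problem can be quantified in a few lines of analysis. The sketch below assumes the call logs are exported as a CSV with an agent-selected reason column; the file and column names are hypothetical, not the actual export format.

```python
# Minimal sketch: measuring how concentrated agent-selected call reasons are.
# The file name "care_calls.csv" and column "call_reason" are hypothetical.
import csv
from collections import Counter

with open("care_calls.csv", newline="") as f:
    reasons = Counter(row["call_reason"] for row in csv.DictReader(f))

total = sum(reasons.values())
top_three = reasons.most_common(3)
top_share = sum(count for _, count in top_three) / total

print(f"Top 3 categories account for {top_share:.0%} of {total} calls")
for reason, count in top_three:
    print(f"  {reason}: {count / total:.0%}")
```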

Goals

Initially, the goal was to simplify the call reason selection process by transforming the drop-down list into a two-step form, allowing agents to select a category before choosing an individual call topic.

However, as the project progressed and the company partnered with Clarabridge, the objective shifted to creating a comprehensive list of call reasons that could be used to train an AI algorithm to sort calls accurately based on keywords spoken during the call.

A secondary goal was to determine why customers were calling about issues that could be resolved through self-service on the support website. By directing customers to the website before they called, unnecessary care calls could be reduced, cutting call center costs.

Methods

Phase 1: Call listening

To identify call reasons, the first step was to conduct call listening sessions. For a week, my colleague and I listened to care calls via the system that supervisors use for quality assurance and training. We created a spreadsheet and documented each call's information, such as the date, the product the support inquiry was for, the Salesforce case number created for the call, and the customer's email. We then recorded the customer's issue in their own words and listed each step the agent took to troubleshoot or solve it during the call. Finally, we marked the call as "resolved" or "unresolved" based on whether the agent was able to help the customer during the call.
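A minimal sketch of the per-call record we captured, expressed as a Python dataclass. The field names are illustrative reconstructions of the spreadsheet columns, not the exact schema we used.

```python
# Sketch of the per-call record from the listening sessions.
# Field names are illustrative; the real spreadsheet columns may have differed.
from dataclasses import dataclass, field

@dataclass
class CallRecord:
    date: str                  # date of the call
    product: str               # product the support inquiry was for
    case_number: str           # Salesforce case number created for the call
    customer_email: str
    issue_description: str     # the customer's issue, in their own words
    troubleshooting_steps: list[str] = field(default_factory=list)
    resolved: bool = False     # whether the agent resolved the issue on the call
```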

The call listening sessions provided many insights that helped improve the customer call experience. For example, we noticed that customers were being transferred multiple times between support teams specializing in different product areas. We also saw that agents had difficulty transcribing customer emails over the phone, often spending more than a minute going back and forth on phonetics and spelling. Moreover, we found that customers were calling in to verify account information that was already viewable online, driving up call volume for non-issues and wasting company resources.

We forwarded these insights to the care operations teams, who implemented changes such as better routing from the support site and a web form that created a case, including the customer's email, before they spoke with an agent. The call listening sessions gave me a better understanding of the frustrations both customers and agents faced during calls. They also provided a rough estimate of the daily volume of various call issues, which helped me communicate with agents about the common issues they solved and how frequently they encountered them.

Phase 2: Affinity mapping

To better understand the volume of call reasons, I conducted affinity-mapping exercises with the care operations team and care agents. During interviews, I asked participants to list all the call reasons they worked on and group them using post-it notes. One way they sorted the notes was by affinity, with care specialists creating and naming their own groups. This formed the basis of a two-tier model for streamlining call reason selection.

Another sorting method involved classifying call reasons by issue complexity and whether they could be solved through self-service or required a care agent. This revealed two possible solutions: simple issues that still required agent involvement could be made self-serviceable, while complex issues could be addressed through improved documentation and walk-throughs to reduce the number of support calls.
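The sketch below illustrates that triage logic with a few made-up call reasons; the entries and their classifications are examples, not the actual data from the exercise.

```python
# Sketch: sorting call reasons along the two axes from the mapping exercise.
# The reasons and their labels are illustrative only.
reasons = [
    {"name": "Reset password", "complex": False, "self_serviceable": True},
    {"name": "Update payment method", "complex": False, "self_serviceable": False},
    {"name": "Configure SSO", "complex": True, "self_serviceable": False},
]

for r in reasons:
    if r["complex"]:
        action = "improve documentation and walk-throughs"
    elif not r["self_serviceable"]:
        action = "candidate to make self-serviceable"
    else:
        action = "already self-serviceable; improve discoverability"
    print(f'{r["name"]}: {action}')
```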

Phase 3: Training the algorithm

After collecting enough data, I restructured the system for selecting care call topics into a logical two-step process. Agents could first choose a category (such as billing) and then select a specific topic (such as updating payment method).
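A minimal sketch of what that two-tier structure looks like as data, with illustrative categories and topics rather than the production list:

```python
# Sketch of the two-step call reason taxonomy (illustrative entries only).
CALL_REASONS: dict[str, list[str]] = {
    "Billing": ["Update payment method", "Refund request", "Invoice copy"],
    "Account": ["Password reset", "Change account owner"],
    "Technical": ["Audio issues", "Cannot join meeting"],
}

def pick_reason(category: str, topic: str) -> str:
    """Validate a two-step selection: category first, then a topic within it."""
    if topic not in CALL_REASONS.get(category, []):
        raise ValueError(f"{topic!r} is not a topic under {category!r}")
    return f"{category} / {topic}"

print(pick_reason("Billing", "Update payment method"))
```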

Around this time, the company began a trial of Clarabridge, a customer experience software that uses AI-powered text and speech analytics to track call topics, volume, and sentiment. With this tool, we could analyze audio from customer support calls and have the AI determine the call topic without relying on the agent's selection.

During the trial phase, I assisted the customer care team in building the rules that determined each call's topic. This involved entering words, phrases, or combinations of words that, when detected in the call audio, would tag the call with the corresponding topic.
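The sketch below approximates this kind of keyword rule in plain Python. Clarabridge has its own rule syntax, so this is only a conceptual stand-in, and the topics and phrases shown are examples.

```python
# Sketch of keyword-based topic tagging, approximating the kind of rules we
# defined in Clarabridge (not Clarabridge's actual rule syntax).
TOPIC_RULES: dict[str, list[str]] = {
    "Billing / Update payment method": ["credit card", "payment method", "update my card"],
    "Account / Password reset": ["reset my password", "forgot my password", "locked out"],
}

def tag_call(transcript: str) -> list[str]:
    """Return every topic whose keyword list matches the call transcript."""
    text = transcript.lower()
    return [topic for topic, phrases in TOPIC_RULES.items()
            if any(phrase in text for phrase in phrases)]

print(tag_call("Hi, I'm locked out and need to reset my password."))
```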

To further improve the call reasons model, I conducted additional call listening sessions and compared the results to the call topics identified by Clarabridge. Through this process, I identified issues that needed to be addressed: the speech-to-text system often produced phonetic variations (for example, transcribing "admin center" as "Edmond center" or "I am in center"), and it failed to recognize certain product and company names. After a few minor adjustments, the system became highly accurate at identifying call reasons.
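One way to fold those findings back into the rules is to expand each phrase with its known mistranscriptions, roughly as sketched here. The variant list uses the examples above, and the helper function is hypothetical, not part of the actual tooling.

```python
# Sketch: folding known speech-to-text mistranscriptions back into the rules.
# The variants come from the call listening comparisons described above.
PHONETIC_VARIANTS = {
    "admin center": ["edmond center", "i am in center"],
}

def expand_phrases(phrases: list[str]) -> list[str]:
    """Add known mistranscriptions so matching survives speech-to-text errors."""
    expanded = list(phrases)
    for phrase in phrases:
        for canonical, variants in PHONETIC_VARIANTS.items():
            if canonical in phrase:
                expanded += [phrase.replace(canonical, v) for v in variants]
    return expanded

print(expand_phrases(["open the admin center"]))
```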

Results

By the end of the project, the company had an accurate method for measuring the volume of different support call topics. This allowed product teams to assess which issues to prioritize and listen to individual calls on those topics for more context. It also helped the customer care team address agent training and identify areas where the self-service support site lacked information for customers to solve their issues before contacting support.

Subsequently, the company became a paying customer of Clarabridge thanks to the successful implementation of call topic analysis, and it expanded use of the tool across the company to integrate feedback data from other sources such as NPS surveys.