Coaching Sales Agents? Use AI And Human Coaches
Researchers from Temple University, Sichuan University, and Fudan University published a new paper in the Journal of Marketing that examines the growing use of AI to coach sales agents and identifies caveats that can inhibit the effective use of this technology.
The study, forthcoming in the Journal of Marketing, is titled “Artificial Intelligence (AI) Coaches for Sales Agents: Caveats and Solutions” and is authored by Xueming Luo, Shaojun Qin, Zheng Fang, and Zhe Qu.
Many companies now turn to artificial intelligence (AI) to provide sales agents with coaching services that were originally offered by human managers. AI coaches are software solutions that leverage deep learning algorithms and cognitive speech analytics to analyze sales agents’ conversations with customers and provide feedback to improve their job skills. Due to their high computational power, scalability, and cost efficiency, AI coaches are more capable than human managers of generating data-driven training feedback. MetLife, an insurance giant, adopted an AI coach named Cogito to offer training feedback to its call center frontline employees and improve their customer service skills. Similarly, Zoom uses its AI coach, Chorus, to offer on-the-job training to its sales force.
Precisely because of the big data analytics power of AI coaches, one concern is that the feedback they generate may be too comprehensive for agents to assimilate and learn from, especially bottom-ranked agents. Further, despite their superior “hard” data computation skills, AI coaches lack the “soft” interpersonal skills to communicate feedback to agents, which is a key advantage of human managers. This lack of soft skills may make agents averse to receiving feedback from AI coaches, hampering their learning and performance improvement. Indeed, the design of AI coaches often focuses on information generation and less on how agents, who differ in learning abilities, actually learn. Therefore, it would be naïve to expect a simple, linear impact of AI coaches, relative to human managers, across heterogeneous sales agents.
With this background, the study addresses several research questions:
Which types of sales agents (bottom-, middle-, or top-ranked) benefit the most or the least from AI vis-à-vis human coaches? Is the incremental impact of AI coaches on agent performance heterogeneous in a non-linear manner?
What is the underlying mechanism? Does learning from the training feedback account for the impact of AI coaches?
Can an assemblage of AI and human coach qualities circumvent caveats and improve the sales performance of distinct types of agents?
These questions are answered through a series of randomized field experiments with two fintech companies. In the first experiment, a total of 429 agents were randomly assigned to undergo on-the-job sales training with either an AI or a human coach. Results show that the incremental impact of the AI coach over the human coach is heterogeneous in an inverted-U shape: middle-ranked agents improve the most, while both bottom- and top-ranked agents show limited incremental gains. The results suggest this pattern is driven by a learning-based mechanism. Bottom-ranked agents encounter the most severe information overload with the AI coach. By contrast, top-ranked agents display the strongest AI aversion, which obstructs their incremental learning and performance.
The slim improvement among bottom-ranked agents is an obstacle to AI coach adoption because these agents have the largest room and the most acute need to sharpen their job skills. The researchers therefore redesigned the AI coach by restricting the amount of feedback provided to bottom-ranked agents. With a separate sample of 100 bottom-ranked agents, the second experiment confirmed a substantial improvement in agent performance with the restricted AI coach. A third experiment tackled the limitations of either AI or human coaches alone by examining an AI-human coach assemblage, wherein human managers communicate the feedback generated by the AI coach to the agents. A new sample of 451 bottom- and top-ranked agents was randomly assigned to the AI coach, human coach, or AI-human coach assemblage condition. The results show that both bottom- and top-ranked agents in the AI-human coach assemblage condition achieve higher performance than their counterparts in the AI-coach-alone or human-coach-alone conditions. In addition, bottom-ranked agents gain greater performance improvement than top-ranked agents from the hybrid of AI and human coaching. Thus, an assemblage that harnesses the soft communication skills of human managers and the hard data analytics power of AI coaches can effectively solve the problems faced by both bottom- and top-ranked agents.
As Luo explains, “Managerially speaking, our research empowers companies to tackle the challenges they may encounter when investing in AI coaches to train distinct types of agents. We show that instead of simply applying an AI coach to the entire workforce, managers ought to prudently design it for targeted agents.” Qin adds, “Companies should be aware that AI and human coaches are not dichotomous choices. Instead, an assemblage between AI and human coaches engenders higher workforce productivity, thus allowing companies to reap substantially more value from their AI investments.”