Customer service is in the hands of artificial intelligence: the difficult balance that chatbots must strike
If they are too intelligent, consumers will regard them with suspicion; if they are too inept, consumers won't enjoy the experience either.
More and more companies are putting customer service functions in the hands of artificial intelligence. The trend accelerated during the pandemic, when many consumers stayed home, bought more online, and relied much more heavily on remote customer service, overloading support departments. Faced with that overload, many companies gave more weight to their bots and to other services based on artificial intelligence (AI).
But leaving customer service – whether primarily or in part – to artificial intelligence raises many new factors to take into account. There is the demographic question: will your consumers have enough technological know-how to interact with a chatbot instead of a real person? There is also the question of comfort: are your buyers comfortable talking to a robot?
Here age is not necessarily the decisive issue, since comfort or discomfort can be shaped by many factors. The very nature of the technology puts some consumers off. In addition, companies have to take into account how consumers react to the way those chatbots present themselves.
As The Wall Street Journal has just pointed out, based on a set of studies conducted by researchers at Stanford University, companies face a kind of double-edged sword. They can't make their chatbots seem too smart, because that will scare off consumers.
At the same time, however, they can’t make them too basic either, because if they do, users won’t make the effort to use them. They must find the balance, the middle ground, but that is not always easy.
What the researchers discovered
In two of the studies, participants had to interact with chatbots that were described to them in terms conveying different degrees of 'personality' sophistication. The descriptions ranged from "trained professionals" to children or adolescents with little experience, with other profiles in between.
Afterwards, the study participants had to say whether they would use that agent again. The data made it strikingly clear that users gave higher scores to chatbots that were not described as highly confident but that conveyed a certain warmth. In other words, they preferred a chatbot with the personality of a young person just starting out to one with the personality of an executive who knows everything.
A third study examined which chatbots generated the most rejection. In general, the chances of someone using a chatbot dropped if it was perceived as incompetent.
How, then, should chatbots be designed to work for companies? The researchers believe they should be presented in a friendly way, without overwhelming promises of efficiency. If companies make big promises, consumers will be ruthless when the chatbot gets things wrong, but they will accept the situation if the chatbot somehow acknowledges that it is still learning.