The generative Artificial Intelligence (AI) chatbot ChatGPT was launched at the end of 2022 and hasn’t left the spotlight since. Journalists have been covering it endlessly and marvelling at the incredible ability of this bot to answer almost any question with authority.
Chatbots have been used in customer service for a long time, but most customers groan when presented with a bot as a communication option. They have often been poorly implemented and limited in what they can do. ChatGPT shows that natural conversation with a bot really is possible, even though most of us still struggle to track down a missing package using one.
It has been interesting to see the media coverage of how such a smart chatbot might impact the contact centre. Forbes published some thoughtful analysis arguing that the contact centre and human advisers are still essential, but that their role will change and evolve. The Guardian claimed that contact centres are now toast and that robots can entirely replace advisers.
It’s easy to see why The Guardian was so amazed by ChatGPT. It is truly impressive and really does return very intelligent answers. Ask it to write a song about your cat in the style of Bob Dylan and it will do so, because it has already absorbed the Dylan catalogue during training and can mimic that style in a fraction of a second.
Surely any customer service question could be handled this way?
There are a couple of reasons why not. The first is the empathy created when one human connects with another. That’s very hard to replace with a bot, no matter how smart it is. Empathy is why an angry customer can call customer service yet end the call saying what a fantastic company this is, because the adviser listened, connected, and resolved the problem quickly.
The second is how the AI is trained. ChatGPT is pre-trained; it does not learn on the job. There is a fixed body of information that it has studied. This could change for specific deployments, such as a large retailer creating its own version of ChatGPT with learning capabilities built in, but if you give customers the ability to train the AI, things can go wrong very quickly, as Microsoft found out with its Tay bot a few years ago.
Making sure that users cannot train the bot helps to ensure that ChatGPT only learns from information its developers have chosen. However, it does mean that when a problem is genuinely new, the bot will not know what to do. That’s a big problem if all your human advisers have left the building.
I believe the Forbes analysis is a more accurate view of the situation. Think about the modern customer journey. What happens when a customer has a problem with a product?
1. Search the Internet or social media for help
2. Contact the automated help system
3. Call or message for help from a live adviser
Every support journey is different, but this one is typical. The first response for most people is to ask Google or Alexa for help, or to put out a request on their favourite social network. If they still don’t have an answer, they contact the brand, which usually means the automated chatbot designed to answer common questions. If that fails, they call.
This means that by the time a customer reaches a live adviser, they have already tried several channels without finding an answer to their problem. In effect, the adviser needs to be better than Google.
Replacing step 2 with a much better chatbot modelled on a generative AI such as ChatGPT will immediately improve the process. Brands could run a generative engine trained on all their product manuals, technical design documents, and FAQs. Such a bot can pull answers directly from this material, which is far more helpful than a bot that simply points customers to a page in the FAQ.
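To make that concrete, here is a minimal sketch in Python of the retrieve-then-generate pattern such a bot might use: index the brand’s manuals and FAQ entries, find the passages most relevant to the customer’s question, and hand those passages to a generative model as context. The `DOCUMENTS` snippets and the `generate_answer` stub are hypothetical placeholders, not any vendor’s API; a real deployment would send the assembled prompt to an actual generative AI service at that point.

```python
from collections import Counter
import math
import re

# Hypothetical knowledge base: short snippets from product manuals and FAQ entries.
DOCUMENTS = {
    "manual-router-setup": "Connect the router to power, wait for the status light "
                           "to turn green, then join the network printed on the base.",
    "faq-returns": "Items can be returned within 30 days with proof of purchase. "
                   "Refunds are issued to the original payment method.",
    "manual-firmware": "To update firmware, open the admin page, choose Settings, "
                       "then Update, and do not power off during the install.",
}

# Very small stopword list so common words do not dominate the relevance score.
STOPWORDS = {"the", "a", "an", "is", "for", "to", "of", "and", "do", "i", "what", "how"}

def tokenize(text: str) -> list[str]:
    return [t for t in re.findall(r"[a-z']+", text.lower()) if t not in STOPWORDS]

def score(query: str, document: str) -> float:
    # Crude relevance score: count query terms that appear in the document,
    # dampened by log term frequency. A production system would use embeddings.
    query_terms = Counter(tokenize(query))
    doc_terms = Counter(tokenize(document))
    return sum(count * math.log1p(doc_terms[term])
               for term, count in query_terms.items() if term in doc_terms)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    # Rank all documents and keep the best non-zero matches.
    ranked = sorted(DOCUMENTS.values(), key=lambda doc: score(query, doc), reverse=True)
    return [doc for doc in ranked[:top_k] if score(query, doc) > 0]

def generate_answer(query: str, passages: list[str]) -> str:
    # Placeholder for the generative step: in production, this prompt would be
    # sent to a generative AI model, which writes the reply from the context.
    if not passages:
        return "ESCALATE: no relevant documentation found, route to a human adviser."
    context = "\n".join(passages)
    return f"Prompt for the generative model:\nContext:\n{context}\nQuestion: {query}"

question = "What is the refund policy for returned items?"
print(generate_answer(question, retrieve(question)))
```

Note the escalation branch: when nothing relevant is found in the documentation, the question is routed to a human adviser, which is exactly the division of labour described below.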
But, as I mentioned, there will still be problems that are genuinely new and problems that call for an empathetic approach; these will still require the human touch. That changes both the customer journey and the role of the adviser.
In this situation the adviser becomes a much more skilled troubleshooter. The only problems they handle are those where the customer really needs to talk to a human, or entirely novel issues the chatbot has never seen. The majority of simple problems will never reach a human adviser. It also makes the role more interesting: each day is different, and the adviser works more like an investigator than someone simply noting down details from a complaining customer.
These new bots are amazing, but we are still nowhere near robotic empathy. I’m not sure it is even possible, because humans get it wrong sometimes too. However, there is an opportunity to elevate the role of the customer service adviser, and that’s a welcome development for both advisers and customers.