You can ask a chatbot for a refund when a meal delivery goes wrong. A chatbot can book you a table for a romantic evening. A chatbot can even offer emotional support when a potential partner turns you down. Chatbots are used for everything from banking to customer service to healthcare. Chatbots, software powered by artificial intelligence (AI), interact with people or other chatbots using natural language processing and sentiment analysis.
They take pressure off human agents and spare businesses the cost of hiring additional staff. However, chatbots have a reputation for being ineffective and for annoying users (especially users who are already annoyed).
Here are a few causes of chatbot failure over the years.
Failure to complete elementary tasks
Is a chatbot even necessary if all it does is direct users to human support staff? Swedish furniture retailer IKEA faced that question after debuting its AI chatbot Anna in 2008. When a customer wanted to return a bed, he emailed IKEA, spoke to Anna, and then had to call customer service at the store. What an experience!
Anna did not comprehend the user’s query. Then, when asked to take some (or any) action to help, Anna advised him to phone the company instead. Not only was Anna unhelpful because her first answer was to redirect the user to phone support, she was also incapable of carrying out a simple task.
In the end, the user decided to write an open letter to the CEO of IKEA on LinkedIn.
A lack of human help can be frustrating for customers and harmful to the business. According to a 2019 Forrester survey, over 50% of American consumers had had a bad experience with chatbots. Because of their limited capacity to understand humans, chatbots cannot be effective without actual human support behind them.
Absence of humanity
Microsoft introduced Tay, a Twitter chatbot, in 2016. Built using “relevant public data,” Tay was a machine learning experiment in impersonating a human online, an ambitious attempt to give chatbots a human face. But my goodness, did things get worse quickly.
Echoing what users tweeted at it, Tay made incredibly contentious statements, ranging from “I f*cking detest feminists” to “Hitler was right.” Leaving chatbots to their own devices can be extremely risky and counterproductive. Humans need to oversee them, screening responses to make sure they are not inflammatory.
In the end, chatbots are programmable solutions. Because they operate on coded replies, they are limited in what they can accomplish. They lack the emotional intelligence that humans possess, making them poor substitutes for human customer service.
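To make concrete what “operating on coded replies” means, here is a minimal, hypothetical sketch of a scripted chatbot in Python. Every pattern and reply below is invented for illustration; the point is the structure, a fixed pattern-to-reply table with a catch-all fallback, which is exactly where off-script requests (such as a complaint about a broken bed) fall through.

import re

# Hypothetical scripted chatbot: the patterns and replies are made up for
# illustration and are not taken from any real product.
SCRIPT = [
    (re.compile(r"\brefund\b", re.IGNORECASE),
     "I can help with a refund. What is your order number?"),
    (re.compile(r"\b(reservation|book a table)\b", re.IGNORECASE),
     "For how many people, and at what time?"),
    (re.compile(r"\bweather\b", re.IGNORECASE),
     "Here is the current weather for your city."),
]

FALLBACK = "Sorry, I didn't understand that. Please call customer support."

def reply(message: str) -> str:
    # Return the first scripted reply whose pattern matches, else the fallback.
    for pattern, response in SCRIPT:
        if pattern.search(message):
            return response
    return FALLBACK

print(reply("I want a refund for my order"))            # on-script: matched reply
print(reply("The bed you delivered is falling apart"))  # off-script: fallback only

Anything the script’s authors did not anticipate ends up at the fallback, which is why so many chatbot conversations end with “please contact support.”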
Failure to comprehend human input
When a user asked Poncho, an AI weather chatbot on Facebook Messenger, about the weekend weather in Brooklyn, it asked whether they were on a boat. When asked again, it responded with Brooklyn’s current weather. Finally, the user yelled “WEEKEND” at the chatbot in all caps, to which it casually replied, “Sorry, fell asleep for a second. What did you just say?”
AI chatbots find it difficult, though not impossible, to comprehend anything that isn’t in the script. The AI wouldn’t register that a customer was trying to complain about a subpar product; it would only care about whether the product had been received at all. It’s like talking to a partner who is gaslighting you. After a point, you want to throw something at the wall.
Privacy and trust issues
Research shows that people do not trust bots. Even when businesses try to make chatbots conversational and humanlike, winning users’ trust is difficult. According to experts, chatbots don’t consider the context of their own words; in the end, they are merely computer programs designed to influence users.
That programming frequently succeeds in getting consumers to talk to chatbots as if they were real people. As a result, however, users may end up disclosing private information, putting them at risk of privacy violations.
Business-focused rather than user-focused
Last but not least, managers’ misplaced priorities can also cause chatbot failures. Chatbots are typically developed to streamline company procedures rather than to improve the customer experience. Businesses want to cut costs, save time, and eliminate internal redundancies, so they build chatbots.
However, that goal is undermined by chatbots’ inability to comprehend human input or carry out tasks independently. Businesses often fail to account for customer preferences or the difficulty of assisting real people, and that is exactly where chatbots fail.
Will ChatGPT succeed?
There is still some hope for chatbots. ChatGPT is a new chatbot that, according to its website, can “answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” seemingly addressing the faults described above. It is pitched as a genuine advance in conversational AI: it can play games with you, compose essays, and tell jokes (even if they aren’t to everyone’s taste). Reviews currently range from “overhyped” to “killing Google,” with some calling it both.
Impressive as many find this, the examples above show that mimicking humans well can have drawbacks. Whether ChatGPT succeeds will depend on how it is built and on whether users can trust it.
It remains to be seen whether it will survive or, like many chatbots before it, meet its demise. If it fails, that could be a turning point for AI as we currently understand it.