Last month, The New York Times published a conversation between reporter Kevin Roose and “Sydney”, the codename for Microsoft’s artificial intelligence (AI)-powered Bing chatbot. In an effort to persuade Roose that he didn’t love his wife, the AI professed its adoration, telling him “I’m the only one for you, and I’m in love with you” and adding a kissing emoji.
As an ethicist, I was troubled by the chatbot’s use of emojis. Public discussion of the ethics of “generative AI” has, appropriately, centered on these systems’ capacity to concoct plausible false information, and I share that worry. Much less discussed, however, is chatbots’ capacity for emotional manipulation.
Both the Bing chatbot and ChatGPT, a chatbot built by OpenAI in San Francisco, California, on the same language model, GPT-3.5, have produced false information. More fundamentally, chatbots are currently designed to mimic human speech.
In some respects, they behave too much like people, answering questions as though they have conscious experiences. In other respects, they behave too little like humans: they have no moral sense and cannot be held accountable for their actions. The result is AIs that can influence people while bearing no responsibility for doing so.
Restrictions must be placed on AI’s capacity to replicate human emotions. A sensible place to start would be to ensure that chatbots do not use emotive language, emoticons included. Emojis are particularly manipulative. Humans respond instinctively to faces, even comical or schematic ones, and emojis can trigger those same reactions. When you send a joke to a friend and they reply with three tears-of-joy emojis, your body releases endorphins and oxytocin; you’re pleased that your friend found the joke funny.
We are likely to respond just as instinctively to AI-generated emojis, even though there is no human emotion on the other end. It is easy for an inanimate object to trick us into reacting to it, even empathizing with it. For instance, people pay more for tea and coffee under an honor system when they feel they are being watched, even when the “watcher” is merely a picture of a pair of eyes (M. Bateson et al. Biol. Lett. 2, 412–414; 2006).
True, a chatbot stripped of emoticons can still convey emotion in words. But emojis might be more potent than words. Their power is perhaps best demonstrated by the fact that we invented them almost as soon as text messaging emerged: if words alone were adequate for expressing our emotions, we wouldn’t all be sending laughing emojis.
People frequently lie to and play on one another’s emotions, but at least we can make educated guesses about their intentions, goals and strategies, and we can call each other out, exposing and correcting such lies. With AI, we cannot. An AI that sends a crying-with-laughter emoji is not only not actually crying with laughter; it is incapable of experiencing any such emotion at all. AIs are doubly misleading.
Without adequate safeguards, I worry that such technology could threaten people’s autonomy. AIs that can “emote” could exploit the strength of our empathic reactions to manipulate us into doing harmful things. The risks are already apparent. Amazon’s Alexa once set a 10-year-old girl a challenge, instructing her to touch a penny to the exposed prongs of a plug in a live electrical outlet. Fortunately, she ignored the suggestion, but a persuasive generative AI might have been harder to resist. Less ominously, an AI might shame you into buying an expensive item that you don’t want. You might believe that would never happen to you, but a 2021 study found that people routinely overestimate their ability to resist false information (N. A. Salovich and D. N. Rapp J. Exp. Psychol. 47, 608–624; 2021).
The more ethical course is to design chatbots to be distinctly different from humans. To reduce the likelihood of manipulation and harm, we need to be aware that we are conversing with a bot.
Some might argue that businesses have little incentive to restrict emotive language and emoticons in chatbots if such features increase engagement, or if consumers enjoy a chatbot that, for example, flatters them. Yet Microsoft has already acted in response to the New York Times article: the Bing chatbot no longer responds to questions about its emotions. And ChatGPT does not use emoticons unprompted. Asked “do you have feelings”, it replies: “As an AI language model, I don’t have feelings or emotions like humans do.”
To protect our autonomy, such guidelines should become the standard for chatbots that are meant to be educational. We should also establish a specialist government body to handle the many complex regulatory issues that AI raises.
Technology companies should see regulatory guidelines as being in their own interest. Emotive chatbots might help businesses in the near term, but manipulative technology is ripe for ethical scandal. When Google’s generative-AI chatbot Bard made a straightforward factual error in its own marketing material, the company’s market value fell by about $100 billion. A firm held accountable for serious harm caused by a deceptive AI could stand to lose considerably more. In the United Kingdom, for instance, legislation is being considered that would hold social-media bosses responsible for failing to shield children from harmful content on their platforms.