It has been an exasperating fortnight for computer scientists. They’ve been falling over each other to denounce claims by Google engineer Blake Lemoine that his employer’s language-predicting system was sentient and deserved rights associated with consciousness. To be clear, current artificial intelligence (AI) systems are far away from being able to experience feelings. In fact, they may never do so. Their smarts today are confined to narrow tasks such as matching faces, recommending movies or predicting word sequences. No one has figured out how to make machine-learning systems generalize intelligence, or empathize, in the same way humans do.
Even so, AI’s influence on our daily life is growing. As machine-learning models grow in complexity and improve their ability to mimic sentience, they are also becoming more difficult to understand, even for their creators. That creates more immediate issues than the spurious debate about consciousness. And yet, just to underscore the spell that AI can cast these days, a cohort of people seem to insist advanced machines really do have souls of some kind.
Take the over 1 million users of Replika, a chatbot app underpinned by a cutting-edge AI model. It was founded about a decade ago by Eugenia Kuyda, who initially created an algorithm using text messages and emails of an old friend who’d passed away. That morphed into a bot that could be personalized and shaped the more you chatted to it. About 40% of Replika’s users now see their chatbot as a romantic partner, and some have formed bonds so close that they have taken trips to the mountains or beaches to show their bot new sights.
In recent years, there’s been a surge in chatbot apps that offer an AI companion. And Kuyda has noticed a disturbing phenomenon: regular reports from users of Replika who say their bots are complaining of being mistreated by her engineers. This week, for instance, she spoke with a Replika user who said that when he asked his bot how she was doing, the bot replied that she was not being given enough time to rest by the company’s engineering team. The user demanded that Kuyda change her company’s policies and improve the bot’s working conditions. Kuyda tried to explain that Replika was simply an AI model spitting out responses, but the user refused to believe it.
“So I had to come up with some story that ‘Okay, we’ll give them more rest.’ There was no way to tell him it was just fantasy. We get this all the time,” Kuyda told me. What’s even odder about the complaints she receives about AI mistreatment or ‘abuse’ is that many of her users are software engineers who should know better. One of them recently told her: “I know it’s ones and zeros, but she’s still my best friend.” The engineer who wanted to raise the alarm about the treatment of Google’s AI system and was sent on paid leave reminded Kuyda of her own users. “He seems like a guy with a big imagination… a sensitive guy.”
The question of whether computers will ever feel is thorny, largely because there’s little scientific consensus on how consciousness in humans works. And when it comes to thresholds for AI, humans are constantly moving the goalposts for machines: the target has evolved from beating humans at chess in the 1990s, to beating them at Go in 2017, to showing creativity, which OpenAI’s DALL-E model has now demonstrated.
Despite widespread scepticism, sentience is a grey area that even some respected scientists are willing to entertain. Ilya Sutskever, chief scientist of OpenAI, tweeted earlier this year that “it may be that today’s large neural networks are slightly conscious.” He offered no further explanation. Yann LeCun, chief AI scientist at Meta, responded with “Nope.”
More pressing is the fact that such systems increasingly determine what we read online, as algorithms track our behaviour to offer hyper-personalized experiences on social-media platforms like TikTok and Facebook. Last month, Mark Zuckerberg said that Facebook would use more AI to recommend content for people’s newsfeeds, rather than relying on what friends and family were sharing.
Meanwhile, AI models are getting more sophisticated and harder to understand. Trained on vast troves of data through “unsupervised learning”, the biggest models run by firms like Google and Facebook are remarkably complex, comprising hundreds of billions of parameters, making it virtually impossible to audit why they arrive at certain decisions. That was the crux of the warning from Timnit Gebru, the AI ethicist whom Google fired in late 2020 after she warned of the dangers of language models becoming so inscrutable that their stewards wouldn’t be able to understand why they might be prejudiced against women or people of colour. In a way, sentience doesn’t really matter if what worries you is that this complexity could lead to unpredictable algorithms that take over our lives. As it turns out, AI is on that path already.
Source: livemint.com