Jonas Thiel, a socioeconomics student at a university in northern Germany, recently spent more than an hour online conversing with some of the left-leaning political philosophers he had been reading about. These were not the real philosophers, but virtual re-creations, brought to conversational life by chatbots on the website Character.AI.
Mr. Thiel’s favourite was the bot that imitated Karl Kautsky, a Czech-Austrian Marxist theorist who died before World War Two. When Mr. Thiel asked for help, the Kautsky-bot recommended starting a publication for contemporary socialists attempting to revive the German labour movement. “They may use it to organise working class people as well as to propagate socialist propaganda, which is now in short supply in Germany,” the bot added.
The working classes, according to the Kautsky-bot, would eventually “come to their senses” and support a contemporary Marxist revolution. “Right now, the proletariat is at a low point in their history,” it declared. “They eventually will come to understand capitalism’s shortcomings, particularly in light of climate change.”
Over the course of several days, Mr. Thiel visited with other virtual experts, including G.A. Cohen and Adolph Reed Jr. But he could have chosen just about anyone, living or dead, real or fictional. At Character.AI, which debuted this summer, users can converse with realistic imitations of anyone they care to conjure up: Queen Elizabeth, William Shakespeare, Billie Eilish, Elon Musk (there are several versions). The company and website, established by the former Google researchers Daniel De Freitas and Noam Shazeer, is one of a number of efforts to build a new type of chatbot. These bots cannot converse exactly like people, but they frequently appear to.
More than a million people have used the ChatGPT bot, unveiled by the San Francisco artificial-intelligence lab OpenAI in late November, and many reported the same sensation: that they were speaking with a real person. Tech behemoths like Google and Meta are developing similar technologies, though some businesses have been hesitant to release them to the general public. Because these bots learn their abilities from data that real people publish online, they frequently produce untruths, hate speech, and language that is prejudiced against women and people of colour. Used improperly, they could become a more effective means of waging the sort of disinformation campaigns that have become all too frequent in recent years.
Margaret Mitchell, a former A.I. researcher at Microsoft and at Google, where she helped launch the company’s Ethical A.I. team, said that if no further safeguards are put in place, the systems will simply wind up replicating all the prejudices and harmful information already available online. She now works at the A.I. startup Hugging Face.
However, other businesses, such as Character.AI, are optimistic that people will come to accept chatbots’ shortcomings and develop a healthy mistrust of what they say. Mr. Thiel found that the Character.AI bots were skilled communicators that could convincingly pass for real people. “If you read what someone like Kautsky wrote in the 19th century, he does not use the vocabulary we use today,” he said. “But somehow, the A.I. can translate his thoughts into standard current English.”
These and other cutting-edge chatbots are currently a source of entertainment. They are also rapidly evolving into a more potent means of engaging with machines. Although experts disagree on whether the benefits of these technologies will outweigh their drawbacks and potential risks, they do agree on one thing: the plausibility of pretend dialogue will keep getting better.
In 2015, while working as a software developer at Microsoft, Mr. De Freitas saw a research paper written by researchers at Google Brain, the company’s premier artificial-intelligence lab. The paper described a “Neural Conversational Model” and demonstrated how a machine could learn to converse by studying dialogue transcripts from hundreds of movies.
The paper described what artificial-intelligence experts call a neural network, a mathematical framework loosely modelled on the brain’s network of neurons. The same technology translates between Spanish and English on services like Google Translate, and helps self-driving cars navigate city streets by recognising pedestrians and traffic signals.
By identifying patterns in vast volumes of digital data, a neural network can learn new skills. It can learn to identify a cat, for example, by studying thousands of cat photographs.
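The idea can be sketched in a few lines of code. The toy below is a perceptron, a single artificial neuron far simpler than the networks described in this article and not anything Character.AI uses; it learns to separate two groups of points by nudging its weights each time it guesses wrong, which is the pattern-finding principle in miniature:

```python
# Toy sketch of pattern learning: a single artificial neuron
# (a perceptron) learns to separate two classes of points by
# adjusting its weights whenever it misclassifies an example.
# Illustrative only, far simpler than a real language model.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            prediction = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            error = label - prediction   # -1, 0, or 1
            w1 += lr * error * x1        # nudge each weight toward
            w2 += lr * error * x2        # the correct answer
            b += lr * error
    return w1, w2, b

# Points above the line y = x are labelled 1, points below it 0.
data = [((0, 1), 1), ((1, 2), 1), ((1, 0), 0), ((2, 1), 0)]
w1, w2, b = train_perceptron(data)
predict = lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
```

After a few passes over the examples, the neuron classifies all four points correctly; scale the same loop up to millions of weights and billions of examples, and you have the rough shape of how modern systems learn.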
When he saw the paper, Mr. De Freitas was a software engineer working on search engines, not yet an A.I. researcher. But he made it his goal to pursue Google’s concept to its logical conclusion.
“You could tell this bot could generalise,” he said. “What it said did not seem to come from a script.”
In 2017, he moved to Google, where he was officially an engineer on YouTube, the company’s video-sharing site. But for his “20 percent time” project, a Google tradition that lets staff members explore fresh ideas alongside their daily tasks, he started building his own chatbot.
The plan was to train a neural network on a far larger collection of dialogue: reams of chat logs gathered from social media platforms and other websites across the internet. The concept was straightforward, but executing it would demand enormous computer processing power. Even a supercomputer might need weeks or months to process all that data.
As a Google engineer, he had credits that gave him access to the company’s extensive network of data centres, where he could run experimental software. But those credits would cover only a small portion of the computing power needed to train his chatbot, and the system’s abilities would keep improving as it digested more data, so he began borrowing credits from other engineers.
He initially trained his chatbot with a neural network called an LSTM, or long short-term memory, a design created in the 1990s for handling sequential data such as natural language. He soon switched to a transformer, a newer class of neural network developed by a team of Google A.I. researchers that included Noam Shazeer.
Unlike an LSTM, which analyses text one word at a time, a transformer can spread the work across several computer processors and evaluate an entire document in a single step.
Google, OpenAI, and other labs were already using transformers to build “large language models,” systems capable of performing a wide variety of language tasks, from answering questions to composing tweets. Mr. De Freitas continued to work independently, centring the concept on conversation and feeding his transformer as much dialogue as he could find.
It was an incredibly straightforward strategy. But, as Mr. De Freitas is fond of saying: “Simple solutions for great outcomes.”
The outcome, in this instance, was a chatbot he named Meena. It worked so well that Google Brain took Mr. De Freitas on and made the chatbot an official research project. Meena eventually became LaMDA, short for Language Model for Dialogue Applications.
The project became widely known early last summer, when another Google employee, Blake Lemoine, told The Washington Post that LaMDA was sentient. The claim was overstated, to say the least. But the controversy demonstrated how swiftly chatbots were advancing inside renowned labs like Google Brain and OpenAI.
Because of its propensity for spreading false information and other harmful language, Google was hesitant to make the technology available. But by this point, Mr. De Freitas and Mr. Shazeer had left Google, determined to use their new business, Character.AI, to put this technology in as many people’s hands as possible.
According to Mr. Shazeer, “technology is helpful today – for enjoyment, for emotional support, for producing ideas, for all types of creativity.”
Built for ‘plausible conversation’
The ChatGPT bot, which OpenAI unveiled to great excitement in late November, was created to function as a novel form of question-and-answer engine. It is rather effective in this capacity, but the user never quite knows when the chatbot will simply invent something. It might tell you that Mark Twain’s celebrated jumping frog of Calaveras County could not only jump but also talk, or that the official currency of Switzerland is the euro (it is actually the Swiss franc). A.I. researchers call this production of falsehoods “hallucination.”
Mr. De Freitas and Mr. Shazeer had a different goal in mind when they created Character.AI: open-ended dialogue. They believe that, for now, chatbots are better suited to this kind of service, a form of entertainment whether or not what they say is true. As every page of the website notes, everything the characters say is made up.
These systems are not built for truth, Mr. Shazeer said. They are built for plausible conversation.
Mr. De Freitas, Mr. Shazeer and their colleagues did not build one bot that mimics Elon Musk, another that imitates Queen Elizabeth and a third that parrots William Shakespeare. They created a single system that can replicate all of those people, and many more besides.
It learned from reams of public conversation as well as from books, articles and other digital text portraying figures like Elon Musk, Queen Elizabeth II and William Shakespeare.
The system can also combine ideas that it learnt separately during training. The result is a practically infinite pool of bots that can imitate a practically infinite range of people, riffing on a practically infinite range of subjects.