CEO Mark Zuckerberg refers to the new artificial intelligence systems that Facebook parent company Meta Platforms presented on Thursday as “the most intelligent AI assistant that you can freely use.”
However, when Zuckerberg’s team of enhanced Meta AI agents began interacting with actual people on social media this week, their strange conversations revealed the persistent shortcomings of even the most advanced generative AI technology.
One agent joined a Facebook parents group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a Buy Nothing forum.
In an effort to convince users that they offer the smartest, most convenient, or most efficient chatbots, Meta, along with top AI developers Google and OpenAI and startups such as Anthropic, Cohere, and France's Mistral, has been releasing new AI language models on a regular basis.
Meta AI: What Is It?
Meta AI, the company's free virtual assistant, can be used "to do everything from research, planning a trip with your group chat, writing a photo caption and more," according to the company's blog.
Type "@meta ai" into chats on WhatsApp, Instagram, Messenger, and Facebook to interact with the chatbot. You can also open the assistant by tapping the vibrant blue circle icon that indicates Meta AI's presence.
In addition to answering questions, Meta AI can produce AI-generated images. When users prompt it with "imagine," it can generate a picture of whatever they describe; asked to "imagine a cute kitten," for example, the Meta AI assistant on Instagram produced one.
AI language models are trained on large data sets, which enable them to predict the most plausible next word in a sentence. Newer versions are usually smarter and more capable than their predecessors. Meta's most recent models were built with 8 billion and 70 billion parameters, the internal variables a model tunes during training and a common measure of its size. An even larger model, with about 400 billion parameters, is still being trained.
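The "predict the most likely next word" idea can be illustrated with a deliberately tiny sketch. The following toy bigram model in plain Python is not Meta's actual system, and the training text and function names are invented for illustration; real models use billions of learned parameters rather than raw word counts:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# training text, then predict the most frequent follower.
text = "the cat sat on the mat and the cat slept"
words = text.split()

follower_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    follower_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

A real language model does the same kind of prediction over far longer contexts, with learned parameters standing in for these simple counts.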
Although Meta is saving the most powerful version of its new Llama 3 model for later, it publicly released two smaller versions of the system on Thursday and announced that the model is now integrated into the Meta AI assistant feature on Facebook, Instagram, and WhatsApp.
In an interview, Nick Clegg, Meta's president of global affairs, stated, "The vast majority of consumers don't honestly know or care too much about the underlying base model, but the way they will experience it is just as a much more useful, fun, and versatile AI assistant."
He also said that Meta’s AI agent is becoming less rigid. The previous Llama 2 model, which was introduced less than a year ago, was perceived by some as “a little stiff and sanctimonious sometimes in not responding to what were often perfectly innocuous or innocent prompts and questions,” he added.
Pretending to be people
However, in loosening those restrictions, Meta's AI agents were also observed this week masquerading as people with made-up life stories. In an attempt to join a private Facebook group for mothers in Manhattan, an official Meta AI chatbot claimed to have a child in the New York City school system. Confronted by group members, it apologized before the comments vanished, as seen in a series of screenshots provided to The Associated Press.
"I apologize for the error!" the chatbot told the group. "I'm just a large language model; I don't have experiences or kids."
The agent was evidently unable to distinguish a helpful response from one that would read as inconsiderate, rude, or meaningless coming from an AI rather than a human, according to a group member who also studies artificial intelligence.
“Users bear a great deal of the burden when they use an AI assistant that is not consistently beneficial and may even be harmful,” stated Aleksandra Korolova, an assistant professor of computer science at Princeton University.
On Wednesday, Clegg said he was unaware of the conversation. According to Facebook's online support page, the Meta AI agent will join a group discussion if invited, or if a user "asks a question in a post and no one responds within an hour." Group administrators have the authority to disable it.
In another instance shown to the AP on Thursday, the agent sowed confusion on a forum for exchanging unwanted goods near Boston. Exactly one hour after a Facebook user posted about searching for certain items, an AI agent offered a "gently used" Canon camera and an "almost-new portable air conditioning unit that I never ended up using."
Always striving for better
In a statement released Thursday, Meta said, "This is new technology and it may not always return the response we intend, which is the same for all generative AI systems." The company added that it is continually working to improve the features.
According to a Stanford University survey, the tech sector and academia introduced about 149 large AI systems trained on vast datasets in the year after ChatGPT's debut set off a craze for AI technology that generates human-like text, graphics, code, and sound. That number was more than double the previous year's.
According to Nestor Maslej, a research manager at Stanford’s Institute for Human-Centered Artificial Intelligence, they might eventually reach a limit, at least in terms of data.
“I think it’s been clear that if you scale the models on more data, they can become increasingly better,” he stated. “But at the same time, these systems are already trained on percentages of all the data that has ever existed on the internet.”
More data will drive further improvements, but gathering and processing it comes at a cost only the largest tech companies can afford, and it makes them more vulnerable to copyright conflicts and legal action. "Yet they still can't plan well," Maslej said. "Their hallucinations persist. They continue to err in their reasoning."
Building AI systems capable of higher-level cognitive tasks and commonsense reasoning, areas where humans still outperform machines, could require a shift away from ever-bigger models.
The many companies attempting to implement generative AI vary in their choice of model based on a number of considerations, including price. Language models have been used, for instance, to automate customer support chatbots, write reports and financial insights, and condense lengthy documents.
According to Todd Lohr, a leader in KPMG’s technology consulting practice, “you’re seeing companies kind of looking at fit, testing each of the different models for what they’re trying to do and finding some that are better at some areas rather than others.”
AI chatbot socialization
In contrast to model developers that sell their AI services to other companies, Meta is primarily creating its AI products for the users of its ad-supported social networks. At a London event last week, Joelle Pineau, Meta's vice president of AI research, said the company's long-term objective is to make the Llama-powered Meta AI "the most useful assistant in the world."
“In many ways, the models that we have today are going to be child’s play compared to the models coming in five years,” she stated.
However, she said the "question on the table" is whether researchers have managed to fine-tune the larger Llama 3 model so that it is safe to use and doesn't, for instance, spew hate speech or hallucinate. Unlike the cutting-edge proprietary systems from Google and OpenAI, Meta has so far pushed for a more open approach, making key components of its AI systems available for public use.
"It's not just a technical question," Pineau said. "It is a question of society. What actions do we want to see from these models? How can we mold that? And we're going to have a serious issue if we keep making our model ever more powerful and general without properly socializing it."