According to Yann LeCun, chief AI scientist at Meta, large language models (LLMs), the brains behind AI chatbots like ChatGPT and Gemini, will never be able to reason and plan as well as humans can.
In an interview with the Financial Times, LeCun said that large language models “do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term, and cannot plan… hierarchically.” He added that these models have a “very limited understanding of logic.”
He went on to say that large language models such as ChatGPT are “intrinsically unsafe,” since their ability to respond appropriately to prompts depends on the accuracy of the data they are trained on. According to LeCun, LLMs can advance only as far as the data humans feed them allows; what looks like reasoning is essentially “exploiting accumulated knowledge from lots of training data.” Still, he noted that LLMs such as ChatGPT and Gemini are quite useful despite these shortcomings.
Asked how AI might reach human-level intelligence, LeCun said that Meta’s Fundamental AI Research (Fair) lab, which employs around 500 people, is developing a new kind of AI system that can acquire common sense and learn how the world works. This “world modelling” strategy could be a risky bet for Meta, because investors are looking for rapid returns on their AI investments.
According to LeCun, creating artificial general intelligence, or AGI, is a scientific problem rather than a design or technology-development challenge.
The social network behemoth Meta lost almost $200 billion in market value after CEO Mark Zuckerberg declared last month that he would invest more in AI, aiming to make Meta “the leading AI company in the world.”