Meta has reportedly been buying AI training chips and building out data centers to create a new, more powerful chatbot that it hopes will be as intelligent as OpenAI’s GPT-4. The company intends to begin training the new large language model as early as 2024, with CEO Mark Zuckerberg pushing for it to once again be free for companies to use to build AI products.
The Journal reports that Meta has upgraded its infrastructure and acquired more Nvidia H100 AI-training chips so that, this time around, it will not need to rely on Microsoft’s Azure cloud platform to train the new chatbot. The company reportedly assembled a team earlier this year to build the model, with the goal of speeding up the development of AI tools that can emulate human expressions.
The company recently announced Code Llama, an AI tool for writing new code and debugging human-written code. The large language model (LLM) can generate and discuss code from text prompts.
According to Meta’s announcement, Code Llama is state-of-the-art among publicly available LLMs on coding tasks, and it could make development workflows faster and more efficient while lowering the barrier to entry for people who are learning to code.
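Because the Code Llama weights are publicly available, prompting the model yourself is straightforward. The snippet below is a minimal sketch using the Hugging Face transformers library; the codellama/CodeLlama-7b-hf checkpoint, the prompt, and the decoding settings are illustrative choices, and running it requires downloading the model weights and accepting Meta’s license on the Hugging Face Hub.

```python
# Minimal sketch: prompting a publicly released Code Llama checkpoint
# with Hugging Face transformers. Checkpoint name and prompt are
# illustrative; the base (non-instruct) variant completes code directly.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A partial function definition; the model is asked to continue it.
prompt = "# Return the n-th Fibonacci number\ndef fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the completion deterministic for this example.
output_ids = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```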
Meta has also reportedly put a reinforcement learning framework in place so the model can continue to learn and adapt from interactions, with the aim of deepening its understanding of language and users and ultimately producing more accurate, context-aware responses.
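Meta has not published details of that framework, so purely as a rough illustration, the toy sketch below shows the general shape of reinforcement-learning-style fine-tuning for a language model via a single REINFORCE-style update: sample a response, score it with a reward, and nudge the model toward higher-reward outputs. The gpt2 checkpoint, the prompt, and the toy_reward function are placeholder assumptions; a production system would instead use a learned reward model trained on human preference data.

```python
# Toy REINFORCE-style sketch of reinforcement-learning fine-tuning for a
# causal language model. Illustrative only; not Meta's (unpublished) pipeline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small public model used purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def toy_reward(text: str) -> float:
    # Placeholder reward: a real system would use a learned reward model.
    return 1.0 if "thank" in text.lower() else -0.1

prompt = "User: How do I reset my password?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[1]

# Sample a response from the current policy (the model itself).
with torch.no_grad():
    generated = model.generate(
        **inputs, do_sample=True, max_new_tokens=30,
        pad_token_id=tokenizer.eos_token_id,
    )
response_text = tokenizer.decode(generated[0, prompt_len:], skip_special_tokens=True)
reward = toy_reward(response_text)

# REINFORCE update: scale the log-likelihood of the sampled response tokens
# by the reward, so higher-reward responses become more likely.
optimizer.zero_grad()
logits = model(generated).logits[:, :-1, :]
log_probs = torch.log_softmax(logits, dim=-1)
token_log_probs = log_probs.gather(-1, generated[:, 1:].unsqueeze(-1)).squeeze(-1)
response_log_prob = token_log_probs[:, prompt_len - 1:].sum()
loss = -reward * response_log_prob
loss.backward()
optimizer.step()
```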
This goal looks like a natural next step given the generative AI features Meta has reportedly been developing. Unannounced AI “personas” are rumored to debut in the company’s products later this month, and leaks in June indicated that an Instagram chatbot with 30 distinct personalities was still being tested. Together, these moves point to a broader push into AI-driven features and user experiences.
Meta has reportedly dealt with considerable churn among its AI researchers this year, in part because computing resources were split across multiple LLM projects. The generative AI field is also fiercely competitive: while OpenAI said in April that it was not training a GPT-5 and “won’t for some time,” Apple is reportedly spending millions of dollars a day on its own “Ajax” AI model, which it is said to believe is more capable than even GPT-4.
Microsoft and Google are both expanding the use of AI in their productivity tools, Google plans to bring generative AI to Google Assistant, and Amazon is pursuing generative AI work that could result in an Alexa chatbot.