Microsoft launched Phi-3-mini, a lightweight artificial intelligence model, on Tuesday. The move reflects the company's push to reach a wider customer base with more affordable offerings in a fast-changing AI market.
Phi-3-mini is the first of three planned small language models (SLMs) from Microsoft. The company is betting heavily on these models, which it believes could transform a range of industries and change how people work with technology.
Sébastien Bubeck, Microsoft's vice president of GenAI research, highlighted the model's affordability and its marked cost advantage over comparable models on the market. "Phi-3 is not slightly cheaper, it's dramatically cheaper," Bubeck said, citing savings of up to ten times compared with rivals of similar capability.
SLMs such as Phi-3-mini are designed for simpler tasks, making them practical options for businesses with limited resources. The focus aligns with Microsoft's stated goal of democratizing AI and making it usable for companies of all sizes.
Phi-3-mini is now available on Hugging Face, the machine-learning model platform; Ollama, a framework for running models locally; and the AI model catalogue of Microsoft's Azure cloud platform. To further improve accessibility and performance, the SLM is also integrated with Nvidia Inference Microservices (NIM) and optimized for Nvidia's graphics processing units (GPUs).
Jaspreet Bindra, founder of TechWhisperer UK Limited, said: "Microsoft's Phi-3 Mini is the newest example of a 'small language model.' It is a lightweight AI model, the first of three SLMs Microsoft intends to introduce; Phi-3 Small and Phi-3 Medium will follow. With 3.8 billion parameters and a design streamlined for simpler tasks, Phi-3 Mini is more economical and accessible for enterprises with limited resources. It was trained on a smaller dataset than large language models such as GPT-4, and it is part of Microsoft's broader plan to launch a series of SLMs tailored to simpler jobs, making them well suited to companies with modest resources. This approach promises lower costs, faster models, and more generative AI use cases for consumers and enterprises."
Paramdeep Singh, co-founder of Shorthills AI, said: "Good things come in small packages. Advances in generative AI have largely been driven by ever-larger large language models (LLMs), and GPU and compute availability remains one of the main obstacles to progress. With Phi-3-mini, a small language model (SLM), Microsoft reverses that trend. The model's performance is on par with some models 100 times its size, yet it can run on your phone and does not require powerful computing hardware such as a GPU. The icing on the cake is that it is free and open-source for business and academic use. This could significantly transform the field of generative AI."
Just last week, Microsoft invested $1.5 billion in G42, a UAE-based AI company. It has also formed strategic alliances with startups such as Mistral AI, bringing state-of-the-art AI models to its Azure cloud computing platform.