Tech companies are shifting focus from building ever-larger large language models (LLMs) to developing small language models (SLMs) that can match or even outperform them on some tasks.
Meta’s Llama 3 (400 billion parameters), OpenAI’s GPT-3.5 (175 billion parameters), and GPT-4 (an estimated 1.8 trillion parameters) are famously large models, while Microsoft’s Phi-3 family ranges from 3.8 billion to 14 billion parameters, and Apple Intelligence “only” has around 3 billion parameters.
It may seem like a downgrade to have models with far fewer parameters, but the appeal of SLMs is understandable. They consume less energy, can run locally on devices like smartphones and laptops, and are a good fit for smaller businesses and labs that cannot afford expensive hardware setups.
David vs. Goliath
As IEEE Spectrum reports, “The rise of SLMs comes at a time when the performance gap between LLMs is quickly narrowing, and tech companies look to deviate from standard scaling laws and explore other avenues for performance upgrades.”
In a recent round of tests conducted by Microsoft, Phi-3-mini, the tech giant’s smallest model with 3.8 billion parameters, rivaled Mixtral 8x7B and GPT-3.5 in some areas, despite being small enough to fit on a phone. Its success was down to the dataset used for training, which was composed of “heavily filtered publicly available web data and synthetic data.”
While SLMs achieve a level of language understanding and reasoning similar to that of much larger models, their size still limits them on certain tasks: they simply can’t store as much factual knowledge in their weights. One way to address this is to pair the SLM with an online search engine that retrieves facts at query time.
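The idea of offloading factual knowledge to an external search step can be illustrated with a minimal retrieval sketch. Everything here is hypothetical: the tiny document store, the keyword-overlap scoring, and the prompt format are stand-ins for a real search engine and a real SLM, shown only to make the division of labor concrete.

```python
# Toy sketch: facts live in an external store, not in the model's weights.
# A real system would call a search engine and feed the results to the SLM.

DOCS = [
    "Phi-3-mini has 3.8 billion parameters.",
    "Mixtral is a mixture-of-experts model from Mistral AI.",
    "GPT-3.5 has 175 billion parameters.",
]

def search(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved facts so the small model need not memorize them."""
    context = "\n".join(search(query, DOCS))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How many parameters does Phi-3-mini have?"))
```

The point of the sketch is the architecture, not the retrieval quality: the model only has to reason over the retrieved context, so its parameter budget can go toward language ability rather than memorized facts.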
IEEE Spectrum’s Shubham Agarwal compares SLMs with how children learn language and says, “By the time children turn 13, they’re exposed to about 100 million words and are better than chatbots at language, with access to only 0.01 percent of the data.” Although, as Agarwal points out, “No one knows what makes humans so much more efficient,” Alex Warstadt, a computer science researcher at ETH Zurich, suggests “reverse engineering efficient humanlike learning at small scales could lead to huge improvements when scaled up to LLM scales.”