Like large language models (LLMs), small language models (SLMs) can generate human-like language, but they are trained on smaller datasets and have fewer parameters. As a result, they are generally easier to train and deploy, consume less computational power, cost less to run, and are better suited to specific tasks. The year 2024 saw a slew of lightweight model launches, from Microsoft’s Phi family of SLMs to Google’s Gemma and smaller variants of Meta’s Llama models.