4 A.I. Themes That Defined 2025 and Are Shaping What Comes Next
In November, ChatGPT turned three, with a global user base rapidly approaching one billion. At this point, A.I. is no longer an esoteric acronym that needs explaining in news stories. It has become a daily utility, woven into how we work, learn, shop and even love. The field is also far more crowded than it was just a few years ago, with competitors emerging at every layer of the stack.
Over the past year, conversation around A.I. has taken on a more complicated tone. Some argue that consumer chatbots are nearing a plateau. Others warn that startup valuations are inflating into a bubble. And, as always, there’s the persistent anxiety that A.I. may one day outgrow human control altogether.
So what comes next? Much of the industry’s energy is now focused on the infrastructure side of A.I. Big Tech companies are racing to solve the hardware bottlenecks that limit today’s systems, while startups experiment with applications far beyond chatbots. At the same time, researchers are beginning to look past language models altogether, toward models that can reason about the physical world.
Below are the key themes Observer has identified over the past year of covering this space. Many of these developments are still unfolding and are likely to shape the field well into 2026 and beyond.
A.I. chips
Even as OpenAI faces growing competition at the model level, its primary chip supplier, Nvidia, remains in a league of its own. Demand for its GPUs continues to outstrip supply, and no rival has yet meaningfully disrupted its dominance. Traditional semiconductor companies such as AMD and Intel are racing to claw back market share, while some of Nvidia’s largest customers are designing their own chips to reduce dependence on a single supplier.
Google’s long-in-the-making Tensor Processing Unit, or TPU, has reportedly found its first major customer, Meta, marking a milestone after years of internal use. Meta, Microsoft and Amazon are also deep into developing in-house chips of their own—Meta’s Artemis, Microsoft’s Maia and Amazon’s Trainium.
World models
To borrow from the philosopher Ludwig Wittgenstein, the limits of our language are the limits of our world. Today’s A.I. systems have grown remarkably fluent in human language—especially English—but language captures only a narrow slice of intelligence. That limitation has prompted some researchers to argue that large language models alone can never reach human-level understanding.
Meta’s longtime chief A.I. scientist, Yann LeCun, has been among the most vocal critics. “We’re never going to get to human-level A.I. by just training on text,” he said during a Harvard talk in September.
That belief is fueling a push toward so-called “world models,” which aim to teach machines how the physical world works—how objects move, how space is structured, and how cause and effect unfold. LeCun is now leaving Meta to build such a system himself. Fei-Fei Li’s startup, World Labs, unveiled its first model in November after nearly two years of development. Google DeepMind has released early versions through its Genie projects, and Nvidia is betting heavily on physical A.I. with its Cosmos models.
Language-specific A.I.
While pioneering researchers look beyond language, linguistic barriers remain one of A.I.’s most practical challenges. More than half of the internet’s content is written in English, skewing training data and limiting performance in other languages.
In response, developers around the world are building models rooted in local cultures and linguistic norms. In Japan, companies such as Sakana AI and NTT are developing LLMs tailored to Japanese language and values. In India, Krutrim is working to support the country’s vast linguistic diversity. France’s Mistral AI has positioned its Le Chat assistant as a European alternative to ChatGPT. Earlier this year, Microsoft also issued a call for proposals to expand training data across European languages.
A.I. wearables
It’s only natural that A.I. has a consumer hardware angle. This year brought a wave of experiments in wearable A.I.—some met with curiosity, others with discomfort.
Friend, a startup selling an A.I. pendant, sparked backlash after a New York City subway campaign framed its product as a substitute for human companionship. In December, Meta acquired Limitless, the maker of a $99 wearable that records and summarizes conversations. Earlier in the year, Amazon bought Bee, which produces a $50 bracelet designed to transcribe daily activity and generate summaries.
Meta is also developing a new line of smart glasses with EssilorLuxottica, the company behind Ray-Ban and Oakley. In July, Mark Zuckerberg went so far as to suggest that people without A.I.-enhanced glasses could eventually face a “significant cognitive disadvantage.” Meanwhile, OpenAI is quietly collaborating with former Apple design chief Jony Ive on a mysterious hardware project of its own. This all suggests the next phase of A.I. may be something we wear, not just something we type into.