Siri expected to gain these 7 new features when Google Gemini takes over
A couple of days ago, Apple and Google confirmed widespread reports and announced that the upcoming revamp of Siri will be based on Google’s Gemini AI platform. Apple has struggled to build its own AI tech, so this move appears to be a sensible shortcut to “innovative new experiences” for its users.
At the time of the announcement, the firms commented only in general terms on the nature of the partnership, but a new report from The Information, a usually reliable site, has revealed seven new features believed to be coming to Siri as a result of Google’s input, along with a few additional details that may reassure Apple fans worried about the Googlification of their products.
Basing its predictions on testimony from a “person who has been involved in the project” and a (separate, by implication) “person familiar with the partnership talks,” The Information this week posted a detailed examination (subscription required) of how the arrangement will work. Broadly, the report stresses continuity: Siri, and Apple product interfaces in general, won’t simply look and behave like Google Gemini. Apple will be able to fine-tune Gemini to work the way it wants, or ask Google to make tweaks. Current prototypes don’t even feature any Google branding, although it’s unclear whether Google will be happy for that to remain the case once the project rolls out to the public.
Similarly, sources are optimistic on the privacy front. “To maintain Apple’s privacy pledge,” they explain, “the Gemini-based AI will run directly on Apple devices or its private cloud system… rather than running on Google’s servers.”
So far, so promising. But the key is what Gemini can offer Apple that Siri can’t already achieve. The following new features and improvements are all on the way, according to The Information’s sources:
- Answer “factual questions” more accurately and conversationally, citing sources
- Tell stories
- Provide thorough and conversational emotional support, “such as when a customer tells the voice assistant it is feeling lonely or disheartened”
- Agentic tasks such as booking travel
- Other types of tasks “such as creating a Notes document with a cooking recipe or information about the top causes of drug addiction”
- Remember past conversations and use them as context to understand new commands more accurately
- Proactive suggestions, such as leaving home early to avoid traffic
Not all of these features will land at the same time, The Information indicates. Some are expected to launch in the spring, likely with iOS 26.4, while others (specifically the last two items on the list above) won’t be announced until WWDC in June.
Given how long we’ve already waited for Siri 2.0 to launch, a timeframe of between two and five months before we get a batch of new features is likely to be better than most Apple fans expected. Watch this space for more updates as the release approaches.