Google I/O 2025 was dedicated to AI.
At its annual developer conference, Google announced updates that put more AI into Search, Gmail, and Chrome. Its AI models were updated to be better at making images, taking actions, and writing code.
Google also previewed some big swings for the future: plans to revamp video calls, build a more aware and conversational assistant, and partner with traditional glasses companies on smart glasses.
And while there wasn’t a big Android presence in the main keynote, Google had plenty to announce about its OS last week, including a redesign and updates to its device tracking hub.
Read below for all of the news and updates from Google I/O 2025.
Android Auto will get Spotify Jam and support for video apps and web browsers
Spotify gets new templates for Android Auto. Screenshot: YouTube
Android Auto is getting more than just Google’s Gemini assistant after the Google I/O developer conference. The company has also announced or otherwise shown off a slew of changes coming to the infotainment operating system, including an updated Spotify app, a light mode, and the introduction of web browsers and video apps.
Let’s start with Spotify. Google revealed in a video last week that the Spotify app for Android Auto is getting an overhaul through new media app templates the company is making available to developers. One addition coming to Android Auto is Spotify Jam, a feature that lets users share control of an audio source from their individual devices.
Google’s Veo 3 AI video generator is a slop monger’s dream
These boots were not made for walking, but you can use AI to make them do it anyway. Image: Allison Johnson / Google Veo
Even at first glance, there’s something off about the body on the street. The white sheet it’s under is a little too clean, and the officers’ movements are totally devoid of purpose. “We need to clear the street,” one of them says with a firm hand gesture, though her lips don’t move. It’s AI, alright. But here’s the kicker: my prompt didn’t include any dialogue.
Veo 3, Google’s new AI video generation model, added that line all on its own. Over the past 24 hours I’ve created a dozen clips depicting news reports, disasters, and goofy cartoon cats with convincing audio — some of which the model invented all on its own. It’s more than a little creepy and way more sophisticated than I had imagined. And while I don’t think it’s going to propel us to a misinformation doomsday just yet, Veo 3 strikes me as an absolute AI slop machine.
I/O versus io: Google and OpenAI can’t stop messing with each other
The leaders of OpenAI and Google have been living rent-free in each other’s heads since ChatGPT caught the world by storm. Heading into this week’s I/O, Googlers were on edge about whether Sam Altman would try to upstage their show like last year, when OpenAI held an event the day before to showcase ChatGPT’s advanced voice mode.
This time, OpenAI dropped its bombshell the day after.
Google I/O revealed more updates for Wallet, Wear OS, Google Play, and more
The Google I/O keynote may have been all about AI, but there were a handful of other meaningful updates that didn’t make it to the main stage. In addition to updates coming to Google Wallet, the company’s developer sessions also revealed handy features that will roll out to smartwatches, the Google Play Store, and Google TV.
Here are some of the updates Google didn’t highlight during the keynote.
Google’s AI product names are confusing as hell
I would like to buy a vowel, please. Photo: Allison Johnson / The Verge
Google executives took the stage this week at I/O to unveil their latest AI technology: Deep Think. Or was it Deep Search? Then there’s the new subscription plan, Google AI Pro, which used to be Gemini Advanced, plus the new AI Ultra plan. Then there’s Gemini in Chrome, which is different from AI Mode in search. Project Starline is now Google Beam, there are Gems and Jules, Astra and Aura… you get the idea. The products overlap in confusing ways, the naming conventions are diabolical, and I’m begging Google to return some semblance of sanity to its product line before we all lose our DeepMinds.
In Google’s defense, at least we’re not calling any of these things Bard. That was Google’s original name for its AI chatbot during the Great Chatbot Rush of 2023. OpenAI shipped ChatGPT, and apparently Google decided it had to ship something before there was time to consider not naming it Bard. The company corrected that mistake and went with Gemini, folding in Duet along the way. This was all a very good idea.
Android 16 adds AI-powered weather effects that can make it rain on your photos
Sorry, Mario. Image: Wes Davis / The Verge
Google’s latest Android 16 beta adds a bunch of new wallpaper and lock screen options for Pixel phones, including live-updating weather animations and a feature that automatically frames subjects of photos within a variety of bubbly shapes.
When you select an image to use as a wallpaper in the beta, you can tap the sparkly collection of starbursts that has become the de facto symbol for AI features to access the new effects. One of them, “Shape,” washes your screen in a solid color, with a punchout frame in the middle centered on the subject of your photo, be it a person, animal, or object. You can choose from five shapes: a slanted oval, a rounded rectangle, an arched opening, a flowery shape, and a hexagon. It’s a little like the iOS “Depth Effect” feature that partially obscures the clock on your lock screen with a person’s head.
Google teases an Android desktop mode, made with Samsung’s help
Windows in Android’s desktop mode can stretch and move across your screen. Screenshot: The Verge
Google is working with Samsung to bring a desktop mode to Android. During Google I/O’s developer keynote, engineering manager Florina Muntenescu said the company is “building on the foundation” of Samsung’s DeX platform “to bring enhanced windowing capabilities in Android 16,” as spotted earlier by 9to5Google.
Samsung first launched DeX in 2017; the feature automatically adjusts your phone’s interface and apps when it’s connected to a larger display, letting you use your phone like a desktop device.
Google has a big AI advantage: it already knows everything about you
Shahram Izadi, Google’s head of Android XR, talking about the advantage of using Gemini. Photo: Allison Johnson / The Verge
Google’s AI models have a secret ingredient that’s giving the company a leg up on competitors like OpenAI and Anthropic. That ingredient is your data, and Google has only just scratched the surface of how it can use your information to “personalize” Gemini’s responses.
Earlier this year, Google first started letting users opt in to its “Gemini with personalization” feature, which lets the AI model tap into your search history “to provide responses that are uniquely insightful and directly address your needs.” But now, Google is taking things a step further by unlocking access to even more of your information — all in the name of providing you with more personalized, AI-generated responses.
Google’s future is Google googling
Google CEO Sundar Pichai at Google I/O 2025. Photo by Allison Johnson / The Verge
Google I/O was, as predicted, an AI show. But now that the keynote is over, we can see that the company’s vision is to use AI to eventually do a lot of Googling for you.
A lot of that vision rests on AI Mode in Google Search, which Google is starting to roll out to everyone in the US. AI Mode offers a more chatbot-like interface right inside Search, and behind the scenes, Google is doing a lot of work to pull in information instead of making you scroll through a list of blue links.
- Sergey Brin on our world possibly existing within “a stack of simulations.”
The last question during the AI fireside chat at I/O 2025 was an invitation to make headlines, and the Google co-founder did his best, saying... something about reality and our existence. Listen in for yourself.
- Sergey Brin: “Anyone who is a computer scientist should not be retired right now.”
Brin showed up to crash Google DeepMind CEO Demis Hassabis’ fireside chat at I/O 2025, where host Alex Kantrowitz asked him what he does all day.
The answer? “I think I torture people like Demis, who is amazing, by the way,” he said. “…There’s just people who are working on the key Gemini text models, on the pretraining, post-training. Mostly those. I periodically delve into some of the multi-modal work.”
- Shorter and longer NotebookLM AI podcasts.
You can now have NotebookLM make you Audio Overviews that are short (around 5 minutes) and long (around 20 minutes) in addition to the default length of around 10 minutes.
- Sergey Brin deals with a busted AI demo at I/O.
The Google co-founder has said he was “pretty much retired right around the start of the pandemic,” but came back to the company to experience the AI revolution.
This afternoon, we spotted him troubleshooting problems with a demo of Google Flow, which the company announced today as “the only AI filmmaking tool custom-designed for Google’s most advanced models — Veo, Imagen, and Gemini.”
We tried on Google’s prototype AI smart glasses
Here in sunny Mountain View, California, I am sequestered in a teeny-tiny box. Outside, there’s a long line of tech journalists, and we are all here for one thing: to try out Project Moohan and Google’s Android XR smart glasses prototypes. (The Project Mariner booth is maybe 10 feet away and remarkably empty.)
While nothing was going to steal AI’s spotlight at this year’s keynote — 95 mentions! — Android XR has been generating a lot of buzz on the ground. But the demos here were notably shorter, with more guardrails, than what I saw back in December. Probably because, unlike a few months ago, there are cameras everywhere and these are “risky” demos.
- It’s Dieter again!
[Insert Leonardo DiCaprio pointing meme here, which I should really have saved last time!!!]
- Darren Aronofsky is involved in a new film with AI-generated visuals.
The film, called Ancestra, “is directed by Eliza McNitt and blends emotional live-action performances with generative visuals, crafting a deeply personal narrative inspired by the day she was born,” according to a description from the movie’s trailer.
- AI Overviews are going global.
Sure, they tell you to eat rocks and put glue on your pizza, but Google says AI Overviews are a smash, and it’s expanding the feature to a bunch of new countries and languages. They’re now available in more than 200 countries and more than 40 languages, Google says — and they’re starting to appear on more and more queries, too.
Google made an AI coding tool specifically for UI design
These three examples provided by Google show what Stitch is capable of generating. Image: Google
Google is launching a new generative AI tool that helps developers swiftly turn rough UI ideas into functional, app-ready designs. The Gemini 2.5 Pro-powered “Stitch” experiment is available on Google Labs and can turn text prompts and reference images into “complex UI designs and frontend code in minutes,” according to the announcement during Google’s I/O event, sparing developers from manually creating design elements and then programming around them.
Stitch generates a visual interface based on selected themes and natural language descriptions, which are currently supported in English. Developers can provide details they would like to see in the final design, such as color palettes or the user experience. Visual references can also be uploaded to guide what Stitch generates, including wireframes, rough sketches, and screenshots of other UI designs.
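Stitch itself is a Labs experiment rather than a public API, but the general pattern it automates, handing a multimodal Gemini model a text brief plus a reference image and asking for frontend code back, can be sketched in a few lines. The snippet below is a hypothetical illustration using the google-generativeai Python SDK; the model name, wireframe file, and prompt are all assumptions for the sake of the example, not anything Stitch actually exposes.

```python
# Hypothetical sketch: prompting a multimodal Gemini model for UI code.
# This is NOT how Stitch is implemented; the model name, prompt, and
# file path below are illustrative assumptions.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumes you have a Gemini API key

# Model name is an assumption; any multimodal Gemini model would fit the pattern.
model = genai.GenerativeModel("gemini-2.5-pro")

sketch = Image.open("rough_wireframe.png")  # e.g., a hand-drawn wireframe
prompt = (
    "Turn this wireframe into a responsive settings screen. "
    "Use a dark color palette and return a single self-contained "
    "HTML file with inline CSS."
)

# generate_content accepts a mixed list of text and images.
response = model.generate_content([prompt, sketch])
print(response.text)  # the generated HTML/CSS, ready to paste into a prototype
```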
Google is bringing real-time AI camera sharing to Search
At its I/O developer conference today, Google announced two new ways to access its AI-powered “Live” mode, which lets users search for and ask about anything they can point their camera at. The feature will arrive in Google Search as part of its expanded AI Mode and is also coming to the Gemini app on iOS, having been available in Gemini on Android for around a month.
The camera-sharing feature debuted at Google I/O last year as part of the company’s experimental Project Astra, before an official rollout as part of Gemini Live on Android. It allows the company’s AI chatbot to “see” everything in your camera feed, so you can have an ongoing conversation about the world around you — asking for recipe suggestions based on the ingredients in your fridge, for example.
Google found a way to make virtual meetings suck less
Since it was first demoed in 2021, Project Starline has felt like the kind of thing only a company like Google would bother trying to build: a fancy 3D video booth with no near-term commercial prospects that promises to make remote meetings feel like real life.
Now, Starline is nearly ready for prime time. It’s being rebranded as Google Beam and coming to a handful of offices later this year. Google has managed to shrink the technology into something it says will be priced comparably to existing videoconferencing systems. The real bet is that other companies will want to make their own hardware for Beam calls. “The devices aren’t really the point,” says Andrew Nartker, the project’s general manager. “The point is that we can beam things anywhere we need to with the infrastructure that we built.”
Read Article >Google reveals $250 per month ‘AI Ultra’ plan
Google has announced a new AI subscription plan with access to the company’s most advanced models — and it costs $249.99 per month. The new “AI Ultra” plan also offers the highest usage limits across Google’s AI apps, including Gemini, NotebookLM, Whisk, and its new AI video generation tool, Flow.
The AI Ultra plan lets users try Gemini 2.5 Pro’s new enhanced reasoning mode, Deep Think, which is designed for “highly complex” math and coding. It offers early access to Gemini in Chrome, too, allowing subscribers to complete tasks and summarize information directly within their browser with AI.
AI Mode is obviously the future of Google Search
So far, AI Mode is a tab in Search. But it’s also beginning to overtake Search. Image: Google
There’s a new tab in Google Search. You might have seen it recently. It’s called AI Mode, and it brings a Gemini- or ChatGPT-style chatbot right into your web search experience. You can use it to find links, but also to quickly surface information, ask follow-up questions, or ask Google’s AI models to synthesize things in ways you’d never find on a typical webpage.
For now, AI Mode is just an option inside of Google Search. But that might not last. At its I/O developer conference on May 20th, Google announced that it is rolling AI Mode out to all Google users in the US, as well as adding several new features to the platform. In an interview ahead of the conference, the folks in charge of Search at Google made it very clear that if you want to see the future of the internet’s most important search engine, then all you need to do is tab over to AI Mode.
Google’s ‘universal AI assistant’ prototype can now do stuff for you — and you don’t even have to ask
Since its original launch at Google I/O 2024, Project Astra has become a testing ground for Google’s AI assistant ambitions. The multimodal, all-seeing bot is not a consumer product, really, and it won’t soon be available to anyone outside of a small group of testers. What Astra represents instead is a collection of Google’s biggest, wildest, most ambitious dreams about what AI might be able to do for people in the future. Greg Wayne, a research director at Google DeepMind, says he sees Astra as “kind of the concept car of a universal AI assistant.”
Eventually, the stuff that works in Astra ships to Gemini and other apps. So far, that has included some of the team’s work on voice output, memory, and basic computer-use features. As those features go mainstream, the Astra team finds something new to work on.
Google says its new image AI can actually spell
An image of an egg carton created by Imagen 4. Image: Google
Google is launching a new version of its image generation model, called Imagen 4, and the company says that it offers “stunning quality” and “superior typography.”
“Our latest Imagen model combines speed with precision to create stunning images,” Eli Collins, VP of product at Google DeepMind, says in a blog post. “Imagen 4 has remarkable clarity in fine details like intricate fabrics, water droplets, and animal fur, and excels in both photorealistic and abstract styles.” Sample images from Google do show some impressive, realistic detail, like one of a whale jumping out of the water and another of a chameleon.
Google will let you ‘try on’ clothes with AI
Google is taking its virtual try-on feature to a new level. Instead of seeing what a piece of clothing might look like on a wide range of models, it’s now testing a feature that lets you upload a photo of yourself to see how it might look on you.
The new feature is rolling out in Search Labs in the US today. Once you opt into the experiment, you can check it out by selecting the “try it on” button next to pants, shirts, dresses, and skirts that appear in Google’s search results. Google will then ask for a full-length photo, which the company will use to generate an image of you wearing the piece of clothing you’re shopping for. You can save and share the images.