Google’s annual developers conference has come and gone, but I still have no idea what was announced.

I mean, I do. I know that Gemini was a big part of the show—the week’s primary focus—and that the plan is to infuse it into every part of Google’s product portfolio, from its mobile operating system to its web apps on the desktop. But that was about it.

There was little about Android 15 and what it would bring to the operating system. We didn’t even get the second beta reveal until the conference’s second day. Google usually comes right out of the gate with that announcement toward the end of the first-day keynote—or at least, that’s what I expected, considering it’s been the norm at the last few developer conferences.


I’m not alone in this feeling; others across blogs and forums share my sentiment. It was a challenging year to attend Google I/O as a user of its existing products. It felt like one of those timeshare presentations, where the company sells you on an idea and then placates you with fun and free stuff afterward, so you don’t think about how much you put down on a property you can only access a few times a year. But I kept thinking about Gemini everywhere I went, and about what it would do to the current user experience. The keynote did little to convince me that this is the future I want.

Put your faith in Gemini AI

Photo: Florence Ion / Gizmodo


I believe that Google’s Gemini is capable of many incredible things. For one, I actively use Circle to Search, so I get it. I’ve seen how it can help get work done, summarize notes, and fetch information without requiring me to swipe through screens. I even tried Project Astra and saw firsthand how this large language model can perceive the world around it and home in on subtle nuances in a person’s face. That will undoubtedly be helpful once it ships and is fully integrated into the operating system.


Or will it? I struggled to figure out why I’d want to create a narrative with AI just for the fun of it, which was one of the options in the Project Astra demonstration. While it’s cool that Gemini can offer contextual responses about physical aspects of your environment, the demonstration never explained when this kind of interaction would actually happen on an Android device.


We know the Who, Where, What, Why, and How behind Gemini’s existence, but we don’t know the When. When do we use Gemini? When will the technology be ready to replace the remnants of the current Google Assistant? The keynote and demonstrations at Google I/O failed to answer these two questions.

Google presented many examples of how developers will benefit from what’s to come. For instance, Project Astra can look at your code and help you improve it. But I don’t code, so that use case didn’t immediately resonate with me. Then Google showed us how Gemini will be able to remember where objects were last placed. That’s genuinely neat, and I could see it benefiting everyday people who feel overwhelmed by everything asked of them. But there was no mention of that. What good is a contextual AI if it’s not shown being used in context?


I’ve been to ten Google I/O developer conferences, and this is the first year I’ve walked away scratching my head instead of looking forward to future software updates. I’m exhausted by Google pushing the Gemini narrative on its users without being explicit about how we’ll have to adapt to stay in its ecosystem.

Perhaps the reason is that Google doesn’t want to scare anyone off. But as a user, the silence is scarier than anything else.
