I don’t know if you’ve heard, but Google’s latest products are filled with AI. There’s Magic Editor, a photo editing tool powered by generative AI; there’s Conversation Detection, an audio transparency feature powered by AI; there are improved heart rate algorithms, which, yes, are also powered by AI.

Seven years of OS updates? Sounds like a great way to get more AI features from Google. New photography features? All AI. Tensor processor? Designed for AI, baby! “As always, our focus is on making AI more helpful for everyone, in a way that’s both bold and responsible,” Google’s hardware chief Rick Osterloh said in an introduction that, by my count, included the word “AI” over a dozen times. Over the course of the hour-long launch, Google’s presenters referenced AI over 50 times.

As early as 2019, AI was being used as a buzzword to sell everything from toothbrushes to TVs. But Google’s recent presentations have seen the company aggressively position itself as a leader in the AI space. Critics suggested Google had been caught off guard by the overnight success of OpenAI’s ChatGPT and the speed with which competitor Microsoft had integrated the new technology into its products. But in its eagerness to respond, Google risks emphasizing the AI-ness of it all at the expense of the useful features its customers will actually use.

Osterloh inadvertently illustrated this when he referenced the original 2016 Pixel launch onstage and talked about how focused Google was on AI even then. “Looking around the room here, I see a few people who were at our first Pixel launch seven years ago,” Osterloh said, noting that back then, Google had “explained that Pixel is designed to bring hardware and software together, with AI at the center, to deliver simple, fast and smart experiences.”

Except, when I went back to watch Google’s 20-minute presentation on the original Pixel, I didn’t catch anyone actually saying the phrase “AI” onstage. There was a lengthy Google Assistant voice control demonstration, a discussion of computational photography, and even a proud boast that the phone was “made for mobile virtual reality,” but seemingly zero explicit mentions of artificial intelligence.

I’m not saying that Osterloh was lying when he said that the original Google Pixel was AI-powered, but the comparison illustrates how differently Google is talking about its products and services in 2023 versus 2016. There are moments in that original Pixel launch where the Google of today would surely utter “AI,” like when product manager Brian Rakowski referred to the camera’s “incredible on-device software algorithms.” But Google’s 2016 presentation is less concerned with shaping perceptions of the company’s technical prowess and more focused on what those features mean for potential buyers.

The difference is even more stark when compared with Apple’s presentations, in which the company seems to actively avoid saying the two magic letters. As my colleague James Vincent pointed out earlier this year, Apple still makes reference to technology that many other companies would call AI, but it does it much more sparingly and uses the “sedate and technically accurate” phrase “machine learning”:

Its preference is to stress the functionality of machine learning, highlighting the benefits it offers users like the customer-pleasing company it is. As Tim Cook put it in an interview with Good Morning America today, “We do integrate it into our products [but] people don’t necessarily think about it as AI.”

Google, in contrast, has no such qualms. 

For the most part, I don’t think this is a huge problem. Who cares how the Pixel Watch 2’s heart rate algorithm works so long as it’s accurate? And ultimately, the results of the Pixel 8’s photography pipeline will have to speak for themselves, regardless of how much “AI” was involved along the way.

But there were also moments during this week’s presentation when I found myself asking if we really needed some of the AI-powered features that Google showed off. During a segment on Google’s new generative AI-powered virtual assistant, Google’s Sissie Hsiao showed off how Assistant with Bard could auto-generate a social media post to go up alongside a photo of Baxter the dog.

“Baxter the hilltop king!” Image: Google

“Baxter the hilltop king!” Assistant with Bard drafted in response. “Look who’s on top of the world! #doglover #majestic #hikingdog.”

As a tech demo for generative AI, it makes sense. AI is increasingly good at recognizing and describing images, and one of generative AI’s greatest strengths is writing in a particular style (especially one as laden with clichés as peppy social media image captions).

But ignore the AI part and think of this purely as a smartphone feature, and I think it’s utterly baffling. How on earth have we gotten to a point where it makes sense for a smartphone to draft our personal social media posts for us? What’s the point? If you’re asking a machine to draft an image caption for a photo, then why are you publishing the caption in the first place? What are we doing here?

I have a theory: in the absence of a killer app for generative AI, Google is throwing features at a wall and seeing what sticks. It feels like the search giant has a hammer labeled “generative AI,” and its search for nails is taking the company to weird places. And that’s before we get into the messy implications of building generative AI directly into Google Photos.

All of which raises the question: who is Google trying to impress with all this talk of AI? Obviously, to some extent, having an “AI-powered smartphone” is appealing to potential customers. ChatGPT wasn’t an overnight success for nothing, and clearly, there’s some appetite to see what the AI fuss is all about.

But I don’t think that’s the whole story, not when you say “AI” more than once every 75 seconds, on average, during a smartphone launch, and not when one of your biggest competitors (Apple) avoids saying it entirely.

If anything, it seems to reflect an anxiety within Google about being seen as left behind in the maelstrom of AI hype. When Microsoft announced it was integrating generative AI into Bing, CEO Satya Nadella positioned it as a direct shot across the bow at Google. “I hope that, with our innovation, they will definitely want to come out and show that they can dance,” Nadella said. “And I want people to know that we made them dance.” Since then, Google has enthusiastically tap-danced through every one of its presentations.

None of this is necessarily a problem for Pixel owners or would-be buyers of Google’s hardware. But while Google can call itself an AI company all it likes, people ultimately just want phones filled with useful features. At a certain point, it risks putting the AI cart before the feature horse.
