
Gemini AI comes to the Pixel 9 and Pixel Buds Pro 2

Google’s Gemini AI model took center stage Tuesday at its Made by Google event, where the tech giant also unveiled its new Pixel 9 line of phones, as well as a smartwatch and earbuds. The executives who took the stage mentioned Gemini 115 times during the 80-minute presentation.

Those mentions included both the chatbot itself and the following Gemini products:

  • Gemini Advanced, a $20/month subscription service that provides access to Google’s latest AI model, Gemini 1.5 Pro
  • Gemini Assistant, the AI assistant on Pixel devices
  • Gemini Live, the conversational interface for Gemini
  • Gemini Nano, an AI model for smartphones

There were also repeated references to the “Gemini era”.

One example:

“We’re fully in the Gemini era, with AI woven into almost everything we do at Google, across our entire technology stack,” Rick Osterloh, senior vice president of platforms and devices at Google, said at an event in Mountain View, Calif. “All to bring you the most helpful AI possible.”


Google executives also touched on the topic of helpful AI, emphasizing that they believe AI will change the way we use our devices. That’s because competitors like ChatGPT creator OpenAI are also trying to get us to talk to chatbots and let AI do more of the heavy lifting in searches and other everyday tasks, like checking dates on our calendars or messaging friends. For Google, more powerful devices mean we can do more with generative AI outside of our laptops and tablets. But as Google’s Dear Sydney ad debacle during the Paris Olympics showed, there’s still a gap between what we’re willing to do with AI and what tech companies think we want AI to do.

While most of the Gemini news was already shown at Google's I/O developer event in May, Tuesday's event brought two new hardware updates worth highlighting:

Faster Processing to Power Gemini on Pixel

Generative AI can produce impressive results, whether it's creating images or drafting emails, essays, and other text, but it requires a lot of power. A recent study found that generating a single image with an AI model uses as much energy as fully charging a phone. That's the kind of power you'd typically find in data centers.

But when the Pixel 8 devices arrived in October, Google introduced the Tensor G3, its first processor designed specifically for AI. That silicon enables generative AI on the device, meaning the processing happens on the phone itself rather than in a distant, expensive data center. Its successor, the Tensor G4, was developed in partnership with Google's AI research lab DeepMind to help Gemini run on Pixel 9 devices and to power everyday activities like recording and streaming video with less drain on the battery.

Google calls the Tensor G4 “our fastest, most powerful chip yet.” That means 20% faster web browsing and 17% faster app launches than the Tensor G3, according to Shenaz Zack, senior director of Pixel product management.

She noted that the TPU in the Tensor G4 can generate a mobile output of 45 tokens per second. Here’s what that means:

TPUs are tensor processing units. They help accelerate generative AI.

Tokens are fragments of words. AI models break text into these smaller parts to process each piece, and from there the overall meaning.

One token corresponds to about four characters of English text, so 45 tokens per second works out to roughly 180 characters per second, or about three short sentences.
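The arithmetic behind that estimate can be sketched as follows. The four-characters-per-token figure is a common rough average for English; the 60-character average sentence length is an assumption for illustration, not a figure from Google:

```python
# Back-of-the-envelope math for the Tensor G4's quoted TPU throughput.
TOKENS_PER_SECOND = 45    # figure quoted by Google for the Tensor G4
CHARS_PER_TOKEN = 4       # rough average for English text
CHARS_PER_SENTENCE = 60   # assumed average length of a short sentence

chars_per_second = TOKENS_PER_SECOND * CHARS_PER_TOKEN
sentences_per_second = chars_per_second / CHARS_PER_SENTENCE

print(chars_per_second, sentences_per_second)  # 180 3.0
```

Change the assumed sentence length and the "three sentences per second" figure shifts accordingly; it's an order-of-magnitude illustration, not a benchmark.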

The Tensor G4 is the first processor to support Gemini Nano with Multimodality, an on-device AI model that helps Pixel 9 better understand user input from text, images, and sounds.

Google has also improved the memory in its Pixel 9 devices—to 12 to 16 gigabytes—so the generative AI works quickly and the phone will be able to keep up with future advances. At least until the next big thing comes along.


Access to Gemini without having to look

Like the Pixel 9 family, the new Pixel Buds Pro 2 earbuds come with a Tensor chip, in this case the Tensor A1 processor, which handles their AI functionality.

You can think of the earbuds as just another audio interface for Gemini, only a screenless one. You can ask for information from your email, along with tips, reminders, and song recommendations, but you can't take pictures or ask questions about what you're looking at.


To talk to Gemini Live while wearing Pixel Buds Pro 2, first say, “Hey Google, let’s talk live.”

There’s one caveat: You’ll first need a Google One AI Premium subscription. This $20-per-month plan gives you access to Google’s latest AI models, as well as Gemini for Google services like Gmail and Docs, along with 2TB of storage.

Google is offering a free 12-month subscription to Google One AI Premium to anyone who purchases a Pixel 9 Pro, 9 Pro XL, or 9 Pro Fold now.

“I thought about asking Gemini different kinds of questions than when I have my phone in front of me. My questions are a lot more open-ended,” Sandeep Waraich, product management lead for Google Wearables, said of using Gemini Live on Pixel Buds Pro 2. “There’s more walking and talking, longer sessions that are a lot more contemplative than not.”

That may be true, but as my CNET colleague David Carnoy pointed out, you'll be asking those questions with what look like Mentos candies in your ears.