Google demos Android XR smart glasses with Gemini AI, visual memory, and multilingual capabilities

Skye Jacobs

Forward-looking: The race to define the future of wearable technology is heating up, with smart glasses emerging as the next major frontier. While Meta's Ray-Ban collaboration has already made waves, tech giants like Apple, Samsung, and Google are rapidly developing their own projects. The latest development comes from Google, which recently gave the public its most tangible look yet at Android XR-powered smart glasses during a live demonstration at the TED2025 conference.

Until now, Google's Android XR glasses had only appeared in carefully curated teaser videos and limited hands-on previews shared with select publications. These early glimpses hinted at the potential of integrating artificial intelligence into everyday eyewear but left lingering questions about real-world performance. That changed when Shahram Izadi, Google's Android XR lead, took the TED stage – joined by Nishtha Bhatia – to demonstrate the prototype glasses in action.

The live demo showcased a range of features that distinguish these glasses from previous smart eyewear attempts. At first glance, the device resembles an ordinary pair of glasses. However, it's packed with advanced technology, including a miniaturized camera, microphones, speakers, and a high-resolution color display embedded directly into the lens.

The glasses are designed to be lightweight and discreet, with support for prescription lenses. They can also connect to a smartphone to leverage its processing power and access a broader range of apps.

Izadi began the demo by using the glasses to display his speaker notes on stage, illustrating a practical, everyday use case. The real highlight, however, was the integration of Google's Gemini AI assistant. In a series of live interactions, Bhatia demonstrated how Gemini could generate a haiku on demand, recall the title of a book glimpsed just moments earlier, and locate a misplaced hotel key card – all through simple voice commands and real-time visual processing.

But the glasses' capabilities extend well beyond these parlor tricks. The demo also featured on-the-fly translation: a sign was translated from English to Farsi, then seamlessly switched to Hindi when Bhatia addressed Gemini in that language – without any manual setting changes.

Other features demonstrated included visual explanations of diagrams, contextual object recognition – such as identifying a music album and offering to play a song – and heads-up navigation with a 3D map overlay projected directly into the wearer's field of view.

Unveiled last December, the Android XR platform – developed in collaboration with Samsung and Qualcomm – is designed as an open, unified operating system for extended reality devices. It brings familiar Google apps into immersive environments: YouTube and Google TV on virtual big screens, Google Photos in 3D, immersive Google Maps, and Chrome with multiple floating windows. Users can interact with apps through hand gestures, voice commands, and visual cues. The platform is also compatible with existing Android apps, ensuring a robust ecosystem from the outset.

Meanwhile, Samsung is preparing to launch its own smart glasses, codenamed Haean, later this year. The Haean glasses are reportedly designed for comfort and subtlety, resembling regular sunglasses and incorporating gesture-based controls via cameras and sensors.

While final specifications are still being finalized, the glasses are expected to feature integrated cameras, a lightweight frame, and possibly Qualcomm's Snapdragon XR2 Plus Gen 2 chip. Additional features under consideration include video recording, music playback, and voice calling.

Put a piece of white tape around the bridge and you'll look like every nerd did back in the '80s LOL.
Who would want to wear this nonsense?
 
So this eyewear will have a camera to take pictures and videos and record audio. Where are all the privacy advocates? But I guess if it has a recording light on the frame it will be OK, and the advocates will probably be using it too. Then again, people take videos and pictures with their phones anyway. I'll be waiting for the first car accident when someone wears these while driving.

 
It’s impressive tech, no doubt, but we’ve seen flashy AR glasses demos before that never made it past the hype phase. Real question is: how long until this is something people actually wear in public without feeling weird?
 
This is impressive tech; unfortunately, you couldn't pick a company less trustworthy or more inappropriate to be your eyes and ears in your day-to-day life than Google or Meta. The whole reason they are doing this is so they can insert themselves even more perniciously into your lives in order to monetize and manipulate you even more than they already do.
 
I really don't need to have advertisements shoved directly into my field of vision all the time, and I don't need the information constantly in my field of view either. That would be too distracting. I can just use my phone when I need to see it. I just don't see these catching on this time either.
 
What we're slowly moving towards is a life of following suggestions. Don't have to think much at all. The same way Google Maps has atrophied most people's sense of direction, we'll move through life doing pretty much what we're told, letting algorithms set our direction. For a lot of folks it'll be fine. For some, a dystopian nightmare.
 
Real-time translation, object recognition, and always-on AI processing through a tiny pair of glasses?
I’m skeptical about battery life and privacy... sounds like either it gets hot, dies fast, or both.
 
Maybe there's someone here who knows:

Where can I get video recording glasses that record in LANDSCAPE rather than portrait mode (like the Meta glasses do)?

I need to be able to film videos while driving cars or piloting airplanes in 1080p or 4K.
 
This is the kind of tech that, once matured, can become a critical everyday tool for the average person. Someone has to be first, and the first few won't be world-changing. But eventually we'll have on-device systems that are fully private by design, sleek, and subtle. Imagine a heads-up display that responds to your intentions. Integrate MIT's AlterEgo tech or something similar to process simple subvocalization, and you end up with a personal AI assistant that is private, simple to use, useful, and tailored to your preferences. At first, advertisements are likely to be limited or absent entirely. Unfortunately, just like everything good, ads will probably sneak in over time, but at least for now we can enjoy the tech while it's new.
 