Google unveils 'Tensor' SoC for on-device machine learning in Pixel 6

Daniel Sims

The big picture: When Google unveiled its new Pixel 6 phones today, it spent a good portion of the presentation on its custom Tensor chip. The company highlighted how the new system on a chip (SoC) uses machine learning to enhance many of the phone's features.

Both Pixel 6 phones incorporate Google's new Tensor processor. The SoC sits at the core of the company's on-device machine learning and enhances features like Google Assistant, Google Translate, Photos, and even mundane phone calls.

When texting on the Pixel 6, Tensor can make speech recognition more accurate. Google showed off how it can insert words into a transcription when you want to amend what you just said, match spoken names against your contact list, and add punctuation automatically. Instead of making corrections based on key proximity, speech recognition on the Pixel 6 will make corrections based on phonetics.
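
Google hasn't published how this works under the hood, but as a loose illustration of correcting by phonetic similarity rather than keyboard proximity, here is a toy sketch using a Soundex-style encoding. The encoding and the contact list below are our own assumptions, not Google's algorithm:

```python
# Toy sketch (not Google's actual method): pick a correction by
# phonetic code instead of keyboard distance.

def soundex(word: str) -> str:
    """Encode a word into a simplified Soundex code (letter + 3 digits)."""
    codes = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
             "l": "4", "mn": "5", "r": "6"}
    word = word.lower()
    encoded = word[0].upper()
    prev = ""
    for ch in word[1:]:
        digit = next((d for letters, d in codes.items() if ch in letters), "")
        if digit and digit != prev:
            encoded += digit
        prev = digit
    return (encoded + "000")[:4]

def phonetic_correct(heard: str, vocabulary: list[str]) -> str:
    """Return the vocabulary word whose phonetic code matches what was heard."""
    target = soundex(heard)
    matches = [w for w in vocabulary if soundex(w) == target]
    return matches[0] if matches else heard

contacts = ["Catherine", "Karl", "Dmitri"]
print(phonetic_correct("Catheryn", contacts))  # -> "Catherine"
```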

Pixel 6 and Tensor also introduce a "Live Translate" feature for Google Translate, which can translate conversations between people speaking different languages in real time. It can do this directly inside chat apps like Messages and WhatsApp. It can also apply speech recognition and translation to videos.
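
Google hasn't detailed the pipeline, but conceptually Live Translate chains on-device speech recognition into on-device translation. A minimal sketch, with stand-in functions and a made-up phrasebook in place of the real models:

```python
# Conceptual sketch of the Live Translate pipeline: recognize speech,
# translate the text, hand the result to the chat UI. transcribe() and
# translate() are stand-ins for the on-device models; the tiny
# phrasebook is purely illustrative.

PHRASEBOOK = {("es", "en"): {"hola": "hello", "gracias": "thank you"}}

def transcribe(audio_frames: bytes, lang: str) -> str:
    """Stand-in for the on-device speech recognizer."""
    return "hola"  # pretend the model heard this

def translate(text: str, src: str, dst: str) -> str:
    """Stand-in for the on-device translation model."""
    table = PHRASEBOOK.get((src, dst), {})
    return " ".join(table.get(w, w) for w in text.lower().split())

def live_translate(audio_frames: bytes, src: str, dst: str) -> str:
    """Chain recognition and translation, as a chat overlay might."""
    return translate(transcribe(audio_frames, src), src, dst)

print(live_translate(b"...", "es", "en"))  # -> "hello"
```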

The presentation also covered how Tensor helps the Pixel 6 make easy changes to photos. Google's Magic Eraser feature can cleanly remove what Google calls "distractions" from photos, like extra people and objects. Users can also edit images right on the phone to remove, change, or add motion blur, giving them control over the sense of motion a photo conveys. The phone can also read and translate text within photos.
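
Magic Eraser relies on Google's own ML models, but the general mask-then-fill workflow can be illustrated with classical inpainting, here via OpenCV's cv2.inpaint on a synthetic image:

```python
# Rough sketch of the "erase a distraction" workflow using classical
# inpainting (cv2.inpaint). This is not Google's Magic Eraser model,
# just the same mask-then-fill idea on a synthetic image.
import numpy as np
import cv2

# Synthetic "photo": a gray scene with a dark square as the distraction.
photo = np.full((200, 200, 3), 180, dtype=np.uint8)
photo[80:120, 80:120] = (30, 30, 30)

# Mask marking the pixels to remove (normally drawn by the user or a model).
mask = np.zeros((200, 200), dtype=np.uint8)
mask[80:120, 80:120] = 255

# Fill the masked region from the surrounding pixels.
cleaned = cv2.inpaint(photo, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("cleaned.png", cleaned)
```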

According to Google, the Pixel 6 phones will also make calling businesses easier. Google's Phone app will show projected wait times at different times of the day when you call a company that puts you on hold. Additionally, when you reach an automated answering service, the Pixel 6's speech recognition can transcribe the menu options on screen, letting you tap the displayed prompts instead of pressing a number key.
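
Google hasn't said how the wait-time estimates are computed, but a plausible minimal sketch is to average historical hold durations by hour of day. The sample data below is made up:

```python
# Hedged sketch: estimate expected hold time per hour from past calls.
# The history here is invented; Google hasn't published its method.
from collections import defaultdict

# (hour_of_day, hold_minutes) samples for one business.
history = [(9, 12), (9, 15), (12, 4), (12, 6), (17, 22), (17, 18)]

totals = defaultdict(lambda: [0, 0])  # hour -> [sum_minutes, call_count]
for hour, minutes in history:
    totals[hour][0] += minutes
    totals[hour][1] += 1

for hour in sorted(totals):
    total, count = totals[hour]
    print(f"{hour:02d}:00 expected hold ~ {total / count:.0f} min")
```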


 
I don't know who would care about this, because machine learning on mobile phones is a gimmick. ML training can only be done on a powerful discrete video card, with a lot of data and a lot of computation, to build a model that is then shipped integrated into mobile apps. Nobody in their right mind will be training a model on a mobile. It's just nonsense.
 
Ye of little faith - I'm sure it comes with a tuned SoC; it just needs to be fine-tuned.
In wetware we have a humongous number of synapses/connections. Even so, read up on how some insects have so few, yet can walk, fly, and process images - neural processors are amazing in their efficiency.

Continuing my rant against Apple: Siri loses every time. When someone goes on about how good their Apple phone is, I get them to test Siri - Google wins every time. Siri is good for targeted questions ("Am I pretty, Siri?" or some such BS) because those have programmed answers.

I'm with Google on this - I'm sure it can also communicate with The Hive if given permission.
Plus with shrinking parts and more power efficiency - blah blah - my toothbrush has more power than a 1970 Cray computer, blah blah (last part made up, so most definitely not true).
 
What if you have thousands of smaller chips, all working on improving the algorithms, alongside mega servers in the cloud that process big data?
 
So how powerful is this new chip from Google? Is one of these new Pixel phones faster or slower than its Qualcomm and Apple competition?

Right now all we have is marketing speak and it’s boring.
 
That's not true - it's just like a video decoder. Custom silicon designed for a task is going to do it better than general-purpose hardware when it comes to power efficiency. Light AI workloads, like those used by camera apps, are a prime example of what belongs in hardware.

Heavy ML is best done in the cloud. Period. For most people it isn't economical to run locally. For dev purposes, sure, a local setup is great, but your single video card isn't going to do complex AI at scale.
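
For what it's worth, the split being described here - train on big GPUs, then ship the exported model and run only inference on the phone - looks roughly like this with TensorFlow Lite. A rough sketch; "model.tflite" is a placeholder, not a real artifact:

```python
# Minimal sketch of on-device inference: the model was trained offline,
# exported to .tflite, and only executed here. "model.tflite" is a
# placeholder path for illustration.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed one input tensor of the shape the model expects and read the result.
sample = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], sample)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```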
 