Google Lens can detect over 1 billion objects

Greg S

The big picture: A little more than a year after launch, Google Lens can now recognize more than a billion different items, ranging from everyday objects to far less common ones. Machine learning algorithms have come a long way in a very short time span.

Originally showcased at Google I/O, the image recognition app Google Lens made its debut in October 2017. A little over a year later, the service is capable of differentiating between more than a billion things in photos.

Instead of relying on text queries for search, Google is promoting the use of images, which can often describe what you are looking for far better than words. Estimates of user habits suggest that between 10 and 15 percent of pictures taken on smartphones are of practical items such as receipts and lists.

Google Lens uses a TensorFlow-based machine learning model to match images with relevant labels. Those labels are then linked to entries in Google's Knowledge Graph, a collection of tens of billions of pieces of factual information, and the results, which are generally accurate, come back in the blink of an eye.
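Google has not published the internals of that pipeline, but the basic label-then-lookup idea can be sketched with off-the-shelf pieces. The sketch below is an assumption-laden stand-in, not Lens itself: it uses a stock MobileNetV2 classifier from tf.keras to produce labels and the public Knowledge Graph Search API (kgsearch.googleapis.com) in place of Google's internal integration.

```python
# Illustrative sketch only: Lens's real models and Knowledge Graph plumbing
# are proprietary. This pairs a stock TensorFlow classifier with Google's
# public Knowledge Graph Search API as a rough stand-in.
import numpy as np
import requests
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)


def label_image(path):
    """Return the top predicted labels for an image file."""
    img = tf.keras.utils.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), 0))
    preds = MobileNetV2(weights="imagenet").predict(x)
    # decode_predictions maps class indices to human-readable label strings
    return [label for (_, label, _) in decode_predictions(preds, top=3)[0]]


def knowledge_graph_lookup(label, api_key):
    """Look up factual entities matching a label via the Knowledge Graph Search API."""
    resp = requests.get(
        "https://kgsearch.googleapis.com/v1/entities:search",
        params={"query": label, "key": api_key, "limit": 1},
    )
    return resp.json().get("itemListElement", [])


# Usage (requires a Knowledge Graph Search API key):
# for label in label_image("photo.jpg"):
#     print(label, knowledge_graph_lookup(label, "YOUR_API_KEY"))
```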

For photos containing text, a character recognition system lets any words found be copied or translated. Lens also borrows the spelling correction suggestions from Google Search to determine whether the recognized text contains mistakes, so that correct results are returned more often.
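Again, Google's OCR and spell-correction stack is internal to Lens, but the same recognize-then-correct flow can be approximated with open-source tools. The snippet below assumes the Tesseract engine (via pytesseract) and the pyspellchecker package purely for illustration.

```python
# Rough stand-in for Lens's OCR plus spell-correction step, built from
# open-source parts (Tesseract and pyspellchecker), not Google's own stack.
# Requires the Tesseract binary to be installed on the system.
from PIL import Image
import pytesseract
from spellchecker import SpellChecker


def extract_and_correct(path):
    """OCR an image, then suggest corrections for words that look misspelled."""
    text = pytesseract.image_to_string(Image.open(path))
    spell = SpellChecker()
    words = text.split()
    # Map each unrecognized word to the spell checker's best guess
    corrections = {w: spell.correction(w) for w in spell.unknown(words)}
    return text, corrections


# Usage:
# text, fixes = extract_and_correct("receipt.jpg")
# print(text)
# print(fixes)  # e.g. {'recieved': 'received'}
```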

As the camera becomes a faster way to find the right information, Google is making sure Lens is available to as many people as possible. Both Android and iOS have full Google Lens support, and for Pixel owners, Lens is built into the camera app and does not require a separate download.
