At a small press event in San Francisco today, Google dropped a mention of a big new feature on the way: Google Lens support is coming to Google Image Search.
For the unfamiliar, Google Lens (previously available as a dedicated app, and as part of Google Photos) taps the company’s computer vision work to figure out the contents of an image and provide more details about exactly what you’re looking at.
One example Google demonstrated: in a search for “nursery” you might see a crib you like and want to buy. With the existing search interface, finding that exact model of crib with nothing but that image might prove challenging. Besides the color and “crib,” what keywords do you type in?
Tap the new Lens button, however, and Google will throw all of its computer vision chops at the image to tear it apart and try to work backwards to identify it. Want to identify something else in the image — like, say, a lamp in the background — instead? Use your finger to highlight that specific section, and it’ll focus on that object instead.
It’s not limited to random pieces of furniture, though — it can identify dog breeds in a photo, or landmarks, or clothing, or cars, or any number of other categories. If Google has seen enough images of an object to model some level of understanding of it, Lens should be able to work backwards and tell you more about it.
Lens should start rolling into Image Search “soon,” though that’s as specific as Google will get.
Update: This post originally said the Lens feature would roll out this week; while Google says other Image Search features announced at the event will roll out this week, a rep for the company tells me Lens integration might take a bit longer.