Google Lens image and text multisearch will soon be available in more languages

The Near Me function is coming to the US this fall as well.

Multisearch, a Google Lens feature that can search images and text simultaneously, will soon be more broadly available after arriving in the US as a beta earlier this year. Google says multisearch will expand to more than 70 languages in the coming months. The company made the announcement at an event focused on Search.

In addition, the Near Me feature, which Google unveiled at I/O back in May, will land in the US in English sometime this fall. This ties into multisearch, with the idea of making it easier for folks to find out more details about local businesses.

Multisearch is largely about enabling people to point their camera at something and ask about it while they're using the Google app. You could aim your phone at a store and request details about it, for instance, or ask about an unfamiliar item in a screenshot, like a piece of clothing. You could also look up what a certain food item is called, like soup dumplings or laksa, and see which restaurants around you offer it.

Also on the Lens front, there will be some changes when it comes to augmented reality translations. Google is now employing the same artificial intelligence tech behind the Pixel 6's Magic Eraser feature to make translated text appear to replace the original, rather than superimposing the translation on top of it. The idea is to make translations look more seamless and natural.

Google is also adding shortcuts to the bottom of the search bar in its iOS app, so you'll more easily find features like hum to search and translating text with your camera or in screenshots.