Google’s annual Search On event was held yesterday, featuring several updates to Search and Maps. Altogether, the moves advance Google’s ambitions to build an Internet of places. Also what we call the metavearth, this is all about adding a digital dimension to the real world.

These ambitions stem from Google’s history of creating immense value by indexing the web over the past 20+ years. Now it sees ample opportunity to do something similar by indexing the physical world. And part of that playbook is to build a corresponding knowledge graph, just as it did for the web.

Initiatives under this tent so far include Google Lens and Live View, which let users identify objects and navigate using AR overlays. Standing behind these front-end products are the unique data assets Google has spent years building, such as Street View and indexed image libraries.

Altogether, Google is becoming more visual, which aligns with the proclivities of the camera-forward Gen-Z. This is all about future-proofing, given that the generation continues to gain purchasing power as it cycles into the adult consumer population. Google has its eye on that prize.

With that backdrop, here are the top four product updates we observed at the Search On event that advanced Google’s metavearth play.


1. Multisearch Near Me

Starting with Multisearch Near Me: as we’ve examined, it lets users combine various search inputs to find the things they’re looking for. For example, start a search for a new jacket using an image (or Google Lens live feed), then refine the search with text (e.g., “the same jacket in blue”).

After previewing this feature at Google I/O, Google announced this week that it’s coming “this fall” to the U.S. Moreover, it will bake in local discovery use cases. In addition to fashion and finding local retailers, you can now search for local restaurants with food images.

For example, say you discover a dish on Instagram. You can use that image to identify the dish with Google Lens, then use Multisearch Near Me to find local restaurants that serve the same or similar fare. This atomizes search down to individual items, rather than links and listings.

“This new way of searching is really about helping you connect with local businesses,” Google VP and GM of Search Cathy Edwards said at the event, “whether you’re looking to support your local neighborhood shop or you just need something right away and can’t wait for shipping.”


2. Search with Live View

Live View is Google’s 3D/AR urban navigation feature. It uses the Street View image database to localize a device (recognizing where you’re standing) and overlay AR directional arrows on your route. Google recently opened this capability to developers through the ARCore Geospatial API.
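To make that localization step concrete, here’s a minimal sketch of how a developer might query the device’s global pose with the ARCore Geospatial API on Android. It assumes an existing ARCore Session with camera and location permissions already handled; the anchor coordinates are placeholders for illustration, not anything Google announced at Search On.

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Enable Geospatial mode on an existing ARCore session
// (assumes ARCore availability and permissions are already handled).
fun enableGeospatial(session: Session) {
    val config = Config(session)
    config.geospatialMode = Config.GeospatialMode.ENABLED
    session.configure(config)
}

// Each frame, read the camera's Earth-relative pose as resolved by Google's
// Street View-based localization described above.
fun logGeospatialPose(session: Session) {
    val earth = session.earth ?: return
    if (earth.trackingState != TrackingState.TRACKING) return

    val pose = earth.cameraGeospatialPose
    println("lat=${pose.latitude}, lng=${pose.longitude}, alt=${pose.altitude}")
    println("horizontal accuracy: ${pose.horizontalAccuracy} m")

    // Place an AR anchor at a placeholder latitude/longitude, e.g. where a
    // storefront or ATM marker could be rendered by the app.
    val anchor = earth.createAnchor(
        37.7749, -122.4194, pose.altitude,  // placeholder coordinates
        0f, 0f, 0f, 1f                      // identity rotation quaternion
    )
}
```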

Yesterday’s update adds a search component. In addition to navigating to a known address, you can now discover nearby businesses. By holding up your phone to a given streetscape, you can search for places (say, ATMs) and see them revealed visually. You can then navigate to the closest one.

In addition to indicating the whereabouts of these businesses, Google will reveal details like hours of operation and other vitals. Google is uniquely positioned to do this given (again) the entity data it’s been assembling for years. And it’s only scratched the surface in visual search.

“You can just lift up your camera and see overlaid on the real world the ATM that’s nearby,” said Google VP and GM of Geo Chris Phillips at the event. “You can also see coffee shops, grocery stores, and transit stations. You really get a sense of what an area is like at a glance.”


3. Language Translation

Google also previewed upcoming updates to Google Translate that will let users activate it on the fly with their cameras. This involves more integrated translations. In other words, rather than pop-ups that translate text visually (or audibly), the new text is visually integrated into the scene.

For example, if you hold your phone up to a sign, Google Translate will “erase” the existing text and visually replace it with the translated text. This builds on the “magic eraser” function in some Android phones, which alters photos on the fly, such as removing background objects and other distractions.
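Google hasn’t detailed the pipeline behind this feature, but the core recognize-then-translate step can be approximated with public building blocks. Below is a rough sketch using ML Kit’s on-device text recognition and translation APIs as stand-ins; the AR “erase and repaint” compositing Google showed goes beyond these APIs and is omitted.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Recognize text in a camera frame, then translate it on-device.
// The visual "erase and replace" rendering is not part of these public APIs.
fun recognizeAndTranslate(frame: Bitmap, onResult: (String) -> Unit) {
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    val image = InputImage.fromBitmap(frame, /* rotationDegrees = */ 0)

    recognizer.process(image)
        .addOnSuccessListener { visionText ->
            // Assumed source language for illustration; Lens detects it automatically.
            val translator = Translation.getClient(
                TranslatorOptions.Builder()
                    .setSourceLanguage(TranslateLanguage.SPANISH)
                    .setTargetLanguage(TranslateLanguage.ENGLISH)
                    .build()
            )
            // Download the language model if needed, then translate the recognized text.
            translator.downloadModelIfNeeded().addOnSuccessListener {
                translator.translate(visionText.text).addOnSuccessListener { translated ->
                    onResult(translated)
                }
            }
        }
}
```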

Boiling it down, this move blends magic and utility in practical real-life scenarios like international travel. But the true impact could come in the long term. Google has positioned language translation as a central use case in its AR glasses strategy. So think of this as a practice round.

Put another way, Google needs to refine the technology to get it ready for prime time. It may have ample time, given the longer-term reality of AR glasses. But the goal in any case is to hit the ground running with use cases like language translation when that day comes.


4. Immersive View

Lastly, Google’s Immersive View is less about algorithmic refinements and more about front-end sex appeal. Also previewed at Google I/O, Immersive View features stylized bird’s-eye views of various locales (see header image). Use cases include travel planning and wanderlust.

This move also follows ongoing mapping one-upmanship as Apple Maps continues to raise its game. Apple’s last few updates have included similar immersive 45-degree bird’s-eye views. Now, Google has fired the latest shot with animated and stylized renditions of popular locales.

But this isn’t just about fancy graphics. It is Google after all, so it wants to differentiate by tapping into the data assets referenced above. In this case, it will surface dynamic metadata for a given place, including weather, traffic, and crowds – giving it an algorithmic edge over Apple.

Immersive View already includes views of 250 global landmarks, and will launch for users in Los Angeles, New York, San Francisco, London, and Tokyo in the coming months. More cities will be added as Google continues to accelerate its ambitions to index the earth.
