One of our longstanding picks for potential AR killer apps is visual search. Its utility and frequency mirror those of traditional web search, and it carries a relatable “search what you see” proposition. It’s also naturally monetizable, and Google is highly motivated to make it happen.

As background, visual search has several meanings, including reverse-image search on desktop Google. But what we’re talking about here is pointing your smartphone camera at an item to contextualize what that thing is or where you can buy it. It takes form so far in Google Lens.

A close cousin of visual search is visual mapping. It applies similar computer vision and machine learning to help users navigate. Taking form so far in Google Live View, it overlays 3D urban walking directions on an upheld smartphone screen, and is likewise inherently monetizable.

So how will Google develop these opportunities? To return to our ongoing analysis of visual search and mapping, we’ll examine the latest updates from Google. But it’s not alone… Apple and others have signaled interest in spatially mapping the world and making it searchable.


Down with IOP

First, to level set on some definitions and background, AR doesn’t “just work” the way most people assume it does. Overlaying the right graphics on a given physical space first requires geometric and semantic understanding of that space. That scene understanding is the key step in the process.

The AR cloud will be that key. GPS alone won’t cut it because AR graphics need millimeter-level precision. So the AR cloud (assisted by 5G) is proposed to offer a spatially relevant data layer that coats the earth and dynamically informs AR devices where they should anchor graphics.

Another name for this is the Internet of Places (IOP), which puts it in Google terms. Just like Google created massive value by indexing the web, today’s opportunity is to index the physical world. Outcomes include visual search, mapping, and lots of other monetizable AR products.

For example, to develop Live View, Google used its Street View imagery as a visual database against which to match a live camera feed and then “localize” that device. It’s a clever hack for visual mapping that applies existing assets and advances Google’s competitive position.
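For the technically inclined, the principle behind that localization step can be sketched in a few lines: extract features from the live camera frame, match them against a geo-referenced database of previously mapped features, then solve for the device’s pose. The snippet below is a minimal illustration using off-the-shelf OpenCV primitives, not Google’s actual pipeline; the function name, inputs and thresholds are our own assumptions.

```python
# Minimal sketch of camera localization against a pre-mapped visual database.
# Illustrative only -- not Google's Live View pipeline. Assumes the "map" is a
# set of ORB descriptors paired with known 3D landmark positions.
# Requires: opencv-python, numpy
import cv2
import numpy as np

def localize(query_img, db_descriptors, db_points_3d, camera_matrix):
    """Estimate the device pose (rotation, translation) from one camera frame.

    db_descriptors : (N, 32) uint8 ORB descriptors of mapped landmarks
    db_points_3d   : (N, 3) float32 positions of those landmarks
    camera_matrix  : (3, 3) intrinsics of the querying camera
    """
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(query_img, None)
    if descriptors is None:
        return None  # featureless frame (e.g., a blank wall)

    # Match live-frame descriptors against the stored map descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(descriptors, db_descriptors)
    if len(matches) < 12:
        return None  # too few correspondences to localize reliably

    image_pts = np.float32([keypoints[m.queryIdx].pt for m in matches])
    world_pts = np.float32([db_points_3d[m.trainIdx] for m in matches])

    # Robustly recover the camera pose from 2D-3D correspondences (PnP + RANSAC).
    ok, rvec, tvec, _inliers = cv2.solvePnPRansac(
        world_pts, image_pts, camera_matrix, None)
    return (rvec, tvec) if ok else None
```

Production systems work at city scale with learned features and far more robust retrieval, but the shape of the problem (match against a map, then solve for pose) is the same.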

Orbiting Efforts

Google’s latest visual search play combines orbiting efforts like Lens and Live View. While navigating with Live View, Google offers small click targets on your touchscreen when it recognizes a business storefront. When tapped, expanded business information appears.

This is something Google has been teasing for a few years. It includes business details that help users discover and qualify businesses. The data flows from Google My Business (GMB), and the current version offers dynamic business info like hours of operation.

In addition to Lens and Live View, this local discovery use case could incorporate Google’s less-discussed Earth Cloud Anchors. Another vector in Google’s overall visual search and AR master plan, this ARCore feature lets users geo-anchor digital content for others to view.

Cloud Anchors could engender a user-generated component of visual search. They could have social, educational and business use cases such as geo-tagged messaging or local business reviews. Speaking of user-generated content, Google recently followed this up with its connected-photos feature.
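To make that idea concrete, here’s a rough sketch of what a geo-anchored, user-generated content layer could look like behind the scenes: users “host” content at a location, and nearby devices later “resolve” whatever has been anchored around them. This is an illustrative data model, not ARCore’s actual Cloud Anchors API; the class names and the 50-meter radius are assumptions.

```python
# Illustrative sketch of a geo-anchored content store, in the spirit of
# Earth Cloud Anchors. Not ARCore's actual API -- names and defaults are
# assumptions for demonstration purposes.
import math
from dataclasses import dataclass
from typing import List

@dataclass
class GeoAnchor:
    anchor_id: str
    lat: float          # degrees
    lng: float          # degrees
    payload: str        # e.g., a geo-tagged message, review, or 3D asset reference

class AnchorStore:
    """In-memory store; a production service would persist and index spatially."""

    def __init__(self):
        self._anchors: List[GeoAnchor] = []

    def host(self, anchor: GeoAnchor) -> str:
        """A user 'hosts' (publishes) content at a real-world location."""
        self._anchors.append(anchor)
        return anchor.anchor_id

    def resolve_nearby(self, lat: float, lng: float, radius_m: float = 50.0) -> List[GeoAnchor]:
        """Another user's device 'resolves' anchors near its current location."""
        return [a for a in self._anchors
                if _haversine_m(lat, lng, a.lat, a.lng) <= radius_m]

def _haversine_m(lat1, lng1, lat2, lng2):
    """Great-circle distance in meters between two lat/lng points."""
    r = 6_371_000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lng2 - lng1)
    h = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))
```

In practice, Cloud Anchors also attach content to a visually mapped point in space rather than just a lat/lng, which is what lets graphics stick precisely to the scene for the next viewer.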


Not Alone

As mentioned, Google is not alone. Apple’s moves to revamp Apple Maps could spin out 3D visual mapping capability. On the surface, it wants to improve Maps to make iPhones more attractive. But a valuable byproduct could be 3D visual mapping that competes with Live View.

As background, Apple’s efforts to refresh Maps include getting street-level imagery. And it’s decided to future-proof itself by also gathering 3D mapping data (Street View, by comparison, is 2D imagery). Along with GeoAnchors, this could play into Apple’s AR master plan.

Other clues about Apple’s visual search ambitions came with iOS 14. Though it serves 2D mapping, a new feature lets users raise their phones and scan surrounding buildings to pinpoint where they are. This comes in handy in GPS-degraded urban canyons, where mobile mapping tends to fail.

Just like Google Live View, Apple uses street-level imagery (in this case from its Look Around feature in Maps) as a visual database against which to match and identify these live scans. Though it’s used to improve 2D mapping today, this capability could have a bigger place in Apple’s AR road map.


The Plot Thickens

Visual search, AR navigation and the broader IOP picture don’t end with Google and Apple. There’s an entire subsector of AR cloud startups building spatial maps. For example, Facebook-owned Scape has been doing so in major cities for various consumer and enterprise use cases.

Speaking of Facebook, it has launched its Live Maps initiative (which Scape presumably now supports) as well as Project Aria. The latter will trial publicly worn AR glasses both to test their social dynamics and to capture spatial maps, and that mapping work represents Facebook’s AR cloud play.

Snap is meanwhile building spatial maps that can improve how its Local Lenses interact with the physical world. And Niantic, now with 6D.ai in the fold, is taking a crowdsourced approach in which legions of Pokémon Go players feed spatial maps into its Real World Platform, including through its Mapping Tasks program.

All of the above will be accelerated by evolving hardware, including LiDAR. Speaking of which, the point clouds that autonomous vehicles collect in order to navigate could also feed an IOP database. Just as “data is the new oil” elsewhere, it will be a key component of AR’s future.
