Building-block technologies are a key piece of the XR puzzle, as we recently examined. And one instrumental type of building block is graphical tooling that lowers friction for developers to build content, a la Sketchfab or Google Blocks.

Google recently took this principle to a new level with its Maps API for XR. For those unfamiliar, it lets developers use the underlying data in Google Maps as a foundation for building virtual worlds. That can mean expansive game worlds in VR, or geo-relevant overlays in AR.

In other words, rather than reinvent the wheel and build virtual worlds for gaming, mapping or other XR apps, developers can “reskin” the graphical data that Google has assembled over years for its mapping engine. That includes 2D lat-long mapping, but also 3D structures and buildings.

Using that data as a framework for virtual worlds, developers can focus instead on other parts of the user experience. And the worlds they build can have additional relevance because they sync with the real world. We’re talking Pokemon Go-like experiences or scavenger hunt games.
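To make that reskinning idea concrete, here's a minimal sketch of the kind of transformation involved: projecting lat/long building footprints (the sort of data a mapping engine holds) into local 3D geometry a game engine could render. To be clear, this isn't Google's actual API; the data shapes and function names are hypothetical, with Python standing in for whatever engine a developer uses.

```python
import math

def latlng_to_local_meters(lat, lng, origin_lat, origin_lng):
    """Project a lat/long pair onto a flat local grid centered on an origin.

    A rough equirectangular approximation, fine at city-game scale.
    """
    m_per_deg_lat = 111_320.0
    m_per_deg_lng = 111_320.0 * math.cos(math.radians(origin_lat))
    return ((lng - origin_lng) * m_per_deg_lng,
            (lat - origin_lat) * m_per_deg_lat)

def extrude_footprint(footprint_latlng, height_m, origin):
    """Turn a 2D building footprint into vertices for a simple 3D prism."""
    base = [latlng_to_local_meters(lat, lng, *origin)
            for lat, lng in footprint_latlng]
    floor = [(x, 0.0, z) for x, z in base]
    roof = [(x, height_m, z) for x, z in base]
    return floor + roof  # hand these to the game engine as a mesh

# Hypothetical footprint near downtown San Francisco, reskinned as an obstacle.
origin = (37.7749, -122.4194)
footprint = [(37.77495, -122.41945), (37.77495, -122.41935),
             (37.77505, -122.41935), (37.77505, -122.41945)]
vertices = extrude_footprint(footprint, height_m=30.0, origin=origin)
```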

But beyond easier world-building, this could have another benefit that hasn't been talked about as much: object recognition for AR. Given that Google's map data contains real-world geography and structures, could it serve as an image database that helps AR devices identify objects in the real world?

As background, AR’s promise is to overlay graphics or information on items you point your phone (or glasses) at. But doing that reliably requires scene mapping, so an AR device knows what it’s looking at. The AR cloud will have a big part in delivering that data to devices (see 6d.ai).

Part of this hinges on AR devices knowing where they are and what direction they’re facing. That helps with simple AR a la Pokemon Go. But more robust AR requires object recognition, so that the device knows what it’s looking at contextually, and thus which overlays are relevant.
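As a concrete illustration of that first tier: position plus heading alone can tell a device whether a known point of interest sits in the camera's view. A minimal sketch, assuming only GPS and compass readings (the coordinates and field-of-view value are hypothetical):

```python
import math

def bearing_to_poi(device_lat, device_lng, poi_lat, poi_lng):
    """Initial great-circle bearing from the device to a point of interest."""
    phi1, phi2 = math.radians(device_lat), math.radians(poi_lat)
    dlng = math.radians(poi_lng - device_lng)
    x = math.sin(dlng) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlng))
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def poi_in_view(device_heading_deg, bearing_deg, fov_deg=60.0):
    """True if the POI falls within the camera's horizontal field of view."""
    diff = (bearing_deg - device_heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

# Is the user facing a storefront worth annotating?
bearing = bearing_to_poi(37.7749, -122.4194, 37.7755, -122.4190)
if poi_in_view(device_heading_deg=30.0, bearing_deg=bearing):
    print("render overlay")  # placeholder for the actual AR draw call
```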

We like to think of this as marker-based AR on steroids. Markers have been around for a while in print media to trigger AR graphics. But extended to the real world, AR devices should recognize the unique contours of a given tree, building or street corner, then deliver AR accordingly.
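To ground that idea, here's roughly what recognizing a structure by its contours looks like with classical computer vision: matching distinctive features between a reference photo and a live camera frame. This sketch uses OpenCV's ORB detector as a stand-in, and is emphatically not Google's pipeline; the filenames and thresholds are illustrative.

```python
import cv2

# A reference photo of the landmark, and the current camera frame (grayscale).
reference = cv2.imread("landmark_reference.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

# ORB finds distinctive keypoints -- the "unique contours" of a structure.
orb = cv2.ORB_create(nfeatures=1000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frame, des_frame = orb.detectAndCompute(frame, None)

# Brute-force Hamming matching with cross-checking to cut false positives.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_ref, des_frame)

# Enough close matches suggests the landmark is in view; trigger the overlay.
good = [m for m in matches if m.distance < 40]
if len(good) > 25:  # illustrative threshold, tuned per deployment
    print("landmark recognized; deliver AR content")
```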

This is what Google is doing with Lens, still in early development. The idea is to point your phone at a storefront to get useful overlays like operating hours or Yelp reviews. Google has the unique data to pull this off, such as Street View imagery that can be applied to AR object recognition.

This gets back to the Maps API. Could it offer a similar object-recognition database for AR apps? Google’s mapping imagery probably isn’t granular enough today, but the API could be a step toward tools that let any developer build scene-aware AR apps. That would take it well beyond gaming.

Either way, this is another signal that Google’s vast IP will plug into XR in lots of ways. As we explored last week, XR will be accelerated by adjacent sectors like self-driving cars (computer vision) and voice (XR input). Google’s XR synergies will continue to come out of the woodwork.

As for our speculations on the Maps API, we’ll have to wait and see if it plays out that way. Meanwhile, we’re hearing whispers about the XR focus of Google I/O in May. That was the case last year, when Google launched Lens, and we expect to see a lot more this year.


For a deeper dive on AR & VR insights, subscribe to ARtillry Intelligence Briefings, and sign up for the free ARtillry Weekly newsletter. 

Disclosure: ARtillry has no financial stake in the companies mentioned in this post, nor has it received payment for its production. Disclosure and ethics policy can be seen here.