The term metaverse continues to be a runaway train in the gaming, media and XR worlds. Though it has legitimate principles and promise — most notably presented in Matthew Ball’s Metaverse Primer — it’s been obscured through overuse in editorials and marketing materials.

Broadly speaking, metaverse denotes virtual domains that host placeshifted participants for synchronous interaction. Mark Zuckerberg calls it an “embodied internet,” while Tim Sweeney calls it a “real-time 3D social medium where people can create and engage in shared experiences.”

In most of these cases, the metaverse is discussed in VR contexts, or at least in today’s non-VR metaverse-like fiefdoms such as Fortnite and Roblox. But what about AR? Will there be a metaverse that’s geo-anchored to the real world? This is what we’ve been calling the metavearth.

To be fair, the concept of geo-anchored data that enables AR experiences is already classified under the AR cloud. It’s also a topic we’ve unpacked in our Space Race series, and a report by our research arm, ARtillery Intelligence. But could the metavearth further contextualize it?

Geolocal AR: The Metavearth Materializes

Land Grab

As background for the metavearth, one of AR’s foundational principles is to fuse the digital and physical. The real world is a key part of that formula, and real-world relevance is often defined by location. Relevance and scarcity are likewise primary drivers of real-estate value.

To that end, one of AR’s battlegrounds will be in augmenting the world in location-relevant ways. That could be wayfinding with Google Live View, or visual search with Google Lens. It’s about pointing your phone (or future glasses) at places and objects to contextualize them.

As these examples indicate, Google will have a key stake in this Internet of Places. It’s driven to future-proof its core business, given Gen Z’s affinity for the camera. As also seen in Snap Scan, visual content joins text and voice as a search input.

And Google is well-positioned to do this, given existing assets. For example, it utilizes imagery from Street View as a visual database for object recognition so that AR devices can localize. That forms the basis for its storefront recognition in Google Lens and urban navigation in Live View.


Turf Battle

Google isn’t alone. Apple signals interest in location-relevant AR through its geo-anchors. These evoke AR’s location-based underpinnings by letting users plant and discover spatially-anchored graphics. And Apple’s continued efforts to map the world in 3D will be a key puzzle piece.

Meanwhile, Facebook is similarly building “Live Maps.” As explained by Facebook Reality Labs’ chief scientist Michael Abrash, this involves building indexes (geometry) and ontologies (meaning) of the physical world. This will be the data backbone for Facebook’s AR ambitions.

Then there’s Snapchat, the reigning champion of consumer mobile AR. Snap has long been propelled by selfie lenses, but its larger AR ambitions flip the focus to the rear-facing camera to augment the broader canvas of the physical world. This is the thinking behind its Local Lenses.

Speaking of consumer mobile AR champions, Niantic is a close second given the prevalence of Pokémon Go and the geographic augmentation that’s central to its game mechanics. And its bigger play — its Lightship platform — aims to offer geolocated AR as a service.

Beyond tech giants, there are startups positioned at the intersection of AR and geolocation. These include Darabase, Resonai, YouAR, Gowalla, Foursquare (acquired by Niantic), Scape Technologies (acquired by Facebook), and ARWay (acquired by NexTech*).


Layered Meaning

As noted earlier, all of the above is aligned with a guiding principle for AR’s future: the AR Cloud. As AR enthusiasts know, this is a conceptual framework in which data are anchored to the inhabitable earth in order to enable AR devices to trigger the right graphics (or audio content).

As background for those unfamiliar, AR devices must understand a scene and localize themselves before they can integrate AR graphics believably. That happens with a combination of mapping the contours of a scene (LiDAR will help), and tapping into previously-mapped spatial data.
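The "tap into previously-mapped spatial data" step above can be reduced to a toy sketch: match what the device currently observes against a pre-built spatial database and return the best-matching anchor’s position. Everything below — the descriptors, coordinates, and the `SPATIAL_DB` structure — is invented for illustration; production systems (visual positioning services) match thousands of 3D feature descriptors and solve for full six-degrees-of-freedom pose, not a simple nearest-neighbor lookup.

```python
# Hypothetical sketch of localizing against previously-mapped spatial data.
# All descriptors and coordinates are invented for illustration.
import math

# Pre-mapped spatial database: feature descriptor -> known anchor (lat, lon)
SPATIAL_DB = {
    (0.12, 0.88): (37.7749, -122.4194),  # storefront A
    (0.95, 0.10): (37.7755, -122.4180),  # storefront B
    (0.50, 0.50): (37.7760, -122.4175),  # plaza fountain
}

def localize(observed_descriptor):
    """Return the anchor position whose stored descriptor is closest
    (Euclidean distance) to what the device currently observes."""
    best = min(SPATIAL_DB, key=lambda d: math.dist(d, observed_descriptor))
    return SPATIAL_DB[best]

# A device observing a descriptor near storefront A localizes there.
print(localize((0.10, 0.90)))  # -> (37.7749, -122.4194)
```

Once localized this way, the device knows where it is relative to the mapped scene, which is what lets it place graphics that appear anchored to real-world places.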

That data could be a great source of value, hence the tech-giant land grab outlined above. But though the metaverse could have walled gardens and fiefdoms, it could also be modeled after, or built on, the web, with common standards, protocols, and languages for interoperability.

In the metavearth, these proprietary data could be in “layers” rather than sites or places. And AR devices could reveal those layers based on user intent and authentication. View the Instagram layer for geo-relevant stories from friends, and the Pinterest layer to discover new products.

But if there is indeed a physical-world metaverse, it will take years or even decades to fully actualize. That makes it a corollary to the virtual-world metaverse. As Matthew Ball says, it will build gradually and incrementally on everything that came before it, and could take a while.

*The author of this article owns stock in NexTech AR Solutions. See our full disclosure and ethics policy here

Header image source: Esri
