XR Talks is a series that features the best presentations and educational videos from the XR universe. It includes embedded video, narrative analysis and top takeaways. Speakers’ opinions are their own. For a full library of indexed and educational media, subscribe to ARtillery PRO.


The AR cloud is a foundational principle that helps contextualize the future of augmented reality. Also known by monikers like the mirrorworld and the magicverse, it’s all about data that’s anchored to places and things, which AR devices can ingest and process into meaningful content.

This concept has been a central topic of industry discussions over the past two years. Charlie Fink even wrote a whole book about it (disclosure: we wrote a chapter in the book). Applying that knowledge, Fink recently led a Niantic-hosted panel discussion about making the world clickable.

“We’re here to talk about mirrorworlds,” Fink said, “an invisible machine-readable mesh that’s a perfect one-to-one skin, if you will, for the real world. […] So if there were an invisible mesh that is machine-readable by cameras, or with the help of cameras, then it follows that there would be information in that world that you could click to retrieve. So I am looking at a stadium, and I can click the stadium and it tells me all about the Rose Bowl, to use one example.”
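To make the “clickable world” idea concrete, here is a minimal sketch in TypeScript of what an anchored record and a click-to-retrieve lookup could look like. Everything here is hypothetical (the `SpatialAnchor` shape, the in-memory index, the `lookupNearest` helper); a real AR cloud would resolve “clicks” by raycasting against 3D meshes and querying permissioned spatial databases, not a lat/lng point list.

```typescript
// Hypothetical shape of an AR cloud record: content pinned to a
// real-world location, retrievable when a device "clicks" that spot.
interface SpatialAnchor {
  id: string;
  lat: number;      // geographic position of the anchor
  lng: number;
  payload: {        // what the user retrieves on "click"
    title: string;
    description: string;
  };
}

// Toy index: in production this would be a spatial database with
// permission layers, not an in-memory array.
const index: SpatialAnchor[] = [
  {
    id: "rose-bowl",
    lat: 34.1613,
    lng: -118.1676,
    payload: {
      title: "Rose Bowl",
      description: "Stadium in Pasadena, CA. Opened 1922.",
    },
  },
];

// Approximate ground distance in meters between two lat/lng points
// (equirectangular approximation; fine at stadium scale).
function distanceMeters(aLat: number, aLng: number, bLat: number, bLng: number): number {
  const R = 6371000;
  const x = ((bLng - aLng) * Math.PI / 180) * Math.cos(((aLat + bLat) / 2) * Math.PI / 180);
  const y = (bLat - aLat) * Math.PI / 180;
  return Math.sqrt(x * x + y * y) * R;
}

// "Click" resolution: given where the user's gaze hits the world,
// return the nearest anchored payload within a tolerance.
function lookupNearest(lat: number, lng: number, toleranceM = 50): SpatialAnchor | undefined {
  return index
    .filter(a => distanceMeters(lat, lng, a.lat, a.lng) <= toleranceM)
    .sort((a, b) =>
      distanceMeters(lat, lng, a.lat, a.lng) - distanceMeters(lat, lng, b.lat, b.lng))[0];
}

console.log(lookupNearest(34.1614, -118.1675)?.payload.title); // "Rose Bowl"
```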

Fink’s Convergence Connects the Dots on an Augmented Future

Internet of Places

One historical parallel is the mighty Google. Just as it began indexing the web 20 years ago, the next technological revolution could belong to whoever indexes the physical world in potentially more valuable ways. In fact, Google is a natural candidate to build this “Internet of Places.”

And it’s already started. After spending years assembling a knowledge graph on the web, Google has the building blocks for a “spatial web.” That includes a vast image database for object recognition (Google Lens) and geo-specific place data from Street View, among other sources.

The latter powers Google’s Live View AR navigation tool for urban walking directions. It works by activating the smartphone camera to “localize” the device, recognizing where it is by matching what the camera sees against years of accumulated mapping data, says Google’s Justin Quimby.

“What Live View does [is] take the camera information and figure out where you are by comparing the pixels on the screen and the camera to our 3D model of the world, which we’ve collected using Google Street View. And the interesting thing is for Google, we use a variety of sources of data to build the model of the world: We use Google Street View cars, backpacks, cameras on planes, as well as satellite imagery — they all blend together to build this geo-aligned model of the world.”
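Google hasn’t published Live View’s internals, but Quimby’s description matches visual positioning as it’s generally understood: extract features from the camera frame, match them against a geo-aligned 3D model, then solve for the camera’s pose. The TypeScript sketch below shows only the matching step, under illustrative types and names; it is a schematic of the technique, not Google’s implementation.

```typescript
// Schematic visual positioning: match features from a camera frame
// against a pre-built, geo-aligned map, producing the 2D-3D
// correspondences a real system would feed to a pose solver.

type Descriptor = number[]; // e.g., a 128-d SIFT-like feature vector

interface MapPoint {
  descriptor: Descriptor;
  world: [number, number, number]; // geo-aligned 3D position (meters, local frame)
}

interface FrameFeature {
  descriptor: Descriptor;
  pixel: [number, number]; // where the feature appears on screen
}

// Squared Euclidean distance between two descriptors.
function dist2(a: Descriptor, b: Descriptor): number {
  let s = 0;
  for (let i = 0; i < a.length; i++) s += (a[i] - b[i]) ** 2;
  return s;
}

// Brute-force nearest-neighbor matching with a ratio test to reject
// ambiguous matches. Production systems use approximate nearest-
// neighbor indexes over city-scale maps instead of a double loop.
function matchFeatures(frame: FrameFeature[], map: MapPoint[], ratio = 0.7) {
  const matches: { pixel: [number, number]; world: [number, number, number] }[] = [];
  for (const f of frame) {
    let best = Infinity, second = Infinity, bestPt: MapPoint | null = null;
    for (const m of map) {
      const d = dist2(f.descriptor, m.descriptor);
      if (d < best) { second = best; best = d; bestPt = m; }
      else if (d < second) { second = d; }
    }
    // Accept only clearly unambiguous matches (distances are squared,
    // so the ratio is squared too).
    if (bestPt && best < ratio * ratio * second) {
      matches.push({ pixel: f.pixel, world: bestPt.world });
    }
  }
  // Next step (omitted): feed these 2D-3D correspondences to a
  // PnP + RANSAC solver to estimate the camera's position and
  // orientation in the geo-aligned frame.
  return matches;
}
```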

XR Talks: Lessons From Building AR for Google Maps, Part I

Last Mile 

One implication of Google’s many data sources is that there won’t be just one AR cloud. Mapping the physical world will be a large, collective effort drawing on many sources. And just like today’s web, there will be various use cases, proprietary data, walled gardens and permission layers.

6D.ai, recently acquired by Niantic, will be one of those sources. Among other things, it enables mobile users to actively and passively assemble spatial maps. According to founder Matt Miesnieks, this will unlock the AR cloud’s last mile: the places Google and others don’t reach.

“For the places that you use your phone most, generally there isn’t always Street View data for those places — whether that’s your home, business or some small part of the world. […] So 6D.ai was built on this technology that lets you build the model on your phone.”
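As a rough illustration of what “building the model on your phone” involves, the sketch below quantizes observed 3D points into voxels and unions the results from two devices. This is a deliberate simplification: 6D.ai’s actual pipeline produced dense meshes, and the genuinely hard part, aligning every device’s scan into one shared coordinate frame, is assumed away here.

```typescript
// Sketch of crowdsourced on-device mapping: each phone quantizes the
// 3D points it observes into voxels; unioning voxel sets from many
// users yields a shared model of spaces Street View never reaches.

type Point3 = [number, number, number];

const VOXEL_SIZE = 0.1; // meters

// Quantize a point to a voxel key.
function voxelKey([x, y, z]: Point3): string {
  const q = (v: number) => Math.floor(v / VOXEL_SIZE);
  return `${q(x)},${q(y)},${q(z)}`;
}

// A local map is just the set of occupied voxels a device has seen.
function buildLocalMap(points: Point3[]): Set<string> {
  return new Set(points.map(voxelKey));
}

// Merging is a set union -- assuming both devices' points are already
// expressed in a shared, relocalized coordinate frame, which is the
// hard problem a production AR cloud must actually solve.
function mergeMaps(a: Set<string>, b: Set<string>): Set<string> {
  return new Set([...a, ...b]);
}

const phoneA = buildLocalMap([[0.05, 0.0, 0.0], [0.31, 0.0, 0.0]]);
const phoneB = buildLocalMap([[0.31, 0.02, 0.0], [1.20, 0.0, 0.5]]);
console.log(mergeMaps(phoneA, phoneB).size); // 3 -- one voxel seen by both
```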

For all of the above to come together in a compelling front-end user experience, there are many considerations, says HTC’s Amy Peck. For one, it needs to be easy to access, per our ongoing training wheels construct, and to offer practical value that average users can readily understand.

“What’s the utility of AR relative to the consumer? What does consumer engagement look like for retail? We don’t want to have the ‘hyperverse’ where we have ads plastered all over the world. So [it’s about] leveraging this technology in a way where it has utility and it’s in context: context of where that person is, why they’re there, and making sure that we’re able to deliver the right content to the right person in the right place at the right time.”
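Peck’s “right content, right person, right place, right time” framing can be read as a filtering contract that sits in front of the renderer. The sketch below encodes it as a simple predicate; the field names are illustrative and not drawn from any real platform.

```typescript
// Before anything renders, every candidate piece of AR content must
// pass contextual checks: right person, right place, right time.

interface ArContent {
  title: string;
  categories: string[];          // e.g., ["retail", "navigation"]
  activeHours: [number, number]; // local hours when showing it makes sense
  maxDistanceM: number;          // only relevant this close to its anchor
}

interface UserContext {
  interests: string[];
  localHour: number;      // 0-23
  distanceToAnchorM: number;
}

function isRelevant(c: ArContent, ctx: UserContext): boolean {
  const rightPerson = c.categories.some(cat => ctx.interests.includes(cat));
  const rightTime = ctx.localHour >= c.activeHours[0] && ctx.localHour < c.activeHours[1];
  const rightPlace = ctx.distanceToAnchorM <= c.maxDistanceM;
  return rightPerson && rightTime && rightPlace;
}

const lunchSpecial: ArContent = {
  title: "Lunch special",
  categories: ["retail"],
  activeHours: [11, 14],
  maxDistanceM: 100,
};

console.log(isRelevant(lunchSpecial, {
  interests: ["retail"], localHour: 12, distanceToAnchorM: 40,
})); // true -- shown
console.log(isRelevant(lunchSpecial, {
  interests: ["retail"], localHour: 20, distanceToAnchorM: 40,
})); // false -- wrong time, filtered out
```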

Will an ‘Internet of Places’ Serve as AR’s Framework?

Visual Browser

Speaking of the front-end user experience — on smartphones today and potentially glasses someday — Fink has long said that the spatial web will require a filtration system. This will ensure that all of this data, including intrusive advertising as noted by Peck, doesn’t bombard users.

This leads to his construct of a “universal visual browser.” With that analogy, we can think of items in the physical world as being searchable and clickable. Just like today’s browser provides a front-end UX for the 2D web, what will be the spatial web’s browser? We’ll pick it up there in Part II.
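One speculative way to picture that browser is as a user-owned policy layer, roughly an ad blocker for the spatial web, through which all anchored content must pass before it reaches the display. The sketch below is purely illustrative; no such browser or API exists today.

```typescript
// A user-controlled policy filters everything the AR cloud offers
// before it renders -- the spatial-web analog of a content blocker.

interface AnchoredItem {
  publisher: string;
  isAd: boolean;
  title: string;
}

interface BrowserPolicy {
  blockAds: boolean;
  blockedPublishers: Set<string>;
  maxItemsInView: number; // cap visual clutter regardless of source
}

function filterView(items: AnchoredItem[], policy: BrowserPolicy): AnchoredItem[] {
  return items
    .filter(i => !(policy.blockAds && i.isAd))
    .filter(i => !policy.blockedPublishers.has(i.publisher))
    .slice(0, policy.maxItemsInView);
}

const inView: AnchoredItem[] = [
  { publisher: "city-transit", isAd: false, title: "Bus 42 in 3 min" },
  { publisher: "hyperverse-ads", isAd: true, title: "50% off!" },
];

console.log(filterView(inView, {
  blockAds: true,
  blockedPublishers: new Set(),
  maxItemsInView: 10,
})); // only the transit card survives
```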
