One critical component of AR’s promise is the AR Cloud. As a data resource, it’s not as sexy as AR’s highly-visual front end, and therefore doesn’t get the airtime it deserves. But its fan base is starting to grow.

The leading voice for the AR cloud has been Super Ventures partner and founder Ori Inbar. And his latest move to educate us on the AR Cloud is an explainer video (embedded below), adapted from an ARBA presentation. It’s a much-needed crash course.

The background is that consumer AR apps have fallen short of expectations because they’re fairly one-dimensional and underwhelming. Inbar likens this to surfing the web in 1996: We didn’t have the level of content and social interaction that would come to define web 2.0 a decade later.

Image Credit: Ori Inbar

And if you look around at successful apps and online experiences today, most have a common element: social interaction and collaboration. We’re talking social-graph-driven collaboration and sharing of media, sentiment and status. That’s a component mostly missing from AR today.

Of course, we have rudimentary AR that’s been infused into social apps (Snapchat Lenses, etc.). But that’s recorded media, shared and viewed asynchronously by someone else. Inbar asserts that true social collaboration means sharing AR experiences in real time, not recordings of them.

And the AR version of that principle that’s largely missing in today’s apps and toolkits is image persistence. For more compelling social collaboration, graphics should remain in place across separate AR sessions (come back and it’s still there), and between different users.

This is all to say that the AR cloud will enable persistence through geo-relevant content as well as data that AR devices can tap to perform object recognition wherever they are. This makes the AR cloud a sort of upgrade to Google’s mission statement to organize the world’s information.

Image Credit: Ori Inbar

In other words, it could organize the world’s information visually, and “in situ,” meaning where items actually are. So instead of a search index that gives us information through typed queries, the AR cloud could give us information on any item by pointing a camera at it (millennial-friendly).

And it’s not just a matter of consuming the AR cloud, but also creating it. That can happen through a sort of crowdsourced approach, where all of these outward-facing cameras capture data and feed the AR cloud. So it can perpetually build over time, just like the web (and Google’s index).

Of course, there are open questions, such as who will own the AR cloud, which we explored yesterday. And there are several technical requirements for a functioning AR cloud, which Inbar walks through. See the video below, and stay tuned for more educational videos every Friday.

For a deeper dive on AR & VR insights, see ARtillry’s new intelligence subscription, and sign up for the free ARtillry Weekly newsletter. 

Disclosure: ARtillry has no financial stake in the companies mentioned in this post, nor has it received payment for its production. Disclosure and ethics policy can be seen here.
