This post is adapted from ARtillry’s latest Intelligence Briefing, AR Cloud and the ‘Internet of Places.’ It includes some of its data and takeaways, including original survey research. Preview more of it here, and subscribe for the full report.


In last week’s report excerpt, we left off talking about Google’s efforts in AR such as visual search and VPS. But it’s important to note that Google can pull off many of these AR and visual search initiatives because… it’s Google.

Not only does it take heft and deep pockets to do things like 3D-scan hundreds of Lowe’s stores; Google also sits on a massive image database that will enable visual search through image matching. It will be able to deliver that data to devices dynamically and with location relevance.

For example, Google’s aspirations for location-based visual search queries like storefronts will tap into its Street View database. Because it has street-level imagery of most of the storefronts in the U.S., it can use that as an object recognition database to power Google Lens.

And its work with autonomous vehicles in its Waymo division will further spin out computer vision and geographically-relevant object recognition for an Internet of places. Google can also power “general interest” visual search using its Google Images database and other work in AI.

But that raises the question: what about everybody else? Other tech giants are similarly positioned to apply their assets to AR products, but what about the startups of the world? Google’s positioning across its many AR initiatives unfortunately isn’t representative of the rest of us.

To obtain some of those Google-like capabilities, more shared resources are needed for visual search, AR object recognition and other components of local AR. And the theoretical entity that’s missing to unlock such capabilities for a robust AR ecosystem? It’s the fabled AR Cloud.

Enter the AR Cloud

The AR Cloud is like dark matter. Still theoretical, it’s a missing puzzle piece whose adjacent pieces provide evidence of its existence and possible shape. Replace the puzzle metaphor with theoretical physics equations and that’s dark matter. It has to exist for the equation to balance.

The AR cloud is similarly something we know needs to happen for AR’s fully intended vision to materialize. It’s the critical, yet still non-existent, piece of nearly every glowing and futuristic AR vision that you may have heard in conference presentations, generalist op-eds or YouTube clips.

The “What?”

So what is it exactly? Though its still-theoretical status dictates that it will take shape in unknown ways, it’s generally a cloud data repository that enables AR devices to perform the actions outlined in previous sections. That includes geo-relevant data and object-recognition blueprints.

Stepping back for a minute, AR works by mapping its environment before overlaying graphics. True AR works as less of an overlay, and infuses graphics in dimensionally accurate ways, such as occluding physical objects. ARKit and ARCore have democratized some of that, with lots of ground still to cover.

In the local AR examples explored earlier in the report, scene mapping happens when the smartphone scans its surroundings. This applies computer vision, object recognition and GPS data. But the issue is that all of that data can’t be stored locally on your phone.

In other words, ARCore and ARKit perform well in individual AR sessions, including mapping a given space using surface detection and localization. But mapping large (outdoor) areas, or returning to previously mapped areas, requires more computational muscle than smartphones offer.

That’s where the AR cloud comes in. It will help AR devices to “recognize” scenes, rather than exhaust computational muscle (and battery life) mapping already-charted territory. It will register devices’ location, then serve mapping and object recognition data tagged to that location.
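That lookup flow can be sketched in a few lines. Everything here is hypothetical — the `AR_CLOUD` store, the `fetch_scene` helper, and the use of raw GPS coordinates as keys are illustration only (real systems match on visual fingerprints and shard by geohash) — but it shows the “recognize, don’t re-map” idea:

```python
import math

# Hypothetical in-memory "AR cloud": previously captured scene maps
# keyed by the location where they were captured.
AR_CLOUD = {
    (40.7580, -73.9855): {"scene_id": "times-square", "anchors": 128},
    (37.7749, -122.4194): {"scene_id": "sf-civic-center", "anchors": 64},
}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def fetch_scene(lat, lon, radius_m=50):
    """Return cached map data near the device, or None (device must map fresh)."""
    for (mlat, mlon), scene in AR_CLOUD.items():
        if haversine_m(lat, lon, mlat, mlon) <= radius_m:
            return scene  # recognize: reuse the existing map
    return None  # uncharted territory: map locally, then upload

# A device standing near Times Square recognizes the scene instead of re-mapping:
print(fetch_scene(40.7581, -73.9856))
```

The point of the sketch is the branch at the end: a cache hit spares the phone a full mapping pass, and a miss means the device maps locally and contributes that map back to the cloud.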

The “Why?”

Why is this important? It gives AR apps more functionality, and advances the industry by giving developers the capability of a Google – a la Google Lens and VPS. Instead of having to build those extensive and enabling data sets, developers can focus on UX and business models.

In fact, the AR cloud will enable consumer AR to reach ARtillry Intelligence’s revenue projections. Specifically, the sector will grow from $454 million last year to $14 billion in 2021. As we detailed in February’s report, this will mostly involve mobile app-related revenue in the near term.

Another benefit, as with cloud computing generally, is offloading computational burden. Because mapping and object-recognition data for the entire world is too extensive to store on-device, phones can tap the AR cloud to conserve computational muscle – a precious resource in AR.

The AR cloud will also enable a key function: image persistence. This refers to AR graphics that remain in place across separate AR sessions, and between users. The latter is key for social AR experiences and multiplayer support — both projected to drive AR’s killer apps (see Charlie Fink).
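A minimal sketch of that persistence pattern, loosely modeled on the host/resolve flow that systems like ARCore’s Cloud Anchors use — the `ANCHOR_STORE`, `host_anchor`, and `resolve_anchor` names here are invented for illustration:

```python
import uuid

# Hypothetical shared anchor store: the AR cloud's persistence layer.
ANCHOR_STORE = {}

def host_anchor(pose, payload):
    """Session A: persist a graphic and its pose; returns a shareable anchor id."""
    anchor_id = str(uuid.uuid4())
    ANCHOR_STORE[anchor_id] = {"pose": pose, "payload": payload}
    return anchor_id

def resolve_anchor(anchor_id):
    """Session B (same or a different user): restore the graphic where it was left."""
    return ANCHOR_STORE.get(anchor_id)

# User 1 places a graphic in physical space...
shared_id = host_anchor(pose=(1.0, 0.0, -2.5), payload="3d-arrow")

# ...and user 2, in a later session, resolves it in the same spot.
restored = resolve_anchor(shared_id)
```

Because the anchor id is all that needs to be shared, the same mechanism covers both persistence (one user, separate sessions) and multiplayer (many users, one shared scene).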

Social AR is a big topic, and one to which we’ll devote an entire Intelligence Briefing in Q3. In short, social connectivity will accelerate AR’s growth through network effects that fuel adoption and engagement. Image persistence and multiplayer support, care of the AR cloud, will be key enablers.

Preview more of the report here and subscribe to ARtillry PRO to access the whole thing.


For deeper XR data and intelligence, join ARtillry PRO and subscribe to the free ARtillry Weekly newsletter. 

Disclosure: ARtillry has no financial stake in the companies mentioned in this post, nor received payment for its production. Disclosure and ethics policy can be seen here.