This is the latest in a series on mobile AR business models: those developing today, those we project to hold opportunity in the near future, and where the gaps are. See the rest of the series here.


AR continues to capture the imagination of the tech world. But sometimes, “imagination” is the operative word: AR remains far from its potential in both capability and consumer adoption. Still, it’s getting there, and it’s showing early signals for best practices and business models.

One area where AR will shine is local commerce, as it can inform purchases and activities beyond the current capabilities of search and local apps (e.g. Yelp). But before we come close to that vision, a few things need to happen, including hardware evolution, location data and the AR Cloud.

For those unfamiliar, AR works by mapping its environment before overlaying graphics. True AR is less of an overlay and more of an infusion: graphics are placed in dimensionally accurate ways, such as being occluded by physical objects. ARKit and ARCore have democratized some of that, with lots left to go.
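To make that concrete, here is a minimal sketch of what environment mapping looks like in ARKit (Swift): plane detection plus, on supported hardware, scene reconstruction, which is what enables that kind of occlusion. It’s illustrative, not tied to any particular app mentioned here.

```swift
import ARKit

// Minimal sketch: an ARKit session that maps the environment (plane detection)
// and, on LiDAR-equipped devices, reconstructs scene geometry so virtual
// content can be occluded by real-world objects.
let session = ARSession()
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]

// Scene reconstruction is what enables dimensionally accurate occlusion,
// but it is only available on hardware that supports it.
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
}

session.run(configuration)
```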

As for local commerce, a close cousin of mobile AR is visual search. Google Lens, for example, will identify products and storefronts using your smartphone camera. Back to scene mapping: that will happen through a combination of computer vision, object recognition and GPS data.
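In rough terms, that combination could look something like the sketch below (Swift, using Apple’s Vision and CoreLocation frameworks). LocalListingsAPI is a hypothetical stand-in for whatever listings backend an app would actually query, not a real SDK.

```swift
import Vision
import CoreLocation

// Hypothetical listings lookup; stands in for whatever local-data backend
// an app would actually call (not a real SDK).
enum LocalListingsAPI {
    static func search(labels: [String], coordinate: CLLocationCoordinate2D,
                       completion: ([String]) -> Void) {
        completion(labels.map { "Nearby match for \($0)" })  // stubbed result
    }
}

// Sketch of the visual-search flow: classify what the camera sees (computer
// vision / object recognition), then pair the labels with GPS coordinates
// to narrow the lookup to nearby listings.
func identifyNearby(pixelBuffer: CVPixelBuffer,
                    location: CLLocation,
                    completion: @escaping ([String]) -> Void) {
    let request = VNClassifyImageRequest { request, _ in
        let labels = (request.results as? [VNClassificationObservation])?
            .filter { $0.confidence > 0.5 }
            .map { $0.identifier } ?? []
        LocalListingsAPI.search(labels: labels,
                                coordinate: location.coordinate,
                                completion: completion)
    }
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])
}
```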

In Google Lens’ case, it will use Street View imagery for object recognition and Google My Business data for overlays. But what about the non-Googles? How will the rest of us create AR and visual search apps that map environments reliably and invoke the correct info or graphics?

Image credit: Google

Enter the AR Cloud

One answer is the AR Cloud. As some readers know, it will be a data repository that AR devices can tap wherever they are. Because mapping and object-recognition data for the entire world is too extensive to store on-device, smartphones can offload some of that burden to the AR Cloud.

Another way to think about the AR cloud is as an extension of Google’s mission to organize the world’s information. But instead of a search index and typed queries, the AR cloud delivers information “in situ” (where an item is) when you point a camera at it (millennial-friendly).

And it’s not just a matter of consuming the AR cloud, but creating it. That can happen through a sort of crowdsourced approach, where all of these outward-facing cameras capture data and feed the AR cloud. So it perpetually builds over time, just like the web (and Google’s index).
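As a thought experiment, tapping and feeding such a cloud from a device might look something like the sketch below. The client and endpoints shown are entirely hypothetical; no such public API exists today.

```swift
import Foundation
import CoreLocation

// Hypothetical AR Cloud client. These endpoints don't exist; the sketch just
// illustrates the two halves of the model described above: querying map data
// for your surroundings, and contributing what your device captures.
struct ARCloudClient {
    let baseURL = URL(string: "https://api.example-arcloud.com/v1")!  // placeholder

    // Pull nearby anchor/mesh data so the device doesn't have to store
    // (or rebuild) a map of the whole world.
    func fetchMapData(near coordinate: CLLocationCoordinate2D,
                      completion: @escaping (Data?) -> Void) {
        var components = URLComponents(url: baseURL.appendingPathComponent("map"),
                                       resolvingAgainstBaseURL: false)!
        components.queryItems = [
            URLQueryItem(name: "lat", value: String(coordinate.latitude)),
            URLQueryItem(name: "lon", value: String(coordinate.longitude))
        ]
        URLSession.shared.dataTask(with: components.url!) { data, _, _ in
            completion(data)
        }.resume()
    }

    // Push locally captured mapping data back: the crowdsourced, Waze-like part.
    func contribute(mapData: Data) {
        var request = URLRequest(url: baseURL.appendingPathComponent("contributions"))
        request.httpMethod = "POST"
        request.httpBody = mapData
        URLSession.shared.dataTask(with: request).resume()
    }
}
```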

This is what 6D.ai is doing. Think of it like Waze for the AR Cloud: devices using the app not only tap into that data but also feed the 3D mapping data they capture back to the cloud. Through that, the physical world slowly gets mapped. Charlie Fink has a great article that goes deeper on this.

Image Credit: Google

Persistence Pays

The AR cloud will also enable a key function: image persistence. In other words, AR graphics remain in place across separate AR sessions and between different users. The latter is key for social AR experiences and multiplayer support — both projected to drive AR’s killer apps.
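One building block for this already exists: ARKit’s ARWorldMap, which lets an app save its map of a space and restore it later (or hand it to another device) so content reappears in the same physical spot. A minimal sketch:

```swift
import ARKit

// Capture and archive the current session's world map so AR content can
// persist across sessions (or be shared with another device).
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let worldMap = worldMap else { return }
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}

// Later (or on another device): restore the map so anchors line up with the
// same real-world locations.
func restoreSession(_ session: ARSession, from url: URL) {
    guard let data = try? Data(contentsOf: url),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                 from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```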

At this point you may be thinking: don’t we already have that? There are indeed mini AR clouds, or systems that perform similar functions. Google Lens (via Street View and GMB data) will be one. And Pokémon Go works through a GPS database built over years by its forebear, Ingress.

But these are smaller, proprietary AR clouds. And in the above cases, they involve giants that can afford to build those clouds, or have spent years doing so. To unlock the AR app economy, what’s needed is a universal and open AR cloud that can be tapped and fed by billions of phones.

That’s where things get tricky. With self-interested tech giants competing to lead the next computing era, there’s understandably little incentive to share their proprietary, hard-earned data. That raises the possibility of centralized authorities (think: ICANN and DNS).

The more likely answer is blockchain. Its capabilities align with the construction, maintenance and authentication needs of the AR Cloud. Beyond the above matters of building it out, there will be blockchain-centric issues like IP and ownership of graphical assets.

Image Credit: Lowes

Act Local

But before getting too deep into those issues, one way this ties back to local commerce is that a visual-search world will place lots of value on location data. It’s currently applied to areas like listings optimization and SEO, but could take on new life in what I’ll call “VSEO”: SEO for visual search.

That means holders of location data like Yext, Foursquare, Aisle411 and Placed (recently acquired by Snapchat) could play a key part in the AR cloud. We’re talking data sets like business details, menu items, store layouts, real-time product availability, and foot traffic-based popularity scores.
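For illustration only, that kind of location data might be shaped something like the record below; the field names are assumptions, not any provider’s actual schema.

```swift
// Illustrative only: a made-up shape for the kind of local data that could
// feed AR overlays. Field names are assumptions, not any provider's schema.
struct LocalListing: Codable {
    let businessName: String
    let category: String
    let menuItems: [String]           // e.g. restaurant menus
    let inStockProducts: [String]     // real-time product availability
    let storeLayoutURL: URL?          // pointer to indoor map / aisle data
    let footTrafficScore: Double      // popularity derived from visit data
    let latitude: Double
    let longitude: Double
}
```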

Meanwhile, there will be a need for 3D graphical assets, such as accurate renderings of products like cars and furniture (see what roOomy is doing). The point is that a broad ecosystem of supporting players will make up the tech stack that brings us AR-fueled local commerce.

But back to an earlier point: it will take a while before consumer usage and demand cause any of this to scale meaningfully. That’s both good and bad news for anyone with products that apply to the local AR ecosystem. You have time, but don’t stall too long: it will build slowly, then happen fast.


For a deeper dive on AR & VR insights, see ARtillry’s new intelligence subscription, and sign up for the free ARtillry Weekly newsletter. 

Disclosure: ARtillry has no financial stake in the companies mentioned in this post, nor received payment for its production. Disclosure and ethics policy can be seen here.

Header image credit: BMW