Yesterday Apple announced ARKit 1.5, including updates that push AR forward. And the theme is vertical expansion: ARKit gains the ability to literally map vertical planes, while figuratively expanding down the vertical tech stack.

First, the headline feature is vertical plane detection. For those unfamiliar, ARKit applies some software magic to compensate for the lack of a depth camera in iPhones. But that has drawbacks, including the ability to map — and thus position graphics on — only horizontal surfaces like tables and floors.

But now the ability to map vertical planes will, among other things, achieve better positional accuracy. ARKit’s vertical blindness currently makes localization a bit funky, as horizontal planes seem to go on forever instead of stopping at the bounds of the room, otherwise known as walls.

This caused ARKit apps to appear buggy and drive users up the wall (sorry). But 1.5 alleviates that and puts better tools in developers’ hands — fitting after AR AppShow. That’s key, because more compelling AR apps can attract a larger user base and tap into a wider range of use cases.
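For developers, opting in is a small configuration change. Here’s a minimal sketch (class and outlet names are our own, hypothetical) of how an ARKit 1.5 app could ask for both horizontal and vertical planes, and react when a wall is found:

```swift
import UIKit
import ARKit

// Hypothetical view controller illustrating ARKit 1.5 vertical plane detection.
final class WallAwareViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self

        // Requires iOS 11.3+ (ARKit 1.5): .vertical is new; .horizontal existed before.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        sceneView.session.run(configuration)
    }

    // ARKit calls this when it anchors a newly detected plane in the scene.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        if plane.alignment == .vertical {
            // A wall: graphics anchored here stay put on the wall instead of
            // drifting across floor planes that seem to go on forever.
        }
    }
}
```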

“Developers have been waiting for this, it opens up a whole new class of apps,” ARKit developer and Atakote Studios founder Janet Brown told ARtillry. “Developers so far have been doing workarounds, but recognizing vertical planes has implications for industries such as real estate.”

Image Credit: Apple

Vertical Expansion

But the part we’re most intrigued by is ARKit 1.5’s expansion into more substantial computer vision. Along with detecting walls, it will recognize objects on the wall, like a poster or painting. Those images are then queried against an image database to return relevant info or graphics.

You can picture this being useful in art museums for self-guided tours that have more dimension and personalization than current audio-based self-tour tech. More generally, it opens the door for all kinds of content discovery, location-based gaming and commerce (buy items on a billboard).
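For what it’s worth, the shipping ARKit 1.5 API works from a developer-supplied set of reference images rather than open-ended visual search, so a database lookup like the ones described above would sit on top of it. A minimal sketch (the “AR Resources” asset catalog group name is hypothetical):

```swift
import ARKit

// Build a session configuration that watches for known 2D images
// (posters, paintings, billboards) supplied by the developer.
func makeImageDetectionConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                              bundle: nil) {
        configuration.detectionImages = referenceImages
    }
    return configuration
}

// A recognized image surfaces as an ARImageAnchor; its referenceImage tells
// the app which poster or painting was spotted and where it sits in the scene,
// so relevant info or graphics can be looked up and overlaid.
func handleDetected(anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    print("Recognized \(imageAnchor.referenceImage.name ?? "unnamed image")")
}
```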

What makes this most interesting is that it gets into Google’s territory of visual search, a la Google Lens. But like maps, voice search and other data-intensive areas (Google’s forte), Apple could learn that execution requires much more than a polished front end (Apple’s forte).

In fact, this brings ARKit down the stack from the app layer to a more robust back end for object recognition. That has machine learning and AI components, where Apple has painfully shown its shortcomings (read: Siri). We’ll have to see if its visual search is better than its voice search.

Object recognition also brings Apple into another key topic: the AR cloud. Computer vision happens in tandem with geo-data and scene geometry to map spaces before graphical overlays are applied. And the AR cloud is the data scaffolding that will enable this across apps and locations.

Image Credit: Google

Federating the AR Cloud

This all leads to a question we’ve asked before but that won’t be answered anytime soon: who will own the AR cloud? It will be a critical component of AR’s functionality, including graphics that persist across AR sessions and between different AR users.

If history is any indication, Apple could take a closed and proprietary approach to the AR cloud. For example, the image data for the art museums referenced above could come from its own databases. That will limit its functionality (the Siri approach) and preclude a robust shared cloud.

Google could likewise utilize its vast data — search, maps, Street View, etc. — to bolster the AR cloud. But it’s a question whether it will share AR cloud data. It will most likely make it accessible to developers that build apps on ARCore, and Apple will do the same for ARKit.

But will Google, Apple, Facebook and others that map the world with outward-facing cameras feed into one common AR cloud? The answer is likely no, even though that could theoretically create a more comprehensive and valuable AR cloud, given all the ground it needs to cover.

Meanwhile, these players will continue to collect data for their own AR endeavors. Apple will begin with better computer vision and object recognition in ARKit 1.5. These are all positive steps, but the question of a federated versus fragmented AR cloud will eventually need to be answered.


For a deeper dive on AR & VR insights, see ARtillry’s new intelligence subscription, and sign up for the free ARtillry Weekly newsletter. 

Disclosure: ARtillry has no financial stake in the companies mentioned in this post, nor has it received payment for its production. Disclosure and ethics policy can be seen here.

Header image credit: IKEA