One striking realization about spatial computing is that we’re almost seven years into the sector’s current stage. It traces back to Facebook’s Oculus acquisition in early 2014, which kicked off the current wave of excitement – with plenty of ups and downs in the intervening years.

That excitement culminated in 2016, after the Oculus acquisition had time to set off a chain reaction of startup activity, tech-giant investment, and VC inflows for the “next computing platform.” But when technical and practical realities caught up with spatial computing, the sector began to retract.

Like past tech revolutions – most memorably, the dot-com boom and bust – spatial computing has followed a common pattern: irrational exuberance, followed by retraction, market correction, and scorched earth. But then a reborn industry sprouts from those ashes and grows at a realistic pace.

That’s where we now sit in spatial computing’s lifecycle. It’s not the revolutionary platform shift touted circa 2016. And it’s not a silver bullet for everything we do in life and work, as once hyped. But it will be transformative in narrower ways, within a targeted set of use cases and verticals.

This is the topic of ARtillery’s recent report, Spatial Computing: 2020 Lessons, 2021 Outlook. Key questions include: What did we learn in the past year? What are the projections for the coming year? And where does spatial computing – and its many subsegments – sit in its lifecycle?


Foundational Principle

Picking up where we left off in the last installment of this series, the AR cloud is a foundational principle for enabling spatial interactions. Also known as the mirrorworld, among other monikers, it’s about data that’s anchored to places and things, which AR devices can process into meaningful content.

For AR to work in the ways we all envision, it must first understand its surroundings. Before placing graphics in a room, an AR device has to understand that room. And because the world’s spatial-mapping data is too extensive to fit on any one device, it must tap the cloud.

Another way to view the AR cloud – framed in the terminology of today’s technology – is as an architecture that makes the physical world clickable. Technically, things won’t be clicked, but they’ll be activated and viewed as AR graphics or informational overlays come to life.

Yet another analogy is the “Internet of places.” Just as Google began indexing the web 20 years ago, the next evolution could be for someone to index the physical world in potentially more valuable ways. In fact, Google is a natural candidate to build this physical-world index.

And it’s already started. After spending years assembling a knowledge graph on the web, Google has the building blocks for a spatial web, as we examined last week. That includes a vast image database for object recognition and geo-specific place data from Street View.

The latter powers Google’s Live View AR navigation tool. It works by activating the smartphone camera to “localize” a device by recognizing where it is. Other Google data can then be utilized for informational overlays, such as storefront details from Google My Business.

The AR Space Race, Part I: Google

The Plurality

One implication of Google’s many data sources is that there won’t be just one AR cloud. Rather, there will be a large, decentralized effort to map the physical world. And just like today’s web, there will be various use cases, proprietary data, walled gardens, and permission layers.

For example, 6D.ai – recently acquired by Niantic – will be one such source. It enables mobile users to actively and passively assemble spatial maps as they play games like Pokémon Go. This approach could unlock the AR cloud’s last mile, where Google and others don’t reach.

Meanwhile, AR cloud goals and approaches will vary. If Google is the spatial web’s knowledge layer, Facebook could be its social layer, Microsoft the enterprise productivity layer, and Amazon the commerce layer. Apple will be a hardware powerhouse for the physical touchpoint.

The pattern is that each company’s spatial web persona will mirror its core competency. Each is motivated to future-proof its core business, so each player’s “version” of the AR cloud will trace back to its primary revenue stream – a trait of our ongoing “follow the money” principle.

One key question that emerges from this “plurality” is the AR cloud’s openness. Will it be open like the web, with common languages (HTML) and protocols (HTTPS) that anyone can use to plant their flag? Or will the spatial web be a constellation of walled gardens that don’t talk to each other?

The answer is probably “both” just like we have today on the web. There’s an open web, unlocked and accessed through the browser. And then there are apps – everything from Snapchat to Salesforce – that connect in some ways to the web, but are otherwise self-contained and gated.

The latter is inevitable given tech giants’ incentive to build moats around monetizable assets. This is okay as long as there are common languages – again, like HTML. That could involve standards such as “geo-pose” (position + heading), which could function as a sort of URL for the spatial web.
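To make the “geo-pose as URL” idea concrete, here’s a minimal sketch in Python of what such a spatial address might look like. The field names, class, and URI scheme below are made-up illustrations for this article’s analogy – not the schema of any published standard (such as OGC’s GeoPose work).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GeoPose:
    """Illustrative spatial address: a position plus a heading.

    All fields and the URI format are hypothetical, chosen only to
    show how a pose could act like a URL for the spatial web.
    """
    lat: float          # latitude in decimal degrees (WGS84)
    lon: float          # longitude in decimal degrees (WGS84)
    alt_m: float        # altitude in meters
    heading_deg: float  # compass heading, 0-360 degrees from north

    def to_uri(self) -> str:
        # A URL-like string: just as a web URL addresses a page,
        # this string addresses a spot (and facing) in the world.
        return (f"geopose://{self.lat:.6f},{self.lon:.6f},"
                f"{self.alt_m:.1f}@{self.heading_deg:.1f}")

# Example: anchoring AR content to a spot facing a storefront
anchor = GeoPose(lat=37.422, lon=-122.084, alt_m=30.0, heading_deg=90.0)
print(anchor.to_uri())  # geopose://37.422000,-122.084000,30.0@90.0
```

The point of the analogy: if every AR cloud provider could resolve a common address format like this, content anchored in one “walled garden” would at least be locatable from another, much as any browser can resolve any URL.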

Niantic’s AR Lessons: The Platform Angle

Historical Lessons

The remaining question is what the AR cloud’s killer apps will be. We’re bullish on visual search – everything from identifying items with your camera to navigating a new neighborhood. But as history shows, killer apps take years to materialize (think: Uber on smartphones).

Speaking of historical lessons, one likely trait of AR cloud killer apps will be elevating or improving things we already do. This not only resonates with consumers, but it also requires less of a logical leap or education process, since it simply enhances behaviors we already understand.

Examples so far include Pokémon Go’s use of AR to enhance mobile gaming. Snapchat Lenses, meanwhile, bring color and animation to the already-popular activity of media sharing. In both cases, AR was eased into the use case rather than forced – our ongoing “training wheels” principle.

If we were to project the areas that could have those same ingredients, communications is a high-impact behavior that has room for AR enhancement (which Snapchat has already done to a degree). Visual search likewise has inherent utility and usage frequency, just like web search.

Finally, a key consideration for AR killer apps will be privacy. Given the level of personal data revealed through such immersive technology, strategic positioning and technical aptitude will mean nothing without user trust. This will be more challenging for ad-centric companies.

Indeed, privacy awareness has heightened over the past five years, and it could jump into hyperdrive in spatial computing due to the technology’s sensory immersion. That immersion will create myriad signals and inputs that reveal consumer intent – everything from biometrics to spatial maps of your bedroom.

We’ll pause there and circle back next week with more from this report. Meanwhile, check out the entire thing here.

More from AR Insider…