Spatial computing – including AR, VR, and other immersive tech – continues to offer the means to enhance the ways we work, play, and live. But despite that underlying potential, the industry has seen ups and downs characteristic of early-stage sectors and hype cycles.
So where are we now in spatial computing’s lifecycle, and which subsectors lead the way? That’s the topic of a recent report from our research arm, ARtillery Intelligence. Entitled Reality Check: The State of Spatial Computing, it’s also the topic of a recent episode of ARtillery Briefs.
See below for the episode takeaways and embedded video…
So what we’re talking about is the full spatial spectrum: AR, VR, and all their subsegments. For example, one leading segment continues to be AR marketing. This involves sponsored lens placements on platforms like Snapchat, collectively projected to bring in $3.3 billion this year.
Another influential subsector is consumer VR. Contrary to public sentiment, VR isn’t dead. It’s even growing at a healthy pace in some corners of the industry. Most of the disappointment around VR stems from the fact that it didn’t live up to the revolutionary impact touted in its circa-2017 hype bubble.
When looking at the more measured and sober growth in VR today, the biggest driving factor is Meta. It continues to push VR’s gradual but meaningful traction through massive investments. That includes marketing, R&D, and loss-leader pricing for its VR hardware such as the Quest 2.
Speaking of Meta, the metaverse still looms. Though it’s cringeworthy in its overuse, the M-word holds legitimate principles worth exploring so that we can project our spatial future, and innovators can start building towards it. But we’re talking decades before it really materializes.
Continuing with the M-word, there are two main tracks that we and others have delineated. One is virtual and multiplayer worlds, usually discussed in a VR context. And the second track is all about adding digital dimension to the physical world. And that’s more in an AR context.
That second track has seen noteworthy building blocks such as Niantic’s Lightship platform. For those unfamiliar, this is an ARDK (like an SDK, but for AR) that takes the learnings and architecture of Pokémon Go and bakes them into a platform for developers to run with.
Elsewhere in the physical-world metaverse, we have tools like Google Lens that represent what we call “captions for the real world.” This represents real utility and high-frequency usage, not to mention natural monetization, given user intent similar to what drives web search.
These attributes (high frequency, utility, etc.) are the ingredients for killer apps. But there’s still a ways to go in underlying tech and cultural acceptance: holding up your phone to identify things is neither a natural behavior nor an existing user habit. These things take a while to condition.
Crack the Code
Moving from mobile to faceworn devices, AR glasses are the real endgame, but they face design challenges such as the tradeoff between wearability and UX. You can have sleek but optically underwhelming devices like Ray-Ban Stories, or bulky AR-rich glasses like HoloLens 2.
One way AR glasses innovators could start to solve this riddle is to design for the wearability end of the spectrum, then evolve towards more graphical richness. They could also take the inverse approach with bulky hardware that slims down over time, a la Vision Pro.
To get away with the “underwhelming” approach – in service of wearability – one path may be to develop use cases that are experientially meaningful, even if optically underpowered. This is all about finding simple and elegant use cases that empower people.
For example, we’re talking integrations such as social signals, biometrics from your Apple Watch, or spatial audio that syncs with your AirPods. As these examples suggest, Apple could be the one to crack this code with AR glasses that someday follow the Vision Pro.
For more color, see the full episode below and the report it’s based on here…