As we enter a new year, it’s time for our annual ritual of synthesizing the lessons from the past twelve months and formulating the outlook for the next twelve. This has been an action-packed year for AR & VR as the world slowly emerges from the grips of a pandemic.
Moreover, 2021 was marked by the emergence of metaverse mania. Though it rests on legitimate principles and promise, the term has been muddied through overuse. It's also been overhyped in terms of the timing of its arrival: a fully-actualized metaverse is decades away.
Beyond the metaverse, AR and VR continue to be defined by steady progress in several areas. We’re talking mobile AR engagement & monetization; AR marketing and commerce; continued R&D in AR glasses; enterprise adoption; and the gradual march of consumer VR.
So where is spatial computing now, and where is it headed? What’s the trajectory of the above subsegments? This was the topic of a report from our research arm, ARtillery Intelligence. Entitled Spatial Computing: 2021 Lessons; 2022 Outlook, it joins our report excerpt series.
Key Ingredients
To pick up where we left off last week, the metaverse’s key ingredients are being built. But unlike the metaverse itself, these building blocks exist today. They include AR, which is being applied to enterprise productivity, brand marketing, gaming, and utilities like visual search.
To pause for definitions, we consider AR to be any technology that digitally enhances the physical world. That includes everything from immersive product try-ons to geo-located mobile gaming (e.g., Pokémon Go) to line-of-sight annotations that support industrial work.
The form factor can be mobile or head-worn. The latter is AR's fully-actualized modality and the technology's endgame. The former, conversely, isn't as natural a fit for real-world augmentation, given drawbacks like arm fatigue, but it has scale today because it piggybacks on smartphone ubiquity.
To quantify that, there are 3.46 billion smartphones today, 3.03 billion of which are compatible with AR, including web AR. This ubiquity is not only a pathway to scale but a stepping stone: mobile AR acclimates the world to augmented experiences so that consumer AR glasses can hit the ground running.
Lastly, smartphones won't just be a means to an end, but a key piece of the AR glasses formula. By handling and housing CPU, GPU, and connectivity, smartphones enable AR glasses to be lighter, cheaper, and more powerful. This will involve a progression from wired to wireless tethers.
Building Blocks
Mobile AR doesn't just tap into sheer scale but also a wide variety of platforms. In other words, it's not just about the volume of AR-compatible smartphones but also the creation and delivery channels that reach all of those devices. These include native app SDKs such as ARKit and ARCore.
These SDKs have democratized AR app creation on trusted and ubiquitous mobile operating systems. Apps like Instagram and Snapchat meanwhile have gained traction for AR lenses that enhance social activity. These are likewise coupled with platforms for lens creation and distribution.
Then there’s web AR, including developer platforms like 8th Wall and Zappar. Web AR operates within the mobile browser. Its advantages include less friction to launch AR experiences, and amplified reach. Lowering friction is the name of the game for emerging tech like AR.
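To make that "less friction" point concrete, here is a minimal sketch of how a browser-based AR experience can be launched through the WebXR Device API; the function name and options are illustrative, and commercial web AR platforms such as 8th Wall and Zappar layer their own tracking and tooling on top of (or alongside) browser capabilities like this.

```typescript
// Minimal sketch: feature-detect AR support in the browser and start a session.
// Assumes WebXR type definitions (e.g., @types/webxr) are available.

async function startWebAR(): Promise<void> {
  // WebXR is exposed on navigator.xr in supporting browsers.
  if (!navigator.xr || !(await navigator.xr.isSessionSupported("immersive-ar"))) {
    console.log("Immersive AR not available; fall back to a 3D or 2D experience.");
    return;
  }

  // Most browsers require this call to come from a user gesture (e.g., a tap).
  const session = await navigator.xr.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test"], // surface detection for placing content
  });

  session.addEventListener("end", () => console.log("AR session ended."));
  // From here, rendering is typically handled by a library such as three.js or Babylon.js.
}
```

The appeal for reach is that nothing needs to be installed: a user taps a link or scans a QR code, grants camera access, and the experience launches in the page.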
Software platforms are also advancing, from Niantic Lightship to enterprise-geared offerings such as Microsoft Mesh and PTC Vuforia. We likewise see innovators elsewhere in the tech stack launching AR developer platforms: for example, Qualcomm launched its Snapdragon Spaces developer platform in November.
By doing so, Qualcomm has doubled down on its position as the gold standard in chips that power AR and VR devices. With a developer platform, it can now also realize the business and technological advantages of vertical integration, including tighter systems of software and silicon.
We’ll pause there and circle back next week for more AR landscape analysis and key lessons…