During Meta’s recent Connect event, Mark Zuckerberg said something that caught our attention. Buried beneath sexier unveilings and crowd cheers for Quest 3’s impressive and up-leveled specs, Zuckerberg characterized Meta’s intention to lean into AR’s near-term realities.
Specifically, he painted a picture of AR that deviates from common connotations and future gazing. Rather than a field of vision populated by holographic dragons and whales, it will be more about useful information delivered through text and audio… sort of an AR-enabled personal assistant.
As Zuckerberg put it: “Before the last year’s AI breakthroughs, I kind of thought that smartglasses were only really going to become ubiquitous once we really dialed in the holograms and the displays, which we are making progress on but is somewhat longer. But now I think that the AI part of this is going to be just as important in smartglasses being widely adopted as any of the augmented reality features.”
This vision aligns with our past construct of “lite AR,” which is much more realistic and practical today. This “art of the possible” viewpoint also sidesteps AR glasses’ style crimes: lite AR can be delivered on sleek hardware that people will actually wear.
Panning back, we may see two tracks develop. One is lite AR, delivered through realistic wearable smart glasses. The second is a key trend happening elsewhere in the XR spectrum: Mixed Reality delivered through passthrough AR. Eventually, these two tracks could converge.
Prompted and Proactive
Sticking with lite AR, a key challenge – just ask Google Glass and North Focals – is that text or voice overlays can be a bit underwhelming. After years of hype around AR’s media-rich future, proponents of lite AR could be accused of thinking small. It falls short of AR’s past promises.
But the risk of an underwhelming UX can be counterbalanced by things that are experientially meaningful. We’re talking notifications by text, audio, or haptics that tap into the social graph (nearby friends), the interest graph (food & fashion), or any other form of situational awareness.
All these things tie back to Zuckerberg’s comment and the magic word: AI. With recent inflections in AI – including generative and conversational AI – the “experientially meaningful” part comes into view. It’s all about intelligent and personalized AI to augment our daily travels in smart ways.
Put another way, AR content delivery could carry value that’s tied not to graphical intensity or dimensionality but rather to personalization and situational relevance. And that can be done with text and audio, which means, again, stylistically viable smart glasses that people will actually wear.
Lite AR also deviates from common AR connotations in that it’s ambient. It sits in the background until prompted or proactive. The former involves pull-based on-demand info (think: ChatGPT), while the latter plays the role of push-based discovery engine that speaks up when relevant.
To be fair, the marriage of AI and AR isn’t new. Flavors of AR such as visual search and “captions for the physical world” rely on machine learning and object recognition, which require AI training. But the latest wave of AI has pushed the technology up the list of AR’s key ingredients.
Achilles Heel
The remaining question is, who’s best positioned to do all of the above? AR success factors previously included things like scene mapping and optics. Meta is working on those things and is better positioned than anyone, given its massive investments in Reality Labs.
But now, AR could be won on a combination of those factors and whoever has the best AI. The latter has been democratized to some degree, given the APIs available from OpenAI. Indeed, OpenAI’s primary business model will likely be a B2B2C licensing play (alongside some B2C revenue).
But when looking broadly at who has the best underlying AI among big tech players, Google stands out, as does Microsoft (including its investments in OpenAI). Meta is also investing heavily in proprietary AI technology including Llama 2, which runs parallel to its XR efforts.
The elephant in the room, of course, is Apple. It could be best positioned in hardware elegance and vertical integration, and it has set the bar with Vision Pro. But as we’ve examined, its throbbing Achilles heel is Siri. Can it solve that problem between now and AVP’s release?
Either way, AR and AI will be joined at the hip. Any meaningful AR experience – whether lite or heavy – will be fueled by AI. It’s no longer just a game of localization, scene mapping, and dimensional interaction. In other words, SLAM may no longer be AR’s most important acronym.