As we approach a new year, it’s time for our annual ritual of synthesizing the lessons from the past twelve months and formulating the outlook for the next twelve. 2023 was an incremental year for AR & VR, which both continue to gradually trudge uphill toward mainstream traction.
Highlights this year include the beginnings of XR’s convergence with AI, Apple Vision Pro’s unveiling, and some ups and downs for VR. 2023 was also defined by the rise of passthrough AR that’s incubated within VR a la Quest 3, as well as some smart glasses milestones.
These ends of the spectrum – passthrough AR and lightweight smart glasses – represent two paths toward AR’s holy grail. The former will get there through graphically rich UX in bulky form factors that slim down over time, while the latter gains UX richness to accompany its wearability.
Until then, what does spatial computing’s near term look like? Aligned with the more extensive predictions of our research arm, ARtillery Intelligence, we’ve devised five AR Insider predictions for 2024. We’ll break them down weekly here, starting with prediction 1: AR and AI Get Hitched.
Prediction 1: AR and AI Get Hitched
Prediction 2: A Smart Glasses Turning Point
Prediction 3: New APIs & SDKs Elevate XR
Prediction 4: Mixed Reality, The New VR Standard
Prediction 5: Apple Vision Pro is Propelled by Wearables
There’s a common misperception that the latest wave of excitement and investment in AI replaces that of XR. This overlooks the reality that XR and AI go together and make each other stronger: XR can be the face of AI, while AI is the brains of XR. There are several ways this materializes.
For example, one convergence point is what we’ve been calling “generative XR.” Just as generative AI creates 2D art from text prompts, generative XR can achieve similar outcomes for XR creators. This creates opportunities to streamline content-creation bottlenecks like 3D model generation.
But in the same breath, it should be noted that AI isn’t a silver bullet… talent and rigor are still needed. Rather, AI can streamline rote aspects of XR workflows. It can also be an inspirational tool for rapidly prototyping concepts and designs – an idea that’s only getting started.
We’ll also see more development around conversational AI, a la ChatGPT. It’s driven by large language models that enable conversational UX. This evolves past the search-dominant era where we’ve been trained to retrieve information via keywords rather than full sentences.
Sticking with conversational AI, the opportunity in headworn AR is to unlock natural-language interfaces, which add value given the lack of traditional inputs like keyboards. Apple Vision Pro’s gestural inputs are a step forward, but a conversational UX will enable full agency.
Beyond the interface, the brains behind it have to be functionally adept (not Siri). This involves personalized ambient assistants that guide users through their day via audio or text, covering everything from wayfinding to commerce to relevant alerts from their social graph.
One benefit of such exchanges is that their relevance can take the place of advanced graphics as a primary value driver. And the timing is right as the realization has set in that advanced and dimensional visual AR won’t arrive anytime soon in wearable all-day glasses.
So in 2024, AI-driven personal assistants and information delivery – as opposed to a graphically driven UX – will become a design target and value driver in smart glasses. At the higher end, Apple will develop an AI engine for Vision Pro that isn’t Siri… at least not in its current form.
We’ll pause there and circle back next week with another 2024 prediction. Meanwhile, see more color in our full report on the convergence of XR and AI.