As we roll into 2025, it’s time for our annual ritual of synthesizing the lessons of the past twelve months and formulating the outlook for the next twelve. 2024 was an incremental year for AR and VR, both of which continued to push gradually toward mainstream traction.

Highlights include the rise of mixed reality as a standard in VR, non-display AI glasses, and next-gen hardware like Meta Orion and Snap Spectacles. We also saw the symbolic and practical end of the previous era of XR, as defining devices like Microsoft HoloLens retreated from the market.

So where is spatial computing, and where is it headed? Our research arm ARtillery Intelligence’s recent report Spatial Computing: 2024 Lessons, 2025 Outlook tackles these questions. After recently publishing 2025 predictions here on AR Insider, we shift gears to 2024 lessons.

To that end, what were the biggest takeaways in 2024 in the wide world of spatial computing? There were many, but today we’ll zero in on XR’s most promising format: headworn AR. What’s the state of the sector? Where is value being created today? And who’s leading the way?


The Landscape

After the last installment of this series covered Mobile AR, we switch gears to headworn AR. Though AR glasses haven’t arrived en masse, they represent the modality that will unlock AR’s true potential. In fairness, AR glasses have arrived more meaningfully in the enterprise.

There, AR glasses’ style crimes aren’t the barrier that they are in consumer markets. But longer term, consumer/enterprise spending shares could flip as AR glasses gain style and wearability. Consumer markets are generally larger than enterprise markets due to sheer population size.

In the meantime, today’s AR glasses landscape is subdivided by various hardware classes. At one end of the spectrum is dimensional AR (SLAM), represented by Meta Orion and Snap Spectacles. This is AR’s most immersive modality, but it involves higher cost and hardware bulk.

Dimensional AR is also represented in passthrough video devices like Apple Vision Pro and Meta Quest 3. In this approach, also known as mixed reality, graphics interact with an incoming video feed from passthrough cameras (we classify this under VR, covered in the next installment of this series).

Elsewhere in the AR hardware spectrum is flat AR. This defines AR devices that dial down the optical complexity in order to gain wearability and affordability. For example, devices like Xreal One offer private immersive viewing for flat/floating content such as movies and 2D games.

Though less immersive than dimensional AR, private big-screen viewing of familiar formats (2D movies and games) resonates with users, especially when delivered at a lower price in a more stylistically viable vessel. Consumer surveys from our research arm, ARtillery Intelligence, validate this.


Information vs. Optics

Elsewhere in the AR hardware spectrum is another newfound consumer favorite: audio-only AI glasses. These deliver basic information via audio. Here, the value lies not in graphical complexity or dimensionality, but in content relevance. It’s more about information than optics.

We’re talking social signals (are my friends nearby?), interest-graph signals (where is the closest coffee shop?), and commerce signals (where do I buy that jacket?). This capability has been amplified due to recent inflections in underlying AI technologies such as large language models.

Or, as Mark Zuckerberg put it from the stage at the recent Meta Connect conference:

“Before the last year’s AI breakthroughs, I kind of thought that smart glasses were only really going to become ubiquitous once we really dialed in the holograms and the displays, which we are making progress on but is somewhat longer. But now I think that the AI part of this is going to be just as important in smart glasses being widely adopted as any of the augmented reality features.”

All the above takes form most notably in Ray-Ban Meta Smartglasses. Multimodal AI lets them “see” the world with visual inputs, then communicate to the wearer through audio output. This includes visual search functions like identifying real-world objects or shoppable products.

All the above boils down to the art of the possible. Audio-AI glasses “get real” by leaning into the best of what’s possible today, rather than trying – and failing – to be something they can’t. The latter has defined many overly-ambitious attempts at AR glasses over the past decade.


Tomorrow’s Tech

That said, we don’t wish to discourage technological ambition. In fact, though there’s a general “get real” movement in AR, as noted, we’re also seeing worthwhile moonshots for tomorrow’s tech. A key distinction from past efforts is that these devices admit that they’re prototypes.

The exemplar here is of course Meta Orion. It’s a glimpse into the future, showing the extent of AR technical achievement that’s possible today. Though it’s a working prototype for internal development, it demonstrates to the broader public where AR is headed.

In that sense, though the device isn’t available for public use, nor will it be in the near term, it has reinvigorated excitement in the AR industry. And as we predicted recently, Orion could renew faith in the challenged area of optical see-through AR. Apple may or may not follow suit.

Meanwhile, Snap Spectacles was unveiled in September, offering a dimensional AR experience in a package that – though somewhat bulky – is a step closer to stylistic viability for consumer markets. For now, it’s available only to developers, but it likewise signals the direction AR is heading.

We’ll pause there and return in the next installment with more sub-sector breakdowns, including VR and mixed reality. Meanwhile, check out the full report.
