A trend has emerged over the past year: the AR world has started to get real. After a decade spent chasing an overly ambitious vision of immersive visuals and wearability in one device, the AR glasses industry is starting to acknowledge that it can’t deliver both at the same time.

This shift shows up in devices with narrower, more focused, purpose-built use cases. They contrast with previous hardware standards like Microsoft HoloLens 2 and Magic Leap 2 – venerable devices in their own right, but burdened with do-everything bulk that only works for enterprises.

One example of the focused alternative is the Xreal Air 2. It’s built for the single purpose of private viewing on a massive virtual screen for games and entertainment. It’s not only focused; it targets a use case that’s relatable and resonant for most consumers. Our survey data supports that.

Another example is Ray-Ban Meta smart glasses. They achieve a lauded UX through a “lite AR” approach. Here, Meta didn’t just tone down the visual experience but sidestepped it altogether: there’s no display system at all, but rather audio annotations for the physical world.

Ray-Ban Metas Steal the Show

Experientially Meaningful

We’ve been obsessed with this “lite AR” construct for years. But there was always one big challenge: it’s a bit underwhelming. At a time when XR advocates dreamed big and loud, “lite” wasn’t in the vocabulary. But two things changed: “getting real” (per the above), and AI.

Though AI isn’t new, recent inflections have elevated personal assistant functions. Applied to AR, this takes the burden off visuals as a central selling point. Freed from that burden, there’s less of a dilemma in the classic tradeoff between a robust visual UX and style/comfort.

The result is experiences whose value lies not in graphical intensity but in the personalization and relevance of the information being delivered. It’s more about information than optics. It’s experientially meaningful, even if graphically underpowered. And, again, it can be stylish.

That brings us back to Ray-Ban Meta smart glasses. Besides upgraded audio, video capture, and live-streaming functions, multimodal AI brings meaningful utility to the table. It can identify and contextualize real-world objects using visual input and voice output (hence multimodal).

The New Face of AR: Ambient & Intelligent

Art of the Possible

All the above boils down to the art of the possible. Meta, Xreal, and a few others have internalized AR’s shortcomings and designed around them. The result is the best version of what’s possible today, rather than trying – and failing – to achieve what current technology can’t deliver.

The other thing the latest batch of focused smart glasses represents is a bridge to the fully actualized smart glasses we really want. Until we reach that dream of immersive optics and style/comfort in the same package, individual devices are starting to pick a lane.

But rather than feeling like a means to an end, many of these devices offer real appeal and utility. And that will only get better. In the meantime, Apple Vision Pro looms over everything, showing what’s possible at the opposite end of the UX spectrum. It is smart glasses’ counterpoint.

Put another way, Vision Pro sits at one end of the sliding scale between UX and style/comfort, while smart glasses sit at the other end. The goal – and it could take decades – is for these two endpoints to achieve the best of both worlds when they meet somewhere in the middle.
