One of the most pronounced trends in spatial computing over the past year is the rise of smart glasses. Compared to previous generations of AR glasses, they're toned down in their visual UX. That tradeoff has brought down prices and, paradoxically, broadened their appeal.

So what are the dynamics of this emerging XR device class? This is the topic of a report from our research arm, ARtillery Intelligence, which breaks down drivers & dynamics. It’s also the topic of a recent AR Briefs episode, which you can see below, along with summarized takeaways.

First, to define smart glasses: they aren’t as optically advanced as full-fledged AR glasses, which have dimensional interactions with physical space. Instead, they have either flat overlays or no visuals at all (more on that in a bit). This simplified approach is sometimes called “lite AR.”

One factor driving this approach is that the XR world has begun to get more realistic. After spending a decade chasing an overly ambitious vision that combines graphical richness with style and wearability, most XR players have realized they can’t do both – at least not yet.


Relatable & Resonant

One example of this realistic approach is display glasses from Xreal and VITURE. They’re built for the focused purpose of private virtual screens for 2D gaming and entertainment. That’s not only focused, but it targets a use case that’s relatable and resonant with a large consumer market.

Another factor driving smart glasses is everyone’s favorite topic: AI. As it’s increasingly integrated into smart glasses like Ray-Ban Metas, it unlocks utilities and use cases such as object recognition and visual search. This takes the burden off complex visuals as AR’s selling point.

The result is experiences whose value lies not in graphical intensity but in the personalization and relevance of the information being delivered. This includes personal alerts, social signals, and identifying people & things. It can all occur via simple text or even just audio.

The point is that it’s more about information than optics. Or to put it another way, it’s experientially meaningful, even if graphically underpowered. But the real kicker is that without all those optical requirements, smart glasses can hit an elusive target: style and comfort.


Art of the Possible

Zeroing in on audio-only smart glasses – also known as non-display AI glasses – the exemplar is Ray-Ban Meta Smart Glasses, as noted. Beyond fundamental specs like quality microphones and speakers, multimodal AI has emerged as the device’s primary selling point.

It can identify and contextualize real-world objects using visual and audible inputs (e.g., “Hey Meta, what am I looking at?”) and audible outputs. This functionality will continue to improve as AI models themselves do, and use cases will broaden as consumers get creative with it.

Winding down, one lesson from all the above is to focus on the art of the possible. Many of these devices do their best with today’s available tech, rather than trying and failing to be something overly ambitious – the path AR glasses (e.g., Magic Leap) have taken to date.

The end result is approachable, affordable, and stylish flavors of AR that serve as a bridge to the full-fledged AR glasses we’ll see someday. This training-wheels approach is arguably where AR glasses should have started in the first place – and where the category is now back on track.
