Smart glasses have seen their ups and downs – from the early excitement around Google Glass to the device’s famous flameout. Before its time and confused about its identity, Glass has since become a cautionary tale and the butt of jokes throughout AR land.

But that cautionary tale ended up serving the market well, leading to the evolutionary point where we now sit. Though there are still miles to go, the past year has seen an inflection in excitement, investment, and market reception for smart glasses. It’s the XR sector’s new hope.

This emergence of smart glasses was born from a practical reset in the AR world, combined with the rise of AI. The former is all about a reality check on XR’s prevailing design principles of the past decade, which favored graphical richness over practicality.

That approach birthed devices like Magic Leap 2 – boasting a rich UX but a form factor that no one will wear in public. More recently, the pendulum has swung towards a toned-down UX whose selling point is situational intelligence rather than visuals, à la Ray-Ban Metas.

This is the topic of a recent report from our research arm, ARtillery Intelligence. As such, it joins our weekly excerpt series to highlight the best bits and bites from long-form works. This week, we dive into sections of the report that break down the driving factors for smart glasses today.

Smart Glasses: A New Hope for Augmented Reality

Information vs. Optics

In the last installment of this series, we examined factors driving the smart glasses movement, including new design standards and a trend towards prioritizing utility over visuals. This can be seen in “flat AR” glasses from the likes of Xreal, and non-display glasses like Ray-Ban Metas.

Beyond more focused approaches, another development in smart glasses is the same one that has captured the tech world’s attention: AI. This involves personal-assistant functions that deliver relevant situational intelligence to the user through simple graphics or even just audio.

In doing so, AI compensates for shortcomings in graphical performance by excelling at other valuable functions, such as information relevance. In other words, personalized assistant functions and multimodal AI take the burden off visuals as the primary smart-glasses selling point.

Freed from that burden, smart glasses can sidestep the hardware bulk that comes with optical systems. The value lies not in graphical intensity but in the relevance of the content being delivered. It’s more about information than optics. It’s experientially meaningful, even if it’s graphically underpowered.

What’s Driving the Smart Glasses Movement?

Prompted & Proactive

As for use cases, we’re talking about notifications by text, audio, or haptics that tap into the social graph (nearby friends), the interest graph (food & fashion), or other situational awareness. Meanwhile, multimodal AI has become a killer app for identifying and contextualizing objects.

Because it’s often text or audio, this approach avoids the AR design challenges noted above and enables hardware that consumers will actually wear in public. These use cases also deviate from traditional AR connotations in that they’re ambient. They sit in the background until prompted or proactive.

“Prompted” functions involve pull-based on-demand info (think: ChatGPT), while proactive functions play the role of a push-based discovery engine that speaks up when relevant. This idea of ambient computing has been AR’s promise since its inception. And AI will unlock it.
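To make that pull/push distinction concrete, here’s a minimal sketch in Python. It is purely illustrative: the names (`Context`, `handle_prompt`, `maybe_notify`) and the toy logic are our own hypothetical constructs, not any actual smart-glasses SDK. The point is simply that a prompted function answers only when asked, while a proactive one stays silent unless context makes a notification relevant.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Hypothetical situational signals a pair of smart glasses might hold."""
    nearby_friends: list[str]   # social graph (nearby friends)
    nearby_places: list[str]    # interest graph (food & fashion)

def handle_prompt(query: str, ctx: Context) -> str:
    """Prompted (pull): the wearer asks, the assistant answers on demand."""
    if "friends" in query.lower():
        names = ", ".join(ctx.nearby_friends) or "no one right now"
        return f"Nearby: {names}"
    return "I don't have an answer for that."  # placeholder fallback

def maybe_notify(ctx: Context) -> str | None:
    """Proactive (push): speak up only when something is relevant."""
    if ctx.nearby_friends:
        return f"{ctx.nearby_friends[0]} is nearby."
    if ctx.nearby_places:
        return f"You're passing {ctx.nearby_places[0]}, which matches your interests."
    return None  # ambient: say nothing

ctx = Context(nearby_friends=["Alex"], nearby_places=["Cafe Luma"])
print(handle_prompt("Any friends around?", ctx))  # pull-based, on demand
print(maybe_notify(ctx))                          # push-based, when relevant
```

The design choice to model the proactive path as a function that can return nothing is the “ambient” part: most of the time, the assistant’s correct output is silence.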

Either way, AR and AI will be joined at the hip. The former is no longer just a game of optics, localization, scene mapping (a.k.a. simultaneous localization and mapping, or SLAM), and dimensional interaction. In other words, SLAM may no longer be AR’s most important acronym.

We’ll pause there and circle back in the next installment of this series with more smart glasses market dynamics. Meanwhile, check out the full report... 
