As AR evolves, it continues to branch into several form factors. Two overarching approaches are passthrough AR and see-through AR. The former is employed by Quest 3 and Apple Vision Pro, while the latter is seen in Snap Spectacles and Meta Orion, among others.

Within see-through AR, we can further segment by functionality and UX. This ranges from non-interactive visuals (which we call “flat AR”) to fully interactive and immersive visuals (which we call “dimensional AR”). Sometimes there’s no display at all, as with AI smartglasses.

As these approaches continue to evolve and diverge, they often cause confusion among those outside of the XR world. To help define and contextualize this spatial spectrum, our research arm ARtillery Intelligence recently published a report on the latest developments in AR glasses.

So to amplify that exercise, we’re relaying takeaways and breaking down some of these device classes and their attributes. To stay focused, we’ll stick with AR in this article. That said, VR is present if you consider that passthrough AR is most often housed in VR headsets.

Main Divisions

Diving in, what are the main divisions in headworn AR? The first delineation, as noted, is passthrough versus see-through. Let’s tackle that first, then further subdivide.

Passthrough AR includes devices like Quest 3 and Apple Vision Pro (and, technically, any smartphone-based AR). These devices employ exterior HD color cameras that feed the outside world into the device’s displays. In this approach, also known as mixed reality, graphics are then integrated in ways that interact with physical spaces. Advantages include a larger field of view, control over every pixel, and better color contrast. It can also do things that line-of-sight vision can’t, such as magnifying objects, correcting vision, and other “superpowers” that will be developed over time. But downsides include device bulk and social isolation.

See-through AR involves transparent lenses onto which graphics are projected. Though approaches can vary (more on that in a bit), this often involves waveguide-based projection systems that guide light to its destination, paired with liquid crystal on silicon (LCOS) microdisplays. Examples include Snap Spectacles and Meta Orion, while advantages include greater awareness of one’s surroundings and, often, a lighter form factor. This translates to greater potential for mobility and social acceptability – at least at see-through AR’s theoretical endpoints.

Formats and Flavors

Sticking with see-through AR, we can further subdivide into various formats and flavors. Here they are, in order of graphical dimensionality.

Audio-Only AI Glasses don’t have a display system at all but still augment one’s experiences through audio output. They often do so by seeing and sensing physical spaces via external cameras, then processing what they capture into informational, audible outputs. The exemplar here is Ray-Ban Meta smart glasses, whose multimodal AI can visually scan physical objects, process spoken queries (e.g., “What am I looking at?”), and reply audibly. This is a form of visual search, which is a flavor of AR that holds ample promise. Meanwhile, one of the hallmarks of AI glasses is that the lack of a display system lets them shed heat, bulk, and cost. This makes them stylistically viable. In other words, people will actually wear them.
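
To make that pipeline concrete, here’s a minimal sketch of the capture-query-respond loop that display-free AI glasses perform. Everything here is illustrative: the function names (capture_frame, transcribe, ask_vision_model, speak) are hypothetical placeholders rather than any real device API. The point is simply the flow – a camera frame plus a spoken question go to a multimodal model, and the answer comes back as audio.

```python
# Conceptual sketch of an AI-glasses visual-search loop.
# All functions below are hypothetical placeholders, not a real device API.
from dataclasses import dataclass


@dataclass
class Query:
    image_jpeg: bytes  # frame from the glasses' outward-facing camera
    question: str      # the wearer's spoken question, transcribed to text


def capture_frame() -> bytes:
    """Placeholder: grab a JPEG frame from the external camera."""
    ...


def transcribe(audio: bytes) -> str:
    """Placeholder: speech-to-text on the wearer's spoken request."""
    ...


def ask_vision_model(query: Query) -> str:
    """Placeholder: send image + question to a multimodal (vision-language) model."""
    ...


def speak(text: str) -> None:
    """Placeholder: text-to-speech through the glasses' open-ear speakers."""
    ...


def handle_voice_query(audio: bytes) -> None:
    frame = capture_frame()                      # 1. See: capture what the wearer sees
    question = transcribe(audio)                 # 2. Hear: "What am I looking at?"
    answer = ask_vision_model(                   # 3. Think: multimodal reasoning
        Query(image_jpeg=frame, question=question))
    speak(answer)                                # 4. Say: respond audibly, no display
```

Notably, there is no rendering step – and that absence is exactly what lets this device class shed the display stack’s heat, bulk, and cost.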

Flat AR involves display systems but low levels of immersion. Sometimes called “lite AR,” this involves non-interactive visuals and often a single-purpose use case. One example is Xreal Air 2, which executes on the focused use case of private, lean-back entertainment through large virtual displays. This approach emerged after years of chasing the elusive dream of dimensional AR (see below), with all its vexing design challenges and tradeoffs. Besides avoiding those technical challenges, flat AR engenders a slimmer form factor – though not as slim as non-display glasses – that makes it more viable for consumer markets.

Dimensional (SLAM) AR is defined by visuals that understand and interact with their surroundings, using what’s known as simultaneous localization and mapping (SLAM). These scene-interaction and dimensionality functions are computationally intensive, which makes this heavier flavor of AR challenging. To date, it has involved bulky hardware such as Microsoft HoloLens and Magic Leap, which largely failed in consumer markets and were forced to pivot to the enterprise. Even there, HoloLens has recently retreated from the market altogether. But despite those stumbling blocks, we’ve seen some bright spots in the last few months alone, including Snap Spectacles ’24 and Meta Orion. Dimensional AR also exists within passthrough AR, as with Apple Vision Pro.
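
For a feel of what the “localization” half of SLAM entails, here’s a minimal frame-to-frame visual-odometry sketch in Python using OpenCV. It only estimates camera motion between consecutive video frames; production SLAM on devices like Spectacles or Orion adds mapping, loop closure, IMU fusion, and continuous optimization, which is precisely what makes the workload so heavy. The camera intrinsics matrix K and the input video path below are assumptions for illustration.

```python
# Minimal monocular visual-odometry sketch: the "localization" side of SLAM.
# Assumes OpenCV and NumPy are installed; K and the video path are illustrative.
import cv2
import numpy as np

K = np.array([[700.0,   0.0, 320.0],   # assumed camera intrinsics
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

orb = cv2.ORB_create(nfeatures=2000)                    # feature detector/descriptor
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)


def estimate_motion(prev_gray, gray):
    """Estimate relative rotation R and unit-scale translation t between two frames."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(gray, None)
    if des1 is None or des2 is None:                    # not enough texture to track
        return np.eye(3), np.zeros((3, 1))
    matches = matcher.match(des1, des2)
    if len(matches) < 8:                                # too few correspondences
        return np.eye(3), np.zeros((3, 1))
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t


cap = cv2.VideoCapture("walkthrough.mp4")               # illustrative input video
pose = np.eye(4)                                        # accumulated camera pose
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    R, t = estimate_motion(prev_gray, gray)
    T = np.eye(4)                                       # 4x4 relative transform
    T[:3, :3], T[:3, 3] = R, t.ravel()
    pose = pose @ np.linalg.inv(T)                      # one common chaining convention
    prev_gray = gray

print("Estimated camera pose at end of video:\n", pose)
```

Even this toy loop runs feature detection, matching, and robust pose estimation on every frame – and it does nothing about building or persisting a map, which is where headworn SLAM hardware really starts paying in heat, weight, and battery.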

Element of Affordability

So there you have it. And to be fair, there are even more subdivisions. Within see-through AR, there are various configurations of optical systems, such as waveguides and LCOS microdisplays. Things further branch into display illumination (OLED, Micro-LED, etc.) and even retinal laser projection.

Returning to the initial demarcation of passthrough AR and see-through AR: the former is generally agreed to be more marketable today, while the latter is heralded as the long-run dominant form factor. That’s simply because it’s more conducive to wearability and social acceptability.

The thought is that it’s not realistic to expect people to wear passthrough-AR devices like Quest 3 while socializing, driving, or generally being seen in public. In other words, AR’s ideal form factor is an all-day wearable, which is more aligned with see-through AR’s theoretical endpoints.

For all these reasons, many see passthrough AR as the stronger option today. That view is propelled by the optical advantages noted above, as well as the validation that comes from Apple’s technical achievement with Vision Pro. Meta also deserves credit for combining passthrough AR with affordability.

But the same proponents consider see-through AR the dominant modality in the long run – at least in consumer markets. The catch is that the “long run” could mean decades. The good news, in the meantime, is that the steps along this evolutionary path are producing compelling devices today.
