Last August, ARtillery Intelligence’s monthly report focused on smart glasses. Given new standards set by Ray-Ban Meta smart glasses, Xreal, Viture, and others, smart glasses are seeing demand inflections. Much of this success is built on the principle of “lite AR.”

What is lite AR? After a decade of chasing heavy AR ambitions, many XR players got realistic about the technology’s shortcomings. They’re zeroing in on AR’s best self today, rather than aiming for overly ambitious ends. This means simpler visuals… or no visuals at all.

Following that narrative, we now embark upon the next chapter: heavy AR. Also known as dimensional AR, this is defined by visuals that understand and interact with their surroundings in dimensionally-accurate ways. It’s all about simultaneous localization and mapping (SLAM).

These scene-interaction and dimensionality functions are computationally intensive. That in turn forces design tradeoffs. For example, SLAM requires high-end graphical processing and presents challenges like heat dissipation, device bulk, and cost (Karl Guttag explains it best).

Though these challenges persist, we’re seeing notable evolution, such as hardware in the past year from Snap (Spectacles Gen 5) and Meta (Orion). But there’s still a long way to go. This is the topic of a recent ARtillery Intelligence report, which joins our weekly excerpt series.

Slim & SLAM: The Long Road to AR’s Holy Grail

Inching Ahead

After the last installment of this series examined some of the flavors of optical see-through (OST) AR – including non-display AI glasses, flat AR, and dimensional AR – we now go deeper into the latter. Specifically, what are some of the technical challenges that face the format?

Among other challenges, there are ongoing debates about OST display technologies. These include OLED, which may not be bright enough for AR glasses meant to compete with daylight. Micro-LED displays, meanwhile, are promising but may not be ready for prime time.

Laser beam scanning (LBS) has its detractors, given practical hurdles, but those are being addressed by the likes of Amalgamated Vision. Liquid Crystal on Silicon (LCoS) has inched ahead with some consensus, but it faces challenges of its own. So there’s no silver bullet, yet.

Beyond displays are other issues. AR headsets to date – everything from Magic Leap to HoloLens – have been too bulky for consumer adoption or all-day use. To be fair, the latest hardware is making strides, such as Snap Spectacles and Meta’s Orion prototype.

Many of these design challenges are due to the inverse relationship between graphical UX and wearability. The more visually immersive the user experience – including graphics that interact with physical scenes – the more bulk, heat, and cost are involved in the hardware.

Meanwhile, Moore’s Law benefits digital technologies, defining ongoing reductions in cost and size. But it applies to chip-based components, which cover only certain parts of AR glasses. Other aspects, such as optics and the physics of how light moves, aren’t aided by Moore’s Law.

Amalgamated Vision: The Second Coming of Virtual Retinal Displays?

Moving Targets

These physics-based challenges include vergence-accommodation conflict (VAC) and focal rivalry. Both stem from the way human vision perceives depth: our eyes automatically and dynamically focus on foreground and background objects as needed.

But with AR glasses, virtual images are only in focus when our eyes focus where those images actually sit: at the fixed focal distance of the display. That means digital objects are often at a different focal distance than the real-world objects they’re meant to interact with, causing focal rivalry.
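To put rough numbers on that mismatch, here’s a minimal sketch of focal rivalry in diopters (the optical power the eye needs to focus at a given distance). The two-meter focal plane and half-meter real object are illustrative assumptions, not specs from any particular headset.

```python
# Minimal illustration of focal rivalry, measured in diopters (1 / distance in meters).
# The eye can only accommodate (focus) at one distance at a time, so the virtual
# image and the real object it overlays can't both be sharp when their focal
# distances differ. Distances below are illustrative assumptions.

def diopters(distance_m: float) -> float:
    """Optical power the eye needs to focus at a given distance."""
    return 1.0 / distance_m

display_focal_plane_m = 2.0   # assumed fixed focal distance of the glasses' optics
real_object_m = 0.5           # assumed distance of a real object the content overlays

mismatch_d = abs(diopters(display_focal_plane_m) - diopters(real_object_m))
print(f"Virtual image:  {diopters(display_focal_plane_m):.1f} D")
print(f"Real object:    {diopters(real_object_m):.1f} D")
print(f"Focal mismatch: {mismatch_d:.1f} D -> one of the two appears blurred")
```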

Moving on to vergence-accommodation conflict: AR devices project a slightly different image to each eye to create a stereoscopic impression of depth. But when you look at an object, your eyes naturally converge on it – rotating inward more for objects that are closer to you.

Your eyes also focus or “accommodate” at the distance you are looking. In real life, vergence and accommodation are always working in harmony. The more your eyes verge, the closer you accommodate. But with virtual objects, those distances are out of sync.

While vergence works naturally with virtual images, accommodation is always at the distance of the display – for the same reasons noted above in the explanation of focal rivalry. The brain therefore receives contradictory information about vergence and accommodation.
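To make that contradiction concrete, here’s a minimal sketch that compares the vergence implied by the stereo imagery against accommodation locked at the display’s focal plane. The 63mm IPD and two-meter focal plane are assumed for illustration only, not drawn from any specific device.

```python
import math

# Simplified sketch of vergence-accommodation conflict (VAC).
# Vergence is driven by the stereoscopic (per-eye) images, so it tracks the
# virtual object's depth. Accommodation stays locked at the display's fixed
# focal plane. IPD and distances below are illustrative assumptions.

IPD_M = 0.063                 # assumed interpupillary distance (~63 mm)
DISPLAY_FOCAL_PLANE_M = 2.0   # assumed fixed focal distance of the optics

def vergence_angle_deg(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight when converged at distance_m."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

for virtual_depth_m in (0.5, 1.0, 2.0, 4.0):
    vergence = vergence_angle_deg(virtual_depth_m)      # follows the virtual depth
    accommodation_d = 1.0 / DISPLAY_FOCAL_PLANE_M       # stuck at the display plane
    vergence_implied_d = 1.0 / virtual_depth_m          # depth the eyes "expect" to focus at
    conflict_d = vergence_implied_d - accommodation_d   # zero only at the focal plane
    print(f"virtual object at {virtual_depth_m:.1f} m: "
          f"vergence {vergence:.1f} deg, accommodation {accommodation_d:.2f} D, "
          f"conflict {conflict_d:+.2f} D")
```

In this sketch, the conflict disappears only when the virtual object happens to sit at the display’s focal plane; everywhere else, vergence and accommodation disagree.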

So how are these issues resolved? There are a few methodologies, including LBS (see above), varifocal lenses that physically move forward and back, multiple depth planes (see Magic Leap 2), and light fields (see CREAL). Each is a moving target and an important evolution to watch in AR.

We’ll pause there and pick things up in the next installment of this excerpt series to go deeper on dimensional AR dynamics…
