“Vantage” is AR Insider’s editorial contributor program. It enlists spatial computing executives and innovators for first-hand strategic insights. Find out more or contact us to participate here. Authors’ opinions are their own.  


Charting a Path to Viable Consumer AR Glasses, Part II

Industry Assumptions and their Implications

by Jason McDowall

Welcome back to our series on the path to viable consumer AR. Over four parts, we’ll explore the biggest barriers and potential solutions to making consumer-grade Augmented Reality (AR) glasses that people will actually wear.

Part 1: The two biggest barriers holding wearable displays back are visual quality and device comfort, which hinge on the display and optics technology.
Part 2 (You are here): The industry currently holds several assumptions about how displays and optics need to work, which impose very difficult implications on the overall design of AR glasses.
Part 3: MicroLED and laser scanning displays hold promise and challenges.
Part 4: A different set of assumptions, with the right breakthrough innovation, may yield a better path.

Dead on Arrival

Despite billions of dollars invested by the world’s biggest brands and some crafty startups, Augmented Reality (AR) glasses have been dead on arrival so far. In Part 1 of this four-part series, we explored how display and optics technologies have an outsized impact on visual quality (of both the digital image and the real world) and on device comfort (both physical and social). We also highlighted some of the multi-layered challenges in making truly wearable displays viable for everyday use. (Remember, we are using the term “AR glasses” broadly to reference head-worn devices that allow you to directly see the real world as well as digital content, whether basic 2D content or more immersive 3D experiences tied to the real world.)

The predominant approach in the industry is to take dumb displays and pair them with optics that are complex and inefficient, bulky, or both. To understand why today’s displays are dumb and today’s optics are inefficient and/or bulky, let’s break down the problem and take a closer look at the current solutions.

Spray and Pray Approach

In the quantum realm, a single element of light, a photon, is born from an electron: an energized electron emits a photon when it returns to a lower energy state. The direction the photon begins its journey is random, so the light spreads out as it emerges from its source within a display. When that display hangs on the wall or rests in our hands, this attribute is a feature: we can view the content from a wide variety of angles, which is helpful when a group of people gathers around to watch the season opener.

However, when we wear the display on our face, only one person views the content, and we don’t look directly at the display, which sits off to the side. Somehow we have to channel the light from the display and combine it with the light from the real world, while keeping the whole contraption small and light enough to be comfortable (physically and socially). Any light that doesn’t reach our eyes is wasted, and wasted light contributes to more heat, bigger batteries, and bigger device size.

In AR glasses, the “feature” of spreading light becomes a massive bug. This brings us to the first point about the current industry mindset: Display technology is dumb. Dumb displays spew light where it’s not useful, resulting in either wasted light (which equates to wasted energy as heat and a bigger battery) or wasted space (which equates to added weight and bulk).
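
To put rough numbers on the waste (assumed figures for illustration, not measurements of any particular device): if only about 1% of the light a display emits ever reaches the eye, the display has to emit roughly 100 times more light than is actually used, and nearly all of the excess ends up as heat in the frame and as drain on the battery.

```python
# Toy illustration of why wasted light matters (assumed numbers, not measurements)
optical_efficiency = 0.01    # assumed: only 1% of emitted light reaches the eye
useful_light_mw = 1.0        # assumed optical power actually needed at the eye, in milliwatts

emitted_light_mw = useful_light_mw / optical_efficiency   # what the display must emit
wasted_light_mw = emitted_light_mw - useful_light_mw      # absorbed in optics and housing as heat

print(f"Display must emit ~{emitted_light_mw:.0f} mW to deliver {useful_light_mw:.0f} mW to the eye")
print(f"~{wasted_light_mw:.0f} mW goes to waste, largely as heat")
```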

Corralling The Light

This “bug” in today’s display technologies is solved by adding optical elements (e.g., lenses and reflective surfaces). We use “collimating optics” to catch the light as it comes out of the display (and/or out of the separate light source within the display system, for DLP and some LCDs) and point it in a useful direction. Useful in this case means pointed toward the lens through which we see both the digital information and the real world. These techniques aren’t perfect: the more of the light we try to catch, and the more its direction needs to be changed, the bigger the optical elements (and/or the greater their number and spacing), and the lower the overall efficiency.
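
To get a feel for why catching the light is hard, consider an idealized Lambertian emitter, which many display pixels roughly approximate: the fraction of its light that falls within a collection cone of half-angle θ is about sin²θ. The sketch below uses assumed half-angles for illustration only.

```python
import math

def lambertian_collection_fraction(half_angle_deg: float) -> float:
    """Fraction of a Lambertian emitter's light falling inside a cone of the given half-angle."""
    return math.sin(math.radians(half_angle_deg)) ** 2

# Assumed collection half-angles, for illustration only
for half_angle in (10, 20, 30, 45):
    frac = lambertian_collection_fraction(half_angle)
    print(f"collection half-angle {half_angle:>2} deg -> ~{frac:.1%} of emitted light captured")
```

The narrower the cone the optics can accept (which is what tight collimation demands), the smaller the fraction of light collected, unless larger or additional optical elements are added to gather more of it.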

The challenge of redirecting light within see-through wearable displays gets harder from here. After pointing the light in a useful direction, the light then needs to get inserted into the see-through lens. These “input coupling optics” take the light from the display and get it traveling into the side of the lens so that it can then be redirected out into our eyes at the right spot.

The effectiveness and efficiency of “in-coupling” the light depend on the type of lens the light is going into. These “combiner optics” are typically semi-transparent curved mirrors (e.g., the birdbath design of Nreal) or diffractive waveguides (e.g., HoloLens and Magic Leap). Curved mirrors are relatively efficient at in-coupling and out-coupling the light, but they are bigger and bulkier.

[For readers who are more optically inclined, I know I’m simplifying a broad range of mirror/reflector and waveguide approaches, but the challenges are similar. And the most popular approaches right now are birdbath-style curved mirrors and diffractive waveguides.]

To their credit, diffractive waveguides are much thinner. Unfortunately, they are also much less efficient, not to mention more complex, expensive, and fragile. Here, the input coupling is a massive challenge: on the way in, the light gets bounced in many different directions, most of which are harmful to the visual experience. The challenges of diffractive waveguides are compounded by the fact that the light in-coupling area is typically much smaller than the display; this difference means more light manipulation is required, which brings with it more optics, bulk, and inefficiency.

Companies such as WaveOptics, DigiLens, Dispelix, Holographix, Vuzix, Microsoft, Magic Leap, and others are developing and/or licensing diffractive waveguides. Lumus is pursuing a slightly different approach: reflective waveguides, which are used in the latest offering from Lenovo. These reflective waveguides are more efficient at coupling the light, but more complex to manufacture.

Due to their thin profile, these waveguide technologies have been deemed the best hope of achieving wearable displays that look like normal glasses. That consensus persists despite the inefficiencies and their implications for the display and other optics in the system.

This brings us to the next standard industry assumption: Optics must be complex and inefficient, or bulky, or both—to compensate for the dumb displays.

You’re One in a Million

The industry expects wearable displays to be the primary interface for the next major personal computing platform, enabling heads-up, hands-free, just-in-time contextual insight. The expectation is that this won’t be a niche product, but one embraced by billions of people. In the smartphone era, those billions have been satisfied with a small assortment of product variations, such as a standard and a larger size. The way we use the devices, and the relatively modest differences in our hand sizes, have made this feasible.


Unfortunately, the variations in our eyes and noses really compound the challenges of designing wearable displays to support billions of people. For the vast majority of us, our eyes are spaced about 6.4 centimeters apart, plus or minus 1 centimeter. This may not seem like a lot, but when trying to align tiny displays right up against our faces, it can be the difference between clearly seeing the digital image and seeing no image at all.

Yet the industry still wants to keep AR glasses to one or two versions, because of the cost and complexity of the optics, perhaps combined with logistical challenges and relatively low-volume production in the early years.

To accommodate the variation in humans, while maintaining a one-size-fits-most (or two-sizes-fit-all) pair of glasses, the virtual image must be viewable from a wide range of positions behind the lens. We call this area behind the glasses where we can see the image the “eyebox,” and the implication is it must be large. (Note, this is not the same as the field of view, which is the amount of your vision covered by the digital image when you are able to see it.)
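
As a rough sizing sketch (with assumed numbers): if a single hardware version centers its optics on a 64 mm interpupillary distance but wearers range from roughly 54 mm to 74 mm, each eye can sit up to about 5 mm off-center, and the eyebox has to cover that offset plus the eye’s pupil and some margin for eye rotation and frame slippage.

```python
# Rough eyebox-width estimate for a one-size-fits-most design (assumed numbers)
nominal_ipd_mm = 64.0                   # assumed IPD the optics are centered on
min_ipd_mm, max_ipd_mm = 54.0, 74.0     # assumed range across the adult population
eye_pupil_mm = 4.0                      # typical pupil diameter (varies with lighting)
margin_mm = 3.0                         # assumed allowance for eye rotation and frame slippage

# Each eye sits half the IPD from the nose, so the per-eye offset is half the IPD deviation
per_eye_offset_mm = max(nominal_ipd_mm - min_ipd_mm, max_ipd_mm - nominal_ipd_mm) / 2.0

eyebox_width_mm = 2 * per_eye_offset_mm + eye_pupil_mm + margin_mm
print(f"Per-eye lateral offset: up to ±{per_eye_offset_mm:.0f} mm")
print(f"Minimum horizontal eyebox: ~{eyebox_width_mm:.0f} mm")
```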

To accommodate a large eyebox, the favored waveguide approach is to make the display appear in more than one spot at the same time. The technique is called “pupil replication,” which, while an impressive engineering accomplishment, compounds the requirements for the display system: the display must be much brighter to account for all of the light that’s spread across multiple locations, or “exit pupils.” In this approach, the light from a small display is crammed into an even smaller opening (the in-coupling optics) and then distributed across a bunch of exit spots to make the image viewable from a larger area behind the glasses.
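
A crude way to see the brightness cost (assumed geometry, and a simplification of how waveguides actually behave): the eye’s pupil samples only a few millimeters of the eyebox at any instant, so light replicated across the whole eyebox is mostly spent on viewing positions that are not currently in use.

```python
import math

# Crude estimate of the brightness penalty from spreading light across a large eyebox
# (assumed geometry; real waveguide behavior involves more than simple area ratios)
eyebox_w_mm, eyebox_h_mm = 12.0, 9.0    # assumed eyebox dimensions
eye_pupil_mm = 4.0                      # typical pupil diameter

eyebox_area_mm2 = eyebox_w_mm * eyebox_h_mm
pupil_area_mm2 = math.pi * (eye_pupil_mm / 2) ** 2

spread_factor = eyebox_area_mm2 / pupil_area_mm2
print(f"Light is spread over ~{spread_factor:.0f}x the area the pupil samples at any instant")
```

That spreading factor stacks on top of the in-coupling and out-coupling losses described above.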

To compensate for the light lost in expanding the viewing area, as well as the light lost corralling it and getting it into the waveguide, we arrive at our last assumption: displays must be extremely bright — to compensate for the inefficiencies of the optics and the limited product variations.

How inefficient? For diffractive waveguides, well over 99% of the light emitted by the display never makes it to our eyes. To get an image that is viewable outdoors, the display feeding a pupil-replicating waveguide needs several million nits of brightness. By comparison, the display you’re using to read this article produces a few hundred nits. So the industry-favored waveguide approach needs a display at least four orders of magnitude brighter than your phone or computer screen…in a device that you wear on your head.
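
Putting rough numbers on the chain (assumed values, consistent with the figures above): if only about 0.01% of the display’s light reaches the eye, then delivering an outdoor-viewable few hundred nits at the eye requires the display itself to produce millions of nits.

```python
# Back-of-the-envelope brightness requirement (assumed values for illustration)
target_at_eye_nits = 500        # assumed target for an image viewable outdoors
system_efficiency = 0.0001      # assumed: roughly 0.01% of display light reaches the eye

required_display_nits = target_at_eye_nits / system_efficiency
typical_screen_nits = 500       # rough brightness of a typical phone or monitor

print(f"Required display luminance: ~{required_display_nits:,.0f} nits")
print(f"That is ~{required_display_nits / typical_screen_nits:,.0f}x a typical phone screen")
```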

Next Up… 

To recap, the industry has collectively made a handful of assumptions:

Display technologies are dumb—because they need a lot of help conditioning the light to be useful.
Optics must be complex and inefficient, or bulky, or both—to compensate for the dumb displays.
Unique product variations must be few—to compensate for the complexity of the optics.
Displays must be extremely bright—to compensate for the inefficiencies of the optics and limited product variations.

So what’s the answer? Is it laser scanning displays or emerging microLED displays paired with diffractive waveguides? Or are there viable alternative approaches? We’ll explore possible answers next week in Part 3.

Jason McDowall is VP of Visual Experience at Ostendo Technologies, and the creator and host of the AR Show podcast.

 
