Charting a Path to Viable Consumer AR Glasses, Part IV

Flipping Assumptions on their Head
by Jason McDowall

 

Welcome back to our series on the path to viable consumer AR. Over four parts, we’ll explore the biggest barriers and potential solutions to making consumer-grade Augmented Reality (AR) glasses that people will actually wear.

Part 1: The two biggest barriers holding wearable displays back are visual quality and device comfort, which hinge on the display and optics technology.
Part 2: The industry currently holds several assumptions about how displays and optics need to work, which impose very difficult implications on the overall design of AR glasses.
Part 3: MicroLED and laser scanning displays hold promise and challenges.
Part 4 (You are here): A different set of assumptions, with the right breakthrough innovation, may yield a better path.

Flipping Assumptions

For Augmented Reality (AR) glasses to achieve high visual quality (of both the digital image and the real world) and device comfort (both physical and social), the right combination of display and optics is essential. However, even spending billions of dollars pursuing the current set of assumptions hasn’t yielded truly wearable, consumer-grade, mass-market devices—and there is no Moore’s Law when it comes to displays and optics. (Remember, we are using the term “AR glasses” broadly to reference head-worn devices that allow you to directly see the real world as well as digital content, whether basic 2D content or more immersive 3D experiences tied to the real world.)

Flipping the assumptions on their head implies a new type of display technology is needed, which is no small undertaking. There’s a reason why big tech has either stayed away or struggled to produce a breakthrough in display technology: it’s notoriously difficult. In fact, over the last 150 years, the fastest time from conception to commercialization of a new display technology was 20 years. That was DLP, which is now commonly used in theater projectors, but also in some head-worn devices. Getting a new display technology to fit inside a device that can be worn on the head can take decades more—and, as we’ve seen, with marginal results so far. MicroLED is following a similarly slow path and still faces some fundamental challenges in meeting our goals, but we’re talking about something even beyond microLED. What does that look like, and how do we get there?

[Table: Old vs. New Assumptions for Consumer-Grade Wearable Displays]

Human Visual System and Maximum Efficiency

To circle back to an earlier point from Part 2 of this series, any light that doesn’t end up in our eyes is wasted. And wasted light means wasted energy, extra heat, extra size, and compromised aesthetics. This is the reason for all of the extra optics and effort to corral and condition the light from the head-worn display.
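To make the scale of that waste concrete, here is a back-of-envelope sketch in Python. Every number in it (a Lambertian-emitting microdisplay, the numerical aperture of the collection optics, a few percent efficiency through a diffractive waveguide) is an illustrative assumption for a conventional projector-plus-waveguide design, not a measurement of any particular product.

```python
# Back-of-envelope light budget for a conventional AR display chain.
# Every number below is an illustrative assumption, not a measured value.

# A bare (Lambertian) emitter radiates into a full hemisphere, but projection
# optics only capture rays within their numerical aperture, NA = sin(theta).
# For a Lambertian source, the captured fraction of total flux is NA^2.
collection_na = 0.35                # assumed f/1.4-class collection optics
collected = collection_na ** 2      # ~12% of the emitted light is collected

# Assumed downstream losses: relay/conditioning optics, then in-coupling,
# propagation, and out-coupling through a diffractive waveguide combiner.
relay_efficiency = 0.70             # assumed
waveguide_efficiency = 0.02         # assumed; often quoted in the low percents

to_eyebox = collected * relay_efficiency * waveguide_efficiency
print(f"Light reaching the eyebox: {to_eyebox:.2%} of what the panel emits")
# -> roughly 0.17% under these assumptions; nearly everything else becomes
#    heat, stray light, and drained battery.
```

Even if each assumed number is off by a factor of two, the conclusion holds: only a tiny fraction of the generated light ever does useful work.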

We can take this notion of wasted light one step further if we consider the needs of the human visual system. We perceive most of the color and high-resolution detail only through the densely packed sensors (cones) at the very center of our eyes. The rest of our vision is primarily covered by different light sensors (rods), which are good for low-light and motion sensing.

Furthermore, the brain stitches together information to form our perception of reality. The brain directs the eyes to jump from spot to spot to fill in details; each of these jumps is called a saccade, and they happen several times a second. The brain plans ahead, ignores the moments when the eyes are in saccadic motion, and then backfills the information after the eyes come to a rest. And in doing all of this, it does a brilliant job of taking in a few kilobits of information and convincing us that our entire visual field is colorful, high resolution, and always up to date.
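To ground this in something concrete: eye trackers commonly separate fixations from saccades with a simple velocity threshold, a scheme known as I-VT (velocity-threshold identification). Below is a minimal sketch, assuming a 120 Hz gaze stream and a 30 degrees-per-second threshold (both illustrative values); a display pipeline could use the saccade label to skip or simplify frames the brain will suppress anyway.

```python
from typing import List, Tuple

def classify_saccades(
    gaze: List[Tuple[float, float]],   # gaze angles in degrees (x, y) per sample
    sample_rate_hz: float = 120.0,     # assumed eye-tracker rate
    velocity_threshold: float = 30.0,  # deg/s; a common I-VT-style threshold
) -> List[bool]:
    """Label each gaze sample True if it falls inside a saccade.

    Minimal velocity-threshold (I-VT) sketch: compute angular speed between
    consecutive samples and compare against a fixed threshold. Real systems
    add filtering, blink rejection, and hysteresis.
    """
    dt = 1.0 / sample_rate_hz
    labels = [False]                   # first sample has no velocity estimate
    for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        labels.append(speed > velocity_threshold)
    return labels

# Example: a fixation, a fast jump to a new spot, then another fixation.
samples = [(0.0, 0.0), (0.1, 0.0), (5.0, 0.0), (10.0, 0.0), (10.1, 0.0)]
print(classify_saccades(samples))      # -> [False, False, True, True, False]
```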

Taking this concept of wasted light to an extreme, we can see that full-color, high-resolution light that hits a grayscale, low-resolution sensor in our eye is wasted. Light that hits the eye while the brain is ignoring input is wasted. Most of the light coming from the display is wasted.


Drawing on some of these insights, engineers have used foveated rendering in VR devices to reduce the amount of computation needed to render an image for display. The same insights can be extended to the display hardware itself. Varjo has applied a simplified version of this concept to its VR device, and Avegant is pursuing a foveated display for AR. The demo is quite impressive.
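The software side of this idea is easy to sketch. Foveated rendering spends full resolution only near the gaze point and lets it fall off with eccentricity, roughly tracking how our own acuity falls off. Here is a toy Python version that assigns a relative shading rate to screen tiles; the falloff constant and tile geometry are illustrative assumptions, not parameters from Varjo’s or Avegant’s actual systems.

```python
def shading_rate(eccentricity_deg: float, e2: float = 2.3) -> float:
    """Relative resolution (1.0 = full) needed at a given eccentricity.

    Uses the common linear acuity-falloff model: acuity is roughly halved
    every e2 degrees from the fovea. e2 = 2.3 degrees is a textbook-ish
    value; real foveated renderers tune (and clamp) this empirically.
    """
    return 1.0 / (1.0 + eccentricity_deg / e2)

def tile_rates(fov_deg: float, tiles: int, gaze_deg: float) -> list:
    """Assign a shading rate to each column of screen tiles (1D for brevity)."""
    rates = []
    for i in range(tiles):
        tile_center = (i + 0.5) * fov_deg / tiles - fov_deg / 2
        rates.append(round(shading_rate(abs(tile_center - gaze_deg)), 2))
    return rates

# A 40-degree display split into 8 tile columns, user looking 5 deg right:
print(tile_rates(fov_deg=40.0, tiles=8, gaze_deg=5.0))
# -> [0.09, 0.12, 0.16, 0.23, 0.48, 0.48, 0.23, 0.16]
# Tiles near the gaze point render near full rate; the periphery can be
# rendered at a small fraction of full resolution with little visible loss.
```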

Although motivated by the right insights, these approaches still struggle to achieve our ultimate goal around wearability.

[Listen: The AR Show, “Designing AR for the Way Our Eyes Work”]

A Different Perspective

If the goal is wearability—a solution that delivers both high visual quality (for the digital information and the real world) and device comfort (both physical and social)—then the problem cannot be effectively addressed as a collection of discrete problems pursued by independent teams working on each component. This divide-and-conquer approach can work brilliantly in software, but in hardware—particularly face-worn hardware—a more holistic approach is needed.

If we could start with the needs of the human brain and appreciate the implications of a truly wearable form factor, then an interdisciplinary team would design a different kind of display. And if we had a different kind of display, then we could design a different kind of see-through combiner optic that perfectly matched that display.


Working Towards An Ideal Solution

An ideal display wouldn’t just spew light everywhere, relying on external optics to condition the light for insertion into the lens of the glasses (the combiner optic). Doing so takes up space, wastes light, and can degrade visual quality. Instead, the display would pre-condition the light before it emerges from the panel, and each pixel would be pointed in exactly the right direction for its light to be efficiently inserted into the combiner optic. This would allow the display to be directly coupled to the lens of the glasses, which is far more space- and energy-efficient.

Furthermore, this ideal display would be a thin, full-color, microLED-based display. The bright, tiny pixels would be driven by an integrated image processor that can dynamically adjust the content across each pixel to match what our eyes can perceive in each moment (which becomes more powerful when paired with the right eye sensing and prediction solution).

All of this functionality in the display would be integrated using a System on a Chip (SoC) approach. This would be monolithically constructed using standard semiconductor tools and techniques, enabling it to be manufactured at scale and reasonable cost.

Paired with this display would be a new type of combiner lens that incorporates the benefits of classic curved mirror and waveguide optics: an efficient light conductor with a slim profile that can be shaped to conform to our face. This ideal lens would be cheaply made and accommodate our unique physiology and vision correction needs.


[A note on product variation: If the eyewear industry can sell $150 billion worth of product each year with mass customization, the tech industry may be able to figure it out. Apple is beginning to experiment by offering 18 different sizes for their claspless Solo Loop watch bands.]

With the goal of maximizing physical and social comfort, this ideal display and lens would be integrated into a pair of glasses that serve as a companion to a smartphone. We already carry a device that can do the heavy lifting of computing and communicating the information to be displayed. There is little benefit, and a lot of detriment, in attempting to cram all of the capabilities of the phone into the glasses within the next few years. Mass-market wearable displays will extend the capabilities of smartphones for the foreseeable future.

Ostendo is Showing the Way

Ostendo Technologies is pursuing this vision of a holistically designed intelligent display and combiner optic solution. The company has introduced its Quantum Photonic Imager (QPI®) display technology that incorporates all of the attributes described above. Ostendo has demonstrated production-quality chips and is gearing up for pilot production. To complement the QPI, the company is designing a new type of combiner optic as described above, perfectly matched to the QPI.

(To be clear, we will not be buying Ostendo-branded glasses. They are positioning themselves as the Intel of AR, making smart light for our visual computing future.)

Will the old assumptions yield a winning result? Many in the industry are betting that they will. But just as a breakthrough in display technology (LCD) was needed to unlock the potential of laptops and smartphones, a similar breakthrough is needed to unlock the potential of wearable displays. And that breakthrough is not simply microLED; on its own, it is not enough. We must rethink the standard assumptions. We must migrate from tackling these as a discrete set of problems and instead think and innovate in a more holistic way. We must do the hard work of creating a smart display.

Such innovation doesn’t happen overnight, but Ostendo is well on its way to delivering the essential set of ingredients to realize truly wearable, mass-market glasses that achieve both high visual quality and device comfort. Until then, we can try to appreciate and learn from the devices built using the old assumptions.

Jason McDowall is VP of Visual Experience at Ostendo Technologies, and the creator and host of the AR Show podcast.

 
