
Charting a Path to Viable Consumer AR Glasses, Part I

What’s Holding Wearable Displays Back?

by Jason McDowall

Welcome to our series on the path to viable consumer AR. Over four parts, we’ll explore the biggest barriers and potential solutions to making consumer-grade Augmented Reality (AR) glasses that people will actually wear. Here we use the term “AR glasses” broadly to refer to head-worn devices that let you directly see the real world as well as digital content, whether basic 2D content or more immersive 3D experiences tied to the real world, sometimes called Mixed Reality. Part 1 of the series describes a major hurdle and the current industry assumptions about how to address it. Part 2 dives deeper into the details of that hurdle, with the goal of appreciating the magnitude of the challenge. Part 3 explores promising new display technologies and highlights their suitability and shortcomings. Finally, Part 4 challenges the standard assumptions and describes an alternative approach to an idealized solution, along with the companies pursuing this path.

Starting Point: Why Is AR Stuck in Neutral?

AR’s story parallels VR’s, which had a false start in the 1990s before receding to research labs. Today there is a small but healthy and growing market for VR devices, software, and services, focused on delivering solutions for entertainment, training, education, and clinical use.

But it took two decades and significant advancements in technology (including processors, displays, optics, and sensors—much of which came courtesy of the smartphone revolution) before VR became good enough to achieve this early level of commercial success. Given the unimpressive adoption of Google Glass and everything that has come since, should we assume AR glasses need another couple of decades before becoming good enough for broad adoption?

What is clear is that the attempts to date have fallen short. Over the last seven years, we’ve seen the introduction of a number of see-through AR glasses for consumers and enterprises from Google, Microsoft, Magic Leap, North, Meta, ODG, DAQRI, Vuzix, and the list goes on. Not one has achieved meaningful success, and several don’t even qualify as “glasses”.

As tech enthusiasts, we sometimes forget the difference between “can” and “will”. Can these devices be worn and display digital content? Yes. Will they be, or have they been, adopted in any meaningful numbers? No. They are not wearable enough or good enough for consumers to be willing to buy them. Even in enterprises, where the ROI is compelling across several use cases, the numbers are minuscule. (HoloLens fans may argue it’s too early to tell for HoloLens 2, but it falls short in a couple of key dimensions. We’ll get into that. And there’s hope that Apple and Facebook are on the cusp of something worthy, but the release of their products has been, and continues to be, a few years out. So we’ll have to wait and see.)

Display and Optics are the Biggest Barriers

Many billions of dollars have been spent on these and other attempts by tech companies that are otherwise very successful. It’s not for lack of investment or engineering talent that the industry has fallen short, but for a lack of innovation—specifically around displays and optics. The display and optics in AR glasses have an enormous impact on visual quality and device comfort. The visual quality of a wearable display includes both the quality of the digital imagery (color, brightness, field of view, angular resolution, etc.) and the quality of the view of the real world (obstruction, distortion, dimming, etc.). Device comfort includes both the physical comfort of wearing the device (weight, heat, ergonomics, etc.) and the social comfort of wearing it in public and being able to look each other in the eye. Getting visual quality and device comfort right is necessary for AR glasses to become truly wearable and broadly adopted.

[My focus on visual quality and device comfort does not diminish the importance of the glasses’ ability to understand the context of our situation using (privacy-acceptable) sensors and techniques, nor the importance of an intuitive approach to telling the glasses what we want. While critical, these are not currently the biggest barriers.]

Despite some beautiful artist renderings, light does not magically emerge from every part of the lens of today’s AR glasses, nor will it in Apple’s or Facebook’s eventual entries into the market. In fact, part of the challenge for AR glasses is that we don’t look directly at the display as we do when looking at the flat panel in our living room or in our hands. Instead, we look through a clear lens at an (ideally) undistorted and undiminished view of the real world. The display sits somewhere off to the side, and its light needs to be redirected through, or reflected off of, the lens and into our eyes. Plus, when talking about actual glasses and not headgear clamped to our heads, the whole contraption needs to rest on the bridge of our delicate nose, on our ears, and against our sensitive skin.

Historical Parallel

In its AR glasses attempts to date, the industry has been trying to reuse 40-year-old display technology that was built for a different purpose, and it’s not working. These efforts are the equivalent of integrating an old CRT tube monitor with a computer and keyboard and calling it a laptop. While such devices did meet the loose definition of a portable “laptop” computer, they barely scratched the surface of the potential benefits of such devices. It was the invention and refinement of LCD display technology that enabled the advancement and wide-scale adoption of laptops (and ultimately smartphones). For truly wearable and broadly adopted AR glasses, the same kind of significant advancement in display and optics technology is needed.

Hard Problems and Current Assumptions

As mentioned before, much of AR glasses’ visual quality and device comfort is driven by the amount of energy and space needed to generate light, modulate it (turn the right red, green, and blue sub-pixels on and off at the right time), and redirect it into our eyes. The generating and modulating are typically done by a display system, and the redirecting is done by optical elements.

To break down the impacts a bit further, the display and optics directly affect visual quality in the form of field of view, angular resolution, brightness, color uniformity, depth of focus, and visibility of the real world, among other attributes. Device comfort, including size, weight, and aesthetics, is affected by the bulk and weight of the optics as well as their placement on the glasses. Device comfort is further impacted by the size and power efficiency of the display, which can create more heat and demand larger batteries.

With such a massive impact on the “wearability” of AR glasses, the display and optics must work in close concert to solve the many problems of being wearable displays. Because the display sits so close to our eyes, the pixels need to be tiny (≤ 10 µm); otherwise, we’ll see the gaps between the pixels, and the angular resolution will be too low. Because the real world is the “black” background of the image, the display needs to be bright (> 5,000 nits to the eye); otherwise, the image will be washed out when we are in a bright room or outside. Because we want to use the glasses for more than a few minutes at a time, the device needs to be lightweight and energy-efficient (≤ 65 g in weight, ≤ 50 cc in volume, with hours of active use). Because we care what others think of us, the glasses need to look “normal” and reflect our personal style.
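
To get a feel for how tight that pixel-size constraint is, here is a rough back-of-envelope sketch in Python. The field of view, target angular resolution, and panel width below are illustrative assumptions, not specs from any particular device.

```python
# Rough back-of-envelope: how small do the pixels need to be?
# All numbers below are illustrative assumptions, not specs of any real product.

FOV_DEG = 40.0          # assumed horizontal field of view of the virtual image
TARGET_PPD = 60.0       # ~60 pixels per degree approximates 20/20 visual acuity
PANEL_WIDTH_MM = 12.0   # assumed width of a microdisplay small enough to hide in a temple

pixels_across = FOV_DEG * TARGET_PPD                    # pixels needed horizontally
pixel_pitch_um = PANEL_WIDTH_MM * 1000 / pixels_across  # size of each pixel in microns

print(f"Pixels across the image: {pixels_across:.0f}")
print(f"Required pixel pitch:    {pixel_pitch_um:.1f} µm")
# -> 2400 pixels across and a ~5 µm pitch: comfortably under the ~10 µm ceiling above,
#    and roughly an order of magnitude smaller than the pixels in a typical smartphone panel.
```

The point of the sketch is simply that glasses-sized optics push the display into the single-digit-micron pixel range, far smaller than the pixels mainstream flat-panel displays were built around.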

That’s what success will look like. In attempting to solve these problems, the industry has collectively made a handful of assumptions:

1. Display technologies are dumb—because they need a lot of help conditioning the light to be useful.
2. Optics must be complex and inefficient, or bulky, or both—to compensate for the dumb displays.
3. Unique product variations must be few—to compensate for the complexity of the optics.
4. Displays must be extremely bright—to compensate for the inefficiencies of the optics and limited product variations.

Next Up…

We’ll be back next week for Part II of this series, in which we’ll explore these industry assumptions, and gain a better understanding of their implications and potential outcomes.

Jason McDowall is VP of Visual Experience at Ostendo Technologies, and the creator and host of the AR Show podcast.

 
