As the saying goes, hardware is hard. That’s especially true when the job is delivering photons through near-eye displays inside a VR headset. You also have to fit the whole thing into a compact package that isn’t too heavy… then convince people to wear it on their faces.

This is one of the many challenges facing VR. Making matters worse, some aspects of VR hardware don’t benefit from Moore’s Law (the steady chip-level advancement that makes computing components smaller, faster, and cheaper). These include display and optical systems, which are bound by physical constraints such as how light propagates.

These challenges were raised in a discussion between Mark Zuckerberg and Meta Reality Labs CTO Michael Abrash. While providing a rare glimpse at the prototypes on Meta’s VR road map, they stopped to explain the technical hurdles VR faces, the focus of this week’s XR Talks.


Shifting Focus

Diving into the takeaways, Zuckerberg and Abrash previewed four Meta VR headset prototypes. Each is meant to tackle a different challenge facing VR today. The hope is that attacking each of these barriers with separate hardware will let Meta reach collective solutions faster.

We’re talking about things like the vergence-accommodation conflict, which is usually discussed in an AR context. But these focal-depth challenges are present in VR as well: headset displays sit at one fixed focal distance, no matter how near or far an object is rendered. That prevents your eyes from focusing the way they do in real life.
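To make the conflict concrete, here’s a minimal sketch (our illustration, not Meta’s; the interpupillary distance and focal-plane values are assumptions) showing how the vergence cue changes with rendered distance while accommodation stays pinned to the display’s fixed focal plane:

```python
import math

# Illustrative numbers only (not from the talk): a typical interpupillary
# distance and a fixed focal plane common in today's headsets.
IPD_M = 0.063          # interpupillary distance in meters (assumed average)
FOCAL_PLANE_M = 1.5    # fixed display focal distance in meters (assumed)

def vergence_angle_deg(distance_m: float) -> float:
    """Angle the two eyes rotate inward to fixate a point at distance_m."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

def accommodation_diopters(distance_m: float) -> float:
    """Focusing demand on the eye's lens, in diopters (1 / meters)."""
    return 1.0 / distance_m

for d in [0.3, 0.5, 1.5, 6.0]:
    print(f"object at {d:>4} m: vergence {vergence_angle_deg(d):5.2f} deg, "
          f"display demands {accommodation_diopters(FOCAL_PLANE_M):.2f} D, "
          f"real life would demand {accommodation_diopters(d):.2f} D")
```

At 0.3 m the eyes converge sharply while the display still forces focus at 1.5 m; that mismatch between the two cues is the conflict.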

One approach Meta is applying involves varifocal lenses. These move back and forth to shift focus between objects rendered at near and far distances. Eye-tracking watches where you’re looking; from the inward rotation (vergence) of your eyes, the system estimates the depth of your gaze and moves the lenses to focus there.
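In rough terms, that loop might look like the sketch below. The eye-tracker and lens-actuator interfaces are hypothetical stand-ins; Meta hasn’t published Half Dome’s implementation.

```python
import math

IPD_M = 0.063  # assumed average interpupillary distance (meters)

def fixation_depth_from_gaze(left_gaze_deg: float, right_gaze_deg: float) -> float:
    """Estimate fixation distance (meters) from each eye's inward rotation.

    A simplification for illustration: treat the two horizontal gaze angles
    as the two halves of the vergence angle on a point straight ahead.
    """
    vergence_rad = math.radians(left_gaze_deg + right_gaze_deg)
    if vergence_rad <= 0:
        return float("inf")  # eyes parallel: looking at optical infinity
    return (IPD_M / 2) / math.tan(vergence_rad / 2)

def varifocal_step(eye_tracker, lens_actuator):
    """One iteration of a varifocal loop: read gaze, move lenses to match.

    `eye_tracker` and `lens_actuator` are hypothetical interfaces, not a
    real headset API.
    """
    left_deg, right_deg = eye_tracker.horizontal_gaze_angles()
    depth_m = fixation_depth_from_gaze(left_deg, right_deg)
    lens_actuator.set_focal_distance(depth_m)  # physically shifts the lenses
```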

Another challenge is reaching “retinal resolution,” roughly 60 pixels per degree. Spread across a VR headset’s wide field of view, that implies a total pixel count several times greater than today’s high-end monitors. Meanwhile, HDR is another goal: dynamic lighting and contrast similar to what HDR TVs achieve, but harder to pull off in VR.
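Here’s the back-of-envelope arithmetic behind that claim (the per-eye field-of-view figures are our assumptions for illustration, not Meta’s specs):

```python
# Pixels needed per eye at "retinal" angular resolution.
PPD = 60                          # pixels per degree, the retinal-resolution target
H_FOV_DEG, V_FOV_DEG = 110, 100   # assumed per-eye field of view

pixels_per_eye = (PPD * H_FOV_DEG) * (PPD * V_FOV_DEG)
four_k_monitor = 3840 * 2160

print(f"{PPD * H_FOV_DEG} x {PPD * V_FOV_DEG} = {pixels_per_eye / 1e6:.0f} MP per eye")
print(f"vs. a 4K monitor at {four_k_monitor / 1e6:.1f} MP "
      f"(~{pixels_per_eye / four_k_monitor:.0f}x more, per eye)")
```

Under these assumptions, that’s roughly 40 megapixels per eye versus about 8 megapixels for a 4K monitor, which is where “several times greater” comes from.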

Through all these endeavors, one underlying goal discussed by Zuckerberg and Abrash is to pass the “visual Turing test.” A variation on Alan Turing’s test of whether a machine’s responses can be distinguished from a human’s, the goal here is optical quality in VR that’s indistinguishable from real life.

We’re still a ways from that goal, but Meta is investing heavily to get there faster. In total, the prototypes include Butterscotch (retinal resolution), Half Dome (focal depth), Starburst (HDR), and Holocake (holographic display). See them all in the videos further below.


Building Lifeboats

Panning back, one prominent takeaway from Zuckerberg and Abrash’s discussion is how hard some of these design challenges are. Another important piece of subtext is the level of investment Meta is making to bankroll the advancement of VR and AR: more than $10 billion per year.

Meta’s other crimes tend to overhang everything it does, including these efforts. And though its work here is self-serving, it’s doing the AR and VR sectors a favor by accelerating user traction. VR would not be where it is today without the Quest 2 and other Meta loss leaders.

The other misconception around Meta’s investments in AR, VR, and all things metaverse is that they’re a distraction or a trite endeavor while its core business declines. In fact, these investments exist precisely because its ad business is declining. Meta is building lifeboats.

And though these videos breaking down its strategy and intent are likewise self-serving, it’s rare to get a technical walkthrough of otherwise top-secret hardware (see: Apple), from a CEO no less. Zuckerberg’s hands-on technical chops come through, as do Abrash’s (unsurprisingly).

We’ll put a period there and cue the full videos below. See the “short version” in the first video and the deeper dive in the second…
