XR Talks is a weekly series that features the best presentations and educational videos from the XR universe. It includes embedded video, as well as narrative analysis and top takeaways.


Rumors have swirled this week that Google will acquire lightfield optics company Lytro, coming one week after Google released a new app that introduces lightfields to a broader audience. This has prompted lots of interest and questions, most notably… “what’s a lightfield?”

So we’ve devoted this week’s XR Talks to lightfield explainer videos. Embedded below are our interview with lightfield luminary Ryan Damm and The Verge’s interview with Avegant. The former covers lightfield capture, while the latter covers lightfield display.

But first, to answer the “what?” question more directly, a lightfield is all of the light in a given space. It defines all the ways that light bounces off physical objects, enters our eyes and renders our perception of the world. It’s colors, shadows, motion parallax and all that we see.
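That “all of the light in a given space” idea is often formalized as the plenoptic function: the color of light arriving at every position, from every direction. As a rough illustration (not how lightfields are actually captured, and with an entirely made-up toy scene), a sketch in Python:

```python
import math

def plenoptic(x, y, z, theta, phi):
    """Toy plenoptic function: the (R, G, B) radiance seen at position
    (x, y, z) looking along direction (theta, phi).

    A real lightfield is *measured* with camera rigs or lens arrays;
    this hypothetical formula just makes the 5D dependence visible."""
    d = math.sqrt(x * x + y * y + z * z)          # distance from scene origin
    r = max(0.0, math.cos(theta)) / (1.0 + d)     # direction-dependent red
    g = max(0.0, math.sin(phi)) / (1.0 + d)       # direction-dependent green
    b = 1.0 / (1.0 + d)                           # position-dependent blue
    return (r, g, b)

# Two different eye positions sample different rays and so see different
# colors -- which is exactly why stepping forward changes the scene.
print(plenoptic(0.0, 0.0, 1.0, 0.0, math.pi / 2))
print(plenoptic(0.0, 0.0, 0.5, 0.0, math.pi / 2))
```

The point of the sketch: a lightfield is a function of both *where* you are and *which way* you look, so reproducing it faithfully means having an answer for every vantage point, not just one.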

Why is that relevant to AR & VR? Re-creating or augmenting environments realistically means that all those colors and shadows should be dynamic as we move around a virtual space. The hard part is when the point of view changes: take a step forward and there’s a whole new scene.

To put that into perspective, if you’re watching a 360 video, you’re seeing one fixed vantage point. Though you can look around with three degrees of freedom (3DOF), otherwise known as head tracking, you’re confined to the vantage point from which the camera originally shot.

Moving into positional tracking (6DOF), what happens when you take a step forward? The light rays at that new position are now different. Items in your field of view now have a different relationship to each other and to you: shadows and parallax become dynamic.
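The 3DOF-versus-6DOF distinction above can be sketched in a few lines of Python. The class and function names here are hypothetical, purely for illustration; the point is that 360 video ignores the position half of the pose, while a 6DOF experience cannot:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Orientation (3DOF): where you're *looking* -- updated by head tracking.
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0
    # Position (the extra 3DOF that makes 6DOF): where you're *standing*.
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def render_360_video(pose: Pose):
    """3DOF playback: only orientation matters, so stepping forward
    changes nothing about what you see."""
    return ("frame", pose.yaw, pose.pitch, pose.roll)

def render_volumetric(pose: Pose):
    """6DOF playback: position matters too, so the system must supply
    new rays (e.g. from a captured lightfield) when the viewer moves."""
    return ("frame", pose.yaw, pose.pitch, pose.roll, pose.x, pose.y, pose.z)

a = Pose()
b = Pose(z=0.3)  # the viewer steps 30 cm forward
print(render_360_video(a) == render_360_video(b))      # True: no change
print(render_volumetric(a) == render_volumetric(b))    # False: new viewpoint
```

In other words, the jump from 3DOF to 6DOF is exactly the jump from needing one vantage point to needing all of them, which is the problem lightfields set out to solve.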

So lightfields in VR are about capturing and reproducing that realistically. Of course, we already have 6DOF positional tracking within graphical experiences like games. But the holy grail is that same volumetric movement within photorealistic — as opposed to graphical — spaces.

Image Credit: Google

These two approaches — graphics/polygon versus photorealistic — have pros and cons. Polygons have a high degree of interactivity for things like games and training simulations. But photorealistic lightfields are more natural, especially with key subjects like human faces.

“Both are going to live alongside each other,” said Damm. “Polygons will play well in situations where you need interactivity. On the flip side, if it’s more lean-back entertainment and someone wants to see something that’s beautiful, lightfields are going to excel at that.”

Lightfields will also follow the overall trajectory of the AR/VR sectors. In other words, attention is shifting to AR, due to its nearer-term opportunity and installed base. That shift will start on mobile, but forward-thinking companies like Magic Leap and Avegant are focused on AR glasses.

See more about what Avegant is building below.


For a deeper dive on AR & VR insights, subscribe to ARtillry Intelligence Briefings, and sign up for the free ARtillry Weekly newsletter. 

Disclosure: ARtillry has no financial stake in the companies mentioned in this post, nor received payment for its production. Disclosure and ethics policy can be seen here.

Header Image Credit: Lytro