XR Talks is a weekly series that features the best presentations and educational videos from the XR universe. It includes embedded video, as well as narrative analysis and top takeaways. 


This week, we got a rare up-close look at one of the most anticipated pieces of hardware in the XR universe: Magic Leap One (video below). Though we’ve already seen the device, this demo got a little more up close and personal, including commentary around key features.

This came with a dose of Magic Leap design principles and product philosophies, which was a bit promotional but educational enough to be valuable. At a high level, Magic Leap's Shanna De Luliis explains how the device achieves advanced AR features like content persistence.

“It allows the world to become your desktop,” she said, “letting digital content live persistently so whenever you leave the room and come back, the content [is] exactly where you left it…  it’s really about releasing content into the environment and the coexistence of virtual and physical worlds.”

It’s also important to note that Magic Leap is a lightfield-based system (also see Avegant). As we’ve examined in VR contexts, a lightfield is all of the light traveling through a given space. Replicating lightfields in immersive computing is therefore about adding light to create realistic scene imagery.

“It’s a light-additive system,” said De Luliis. “The device is mimicking the way you naturally see the world: the way that light bounces off of objects and reflects back towards the eye… the device is based on the way we see the world so it’s mimicking the way that we take in the world naturally.”

Beyond light, Magic Leap also applies sound fields and ambisonic audio, which recreate spatial orientation and distance (think: a door closing behind you). Notably, Magic Leap’s audio doesn’t shut out environmental sound but overlays digital audio on top of it… just as AR overlays graphics.
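At its simplest, the distance and direction cues that spatial audio provides come down to attenuating a source's volume with distance and panning it between the ears based on its angle to the listener. The sketch below illustrates that idea in miniature; the function names and formulas here are generic illustrations, not Magic Leap's audio pipeline.

```typescript
// Inverse-distance attenuation, clamped so sources closer than the
// reference distance don't grow louder without bound.
function distanceGain(distance: number, refDistance = 1): number {
  return refDistance / Math.max(distance, refDistance);
}

// Equal-power pan from an azimuth angle in radians: 0 = straight ahead,
// positive = to the listener's right. Returns [leftGain, rightGain].
function pan(azimuth: number): [number, number] {
  // Map [-90°, +90°] onto [0, 1], clamping sources behind the listener.
  const p = (azimuth + Math.PI / 2) / Math.PI;
  const clamped = Math.min(Math.max(p, 0), 1);
  return [Math.cos((clamped * Math.PI) / 2), Math.sin((clamped * Math.PI) / 2)];
}
```

A door closing two meters behind you, for example, would get half the gain of one at arm's length, and equal left/right levels since it sits on the listener's midline.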

“We want you to be able to interact with the real world at the same time as your [digital] content to make it really feel like it’s coexisting,” said De Luliis. “You need to have that audio really sync with the content to make it feel like it’s coexisting.”

Alongside those outputs, Magic Leap is developing inputs that carry the same natural, intuitive dynamics: head tracking, eye tracking (gaze) and gestural input. These open possibilities for developers to build experiences with intuitive controls and user agency.

“Head pose can trigger content in different areas in the environment,” said De Luliis. “And another one of our interaction elements is gaze… so we understand where your eyes are [and] where you’re looking. You can think of multiple ways to utilize that type of interaction.”
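The gaze-trigger idea De Luliis describes reduces, at bottom, to an angle test: does the direction the user is looking (from eye tracking or head pose) fall within some cone around the direction to a piece of content? Here's a minimal sketch of that test; the vector type and function names are hypothetical illustrations, not Magic Leap SDK calls.

```typescript
type Vec3 = { x: number; y: number; z: number };

function normalize(v: Vec3): Vec3 {
  const len = Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

function dot(a: Vec3, b: Vec3): number {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

// True when the target lies within `coneDeg` degrees of the gaze
// direction -- the basic check behind "look at it to activate it".
function isGazedAt(
  eyePos: Vec3,
  gazeDir: Vec3,
  target: Vec3,
  coneDeg: number
): boolean {
  const toTarget = normalize({
    x: target.x - eyePos.x,
    y: target.y - eyePos.y,
    z: target.z - eyePos.z,
  });
  return dot(normalize(gazeDir), toTarget) >= Math.cos((coneDeg * Math.PI) / 180);
}
```

An app could run this check each frame against content anchored around the room, triggering an animation or audio cue when the user's attention lands on it.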

Overall, there’s a pattern in Magic Leap’s design principles and product philosophies: expanding the canvas of graphical user interfaces beyond the rectangles (phones, monitors, etc.) they’ve always lived within. That requires rethinking computing as we know it, and designing natively for six degrees of freedom.

“If you think about media historically, it’s always been contained by the four corners of a screen,” said De Luliis. “So this is really one of those opportunities that you’re no longer controlled by a screen… you’re able to see different elements and walk around things.”

See the entire video below, cued to start at the right point for the Magic Leap One demo.


For deeper XR data and intelligence, join ARtillry PRO and subscribe to the free ARtillry Weekly newsletter. 

Disclosure: ARtillry has no financial stake in the companies mentioned in this post, nor received payment for its production. Disclosure and ethics policy can be seen here.