The AR revolution is coming. Even doubters are turning into believers, especially after Apple CEO Tim Cook described AR as a “profound technology” that will utterly transform our lives.
Because AR integrates computer-generated content into real-world spaces, this transformation means that the virtual will increasingly bleed into our day-to-day activities, from visits to the doctor to shopping. These will be real experiences, with real impact on our lives. So does this blurring of lines mean that the virtual has now become real?
I’d like to make a controversial claim, one ripe for the digital age: reality is possible without materiality.
A thing doesn’t need to exist in the physical realm in order to be real. Indeed, experiences in digital space are already an important part of our socialization. They are vivid and emotionally important, sometimes playing a more formative role in our lives than those in the physical realm.
Have you ever learned a life lesson from a favorite video game character? Or tried to model your behavior and values on those of the heroic protagonist of an interactive digital story? This has been possible for decades, through the simple power of language. Language, arguably the earliest form of neural audio technology, has always had the power to implant ideas – to literally change minds.
In the field of spatial computing, we are entering a once-unimaginable era: a time when we will be able to interact with digital content within the material world around us in a seamless, intuitive way. But for this new phygitally integrated reality to become as rich and impactful as what we experience in the material world, I believe it must find ways to go beyond sight and sound and engage a whole range of our senses.
How Real is Reality?
The futurist Bernard Marr once said: “While augmented reality allows virtual info to be overlaid on a real environment, users can’t interact with it like they would be able to in real life.”
Until recently, this was true.
Objects and landscapes in AR were no more real to us than scenes on a movie screen or old arcade side-scroller games. We could see and hear them, but they neither noticed nor responded to our physical presence.
If we wanted to interact with virtual objects, beings or environments, we were also forced to do so through a panel of glass. The AR pets created by companies like Niantic can scamper around your living room and make cute noises, but if you want to feed, play with, or pet them, you have to do so by tapping your smartphone screen: an experience about as engaging as doom-scrolling through TikTok until your thumb gets tired and your brain wearies of the dopamine treadmill.
In the quest for richer and more realistic digital experiences, neuroscientists and technologists are hard at work searching for avenues to true multisensory AR. And while smell and taste may lie some way off, advances in thinking and technology have now brought interactive AR touch within reach.
Early attempts to add touch to virtual experiences, such as vibrating haptics in video game controllers or movie theater seats, have so far failed to offer a truly satisfying experience (although “Ratchet & Clank: Rift Apart” by Insomniac Games on PS5 deserves an honorable mention!).
A Touching Reality
A major first step toward bringing the sense of touch into AR is removing the screen barrier between us and whatever virtual creature or object we have manifested in physical space. That may sound obvious, and yet until recently it was impossible to build.
This is in part because most device cameras were unable to gauge depth. An oft-underappreciated feature of human sight is the ability to infer depth even from flat images, and while computers can “see” thanks to camera technology, true depth perception has long eluded them. They could register light and shade, sharpness, and color, but were unable to use that data to determine, for instance, the position or movement of a user’s hand.
But a new technology called monocular hand reconstruction has changed all that. Working from the footage of a single, ordinary camera, it can work out the size, shape, and position of your hand – along with what it is doing – in real time. It is now possible to create a virtual representation of a physical hand, and to use it to touch and elicit responses from AR objects and environments positioned precisely as a layer on top of the material world.
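To make this concrete, here is a toy sketch of the final step in that pipeline. Once a hand-reconstruction model has estimated where a fingertip sits in 3D space, deciding whether it “touches” a precisely positioned virtual object can reduce to a simple distance check in the shared AR coordinate frame. The function, names, and coordinates below are purely illustrative assumptions, not drawn from any real AR SDK:

```python
import math

def fingertip_touches(fingertip, obj_center, obj_radius):
    """Return True if a reconstructed fingertip lies inside a virtual
    object's bounding sphere (all coordinates in metres, in the same
    AR world frame). Illustrative only - real engines use richer
    collision shapes than a single sphere."""
    return math.dist(fingertip, obj_center) <= obj_radius

# Hypothetical frame: a fingertip estimated by the hand-reconstruction
# model, about 4 cm from the anchor point of a virtual pet 10 cm across.
fingertip = (0.10, 0.02, 0.45)   # estimated fingertip position
pet_center = (0.12, 0.00, 0.48)  # where the AR layer anchored the pet
print(fingertip_touches(fingertip, pet_center, 0.05))  # True: the pet reacts
```

In a real system this check would run every frame against the model’s continuously updated hand pose, triggering the virtual creature’s response animation whenever contact is detected.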
However, you cannot yet actually feel those virtual objects Brave New World-style – although scientists are exploring new ways to simulate that experience. Among them is so-called chemical haptics, which uses specialized headsets and chemical compounds to simulate sensations such as tingling, numbing, cooling, warming, and stinging.
By making it possible for virtual beings and objects to respond realistically to a user’s touch, monocular hand reconstruction adds intensity to AR by conjuring sensory elements that can be interwoven into the tapestry of the experience. Not only can this heighten the realism of individual AR interactions, but it also helps enable more engaging shared – or intersubjective – experiences. You can invite real people to play collective games or play with virtual creatures in a new, more involved way.
Shareable precisely positioned AR, enhanced with the element of touch, is likely to have profound implications for a whole array of fields: education, marketing, management, design, and medicine, just to name a few.
Aspiring surgeons will be able to use AR to learn operating techniques in an exact and safe way. Artists, architects, and engineers will have the ability to collaborate on the design of projects, shifting elements around with their virtual hands to test visual balance. And imagine interacting with a truly responsive holographic representation of a dear friend or family member.
With monocular hand reconstruction, we have taken a major step forward on the path toward multisensory AR: a technology that promises to make the digital real for us all.
Damir First is co-founder of Matterless.