AR has evolved from a novel concept into a powerful tool, with immersive experiences now widespread on smartphones. Leveraging platforms like ARCore and ARKit, developers integrate AR to enhance the user experience, letting users overlay digital content on the real world.
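To ground this, here is a minimal Kotlin sketch of the kind of placement flow ARCore supports, assuming a Session is already running and rendering is wired up elsewhere; the function name placeObjectAtTap is illustrative, not an ARCore API.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane
import com.google.ar.core.TrackingState

// Illustrative sketch: pin a virtual object where the user taps.
// Assumes an ARCore Session is already running and `frame` is the
// latest camera Frame; renderer wiring is omitted.
fun placeObjectAtTap(frame: Frame, tapX: Float, tapY: Float): Anchor? {
    if (frame.camera.trackingState != TrackingState.TRACKING) return null

    // Hit-test the tap against detected real-world geometry.
    for (hit in frame.hitTest(tapX, tapY)) {
        val trackable = hit.trackable
        // Only accept hits on detected planes, inside their bounds.
        if (trackable is Plane && trackable.isPoseInPolygon(hit.hitPose)) {
            // The anchor keeps the virtual object fixed to the real
            // world; the renderer draws the model at the anchor's pose.
            return hit.createAnchor()
        }
    }
    return null
}
```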

Common applications include virtual furniture placement, virtual try-on of clothing and accessories, and playful face filters. However, AR’s current status as a “tool” underplays its potential. AR could revolutionize interaction by becoming the default living interface (LI, as I like to call it) of the future.

Necessary Hardware

AR hardware comprises three critical components:

Vision-enablers: AR glasses like those from Meta and Snapchat have made strides, but they are not yet ready for mainstream adoption.

UI Controls: One of the main challenges is the inadequacy of current interaction methods; manipulating objects on a 2D screen feels unintuitive in a 3D environment. There has been some progress lately: finger pads and bracelets that track hand gestures now allow direct object manipulation.

Supporting Hardware: AR tags, a complementary hardware class, are also likely to gain traction. These physical tags would aid AR UIs by facilitating localized positioning, local data transfer, and on-chip encryption. Their integration could bridge the gap between digital and physical environments. When we discuss layers in a later section, we will see how AR tags become critical to UI functionality.
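Since AR tags do not exist as a standard today, here is a purely hypothetical Kotlin sketch of what one might broadcast, covering the three roles above; every name and field is an assumption.

```kotlin
import java.util.UUID

// Hypothetical sketch of what an AR tag might broadcast. No such
// standard exists today; every field and name here is an assumption.
data class ArTagPayload(
    val tagId: UUID,          // unique identity of the physical tag
    val pose: FloatArray,     // tag-relative pose (x, y, z + quaternion) for localized positioning
    val data: ByteArray,      // small payload for local data transfer
    val signature: ByteArray  // produced by the tag's on-chip encryption
)

// An AR UI would verify the signature before trusting the tag, then
// anchor content at the tag's pose; the verifier is injected because
// verification depends entirely on the (hypothetical) tag hardware.
fun acceptTag(payload: ArTagPayload, verify: (ArTagPayload) -> Boolean): ArTagPayload? =
    if (verify(payload)) payload else null
```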

Brainstorming into the future: AR vision tools could evolve into contact lenses that compute in the cloud while acting solely as visual interfaces.

Control mechanisms might shift toward non-invasive brain interfaces or even neural implants, allowing users to interact with AR through thought alone. This might sound far-fetched, but it could become reality within the next decade.

For the sake of envisioning a more developed AR ecosystem in the next section, let’s assume we have mature AR glasses, controls, and AR tags.

The Vision for AR as the Ultimate UI

Imagine a future where AR is no longer just a tool but the primary UI through which we interact with the world – blending reality with augmentation to improve productivity, collaboration, and entertainment in daily life. With sophisticated AR glasses and modest controls, users could engage with environments in profoundly immersive ways. For instance, haptic feedback in AR could enable users to “feel” virtual objects, enhancing object manipulation through tactile cues.

Tasks like rotary control, page flipping, and analog movements could mimic real-world interactions, making AR interfaces more intuitive and engaging. AR tags would further enhance this experience by letting users seamlessly introduce objects and environments into the AR UI. This integration would transform AR into an indispensable tool for both personal and professional use, eventually making the AR UI second nature to users.

Layers on Reality

A fully integrated AR UI could redefine our interaction with the world through “layers” of data. These layers would let users toggle between different realities, each serving a unique purpose while remaining in harmony with the physical world; a minimal sketch of this layer model follows the layer descriptions below.

Layer Zero – The Baseline: This layer represents our current reality, devoid of any augmentation. It acts as the foundation upon which other layers are built.

Layer One – Information Layer: In this layer, users gain access to contextual information about their surroundings.

Example: Users could view real-time updates about bus schedules at a bus stop or identify nearby landmarks with detailed descriptions. This layer could also serve commercial purposes via AR tags; imagine shopping at Walmart, where your AR glasses display a personalized path to the items on your shopping list. Vendors could suggest alternatives or let users request recommendations, enhancing convenience and customer satisfaction.

Layer Two – Invitation Layer: This collaborative layer allows users to join environments or experiences curated by others.

Example: Colleagues could invite each other to brainstorm ideas on an AR whiteboard, or tour guides could provide immersive AR experiences at the Colosseum. By fostering shared AR environments, this layer would facilitate teamwork, learning, and much more.

Layer Three – Personal Layer: The personal layer empowers users to create and customize their AR environments. Users could design personal workspaces, add virtual objects to specific locations, or set reminders linked to real-world contexts.

Example: While walking down a familiar street, a user could place a virtual sticky note as a reminder to revisit a particular store or to remember a past experience. These notes could be shared with friends, who would see them when they visit that place.

Other Layers: Beyond these core layers, additional applications and layers could emerge to cater to specific industries, hobbies, or social networks.

Example: Gaming layers could enable AR-based multiplayer games, education layers could provide interactive learning modules in schools and museums, an industrial layer could help warehouse workers analyze inventory, and so on.
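To make the layer model concrete, here is a minimal Kotlin sketch of how an AR UI might track which layers are active. The layer names come from this section; the LayerStack class and its toggle behavior are illustrative assumptions, not an existing API.

```kotlin
// Layer names from the article; everything else is an illustrative assumption.
enum class ArLayer { BASELINE, INFORMATION, INVITATION, PERSONAL }

class LayerStack {
    // BASELINE (unaugmented reality) is always on and cannot be toggled off.
    private val active = mutableSetOf(ArLayer.BASELINE)

    fun toggle(layer: ArLayer) {
        if (layer == ArLayer.BASELINE) return
        if (!active.remove(layer)) active.add(layer)
    }

    // The renderer would composite annotations from each active layer
    // over the camera view, in declaration order.
    fun activeLayers(): List<ArLayer> = ArLayer.values().filter { it in active }
}
```

Toggling could be bound to a gesture or voice command, with the glasses compositing whatever activeLayers() returns over the unaugmented world.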

Tomorrow

The vision of AR as the default UI is exciting, but several challenges must be overcome. Key among them is ensuring that AR technology startups are well supported. Equally important will be societal acceptance and behavioral adaptation: transitioning to AR as the default UI requires users to embrace new ways of interacting and to trust that the technology will enhance their lives meaningfully. AR could become the ultimate human-computer interface, fundamentally changing how we experience the world and interact with one another.

Aiyappa Byrajanda is an independent AR hardware and software developer.

