
AR and AI continue to converge. This convergence plays out in several ways, including an intelligence layer for spatial experiences. Yesterday, we examined Google’s AI infusions to make visual search smarter. Meta is doing something similar with multimodal AI, as is Apple with Visual Intelligence.
AI is also increasingly infused into AR on the back end. Tools like Snap’s Easy Lenses serve as a production co-pilot to streamline AR design. They’re not a panacea, but they can help AR creators save time by generating 3D elements – for inspiration, ideation, or actual implementation.
Then there’s AI’s most imaginative convergence with AR: letting users generate AR experiences on the fly using prompts. This takes all of the serendipity and magic of generative AI and brings it to AR – sort of like the above creator-facing tools but for users to apply on the front end.
That last part is what we call generative AR, involving animations and interactions that are more dynamic and intelligent. And Snap stepped closer to this vision this week with AI Video Lenses: AI-generated lenses that users can interact with and share.
Dynamic & Dimensional
Before getting into longer-term generative AR, what are AI Video Lenses, and what can you do with them today? Built on Snap’s in-house generative video model, these animated lenses interact dynamically and dimensionally with users and their physical surroundings.
Snap launches the new format with three lenses. “Raccoon” and “Fox” animate cartoony woodland creatures that scurry around the user in playful ways, while “Spring Flowers” blooms several flowers around the user as the perspective pans back.
To be clear, this isn’t the fully evolved vision stated above for user-prompted imaginative lenses. But it’s a step in that direction. The AR generated through AI Video Lenses is preordained in that its visual elements are fixed. But it’s generative in its dynamic interaction.
In other words, Snap’s generative video model enables these lenses to interact with physical objects on the fly. That includes animations that synchronize with users’ geometry or movement in ways that are more open-ended and autonomous than standard lenses (see them in action).
The three new AI Video Lenses are available to Snapchat Platinum subscribers – Snap’s $15.99/month premium subscription tier. From there, Snap will launch more lenses periodically, which users can find in the Lens carousel. They can be used with both front- and rear-facing cameras.
Experiential Jolt
Back to the longer-term goal of user-generated lenses, Snap’s update this week gets it one step closer, as noted. Beyond the generative AI lenses it provides – raccoons, foxes & flowers – it wants to reach a point where users can dynamically imagine lenses into existence.
Snap CTO Bobby Murphy told us as much on stage at AWE USA, and the company doubled down on this vision at its Partner Summit. The challenge, of course, is that generative AI is computationally intensive and hard to process on-device. But Snap is working to crack the code.
As Snap steps closer to that vision, it will transform AR from something preordained to something expansive and serendipitous – a fundamentally different use case. It turns AR users into creators, potentially making them more engaged and invested in the medium.
But this won’t necessarily displace creators. In fact, prolific AR creator Page Piskin told us during the same AWE session that she welcomes this evolution. It lets her design experiences that users can then personalize with their own imaginations – a real-time creator/user collaboration.
All of the above could be the experiential jolt that AR needs. Its development also coincides with AR’s gradual shift from handheld to headworn – where generative AR will really shine. Of the many things that will sit at the intersection of AR and AI, this is one to watch.
