Though it’s often painted as a technology that will steal AR’s thunder, AI is the best thing that’s happened to AR in a while. AI can be an intelligence backbone for AR, while AR can be the front-end graphical interface for AI. The two technologies complete each other in many ways.
That goes for user-facing AR experiences, such as generating world-immersive overlays and lenses on the fly through text prompts. And it applies to creator-facing work, such as streamlining workflows through the time-saving and capability-boosting power of generative AI.
Both angles were recently advanced by Snap. In Lens Studio 5.0, it launched the GenAI Suite, which gives lens creators generative AI tools to do things like rapidly prototype 3D designs from text prompts. It also teased the other angle noted above: user-facing generative lenses.
We know all this because we were on stage with Snap CTO Bobby Murphy when he announced it at AWE USA 2024. Along with prolific AR creator Paige Piskin, we had an on-stage discussion about possibilities at the intersection of AR and AI. See the full video and our takeaways below.
XR ❤️ AI
One of the ongoing areas of excitement and speculation in XR is its collision with AI. There are several ways these technologies support and elevate each other, as noted above. In his AWE USA 2024 welcome keynote, AWE co-founder Ori Inbar set the tone by proclaiming that XR ❤️ AI.
This was followed that same morning by Snap CTO Bobby Murphy announcing the GenAI Suite, a core component of Lens Studio 5.0 that brings generative AI to lens workflows. For example, creators can streamline lens development by generating 3D assets on the fly from text prompts.
Prolific lens creator Paige Piskin has already used it and reports force-multiplying effects. These manifest in a few creator-facing benefits, chiefly time savings and new capabilities. With time savings, she's empowered to take on more client projects, boosting her bottom line.
With new capabilities, she’s building lenses that have greater dimension than her past work. And though it could be argued that AI threatens to replace creators like Piskin – an ongoing theme with all things AI – she sees it differently. It’s more of a tool that empowers her work.
Specifically, gen AI automates rote tasks, freeing her up for higher-value creative work. Furthermore, her hands-on creative rigor is still required for successful lenses, regardless of AI. Echoing the broader pro-AI drumbeat, it works best as a copilot that assists creator workflows.
Creative Control
Beyond the creator-facing aspects of AI demonstrated in Lens Studio 5.0's GenAI Suite, Murphy teased another Snap development: user-facing gen AI. This will let users become creators by generating custom animations and lenses on the fly via text prompts.
Like the above, it could be argued that this replaces the work of creators, given that users generate lenses on their end. However, creators will be able to use this tool in their own lens development. In other words, they can build lenses that hand control to users to customize further, as sketched below.
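To make that handoff concrete, here's a minimal sketch of how a creator-built lens might expose a single prompt-driven slot to users. Everything in it is hypothetical: Snap hasn't published this interface, so `GenLensService`, `generateAsset`, and the rest are illustrative stand-ins rather than actual Lens Studio APIs.

```typescript
// Hypothetical sketch: a creator-authored lens that delegates part of its
// look to a user-supplied text prompt. None of these names come from
// Snap's actual Lens Studio API; they are stand-ins for illustration.

// A generated asset the lens can place in the scene (illustrative shape).
interface GeneratedAsset {
  id: string;
  meshUrl: string;    // e.g., a text-to-3D result hosted by the service
  textureUrl: string;
}

// Stand-in for the server-side generative service the lens would call.
interface GenLensService {
  generateAsset(prompt: string): Promise<GeneratedAsset>;
}

// The creator defines the fixed parts of the lens (tracking, layout,
// shaders) and exposes one creative "slot" that the user fills in.
class PromptableLens {
  constructor(private service: GenLensService) {}

  // Called when the user types a prompt in the lens UI.
  async onUserPrompt(prompt: string): Promise<void> {
    // Guardrails stay in the creator's hands even though the user
    // steers the content: length limits, moderation hooks, etc.
    const trimmed = prompt.trim().slice(0, 200);
    if (trimmed.length === 0) return;

    const asset = await this.service.generateAsset(trimmed);
    this.attachToScene(asset);
  }

  private attachToScene(asset: GeneratedAsset): void {
    // In a real lens this would bind the generated mesh and texture to
    // the scene graph; here we just log the handoff.
    console.log(`Placing generated asset ${asset.id} into the scene`);
  }
}
```

The design point is that the creator still owns the lens's structure, tracking, and guardrails; the user's prompt only fills a slot the creator deliberately exposed.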
This is welcomed by Piskin, as AR has always thrived on sharing creative control with users. A lens is reinvented with every use, she says, as it's infused with each user's scene and surroundings. This is one of the things that makes AR unique as an art form: it's less static than most formats.
In fact, Murphy positions this as one of Snap's driving principles in AR (besides monetization, of course). As an LA company (also the setting for our stage discussion), Snap is culturally entwined with the city's rich artist community. So empowering artists is part of the deal.
Meanwhile, user-created generative lenses require ample compute and load balancing. For that reason, the feature isn't available yet, but Murphy says his team is hard at work on it. Until then, Lens Studio 5.0 and the creator-facing GenAI Suite have already been released into the wild.