Snap held its sixth annual Lens Fest yesterday (the same event we spoke at last year). This is the company’s annual celebration of AR, and its opportunity to roll out its latest platform updates in Lens Studio. This year’s event was built around the beta launch of Lens Studio 5.0.

So what are the highlights and takeaways? In short, and as expected, AI stole the show. It’s positioned as a central component of several new Lens Studio functions, including AI-powered lens-development workflows, as well as a new partnership with OpenAI for GPT-infused lenses.

Snap announced new platform figures as well. Specifically, 330,000 AR creators, developers, and teams are now building lenses on Snap’s AR platform. Collectively, they’ve created 3.5 million Lenses, which have been viewed more than 3 trillion times in the past year alone.

But the biggest takeaway is that AI now joins AR as a North Star at Snap. The company recognizes that these technologies fit together. This deviates from the myopic hot takes you may hear elsewhere that the current wave of AI excitement and investment is replacing that of AR and VR.

Generative XR: AI & Spatial Computing Converge

Generative AR

Going deeper into new Lens Studio features, the list includes the AI-streamlined workflows noted above, such as the ability to generate graphical elements or materials from text prompts. For example, a new face mask generator combines generative AI with Snap’s Face Mesh technology.

Elsewhere in the AI action, Snap is launching a ChatGPT Remote API in partnership with OpenAI. This essentially lets creators infuse conversational AI functionality into their lenses. It adds dimension to lenses by letting users converse with them or retrieve answers à la ChatGPT.

This could open up new AR use cases. For example, educational lenses could let users ask questions or summon relevant objects (think: animal kingdom). We can also envision apps that tap into Snap Scan and Snap Map to let users discover restaurants or book reservations.
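For a sense of what that might look like in practice, below is a minimal sketch of a lens script calling a conversational endpoint through Lens Studio’s Remote Service Module pattern. The endpoint name, parameter fields, and response handling here are assumptions for illustration; creators should defer to Snap’s documentation for the actual ChatGPT Remote API surface.

```typescript
// Sketch only: forwards a user's question to a conversational endpoint via
// Lens Studio's Remote Service Module and shows the reply in a Text component.
// The endpoint name, parameter shape, and status convention below are
// assumptions, not Snap's documented ChatGPT Remote API.

// Lens Studio provides these globals at runtime; declared here so the sketch type-checks.
declare const script: any;
declare const RemoteApiRequest: { create(): any };

// @input Asset.RemoteServiceModule remoteServiceModule
// @input Component.Text replyText

function askLens(question: string): void {
  const request = RemoteApiRequest.create();
  request.endpoint = "completions";            // assumed endpoint name
  request.parameters = { prompt: question };   // assumed parameter shape

  script.remoteServiceModule.performApiRequest(request, (response: any) => {
    if (response.statusCode === 1) {           // assumed success convention
      script.replyText.text = response.body;   // display the model's answer in the lens
    } else {
      script.replyText.text = "Hmm, try asking that again.";
    }
  });
}

// Example: an educational animal-kingdom lens could call this on a tap event.
askLens("What do axolotls eat?");
```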

There were also non-AI updates and structural changes. For example, Lens Studio 5.0 Beta has more horsepower, loading projects 18x faster. It’s also now purpose-built for team collaboration, including Git version control, which accommodates the brands and agencies that build lenses.

Lastly, Snap has opened up Lenses with Digital Goods, a program it launched last year that lets creators offer additional lens functions or skins that can be unlocked through in-app purchases. This follows Snapchat+ in diversifying Snap’s otherwise advertising-heavy revenue model.

https://youtu.be/KAbPlt9hNUg

Native Enlightenment

Back to the broader marriage of AR and AI, there are several ways we’ve speculated that they’ll converge (and others that we haven’t), some of which map to what Snap launched this week. This includes AI’s role in streamlining AR and 3D-model creation workflows.

Just as generative AI has already been used to generate 2D art from text prompts, the same principle can apply to 3D content creation. Of course, it’s not a silver bullet and still requires creative talent and technical rigor, but it can automate some rote aspects of development.

Generative AI can also be used as an inspirational tool. It can help prototype lens ideas and get the creative juices flowing. In other words, generative AI’s serendipitous and whimsical tendencies – even when it comes to hallucinations – can inspire creative directions for new lenses.

And on the user end, there are several possibilities, such as AR experiences that are open-ended rather than pre-ordained. In other words, we could start to see AR interactions that leave it to the user to “generate” world-immersive content via spoken prompts (e.g., “show me a dragon”), as sketched below.
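To make that concrete, here’s a hypothetical sketch of what such an open-ended flow could look like: capture a spoken prompt, send it to a text-to-3D generation service, and place the result in the scene. Every interface here (the speech, generation, and placement helpers) is invented for illustration; none of this is a documented Snap or OpenAI API.

```typescript
// Hypothetical end-to-end flow for prompt-driven, open-ended AR.
// These interfaces are stand-ins to illustrate the architecture:
// speech -> prompt -> generated 3D asset -> placement in the scene.

interface SpeechRecognizer {
  listenOnce(): Promise<string>;               // returns the transcribed utterance
}

interface TextTo3DService {
  generate(prompt: string): Promise<Mesh3D>;   // returns a generated 3D asset
}

interface ARScene {
  placeAtGaze(mesh: Mesh3D): void;             // anchors the asset where the user is looking
}

type Mesh3D = { vertices: Float32Array; triangles: Uint32Array };

// The user says "show me a dragon"; the lens generates it and places it in the world.
async function handleOpenEndedPrompt(
  speech: SpeechRecognizer,
  generator: TextTo3DService,
  scene: ARScene
): Promise<void> {
  const prompt = await speech.listenOnce();    // e.g., "show me a dragon"
  const asset = await generator.generate(prompt);
  scene.placeAtGaze(asset);                    // content is generated at runtime by the user,
                                               // rather than pre-authored by the lens creator
}
```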

Lastly, AI use cases will evolve in ways that we haven’t imagined yet. That’s often how it goes as developers gain their footing and work towards moments of native enlightenment. We saw that play out with early iPhone apps (including Snapchat) and will likely see the cycle repeat.
