The intersection of AR and AI will take many forms, as has long been theorized and future-gazed throughout the XR punditsphere. One of the most promising of those visions is the ability to generate AR experiences and interactions on the fly, using text prompts.

Because it brings the workings and wonder of generative AI to AR, we call this generative AR. After several steps in this direction, Snap recently became the first to market with the launch of its Imagine Lens, which lets users speak AR animations into existence via text prompts.

It works like this: users take a selfie, then prompt Snapchat to do things like “turn me into a cowboy with an old-timey animation style.” Among other things, the timing is right, as Snapchat users can apply this to “rapidly prototype” their Halloween costumes, or to find inspiration.
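For the technically curious, this class of feature resembles text-guided image-to-image generation, where a photo anchors the output’s structure and a prompt steers its style. Snap hasn’t disclosed its exact stack, so the following is only a loose open-source analogue, not Snap’s implementation: a minimal sketch using the diffusers library, with the selfie file name and parameter values as illustrative assumptions.

```python
# Rough open-source analogue of prompt-driven selfie restyling.
# NOT Snap's implementation -- its model mix is undisclosed.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a general-purpose diffusion model (any img2img-capable checkpoint works)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The user's selfie serves as the structural starting point (hypothetical file)
selfie = Image.open("selfie.jpg").convert("RGB").resize((512, 512))

# The text prompt steers the restyling, as in the cowboy example above
result = pipe(
    prompt="turn me into a cowboy, old-timey animation style",
    image=selfie,
    strength=0.6,        # how far the output may drift from the original selfie
    guidance_scale=7.5,  # how strongly the output follows the prompt
).images[0]

result.save("restyled_selfie.png")
```

A production feature like Imagine Lens layers animation, face preservation, and mobile-scale serving on top of this, but the photo-plus-prompt pattern above is the core idea.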

To pull this off, Snap employs a custom mix of in-house AI models and industry-leading third-party models. And though it has already integrated generative AI in various places, Imagine Lens is its first open-ended text-to-image lens that puts end users in the driver’s seat.

Doubling Down

All of the above unfolded recently, leading up to Lens Fest. Now, in the wake of that event, Snap has doubled down to give Imagine Lens more legs. Specifically, it’s making the lens free to all users, rather than limiting it to Snap’s paid subscription, Snapchat+ (via its AR add-on, Lens+).

In fact, when covering Imagine Lens’ launch last month, we stated, “Expect wider rollouts in the next year, as those power users uncover fitting use cases and product iterations.” That prediction held, but materialized much faster than we anticipated. The question is what compelled that speed.

It’s unclear what drove the move, but it could be a play for scale, or at least a bid to spark awareness by meeting the moment around the Halloween use case noted above. Adding to that is competitive pressure from socially fueled tools like Sora’s Cameos, which render users in gen-AI videos.

But to be fair, Snap is still reserving some Imagine Lens functionality, such as a wider range of lens generations, for Lens+ users and Snapchat Platinum subscribers. For now, free users in the U.S. (with other countries to follow) will get a narrower set of animation possibilities.

Red Meat

Stepping back to contextualize Snap’s latest move: how did we get here, and what were the “steps in this direction” noted earlier? The goal in posing these questions is to trace the historical arc of Snap’s AI evolution so far, in order to triangulate where it could go next.

To that end, one of Snap’s first public declarations of its generative AR ambitions (though it didn’t use that term specifically) came at AWE 2024. In fact, we were on stage as Snap co-founder and CTO Bobby Murphy discussed the vision for dynamic text-to-image Lenses.

But rather than offering just forward-looking platitudes, Murphy had some red meat on hand. As an evolutionary step toward user-facing generative AR, he announced Lens Studio 5.0, including its Gen-AI suite. This laid the foundation for a set of AI tools to automate lens creation.

Those tools have since evolved rapidly in Lens Studio. Easy Lenses let creators build lenses via prompts; AI Video Lenses interact dynamically and dimensionally with scenes; and Sponsored AI Lenses bring AI personalization to paid placements.

Reinforced by Results

Fast forward through several other AI-fueled additions to Snap’s broader AR efforts, and we’re back to the present. Much of this was synthesized in Lens Studio AI, launched at Lens Fest, altogether making good on Murphy’s projections from the AWE stage 15 months earlier.

Snap is the logical candidate to have done this, and to keep doing it. The company is committed to AR as a cornerstone of its product and revenue model, including large-scale investments in AR glasses, which will take a big step toward consumer markets next year.

That commitment has been reinforced by results. Snap now logs 8 billion lens engagements per day, the equivalent of more than one lens play per human on the planet each day. It also benefits from an opportunity gap Meta left behind when it abandoned its social lens platform, Spark.

As for what’s coming next, we expect Snap’s work at the intersection of AI and AR to align with its longer-term headworn ambitions. In other words, lenses that are ambient and intelligent, such as fashion discovery or shopping aids, could be both on-brand and monetizable.
