Today at Snap Partner Summit in Santa Monica, Evan Spiegel’s keynote address offered a “one-more-thing” moment. He unveiled the fifth generation of Snap Spectacles. Though we’re here at the event, most of our take is based on a hands-on demo we were given a few weeks ago.

Before getting into the device specs, UX, and other dynamics, let’s back up for context. This is Spectacles’ fifth version, as noted. The first three generations were camera glasses while the last two, including this one, are AR glasses with display systems (see our Gen-4 review here).

It should also be noted who this device is meant for. Like Gen 4, it will only be sold to developers. At $99/month, the point is to get it into their hands first to build up a base of experiences that make headworn AR attractive to consumers en masse. Snap knows we’re not there yet.

Now, on to the device itself. It weighs 226 grams and employs an LCoS display system with a 46-degree diagonal field of view (roughly 3x the area of Gen 4's) and 37-pixels-per-degree resolution. Battery life is about 45 minutes, with plug-in power for developers (again, the target user) doing desk-based work.

It also has dual Snapdragon SoCs for distributed heat and weight, with each processor handling separate functions (optics, inputs, etc.). This design choice avoids the typical split-processing approach that requires a tethered phone. True to Snap’s persona, it’s going for socially active use cases.

But perhaps the most notable thing about Spectacles is its OS. This marks the debut of Snap OS, which handles core functions like spatial understanding and hand tracking (more on those in a bit). That way, developers can tap into established capabilities and focus instead on their creations.
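To make that division of labor concrete, here’s a rough sketch of what leaning on platform-provided perception could look like in developer code. The interfaces below are illustrative stand-ins we’ve made up for this article, not the actual Snap OS or Lens Studio API.

```typescript
// Hypothetical sketch (not the actual Snap OS / Lens Studio API): the point is that
// spatial understanding and hand tracking arrive as platform events, so developer
// code only reacts to them.

// Minimal stand-ins for what a platform runtime might expose.
interface Pose { x: number; y: number; z: number; }
interface HandPinchEvent { hand: "left" | "right"; position: Pose; }
interface PlaneEvent { center: Pose; extentMeters: [number, number]; }

interface PlatformRuntime {
  onHandPinch(handler: (e: HandPinchEvent) => void): void;
  onPlaneDetected(handler: (e: PlaneEvent) => void): void;
  spawnObject(name: string, at: Pose): void;
}

// A lens that places a virtual object wherever the user pinches,
// but only once the OS has reported a usable surface.
function registerLens(os: PlatformRuntime): void {
  let surfaceReady = false;

  os.onPlaneDetected((plane) => {
    surfaceReady = true;
    console.log(`Surface found: ${plane.extentMeters[0]}m x ${plane.extentMeters[1]}m`);
  });

  os.onHandPinch((pinch) => {
    if (!surfaceReady) return;          // wait for spatial understanding
    os.spawnObject("marker", pinch.position);
  });
}
```

The takeaway is that the perception stack is the platform’s job; developer code mostly wires creative logic to it.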


Object Interaction

All the above – especially Snap OS – makes Spectacles Gen 5 fairly vertically integrated. And that shows throughout the UX. Positional tracking and plane detection are tight, visuals are dimensionally accurate and properly occluded, and object interactions feel natural.

The UI is also built around hand tracking. With the exception of experiences that use your phone as a tracked controller – such as the full-swing golf game we demoed – hand interactions rule. The result is fairly slick and intuitive, including depth understanding (think: reaching for faraway objects).

The glasses were comfortable to wear for the duration of our hour-long demo, with frames wrapped in a pliable rubber coating that’s forgiving to a range of head sizes. Battery life is, again, relatively short, but it should suffice for Spectacles’ developer-intended use cases.

Another deliberate design decision was to encourage multi-user interaction – Snap’s DNA is inherently social, after all. To that end, Snap OS emphasizes spatial anchoring so several users can simultaneously and naturally experience the same digital elements, such as a chess game.
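To illustrate the idea (with hypothetical interfaces, not Snap’s actual multiplayer API), a shared chess lens might resolve one common anchor so the board occupies the same real-world spot for everyone, then relay moves over a session channel:

```typescript
// Hypothetical sketch of multi-user spatial anchoring (names are illustrative,
// not Snap's actual API): every participant resolves the same anchor ID, parents
// the shared content to it, and exchanges state changes over a session channel.

interface Vec3 { x: number; y: number; z: number; }

interface SharedSession {
  resolveAnchor(anchorId: string): Promise<Vec3>;          // same real-world spot for everyone
  broadcast(topic: string, payload: object): void;
  onMessage(topic: string, handler: (payload: object) => void): void;
}

interface ChessMove { from: string; to: string; }

async function joinSharedChess(session: SharedSession, anchorId: string) {
  // 1. All players anchor the board to one shared, real-world position.
  const boardOrigin = await session.resolveAnchor(anchorId);
  console.log("Board anchored at", boardOrigin);

  // 2. Local moves are broadcast; remote moves are applied locally.
  session.onMessage("chess/move", (payload) => {
    const move = payload as ChessMove;
    console.log(`Opponent moved ${move.from} -> ${move.to}`);
  });

  const myMove: ChessMove = { from: "e2", to: "e4" };
  session.broadcast("chess/move", myMove);
}
```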

Speaking of social interaction, another factor that defines Spectacles is a portrait-oriented field of view. We questioned this at first, as it seemed Snap was forcing a mobile UI onto glasses. But it makes sense – not only because of Snap’s established lens-based design language, but also for practicality.

In other words, a field of view that’s “taller” is purpose-built for a deliberate subject: people. Again, social interaction is a north star… and humans are built vertically. That contrasts with AR hardware like the Xreal Air 2, which is built for lean-back entertainment and thus suited to a landscape FOV.


Natural Avenues

That brings us to the apps and content. Utilizing much of the above as a framework, Snap has seeded the experience with a tray of initial apps to get the ball rolling. But the real scale, and magic, will come when developers put Lens Studio 5.0 to work and apply their own creativity.

For example, a human anatomy app lets you visualize systems of the body that come to life when selected. Several users can crowd around to examine the model – a natural use case for education, training, surgical prep, or anything that benefits from experiential learning.

The anatomy app is one place where both multi-user functionality and the vertical field of view come in handy. The same FOV advantage applies to the golf game, given that the ball is on the ground, and to Niantic’s Peridot app, given that your virtual pet scurries around your feet.

Each of these apps likewise benefited from another Spectacles attribute: brightness. Snap doesn’t reveal the brightness metrics, but the display stood up to the L.A. sun. An auto-dimming feature (and manual option) enhances contrast as light conditions shift, sort of like transition lenses.

But the multi-user functionality is where developers could really have a field day. We mentioned a chess game earlier, which floated in space and utilized hand tracking in addition to multi-user spatial anchoring. Other board games or card games could be natural avenues.

The underlying multi-user functionality could also inspire parlor games. Think of team-based games like Pictionary or Charades where certain people are allowed to see clues and game elements that others aren’t. That could be accomplished spatially and digitally with Spectacles.
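As a loose illustration – again with made-up interfaces rather than anything Snap has published – per-player visibility could be as simple as tagging clue entities with the roles allowed to see them:

```typescript
// Hypothetical sketch of per-player visibility for a Pictionary-style lens
// (illustrative names, not Snap's API): the same scene is anchored for everyone,
// but clue entities are only rendered for players whose role permits it.

type Role = "clueGiver" | "guesser";

interface ClueEntity {
  id: string;
  visibleTo: Role[];
  setHidden(hidden: boolean): void;
}

function applyVisibility(entities: ClueEntity[], localRole: Role): void {
  for (const entity of entities) {
    const canSee = entity.visibleTo.includes(localRole);
    entity.setHidden(!canSee);   // hide clue cards from guessers, show them to the clue-giver
  }
}
```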


Worldly Canvas

Sticking with experiences that happen on the device, two use cases could develop independently of developer-created apps: visual search and generative AI. These are open-ended activities that aren’t confined to a specific pre-ordained experience, but rather have the world as their canvas.

Starting with visual search, Spectacles builds on the computer vision and machine learning work already put into Snap Scan. Like Google Lens, this lets you identify and contextualize physical objects you encounter in the real world, making it more of a high-frequency utility.

We’ve been saying for years that visual search as an AR use case is a sleeping giant. That hasn’t panned out yet, but two things may accelerate it: headworn form factors and AI. With multimodal AI in Snap OS, users can identify/contextualize objects while talking to refine the search.

This means you can open your fridge and peer in while asking Snap what you can make with the ingredients you have. Or tools like Brickit (already a Snapchat lens) can tell you what Lego models you can build given the pieces you have. Travel and shopping use cases also abound.
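In sketch form, that visual-search flow might look like the snippet below: a camera frame and a transcribed question go to a multimodal model, and the answer comes back as an in-view card. The interfaces are hypothetical placeholders, not Snap Scan’s actual API.

```typescript
// Hypothetical sketch of the visual-search flow described above (illustrative
// interfaces, not Snap Scan's actual API): a camera frame plus a transcribed
// voice question go to a multimodal model, and the answer is rendered in view.

interface MultimodalModel {
  ask(imageJpeg: Uint8Array, question: string): Promise<string>;
}

interface GlassesIO {
  captureFrame(): Promise<Uint8Array>;        // what the wearer is looking at
  transcribeSpeech(): Promise<string>;        // e.g. "what can I cook with this?"
  showCard(text: string): void;               // heads-up answer panel
}

async function runVisualSearch(io: GlassesIO, model: MultimodalModel): Promise<void> {
  const [frame, question] = await Promise.all([io.captureFrame(), io.transcribeSpeech()]);
  const answer = await model.ask(frame, question);
  io.showCard(answer);   // e.g. recipe suggestions for the fridge contents
}
```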

As for generative AI, Snap has partnered with OpenAI so Spectacles can generate 3D models on the fly through voice prompts. Use cases are open-ended and could include things like design inspiration for product teams – again, using spatial anchoring for dimensional collaboration.
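Here’s an equally hypothetical sketch of that prompt-to-model flow – the service and scene interfaces are placeholders, with the only assumption being the voice-prompt-in, 3D-model-out loop described above.

```typescript
// Hypothetical sketch of voice-prompted 3D generation (illustrative names, not
// the actual Snap/OpenAI integration): a spoken prompt is sent to a generative
// service, and the returned mesh is anchored in front of the user for shared review.

interface Vec3 { x: number; y: number; z: number; }

interface GenerativeService {
  generateMesh(prompt: string): Promise<{ meshUrl: string }>;
}

interface SpatialScene {
  placeMesh(meshUrl: string, position: Vec3): void;
}

async function promptToModel(
  prompt: string,               // e.g. "a stackable chair in brushed aluminum"
  service: GenerativeService,
  scene: SpatialScene
): Promise<void> {
  const { meshUrl } = await service.generateMesh(prompt);
  scene.placeMesh(meshUrl, { x: 0, y: 0, z: -1 });   // one meter in front of the user
}
```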


Doubling Down

The key term above is “open-ended,” and it applies beyond AI. Spectacles’ fate will depend on what developers build and how AI breathes new life into the platform. It’s also an open question whether the world is ready for social headworn AR. There’s still ample cultural resistance.

But again, this device is for developers. Snap is honest with itself – and with us – that this is a step toward a broader cultural embrace of AR glasses. And what it’s doing with Spectacles, along with Meta and others, is making large-scale investments to accelerate that process.

In that sense, Snap is playing a long game with AR – and it knows it. Though flavors of AR on the smartphone have traction today – and Snap is a dominant player there – the company believes the long-term future is headworn. And it knows that “long” is the operative word.

Back to those open questions: they’re what make things exciting, but also uncertain. Though some dynamics are out of Snap’s control, it has done its part to create a foundation for innovation. And if its investments in AR platforms and people are any indication, it will keep doubling down.
