As you may have seen, prolific Apple sleuth Mark Gurman revealed over the weekend that Apple Intelligence will soon make its way to Apple Vision Pro. This is a logical convergence of both products, and one we predicted a few months ago in our annual predictions exercise.

Specifically, Gurman writes that Vision Pro will integrate Apple Intelligence as part of the upcoming visionOS 2.4 update. This will include the standard slate of existing Apple Intelligence features such as Writing Tools, ChatGPT integration, Genmoji, and Image Playground.

But more important than existing AI features – built with an iPhone in mind – will be native features developed for Vision Pro. In other words, capabilities that are immersive and spatial, such as identifying physical objects in space or generating 3D models from voice prompts.

To further flesh out that opportunity, we’ll resurface our recent analysis of Vision Pro’s need for an intelligence engine (other than Siri). The device has a lot to potentially gain from an intelligent assistant and front-end voice interface. Apple Intelligence is now primed to fill that gap.

See that analysis below – first published January 28th but more relevant now…

Can AI Save AVP?

Intelligent & Ambient

AR’s current stage coincides with the rise of AI, and that’s a good thing. AR and AI already go together in several ways, as we explored in a recent report. For example, the vision of all-day smart glasses that annotate the world in intelligent and ambient ways is heavily reliant on AI.

As AR continues to evolve from handheld to headworn, AI functions will become more ambient and automatic given that these devices are hands (and keyboard) free. That’s where the “conversation” in conversational AI factors in, and we’re already seeing this elevate smart glasses.

Given that natural convergence, a key question is how Apple is thinking about AI. It answered that question to some degree with Apple Intelligence, which brings more capable AI into the mix. Among other things, this addressed a longstanding bottleneck in Apple’s capabilities: Siri.

Though Apple Intelligence will cut across Apple products, there’s one device where a decent voice assistant represents a make-or-break moment: Vision Pro. Its gesture-based inputs have been widely lauded, but voice will also be a key part of the equation as there’s no keyboard.

As a still unproven – and quite expensive – product, Vision Pro has the challenge of feeling comfortable and natural. So voice-assistant failures – which are all-too-common for Siri – are bound to disorient already-hesitant users. That’s where Apple Intelligence could save the day.

And though we’re talking about Vision Pro, the integration of Apple Intelligence would be a training ground for its real endpoint: smart glasses. Though the fate of AR glasses at Apple is uncertain, this is where multimodal AI integration is most additive, a la Ray-Ban Meta smart glasses.

Follow the Money: What’s Apple’s AR Long Game?

First-Party Advantage

Another question that emerges is Apple’s edge over competitors that are similarly blending AR and AI. For example, Google’s Android XR is built around Gemini, and its ability to tap into Google’s knowledge graph for key functions like mapping, shopping, and object recognition.

But when it comes to AI that fuels any AR endeavors, Apple will be advantaged by its vertical integration. This is simply because it has a more cohesive corpus of first-party data to support strong AI training and personalization – a corpus drawn from more than a billion iOS devices.

Though Google’s knowledge graph can fuel lots of AI training, it doesn’t have the same vertical integration that Apple has. The Android world is a more fragmented mix of hardware from various device manufacturers, which makes it harder to gather reliable usage data.

Why does that matter? In the age of privacy reform, the key term is “first party.” Data usage restrictions kick in whenever you have to reach outside of your own direct customer interactions to track anything. That brings us back to Apple’s edge in owning the entire stack.

Though this particular angle isn’t discussed as much as the gadgetry and other elements of Apple’s AR endeavors, it could be a critical factor. It traces back to the same vertical integration that sits at the center of several Apple advantages (device continuity, elegant integrations, etc.).

This could be the ace up Apple’s sleeve for AI-driven AR. That will apply to Vision Pro first, and some form of smart glasses in the longer term. The latter is still a question mark, as noted, but given the trajectory of smart glasses, Apple will likely re-enter that race at some point.

More from AR Insider…