'Tis the season for developer conferences. After Google I/O, Facebook F8 and the Snap Partner Summit comes the O.G. event: Apple's WWDC. As far as AR is concerned, this event has previously hosted announcements ranging from Quick Look to GeoAnchors to ARKit.

Though this year won't see the revolutionary smartglasses unveiling that the AR world is waiting for, Apple continues to make evolutionary moves. That was in full force at WWDC, including new AR navigation in Apple Maps, visual search features, and 3D Object Capture.

These announcements don’t have the sex appeal of a prospective “iPhone moment” for AR glasses, but they’re important. In fact, one common theme among AR announcements at WWDC was utility. It’s all about moving past AR’s fun and games to practical use cases.

Beyond the AR-specific products and features, it's also evident that aspects of AR permeate several Apple products. That includes things like spatial audio and new digital effects for FaceTime calls. It seems digital/physical augmentation is in the bloodstream at Apple.

Through that lens, we're unpacking the keynote and all its AR aspects for this week's XR Talks. Check out the abridged video below, along with narrative takeaways.

Is AR Evolving from Novelty to Utility?

AR Guidance

The award for flashiest AR announcement at WWDC this year goes to Apple Maps' new 3D navigation feature. Similar to Google's Live View, it lets users navigate urban areas via spatially-anchored directional overlays, viewed through an upheld smartphone.

This can be more intuitive than looking down at a 2D map and mentally translating it to 3D space. And like Google's Live View, this could evolve into informational overlays for waypoints and storefronts along one's path, making it more of a local search and discovery engine.

As background, we saw this move coming after Apple began collecting first-party mapping data, including 3D spatial maps. The initial result was Apple Maps’ Street View-like Look Around feature, but this data also enables devices to localize for 3D navigation — the bigger play.

In fact, the new 3D mapping feature is available in the same cities where Look Around is active, including London, L.A., New York, Philadelphia, San Francisco, and Washington, D.C. We'll keep watching as this develops and competes with Google in mapping's next battleground.
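On the developer side, the closest public counterpart to this localization capability is ARKit's geo anchors (noted at the top), which are similarly limited to supported cities. Below is a minimal sketch of checking geo-tracking availability and pinning virtual content to a real-world coordinate; the coordinate and anchor name are placeholders.

```swift
import ARKit
import CoreLocation

// Minimal sketch: verify that ARKit geo tracking is supported at the user's
// current location, then anchor content to a latitude/longitude.
// The coordinate and anchor name below are placeholders.
func placeGeoAnchor(in session: ARSession) {
    ARGeoTrackingConfiguration.checkAvailability { available, error in
        guard available else {
            print("Geo tracking unavailable here: \(String(describing: error))")
            return
        }
        // Run a geo-tracking session
        session.run(ARGeoTrackingConfiguration())

        // Pin a virtual waypoint to a real-world coordinate
        let coordinate = CLLocationCoordinate2D(latitude: 37.7954, longitude: -122.3937)
        session.add(anchor: ARGeoAnchor(name: "waypoint", coordinate: coordinate))
    }
}
```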

Is AR Navigation Coming Next from Apple?

Object Capture

Next on the list of practical AR updates from WWDC is Object Capture. This is a new 3D scanning feature that democratizes the process of capturing 3D models to be used in AR experiences. This most notably includes product models for AR visualization and camera commerce.

Naturally, this will work with Apple hardware and its advancing camera capabilities, including LiDAR. A companion app will guide users with prompts for capturing camera angles, then stitch the photos into 3D models. In other words, Apple has streamlined the art of photogrammetry.

In addition to the companion app, Object Capture will go to market through partners, such as an integration into Unity's MARS workflow. Apple will also make it available to developers and eCommerce brands like Wayfair, which can use it to streamline and scale their 3D asset creation.
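For developers, the capability surfaces as RealityKit's new PhotogrammetrySession API on macOS, which turns a folder of photos into a textured 3D model. Here's a minimal sketch of that flow; the file paths and the reduced detail level (a sensible choice for AR delivery) are placeholders.

```swift
import Foundation
import RealityKit

// Minimal sketch: turn a folder of captured photos into a USDZ model with
// RealityKit's PhotogrammetrySession (macOS). Paths and detail level are
// placeholders; production code would surface progress in the UI.
func reconstructModel() async throws {
    let inputFolder = URL(fileURLWithPath: "/path/to/captured-images", isDirectory: true)
    let outputModel = URL(fileURLWithPath: "/path/to/chair.usdz")

    var config = PhotogrammetrySession.Configuration()
    config.featureSensitivity = .normal   // raise for low-texture objects
    config.sampleOrdering = .sequential   // hint that shots were taken in order

    let session = try PhotogrammetrySession(input: inputFolder, configuration: config)

    // Ask for an AR-friendly, reduced-detail USDZ model
    try session.process(requests: [.modelFile(url: outputModel, detail: .reduced)])

    // Watch the session's async output stream for progress and completion
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Reconstruction \(Int(fraction * 100))% complete")
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url)")
        case .requestError(_, let error):
            print("Reconstruction failed: \(error)")
        case .processingComplete:
            return
        default:
            break
        }
    }
}
```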

As we’ve written, 3D asset creation is the biggest missing piece in camera commerce. Early AR adopters like IKEA have built homegrown systems, but the real opportunity is for something that’s more standardized and scalable. Apple just took a big step in that direction with Object Capture.

https://youtu.be/LVHhCgxwk0g

AR-Adjacent

In addition to the explicitly AR updates above, digital augmentation flowed through several Apple updates, as noted. For example, Spatial Audio — a flavor of AR we've examined — is now available natively across a range of Apple features and media, including music.

Speaking of audio, selective audio is now available for FaceTime calls, isolating sound sources to tone down background noise. Elsewhere in FaceTime, visual augmentation joins the party with Zoom-like effects and a new background blur that builds on Apple's Portrait Mode.

But perhaps most notable among these AR-adjacent functions is Live Text. Using computer vision and other machine-learning magic, users can scan text in the real world (or in their photos) to digitize it. Once in digital form, that text can be searched or translated.
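Live Text itself ships as a system feature rather than a new API, but Apple's existing Vision framework already exposes this kind of on-device text recognition to developers. A minimal sketch under that assumption, with the input image supplied by the caller:

```swift
import Vision
import UIKit

// Minimal sketch: on-device text recognition with the Vision framework,
// the same kind of OCR that powers features like Live Text.
func recognizeText(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNRecognizeTextRequest { request, error in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        // Take the top candidate string from each detected text region
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        print(lines.joined(separator: "\n"))
    }
    request.recognitionLevel = .accurate      // favor accuracy over speed
    request.usesLanguageCorrection = true     // clean up raw OCR output

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```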

A similar feature identifies objects like pets and landmarks, not unlike Google Lens. Triangulating all of the above, this will likely move towards visual product searches, which would make it a meaningful tool for AR shopping... and congruent with Apple's theme of AR utility.

We’ll pause there and cue the video. See an abridged version from The Verge below, which cuts out most of the fluff and transitions. Or scroll further for the full event. 
