The AR world continues to wait with bated breath for Apple to release its rumored smart glasses. The thought is that Apple’s classic halo effect will mainstream the still-nascent technology and raise all boats. Apple has done this several times before, including with smartphones and tablets.
Leading up to the recent WWDC, rumors swirled that this could finally be AR glasses’ “iPhone moment.” Signals included IP filings for a “Reality Operating System” (rOS) to play an iOS-adjacent role analogous to tvOS. But it turns out that the timing had nothing to do with WWDC.
Sure enough, there was no AR hardware unveiling. As these rumors amplify in the weeks preceding Apple events, a “cry wolf” dynamic is emerging in each cycle of anticipation and disappointment. But the tech and culture aren’t ready, and Apple knows it.
That said, the AR-world disappointment in the wake of WWDC has overshadowed the fact that there were notable AR elements at the event. Besides subtle signals throughout, Apple announced a new room scanning API known as RoomPlan that could democratize AR creation.
That announcement is the topic of this week’s XR Talks, with video and summarized takeaways below.
Before going into RoomPlan’s details, a bit of background. For AR to work, it needs a dimensional map of a given space. Of course, there’s rudimentary AR like early versions of Pokémon Go, but that involves “floating stickers” that don’t offer environmental interaction.
For AR’s true promise, digital elements should interact with their surroundings. Back to the Pokémon example, graphics should be able to hide behind trees or remain at a fixed distance while real-life people walk in front of or behind them, as dimensionally appropriate.
This brings us back to environmental scanning. The idea is that spaces are scanned for geometric and semantic understanding. The former is all about dimension, while the latter is all about context. For example, Niantic can parse surfaces like grass or water, for optimal AR placement.
Bringing all of this beyond gaming to more practical examples, room scanning can enable use cases for real estate, design, or architecture. Or as Apple puts it, “the first step in architecture and interior design workflows to help streamline conceptual exploration and planning.”
For example, interior design apps can integrate RoomPlan to let their users visualize wall colors to inform the right shade (and amount) of paint. This is a common pain point, given challenges in visualizing colors in the accurate lighting of a given room – a big factor with paint.
As for RoomPlan’s delivery, it’s an API that lets businesses integrate room scanning into their own apps. One caveat is that it requires LiDAR-equipped iOS devices. For those unfamiliar, LiDAR creates detailed digital scans of a given space by measuring the round-trip time of emitted light pulses.
For RoomPlan, “the framework inspects a device’s camera feed and LiDAR readings and identifies walls, windows, openings, and doors. [It] also recognizes room features, furniture, and appliances [–] a fireplace, bed, or refrigerator, and provides that information to the app.”
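To make that concrete, here’s a minimal sketch of how an app might drive a RoomPlan scan. This is an illustrative outline based on the framework’s documented types (RoomCaptureSession, CapturedRoom); the class name and print statements are ours, and a real app would render progress via RoomCaptureView and run on a LiDAR-equipped device:

```swift
import RoomPlan

// A minimal scan coordinator (hypothetical example class).
final class RoomScanner: RoomCaptureSessionDelegate {
    private let session = RoomCaptureSession()

    func start() {
        session.delegate = self
        // Kick off scanning with a default configuration.
        session.run(configuration: RoomCaptureSession.Configuration())
    }

    func stop() {
        session.stop()
    }

    // Called as the framework refines its model of the room.
    func captureSession(_ session: RoomCaptureSession,
                        didUpdate room: CapturedRoom) {
        // Geometric structure: walls, doors, windows, openings…
        print("Walls so far: \(room.walls.count)")
        // …plus semantically categorized objects (bed, refrigerator, etc.).
        print("Recognized objects: \(room.objects.count)")
    }

    func captureSession(_ session: RoomCaptureSession,
                        didEndWith data: CapturedRoomData,
                        error: Error?) {
        // Hand the final result to the app, e.g. for export or rendering.
        print("Scan finished, error: \(String(describing: error))")
    }
}
```

The key point for developers is the delivery model: the app just subscribes to delegate callbacks and receives a structured CapturedRoom, rather than processing raw LiDAR data itself.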
Of course, all of the above isn’t new, given that companies like Matterport offer high-end spatial maps for 3D model creation in real estate. The difference with RoomPlan (besides a slightly different output) is that it’s brought to any app via API, and any LiDAR-equipped iOS device.
The practical outcome is that 3D scanning for physical spaces – previously reserved for deeper-pocketed tech players – is brought to startups and app developers. It could also boost the functionality of existing tools for home projects, such as Houzz’s new AR renovation tool.
And that’s what comes next. We’ll see third-party apps increasingly jump on this opportunity and enable advanced room scanning. Current LiDAR penetration across iOS devices remains a constraint, but it will grow gradually, possibly in step with RoomPlan’s adoption.
We’ll pause there and cue the full video below from WWDC…