There’s been lots of XR industry talk about how AR will boost the automotive industry in various ways. It’s everything from AR-assisted design & manufacturing for automakers to AR visualization in the consumer buying cycle.
But beyond how AR can fuel the auto industry, what about the opposite? Specifically, could the computer vision requirements for autonomous vehicles (AVs) feed into AR? Could spatial maps generated by and for AVs also help construct AR’s biggest enabler: the AR Cloud?
AR and AVs share technological underpinnings in that AR’s area mapping (the ‘M’ in SLAM, or simultaneous localization and mapping) is broadly similar to the computer vision that self-driving cars use to “see” the road. The question is whether the quality standard required by the latter could create robust spatial maps for the former.
As we’ve examined, the deep-pocketed auto industry could refine computer vision in ways that spin out and assist AR. The industry is highly motivated to innovate for the era of AVs, and that R&D muscle could accelerate consumer-grade AR as a by-product.
Give and Get
AVs could add value on qualitative and quantitative levels. On the qualitative side, as mentioned, point clouds generated for AVs will have high standards of density and granularity, given the need for reliable vehicle guidance. That could exceed spatial mapping data captured by other means.
Quantitatively, the sheer volume of spatial mapping data required for a functional AR cloud will be a challenge. AVs could scale up that data collection: as they continuously gather mapping data to improve their own capabilities, the AR cloud could benefit by tapping into some of that same data.
Like 6D.ai’s crowdsourced approach with mobile devices, AR cloud data builds over time as people use it while also passively scanning their surroundings. This give-and-get approach is analogous to Waze, as we discussed with 6D. AVs will likely do the same to build data over time.
AVs may also add value by supplementing AR cloud data from vantage points that human- or mobile device-generated spatial maps lack. That includes street-level views in high-value places such as commercial districts. The pervasive sensors around the car amplify this.
By combining vehicle data collection with human- or mobile device-generated data, several concurrent meshes could be merged into a more robust and comprehensive 3D map. And that could create denser or more reliable point clouds that assist both AVs and AR.
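To make the merging idea concrete, here is a toy sketch in Python/NumPy of fusing point clouds from different sources (say, a vehicle scan and a phone scan of the same street) into one denser map, deduplicating overlap with a simple voxel grid. This is an illustration only, not any company’s actual pipeline; the function name and parameters are hypothetical, and real systems would first align the clouds into a shared coordinate frame.

```python
import numpy as np

def merge_point_clouds(clouds, voxel_size=0.5):
    """Fuse point clouds (assumed already in a shared world frame) into
    one map, keeping a single point per occupied voxel to remove
    near-duplicate points where the scans overlap."""
    points = np.vstack(clouds)  # (N, 3) combined points from all sources
    voxels = np.floor(points / voxel_size).astype(np.int64)
    # np.unique on voxel coordinates gives one representative per voxel.
    _, keep = np.unique(voxels, axis=0, return_index=True)
    return points[np.sort(keep)]

# Toy example: overlapping "vehicle" and "phone" scans (meters).
vehicle_scan = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
phone_scan = np.array([[1.2, 0.0, 0.0], [3.0, 0.0, 0.0]])  # first point overlaps
merged = merge_point_clouds([vehicle_scan, phone_scan], voxel_size=0.5)
# merged covers all four distinct areas, with the overlap collapsed
```

The design point the sketch makes: each source contributes coverage the others lack, while the voxel step keeps the combined map from double-counting shared areas.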
Highly Motivated
One piece of evidence for this concept can be seen in Apple’s recently launched navigational mapping initiative. It wants to improve Apple Maps by reconstructing the underlying mapping data from scratch. But while it’s doing that, it’s gathering 3D mapping data that could feed AR apps.
Apple’s motivation is to elevate the apps that developers build on ARKit, pursuant to its larger AR goals. Google values the AR cloud as a “search index for the physical world.” Either way, auto tech companies (including Waymo) that generate spatial mapping data could be in a valuable spot.
Of course, car-generated mapping data won’t be a silver bullet. The data cars generate is primarily meant to improve driving performance and accuracy, not AR. Conversely, AR-focused startups like 6D.ai and Ubiquity6 enable spatial maps that are purpose-built for AR.
But altogether, it could be a collective effort where several sources and vantage points contribute to the construction of an AR cloud (or decentralized collection of clouds). Just as AR boosts the auto industry in lots of ways, cars can give back valuable data: Another worthwhile “give-and-get.”
For deeper XR data and intelligence, join ARtillry PRO and subscribe to the free ARtillry Weekly newsletter.
Disclosure: ARtillry has no financial stake in the companies mentioned in this post, nor received payment for its production. Disclosure and ethics policy can be seen here.
Header image credit: WayRay