As we've examined in our ongoing Space Race series, one of AR's biggest areas of opportunity is geospatial experiences. Because AR's inherent function is to enhance the physical world, its relevance is often tied to specific locations. This is a foundational principle of the AR cloud.
For this reason, one of AR’s competitive battlegrounds will be in augmenting the world in location-relevant ways. That could be wayfinding with Google Live View, geospatial gaming experiences from Niantic, or location-specific social AR experiences like Snap’s Local Lenses.
To synthesize these dynamics, AWE Nite recently gathered a heavy-hitter panel, which we covered in Parts I and II of this series. Part III now continues the discussion with a VentureBeat panel on a similar topic, but with a new twist: 3D location data.
The Z-Axis
To start, what is 3D location data? Most location data we encounter is two-dimensional, defined along the X and Y axes: think of the latitude and longitude in GPS coordinates. 3D location data adds a Z-axis to the mix to define position in terms of elevation.
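To make the distinction concrete, here's a minimal Python sketch of a location record gaining a Z-axis; the class names and the elevation convention (meters above ground level) are illustrative assumptions, not any vendor's schema.

```python
from dataclasses import dataclass

@dataclass
class GeoPoint2D:
    """Conventional 2D location: latitude and longitude in degrees."""
    lat: float
    lon: float

@dataclass
class GeoPoint3D(GeoPoint2D):
    """3D location: adds a Z-axis value, here meters above ground level."""
    elevation_m: float

# A shopper on the second floor of a mall, roughly 4.5 m above street level
shopper = GeoPoint3D(lat=37.7749, lon=-122.4194, elevation_m=4.5)
```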
Why is this useful? For one, when targeting mobile ads, lat/long precision is critical for hitting the right geofenced target. But how does that work in a multi-level building, such as a three-story shopping mall, or a 60-story tower whose lower floors house commercial businesses?
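To illustrate why this matters for targeting, the hypothetical check below extends a standard lat/lon bounding box with a floor range; the class and its fields are illustrative, not any real ad platform's API.

```python
from dataclasses import dataclass

@dataclass
class GeoFence3D:
    """Hypothetical geofence: a lat/lon bounding box plus a floor range."""
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float
    min_floor: int
    max_floor: int

    def contains(self, lat: float, lon: float, floor: int) -> bool:
        """True only if the device is inside the box AND on a target floor."""
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon
                and self.min_floor <= floor <= self.max_floor)

# Target the second-floor food court, not the ground-floor stores below it
food_court = GeoFence3D(40.7580, 40.7582, -73.9857, -73.9855, 2, 2)
print(food_court.contains(40.7581, -73.9856, floor=2))  # True: serve the ad
print(food_court.contains(40.7581, -73.9856, floor=1))  # False: wrong floor
```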
Similarly, in location-based ad attribution — tracking users’ anonymized foot traffic to see if ad exposure resulted in store visits — 3D location can add dimension. It’s more insightful if you know someone visited The Gap on the second floor versus the Hot Topic right below it.
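In attribution terms, the Z-axis effectively adds a floor to the join key that matches a visit to a store. A hypothetical sketch:

```python
# Hypothetical store directory keyed by (venue, floor); illustrative data only
STORES = {
    ("example_mall", 1): "Hot Topic",
    ("example_mall", 2): "The Gap",
}

def ad_converted(venue: str, floor: int, advertised_brand: str) -> bool:
    """True if the visited store (venue + floor) matches the advertised brand."""
    return STORES.get((venue, floor)) == advertised_brand

# With 2D data alone, both visits resolve to the same lat/long;
# the floor is what distinguishes The Gap from the Hot Topic below it.
print(ad_converted("example_mall", 2, "The Gap"))  # True: visit attributed
print(ad_converted("example_mall", 1, "The Gap"))  # False: different store
```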
Beyond a marketing context, Z-level accuracy becomes more urgent and valuable in emergency response scenarios. When someone calls 911 from the 32nd floor of a 40-story building, a precise fix on their three-dimensional location can save precious minutes.
According to Dan Hight of 3D location specialist NextNav, these use cases continue to expand. For example, 3D location data can enhance in-stadium experiences such as ordering food from your seat, or summoning customer service in a multi-level department store or mall.
What About AR?
All of the above examples are in marketing and customer-service contexts… so what about AR? This traces back to the AR cloud's fundamental need for precise spatial data. In the department store example above, for instance, 3D location data could unlock AR navigation that guides you to products.
Stepping back, the AR cloud requires spatial mapping data so devices can localize themselves (know where they are) and overlay the right graphics (know what they’re looking at). These functions rely on a combination of visual object recognition and geospatial orientation (geopose).
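The output of localization can be pictured as a single record that combines position (including the Z-axis) with orientation. The sketch below is loosely inspired by the OGC GeoPose concept; the exact fields are illustrative assumptions, not a standard's schema.

```python
from dataclasses import dataclass

@dataclass
class GeoPose:
    """Where a device is and which way it faces; fields are illustrative."""
    lat: float           # latitude in degrees
    lon: float           # longitude in degrees
    elevation_m: float   # Z-axis: elevation in meters
    qx: float = 0.0      # orientation as a unit quaternion...
    qy: float = 0.0
    qz: float = 0.0
    qw: float = 1.0      # ...defaulting to the identity rotation

# A localized device: position fixes where to anchor graphics,
# orientation fixes which graphics are in view.
device = GeoPose(lat=37.7749, lon=-122.4194, elevation_m=18.0)
```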
An example of a product that utilizes these signals is Google Live View. By matching what the camera sees against Google's database of Street View imagery, AR devices can localize themselves. Positioning is further reinforced with GPS and IMU data before dimensionally accurate 3D directions are overlaid.
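Google's actual pipeline is proprietary, but the general idea of reinforcing a visual fix with GPS can be sketched as an inverse-variance blend, in which the more certain signal dominates. The function and error figures below are a toy illustration, not Live View's algorithm.

```python
def fuse_position(gps_xyz, vps_xyz, gps_var=25.0, vps_var=0.25):
    """Blend a coarse GPS fix with a precise visual-positioning (VPS) fix
    by inverse-variance weighting. Assumed variances (meters^2) imply
    roughly 5 m GPS error versus 0.5 m VPS error; both are illustrative."""
    w_gps, w_vps = 1.0 / gps_var, 1.0 / vps_var
    total = w_gps + w_vps
    return tuple((w_gps * g + w_vps * v) / total
                 for g, v in zip(gps_xyz, vps_xyz))

# The blended estimate lands near the VPS fix, since it is far more certain
print(fuse_position((10.0, 20.0, 5.0), (12.0, 21.0, 4.5)))
```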
Beyond Google and its Street View database, efforts are underway at Facebook, Apple, Niantic, and others to spatially map the physical world as a foundation for AR experiences. 5G will also help, as its short-range, high-frequency signals can support much finer location precision.
But systems that support Z-axis positioning will likewise be an enabling factor for geospatial AR. After all, one of the fundamentals of spatial computing is recognizing and rendering objects in three dimensions. That requires understanding not only depth but, in some cases, elevation.
This could involve playing Pokémon Go across topographical planes, or navigating the hills of San Francisco with Google Live View. Whether it's games or utilities — both of which will drive value in AR — Z-axis orientation will be one factor in the continued quest to unlock realistic augmentation.
We’ll pause there and cue the full video below…