Because AR’s inherent function is to enhance the physical world, its relevance is often tied to specific locations. This is what we call geolocal AR: experiences whose value derives from where they happen, such as informational overlays on storefronts and navigational waypoints.
If this sounds familiar, it’s the foundational principle behind the AR cloud, a conceptual framework for AR’s future. For those unfamiliar, the AR cloud is a data mesh that covers the physical world, informing and empowering AR devices to invoke the right content and experiences in the right places.
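To make the concept a bit more concrete, here’s a minimal sketch of what an AR-cloud-style lookup could resemble: content indexed by location cells so a device can fetch what’s relevant nearby. This is not any vendor’s actual API; the types and the cell scheme are hypothetical.

```swift
import Foundation

// Hypothetical sketch of an AR-cloud-style lookup: content keyed by a
// coarse location cell (e.g., a geohash-like string). Not a real API.
struct ARContent {
    let id: String
    let assetURL: URL
}

var arCloudIndex: [String: [ARContent]] = [:]  // location cell -> content

// Return whatever content is registered for the device's current cell.
func content(forCell cell: String) -> [ARContent] {
    arCloudIndex[cell] ?? []
}
```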
This concept may also sound familiar because it aligns with a buzzword that’s run rampant: the metaverse. Though the term is most often used for fully digital online experiences, it can also involve content that brings digital depth to real-world places. Call it AR’s metaverse.
This is also what we call the metavearth, and it’s the topic of a recent report from our research arm, ARtillery Intelligence. Entitled Geolocal AR: The Metavearth Materializes, it breaks down the drivers and dynamics of this emerging area and is the subject of our latest excerpt (below).
Geolocal AR: The Metavearth Materializes
LiDAR
Picking up where we left off in the last installment of this series, we continue through the supporting technologies that will accelerate geospatial AR. For example, SDKs like Niantic Lightship enable developers to build apps and web AR experiences, as we examined last week.
Also on that list is LiDAR. Short for light detection and ranging, LiDAR uses sensors that measure how long emitted light takes to reach an object and bounce back. This is the state of the art for depth sensing, and it’s how autonomous vehicles achieve the computer vision to “see” the road.
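The underlying arithmetic is simple time-of-flight math: distance is the round-trip time multiplied by the speed of light, halved because the pulse travels out and back. A quick sketch:

```swift
import Foundation

// Speed of light in meters per second.
let speedOfLight = 299_792_458.0

/// Distance implied by a LiDAR pulse's round-trip time.
/// Halved because the pulse travels to the surface and back.
func distance(fromRoundTripSeconds t: Double) -> Double {
    speedOfLight * t / 2.0
}

// A pulse returning after ~20 nanoseconds implies a surface ~3 m away.
let meters = distance(fromRoundTripSeconds: 20e-9)  // ≈ 3.0
```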
This will manifest in mostly unseen computational work that happens before AR graphics are drawn, such as spatial mapping. Because it senses depth directly, LiDAR is better equipped than standard cameras to quickly scan room contours, the first step toward believable, dimensionally accurate AR that’s anchored to physical places.
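On iOS, this is the kind of work ARKit’s scene reconstruction exposes on LiDAR-equipped devices. A minimal sketch, assuming an existing RealityKit ARView is passed in:

```swift
import ARKit
import RealityKit

// Assumes an existing RealityKit view; requires a LiDAR-equipped device.
func startSpatialMapping(in arView: ARView) {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        // Build a live 3D mesh of room contours from LiDAR depth data.
        config.sceneReconstruction = .mesh
    }
    arView.session.run(config)
}
```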
That will engender new use cases for AR. For example, it means more indoor activations, such as spatially mapping your office or bedroom. It also extends AR from the front-facing camera (selfie lenses) to the broader canvas of the physical world.
As for timing, LiDAR-enabled cameras are available only on certain higher-end phones, such as the iPhone 13 Pro, but they should phase into the rest of the iPhone lineup in the coming years. That broader footprint will unlock AR’s next generation and better enable geospatial AR.
5G
5G has become quite a buzzword in and out of AR circles. But for geospatial AR specifically, the technology could become a critical enabler and force multiplier. How? Among other things, we’ll zero in on three key factors: speed, edge computing, and location precision.
Starting with speed, 5G offers a wider pipe for AR’s (and VR’s) polygon-heavy payloads, along with the low latency those experiences require. That expanded capacity could also inspire innovation around the next generation of bandwidth-intensive AR experiences.
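Back-of-envelope math shows why the wider pipe matters. The throughput figures below are illustrative assumptions, not measurements:

```swift
// Transfer time for a polygon-heavy AR asset at a given throughput.
// Throughput figures are illustrative assumptions, not measurements.
func transferSeconds(megabytes: Double, megabitsPerSecond: Double) -> Double {
    (megabytes * 8.0) / megabitsPerSecond
}

let assetMB = 50.0  // hypothetical 3D scene payload
let onLTE = transferSeconds(megabytes: assetMB, megabitsPerSecond: 30)   // ≈ 13.3 s
let on5G  = transferSeconds(megabytes: assetMB, megabitsPerSecond: 300)  // ≈ 1.3 s
```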
Beyond raw connectivity, 5G is characterized by a short-range, high-frequency signal. Among other things, this primes the technology for edge computing, which lets AR devices offload CPU and GPU work to the network edge. Those devices can then shed size, heat, and cost.
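In practice, offloading might look like shipping a frame or compute task to a nearby edge endpoint and receiving the processed result. A sketch; the endpoint URL and payload shape are assumptions for illustration:

```swift
import Foundation

// Hypothetical: POST a heavy vision/rendering task to an edge endpoint
// instead of running it on-device. URL and payload shape are assumptions.
func offloadToEdge(frameData: Data) async throws -> Data {
    var request = URLRequest(url: URL(string: "https://edge.example.com/process")!)
    request.httpMethod = "POST"
    request.httpBody = frameData
    let (result, _) = try await URLSession.shared.data(for: request)
    return result  // e.g., a recognition result or rendered overlay
}
```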
Lastly, 5G enables location precision. The same high-frequency signal supports positioning far finer than GPS’ meter-level accuracy, which will be critical for geospatial AR use cases like holding up your phone to identify storefronts and waypoints. This is where inches matter.
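ARKit’s geo tracking hints at how this plays out in code: content pinned to a real-world coordinate rather than an arbitrary session origin. A sketch, with a made-up storefront coordinate (note that geo tracking requires ARGeoTrackingConfiguration and is only available in supported regions):

```swift
import ARKit
import CoreLocation

// Pin AR content to a real-world coordinate via ARKit geo tracking.
// The storefront coordinate below is made up for illustration.
func anchorStorefrontOverlay(session: ARSession) {
    let storefront = CLLocationCoordinate2D(latitude: 37.7955, longitude: -122.3937)
    let anchor = ARGeoAnchor(coordinate: storefront)
    session.add(anchor: anchor)
    // A renderer (e.g., RealityKit) can attach an informational
    // overlay to this anchor once it localizes.
}
```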
Of course, it could be a while before all of the above is realized on a practical level. As we discussed with Niantic in a recent virtual event, 5G’s location precision is promising but moot until networks are fully rolled out and 5G mobile hardware becomes more ubiquitous.
Z-Axis Location Data
The third and final enabling technology is 3D location data. As background, accurate location data is one key puzzle piece of the geospatial AR vision. And because we’re talking about three-dimensional content, the location data itself needs to be more dimensional than GPS.
This notion has led to 3D location data from players like NextNav. The idea is that most location data we encounter is based on two dimensions: X & Y – or lat/long in GPS terms. 3D location data adds a Z-axis to the mix to define spatial positioning and navigation in terms of elevation.
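Notably, platform location APIs already model this third dimension. Apple’s CoreLocation, for instance, carries altitude alongside lat/long:

```swift
import CoreLocation

// GPS-style lat/long covers X & Y; the Z-axis is elevation.
// CoreLocation models this via altitude and verticalAccuracy.
func describe(_ location: CLLocation) -> String {
    let lat = location.coordinate.latitude
    let lon = location.coordinate.longitude
    let z = location.altitude              // meters above sea level
    let zErr = location.verticalAccuracy   // meters; negative = invalid
    return "(\(lat), \(lon)) at \(z) m ±\(zErr) m"
}
```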
One traditional use is targeting mobile ads. GPS lat/long readings can’t distinguish floors in places like three-story shopping malls or 60-story buildings with commercial businesses on lower floors. In these cases, elevation matters for delivering the right content.
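A hypothetical sketch of why that matters: inferring a floor from elevation, assuming a known ground-level elevation and an average floor height (both assumptions supplied by the caller):

```swift
// Hypothetical floor inference from elevation. Ground elevation and
// average floor height are assumptions, not measured values.
func floorNumber(altitudeMeters: Double,
                 groundElevationMeters: Double,
                 floorHeightMeters: Double = 4.0) -> Int {
    max(0, Int((altitudeMeters - groundElevationMeters) / floorHeightMeters))
}

// e.g., 12 m above a ground elevation of 0 m ≈ the 3rd floor.
let floor = floorNumber(altitudeMeters: 12, groundElevationMeters: 0)
```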
Beyond a marketing context, Z-level accuracy becomes more valuable in emergency response scenarios. When someone calls 911 from the 32nd floor of a 40-story building, being able to get a precise fix on a three-dimensional location can save precious minutes.
As for AR, this could involve playing Pokémon Go across topographical planes, or navigating the hills of San Francisco with Google Live View. Whether it’s games or utilities (both of which will drive value in AR), Z-axis orientation will be one factor in the continued quest for geolocal AR.