XR Talks is a weekly series that features the best presentations and educational videos from the XR universe. It includes embedded video, as well as narrative analysis and top takeaways. 


Google continues to blitz immersive computing. Several points along the XR spectrum feed, advance or protect its revenue streams, which is evident in its ongoing product launches. The latest batch was showcased this week at its I/O developer conference.

One of the least discussed but most impactful XR announcements was Google’s new AR Cloud Anchors. As we examined yesterday, they’ll standardize tools for developers to achieve AR persistence, a key enabler of would-be killer apps in social AR.

Cloud Anchors will also be cross-platform, working across both ARCore and ARKit. That’s key to preventing the fragmentation and platform incompatibility that have stunted XR’s growth to date. We’ll continue to see unifying technologies meant to counteract that.

“We know Cloud Anchors will enable many new AR experiences that combine the power of your device, your creativity and the people around you,” said Google’s Nathan Martz. “But because these experiences are so powerful, they should work regardless of the kind of phone you own.”
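For developers, the basic flow is host and resolve: one device hosts a locally placed anchor with Google’s service and gets back an ID, and other devices resolve that ID into an anchor in their own coordinate space. Below is a minimal sketch of the Android/ARCore side in Java, assuming an ARCore Session and a locally placed Anchor already exist; the frame loop, error handling and ID sharing between devices are left out, and the ARKit side goes through Google’s ARCore SDK for iOS.

```java
// Minimal Cloud Anchors sketch (Android/ARCore, Java). Assumes a Session and a
// locally placed Anchor already exist; networking to share the ID is up to the app.
import com.google.ar.core.Anchor;
import com.google.ar.core.Config;
import com.google.ar.core.Session;

public class CloudAnchorSketch {

  // Turn on Cloud Anchors for an existing ARCore session.
  static void enableCloudAnchors(Session session) {
    Config config = new Config(session);
    config.setCloudAnchorMode(Config.CloudAnchorMode.ENABLED);
    session.configure(config);
  }

  // Device A: upload a locally placed anchor to Google's service.
  // Poll getCloudAnchorState() each frame; once it reports SUCCESS,
  // getCloudAnchorId() returns the ID to share with other devices.
  static Anchor hostAnchor(Session session, Anchor localAnchor) {
    return session.hostCloudAnchor(localAnchor);
  }

  // Device B: resolve the shared ID into an anchor in its own coordinate space,
  // so both users see content attached to the same real-world spot.
  static Anchor resolveAnchor(Session session, String cloudAnchorId) {
    return session.resolveCloudAnchor(cloudAnchorId);
  }
}
```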

(see the video below for a time-stamped clip of AR Cloud Anchors)

Elsewhere in the I/O keynote were updates to visual search and Google Lens. This is one of Google’s main AR plays, as it aligns with its core search business. The thinking is that visually immersive tech, in this case the increasingly active camera, is a search input.

We believe this will eventually be applied to highly monetizable search categories such as retail and local discovery. But first it’s taking form in utilitarian products that will attract and grow a user base. Most notably, that includes AR walking navigation built on Google’s VPS (visual positioning system).

“Just like when we’re in an unfamiliar place, [we] look for visual landmarks — storefronts, building facades, etc.” said Google’s Aparna Chennapragada. “It’s the same idea: VPS uses visual features in the environment [to] help figure out exactly where you are and where you need to go.”

Beyond wayfinding, visual search will use computer vision and machine learning to ingest and process text. For example, Google Lens will decipher menus (to find out what’s in a dish) and street signs, among other use cases that will develop in logical (and eventually monetizable) ways.
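Lens itself isn’t exposed as a developer API, but Google’s Cloud Vision API offers a rough developer-facing analogue for the text-extraction step. The sketch below (Java, illustrative only and not how Lens works internally) sends a photo of, say, a menu to the Vision API’s TEXT_DETECTION feature and prints the raw JSON response; it assumes an API key in a VISION_API_KEY environment variable and an image path as the first command-line argument.

```java
// Illustrative only: send an image to Google's Cloud Vision API for OCR,
// roughly the text-extraction step behind reading a menu or street sign.
// Assumes a Cloud Vision API key in the VISION_API_KEY environment variable.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;
import java.util.Scanner;

public class MenuTextSketch {
  public static void main(String[] args) throws Exception {
    String apiKey = System.getenv("VISION_API_KEY");
    String imageBase64 = Base64.getEncoder()
        .encodeToString(Files.readAllBytes(Paths.get(args[0]))); // e.g. menu.jpg

    // TEXT_DETECTION asks Cloud Vision to run OCR on the supplied image.
    String body = "{\"requests\":[{"
        + "\"image\":{\"content\":\"" + imageBase64 + "\"},"
        + "\"features\":[{\"type\":\"TEXT_DETECTION\"}]}]}";

    URL url = new URL("https://vision.googleapis.com/v1/images:annotate?key=" + apiKey);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes("UTF-8"));
    }

    // The JSON response includes a textAnnotations array with the recognized text.
    try (Scanner scanner = new Scanner(conn.getInputStream(), "UTF-8")) {
      System.out.println(scanner.useDelimiter("\\A").next());
    }
  }
}
```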

Another example is “Style Match,” which searches for items similar to the apparel users point their phones at. That has clear shopping and commerce tie-ins, and the next step will be to integrate visual searches into transactional functionality through Google Shopping or partners like Pinterest.

It should be noted that most of the above was presented in the context of what’s coming, without explicit launch dates. In typical Google fashion, these features will be released at some point and iterated on in the wild. Next steps will also include real-time visual search across a wider range of physical items and products.

“The camera is not just answering questions, but putting the answers right where the questions are,” said Chennapragada.

(see the video below for a time-stamped clip of Google Lens and VPS)


For deeper XR data and intelligence, join ARtillry PRO and subscribe to the free ARtillry Weekly newsletter. 

Disclosure: ARtillry has no financial stake in the companies mentioned in this post, nor received payment for its production. Disclosure and ethics policy can be seen here.