As you may know, Google I/O took place this week. The annual developer conference had the standard roster of rapid-fire updates from the Googleverse. This year, the underlying theme was all things AI, with most new products having an intelligence layer of some sort.

That notably includes Google’s AR-related announcements and updates. Though Google has walked away from some AR (and VR) initiatives, others are going strong. And now they’re stronger with a dose of AR. As we recently examined, AR and AI will be natural bedfellows.

So what were the AR and AI highlights at I/O? And how do they signal the continued convergence of these technologies? That’s the focus of this week’s XR Talks, which breaks down our three biggest moments from the show in strategic takeaways and embedded video.

Can Google Merge AR and AI?

1. Set Up to Scale

First on the list is Google’s Geospatial Creator. This takes Google’s work around geospatial AR and sets it up to scale by living elsewhere. Specifically, Geospatial Creator plugs into Adobe Aero and Unity so that developers can build 3D animations that are anchored to physical places.

The 3D creation part is where Unity and Adobe shine, while “anchored to physical places” is what Google brings to the table. It has assembled geospatial data for years, starting with Street View. That data helps devices localize by determining their coordinates and heading (known as geopose).

This device localization is the first step toward activating the right content for geo-anchored AR. We’re talking promotional animations around local storefronts, creative endeavors around public spaces, or architectural mockups of a new building project (see video below).
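To make that activation step concrete, here’s a minimal Python sketch — not Google’s API, just an illustration under assumed names — of how a device’s geopose (coordinates plus heading) could be checked against geo-anchored content, activating only the assets whose physical radius contains the user.

```python
import math
from dataclasses import dataclass

@dataclass
class Geopose:
    """Hypothetical device pose: location plus compass heading."""
    lat: float      # degrees
    lng: float      # degrees
    heading: float  # degrees clockwise from north

@dataclass
class GeoAnchor:
    """An AR asset pinned to a physical place, with an activation radius."""
    name: str
    lat: float
    lng: float
    radius_m: float

def haversine_m(lat1, lng1, lat2, lng2):
    """Great-circle distance in meters between two lat/lng points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def active_anchors(pose, anchors):
    """Return the anchors whose activation radius contains the device."""
    return [a for a in anchors
            if haversine_m(pose.lat, pose.lng, a.lat, a.lng) <= a.radius_m]
```

In practice a device standing at a storefront would activate that storefront’s promotional animation but not an anchor a few blocks away — the localization step narrows billions of possible assets down to the handful that are physically nearby.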

Back to the word of the day, the next step could be to infuse all the above with the magic of generative AI. That could involve 3D animations that are generated on the fly via voice prompts. As we’ve examined, this can bring additional serendipity to geospatial AR experiences.

2. Watch Where You’re Going

Next up, Google Maps’ Immersive View is expanding into routing. For those unfamiliar, Immersive View is Google Maps’ feature for dimensional and stylized maps. Rather than 2D overhead maps with color coding and some topography, these maps look more like a 3D game.

Immersive View is available in high-traffic areas and waypoints so that users (think: travelers) can preview them with greater depth and color. With this week’s expansion to routing – coming to 15 cities by the end of this year – those same features now apply to everyday navigation.

The idea is that an elevated sense of realism in routing and navigation helps you mentally map 2D directions to 3D space. Users can visualize places they’re going and anticipate factors like traffic or hectic cityscapes – variables that often don’t come through in 2D mapping.

Google will even layer in weather conditions for a more realistic preview of your route. And all of this wouldn’t be complete without the magic word: AI. Immersive View uses computer vision and AI to stitch billions of Street View and aerial images into a 3D model of the world.

3. Seeing Stars

Speaking of image stitching, Google’s Project Starline has slimmed down and leveled up. But first, what is Project Starline? This is a 3D teleconferencing technology housed in a photobooth-like structure. It contains camera arrays to simulate natural depth and object parallax.

New this week for Project Starline is a streamlined form factor that makes it more realistic for homes, offices, and other potential placements (think: hotel business centers). Google says that the prototype has gone from the size of a restaurant booth to the size of a flat-screen TV.

It achieved this by requiring fewer cameras to render people in 3D, given the integration of more… you guessed it… AI. Software now picks up the 3D rendering load previously accomplished through several cameras. This makes it a sort of software-simulated flavor of 3D.

This is similar to the way that Apple’s Object Capture creates 3D models from 2D images. It’s also similar to smartphone software innovations to achieve advanced photography styles and effects without hardware focal depth. In other words, Google has some experience here.

Honorable Mentions

In addition to the above updates and announcements, a few other I/O moments stood out. They include Magic Eraser, which lets you remove unwanted objects from photos in Google Photos. This is a form of augmentation that meets broadened definitions of the term – a natural evolution.

Beyond AR, we saw an AI-driven update to the search engine results page (SERP). This involves a conversational AI module that will get prime placement in SERPs and seek to answer search queries directly. This positions AI ahead of PageRank and the “10 blue links.” A big move.

Google has been building towards this for years with SERP evolutions that tap the knowledge graph to answer questions. So though Google has gotten flak lately for being late to AI, it’s better positioned than many think, with some of the best image and text training sets on the planet.

We’ll pause there and cue the video below, which is a sped-up version of the most notable announcements from the show…
