Online mapping continues to see a barrage of new features, driven by Google’s lead and Apple’s escalating challenges to that lead. The result is a pace of feature rollouts more typical of an early-stage product, including lots of AI and AR, which is rare for a product that’s fairly mature.
The latest in this feature arms race comes from Google, which recently announced a range of functional updates. If there’s a theme to extract, it’s more underlying data and AI-fueled automation to help users both navigate and find local businesses that match their needs.
Some of these are new features, some are updates (or geographical expansions), and others are the official launches of features previewed at recent Google events. Let’s take them one at a time, including new features that rest firmly on AI, visual search, and 3D navigation.
Lens in Maps
Google Lens lets users point their phones at items to identify them – an area known as visual search. It joins voice search in Google’s efforts to expand the surface area for search and future-proof its core business. And it’s now more accessible and better integrated into Google Maps.
To activate it, users can tap the Lens icon in the Maps search bar, then lift their phone to reveal dimensional overlays that signal the locations of nearby businesses. Top business categories include ATMs, transit stations, restaurants, coffee shops, and retail stores.
A use case that Google underscores is exiting a subway station and needing to orient oneself and get the lay of the land. Previously known as Search with Live View, Lens in Maps is now live in 50 new cities, including Austin, Las Vegas, Rome, São Paulo, and Taipei.
Conversational Search

In Maps and in search, Google continues to find ways to be more conversational. Though accelerated by the rise of conversational AI like ChatGPT, this has always been part of Google’s evolutionary path, seen in things like the knowledge graph and one-box answers.
As part of that evolution, Google sees itself as a discovery engine in addition to a search engine. Here it wants to inspire things to do, see, and buy locally. To that end, users can type broad characteristics of things they’re looking for, rather than specific business names.
For example, Maps can now handle broad interest-based queries like “animal latte art” or “pumpkin patch with my dog.” This could also converge with another AI-based initiative from Google: Multisearch. Search results include photos, listings, and mapping/navigation.
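These broad, interest-based queries are a consumer-app feature, but developers can approximate the same idea with Google’s existing Places Text Search API, which accepts free-form query strings rather than business names. The sketch below only builds the request URL – `YOUR_API_KEY` is a placeholder, and no request is actually sent:

```python
from urllib.parse import urlencode

def places_text_search_url(query: str, api_key: str) -> str:
    """Build a Places Text Search URL for a broad, interest-based query."""
    base = "https://maps.googleapis.com/maps/api/place/textsearch/json"
    return f"{base}?{urlencode({'query': query, 'key': api_key})}"

# A free-form query like the ones Google highlights for Maps:
url = places_text_search_url("pumpkin patch with my dog", "YOUR_API_KEY")
print(url)
```

Fetching that URL (with a real key) returns a JSON list of candidate places, which mirrors what the Maps app surfaces as photos, listings, and navigation options.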
Immersive View for Routes

Immersive View is expanding into routing and navigation. As background, Immersive View is an existing feature that makes Google Maps more dimensional and stylized. Instead of boring maps with color coding and topography, these maps look more like a video game.
With the expansion to routing, those visuals now appear when you’re trying to get somewhere. With a greater sense of realism in routing and navigation, users can confirm they’re in the right place and preview their routes visually – cues that don’t come through in 2D maps.
This is all made possible by computer vision and AI, which stitch billions of Street View and aerial images into 3D models. Immersive View routes are now available on Android and iOS for major Western European cities as well as Las Vegas, Los Angeles, Miami, New York, San Francisco, Seattle, and Tokyo.
So there you have it… the latest round of updates. Expect more as standards continue to evolve in all of the underlying elements that drive Maps: AI, AR, computer vision, and the moving target of user expectations. All these factors will continue to grow in step and drive one another.