Tech giants see different versions of spatial computing’s future, and those visions often trace back to their core businesses. Facebook wants to be the social layer of the spatial web, Amazon wants to be the commerce layer, and Apple envisions a hardware-centric multi-device play.
Where does Google fit in all of this? It wants to be the knowledge layer of the spatial web. Just as it amassed immense value by indexing the web and building a knowledge graph, it wants to index the physical world and be its relevance authority. This is what we call the Internet of places (IoP).
Beyond the financial incentive to future-proof its core search business with next-generation visual interfaces (per our ongoing “follow the money” exercise), Google’s actual moves triangulate an IoP play. These include its “search what you see” Google Lens and its Live View 3D navigation.
These are all moves we’ve examined, but more clues have recently emerged. So to connect the dots on Google’s visual future, it’s time to revisit the topic and layer in the latest evidence. Given Google’s gravitational pull, knowing its spatial endpoints can help startups align their own road maps.
Level-Setting
Google’s latest moves include updates to its Live View visual navigation that help users identify and qualify local businesses. Unveiled at its recent Search On 2020 event, the update follows soon after Earth Cloud Anchors, which will let users anchor digital content to physical places.
Before diving into these moves in greater detail, let’s level-set on Google’s broader AR ambitions. The company continues to invest in visual search to future-proof its core search business, as noted. Given Gen-Z’s camera affinity, Google wants to lead the charge in making the camera a search input.
This includes Google Lens, which lets users point their cameras at real-world objects to contextualize them. This starts with general interest searches like pets and flowers, but the real opportunity is a high-intent shopping engine that’s monetized in Google-esque ways.
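Lens itself isn’t exposed as a public API, but Google’s Cloud Vision API offers the same basic mechanic and gives a feel for how a camera frame becomes a search query. Below is a minimal Kotlin sketch using Cloud Vision as a stand-in (our assumption for illustration, not how Lens is actually built); it assumes Google Cloud credentials are already configured.

```kotlin
import com.google.cloud.vision.v1.AnnotateImageRequest
import com.google.cloud.vision.v1.Feature
import com.google.cloud.vision.v1.Image
import com.google.cloud.vision.v1.ImageAnnotatorClient
import com.google.protobuf.ByteString
import java.nio.file.Files
import java.nio.file.Paths

fun main() {
    // Load a "camera frame" (here, a saved photo) as raw bytes.
    val bytes = ByteString.copyFrom(Files.readAllBytes(Paths.get("storefront.jpg")))
    val image = Image.newBuilder().setContent(bytes).build()

    // Ask for label detection: the "what am I looking at?" step.
    val request = AnnotateImageRequest.newBuilder()
        .setImage(image)
        .addFeatures(Feature.newBuilder().setType(Feature.Type.LABEL_DETECTION).build())
        .build()

    ImageAnnotatorClient.create().use { client ->
        val response = client.batchAnnotateImages(listOf(request)).getResponses(0)
        // Each label is a candidate answer with a confidence score.
        response.labelAnnotationsList.forEach { label ->
            println("${label.description}: ${label.score}")
        }
    }
}
```

The specific client library isn’t the point: a camera frame goes in, and ranked, scored entities come out. Those entities are the raw material for a high-intent shopping result.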
Live View similarly uses the camera, helping users navigate with 3D urban walking directions. Instead of the “mental mapping” required to translate a 2D map to 3D space, holding up your phone to see directional arrows overlaid on the street is more intuitive. And like Google Lens, monetization is on the road map.
Google is uniquely positioned in these efforts because they tap into its knowledge graph and the data it has assembled as the world’s primary search engine for 20 years. Lens draws on Google’s vast image database for object recognition, while Live View uses Street View imagery.
Master Plan
This brings us back to the present. Google’s latest visual search play combines Lens and Live View. While navigating with Live View, Google now overlays small tap targets on your screen when it recognizes a business storefront. Tapping one surfaces expanded business information.
This is something Google has been teasing for a few years. It includes business details that help users discover and qualify businesses. The data flows from Google My Business (GMB), and the current version offers structured listings content like hours of operation.
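To make that concrete, here’s a rough sketch of the kind of structured payload such a tap target might carry. This is purely illustrative: every field name below is our own invention, as Google doesn’t publish the schema behind these cards.

```kotlin
// Hypothetical shape of the listing data behind a Live View tap target.
// All names are illustrative, not an actual Google schema.
data class StorefrontListing(
    val name: String,
    val category: String,
    val rating: Double,             // aggregate review score
    val hours: Map<String, String>, // e.g. "Mon" -> "9:00-17:00"
    val isOpenNow: Boolean,
    val phone: String?              // optional contact detail
)

// Render the expanded card a user sees after tapping a storefront.
fun render(listing: StorefrontListing): String = buildString {
    appendLine("${listing.name} · ${listing.category} · ★ ${listing.rating}")
    appendLine(if (listing.isOpenNow) "Open now" else "Closed")
    listing.hours.forEach { (day, span) -> appendLine("$day: $span") }
    listing.phone?.let { appendLine("Call: $it") }
}
```

The broader point: the AR layer here is thin. It’s largely existing GMB listings data re-skinned onto the camera viewfinder.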
As mentioned, this follows Google’s less-discussed Earth Cloud Anchors. Another vector in Google’s overall visual search and AR master plan, this ARCore feature lets users geo-anchor digital content for others to view. This could feed into Google Lens and Live View.
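Mechanically, Cloud Anchors work by having one device “host” an anchor (ARCore uploads visual feature data that maps the spot) while other devices “resolve” it by ID to see content in the same physical place. Here’s a simplified Kotlin sketch of that host/resolve flow against the ARCore Cloud Anchors API; rendering and session plumbing are omitted, and the per-frame polling shown is one of several ways to structure it.

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Anchor.CloudAnchorState
import com.google.ar.core.Config
import com.google.ar.core.Session

// Enable Cloud Anchors on an existing ARCore session.
fun enableCloudAnchors(session: Session) {
    val config = Config(session).apply { cloudAnchorMode = Config.CloudAnchorMode.ENABLED }
    session.configure(config)
}

// Begin hosting a local anchor so other devices can find it at the same spot.
fun beginHosting(session: Session, localAnchor: Anchor): Anchor =
    session.hostCloudAnchor(localAnchor)

// Call once per frame (after session.update()) until hosting finishes.
// Returns the shareable cloud anchor ID on success, null while in progress.
fun pollHosting(hosted: Anchor): String? = when (hosted.cloudAnchorState) {
    CloudAnchorState.SUCCESS -> hosted.cloudAnchorId
    CloudAnchorState.TASK_IN_PROGRESS -> null
    else -> error("Hosting failed: ${hosted.cloudAnchorState}")
}

// A second device re-localizes the same content from the shared ID.
fun resolve(session: Session, cloudAnchorId: String): Anchor =
    session.resolveCloudAnchor(cloudAnchorId)
```

The “Earth” part layers persistence on top, so anchors can outlive a single session. That persistence is what makes the use cases below possible.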
In other words, Cloud Anchors could engender a user-generated component of visual search. It could have social, educational and serendipitous use cases such as digital scavenger hunts and hidden notes for friends. But a local business reviews use case could also develop.
That last part is most aligned with Google’s DNA, as it has become a local search powerhouse with lots of revenue to show for it. That ambition has driven Google My Business, which collects data directly from local businesses to reduce reliance on the Yelps of the world. Data is the new oil, and all that.
Visual SEO
Back to where this could go next, we’ve long speculated that Google Lens could engender a new flavor of SEO that focuses on AR-based visual experiences. If a visual front-end resonates with Gen-Z and phases into ubiquity, it could compel businesses to optimize local listings for that UX.
Again, much of the data that populates these experiences will flow from GMB, so there isn’t much extra work required in this prospective branch of SEO. But certain types of data (think: food images) could shine in visual search and let local businesses stand out.
If nothing else, a visual front-end could reinforce the existing reasons to ensure local listings accuracy — a large branch of the local search world. You don’t want your business details to be wrong when someone’s standing in front of your store pointing a phone at it, ready to transact.
Of course, the SEO angle is speculative. Outcomes will hinge on the wild card that is user traction. If the use case falls flat, this won’t be a channel that local businesses need to worry about. Though we think visual search is a potential AR killer app, it remains to be seen whether people will use it en masse.
That’s why Google continues to tread carefully into AR and visual search. It jumped into VR too quickly, then deprecated the Daydream platform. AR is much closer to search, but Google needs to feel out user behavior and optimal UX. Then, just like search itself, monetization could follow.