One of our top picks for an augmented reality killer app is visual search. Its utility and frequency mirror those of web search, and it hits several marks for potential killer-app status. It’s also monetizable, and Google is highly motivated to make it happen.

But first, let’s step back and explain what we mean by “visual search,” as it can take a number of forms, such as Google image search. What we’re talking about here is pointing your phone at an object to get information about it, à la Google Lens. It’s just one of many developing flavors of AR.

Google is keen on visual search as a complement to its core search business. That includes building an “Internet of Places,” accessible through a visual front end like Google Lens. Like voice search before it, visual search boosts Google’s search volume by adding new inputs and modes.

“One way we think about Lens is indexing the physical world — billions of places and products … much like search indexes billions of pages on the web,” said Google’s Aparna Chennapragada at Google I/O. “Sometimes things you’re interested in are difficult to describe in a search box.”

Search What You See

New Lens features announced last month include real-time language translation when you point your phone at signage (think public transit). There’s also the ability to calculate restaurant tips, similar to Snap’s new feature, and to get more information on restaurant menu items (see above).

This creates local search and discovery use cases for Google Lens. It already recognizes storefronts, using Google Street View imagery for object recognition. Now the experience gets more granular, letting users search for specific menu items using Google My Business data.

“To pull this off, Lens first has to identify all the dishes on the menu, looking for things like the font, style, size, and color to differentiate dishes from descriptions,” said Chennapragada. “Next, it matches the dish names with relevant photos and reviews for that restaurant in Google Maps.”
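That description boils down to two stages: classify menu text blocks by typographic cues, then match dish names against existing place data. Purely as an illustrative sketch, not Google’s actual pipeline, here is how those two stages might look in Python; the `TextBlock` fields and the mock review index are hypothetical stand-ins:

```python
from dataclasses import dataclass
from difflib import get_close_matches


@dataclass
class TextBlock:
    """One OCR'd run of menu text plus typographic cues (hypothetical fields)."""
    text: str
    font_size: float
    is_bold: bool


def extract_dishes(blocks: list[TextBlock]) -> list[str]:
    """Stage 1: treat larger or bold blocks as dish names,
    smaller body text as descriptions."""
    if not blocks:
        return []
    body_size = min(b.font_size for b in blocks)
    return [b.text for b in blocks if b.is_bold or b.font_size > 1.2 * body_size]


def match_reviews(dishes, review_index):
    """Stage 2: fuzzy-match detected dish names to a place's photo/review
    index (a stand-in for Google Maps / Google My Business data)."""
    matched = {}
    for dish in dishes:
        hit = get_close_matches(dish.lower(), list(review_index), n=1, cutoff=0.6)
        if hit:
            matched[dish] = review_index[hit[0]]
    return matched


menu = [
    TextBlock("Margherita Pizza", font_size=14.0, is_bold=True),
    TextBlock("San Marzano tomatoes, fresh basil, mozzarella", font_size=10.0, is_bold=False),
]
index = {"margherita pizza": ["Crisp crust", "Great value"]}
print(match_reviews(extract_dishes(menu), index))
# -> {'Margherita Pizza': ['Crisp crust', 'Great value']}
```

The fuzzy matching reflects the second step Chennapragada describes: OCR’d dish names rarely match database entries exactly, so approximate matching bridges the gap.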

Other use cases, like language translation, likewise tap into Google’s assets and knowledge graph: “What you’re seeing here is text-to-speech, computer vision, the power of translate, and 20 years of language understanding from search, all coming together,” said Chennapragada.
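In code terms, that quote describes little more than function composition across existing capabilities. As a toy sketch only, with every function a hypothetical stub rather than a real Google API:

```python
def recognize_text(image_bytes: bytes) -> str:
    """Computer-vision / OCR stage (stubbed)."""
    return "Salida"  # pretend the camera sees a Spanish exit sign


def translate(text: str, target: str = "en") -> str:
    """Translation stage (stubbed with a toy lookup table)."""
    return {"Salida": "Exit"}.get(text, text)


def speak(text: str) -> None:
    """Text-to-speech stage (stubbed as console output)."""
    print(f"[TTS] {text}")


def translate_signage(image_bytes: bytes) -> None:
    # The feature is a chain: see -> translate -> speak.
    speak(translate(recognize_text(image_bytes)))


translate_signage(b"\x00")  # -> [TTS] Exit
```

The point is the architecture: each stage is an existing Google asset, and Lens’ job is to chain them behind the camera.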

Front Runner

These existing assets, along with its AR motivations, make Google a front runner in the race to deploy visual search. But others will contend: Pinterest is making a logical visual search play, while Snapchat joins the race through a partnership with Amazon to identify products you encounter.

Though these visual search challengers could shine in niche use cases such as fashion items, Google will likely be the best all-around visual search utility. It has the deepest tech stack and the substance (its knowledge graph) to be useful beyond the mere novelty of identifying things visually.

The name of the game now is getting users to adopt it. Google Lens won’t be a silver bullet; it will do well in a few areas where Google is directing users, such as identifying pets and flowers. But it will really shine in product search, which happens to be where monetization enters the picture.

But the first step is to accelerate adoption, and Google has shown this is a priority. Over the past year, it has taken several steps to place Google Lens front and center in users’ search experiences. The goal is to acclimate users to this new way of searching and build the habit.

AR Training Wheels

This acclimation process includes last fall’s addition of a Google Lens button to Google’s iOS app, joining voice search on the search bar (see image above). And last month’s I/O event included several moves to “incubate” Google Lens in search so people can discover it more easily.

“We’re excited to bring the camera to search, adding a new dimension to your search results,” said Chennapragada on stage. “With computer vision and AR, the camera in our hands is turning into a powerful visual tool to help you understand the world around you.”

And it’s already working: according to consumer survey data from our research arm ARtillery Intelligence and Thrive Analytics, 24 percent of AR users already engage in visual search. We believe that figure will grow as Google pushes Lens in the ways outlined above.

Panning back, visual search aligns with other initiatives already underway at Google, such as voice search. These are part of a longstanding smartphone-era shift from the “10 blue links” paradigm toward answering questions and solving problems directly, as with the knowledge panel.

This has been a survival imperative for Google, whose position at the front door of the web was displaced by the app-heavy paradigm of the smartphone era. It wants to counterbalance the resulting search-volume attrition with AI-fueled mobile tools like voice and visual search.

“It all begins with our mission to organize the world’s information and make it universally accessible and useful,” said Google CEO Sundar Pichai during the I/O keynote. “Today, our mission feels as relevant as ever. But the way we approach it is constantly evolving. We’re moving from a company that helps you find answers to a company that helps you get things done.”


For deeper XR data and intelligence, join ARtillery PRO and subscribe to the free AR Insider Weekly newsletter. 

Disclosure: AR Insider has no financial stake in the companies mentioned in this post, nor has it received payment for its production. Our disclosure and ethics policy can be seen here.

Header image credit: Google