XR Talks is a series that features the best presentations and educational videos from the XR universe. It includes embedded video, as well as narrative analysis and top takeaways. Speakers’ opinions are their own.
Among tech giants investing in AR, Google is at the top of the list. Its flavors of AR, such as visual search and AR mapping, tie directly to its core search business. We already knew that, but Google validated its direction by doubling down on AR in its I/O keynote this week (video below).
In fact, AR not only supports Google’s search business but is, in a small way, driving it. Panning back, AR aligns with other initiatives like voice search and the knowledge graph. These bring Google from the “ten blue links” paradigm to answering questions and solving problems directly.
“It all begins with our mission to organize the world’s information and make it universally accessible and useful,” said Google CEO Sundar Pichai on visual search. “Today, our mission feels as relevant as ever. But the way we approach it is constantly evolving. We’re moving from a company that helps you find answers to a company that helps you get things done.”
Incubating AR
Google’s interlinked (excuse the pun) AR efforts accomplish these goals in a few ways, many of which were advanced at I/O this week. We’ll examine some of those below. But first to level set, what does the range of Google’s AR efforts look like? Here’s a quick list for easy reference.
— ARCore (successor to Tango)
— Playground and Playmoji (formerly AR Stickers)
— Google Maps AR Navigation (VPS)
— AR Visualization in Search (New)
— Visual Search (Google Lens)
We’ll zero in on the last two because that’s where Google’s I/O keynote focused. One common theme, as we examined earlier this week, is that Google is accelerating these nascent AR modalities by “incubating” them within the already established institution that is search.
“We’re excited to bring the camera to search, adding a new dimension to your search results,” said Google’s Aparna Chennapragada on stage. “With computer vision and AR, the camera in our hands is turning into a powerful visual tool to help you understand the world around you.”
Visualizing Search
Starting with AR visualization, search results will increasingly feature clickable objects that open related 3D models (glTF). This can be seen as the counterpart to Apple’s Quick Look, and it will shine for physically complex topics where visualization is additive.
“Say you’re studying human anatomy. Now when you search for something like muscle flexion, you can see a 3D model right from the search results,” said Chennapragada. “It’s one thing to read about flexion, but seeing it in action in front of you while studying it…very handy.”
Taking it a step further, the same 3D models can be overlaid on real-world scenes. This is where AR really comes in. It may be more of a novelty in some cases, but there will be product-based search results that benefit from real-world visualization and are highly monetizable.
“Say you’re shopping for a new pair of shoes,” said Chennapragada. “With New Balance, you can look at shoes up close from different angles directly from search. That way, you get a much better sense for what the grip looks like on the sole, or how they match the rest of your clothes.”
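For developers who want to experiment with this kind of 3D-to-AR handoff outside of search results, Google exposes Scene Viewer on ARCore-capable Android phones. Below is a minimal Kotlin sketch, not how search itself is wired up; the function name and model URL are placeholders, and it assumes a hosted .glb or .gltf asset.

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri

// Minimal sketch: hand a hosted glTF/GLB model to Google's Scene Viewer, which
// renders it in AR on ARCore-capable Android devices. The function name and
// model URL are placeholders for this example.
fun launchSceneViewerAr(activity: Activity, modelUrl: String) {
    val sceneViewerUri = Uri.parse("https://arvr.google.com/scene-viewer/1.0")
        .buildUpon()
        .appendQueryParameter("file", modelUrl)  // link to the .glb / .gltf asset
        .appendQueryParameter("mode", "ar_only") // jump straight to the AR view
        .build()

    val intent = Intent(Intent.ACTION_VIEW).apply {
        data = sceneViewerUri
        // Scene Viewer ships inside the Google app on supported devices
        setPackage("com.google.android.googlequicksearchbox")
    }
    activity.startActivity(intent)
}
```

On devices without an app to handle the intent, startActivity will fail, so a production version would first check that a handler resolves.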
The Camera is the New Search Box
Going beyond AR product visualization, visual search is now integrated more deeply into search. This means easier ways to activate Google Lens to identify real-world objects. Google is highly motivated to bring Lens to more people as a corollary to its core search business.
“One way we think about Lens is we’re indexing the physical world — billions of places and products… much like search indexes the billions of pages on the web,” said Chennapragada. “Sometimes the things you’re interested in are difficult to describe in a search box.”
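Lens itself isn’t a public API, but the flavor of that “indexing the physical world” idea is easy to approximate with on-device image labeling from Google’s ML Kit. The Kotlin sketch below is an illustrative stand-in, not the Lens pipeline; labelFrame and the bitmap input are assumptions for the example.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// Sketch: label the contents of a camera frame on-device with ML Kit. This is
// a public stand-in for illustration, not the Lens pipeline itself, but it
// shows the basic shape of "point the camera at something, get ranked labels".
fun labelFrame(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, 0) // 0 = no rotation applied
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)

    labeler.process(image)
        .addOnSuccessListener { labels ->
            for (label in labels) {
                println("${label.text} (confidence ${"%.2f".format(label.confidence)})")
            }
        }
        .addOnFailureListener { e -> println("Labeling failed: $e") }
}
```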
New Lens features include utility-oriented things like real-time language translation when you point your phone at signage (think: public transit). There’s also the ability to calculate restaurant tips (similar to Snap’s new feature) and to get more information on restaurant menu items.
“To pull this off, Lens first has to identify all the dishes on the menu, looking for things like the font, style, size and color to differentiate dishes from descriptions,” said Chennapragada. “Next, it matches the dish names with relevant photos and reviews for that restaurant in Google Maps.”
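That first extraction step, whether it’s dish names on a menu or words on a sign, maps roughly to on-device text recognition. Here is a minimal Kotlin sketch using ML Kit’s text recognizer; it is not the Lens pipeline, the layout analysis and Maps matching Chennapragada describes are not shown, and extractMenuText is a hypothetical helper name.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Sketch: pull text blocks out of a photographed menu or sign on-device.
// Lens layers font/size/color cues and Google Maps data on top of a step
// like this; only the raw extraction is shown here.
fun extractMenuText(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, 0)
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

    recognizer.process(image)
        .addOnSuccessListener { result ->
            for (block in result.textBlocks) {
                // Each block is a run of text with its own bounding box,
                // a rough proxy for one dish name or description.
                println("Block at ${block.boundingBox}: ${block.text}")
            }
        }
        .addOnFailureListener { e -> println("Text recognition failed: $e") }
}
```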
20 Years of Search
Speaking of food, Google Lens will bring recipes to life in Bon Appétit magazine with animated cooking instructions overlaid on the page. Other use cases will require similar content partnerships, which already include NASA, Samsung, Target, Volvo, Wayfair, and others.
All of the above will join Google’s other AR utilities like VPS walking directions, now available on all Pixel phones. But the underlying thread is that most of Google’s AR and visual search efforts will launch from, and map to, its established search products, per the incubation play.
In fact, use cases like language translation tie together voice, computer vision, machine learning, and the knowledge graph, coalescing them in a visual search front end — an intuitive tool that will be a “lighthouse” example for the many things Google Lens grows into.
“What you’re seeing here is text-to-speech, computer vision, the power of translate and 20 years of language understanding from search, all coming together,” said Chennapragada.
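As an illustration of just the translate piece of that stack, here is a minimal on-device sketch using ML Kit’s translation API. This is a public developer API, not the Lens pipeline, and the hard-coded Spanish-to-English pair is an assumption for the example; Lens detects the source language automatically.

```kotlin
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Sketch: translate a string of recognized sign text on-device. The
// Spanish-to-English pair is hard-coded here for illustration only.
fun translateSignText(signText: String) {
    val options = TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.SPANISH)
        .setTargetLanguage(TranslateLanguage.ENGLISH)
        .build()
    val translator = Translation.getClient(options)

    // The language model downloads once; after that, translation runs offline.
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate(signText)
                .addOnSuccessListener { translated -> println(translated) }
                .addOnFailureListener { e -> println("Translation failed: $e") }
        }
        .addOnFailureListener { e -> println("Model download failed: $e") }
}
```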
See the keynote below, cued to start at Chennapragada’s segment.
For deeper XR data and intelligence, join ARtillery PRO and subscribe to the free AR Insider Weekly newsletter.
Disclosure: AR Insider has no financial stake in the companies mentioned in this post, nor received payment for its production. Disclosure and ethics policy can be seen here.
Header image credit: Google