Last week, Google teased its latest AR glasses – a second swing at the technology. The first was of course the ill-fated Google Glass which, in fairness, had a fruitful second life in the enterprise. Macro factors have changed, leading Google back to the consumer AR well.
One of those factors is the underlying technology. Optical and display systems are closer (but not quite there) to fitting into glasses that approach consumer viability. Google acquired North, maker of the Focals smart glasses, in 2020 to accelerate its path toward that goal, and now we know why.
But the other factor that’s different about Google’s latest AR tease is focus. It learned the hard way in the Google Glass era that the proposed use cases have to be deliberate and refined. There needs to be a single good answer to the question, “why do I need this thing on my face?”
Common Qualities
This all leads up to the concept of killer apps. The term is thrown around loosely for any use case with some degree of traction, but real killer apps share a few common qualities: wide-scale applicability, inherently frequent usage, and major addressable markets.
Another common misconception about killer apps is that they have to be sexy. In fact, most killer apps are decidedly mundane. Take the web, for example: its killer apps have settled into a comfortable set of utilities like shopping, weather, email, search, and social networking.
That brings us back to Google’s AR glasses play. Its approach in teasing the device at the I/O conference last week was to lean into one focused use case that could be its killer app. That use case is one we’ve speculated about in the past: real-time foreign-language translation.
It doesn’t get much more mundane than that… and that’s precisely what could make it successful. Going back to the qualities listed above, it’s a utility with lasting value (as opposed to fleeting novelty) that solves a pain point for a massive addressable market.
That market includes the global travel industry. It also includes foreign-language learning, as well as domestic encounters with service professionals for whom English is a second language. I’m living that last scenario during a current home renovation, with valued craftsmen in my house daily.
Captions for the Physical World
Panning back, what Google teased with its AR-fueled foreign-language translation could represent something broader: captions for the physical world. That starts with language but could expand into other kinds of identifying overlays for anything that benefits from “translation.”
Taking that term broadly, it includes everything from business information about storefronts, to the calories in a menu item, to where to buy a jacket you just saw on the street. And it all traces back to the next, visual era of Google’s mission to “organize the world’s information.”
This invokes our “follow the money” exercise that triangulates tech giants’ AR moves based on their core businesses. Not only is Google motivated to future-proof its massive search business – and all of the above is one way to do that – but it has a competitive edge in doing so.
For example, AR-based foreign-language translation taps into Google Translate. Google Lens taps into Google’s years of indexing images in its knowledge graph. And its Live View AR navigation utilizes Street View imagery to localize AR devices. No one else has these assets.
So for Google, AR success will live in the Venn diagram between killer apps and competency. That will mean some type of knowledge layer for the physical world. Expect similar from Meta (social layer), Amazon (commerce layer), and Microsoft (productivity layer) to name a few.
That brings up another misconception about any emerging technology’s killer app: the singular framing. As with the web, there will be several AR killer apps. But given AR’s practicality headwinds – as with anything proposed to go on one’s face – they’ll take longer to germinate.