This article features the latest episode of The AR Show. Through a new collaboration, episode coverage now joins AR Insider’s editorial flow, including narrative insights and audio. See past and future episodes here or subscribe.
AR Insider’s ongoing coverage of The AR Show episodes is an exercise in deconstructing and synthesizing the musings of spatial computing’s top minds. The focus of these articles is usually the guest’s commentary, as that’s most fitting to a narrative format. But what about the host?
Jason McDowall brings us fresh episodes every week, applying diligence to production quality (we know from being on the show), narrative development, and artful interviewing. But once per year, he turns the mic around on himself for a monologue on his own observations and projections.
In the latest episode (audio below) we learn why McDowall asks such adept questions every week. With a résumé that ranges from the Air Force to Salesforce, his career is a veritable cross-training routine that’s primed him for his role in AR at Ostendo… not to mention podcast host.
McDowall’s insights are prefaced by a few burning questions: Why are AR glasses inevitable? Who are the top players? What progress did we see in 2019? What hurdles remain? What does mobile AR teach us? And what can we expect in 2020? So there’s a lot to unpack.
The first question requires first asking where we are in spatial computing’s lifecycle. And more broadly, where does AR fit in the overall timeline of computing’s evolutionary path? This historical lens can often provide insight into how platform shifts play out, and what we can expect.
“Over the last few decades, we’ve seen the PC and the smartphone, combined with the internet, have a profound and far-reaching impact on our lives. I think smart glasses are the next step in that evolution of personal information technology. One of the biggest challenges we face in personal computing isn’t how to add additional camera lenses to our phones, but how to evolve computers to better conform to how we as humans interact with each other and our environments. We are fundamentally visual creatures who think in three dimensions and live in a physical world […] As Kevin Kelly penned in his article entitled Mirrorworld in 2019, we are entering the photonic era, an era shaped by light.”
This evolutionary perspective is logical in a macro sense. Drilling down into micro factors and practical realities, it will be things like tangible time savings that drive AR’s actual adoption. AR’s utilitarian benefits — both enterprise and consumer — will be the real driver for its traction.
“Smart glasses will improve the time between our desire to know something or to do something until the action is complete, and completed correctly, by more than 10x over today’s computers and phones. I think this will be the primary driver for people to adopt glasses when they become good enough. Rather than having to pull your phone out of your pocket, enter your passcode, search for some info, consume it and then act on it, imagine knowing the answer to the question on the tip of your brain the moment you touch your phone. That speed-up is driven by the device being hands-free, the display always visible, sensors that see and understand the environment around you, and the ability to precisely overlay the digital on top of the physical world.”
That covers the “demand side.” On the supply side, there are likewise powerful forces driving AR’s development and deployment. There are tens of billions of dollars being spent collectively by tech’s Big-5 and beyond. This “follow the money” exercise is a key AR confidence signal.
“Because augmented reality smart glasses represent the next transformational shift in personal computing, a lot of major companies are standing up and taking notice. When I consider that the market capitalization of the top five incumbent personal computing companies is four and a half trillion dollars, it becomes more clear why so many are willing to spend billions on the hope of maintaining or becoming the next major tech empire. Apple, Microsoft, Facebook, Google, Snap and Qualcomm are all making significant investments.”
As just one example, Facebook has increased its annual R&D spend by $10 billion between 2014, when it bought Oculus, and the present. Much of that is likely going to AR & VR. In the interest of time and brevity, we’ll refer you to McDowall’s calculations, which we separately examined recently.
Other signals abound if you consider all the AR action in 2019. We saw the launch of HoloLens 2, Spectacles 3, and lots of activity from ecosystem players. But despite that progress, we’re still a long way from the spatial future we all envision, and we haven’t seen the worst of the shakeout.
For example, struggles that counterbalance 2019’s progress include Microsoft’s challenge in scaling up HoloLens 2 production. Then there’s Magic Leap: its ~$3 billion in funding is often criticized, but it’s far less than Facebook’s R&D budget quantified above. Could it run low on cash?
But beyond broad shortcomings, McDowall brackets three specific areas where AR needs to develop. The first is comfort, quality & visuals. The second is situational awareness. And the third is social acceptance. The first point is where McDowall gets to flex his knowledge of optics.
“So much of the systems’ volume, weight and heat are driven by the needs of the display system and the associated combiner optics. And of course, the angular resolution, field of view and having the brightness to be worn outside are also driven by the display system and these combiner optics. As several guests have noted, the likely solution for displays is to abandon LCD-based technology, to forego DLP and to look past OLED. Laser-beam scanning could be the answer, but I’m doubtful. I think the ultimate solution is to bring directly-emissive, full color, inorganic micro-LED displays to market […] The combiner optics for smart glasses also aren’t a solved problem. The job of these optics is to collect and redirect the light from the display into your eyes while allowing you to directly view the real world without distortion, degradation or, ideally, any reduction in your peripheral vision. There is an extremely challenging set of trade-offs here between size, weight, efficiency, transmissivity, field of view, image clarity and uniformity, plus the need to manufacture it at scale and reasonable cost.”
The second challenge brings in factors often discussed in AR-cloud circles. These are software-based challenges such as semantic scene understanding, image persistence & occlusion, and other computational challenges. If these aren’t right, the illusion and experience are broken.
“The system’s understanding of my intentions and the world around me are two different but related challenges. Plus, the system needs to filter down and communicate the relevant information back to me and put it in the right place. There is a massive multi-layered set of software problems here. And the solutions have dependencies on the sensors, on the device, and the system’s ability to compute and communicate the results. So we need some hardware innovation here too.”
That brings us to social acceptance. This is the least-discussed factor, as it’s a bit of an intangible. Technologists and engineers often deal in tangible factors and variables. But AR inherently brings in the fickle worlds of style and culture. Google learned this the hard way with Google Glass.
“The idea is me feeling confident wearing them and interacting with them, as well as you accepting that I’ve always got a camera pointed at you. Solving this is partially about the previous two buckets: the better the system feels and looks, as well as the better the system understands me and the world, the less awkward I will feel wearing them. But assuming we can solve for form and function, I think social acceptance is mostly about familiarity. The better we as a society understand smart glasses, and the more often we see them, the more comfortable we will become around them.”
Lastly, on a positive note, the question on everyone’s mind is the market acceleration and “halo effect” that Apple’s rumored AR glasses could bring. Like us, McDowall doesn’t see that happening in 2020 for several reasons. But it’s building towards something significant.
McDowall also aligns with some of our past speculation in that Apple could iterate its way towards all-day AR glasses and a wearables suite. That could start with something that taps into current consumer comfort, such as an entertainment wearable… then evolve just like the iPhone 1 did.
“While there have been lots of reports and evidence that Apple is investing in AR, I’m highly doubtful that they will announce anything in 2020. The timing isn’t right based on the set of available ingredients for an Apple-quality device. Plus, it doesn’t fit their previous behavior […] Perhaps before an AR device, there’ll be a VR-like device meant for the living room or the office […] When those Apple-branded AR smart glasses do arrive, I imagine they will be a complement to the phone, not a replacement. In fact, I foresee a constellation of devices used for sensing, computing and augmenting your reality […] Just as we’ve seen before, I expect to see an incremental approach of incorporating expanding capabilities on the way to having the AR experiences we all imagine.”
Disclosure: AR Insider has no financial stake in the companies mentioned in this post, nor did it receive payment for its production. Our disclosure and ethics policy can be seen here.