One of our favorite voices in the wide world of spatial computing is Tom Emrich. Also known as the “man from the future,” Emrich is director of product management at Niantic. We’ve also worked closely with him in his role as co-organizer of AWE Nite SF, among other capacities.
Each year, Emrich publishes lessons from the past year and predictions for the year ahead. The former is refreshingly honest in detailing what he got right and wrong in the previous year’s predictions. The look forward is likewise insightful, and fittingly illustrated with generative art.
In this year’s 2023 outlook, Emrich’s predictions align with some of our own, such as the waning of the metaverse hype cycle. He goes a step further in saying that AI will replace the metaverse as the hot topic everyone’s excited about. This is already well underway.
To synthesize and summarize his projections, we’ve excerpted highlights below for AR Insider readers, including our favorite predictions and quotes from Emrich. In fairness to Emrich and the rigor he put into his analysis, we encourage you to check out his full article.
Let’s dive in…
#1: AI takes the Hype Cycle crown as the metaverse begins its slow descent into a metaverse winter
Artificial Intelligence (AI) is set to knock the metaverse down the Hype Cycle curve by taking its place at the peak this year.
Metaverse is defined differently by different people, but for me, the metaverse is not a single game, a virtual world, or an NFT collection, but rather an “aha moment.” It is a realization that the next wave of computing consists of a new stack made up of emerging technologies (including blockchain, AI, IoT, AR, and VR) that will all work together to create a fundamental shift in our relationship with technology. This shift can be summed up in one word: presence.

This great virtual awakening was brought on by mass digital transformation fueled by the global pandemic, which forced organizations and individuals to expedite their use of virtual technologies to survive and, in turn, gave them a solid foundation to see what’s next. It also generated new expectations for our technology to deliver more human connection and better replicate the physical-world experience.
In my 2022 AR Trends post, I opened my report expecting the metaverse to remain at the peak of the Gartner Hype Cycle last year, which it did. I also correctly posited that we would soon see that the metaverse is “more mirage than miracle”: the closer we think we are getting to it, the more we realize there is still a way to go before we hit our final destination. That is not to say that the individual technologies which make up the metaverse, such as AR, are at the same point in the cycle, but rather that the grander vision of the metaverse is going to take a lot to accomplish. It will therefore move into the Trough of Disillusionment (or break out into its own Hype Cycle entirely, just as the Internet of Things (IoT) did in 2016) and continue to evolve behind the scenes before we wake up one day and realize that we have arrived.
Taking its crown will be one of these individual technologies: AI, or more specifically generative AI, which, at the end of 2022, was already beginning to dominate headlines and attention.
#3: Headworn AR gets its “PC moment” as VR headsets add color passthrough AR as a core feature
This will be a huge year for mixed reality (MR) headsets: virtual reality head-mounted displays that double as augmented reality headsets, with color video passthrough added as a core feature.
If you have been watching the AR industry for a while, you’ll know that everyone has been waiting for headworn AR to hit a breakthrough point with consumers. While most of the focus has been on optical see-through AR glasses that look like regular eyewear and can be worn all day, anywhere you go, video passthrough AR is ready to transform every VR head-mounted display into an AR device, and it will take the lead in consumer adoption.
2022 ended with a number of significant milestones in mixed-reality headsets. Meta launched its Meta Quest Pro, Lynx shipped its R-1, and Lenovo launched its ThinkReality VRX for the enterprise. When I walked the CES floor this year, every VR manufacturer had either already debuted a mixed reality device, such as those from TCL, Somnium Space and HTC, or indicated that a mixed reality module was coming, including Pimax, whose high-end, high-resolution Pimax Crystal is set to take on Varjo in offering both AR and VR.
Rumors of Samsung, Google and Apple joining this race continue to flood the tech news, with Bloomberg and other analysts suggesting Apple may debut its mixed reality headset as early as this year.
I call this milestone headworn AR’s “PC moment”: these devices are still relatively bulky and expensive, are largely fixed to a room or indoor location and, among the few who can afford one, we will most likely see only one per household. That being said, they are ready for adoption and are only going to get better in a short amount of time.
What is great about mixed reality devices is that they benefit from the adoption journey virtual reality has been on in the consumer space. The VR market has slowly gained adoption since 2016, when the first set of consumer devices hit the market. Statista estimated that 74 million people were using VR hardware in 2022, with that number getting closer to 100 million this year. Consumers know VR, they know family and friends who have a VR device, and so adding AR as a feature to VR will make something they are familiar with even more powerful. In this way, you could see VR HMDs as the Trojan horse that brings headworn AR to the masses.
#6: Advancements in optics show promise, but see-through AR glasses to replace our smartphones are still a ways out
In my 2020 AR trends post, I wrote: “the optics and photonics community are literally trying to break the laws of physics to give us the components we need to create fashionable all-day AR glasses”. This statement remains true today.
2023 began with a number of major waveguide players showcasing the leaps and bounds they have made in components that will one day enable all-day, everyday AR glasses. Meta Materials and Ant Reality both showcased electrochromic dimming at CES 2023. Electrochromic dimming uses electrically controlled liquid crystals to give greater control over how much of the physical world’s light is blocked.
Advancements in dimming are key to enabling optical see-through AR devices to function properly outdoors or in areas with a lot of light. They can also enable a toggle between AR and VR content.
Today’s waveguides are also brighter, lighter, and offer a wider field of view. Lumus debuted its second-generation Z-Lens, which can deliver a 3,000-nit display at 2K-by-2K resolution in an optical engine that’s 50% smaller. Ant Reality’s “Crossfire” boasts a 120-degree field of view, with a possible first commercial use in a future Nreal device. And Vuzix is hoping to fast-track the creation of AR glasses with its OEM smart glasses platform, Vuzix Ultralite. Ultralite is designed to expedite the manufacturing of a smartphone accessory that weighs 38 grams, is super power-efficient with up to 48 hours of run time on a single charge, and packs an impressive waveguide that displays information from your phone to your eye, hands-free.
Dispelix and VividQ teamed up to debut a major breakthrough in 3D waveguide technology: a waveguide combiner that can accurately display simultaneous, variable-depth 3D content within a user’s environment.
While we can expect many of these advancements to begin to power next-generation connected eyewear, especially smartphone accessories, these big steps forward also underscore just how hard highly functional optical see-through AR glasses are to create. While the optics industry continues to make good progress, we are still a ways out from having everything we need to create all-day, everyday AR smart glasses that can try to replace our smartphones.
#9: Entering its third generation, mobile AR grows up to better deliver on its promise of blending the physical with the digital
Mobile AR is growing up, and it’s entering its third generation.
In 2009, I remember using Layar and Yelp Monocle, when AR ran on early smartphones and was really just an overlay (I mean, it showed me that I could easily get to my nearest Starbucks if I literally walked through a wall; if you know, you know). 2009 was also when I scanned a large AR marker that Robert Downey Jr. sat on, on the cover of Esquire magazine. AR was in its infancy, but even back then it showed promise to bridge the gap between the digital and the physical world. I consider this era Mobile AR 1.0.
Today, of course, mobile AR is much more sophisticated thanks to advancements in computer vision, which enable markerless world tracking, image tracking, face tracking, and more. Mobile AR 2.0, or most of the AR we know today, leverages smarter phones than the previous generation along with technologies that root AR more firmly in our reality. While this has allowed for more immersive experiences on smartphones, the next phase of mobile AR, Mobile AR 3.0, gets much more contextual.
The shift in Mobile AR 3.0 is a significant one. We are moving from AR that can be used on “any place,” “any face,” and “any thing” to AR that requires “this place,” “this face,” and “this thing.” This will be driven by technologies like VPS (Visual Positioning System), semantic understanding, and AI. Mobile AR 3.0 will begin to make good on AR’s promise of blending the digital with the physical, as it makes more use of the physical world as part of the experience. In fact, experiences built on this next generation of mobile AR technology will make the physical world an even more essential part of the experience, so much so that AR will feel scarce and precious: we may only be able to experience it at certain times of day, in certain locations around the world, or on certain objects, and have it change depending on the user. In turn, this will make AR feel more real, more personal, and much more valuable.
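To make the “this place, this thing, this time” idea concrete, here is a minimal sketch of how a Mobile AR 3.0 experience might gate its content on context. Everything in it is hypothetical for illustration: the ARContext fields and the place and object names stand in for what a real VPS and semantic-understanding stack would supply, and none of it reflects any particular platform’s API.

```python
from dataclasses import dataclass
from datetime import datetime, time

# Hypothetical context report; a real stack (VPS localization plus
# semantic understanding) would supply something like this.
@dataclass
class ARContext:
    place_id: str           # VPS-resolved location, far finer than GPS
    recognized_object: str  # what the camera is looking at
    local_time: time

def should_activate(ctx: ARContext) -> bool:
    """Mobile AR 2.0 content runs on any flat surface; Mobile AR 3.0
    content unlocks only when the physical context matches."""
    right_place = ctx.place_id == "fountain-plaza"              # this place
    right_thing = ctx.recognized_object == "fountain"           # this thing
    right_time = time(18, 0) <= ctx.local_time <= time(21, 0)   # dusk only
    return right_place and right_thing and right_time

# Example: a user localized at the right spot, looking at the fountain.
ctx = ARContext("fountain-plaza", "fountain", datetime.now().time())
if should_activate(ctx):
    print("Unlock the site-specific AR scene")
```

Scarcity here is just a boolean gate, but it illustrates why such experiences feel precious: remove any one condition and the content simply does not exist for you.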
This year, keep an eye on platforms as they roll out new tools and features which enable developers to create content that represents the third generation of mobile AR.
#11: Generative AI will play a key role in accelerating AR content creation as it sparks ideas and generates assets
With 2023 being a big year for AI, one of the key trends in this area will be the use of AI in 3D/AR/VR development. In particular, generative AI will play a key role in accelerating AR and VR content creation as it will not only be used to spark new ideas but also accelerate the creation of assets.
One of the major roadblocks in creating AR and VR content is not a lack of AR and VR development tools but rather a lack of 3D assets to create content with. 3D modeling and 3D animation are still highly specialized skills. While you can, and people do, use 2D assets in AR and VR, these are spatial experiences that require 3D content to be truly immersive.
While it’s still early days, we are already seeing how generative AI can be used to create 3D content. OpenAI, NVIDIA and Luma AI have demonstrated the path to using AI to produce 3D models from text prompts. Last year, Luma AI released a tool that can generate 3D-printable models from a text prompt. OpenAI debuted “Point-E,” a system for generating 3D point clouds from complex prompts, which in turn can be converted into mesh models with existing software tools. We may soon be able to generate 3D assets for games and AR/VR content just as easily as we create 2D images today using Midjourney, DALL-E and Stable Diffusion. Democratizing the creation of 3D assets will unblock many in the development of AR and VR content.
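As a rough sketch of how such a text-to-3D pipeline hangs together, the steps below mirror the Point-E flow described above: prompt to point cloud, point cloud to mesh, mesh to an engine-ready asset. Every function here is a hypothetical placeholder, not the actual Point-E or Luma AI API.

```python
# Hypothetical text-to-3D pipeline; each function is a placeholder
# to be swapped for a real model or tool.

Point = tuple[float, float, float]

def text_to_point_cloud(prompt: str) -> list[Point]:
    """Stand-in for a generative model such as Point-E."""
    raise NotImplementedError("swap in a real text-to-3D model")

def point_cloud_to_mesh(points: list[Point]):
    """Stand-in for surface reconstruction with existing 3D tools."""
    raise NotImplementedError("swap in Poisson or marching-cubes meshing")

def export_for_ar(mesh, path: str) -> None:
    """Stand-in for export to a runtime format such as glTF or USDZ."""
    raise NotImplementedError("swap in an exporter")

def generate_asset(prompt: str, out_path: str) -> None:
    points = text_to_point_cloud(prompt)  # 1. geometry from text
    mesh = point_cloud_to_mesh(points)    # 2. points to mesh
    export_for_ar(mesh, out_path)         # 3. package for an AR engine

# generate_asset("a low-poly park bench", "bench.glb")
```

The point is the shape of the pipeline, not any one model: as each stage matures, the whole path from prompt to AR-ready asset gets shorter.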
But the use of generative AI to assist in the development of AR content goes beyond assets. Midjourney can be used today to generate storyboards and character concepts, ChatGPT can generate narratives and copy, and DALL-E can generate textures for 3D models, to name just a few examples. Generating AI-enabled avatars is another way this trend will materialize, as we are seeing with the likes of Inworld.
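As one small, concrete example: generating a texture candidate with DALL-E through the openai Python package (the v0.x-era API, where image generation is exposed as openai.Image.create). The prompt and the “seamless” hint are our own illustration, not an official texturing workflow, and results typically need cleanup in a texturing tool.

```python
import openai  # pip install openai (v0.x-era API shown)

openai.api_key = "YOUR_API_KEY"  # replace with a real key

# Ask DALL-E for a tileable texture candidate for a 3D model.
# "Seamless" is a prompt hint, not a guaranteed property.
response = openai.Image.create(
    prompt="seamless tileable texture of weathered bronze, flat lighting",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])  # URL of the generated image
```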
Within this sphere, there is also growing use of neural radiance fields, or NeRFs, a technology that is revolutionizing the representation of 3D spaces. NeRF is a type of AI technique used to generate a 3D scene from a limited number of input images; it essentially turns 2D photos into 3D scenes. Last year, NVIDIA debuted a new NeRF technique which the company claims is the fastest to date, needing only seconds to train and to generate a 3D scene. We also saw Polycam and Luma AI launch NeRF capture as a feature of their popular scanning apps.
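To give a flavor of what is under the hood, NeRF renders each pixel by integrating predicted color and density along a camera ray. Below is a minimal NumPy sketch of the discrete volume-rendering (alpha-compositing) step from the original NeRF paper; the learned network that predicts density and color at each sample is omitted, and the toy inputs are made up.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite N samples along one camera ray into a pixel color.

    sigmas: (N,)   volume densities predicted at samples along the ray
    colors: (N, 3) RGB values predicted at those samples
    deltas: (N,)   distances between adjacent samples
    """
    # Opacity of each sample: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    # Transmittance-weighted sum of sample colors gives the pixel
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: 4 samples through a reddish patch of volume
sigmas = np.array([0.0, 0.5, 1.5, 0.2])
colors = np.array([[0.0, 0.0, 0.0],
                   [0.9, 0.2, 0.2],
                   [0.8, 0.1, 0.1],
                   [0.5, 0.5, 0.5]])
deltas = np.full(4, 0.25)
print(render_ray(sigmas, colors, deltas))
```

Training a NeRF is then a matter of optimizing the network so that rays rendered this way reproduce the input photos, which is why a handful of 2D images can yield a coherent 3D scene.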
Keep an eye on the AI space, which is rapidly expanding and evolving, as it plays a key role in augmented reality.
So there you have it… just five of the top trends Emrich breaks down for us. See his article for the full effect, and attend AWE Nite SF’s event on February 2, where he’ll break down his insights live.