One of our favorite voices in the wide world of spatial computing is Tom Emrich. Also known as the “man from the future,” Emrich is director of product management for AR Platforms at Niantic. We’ve worked closely with Tom in the past as a co-organizer of AWE Nite SF.
Each year, Emrich publishes his lessons for the previous year, and predictions for the year ahead. The former is refreshingly honest in detailing what he got right and wrong in the previous year’s predictions. The look forward is likewise insightful and fittingly illustrated with generative art.
In this year’s 2024 outlook, Emrich’s predictions align with some of our own, such as AI’s role in a wearable tech renaissance. With smart glasses, for example, value will lie not in a graphically robust UX but in the relevance and personalization of the information being delivered.
So to synthesize and summarize his projections, we’ve excerpted highlights below for AR Insider readers, including our favorite predictions and quotes from Emrich. And in fairness to Emrich and the rigor he put into his analysis, we encourage you to check out his full article.
Let’s dive in…
#3 Virtual Reality HMDs are dead, long live Mixed Reality headsets
2024 will be remembered as the year we said goodbye to virtual reality head-mounted displays. Taking their place are mixed reality headsets capable of both virtual reality and augmented reality. This shift began in 2023 with the launch of Meta Quest 3 and the debut of Apple’s Vision Pro. But the category of mixed reality headsets will really start to come together in 2024 when Apple finally ships its device in February and other tech giants make their headsets available, including those expected from Samsung/Google, ByteDance, and Oppo, as well as devices from up-and-coming players like Xreal.
Mixed reality may prove to be the missing piece needed to accelerate the slow but steadily adopted VR device market. Giving users the ability to see their space and the people around them while wearing the headset not only opens new types of applications but also addresses one of the biggest concerns about VR: isolation. Something as small as starting your headset experience in MR makes the overall experience feel less claustrophobic, and being able to see people in the room with you makes the headset experience more social and safer.
While this will be a big year for mixed reality, it is still early days for this category. Today’s mixed reality category reminds me very much of the early days of the PC: clunky and expensive, but capable enough to enter the homes of early adopters, with enough use cases to enrich the lives of many family members. Like the PC, this category of device will continue to innovate, especially with improvements in comfort and design, display and visual fidelity, and interaction and tracking. We will also see changes in cost and a growing list of applications, which will aid wider adoption.
We may also get a sneak peek at what’s to come this year as details and rumors surface about the successors to the Meta Quest 3 and Vision Pro. The focus for Quest 4 will most likely be on eye-tracking and resolution to compete with Apple, while the focus for Vision Pro 2 will be all about price to better compete with Meta.
#5 Multi-modal AI triggers a wearable tech renaissance fueled by virtual assistants that want to make sense of the world
The wacky and weird world of wearable technology, which started to get us to wear sensors on various parts of our bodies over a decade ago, will see a return this year. While iterative improvements in hardware components now allow for smaller, lighter, and more wearable form factors, the main driver of their return is AI.

The GenAI boom of 2023 has ushered in powerful multi-modal LLMs, an advanced type of AI that can understand and generate not just text but also other types of data, such as images, audio, and possibly even video. “Multi-modal” refers to the model’s ability to process and relate information across these different modes or formats. The development of multi-modal LLMs is a significant step forward in AI, opening up new possibilities for more intuitive and comprehensive AI systems that can better understand and interact with the world in ways similar to humans.

Wearables play a major role in this leap as they equip the AI system with eyes and ears through sensors such as cameras and microphones, making our interactions with these systems smarter and more contextual. In addition, the output from these wearables, such as spatial audio systems, projectors, and other displays, gives the AI system a means to communicate back to us without the use of typical screens, which is less intrusive and distracting.
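To make “multi-modal” concrete, here is a minimal Python sketch of the loop such a wearable might run: capture a camera frame, pair it with a transcribed voice question, and send both to a vision-capable LLM. The OpenAI SDK and the gpt-4o model name are stand-in assumptions for illustration, not the stack any of the devices discussed here actually runs.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_about_frame(image_path: str, question: str) -> str:
    """Send one camera frame plus an (already transcribed) spoken question
    to a multi-modal model and return its text reply."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed vision-capable model, for illustration only
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# e.g. a glasses camera snapshot plus a styling question:
# print(ask_about_frame("frame.jpg", "What pants would go with this shirt?"))
```

The same pattern generalizes to audio or video inputs; the point is simply that the wearable supplies the sensor data and the model supplies the contextual answer.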
We started to see these AI-enabled wearables bubble up at the tail end of 2023, with the debut of Pendant, Humane’s Ai Pin, and the next-generation Ray-Ban Meta glasses. All of these devices center their value around AI. We caught a glimpse of just how powerful the combination of smarter virtual assistants and wearables can be with the roll-out of the Ray-Ban Meta multi-modal AI feature to select users, in which Zuckerberg used the eyewear to ask Meta AI to help put together an outfit based on the shirt he was looking at.
I expect that this year we will see AI-enabled wearable devices launch, leveraging models from the likes of OpenAI and Anthropic or smarter versions of Alexa, Google Assistant, and Siri. We may also see new applications and updates to existing wearables like the Apple Watch, which turns ten this year; these may make use of smaller, more efficient LLMs running on the smartphone, offering enhanced privacy and offline capabilities.
#7 Smart glasses that mirror our smartphone screen are the new “Google Cardboard”
2023 saw some positive signals for smart glasses that mirror your smartphone screen. In fact, wearable displays that offer a larger screen and a more comfortable heads-up position saw six-digit adoption numbers last year, according to Xreal. We may see even further uptake this year as consumers who are curious about the Vision Pro and Meta Quest 3 look for cheaper alternatives to get a taste of this next wave of computing. While this will be a win for getting consumers to wear tech on their faces, it could become a “Google Cardboard”-like moment for mixed reality if these consumers are truly expecting a Vision Pro-like experience. Cardboard was a cheap alternative for VR back in the mid-2010s, and while it provided a VR-like experience, it paled in comparison to what was possible with higher-end machines. It arguably helped raise awareness and democratize VR while at the same time setting the category back by under-delivering, since mainstream buyers didn’t yet understand that not all headsets are created equal. That isn’t to say that video glasses don’t have value on their own, and certainly, users who are interested in the monitor-extension use case may be satisfied with the device. These devices will also benefit from a resurgence of 360-degree and 3D content that is sure to be readied for Apple and Meta devices.
#11 AI continues to accelerate augmented reality development
As we look towards 2024, the convergence of AI and augmented reality will continue to accelerate immersive content creation. The advent of generative media, and the rapid evolution of prompt-generated 3D assets and animation in particular, is supercharging the prototyping and development of VR and AR. This is helped by AI-enabled co-pilots, which are fast becoming a necessary companion for new and advanced developers alike and are enabling a wider spectrum of no-code to low-code solutions. These advancements democratize content creation, moving it more into the hands of prosumers, akin to the revolution seen with Adobe Photoshop years ago. I expect we will see further innovation in 3D-generated media as well as further roll-out of co-pilot systems for XR developers.
AI will also help AR filter creation go mainstream. I expect we will see social media platforms follow TikTok’s lead in releasing user tools to create and remix filters and lenses without the need to open a studio tool. This trend is set to bring a new level of personalization and immersion to user-generated content, further blurring the lines between virtual and physical realities.
Simultaneously, the adoption of new scanning technologies is set to revolutionize how the real world is virtualized for developer use. Techniques like MERF (Memory-Efficient Radiance Fields), SMERF (Streamable Memory-Efficient Radiance Fields), NeRF (Neural Radiance Fields), and Gaussian Splatting are making headlines, promising to simplify the creation of detailed 3D models for use in XR. This advancement is not just about creating spaces; it’s about capturing moments and the essence of reality, enabling developers and creators to weave these elements seamlessly into AR experiences. Expect to hear a lot about these technologies and more solutions adopting and innovating with them.
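For readers who want a feel for what these radiance-field techniques actually compute, below is a minimal NumPy sketch of the volume-rendering step they share: density and color samples taken along a camera ray are alpha-composited into a single pixel color. It is a textbook illustration of the published NeRF formulation, not code from any of the systems named above.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one camera ray (the core NeRF render step).

    densities: (N,) volume density sigma_i at each of N samples along the ray
    colors:    (N, 3) RGB color c_i predicted at each sample
    deltas:    (N,) distance between consecutive samples
    Returns the final (3,) pixel color.
    """
    # Opacity contributed by each sample: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance T_i: fraction of light surviving all samples before i
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1] + 1e-10]))
    weights = transmittance * alphas  # per-sample contribution to the pixel
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: 64 samples along a ray passing through a fuzzy red blob
n = 64
densities = 5.0 * np.exp(-((np.linspace(0, 1, n) - 0.5) ** 2) / 0.01)
colors = np.tile([1.0, 0.2, 0.2], (n, 1))
deltas = np.full(n, 1.0 / n)
print(composite_ray(densities, colors, deltas))
```

MERF, SMERF, and Gaussian Splatting each change how the densities and colors are stored and queried so this rendering can run in real time, which is what makes them interesting for XR.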
#17 3D movies and 360-degree video return as sports and entertainment become a big draw for MR headset use
3D movies and 360-degree (and 180-degree) videos are not new; in fact, many would argue they tried to be a thing and failed. But expect a resurgence of 3D and 360 content, mainly due to Apple’s Vision Pro. Apple’s launch event highlighted how Vision Pro is, among many things, Apple’s first “TV,” with a focus on 3D movies, unique viewpoints at sports games, and large floating screens and environments that offer a movie theater-like experience. Immersive media consumption is old hat for VR devices. Meta Quest has a number of apps that offer this, including Netflix and Bigscreen, but it may hit differently on the Vision Pro thanks to its roughly 4K resolution per eye, low-latency eye tracking, and wide field of view. Sports and entertainment is already a key vertical for Apple, which brings to the table its vast media catalog via Apple TV+ and Apple Music, partnerships with the likes of Disney+ and the NBA, and technology acquired from the likes of NextVR.
Over the holidays, we saw first-hand how powerful a draw top-tier entertainment content can be for getting users into the headset, as Swifties found out that they could watch The Eras Tour on Prime VR in the Meta Quest headset. Arguably, Apple is in the best position to bring Taylor Swift-level content to the Vision Pro, which could be one of its killer apps for adoption. In addition, mixed reality brings a new dynamic to sports and entertainment headset offerings. Whereas VR experiences could only connect viewers as avatars, mixed reality viewing gives you all the benefits of immersive viewing while still letting you see and easily communicate with the people in the physical room watching with you. I expect we will see a number of big moves in MR this year from movie studios, record labels, sports franchises, and streaming apps as they explore this new channel to engage audiences.