Call me biased, but I think Spatial Computing + AI is the most compelling battleground in tech today.
Before you scoff, hear me out.
There’s certainly a long list of other wildly exciting frontiers—semiconductors, nuclear power, biology/chemistry, crypto, quantum computing (to name a few): all things that will also ride the AI tidal wave, create immense value, and transform society.
But how many of them will be directly experienced by the average person? On a daily basis and global scale?
How many of them will impact the things that make us most human, such as consuming and making sense of information? Communicating and connecting with others? Telling stories and transferring knowledge?
Sure, these other breakthroughs will play a role. But they’re ‘behind the curtain’ innovations, multiple layers removed from most consumers.
As for Spatial + AI?
This is a direct shift in human-machine interaction, something all of us do, all day every day.
It’s also a direct shift in experience (of both the digital and physical world), and in turn, an evolution of human behavior and culture.
Does it get much more fundamental than that?
The cherry on top is how spatial will accelerate AI. This merger will move machines from probabilistic guessers of the next token/pixel (Large Language Models, aka LLMs), to machines with a complete understanding of our world (Large World Models, aka LWMs).
The result is ‘spatial intelligence’, i.e. machines that can model the world and reason about objects, places, and interactions in 3D space and time.
The ultimate result? True AGI.
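To make that distinction concrete, here’s a toy sketch of the two “contracts” in Swift. It’s purely illustrative—real models are neural networks, not protocols—and every name in it is hypothetical:

```swift
// Purely illustrative: real LLMs/LWMs are neural networks, not Swift
// protocols. This only sketches the difference in what each predicts.

// An LLM's contract: given the tokens so far, guess the next token.
protocol LanguageModel {
    func nextToken(given tokens: [String]) -> String
}

// A hypothetical world model's contract: given a 3D scene and an action,
// predict the scene's next state: objects, places, interactions, time.
struct Object3D {
    var label: String
    var position: (x: Double, y: Double, z: Double)
}

struct WorldState {
    var objects: [Object3D]
    var time: Double
}

protocol WorldModel {
    func nextState(from state: WorldState, action: String) -> WorldState
}
```

Same probabilistic machinery underneath; the difference is that the second contract’s inputs and outputs live in 3D space and time.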
With that backdrop, we’re here to ask ourselves… who will be the winners in the Spatial + AI race?
Here in Part II, we’re exploring the main character of this drama — AR glasses — and the two clear market leaders — Apple and Meta.
I know, I know. There are numerous other companies in hot pursuit of this vision.
Google, Snap, Niantic, to name a few, along with a flurry of Chinese players. But as of today, Apple and Meta are the ‘Vegas favorites’, with the most public-facing progress and the most apparent strategies. They’re also the closest to home (for most).
(Although, if you’re paying attention, you’ll know Google is a meandering juggernaut. See this Project Astra video. Also see Google Maps + 3D, ARCore + Spatial Anchors, and the Waymo Open Dataset Challenge. All things worthy of a future essay).
If you haven’t read Part I of this series, check it out here. It covers the impact of Meta’s big reveal, Orion: the first AR glasses we’ve seen worthy of mainstream adoption.
This created a wave of enthusiasm and hope across a weary ecosystem. It also sparked a new era of very real, very intense competition. The most intense being between Apple and Meta.
The rest of this series will break down this showdown and predict a winner. We’ll do so via a variety of vignettes. Each one will analyze advantages on critical battle fronts, spanning hardware, software, AI, content, developers, distribution, leadership, and philosophy.
This will also give us a blueprint for analyzing future players to come, and for analyzing the industry writ large.
With that, let’s get into Part II, starting with hardware.
AR Hardware
The natural instinct is to give Apple the clear edge here.
From design mastery to economies of scale, Apple dominates in creating sleek, mass-market devices. It’s easy to assume this advantage translates seamlessly into AR.
But I’m not so sure this edge will hold…
Yes, Apple’s hardware expertise is legendary, built on decades of refining mass production through vertically integrated supply chains.
They also excel in key areas like displays, chips, and battery life—critical components of AR devices.
Most importantly, their ability to produce at scale drives down costs. As Meta readily admits, Orion is far too expensive today: it would cost upwards of $10,000 per unit to produce. And while Apple almost certainly has a similar prototype in a lab, bringing that price down without significant technological compromises is going to be a massive challenge.
The best shot at doing so will be serious economies of scale: something Apple does in its sleep, at levels Meta hasn’t even sniffed.
Regardless, I think it’s a mistake to discount Meta’s hardware progress and trajectory.
Their Ray-Ban glasses, built in partnership with Luxottica, are affordable, well-designed, and gaining meaningful traction. Everyone I know who owns them loves them, myself included.
They’ve also shipped millions of Quest VR/MR headsets, giving them valuable experience in developing the right supply chain and refining hardware for mass-market adoption. Even the now-defunct Meta Portal (RIP) taught them hard lessons about manufacturing and consumer preferences.
And let’s not forget the looming, great equalizer: AI + robotics.
Meta’s manufacturing prowess will be further accelerated by advances in robotics, AI-driven design, and simulation technologies. Armed with these innovations, they’re poised to pull off something similar to what Tesla did.
In less than a decade, Tesla went from an underdog with little manufacturing experience to leading the industry, leapfrogging traditional automakers in areas like battery efficiency and production scalability, mostly powered by automation and robotics: the great equalizers.
I predict a similar journey for Meta with AR hardware, propelled by their willingness to innovate and a lack of inertia from legacy operations & products.
But this game won’t be decided by the AR glasses alone… The other critical dimension is a device for offloaded compute.
Meta’s Orion comes with a small puck that runs the heavy workloads which would otherwise wreck the glasses’ battery life, heat, and weight. Most experts agree this will be the standard for consumer AR.
In which case… Apple should have the edge here as well.
The iPhone (and/or MacBooks) could become the de facto “edge computer” for AR, handling heavy workloads for rendering, positional tracking, room mapping, and object recognition.
This is where Apple’s custom silicon will also be a major advantage—year after year, their chips outpace competitors in power and efficiency, making the iPhone a natural candidate for this role.
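To picture the division of labor, here’s a minimal sketch of that offload loop in Swift. Every name in it (SensorFrame, ComputeHost, PhoneHost) is hypothetical—neither Apple nor Meta has published such an API—and a real pipeline would involve wireless transport, compression, and far richer data:

```swift
// A minimal sketch of the offloaded-compute split described above.
// All names here are hypothetical; this is not a shipping API.

import Foundation

// What lightweight glasses could capture locally: camera + IMU data.
struct SensorFrame {
    let timestamp: TimeInterval
    let cameraPixels: [UInt8]      // compressed camera image
    let imuReadings: [Double]      // accelerometer / gyro samples
}

// What comes back from the heavy compute device: a pose and an overlay.
struct DisplayFrame {
    let headPose: [Double]         // 6-DoF pose solved off-device
    let overlayPixels: [UInt8]     // rendered AR content to composite
}

// The "puck" or phone reduces to one role: take sensors, return pixels.
protocol ComputeHost {
    func process(_ frame: SensorFrame) -> DisplayFrame
}

// A stand-in for an iPhone-class host: tracking, mapping, and rendering
// happen here, where the silicon, battery, and thermal headroom live.
struct PhoneHost: ComputeHost {
    func process(_ frame: SensorFrame) -> DisplayFrame {
        // In reality: SLAM-based positional tracking, room mapping,
        // object recognition, then rendering. Stubbed out here.
        let pose = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
        return DisplayFrame(headPose: pose, overlayPixels: frame.cameraPixels)
    }
}

// The glasses' main loop stays thin: sense, offload, display.
func runGlassesLoop(host: ComputeHost, frames: [SensorFrame]) {
    for frame in frames {
        let display = host.process(frame)   // a wireless hop in practice
        print("t=\(frame.timestamp): pose \(display.headPose)")
    }
}

let demoFrames = [SensorFrame(timestamp: 0.016, cameraPixels: [], imuReadings: [0, 0, 9.8])]
runGlassesLoop(host: PhoneHost(), frames: demoFrames)
```

The design point: the glasses stay thin (sensors and display), while the pose-solving, mapping, and rendering live wherever the silicon, battery, and thermal headroom do.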
But here’s where things get tricky: the iPhone is Apple’s crown jewel, its most profitable product, and the backbone of its ecosystem.
Repurposing the iPhone for high-performance AR workloads would require significant changes to its design, battery life, and heat management—potentially undermining its appeal as a general-purpose device.
This is a classic innovator’s dilemma: will Apple risk its cash cow for a future that is still speculative?
History suggests otherwise. Think of Kodak, which clung to film cameras while digital photography surged ahead, or Intel, which delayed its move into mobile chips and GPUs, allowing competitors like ARM and NVIDIA to dominate.
Apple could hedge this risk by introducing a specialized “iPhone Pro++,” catering to AR enthusiasts while preserving its core smartphone market. But even then, they would need to confront operational trade-offs and internal politics to execute this strategy effectively.
Meta, by contrast, faces no such legacy constraints. They can innovate with the offloaded compute model without worrying about cannibalizing an existing product line. Their relative freedom to experiment positions them well to iterate quickly and find solutions that work for consumers, unencumbered by the bureaucratic inertia that often hampers established players.
There’s one last variable that is under-discussed. We’ll tease it out with a question…
Is this really an AR glasses battle? Or… is it a ‘wearable computing’ battle?
Despite my bias (and our industry’s) for AR glasses… it’s hard to see glasses becoming the general-purpose computer (the role the smartphone plays today).
It’s likely going to be a companion device, in a world of many companions: your watch, your earbuds, your phone, and for some, a chip (in your brain, as we explored in this podcast episode).
This next paradigm isn’t about one or two screens sharing a common operating system and a common interface. The touch/cursor interface is dissolving and becoming dynamic. It will be a mixture of audio, light visuals, heavy visuals, all controlled by a variety of inputs: gestures, voice, eyes, and yes… even thought.
In this world, the winner will be the company that can stitch all these things together into a seamless experience, such that they all work in harmony, with a level of contextual awareness. This is the idea of ‘ambient computing’.
Meta has some amazing IP here with their wrist device that reads EMG (electromyography) signals from the muscles in your forearm. This lets you control an interface with subtle finger movements, or even just the intention of a gesture. Meta could certainly make this a smartwatch, either homegrown or via partners. The partner route is the most likely, rinsing & repeating the Luxottica/Ray-Ban success.
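To make “ambient computing” less abstract, here’s a toy sketch of how such inputs might be fused. Everything here is a hypothetical illustration, not a shipping API from either company; a real system would weigh far more signals, continuously:

```swift
// A toy sketch of the 'ambient computing' idea: many input modalities,
// one arbiter that resolves them into a single intent using context.
// All names and weights are hypothetical illustration.

import Foundation

enum Modality { case voice, gesture, gaze, emg }   // emg = neural wristband

// A raw signal from any wearable, with the system's confidence in it.
struct InputEvent {
    let modality: Modality
    let intent: String        // e.g. "select", "dismiss", "open-maps"
    let confidence: Double    // 0...1
}

// Context that should modulate which modality wins (per the essay:
// the interface becomes dynamic, not fixed to touch/cursor).
struct AmbientContext {
    let isNoisy: Bool         // loud room -> distrust voice
    let handsBusy: Bool       // carrying groceries -> distrust gesture
}

// Pick the winning intent: weight each event by context, take the max.
func resolveIntent(events: [InputEvent], context: AmbientContext) -> String? {
    let weighted = events.map { event -> (String, Double) in
        var score = event.confidence
        if context.isNoisy && event.modality == .voice { score *= 0.3 }
        if context.handsBusy && event.modality == .gesture { score *= 0.3 }
        return (event.intent, score)
    }
    return weighted.max(by: { $0.1 < $1.1 })?.0
}

// Example: in a loud cafe with full hands, the EMG path wins.
let events = [
    InputEvent(modality: .voice, intent: "open-maps", confidence: 0.9),
    InputEvent(modality: .emg, intent: "select", confidence: 0.8),
]
let context = AmbientContext(isNoisy: true, handsBusy: true)
print(resolveIntent(events: events, context: context) ?? "no intent")
```

Whoever owns that arbitration layer—the thing deciding which device and modality speaks for you at any given moment—owns the platform.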
But it’s hard to argue against Apple’s strength here: Apple Watch, iPhone, AirPods, etc… the fusion of Apple’s hardware ecosystem will be very hard to beat. Although not impossible…
If Meta can build a compelling ‘spatial/ambient’ operating system, and then deploy it in a similar fashion to Android… then ecosystem innovation abounds.
This just might be the winning strategy: let the companies who excel at wearables duke it out on top of your software platform. This would allow Meta to effectively crowdsource their wearables ecosystem. May the best brand/team win.
The critical risk here is quality control (at which point, an acquisition could bridge the gap).
Final Prediction
The hardware race will be much closer than people think. While I do think Meta will rapidly catch up, and potentially leapfrog (for AR glasses at least), it’d be overly speculative to give Meta the advantage here.
Apple is *the* hardware company of our generation. And I think their ability to make numerous devices sing in harmony is going to yield the ultimate user experience. This, combined with innovations at the chip layer, gives Apple the edge.
That said… don’t get too distracted by the hardware battle. This won’t be where the war is won or lost.
The UI/UX will come down to software, and ultimately, AI… which is where we’ll be going next in Part III…
Editor’s note: This article was written before Google’s recent Android XR announcement, which the author intends to cover in future articles…
Evan Helda is a writer, podcaster, and 8-year spatial computing veteran. A version of this post originally appeared on MediumEnergy.io, a newsletter exploring spatial computing, AI, and being human in the digital age (subscribe here). Opinions expressed here are his, and not necessarily those of his employer.