
Welcome back to our weekly roundup of happenings from the XR and AI realms. Let’s dive in…
The Lede
Real-time visual context from outward-facing cameras and location awareness is pushing AR glasses toward a tipping point. Meta’s “look and ask” feature shows what happens when a wearable understands where you are and what you are looking at. Previously, users had to take a photo and then query it; now the computer vision is live, listening and watching. The glasses can turn any street, storefront, or sign into a searchable interface: glance at a restaurant and get reviews and the menu in real time, or ask for the cheapest downtown parking lot. Google is pursuing the same idea in its Gemini-powered Maps and Live View layers, where the phone camera acts as a query field for the physical world. Meta’s glasses and Google’s mobile system are moving toward the same outcome: ambient intelligence that attaches context to place and makes visual search an everyday experience.
Feeling Spatial
Valve also stepped back into hardware territory this week. Reports from Skarredghost and Road to VR confirm that the company is preparing a headset called the Steam Frame, along with a compact PC called the Steam Cube. The headset runs SteamOS on a Snapdragon-based platform with dual LCD panels at 2160 by 2160 per eye. Valve says it will support wireless streaming of the full Steam library, meaning PC VR experiences without a tether. The Cube is a small-form-factor SteamOS machine that can act as a companion for the desktop and the living room. Pricing is not public, but Valve says the headset will cost less than the Index, with release planned for early 2026. The move gives PC VR developers a clearer hardware target and signals that Valve still sees a future for high-performance VR even as the rest of the industry tilts toward mixed reality.
World Labs and Escape.ai teamed up to transform traditional films into immersive 3-D spaces. Using Escape.ai’s video-intelligence engine to extract key frames and Marble (World Labs’ world-generation API) to build Gaussian-splat-based geometry, the pipeline automatically creates an explorable 3-D environment and embeds the film inside it for a hybrid viewing-and-walking experience. For filmmakers, the collaboration lowers the barrier to immersive companion spaces; for viewers, it turns a film into a place. The effort shows that spatial cinema, film plus 3-D world in real time, is operational and ready to scale.
The AI Desk
ElevenLabs added another piece to the week with the launch of its Iconic Voice Marketplace. The company is formalizing the licensing of AI-generated voices from well-known actors and public figures, including Michael Caine. The system provides a performer-first rights structure with approvals and commercial terms for use in ads, narration, and interactive content. This places synthetic voice work on firmer legal ground and begins to move the industry away from the wild cloning era that caused so much concern among performers. It also gives creators in XR, gaming, and immersive media a controlled way to add recognizable voices to projects without long production cycles, provided the rights holders have opted in. Matthew McConaughey, an investor in ElevenLabs, uses the technology to read his newsletter in Spanish, though his voice is not available for licensing.
Who owns the dreams of an AI machine? Sora 2 raises the question, and the courts are already answering it. I felt the split myself when I made two fifteen-second shorts. One was a scripted sci-fi western built from detailed prompts. The other was a loose request to revisit my childhood that Sora turned into a personal homage to Back to the Future, complete with a glowing portal and parents from another timeline. One felt authored. One felt delivered. Judges in the Anthropic and Getty cases are drawing the same line, treating control as the measure of ownership. That standard is starting to shape the creative economy.
Spatial Audio
For more spatial commentary and insights, check out the AI/XR Podcast, hosted by the author of this column, Charlie Fink, along with Ted Schilowitz, former studio executive and futurist for Paramount and Fox, and Rony Abovitz, founder of Magic Leap. This week’s guests are Lamina 1 co-founders Rebecca Barkin and author Neal Stephenson. You can find it on Spotify, iTunes, and YouTube.
Charlie Fink is an author and futurist focused on spatial computing. See his books here. Spatial Beats contains insights and inputs from Fink’s collaborators, including Paramount Pictures futurist Ted Schilowitz.
