Welcome back to our weekly roundup of happenings from XR and AI realms. Let’s dive in…

The Lede

Apple is accelerating development of AI-focused wearable hardware, including smart glasses, upgraded AirPods, and a wearable pendant. Apple is preparing smart glasses with two cameras and targeting an early 2027 launch, according to reporting cited by UploadVR. One camera would capture photos and video while the other would handle computer vision tasks such as spatial awareness and contextual assistance. The device is expected to include microphones and speakers, run on a custom Apple chipset derived from Apple Watch silicon, and support calls, music, navigation, translation, and multimodal AI features Apple calls Visual Intelligence. Mass production is reportedly scheduled for late 2026, with Apple emphasizing build quality and camera performance to differentiate from competitors.

Feeling Spatial

Meta is preparing to release its first smartwatch later this year, reviving a hardware effort previously shelved during Reality Labs budget cuts. The device, internally called Malibu 2, is expected to include health tracking features and a built-in Meta AI assistant, and could integrate with Meta’s smart glasses. Reality Labs is expanding its wearables lineup even as its VR business shrinks.

VRChat recorded its highest concurrency ever after a Japanese language virtual concert drew more than 156,000 simultaneous users. The event was part of the month-long Sanrio Virtual Festival running from February 8 to March 8, featuring VTubers, musical acts, and themed environments. The peak surpassed a prior New Year’s Eve record of about 148,000 users. Organizers attribute the spike to strong Japanese participation and growing mainstream visibility in that market, where brand activations such as McDonald’s Japan worlds have appeared. Additional performances of the headline concert are scheduled through early March.

Meta is restructuring Horizon by separating its social world platform from Quest VR hardware and shifting Worlds toward mobile devices. Company leadership said the change is intended to expand reach and compete with large-scale mobile platforms while still supporting VR development. The strategy follows years of heavy investment in Reality Labs and internal restructuring, including layoffs and studio closures. Executives said Horizon’s future includes sharing AI-generated 3D content across Meta services without requiring headsets. The company maintains plans for new hardware while repositioning internal apps that historically saw lower usage than third-party experiences.

Specs Exec Splits Snap After Rumored Blow-Up With Spiegel. Snap lost Scott Myers, its senior vice president overseeing the Specs AR glasses program, just as the company prepares to launch its standalone AR smart glasses later this year. TechCrunch reports Myers resigned after a dispute with CEO Evan Spiegel. The departure comes during a strategic push around hardware, including spinning the Specs team into a separate subsidiary to improve focus. Myers joined Snap in 2020 after roles at Apple, SpaceX, and Nokia, and previously described the glasses as a new computing paradigm.

Follow the Money

World Labs, founded by Fei-Fei Li, raised $1 billion to develop spatial intelligence systems known as world models. Investors include Autodesk, Nvidia, AMD, Andreessen Horowitz, Fidelity, Emerson Collective, and Sea. Autodesk alone invested $200 million and will collaborate on integrating world model technology with design software. The company’s first product, Marble, generates persistent, editable 3D environments from text, images, or video for applications including robotics, gaming, visual effects, and simulation. The funding will accelerate the development of these models, which are designed to perceive and interact with three-dimensional environments.

Ricursive Intelligence raised $335 million at a $4 billion valuation within four months of launching. Founded by former Google Brain researchers Anna Goldie and Azalia Mirhoseini, the startup develops AI systems that design computer chips, automating layout and verification tasks that normally take a year and compressing them into hours. Investors include Nvidia, AMD, and Intel. The company’s platform is designed to learn from each chip it produces, improving future designs and potentially accelerating AI progress by advancing the hardware that powers models.

Hook, a New York-based social AI music creation platform, raised $10 million in Series A funding led by Khosla Ventures, bringing total capital to about $16 million. The platform lets users remix songs with AI tools and distribute them across social media while rights holders retain ownership and monetization. Investors in the round include Point72 Ventures, Imaginary Ventures, Waverley Capital, and music industry backers such as DJ KSHMR. Hook has licensing partnerships that provide access to more than twenty million songs and has worked with major labels on campaigns generating over 250 million views.

The AI Desk

Google added generative music to the Gemini app using DeepMind’s Lyria 3 model. Users can create original 30-second tracks by describing a genre, mood, or idea, or by uploading images or video for audio matching. The system automatically generates lyrics, instrumentation, and cover art, and allows control over elements such as tempo and vocal style. Tracks include SynthID watermarking for provenance. The feature is available globally for users 18 and older, with support for multiple languages and higher usage limits for paid tiers.

The Seedance 2.0 Gen AI Video Model Has Lit Up Hollywood. For reasons known only to ByteDance, the company’s newly released video generator can easily be prompted to create copyrighted images and characters. A twenty-second fight scene between Tom Cruise and Brad Pitt nearly broke the internet last week. ByteDance is well aware of the guardrails its competitors, even Chinese ones like Kling, put around their models to prevent deepfakes. So why did it choose to ignore them? Now that Disney and others have sent nastygrams, ByteDance has apologized and promised to apply fixes. The real story is how easily reality can now be manipulated convincingly enough to make us doubt the provenance of any image, even one seen with our own lying eyes.

https://youtu.be/V5AFQaEbHQU

Unity says its upcoming generative AI tools will let developers create full casual games from natural language prompts inside its engine. A beta is planned for March at the Game Developers Conference. CEO Matthew Bromberg said the system understands project context and runtime requirements, then assembles gameplay logic, assets, and structure to produce playable prototypes without coding. This democratization of game engines is not welcomed by all developers, many of whom view generative AI negatively despite its productivity potential.

Spatial Audio

For more spatial commentary & insights, check out the AI/XR Podcast, hosted by the author of this column, Charlie Fink; Ted Schilowitz, former studio executive and futurist for Paramount and Fox; and Rony Abovitz, founder of Magic Leap. This week, our guest is Vicki Dobbs Beck, recently retired head of ILMxLAB. You can find it on Spotify, iTunes, and YouTube.

Charlie Fink is an author and futurist focused on spatial computing. See his books here. Spatial Beats contains insights and inputs from Fink’s collaborators, including Paramount Pictures futurist Ted Schilowitz.