Welcome back to Spatial Beats, where we round up all the top news and happenings from around the spatial computing spectrum, including its deepening fusion with AI and other emerging tech. Let's dive in…

The Lede

Chinese AR startup Rokid has launched a Kickstarter campaign for its new smart glasses, which feature green monochrome displays; the campaign generated over $500,000 within its first 24 hours. Priced at $479 (early-bird tier) or $599 MSRP, the device supports AI voice queries via ChatGPT, music playback, calls, photo/video capture, navigation, teleprompter functionality, and real-time text and voice translation. Last week, HTC unveiled the AI-powered Vive Eagle, a sleek smart glasses model weighing under 49 g and priced around $520. Meta remains the market leader in audio smart glasses, with its $399 Ray-Bans and Oakleys selling over 3 million units in the past two years.

Feeling Spatial

Samsung will introduce Project Moohan, its first XR headset, at a Galaxy Unpacked event on September 29, 2025, with public sales beginning October 13 in Korea, followed by a staggered global rollout. Powered by Qualcomm's Snapdragon XR2+ Gen 2 chip with 16 GB of RAM, the headset looks remarkably like the Vision Pro, runs Android XR, and has Gemini AI built in. Price estimates range from $1,800 to $2,900, placing it below Apple Vision Pro but still in premium territory. As far as I'm concerned, the only interesting thing here is Google's Android XR OS. Samsung and Apple are fighting the last war over fully occluded mixed reality (camera pass-through) headsets, so, near term, they both lose, but there are many more rounds to come in this match.

Snap is exploring outside investments to help bring its Specs AR glasses to market next year. The company has already invested $3 billion into AR over the past decade, yet its core social media business remains unprofitable, prompting CEO Evan Spiegel to say he’s now “ready to ask for some help.” Snap has even discussed the “nuclear option” of spinning off its AR unit into a separate company, though that presents challenges given the shared tech infrastructure. For now, the company maintains it doesn’t strictly require external funding but is evaluating the most efficient path to capitalize on its long-term AR investment.

Red 6 Lands a Whale of an AR Deal With the U.S. Air Force. The DOD awarded a contract to Red 6 to integrate its Airborne Tactical Augmented Reality System (ATARS) into F‑16 fighters over the next 12–18 months. ATARS is a helmet‑mounted AR system that projects realistic adversary aircraft, missiles, and friendly wingmen into the pilot’s field of view while flying. This enables in‑flight, immersive, and scalable training against virtual threats without relying on real “Red Air” aircraft, reducing costs and improving readiness. The program follows prior ATARS deployment on platforms like the T‑38, MC‑130, and RAF’s Hawk T‑2.

Social VR platform Rec Room has laid off around half its staff, reducing headcount to just over 100 employees. The decision stems from an overwhelming influx of low-quality user-generated content, particularly from mobile and console creators, which placed significant review and infrastructure burdens on the team. Co-founders Cameron Brown and Nick Fajt emphasized that the restructuring is a "business necessity" and not a reflection of employee performance. Affected staff will receive three months of pay, six months of health benefits, and may keep their work computers.

Adobe confirmed that it will discontinue its augmented reality design tool, Adobe Aero, on November 6, 2025, across all platforms. Complete user data removal from Adobe’s servers is scheduled for December 16, 2025. Simultaneously, Adobe has been quietly ramping up its generative AI capabilities by integrating Google’s Gemini 2.5 Flash Image model into Adobe Firefly and Adobe Express. This enhancement enables users to generate stylized graphics and then animate, resize, caption, and deploy them.

Meta’s Horizon Worlds will soon support fully‑embodied AI NPCs powered by its latest Llama LLM, enabling creators to place interactive characters that offer unscripted, dynamic conversations layered with scripted dialogue and selectable AI voices. Previously limited to non‑embodied background agents, these new NPCs can be customized by name, backstory, personality traits, and dialogue style, ideal as quest givers, guides, shopkeepers, or lore-rich characters. Later this year, Meta will expand their capabilities to trigger in‑world actions, making NPCs active participants in gameplay. Further details are expected at the Meta Connect developer conference starting September 17.

The AI Desk

“Nano Banana” is the whimsical internal codename for Google’s latest AI-powered image generation and editing model, officially launched as Gemini 2.5 Flash Image. Developed by Google DeepMind, Nano Banana integrates into the Gemini app (and is accessible via the Gemini API, Google AI Studio, and Vertex AI). It excels at maintaining character consistency to ensure that people, pets, and objects retain their appearance across successive edits. Users can accomplish complex, multi‑step edits such as changing attire, blending multiple images, or reimagining scenes, all through natural language prompts. Outputs include a visible watermark and an invisible SynthID watermark to clearly indicate AI-generated content.
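For readers who build with these models, the prompt-driven editing flow described above boils down to sending an input image alongside a natural-language instruction in a single request. As a rough, hedged sketch (this is not official Google sample code; the payload shape follows the general `generateContent` request format, and the helper name is my own), assembling such an image-edit request might look like this:

```python
import base64
import json

def build_edit_request(image_bytes: bytes, instruction: str) -> dict:
    """Assemble a generateContent-style payload pairing an input image
    with a natural-language editing instruction (hypothetical helper)."""
    return {
        "contents": [{
            "parts": [
                # The source image travels inline, base64-encoded.
                {"inline_data": {
                    "mime_type": "image/png",
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                }},
                # The edit itself is just plain language.
                {"text": instruction},
            ]
        }]
    }

# Example: restyle attire while preserving the subject's appearance,
# the character-consistency strength the announcement highlights.
payload = build_edit_request(
    b"\x89PNG...",  # placeholder bytes standing in for a real PNG
    "Change the jacket to red; keep the face and pose unchanged.",
)
print(json.dumps(payload)[:80])
```

The same request body could then be posted to whichever Gemini endpoint (API, AI Studio, or Vertex AI) the developer has access to; the watermarking described above is applied server-side, not by the caller.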

Spatial Audio

For more spatial commentary & insights, check out the AI/XR Podcast, hosted by the author of this column, Charlie Fink, along with Ted Schilowitz, former studio executive and co-founder of Red Camera, and Rony Abovitz, founder of Magic Leap. This week, we're off for Labor Day, but you can find the entire archive and any episodes you missed on Spotify, iTunes, and YouTube.

Charlie Fink is an author and futurist focused on spatial computing. See his books here. Spatial Beats contains insights and inputs from Fink's collaborators, including Paramount Pictures futurist Ted Schilowitz.

More from AR Insider…