
It came as no surprise as I walked the show floors of CES 2026 that AI is now the baseline infrastructure for a sweeping range of new consumer technology. What did surprise and impress me was how inventively companies are integrating this still-new technology. AI runs through appliances, toys, wearables, vehicles, robotics, and home infrastructure, which is to say everywhere you look at CES. Some implementations are practical, some speculative, and some verge on performance art, but together they describe a market where intelligence is assumed.
That scope reflects how the show itself has evolved. As Gary Shapiro, CEO of the Consumer Technology Association, which puts on CES, the world’s largest trade show, said on The AI/XR Podcast last week, “Artificial intelligence has a bigger footprint at CES than ever before. It has greater pre-registration, a record number of innovation entries, and it’s touching every category.”
Let’s start with a mundane yet important place always packed with products at CES: the kitchen. Wan AIChef is an AI cooking agent, a system that identifies ingredients, selects cooking methods, controls temperature, and executes meals autonomously once food is loaded. Cooking is treated as a data-driven process, combining sensors, models, and a recipe system trained by professional chefs. The kitchen becomes a managed environment, optimized through software and precision hardware.
A few aisles away, FoloToy’s AI Sunflower is a screen-free animated plush that listens to music and moves in response to rhythm and tempo. It behaves like a character in the room, reacting to sound and presence. This is part of a growing class of ambient AI objects designed to add expression and interaction to everyday spaces.
Lepro Ami is a desktop AI companion with a curved OLED display, eye-tracking cameras, environmental sensors, and an animated avatar designed to occupy physical space on a desk. It responds to movement, mood, and environment, framing AI as something that shares space rather than waits in the background.
Outdoor exploration, in this case birding, is also being reshaped. That took me by surprise, though it shouldn't have, given the breadth of this year's show. Solvia's ED 8×32 binoculars combine traditional optics with a built-in camera and offline species recognition trained on a large bird dataset. Users identify birds directly through the binoculars, then log and share discoveries through a companion app. Observation, identification, and documentation collapse into a single continuous experience. According to the US Department of Agriculture, there are 96 million birders in the United States.
Wearables are one of the densest categories on the floor this year, and the most interesting examples focus on context rather than commands. Looki’s L1 is a proactive AI wearable designed to recognize situations in real time and respond automatically. Using a camera, microphones, and motion sensors, it identifies context and captures moments or surfaces information without user prompts. During CES, Looki activates an Expo mode tailored for conferences, tracking interactions, flagging moments to revisit, and generating follow-up cues. I will be wearing Looki throughout the show to see how a context-aware wearable performs inside the density and noise of CES.
Nirva's AI jewelry won a CES Innovation Award for its attractive, unobtrusive design as an all-day AI wearable. It gathers data through audio, motion, and light exposure to automatically journal daily activity, track emotional patterns, and map social interactions. The device is designed for continuity, building a model of routine and behavior through constant presence rather than periodic use. Nirva, if you're reading this, I'd love to try it at the show, too.
Shapiro also pointed to health tech, a rapidly expanding category at the show. Xtand is demonstrating an AI-driven knee strap that dynamically adjusts pressure based on movement, paired with lightweight exoskeletons designed for mobility assistance and rehabilitation. Real-time sensing allows the system to anticipate strain and adapt support, applying AI to injury prevention and endurance. I have two family members who could really benefit from this innovative product.
Robotics occupies an expanding footprint. Robotera is exhibiting full-size humanoid robots with dexterous hands capable of fine motor control and coordinated movement. The demonstrations emphasize stability, teleoperation, and precision, reinforcing the industry's focus on machines that can operate safely in shared human spaces.
LumiMind is showcasing non-invasive brain-computer interface technology through a live demonstration of brain-controlled gameplay in Elden Ring. The demo serves as a capability proof for a consumer sleep device that detects neural patterns associated with sleep onset and responds in real time. Neural decoding is moving toward consumer contexts.
Then there is the lawn mower. I was ready to ridicule this one until I saw it. Sunseeker’s S4 is a fully autonomous LiDAR-equipped robotic mower that maps yards, avoids obstacles, and operates without perimeter wires. With 3D sensing and adaptive path planning, it applies advanced AI navigation to routine lawn care. It is both technically impressive and emblematic of how CES exhibitors are applying sophisticated AI and computer vision systems to ordinary tasks.
As Shapiro noted on the podcast, “Robotics and applied AI have their biggest presence ever at CES, and this is just the beginning.” Much of what appears this week is still experimental. Some products will enter everyday use while others will remain technology demos, framing this year's show as the moment when AI became common enough to appear everywhere at once.
Header image credit: I’M ZION on Unsplash
Charlie Fink is the author of the AR-enabled books “Metaverse” (2017) and “Convergence” (2019). In the early '90s, Fink was EVP & COO of VR pioneer Virtual World Entertainment. He teaches at Chapman University in Orange, CA.

