Humanoids are moving from research labs into real industries — and capital is finally catching up.
Updated January 8, 2026, 6:31 PM

A face of a humanoid robot, side view on black background. PHOTO: UNSPLASH
Humanoid robots are shifting from sci-fi speculation to engineering reality, and the pace of progress is prompting investors to reassess how the next decade of physical automation will unfold. ALM Ventures has launched a new US$100 million early-stage fund aimed squarely at this moment: one where advances in robot control, embodied AI and spatial intelligence are beginning to converge into something commercially meaningful.
ALM Ventures Fund I is designed for the earliest stages of company formation, targeting seed and pre-seed teams building the foundations of humanoid deployment. It’s a concentrated fund that seeks early ownership in a sector many now consider the next major technological frontier.
For Founder and General Partner Modar Alaoui, the timing is not accidental. “After years of research, humanoids are finally entering a phase where performance, reliability and cost are converging toward commercial viability,” he said. “What the category needs now is focused capital and deep technical diligence to turn prototypes into scalable, enduring companies.”
That framing captures a shift happening across robotics: the field is moving out of the lab and into early commercial readiness. Improvements in perception systems, model-based reasoning and motion control are accelerating the transition. Advances in simulation are also lowering the complexity and cost of integrating humanoid platforms into real environments. As these systems become more capable, the gap between research prototypes and market-ready products is narrowing.
ALM Ventures is positioning itself at this inflection point. Fund I’s thesis centers on the core technologies required to scale humanoids safely and economically. This includes next-generation robot platforms, spatial reasoning engines, embodied intelligence models, world-modeling systems and the infrastructure needed for early deployment. Rather than chasing every robotics trend, the fund is concentrating on the essential layers that will determine whether humanoids can work reliably outside controlled settings.
The firm isn’t starting from zero. During the fund’s formation, ALM Ventures made ten early investments aligned with this thesis. The portfolio includes companies building at different layers of the humanoid stack: Sanctuary AI, Weave Robotics, Emancro, High Torque Robotics, MicroFactory, Mbodi, Adamo, Haptica Robotics, UMA and O-ID. The list reflects a broad but intentional spread, from hardware to intelligence to manufacturing approaches, all oriented toward enabling scalable physical AI.
Beyond capital, ALM Ventures has been shaping the ecosystem through its global Humanoids Summit series in Silicon Valley, London and Tokyo. The series gives the firm early visibility into emerging technologies, pre-incorporation teams and the senior leaders steering the global robotics landscape. That vantage point has helped the firm identify where commercialization is truly taking root and where bottlenecks still exist.
The rise of humanoids is often compared to the early days of self-driving cars: a long arc of research suddenly meeting an acceleration point. What separates this moment is that advances in embodied AI and spatial intelligence are giving robots a more intuitive understanding of the physical world, making them easier to deploy, teach and scale. ALM Ventures’ Fund I is an attempt to capture that transition while shaping the companies that could define the next technological era.
With US$100 million dedicated to the earliest builders in the space, ALM Ventures is signaling its belief that humanoids are not just another robotics cycle but may be the next major platform shift in AI.
A closer look at how reading, conversation, and AI are being combined
Updated February 7, 2026, 2:18 PM

Assorted plush character toys piled inside a glass claw machine. PHOTO: ADOBE STOCK
In the past, “educational toys” usually meant flashcards, prerecorded stories or apps that asked children to tap a screen. ChooChoo takes a different approach: it is designed not to talk at children, but to talk with them.
ChooChoo is an AI-powered interactive reading companion built for children aged three to six. Instead of playing stories passively, it engages kids in conversation while reading. It asks questions, reacts to answers, introduces new words in context and adjusts the story flow based on how the child responds. The goal is not entertainment alone, but language development through dialogue.
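To make that pattern concrete, here is a minimal sketch of a dialogic reading loop in Python. It is illustrative only: the Page structure, the sample story and the console input/output are assumptions made for this example, not ChooChoo’s implementation, which would pair speech recognition and a language model with its own story content.

```python
# A minimal sketch of dialogic reading: narrate, ask, react, recap.
# Everything here (Page, the prompts, console I/O) is a hypothetical
# stand-in for on-device speech and language models.
from dataclasses import dataclass, field


@dataclass
class Page:
    text: str                                    # passage read aloud
    prompt: str                                  # open-ended question about it
    new_words: list[str] = field(default_factory=list)


def read_dialogically(pages: list[Page]) -> None:
    """Turn one-way narration into a two-way exchange."""
    vocabulary_seen: set[str] = set()
    for page in pages:
        print(f"[narrate] {page.text}")
        vocabulary_seen.update(page.new_words)

        # The dialogic step: the story pauses until the child has answered.
        answer = input(f"[ask] {page.prompt} ").lower()

        # React to the answer instead of moving straight to the next page.
        if any(word in answer for word in page.new_words):
            print("[react] Yes! You used one of our new words.")
        else:
            print(f"[react] Good thinking. Today's new words: {page.new_words}")

    print(f"[recap] Words introduced this session: {sorted(vocabulary_seen)}")


if __name__ == "__main__":
    read_dialogically([
        Page("The little fox tiptoed past the sleeping bear.",
             "Why do you think the fox tiptoed?", ["tiptoed"]),
        Page("It found a glistening river at the edge of the woods.",
             "What might 'glistening' mean?", ["glistening"]),
    ])
```

The structure matters more than the details: the loop cannot advance until the child has answered and received a reaction, which is exactly what turns narration into conversation.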
That idea is rooted in research, not novelty. ChooChoo is inspired by dialogic reading methods from Yale’s early childhood language development work, which show that children learn language faster when stories become two-way conversations rather than one-way narration. Used consistently, this approach has been shown to improve vocabulary, comprehension and confidence within weeks.
The project was created by Dr. Diana Zhu, who holds a PhD from Yale and focused her work on how children acquire language. Her aim with ChooChoo was to turn academic insight into something practical and warm enough to live in a child’s room. The result is a device that listens, responds and adapts instead of simply playing content on command.
What makes this possible is not just AI, but where that AI runs.
Unlike many smart toys that rely heavily on the cloud, ChooChoo is built on RiseLink’s edge AI platform. That means much of the intelligence happens directly on the device itself rather than being sent back and forth to remote servers. This design choice has three major implications.
First, it reduces delay. Conversations feel natural because the toy can respond almost instantly. Second, it lowers power consumption, allowing the device to stay “always on” without draining the battery quickly. Third, it improves privacy. Sensitive interactions are processed locally instead of being continuously streamed online.
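As a rough illustration of that edge-first design, the sketch below keeps an entire conversational turn on the device and shares only aggregate counts with the parent app. The function names (transcribe_on_device, generate_reply_on_device, nightly_sync) are hypothetical stand-ins, not RiseLink APIs.

```python
# A hedged sketch of edge-first processing: raw audio never leaves the
# device, and only coarse, non-sensitive stats are synced to a parent app.
# All functions below are placeholder stand-ins, not a real SDK.
import time


def transcribe_on_device(audio: bytes) -> str:
    """Placeholder for an on-chip speech-to-text model."""
    return "the fox tiptoed past the bear"


def generate_reply_on_device(utterance: str) -> str:
    """Placeholder for an on-chip language model; no server round-trip."""
    return f"Nice listening! What do you think happens after '{utterance}'?"


def handle_turn(audio: bytes) -> str:
    """One conversational turn, processed entirely on the device."""
    start = time.perf_counter()
    text = transcribe_on_device(audio)        # audio stays local (privacy)
    reply = generate_reply_on_device(text)    # no network hop (low latency)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"on-device turn: {elapsed_ms:.2f} ms, zero network latency")
    return reply


def nightly_sync(words_learned: int) -> dict:
    """Share aggregate progress with the parent app: counts, not recordings."""
    return {"words_learned": words_learned}


if __name__ == "__main__":
    print(handle_turn(b"\x00\x01"))           # stand-in for captured audio
    print(nightly_sync(words_learned=12))
```

The power point follows the same logic: keeping a radio link continuously active is typically far more expensive than occasional local inference on a low-power chip, which is what makes an always-on device practical.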
RiseLink’s hardware, including its ultra-low-power AI system-on-chip designs, is already used at large scale in consumer electronics. The company ships hundreds of millions of connected chips every year and works with global brands like LG, Samsung, Midea and Hisense. In ChooChoo’s case, that same industrial-grade reliability is being applied to a child’s learning environment.
The result is a toy that behaves less like a gadget and more like a conversational partner. It engages children in back-and-forth discussion during stories, introduces new vocabulary in natural context, pays attention to comprehension and emotional language, and adjusts its pace and tone to each child’s interests and progress. Parents can also view progress through an optional app that shows which words their child has learned and how the system is adapting over time.
What matters here is not that ChooChoo is “smart,” but that it reflects a shift in how technology enters early education. Instead of replacing teachers or parents, tools like this are designed to support human interaction by modeling it. The emphasis is on listening, responding and encouraging curiosity rather than testing or drilling.
That same philosophy is starting to shape the future of companion robots more broadly. As edge AI improves and hardware becomes smaller and more energy efficient, we are likely to see more devices that live alongside people instead of in front of them. Not just toys, but helpers, tutors and assistants that operate quietly in the background, responding when needed and staying out of the way when not.
In that sense, ChooChoo is less about novelty and more about direction. It shows what happens when AI is designed not for spectacle, but for presence. Not for control, but for conversation.
If companion robots become part of daily life in the coming years, their success may depend less on how powerful they are and more on how well they understand when to speak, when to listen and how to grow with the people who use them.