Startup Profiles

Startup Applied Brain Research Raises Seed Funding to Develop On-Device Voice AI

Why investors are backing Applied Brain Research’s on-device voice AI approach.

Updated

January 14, 2026 1:38 PM

Plastic model of a human brain. PHOTO: UNSPLASH

Applied Brain Research (ABR), a Canada-based startup, has closed its seed funding round to advance its work in “on-device voice AI”. The round was led by Two Small Fish Ventures, with its general partner Eva Lau joining ABR’s board, reflecting investor confidence in the company’s technical direction and market focus.

The round was oversubscribed, meaning more investors wanted to participate than the company had planned for. That response reflects growing interest in technologies that reduce reliance on cloud-based AI systems.

ABR is focused on a clear problem in voice-enabled products today. Most voice features depend on cloud servers to process speech, which can cause delays, increase costs, raise privacy concerns and limit performance on devices with small batteries or limited computing power.

ABR’s approach is built around keeping voice AI fully on-device. Instead of relying on cloud connectivity, its technology allows devices to process speech locally, enabling faster responses and more predictable performance while reducing data exposure.
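
To make the distinction concrete, the sketch below shows what fully local speech recognition can look like in code. It uses the open-source Vosk library with a locally downloaded model purely as an illustration; it is not ABR's stack, and the model path and audio file are placeholders.

```python
# Minimal on-device transcription sketch using the open-source Vosk library.
# Illustrative only: this is not ABR's TSP1 software; "model" and "command.wav"
# are placeholder paths for a local Vosk model and a 16 kHz mono PCM recording.
import json
import wave

from vosk import Model, KaldiRecognizer

model = Model("model")                    # model files live on the device
wf = wave.open("command.wav", "rb")       # audio captured locally
rec = KaldiRecognizer(model, wf.getframerate())

while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    rec.AcceptWaveform(data)              # processed locally; nothing is uploaded

print(json.loads(rec.FinalResult())["text"])
```

Because both the model and the audio stay on the device, there is no network round trip to add latency and no third-party server handling the recording.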

Central to this approach is the company’s TSP1 chip, a processor designed specifically for handling time-based data such as speech. Built for real-time voice processing at the edge, TSP1 allows tasks like speech recognition and text-to-speech to run on smaller, power-constrained devices.

This specialization is particularly relevant as voice interfaces become more common across emerging products. Many edge devices, such as wearables or mobile robots, cannot support traditional voice AI systems without compromising battery life or responsiveness. The TSP1 addresses this limitation by running those workloads at far lower power than conventional hardware: according to the company, full speech-to-text and text-to-speech can run at under 30 milliwatts, roughly 10 to 100 times less than many existing alternatives. That level of efficiency makes advanced voice interaction feasible on devices where power consumption has long been a limiting factor.
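
The difference is easiest to see as back-of-the-envelope battery math. The figures below assume a hypothetical 1 Wh wearable battery devoted entirely to the voice stack and take the 100x comparison at face value; they are illustrative, not ABR's numbers.

```python
# Rough battery-life arithmetic, assuming a hypothetical 1 Wh wearable battery
# dedicated to always-on voice. The 3 W figure is simply 100x the quoted 30 mW,
# used here only to illustrate the scale of the claimed savings.
BATTERY_WH = 1.0

for label, power_w in [("30 mW on-device stack", 0.030),
                       ("3 W conventional stack (100x)", 3.0)]:
    hours = BATTERY_WH / power_w
    print(f"{label}: ~{hours:.1f} h of continuous listening")
```

Under those assumptions the on-device stack runs for well over a day of continuous listening, while the alternative drawing 100 times more power is exhausted in about 20 minutes.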

That efficiency makes the technology applicable across a wide range of use cases. In augmented reality glasses, it supports responsive, hands-free voice control. In robotics, it enables real-time voice interaction without cloud latency or ongoing service costs. For wearables, it expands voice functionality without severely impacting battery life. In medical devices, it allows on-device inference while keeping sensitive data local. And in automotive systems, it enables consistent voice experiences regardless of network availability.

For investors, this combination of timing and technology is what stands out. Voice interfaces are becoming more common, while reliance on cloud infrastructure is increasingly seen as a limitation rather than a strength. ABR sits at the intersection of those two shifts.

With fresh funding in place, ABR is now working with partners across AR, robotics, healthcare, automotive and wearables to bring that future closer. For startup watchers, it’s a reminder that some of the most meaningful AI advances aren’t about bigger models but about making intelligence fit where it actually needs to live.

Artificial Intelligence

AgiBot Brings Real-World Reinforcement Learning to Factory Floors

Robots that learn on the job: AgiBot tests reinforcement learning in real-world manufacturing.

Updated

January 8, 2026 6:34 PM

A humanoid robot works on a factory line, showcasing advanced automation in real-world production. PHOTO: AGIBOT

Shanghai-based robotics firm AgiBot has taken a major step toward bringing artificial intelligence into real manufacturing. The company announced that its Real-World Reinforcement Learning (RW-RL) system has been successfully deployed on a pilot production line run in partnership with Longcheer Technology. The deployment marks one of the first real-world applications of reinforcement learning in industrial robotics.

The project represents a key shift in factory automation. For years, precision manufacturing has relied on rigid setups: robots that need custom fixtures, intricate programming and long calibration cycles. Even newer systems combining vision and force control often struggle with slow deployment and complex maintenance. AgiBot’s system aims to change that by letting robots learn and adapt on the job, reducing the need for extensive tuning or manual reconfiguration.

The RW-RL setup allows a robot to pick up new tasks within minutes rather than weeks. Once trained, the system can automatically adjust to variations, such as changes in part placement or size tolerance, maintaining steady performance throughout long operations. When production lines switch models or products, only minor hardware tweaks are needed. This flexibility could significantly cut downtime and setup costs in industries where rapid product turnover is common.
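
A toy example helps illustrate what adjusting to variations can mean in practice. The sketch below is a generic online-correction loop, not AgiBot's RW-RL algorithm, and the drift and noise values are invented for the example.

```python
# Toy sketch of on-line adaptation to part-placement drift. This is a generic
# illustrative loop, not AgiBot's RW-RL system; the 2 mm drift and 0.1 mm noise
# are made-up numbers for the example.
import random

class AdaptivePicker:
    def __init__(self, learning_rate=0.3):
        self.offset_mm = 0.0      # learned correction applied before each pick
        self.lr = learning_rate

    def pick(self, observed_error_mm):
        """Apply the current correction, then update it from the residual error."""
        residual = observed_error_mm - self.offset_mm
        self.offset_mm += self.lr * residual
        return residual

picker = AdaptivePicker()
for cycle in range(10):
    raw_error = 2.0 + random.gauss(0.0, 0.1)   # parts arrive ~2 mm off the taught spot
    residual = picker.pick(raw_error)
    print(f"cycle {cycle}: residual error {residual:+.2f} mm")
```

Within a handful of cycles the residual error shrinks toward zero, which is the behaviour the article describes, only achieved here with a trivially simple update rule rather than a learned policy.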

The system’s main strengths lie in faster deployment, high adaptability and easier reconfiguration. In practice, robots can be retrained quickly for new tasks without needing new fixtures or tools — a long-standing obstacle in consumer electronics production. The platform also works reliably across different factory layouts, showing potential for broader use in complex or varied manufacturing environments.

Beyond its technical claims, the milestone demonstrates a deeper convergence between algorithmic intelligence and mechanical motion. Instead of being tested only in the lab, AgiBot’s system was tried in real factory settings, showing it can perform reliably outside research conditions.

This progress builds on years of reinforcement learning research, which has gradually pushed AI toward greater stability and real-world usability. AgiBot’s Chief Scientist Dr. Jianlan Luo and his team have been at the forefront of that effort, refining algorithms capable of reliable performance on physical machines. Their work now underpins a production-ready platform that blends adaptive learning with precision motion control — turning what was once a research goal into a working industrial solution.

Looking forward, the two companies plan to extend the approach to other manufacturing areas, including consumer electronics and automotive components. They also aim to develop modular robot systems that can integrate smoothly with existing production setups.