Robots that learn on the job: AgiBot tests reinforcement learning in real-world manufacturing.
Updated
November 27, 2025 3:26 PM

A humanoid robot works on a factory line, showcasing advanced automation in real-world production. PHOTO: AGIBOT
Shanghai-based robotics firm AgiBot has taken a major step toward bringing artificial intelligence into real manufacturing. The company announced that its Real-World Reinforcement Learning (RW-RL) system has been successfully deployed on a pilot production line run in partnership with Longcheer Technology, marking one of the first production applications of reinforcement learning in industrial robotics.
The project represents a key shift in factory automation. For years, precision manufacturing has relied on rigid setups: robots that need custom fixtures, intricate programming and long calibration cycles. Even newer systems combining vision and force control often struggle with slow deployment and complex maintenance. AgiBot’s system aims to change that by letting robots learn and adapt on the job, reducing the need for extensive tuning or manual reconfiguration.
The RW-RL setup allows a robot to pick up new tasks within minutes rather than weeks. Once trained, the system can automatically adjust to variations, such as changes in part placement or size tolerance, maintaining steady performance throughout long operations. When production lines switch models or products, only minor hardware tweaks are needed. This flexibility could significantly cut downtime and setup costs in industries where rapid product turnover is common.
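AgiBot has not published RW-RL’s internals, but the “learning on the job” behavior it describes maps onto standard online reinforcement learning: a pretrained policy keeps updating from live rollouts, so drift in part placement or tolerances is absorbed rather than halting the line. The sketch below is a minimal, hypothetical illustration of that loop; every name in it (LineCell, Policy, adapt_online) is illustrative, not AgiBot’s API.

```python
# Minimal sketch of on-the-job reinforcement learning (hypothetical).
# A pretrained policy keeps taking update steps on fresh experience
# collected while the line runs, adapting to small shifts in part pose.
import random
from collections import deque

class LineCell:
    """Stand-in for one station on the production line."""
    def reset(self):
        return [random.gauss(0.0, 0.01) for _ in range(6)]  # noisy part pose
    def step(self, action):
        obs = [random.gauss(0.0, 0.01) for _ in range(6)]
        reward = 1.0 if max(abs(a) for a in action) < 0.5 else 0.0  # success proxy
        done = random.random() < 0.05                        # cycle finished
        return obs, reward, done

class Policy:
    """Stand-in for a pretrained actor; update() would be a gradient step."""
    def act(self, obs):
        return [0.1 * o for o in obs]
    def update(self, batch):
        pass  # e.g., one actor-critic update on the sampled transitions

def adapt_online(env, policy, steps=1000):
    buffer = deque(maxlen=10_000)                  # keep only recent experience
    obs = env.reset()
    for _ in range(steps):
        action = policy.act(obs)
        nxt, reward, done = env.step(action)
        buffer.append((obs, action, reward, nxt, done))
        policy.update(random.sample(list(buffer), min(64, len(buffer))))
        obs = env.reset() if done else nxt

adapt_online(LineCell(), Policy())
```

The point of the pattern is that adaptation happens in the same loop as production: no offline retraining pass and no new fixtures, just continued learning from the task’s own success signal.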
The system’s main strengths lie in faster deployment, high adaptability and easier reconfiguration. In practice, robots can be retrained quickly for new tasks without needing new fixtures or tools — a long-standing obstacle in consumer electronics production. The platform also works reliably across different factory layouts, showing potential for broader use in complex or varied manufacturing environments.
Beyond its technical claims, the milestone demonstrates a deeper convergence between algorithmic intelligence and mechanical motion. Instead of being tested only in the lab, AgiBot’s system was tried in real factory settings, showing it can perform reliably outside research conditions.
This progress builds on years of reinforcement learning research, which has gradually pushed AI toward greater stability and real-world usability. AgiBot’s Chief Scientist Dr. Jianlan Luo and his team have been at the forefront of that effort, refining algorithms capable of reliable performance on physical machines. Their work now underpins a production-ready platform that blends adaptive learning with precision motion control — turning what was once a research goal into a working industrial solution.
Looking forward, the two companies plan to extend the approach to other manufacturing areas, including consumer electronics and automotive components. They also aim to develop modular robot systems that can integrate smoothly with existing production setups.
Keep Reading
Redefining sensor performance with advanced physical AI and signal processing.
Updated
December 16, 2025 3:28 PM
Robot with human features, equipped with a visual sensor. PHOTO: UNSPLASH
Atomathic, the company once known as Neural Propulsion Systems, is stepping into the spotlight with a bold claim: its new AI platforms can help machines “see the invisible”. With the commercial launch of AIDAR™ and AISIR™, the company says it is opening a new chapter for physical AI, AI sensing and advanced sensor technology across automotive, aviation, defense, robotics and semiconductor manufacturing.
The idea behind these platforms is simple yet ambitious. Machines gather enormous amounts of signal data, yet they still struggle to understand the faint, fast or hidden details that matter most when making decisions. Atomathic says its software closes that gap. By applying AI signal processing directly to raw physical signals, the company aims to help sensors pick up subtle patterns that traditional systems miss, enabling faster reactions and more confident decisions in autonomous systems.
"To realize the promise of physical AI, machines must achieve greater autonomy, precision and real-time decision-making—and Atomathic is defining that future," said Dr. Behrooz Rezvani, Founder and CEO of Atomathic. "We make the invisible visible. Our technology fuses the rigor of mathematics with the power of AI to transform how sensors and machines interact with the world—unlocking capabilities once thought to be theoretical. What can be imagined mathematically can now be realized physically."
This technical shift is powered by Atomathic’s deeper mathematical framework. The core of its approach is a method called hyperdefinition technology, which uses the Atomic Norm and fast computational techniques to map sparse physical signals. In simple terms, it pulls clarity out of chaos, enabling ultra-high-resolution signal visualization in real time, something the company claims has never been achieved at this scale.
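Atomathic has not published its exact formulation, but the Atomic Norm it cites is a standard object in sparse signal recovery. For a dictionary of “atoms” $\mathcal{A}$ (for instance, complex sinusoids in line-spectrum estimation), the atomic norm of a signal $x$ and the associated recovery problem are, roughly:

$$\|x\|_{\mathcal{A}} = \inf\Big\{\textstyle\sum_k c_k \;:\; x = \textstyle\sum_k c_k a_k,\; c_k \ge 0,\; a_k \in \mathcal{A}\Big\}$$

$$\hat{x} = \arg\min_x \|x\|_{\mathcal{A}} \quad \text{subject to} \quad \|y - \Phi x\| \le \varepsilon$$

where $y$ is the raw sensor measurement, $\Phi$ the sensing operator, and $\varepsilon$ a noise bound. Minimizing the atomic norm favors explanations built from as few atoms as possible, which is how sparse structure, such as a handful of targets, frequencies or reflections, can be resolved beyond what conventional filtering extracts. How Atomathic accelerates this to real-time rates is the proprietary part.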
AIDAR and AISIR are already being trialed and integrated across multiple sectors, and they’re designed to work with a broad range of hardware. That hardware-agnostic design is poised to matter even more as industries shift toward richer, more detailed sensing. Analysts expect the automotive sensor market to surge in the coming years, with radar imaging, next-gen ADAS systems and high-precision machine perception playing increasingly central roles.
Atomathic’s technology comes from a tight-knit team with deep roots in mathematics, machine intelligence and AI research, drawing talent from institutions such as Caltech, UCLA, Stanford and the Technical University of Munich. After seven years of development, the company is ready to show its progress publicly, starting with demonstrations at CES 2026 in Las Vegas.
If the future of autonomy depends on machines perceiving the world with far greater fidelity, Atomathic is betting that the next leap forward won’t come from more hardware but from rethinking the math behind the signal, and with it what physical AI can do.