The collaboration between Oversonic Robotics and STMicroelectronics highlights how robotics is beginning to fill gaps traditional automation cannot.
Updated
December 31, 2025 2:10 PM

3D render of humanoid robots working in a factory assembly line. PHOTO: ADOBE STOCK
Oversonic Robotics, an Italian company known for building cognitive humanoid robots, has signed an agreement with STMicroelectronics, one of the world’s largest semiconductor manufacturers, to deploy humanoid robots inside semiconductor plants.
According to the companies, this is the first time cognitive humanoid robots will be used operationally inside semiconductor manufacturing facilities. The first deployment has already taken place at ST's advanced packaging and test plant in Malta.
At the center of the collaboration is RoBee, Oversonic’s humanoid robot. RoBee is designed to carry out support tasks within industrial environments, particularly where flexibility and interaction with human workers are required. In ST’s factories, the robots will assist with complex manufacturing and logistics flows linked to new semiconductor products. They are intended to work alongside existing automation systems, not replace them.
RoBee is notable for its ability to operate in environments shared with people. It is currently the only humanoid robot certified for use in both industrial and healthcare settings and is already in operation within several Italian companies. The robot is also being used in experimental hospital programs. That background helped position RoBee for deployment in tightly controlled manufacturing environments such as semiconductor plants.
Fabio Puglia, President of Oversonic Robotics, described the agreement as a milestone for deploying humanoid robots in complex industrial settings: “The partnership with STMicroelectronics is a great source of pride for us because it embodies the vision of cognitive robotics that Oversonic has brought to the industrial and healthcare markets. Being the first to introduce cognitive humanoid robots in a sophisticated production context such as semiconductors means measuring ourselves against the highest standards in terms of reliability, safety and operational continuity. This agreement represents a fundamental milestone for Oversonic and, more generally, for the industrial challenges these new machines are called to face in innovative and highly complex environments, alongside people and supporting their quality of work”.
From STMicroelectronics' side, the use of humanoid robots is framed as part of a broader effort to manage growing manufacturing complexity. The company said RoBee will support complex tasks and help manage the intricate production flows required by newer semiconductor products. It is also expected to contribute to improved product quality and shorter manufacturing cycle times. The robots are designed to integrate with existing automation and software systems, helping improve safety and operational continuity.
In semiconductor manufacturing, precision and reliability leave little room for experimentation, so introducing humanoid robots into this environment signals a practical shift: robotics is starting to fill gaps that traditional automation has struggled to address.
The hidden cost of scaling AI: infrastructure, energy, and the push for liquid cooling.
Updated
December 16, 2025 3:43 PM

The inside of a data centre, with rows of server racks. PHOTO: FREEPIK
As artificial intelligence models grow larger and more demanding, the quiet pressure point isn’t the algorithms themselves—it’s the AI infrastructure that has to run them. Training and deploying modern AI models now requires enormous amounts of computing power, which creates a different kind of challenge: heat, energy use and space inside data centers. This is the context in which Supermicro and NVIDIA’s collaboration on AI infrastructure begins to matter.
Supermicro designs and builds large-scale computing systems for data centers. It has now expanded its support for NVIDIA’s Blackwell generation of AI chips with new liquid-cooled server platforms built around the NVIDIA HGX B300. The announcement isn’t just about faster hardware. It reflects a broader effort to rethink how AI data center infrastructure is built as facilities strain under rising power and cooling demands.
At a basic level, the systems are designed to pack more AI chips into less space while using less energy to keep them running. Instead of relying mainly on air cooling (fans, chillers and large amounts of electricity), these liquid-cooled AI servers circulate liquid directly across critical components. That approach removes heat more efficiently, allowing servers to run denser AI workloads without overheating or wasting energy.
Why does that matter outside a data center? Because AI doesn’t scale in isolation. As models become more complex, the cost of running them rises quickly, not just in hardware budgets, but in electricity use, water consumption and physical footprint. Traditional air-cooling methods are increasingly becoming a bottleneck, limiting how far AI systems can grow before energy and infrastructure costs spiral.
This is where the Supermicro–NVIDIA partnership fits in. NVIDIA supplies the computing engines—the Blackwell-based GPUs designed to handle massive AI workloads. Supermicro focuses on how those chips are deployed in the real world: how many GPUs can fit in a rack, how they are cooled, how quickly systems can be assembled and how reliably they can operate at scale in modern data centers. Together, the goal is to make high-density AI computing more practical, not just more powerful.
The new liquid-cooled designs are aimed at hyperscale data centers and so-called AI factories—facilities built specifically to train and run large AI models continuously. By increasing GPU density per rack and removing most of the heat through liquid cooling, these systems aim to ease a growing tension in the AI boom: the need for more computing power without an equally dramatic rise in energy waste.
Just as important is speed. Large organizations don’t want to spend months stitching together custom AI infrastructure. Supermicro’s approach packages compute, networking and cooling into pre-validated data center building blocks that can be deployed faster. In a world where AI capabilities are advancing rapidly, time to deployment can matter as much as raw performance.
Stepping back, this development says less about one product launch and more about a shift in priorities across the AI industry. The next phase of AI growth isn’t only about smarter models—it’s about whether the physical infrastructure powering AI can scale responsibly. Efficiency, power use and sustainability are becoming as critical as speed.