The collaboration between Oversonic Robotics and STMicroelectronics highlights how robotics is beginning to fill gaps traditional automation cannot.
Updated
January 23, 2026 10:41 AM

3D render of humanoid robots working in a factory assembly line. PHOTO: ADOBE STOCK
Oversonic Robotics, an Italian company known for building cognitive humanoid robots, has signed an agreement with STMicroelectronics, one of the world’s largest semiconductor manufacturers, to deploy humanoid robots inside semiconductor plants.
According to the companies, this marks the first operational use of cognitive humanoid robots inside semiconductor manufacturing facilities. The first deployment has already taken place at ST’s advanced packaging and test plant in Malta.
At the center of the collaboration is RoBee, Oversonic’s humanoid robot. RoBee is designed to carry out support tasks within industrial environments, particularly where flexibility and interaction with human workers are required. In ST’s factories, the robots will assist with complex manufacturing and logistics flows linked to new semiconductor products. They are intended to work alongside existing automation systems, not replace them.
RoBee is notable for its ability to operate in environments shared with people. It is currently the only humanoid robot certified for use in both industrial and healthcare settings and is already in operation within several Italian companies. The robot is also being used in experimental hospital programs. That background helped position RoBee for deployment in tightly controlled manufacturing environments such as semiconductor plants.
Fabio Puglia, President of Oversonic Robotics, described the agreement as a milestone for deploying humanoid robots in complex industrial settings: “The partnership with STMicroelectronics is a great source of pride for us because it embodies the vision of cognitive robotics that Oversonic has brought to the industrial and healthcare markets. Being the first to introduce cognitive humanoid robots in a sophisticated production context such as semiconductors means measuring ourselves against the highest standards in terms of reliability, safety and operational continuity. This agreement represents a fundamental milestone for Oversonic and, more generally, for the industrial challenges these new machines are called to face in innovative and highly complex environments, alongside people and supporting their quality of work.”
From STMicroelectronics’ side, the use of humanoid robots is framed as part of a broader effort to manage growing manufacturing complexity. The company said RoBee will support complex tasks and help manage the intricate production flows required by newer semiconductor products. It is also expected to contribute to improved product quality and shorter manufacturing cycle times. The robots are designed to integrate with existing automation and software systems, helping improve safety and operational continuity.
In semiconductor manufacturing, precision and reliability leave little room for experimentation. Therefore, introducing humanoid robots into this environment signals a practical shift. It shows how robotics is starting to fill gaps that traditional automation has struggled to address.
The focus is no longer just AI-generated worlds, but how those worlds become structured digital products
Updated
February 20, 2026 6:50 PM

The inside of a pair of HTC VR goggles. PHOTO: UNSPLASH
As AI tools improve, creating 3D content is becoming faster and easier. However, building that content into interactive experiences still requires time, structure and technical work. That difference between generation and execution is where HTC VIVERSE and World Labs are focusing their new collaboration.
HTC VIVERSE is a 3D content platform developed by HTC. It provides creators with tools to build, refine and publish interactive virtual environments. Meanwhile, World Labs is an AI startup founded by researcher Fei-Fei Li and a team of machine learning specialists. The company recently introduced Marble, a tool that generates full 3D environments from simple text, image or video prompts.
While Marble can quickly create a digital world, that world on its own is not yet a finished experience. It still needs structure, navigation and interaction. This is where VIVERSE fits in. By combining Marble’s world generation with VIVERSE’s building tools, creators can move from an AI-generated scene to a usable, interactive product.
In practice, the workflow has two steps. First, Marble produces the base 3D environment. Then, creators bring that environment into VIVERSE, where they add game mechanics, scenes and interactive elements. In this model, AI handles the early visual creation, while the human creator defines how users explore and interact with the world.
To demonstrate this process, the companies developed three example projects. Whiskerhill turns a Marble-generated world into a simple quest-based experience. Whiskerport connects multiple AI-generated scenes into a multi-level environment that users navigate through portals. Clockwork Conspiracy, built by VIVERSE, uses Marble’s generation system to create a more structured, multi-scene game. These projects are not just demos. They serve as proof that AI-generated worlds can evolve beyond static visuals and become interactive environments.
This matters because generative AI is often judged by how quickly it produces content. However, speed alone does not create usable products. Digital experiences still require sequencing, design decisions and user interaction. As a result, the real challenge is not generation, but integration — connecting AI output to tools that make it functional.
Seen in this context, the collaboration is less about a single product and more about workflow. VIVERSE provides a system that allows AI-generated environments to be edited and structured. World Labs provides the engine that creates those environments in the first place. Together, they are testing whether AI can fit directly into a full production pipeline rather than remain a standalone tool.
Ultimately, the collaboration reflects a broader change in creative technology. AI is no longer only producing isolated assets. It is beginning to plug into the larger process of building complete experiences. The key question is no longer how quickly a world can be generated, but how easily that world can be turned into something people can actually use and explore.