Where smarter storage meets smarter logistics.
Updated January 8, 2026 6:32 PM
Kioxia's flagship building at Yokohama Technology Campus. PHOTO: KIOXIA
E-commerce keeps growing, and with it the number of products moving through warehouses every day. Items vary more than ever: different shapes, seasonal packaging, limited editions and constantly updated designs. At the same time, many logistics centers are dealing with labor shortages and rising pressure to automate.
But today’s image-recognition AI isn’t built for this level of change. Most systems rely on deep-learning models that need to be adjusted or retrained whenever new products appear. Every update — whether it’s a new item or a packaging change — adds extra time, energy use and operational cost. And for warehouses handling huge product catalogs, these retraining cycles can slow everything down.
KIOXIA, a company known for its memory and storage technologies, is working on a different approach. In a new collaboration with Tsubakimoto Chain and EAGLYS, the team has developed an AI-based image recognition system that is designed to adapt more easily as product lines grow and shift. The idea is to help logistics sites automatically identify items moving through their workflows without constantly reworking the core AI model.
At the center of the system is KIOXIA’s AiSAQ software paired with its Memory-Centric AI technology. Instead of retraining the model each time new products appear, the system stores new product data — images, labels and feature information — directly in high-capacity storage. This allows warehouses to add new items quickly without altering the original AI model.
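To make that concrete, here is a minimal sketch of the enrollment path in Python, assuming a frozen image encoder and a simple append-only store. Every name here (embed, FeatureStore) is illustrative rather than KIOXIA's actual software; the point is that adding a product is a data write, not a training run.

```python
# Illustrative sketch only: embed() stands in for a frozen image encoder,
# and FeatureStore is a hypothetical structure, not KIOXIA's actual
# software. Enrolling a product appends data; the model weights are
# never touched.
import numpy as np

EMBED_DIM = 512  # assumed output size of the frozen encoder


def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen image encoder (e.g. a pretrained CNN/ViT)."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % 2**32)
    vec = rng.standard_normal(EMBED_DIM).astype(np.float32)
    return vec / np.linalg.norm(vec)  # unit length, so dot product = cosine


class FeatureStore:
    """Append-only store of (feature vector, label) pairs."""

    def __init__(self) -> None:
        self.vectors = np.empty((0, EMBED_DIM), dtype=np.float32)
        self.labels: list[str] = []

    def enroll(self, image: np.ndarray, label: str) -> None:
        # Adding a new SKU is a storage write, not a retraining cycle.
        self.vectors = np.vstack([self.vectors, embed(image)[None, :]])
        self.labels.append(label)


store = FeatureStore()
store.enroll(np.zeros((64, 64, 3), dtype=np.uint8), "sku-001-spring-box")
```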
Because a growing data store can slow searches, the system also indexes the stored product information and places the index on SSD storage. This lets the AI retrieve relevant features quickly, using a Retrieval-Augmented Generation (RAG)-style method adapted for image recognition.
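As a rough structural analogy for that retrieval step, the sketch below uses the open-source FAISS library in place of AiSAQ: product features are indexed, the index is persisted to disk, and a query image's features are matched against their nearest stored neighbors. A real AiSAQ deployment serves the index directly from SSD rather than reloading it into memory, so treat this as the shape of the approach, not KIOXIA's implementation.

```python
# Structural analogy only: FAISS stands in for KIOXIA's AiSAQ, and the
# data is synthetic. AiSAQ serves its index directly from SSD; plain
# FAISS reloads it into memory, so this shows the retrieval pattern,
# not KIOXIA's implementation.
import faiss
import numpy as np

DIM, N_PRODUCTS = 512, 10_000
rng = np.random.default_rng(0)

# Pretend these are the stored product features and their labels.
features = rng.standard_normal((N_PRODUCTS, DIM)).astype(np.float32)
faiss.normalize_L2(features)                  # unit vectors: inner product = cosine
labels = [f"sku-{i:05d}" for i in range(N_PRODUCTS)]

index = faiss.IndexFlatIP(DIM)                # exact cosine-similarity search
index.add(features)
faiss.write_index(index, "products.index")    # persist the index to (SSD) storage

ssd_index = faiss.read_index("products.index")  # reload at query time

# Classify one camera frame by looking up its nearest stored features.
query = rng.standard_normal((1, DIM)).astype(np.float32)
faiss.normalize_L2(query)
scores, ids = ssd_index.search(query, 5)      # top-5 nearest neighbors
print([(labels[i], round(float(s), 3)) for i, s in zip(ids[0], scores[0])])
```

The labels of the nearest neighbors can then be used directly, or majority-voted, as the classification, which is why newly enrolled products become recognizable the moment their features land in the store.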
The collaboration will be showcased at the 2025 International Robot Exhibition in Tokyo. Visitors will see the system classify items in real time as they move along a conveyor, drawing on stored product features to identify them instantly. The demonstration aims to illustrate how logistics sites can handle continuously changing inventories with greater accuracy and reduced friction.
Overall, as logistics networks grow busier and product lines evolve faster than ever, this memory-driven approach offers a practical way to keep automation adaptable and less fragile.
Keep Reading
The focus is no longer just AI-generated worlds, but how those worlds become structured digital products
Updated February 20, 2026 6:50 PM

The inside of a pair of HTC VR goggles. PHOTO: UNSPLASH
As AI tools improve, creating 3D content is becoming faster and easier. However, building that content into interactive experiences still requires time, structure and technical work. That gap between generation and execution is the focus of a new collaboration between HTC VIVERSE and World Labs.
HTC VIVERSE is a 3D content platform developed by HTC. It provides creators with tools to build, refine and publish interactive virtual environments. Meanwhile, World Labs is an AI startup founded by researcher Fei-Fei Li and a team of machine learning specialists. The company recently introduced Marble, a tool that generates full 3D environments from simple text, image or video prompts.
While Marble can quickly create a digital world, that world on its own is not yet a finished experience. It still needs structure, navigation and interaction. This is where VIVERSE fits in. By combining Marble’s world generation with VIVERSE’s building tools, creators can move from an AI-generated scene to a usable, interactive product.
In practice, the workflow has two steps. First, Marble produces the base 3D environment. Then creators bring that environment into VIVERSE, where they add game mechanics, scenes and interactive elements. In this model, AI handles the early visual creation, while the human creator defines how users explore and interact with the world.
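Purely as an illustration of that split, here is a toy two-stage pipeline in Python. None of these names are real Marble or VIVERSE APIs; generate_world and add_interaction are hypothetical, meant only to show generation and integration as distinct stages.

```python
# Hypothetical pipeline: none of these names are real Marble or VIVERSE
# APIs. The point is the split: AI generates the raw scene, then a
# human-driven tool layers interaction on top of it.
from dataclasses import dataclass, field


@dataclass
class Scene:
    """A generated 3D environment plus whatever interaction is added later."""
    prompt: str
    meshes: list[str] = field(default_factory=list)
    interactions: list[str] = field(default_factory=list)


def generate_world(prompt: str) -> Scene:
    """Stage 1 (AI): produce a raw environment from a prompt (Marble's role)."""
    return Scene(prompt=prompt, meshes=["terrain", "buildings", "props"])


def add_interaction(scene: Scene, element: str) -> Scene:
    """Stage 2 (human): layer mechanics onto the scene (VIVERSE's role)."""
    scene.interactions.append(element)
    return scene


world = generate_world("foggy harbor town at dusk")
for mechanic in ("portal-to-next-scene", "fetch-quest", "dialogue-trigger"):
    add_interaction(world, mechanic)
print(world)
```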
To demonstrate this process, the companies developed three example projects. Whiskerhill turns a Marble-generated world into a simple quest-based experience. Whiskerport connects multiple AI-generated scenes into a multi-level environment that users navigate through portals. Clockwork Conspiracy, built by VIVERSE, uses Marble's generation system to create a more structured, multi-scene game. More than demos, these projects show that AI-generated worlds can evolve beyond static visuals into interactive environments.
This matters because generative AI is often judged by how quickly it produces content. However, speed alone does not create usable products. Digital experiences still require sequencing, design decisions and user interaction. As a result, the real challenge is not generation, but integration — connecting AI output to tools that make it functional.
Seen in this context, the collaboration is less about a single product and more about workflow. VIVERSE provides a system that allows AI-generated environments to be edited and structured. World Labs provides the engine that creates those environments in the first place. Together, they are testing whether AI can fit directly into a full production pipeline rather than remain a standalone tool.
Ultimately, the collaboration reflects a broader change in creative technology. AI is no longer only producing isolated assets. It is beginning to plug into the larger process of building complete experiences. The key question is no longer how quickly a world can be generated, but how easily that world can be turned into something people can actually use and explore.