Operations & Scale

How Cloud Software Is Simplifying Airport Operations and Replacing Legacy Systems

As airports grow more complex, the real innovation lies in making their systems simpler, faster, and easier to act on

Updated March 24, 2026 5:55 PM

An airplane parked at Josep Tarradellas Barcelona-El Prat Airport. PHOTO: UNSPLASH

Airports are some of the most complex systems in the world. Every day, they manage thousands of flights, passengers, crew schedules, gates and ground operations—all moving at the same time. But much of this still runs on older software that doesn’t connect well, making simple decisions harder than they need to be.

This is the gap companies like AirportLabs are trying to address. Instead of relying on multiple disconnected systems, their approach brings airport operations into one cloud-based platform. The goal is straightforward: take scattered data and turn it into something teams can actually use in real time.

In practice, this means combining core systems like flight databases, resource management and display systems into a single interface. When everything is connected, airport staff can respond faster—whether it’s adjusting gate assignments, managing delays, or coordinating ground crews. Rather than reacting late, decisions can be made as situations unfold.

Another shift is how this technology is built. Traditional airport systems often require heavy on-site infrastructure and long deployment timelines. In contrast, cloud-based platforms remove much of that complexity. Updates are faster, systems are easier to scale and teams spend less time maintaining servers and more time improving operations.

What stands out is the speed of adoption. Instead of multi-year rollouts, newer systems can be implemented in weeks, allowing airports to see improvements much sooner.

At a broader level, this reflects a familiar pattern seen across industries. As operations become more data-heavy, the advantage shifts to those who can simplify complexity. In aviation, that doesn’t just mean better technology—it means making the entire system easier to run.


Artificial Intelligence

HTC VIVERSE and World Labs Partner to Turn AI-Generated 3D Worlds Into Interactive Experiences

The focus is no longer just AI-generated worlds, but how those worlds become structured digital products

Updated March 17, 2026 1:01 AM

The inside of a pair of HTC VR goggles. PHOTO: UNSPLASH

As AI tools improve, creating 3D content is becoming faster and easier. However, building that content into interactive experiences still requires time, structure and technical work. That difference between generation and execution is where HTC VIVERSE and World Labs are focusing their new collaboration.

HTC VIVERSE is a 3D content platform developed by HTC. It provides creators with tools to build, refine and publish interactive virtual environments. Meanwhile, World Labs is an AI startup founded by researcher Fei-Fei Li and a team of machine learning specialists. The company recently introduced Marble, a tool that generates full 3D environments from simple text, image or video prompts.

While Marble can quickly create a digital world, that world on its own is not yet a finished experience. It still needs structure, navigation and interaction. This is where VIVERSE fits in. By combining Marble’s world generation with VIVERSE’s building tools, creators can move from an AI-generated scene to a usable, interactive product.

In practice, the workflow has two steps. First, Marble produces the base 3D environment. Then, creators bring that environment into VIVERSE, where they add game mechanics, scenes and interactive elements. In this model, AI handles the early visual creation, while the human creator defines how users explore and interact with the world.

To demonstrate this process, the companies developed three example projects. Whiskerhill turns a Marble-generated world into a simple quest-based experience. Whiskerport connects multiple AI-generated scenes into a multi-level environment that users navigate through portals. Clockwork Conspiracy, built by VIVERSE, uses Marble's generation system to create a more structured, multi-scene game. More than demos, these projects show that AI-generated worlds can evolve beyond static visuals into interactive environments.

This matters because generative AI is often judged by how quickly it produces content. However, speed alone does not create usable products. Digital experiences still require sequencing, design decisions and user interaction. As a result, the real challenge is not generation, but integration — connecting AI output to tools that make it functional.

Seen in this context, the collaboration is less about a single product and more about workflow. VIVERSE provides a system that allows AI-generated environments to be edited and structured. World Labs provides the engine that creates those environments in the first place. Together, they are testing whether AI can fit directly into a full production pipeline rather than remain a standalone tool.

Ultimately, the collaboration reflects a broader change in creative technology. AI is no longer only producing isolated assets. It is beginning to plug into the larger process of building complete experiences. The key question is no longer how quickly a world can be generated, but how easily that world can be turned into something people can actually use and explore.