Deep Tech

The Startups Building the Machines That Could Work the Moon

Getting to the Moon was the first chapter. Interlune and Astrolab are working on how to operate there.

Updated

March 6, 2026 1:32 AM

Apollo 17 Astronaut's Snapshot of Taurus-Littrow Valley. PHOTO: UNSPLASH

As plans for a long-term human presence on the Moon pick up pace, the focus is shifting from landing there to working there. It is one thing to reach the surface. It is another to build roads, prepare sites and extract materials in a way that can support real activity.

That is where Interlune and Astrolab come in. Interlune is a space resources company. Astrolab builds planetary rovers. The two are now working together to mount Interlune’s lunar digging system onto Astrolab’s Flexible Logistics and Exploration (FLEX) rover. They have completed a concept study and are planning hardware testing in Houston.

The aim is straightforward: combine a rover that can move reliably across the Moon with equipment that can dig, collect and handle lunar soil. Interlune is focused on harvesting natural resources from the Moon, starting with helium-3. To do that at scale, the system cannot sit in one place. It has to move across the surface, handle dust and operate in harsh conditions. "Reliable, autonomous mobility is crucial to the Interlune harvesting system and broader lunar infrastructure development," said Rob Meyerson, co-founder and CEO of Interlune. "Astrolab's FLEX is the right vehicle for the job."

By fitting its digging and collection hardware onto FLEX, Interlune is working toward a mobile system that can gather large amounts of lunar soil and support future construction needs. Beyond helium-3, the same setup could help prepare base sites, level ground, build protective barriers and lay the groundwork for other structures. In simple terms, it is about turning a rover into a working machine for the Moon.

The partnership also connects to Interlune’s work with Vermeer Corporation to develop equipment for continuous, high-volume digging adapted to lunar conditions. Taken together, the goal is to build systems that can support both commercial and government missions — whether that means resource extraction or preparing land for future bases.

For Astrolab, the collaboration strengthens the role of FLEX as more than just a transport vehicle.

"Working with Interlune further differentiates FLEX as the rover of choice for commercial and government Moon missions", said Jaret Matthews, Astrolab founder and CEO. "Interlune's expertise in developing and testing highly specialized regolith simulant will further enhance FLEX's ability to mitigate dust and operate in extreme environments".

Testing will be centered in Houston, which is becoming an important hub for commercial space development. Astrolab was the first company to lease space at the Texas A&M University Space Institute, currently under construction at NASA’s Johnson Space Center. Interlune operates the Houston-based Interlune Research Lab, where it creates and tests simulated versions of lunar soil.

That detail matters. Moon dust is fine, abrasive and difficult to manage. Before any hardware flies, it needs to prove it can survive and function in those conditions. By testing their systems in realistic soil simulants, the companies can refine how the rover moves and how the digging system performs.

The Houston lab is partially funded by the Texas Space Commission, reflecting the growing role of regional space initiatives in supporting private companies building beyond Earth. Overall, the collaboration is not about grand promises. It is about integrating hardware, running real tests and taking practical steps toward operating on the Moon.  

Keep Reading

Artificial Intelligence

The Real Cost of Scaling AI: How Supermicro and NVIDIA Are Rebuilding Data Center Infrastructure

The hidden cost of scaling AI: infrastructure, energy, and the push for liquid cooling.

Updated

January 8, 2026 6:31 PM

The inside of a data center, with rows of server racks. PHOTO: FREEPIK

As artificial intelligence models grow larger and more demanding, the quiet pressure point isn’t the algorithms themselves—it’s the AI infrastructure that has to run them. Training and deploying modern AI models now requires enormous amounts of computing power, which creates a different kind of challenge: heat, energy use and space inside data centers. This is the context in which Supermicro and NVIDIA’s collaboration on AI infrastructure begins to matter.

Supermicro designs and builds large-scale computing systems for data centers. It has now expanded its support for NVIDIA’s Blackwell generation of AI chips with new liquid-cooled server platforms built around the NVIDIA HGX B300. The announcement isn’t just about faster hardware. It reflects a broader effort to rethink how AI data center infrastructure is built as facilities strain under rising power and cooling demands.

At a basic level, the systems are designed to pack more AI chips into less space while using less energy to keep them running. Instead of relying mainly on air cooling (fans, chillers and large amounts of electricity), these liquid-cooled AI servers circulate liquid directly across critical components. That approach removes heat more efficiently, allowing servers to run denser AI workloads without overheating or wasting energy.
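A rough comparison shows why liquid is so much better at carrying heat away. The Python sketch below estimates the coolant flow each approach needs to remove the same heat load, using textbook fluid properties and an assumed 1 kW server load with a 10 K coolant temperature rise; these figures are illustrative assumptions, not numbers from the announcement.

```python
# Back-of-envelope comparison: coolant flow needed to remove a fixed heat
# load, from Q = rho * V_dot * c_p * dT. Fluid properties are textbook
# values near room temperature; the 1 kW load and 10 K temperature rise
# are illustrative assumptions, not vendor specifications.

def flow_needed_l_per_s(heat_w: float, rho: float, c_p: float, delta_t: float) -> float:
    """Volumetric flow (litres per second) required to carry away heat_w watts."""
    return heat_w / (rho * c_p * delta_t) * 1000  # convert m^3/s to L/s

HEAT_LOAD_W = 1_000  # assumed heat output of one dense server
DELTA_T_K = 10       # assumed coolant temperature rise across the server

air = flow_needed_l_per_s(HEAT_LOAD_W, rho=1.2, c_p=1005, delta_t=DELTA_T_K)
water = flow_needed_l_per_s(HEAT_LOAD_W, rho=998, c_p=4186, delta_t=DELTA_T_K)

print(f"Air:   {air:,.1f} L/s")       # ~83 L/s of airflow
print(f"Water: {water:.3f} L/s")      # ~0.024 L/s, roughly 1.4 L/min
print(f"Ratio: {air / water:,.0f}x")  # water moves ~3,500x more heat per litre
```

The gap comes from water's far higher volumetric heat capacity: each litre of water absorbs thousands of times more heat than a litre of air for the same temperature rise, which is why a thin coolant loop can replace a wall of fans.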

Why does that matter outside a data center? Because AI doesn’t scale in isolation. As models become more complex, the cost of running them rises quickly, not just in hardware budgets, but in electricity use, water consumption and physical footprint. Traditional air cooling is increasingly a bottleneck, limiting how far AI systems can grow before energy and infrastructure costs spiral.
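Back-of-envelope arithmetic shows how quickly those operating costs compound. The sketch below uses entirely illustrative figures (a 100 kW rack, assumed cooling-overhead multipliers for air versus liquid, and a $0.10/kWh electricity price; none come from Supermicro or NVIDIA) to estimate annual power spend for a single rack.

```python
# Rough annual electricity estimate for one GPU rack, illustrating how
# operating cost scales with power draw and cooling overhead (PUE).
# All numbers are illustrative assumptions, not vendor figures.

RACK_POWER_KW = 100    # assumed IT load of a dense GPU rack
PUE_AIR = 1.5          # assumed power usage effectiveness with air cooling
PUE_LIQUID = 1.1       # assumed PUE with direct liquid cooling
PRICE_PER_KWH = 0.10   # assumed industrial electricity price, USD
HOURS_PER_YEAR = 24 * 365

def annual_cost(pue: float) -> float:
    """Yearly electricity cost: IT load times cooling overhead times price."""
    return RACK_POWER_KW * pue * HOURS_PER_YEAR * PRICE_PER_KWH

for label, pue in [("air-cooled", PUE_AIR), ("liquid-cooled", PUE_LIQUID)]:
    print(f"{label}: ${annual_cost(pue):,.0f}/year")
# air-cooled:    ~$131,400/year
# liquid-cooled: ~$96,360/year per rack, before any other savings
```

Multiply that per-rack difference across the thousands of racks in a hyperscale facility and the cooling method becomes a first-order line item, not an afterthought.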

This is where the Supermicro–NVIDIA partnership fits in. NVIDIA supplies the computing engines—the Blackwell-based GPUs designed to handle massive AI workloads. Supermicro focuses on how those chips are deployed in the real world: how many GPUs can fit in a rack, how they are cooled, how quickly systems can be assembled and how reliably they can operate at scale in modern data centers. Together, the goal is to make high-density AI computing more practical, not just more powerful.

The new liquid-cooled designs are aimed at hyperscale data centers and so-called AI factories, facilities built specifically to train and run large AI models continuously. By increasing GPU density per rack and removing most of the heat through liquid cooling, these systems aim to ease a growing tension in the AI boom: the need for more compute without an equally dramatic rise in energy use and waste heat.

Just as important is speed. Large organizations don’t want to spend months stitching together custom AI infrastructure. Supermicro’s approach packages compute, networking and cooling into pre-validated data center building blocks that can be deployed faster. In a world where AI capabilities are advancing rapidly, time to deployment can matter as much as raw performance.

Stepping back, this development says less about one product launch and more about a shift in priorities across the AI industry. The next phase of AI growth isn’t only about smarter models—it’s about whether the physical infrastructure powering AI can scale responsibly. Efficiency, power use and sustainability are becoming as critical as speed.