Inside the funding round driving the shift to intelligent construction fleets
Updated
February 7, 2026 2:12 PM

Aerial shot of an excavator. PHOTO: UNSPLASH
Bedrock Robotics has raised US$270 million in Series B funding as it works to integrate greater automation into the construction industry. The round, co-led by CapitalG and the Valor Atreides AI Fund, values the San Francisco-based company at US$1.75 billion, bringing its total funding to more than US$350 million.
The size of the investment reflects growing interest in technologies that can change how large infrastructure and industrial projects are built. Bedrock is not trying to reinvent construction from scratch. Instead, it is focused on upgrading the machines contractors already use—so they can work more efficiently, safely and consistently.
Founded in 2024 by former Waymo engineers, Bedrock develops systems that allow heavy equipment to operate with increasing levels of autonomy. Its software and hardware can be retrofitted onto machines such as excavators, bulldozers and loaders. Rather than relying on one-off robotic tools, the company is building a connected platform that lets fleets of machines understand their surroundings and coordinate with one another on job sites.
This is what Bedrock calls “system-level autonomy”. Its technology combines cameras, lidar and AI models to help machines perceive terrain, detect obstacles, track work progress and carry out tasks like digging and grading with precision. Human supervisors remain in control, monitoring operations and stepping in when needed. Over time, Bedrock aims to reduce the amount of direct intervention those machines require.
The funding comes as contractors face rising pressure to deliver projects faster and with fewer available workers. In the press release, Bedrock notes that the industry needs nearly 800,000 additional workers over the next two years and that project backlogs have grown to more than eight months. These constraints are pushing firms to explore new ways to keep sites productive without compromising safety or quality.
Bedrock argues that autonomy can help address those challenges, not by removing people from the equation but by allowing crews to supervise more equipment at once and reduce idle time. If machines can operate longer, with better awareness of their environment, sites can run more smoothly and with fewer disruptions.
The company has already started deploying its system in large-scale excavation work, including manufacturing and infrastructure projects. Contractors are using Bedrock’s platform to test how autonomous equipment can support real-world operations at scale, particularly in earthmoving tasks that demand precision and consistency.
From a business standpoint, the Series B funding will allow Bedrock to expand both its technology and its customer deployments. The company has also strengthened its leadership team with senior hires from Meta and Waymo, deepening its focus on AI evaluation, safety and operational growth. Bedrock says it is targeting its first fully operator-less excavator deployments with customers in 2026—a milestone for autonomy in complex construction equipment.
In that context, this round is not just about capital. It is about giving Bedrock the runway to prove that autonomous systems can move from controlled pilots into everyday use on job sites. The company is betting that the future of construction will be shaped less by individual machines and more by coordinated, intelligent systems that work alongside human crews.
The hidden cost of scaling AI: infrastructure, energy, and the push for liquid cooling
Updated
January 8, 2026 6:31 PM

The inside of a data center, with rows of server racks. PHOTO: FREEPIK
As artificial intelligence models grow larger and more demanding, the quiet pressure point isn’t the algorithms themselves—it’s the AI infrastructure that has to run them. Training and deploying modern AI models now requires enormous amounts of computing power, which creates a different kind of challenge: heat, energy use and space inside data centers. This is the context in which Supermicro and NVIDIA’s collaboration on AI infrastructure begins to matter.
Supermicro designs and builds large-scale computing systems for data centers. It has now expanded its support for NVIDIA’s Blackwell generation of AI chips with new liquid-cooled server platforms built around the NVIDIA HGX B300. The announcement isn’t just about faster hardware. It reflects a broader effort to rethink how AI data center infrastructure is built as facilities strain under rising power and cooling demands.
At a basic level, the systems are designed to pack more AI chips into less space while using less energy to keep them running. Instead of relying mainly on air cooling (fans, chillers and large amounts of electricity), these liquid-cooled AI servers circulate liquid directly across critical components. That approach removes heat more efficiently, allowing servers to run denser AI workloads without overheating or wasting energy.
Why does that matter outside a data center? Because AI doesn't scale in isolation. As models become more complex, the cost of running them rises quickly, not just in hardware budgets but in electricity use, water consumption and physical footprint. Traditional air cooling is becoming a bottleneck, limiting how far AI systems can grow before energy and infrastructure costs spiral.
This is where the Supermicro–NVIDIA partnership fits in. NVIDIA supplies the computing engines—the Blackwell-based GPUs designed to handle massive AI workloads. Supermicro focuses on how those chips are deployed in the real world: how many GPUs can fit in a rack, how they are cooled, how quickly systems can be assembled and how reliably they can operate at scale in modern data centers. Together, the goal is to make high-density AI computing more practical, not just more powerful.
The new liquid-cooled designs are aimed at hyperscale data centers and so-called AI factories—facilities built specifically to train and run large AI models continuously. By increasing GPU density per rack and removing most of the heat through liquid cooling, these systems aim to ease a growing tension in the AI boom: the need for more compute without an equally dramatic rise in energy waste.
Just as important is speed. Large organizations don’t want to spend months stitching together custom AI infrastructure. Supermicro’s approach packages compute, networking and cooling into pre-validated data center building blocks that can be deployed faster. In a world where AI capabilities are advancing rapidly, time to deployment can matter as much as raw performance.
Stepping back, this development says less about one product launch and more about a shift in priorities across the AI industry. The next phase of AI growth isn’t only about smarter models—it’s about whether the physical infrastructure powering AI can scale responsibly. Efficiency, power use and sustainability are becoming as critical as speed.