Funding & Deals

Bedrock Robotics Hits US$1.75B Valuation Following US$270M Series B Funding

Inside the funding round driving the shift to intelligent construction fleets

Updated February 7, 2026 2:12 PM

Aerial shot of an excavator. PHOTO: UNSPLASH

Bedrock Robotics has raised US$270 million in Series B funding as it works to integrate greater automation into the construction industry. The round, co-led by CapitalG and the Valor Atreides AI Fund, values the San Francisco-based company at US$1.75 billion, bringing its total funding to more than US$350 million.

The size of the investment reflects growing interest in technologies that can change how large infrastructure and industrial projects are built. Bedrock is not trying to reinvent construction from scratch. Instead, it is focused on upgrading the machines contractors already use so that they can work more efficiently, safely and consistently.

Founded in 2024 by former Waymo engineers, Bedrock develops systems that allow heavy equipment to operate with increasing levels of autonomy. Its software and hardware can be retrofitted onto machines such as excavators, bulldozers and loaders. Rather than relying on one-off robotic tools, the company is building a connected platform that lets fleets of machines understand their surroundings and coordinate with one another on job sites.

This is what Bedrock calls “system-level autonomy”. Its technology combines cameras, lidar and AI models to help machines perceive terrain, detect obstacles, track work progress and carry out tasks like digging and grading with precision. Human supervisors remain in control, monitoring operations and stepping in when needed. Over time, Bedrock aims to reduce the amount of direct intervention those machines require.

The funding comes as contractors face rising pressure to deliver projects faster and with fewer available workers. In the press release, Bedrock notes that the industry needs nearly 800,000 additional workers over the next two years and that project backlogs have grown to more than eight months. These constraints are pushing firms to explore new ways to keep sites productive without compromising safety or quality.

Bedrock states that autonomy can help address those challenges, not by removing people from the equation, but by allowing crews to supervise more equipment at once and reducing idle time. If machines can operate longer, with better awareness of their environment, sites can run more smoothly and with fewer disruptions.

The company has already started deploying its system in large-scale excavation work, including manufacturing and infrastructure projects. Contractors are using Bedrock’s platform to test how autonomous equipment can support real-world operations at scale, particularly in earthmoving tasks that demand precision and consistency.

From a business standpoint, the Series B funding will allow Bedrock to expand both its technology and its customer deployments. The company has also strengthened its leadership team with senior hires from Meta and Waymo, deepening its focus on AI evaluation, safety and operational growth. Bedrock says it is targeting its first fully operator-less excavator deployments with customers in 2026—a milestone for autonomy in complex construction equipment.

In that context, this round is not just about capital. It is about giving Bedrock the runway to prove that autonomous systems can move from controlled pilots into everyday use on job sites. The company is betting that the future of construction will be shaped less by individual machines and more by coordinated, intelligent systems that work alongside human crews.


Artificial Intelligence

What Happens When AI Writes the Wrong References?

HKU professor apologizes after PhD student’s AI-assisted paper cites fabricated sources.

Updated January 8, 2026 6:33 PM

The University of Hong Kong in Pok Fu Lam, Hong Kong Island. PHOTO: ADOBE STOCK

It’s no surprise that artificial intelligence, while remarkably capable, can also go astray—spinning convincing but entirely fabricated narratives. From politics to academia, AI’s “hallucinations” have repeatedly shown how powerful technology can go off-script when left unchecked.

Take Grok-2, for instance. In July 2024, the chatbot misled users about ballot deadlines in several U.S. states, just days after President Joe Biden dropped his re-election bid against former President Donald Trump. A year earlier, a U.S. lawyer found himself in court for relying on ChatGPT to draft a legal brief—only to discover that the AI tool had invented entire cases, citations and judicial opinions. And now, the academic world has its own cautionary tale.

Recently, a journal paper from the Department of Social Work and Social Administration at the University of Hong Kong was found to contain fabricated citations: sources apparently created by AI. The paper, titled “Forty Years of Fertility Transition in Hong Kong,” analyzed the decline in Hong Kong’s fertility rate over the past four decades. Authored by doctoral student Yiming Bai, along with Yip Siu-fai, Vice Dean of the Faculty of Social Sciences, and other university officials, the study identified falling marriage rates as a key driver behind the city’s shrinking birth rate. The authors recommended structural reforms to make Hong Kong’s social and work environment more family-friendly.

But the credibility of the paper came into question when inconsistencies surfaced among its references. Out of 61 cited works, some included DOI (Digital Object Identifier) links that led to dead ends, displaying “DOI Not Found.” Others claimed to originate from academic journals, yet searches yielded no such publications.
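For readers curious how such dead ends surface, the short Python sketch below shows one way to spot-check a reference list: it asks the public doi.org resolver whether each DOI redirects to a publisher page, and a 404 corresponds to the “DOI Not Found” message described above. This is a minimal illustration, not the process the journal or the authors used, and the second DOI in the example is a deliberately fabricated placeholder.

import requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    # The doi.org resolver answers a valid DOI with a 3xx redirect to the
    # publisher's page; an unknown DOI returns 404 ("DOI Not Found").
    resp = requests.head(
        f"https://doi.org/{doi}",
        allow_redirects=False,
        timeout=timeout,
    )
    return resp.status_code in (301, 302, 303, 307, 308)

# "10.1000/182" is the DOI of the DOI Handbook and resolves;
# the second identifier is a made-up placeholder and does not.
for doi in ["10.1000/182", "10.9999/fabricated.2025.001"]:
    status = "resolves" if doi_resolves(doi) else "DOI Not Found"
    print(f"{doi}: {status}")

A check like this catches dead links but not the subtler failure mode seen here, where a DOI or title is plausible yet belongs to no real publication, so manual verification against the actual sources remains essential.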

Speaking to HK01, Yip acknowledged that his student had used AI tools to organize the citations but failed to verify the accuracy of the generated references. “As the corresponding author, I bear responsibility,” Yip said, apologizing for the damage caused to the University of Hong Kong and the journal’s reputation. He clarified that the paper itself had undergone two rounds of verification and that its content was not fabricated; only the citations had been mishandled.

Yip has since contacted the journal’s editor, who accepted his explanation and agreed to re-upload a corrected version in the coming days. A formal notice addressing the issue will also be released. Yip said he would personally review each citation “piece by piece” to ensure no errors remain.

As for the student involved, Yip described her as a diligent and high-performing researcher who made an honest mistake in her first attempt at using AI for academic assistance. Rather than penalize her, Yip chose a more constructive approach, urging her to take a course on how to use AI tools responsibly in academic research.

Ultimately, in an age where generative AI can produce everything from essays to legal arguments, there are two lessons to take away from this episode. First, AI is a powerful assistant, but only that. The final judgment must always rest with us. No matter how seamless the output seems, cross-checking and verifying information remain essential. Second, as AI becomes integral to academic and professional life, institutions must equip students and employees with the skills to use it responsibly. Training and mentorship are no longer optional; they’re the foundation for using AI to enhance, not undermine, human work.

Because in this age of intelligent machines, staying relevant isn’t about replacing human judgment with AI; it’s about learning how to work alongside it.