Funding & Deals

Bedrock Robotics Hits US$1.75B Valuation Following US$270M Series B Funding

Inside the funding round driving the shift to intelligent construction fleets

Updated February 7, 2026 2:12 PM

Aerial shot of an excavator. PHOTO: UNSPLASH

Bedrock Robotics has raised US$270 million in Series B funding as it works to integrate greater automation into the construction industry. The round, co-led by CapitalG and the Valor Atreides AI Fund, values the San Francisco-based company at US$1.75 billion, bringing its total funding to more than US$350 million.

The size of the investment reflects growing interest in technologies that can change how large infrastructure and industrial projects are built. Bedrock is not trying to reinvent construction from scratch. Instead, it is focused on upgrading the machines contractors already use—so they can work more efficiently, safely and consistently.

Founded in 2024 by former Waymo engineers, Bedrock develops systems that allow heavy equipment to operate with increasing levels of autonomy. Its software and hardware can be retrofitted onto machines such as excavators, bulldozers and loaders. Rather than relying on one-off robotic tools, the company is building a connected platform that lets fleets of machines understand their surroundings and coordinate with one another on job sites.

This is what Bedrock calls “system-level autonomy”. Its technology combines cameras, lidar and AI models to help machines perceive terrain, detect obstacles, track work progress and carry out tasks like digging and grading with precision. Human supervisors remain in control, monitoring operations and stepping in when needed. Over time, Bedrock aims to reduce the amount of direct intervention those machines require.
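
For readers who want a concrete picture, the sketch below shows, in simplified Python, what a supervised perceive-plan-act loop of this kind could look like. Every name in it is an illustrative assumption; Bedrock has not published its software, and this is a pattern sketch, not the company's actual system.

```python
# Illustrative sketch of a supervised-autonomy loop for a retrofitted
# excavator. All names are hypothetical; Bedrock's real stack is not public.
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    AUTONOMOUS = auto()   # machine executes the planned task
    PAUSED = auto()       # human supervisor has taken over


@dataclass
class Perception:
    obstacle_detected: bool   # fused from cameras and lidar
    task_progress: float      # 0.0-1.0, e.g. fraction of a trench dug


def control_loop(sensors, planner, actuators, supervisor):
    """Run one machine until the task completes or a human pauses it."""
    mode = Mode.AUTONOMOUS
    while True:
        view: Perception = sensors.fuse()      # cameras + lidar -> world model

        # The human supervisor can always interrupt, and the machine
        # stops itself when it perceives an obstacle.
        if supervisor.pause_requested() or view.obstacle_detected:
            actuators.stop()
            mode = Mode.PAUSED

        if mode is Mode.PAUSED:
            if supervisor.resume_approved():   # human clears the machine
                mode = Mode.AUTONOMOUS
            continue

        if view.task_progress >= 1.0:          # e.g. grading pass finished
            actuators.stop()
            return

        command = planner.next_action(view)    # dig, grade, move, ...
        actuators.apply(command)
```

The design point worth noticing is the supervisor channel: the loop never removes the human, it just changes how many machines one person can watch.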

The funding comes as contractors face rising pressure to deliver projects faster and with fewer available workers. In the press release, Bedrock notes that the industry needs nearly 800,000 additional workers over the next two years and that project backlogs have grown to more than eight months. These constraints are pushing firms to explore new ways to keep sites productive without compromising safety or quality.

Bedrock says autonomy can help address those challenges, not by removing people from the equation but by allowing crews to supervise more equipment at once and reduce idle time. If machines can operate longer, with better awareness of their environment, sites can run more smoothly and with fewer disruptions.

The company has already started deploying its system in large-scale excavation work, including manufacturing and infrastructure projects. Contractors are using Bedrock’s platform to test how autonomous equipment can support real-world operations at scale, particularly in earthmoving tasks that demand precision and consistency.

From a business standpoint, the Series B funding will allow Bedrock to expand both its technology and its customer deployments. The company has also strengthened its leadership team with senior hires from Meta and Waymo, deepening its focus on AI evaluation, safety and operational growth. Bedrock says it is targeting its first fully operator-less excavator deployments with customers in 2026—a milestone for autonomy in complex construction equipment.

In that context, this round is not just about capital. It gives Bedrock the runway to prove that autonomous systems can move from controlled pilots into everyday use on job sites. The company is betting that the future of construction will be shaped less by individual machines and more by coordinated, intelligent systems that work alongside human crews.


Deep Tech

Meta’s Hypernova Smart Glasses: Features, Price & What to Expect

At under US$1,000, Hypernova isn’t just eyewear—it’s Meta’s push to make AR feel ordinary.

Updated January 8, 2026 6:34 PM

Closeup of the Ray-Ban logo and the built-in ultra-wide 12 MP camera on a pair of new Ray-Ban Meta Wayfarer smart glasses. PHOTO: ADOBE STOCK

Meta is preparing to launch its next big wearable: the Hypernova smart glasses. Unlike earlier experiments such as the Ray-Ban Stories, the new glasses promise more advanced features at a price under US$1,000. With a launch set for September 17 at Meta’s annual Connect conference, the Hypernova is already drawing attention for blending design, technology and accessibility.

In this article, let’s take a closer look at Hypernova’s design, features, pricing and the challenges Meta faces as it tries to bring smart glasses into everyday life.

Why Hypernova matters

Meta’s earlier Ray-Ban glasses offered cameras and audio but no display. Hypernova changes that: The glasses will ship with a built-in micro-display, giving wearers quick access to maps, messages, notifications and even Meta’s AI assistant. It’s a step toward everyday AR that feels useful and natural, not experimental.

Perhaps most importantly, the price makes them attainable. While early estimates placed the cost above US$1,000, Meta has committed to a launch price of around US$800. That’s still premium, but it moves AR smart glasses into reach for more consumers.  

Design and build

Hypernova weighs about 70 grams, roughly 20 grams heavier than the Ray-Ban Meta models. The extra weight likely comes from new components such as the display and additional sensors.

To keep the glasses stylish, Meta continues its partnership with EssilorLuxottica, the company behind Ray-Ban and Prada eyewear. Thicker frames—especially Prada’s designs—help hide the hardware like chips, microphones and batteries without making the glasses look oversized.

The glasses stick close to the classic Ray-Ban silhouette but feature slightly bulkier arms. On the left side, a touch-sensitive bar lets users control functions with taps and swipes. For example, a two-finger tap can trigger a photo or start video recording.
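
To illustrate the interaction model, here is a hypothetical sketch of how a touch bar's events might be mapped to actions. The event types and action names are assumptions for illustration, not Meta's actual firmware or SDK.

```python
# Hypothetical touch-bar event dispatch; not Meta's actual firmware.
from dataclasses import dataclass


@dataclass(frozen=True)
class TouchEvent:
    gesture: str          # "tap" or "swipe"
    fingers: int          # number of fingers on the bar
    direction: str = ""   # "forward" or "backward" for swipes


def handle(event: TouchEvent) -> str:
    """Map a touch-bar event to a device action (illustrative only)."""
    if event.gesture == "tap" and event.fingers == 2:
        return "capture_photo"   # the two-finger tap from the article
    if event.gesture == "swipe" and event.fingers == 1:
        # e.g. scroll through notifications on the in-lens display
        return "scroll_forward" if event.direction == "forward" else "scroll_backward"
    return "ignore"


print(handle(TouchEvent("tap", fingers=2)))   # -> capture_photo
```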

Expected features of Hypernova

Integrated display:

Hypernova introduces something the earlier Ray-Ban glasses never had: a display built right into the lens. In the bottom-right corner of the right lens, a small micro-screen uses waveguide optics to project a digital overlay with about a 20° field of view. This means you can glance at turn-by-turn directions, check a notification or quickly consult Meta’s AI assistant without pulling out your phone. It’s discreet, practical and a major step up from the older models, which were limited to capturing photos and videos, handling calls and playing music via speakers.  
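
To put the 20° figure in perspective, a little trigonometry gives the apparent size of the overlay. The snippet below assumes the image is focused at a virtual distance of about 2 meters, a number chosen purely for illustration; Meta has not published the optics' focal distance.

```python
import math

# Apparent width of a 20-degree-wide overlay focused at a virtual
# distance of 2 m (the distance is an illustrative assumption).
fov_deg = 20.0
virtual_distance_m = 2.0

width_m = 2 * virtual_distance_m * math.tan(math.radians(fov_deg / 2))
print(f"{width_m:.2f} m")  # ~0.71 m
```

That works out to roughly the width of a 32-inch monitor floating two meters away, a useful mental model for "glanceable" rather than immersive AR.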

Gesture controls with neural wristband:  

Alongside the glasses comes the Ceres wristband, a companion device powered by electromyography (EMG). The band picks up the tiny electrical signals in your wrist and fingers, translating them into commands. A pinch might let you select something, a wrist flick could scroll a page, and a swipe could move between screens. The idea is to avoid clunky buttons or having to talk to your glasses in public. Meta has also been experimenting with handwriting recognition through the band, though it’s not clear if that feature will be ready in time for launch.  
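
As a loose illustration of the idea behind EMG input, the toy pipeline below measures the activity on a few simulated signal channels and maps the strongest gesture to a command. Real EMG decoding relies on calibrated, learned models; the channel layout, threshold and names here are assumptions, not anything from Meta.

```python
# Toy EMG-to-command pipeline; illustrative only, not Meta's Ceres firmware.
# Real systems use calibrated, learned decoders rather than fixed thresholds.
import math

COMMANDS = {
    "pinch": "select",        # pinch -> select, per the article's example
    "wrist_flick": "scroll",  # wrist flick -> scroll a page
    "swipe": "next_screen",   # swipe -> move between screens
}


def rms(window):
    """Root-mean-square amplitude of one EMG channel window."""
    return math.sqrt(sum(x * x for x in window) / len(window))


def classify(pinch_ch, flick_ch, swipe_ch, threshold=0.5):
    """Pick the most active channel above threshold (a stand-in for a model)."""
    levels = {
        "pinch": rms(pinch_ch),
        "wrist_flick": rms(flick_ch),
        "swipe": rms(swipe_ch),
    }
    gesture, level = max(levels.items(), key=lambda kv: kv[1])
    return gesture if level >= threshold else None


gesture = classify(pinch_ch=[0.8, 0.9, 0.7], flick_ch=[0.1] * 3, swipe_ch=[0.2] * 3)
if gesture:
    print(COMMANDS[gesture])  # -> select
```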

Built-in gaming:  

Meta doesn’t just want Hypernova to be useful; it also wants it to be fun. Code found in leaked firmware revealed a small game called Hypertrail, which appears to borrow ideas from the 1981 arcade shooter Galaga, letting wearers play a simple, retro-inspired game right through their glasses. It’s not the main attraction, but it shows Meta is trying to make Hypernova feel more like a playful everyday gadget than a piece of serious tech.

App ecosystem:  

Hypernova runs on a customized version of Android and pairs with smartphones through the Meta View app. Out of the box, it should support the basics: calls, music and message notifications. Leaks suggest several apps will come preinstalled, including Camera, Gallery, Maps, WhatsApp, Messenger and Meta AI. A Qualcomm processor powers the whole setup, helping it run smoothly while keeping energy demands reasonable.  

Meta is also trying to bring in outside developers. In August 2025, CNBC reported that the company invited third-party developers—especially in generative AI—to build experimental apps for Hypernova and the Ceres wristband. The Meta Connect 2025 agenda even highlights sessions on a new smart glasses SDK and toolkit. The push shows Meta’s interest in making Hypernova more than just a device; it wants a broader platform with apps that go beyond its own first-party software.  

Pricing strategy: Why under US$1,000 matters

During development, Hypernova was rumored to cost as much as US$1,400. By pricing it around US$800, Meta signals that it wants adoption more than profit. The company is keeping production limited (around 150,000 units), showing it sees this as a market test rather than a mass rollout. Still, the sub-US$1,000 price tag makes advanced AR far more accessible than before.

Challenges ahead

Despite its promise, Hypernova may still face hurdles. The Ceres wristband can struggle if worn loosely, and some testers have reported issues depending on which arm it’s worn on, or even when long sleeves cover the band. In short, getting EMG input right for everyone will be critical.

Privacy is another major concern. In past experiments, researchers hacked Ray-Ban Meta glasses to run facial recognition, instantly identifying strangers and pulling personal info. Meta has added guidelines, like a recording indicator light, but critics argue these measures are too easy to ignore. Moreover, data captured by smart glasses can feed into AI training, raising questions about consent and surveillance.

The bottom line

The Meta Hypernova smart glasses mark a turning point in wearable tech. They’re lighter and more stylish than bulky AR headsets, while offering real-world features like navigation, messaging and hands-free control. At under US$1,000, they aim to make AR glasses more than a luxury gadget—they’re a step toward everyday use.

Whether Hypernova succeeds will depend on how well it balances style, usability and privacy. But one thing is clear: Meta is betting that always-on, glanceable AR can move from science fiction to daily life.