Deep Tech

Meta’s Hypernova Smart Glasses: Features, Price & What to Expect

At under US$1,000, Hypernova isn’t just eyewear—it’s Meta’s push to make AR feel ordinary.

Updated

January 8, 2026 6:34 PM

Closeup of the Ray-Ban logo and the built-in ultra-wide 12 MP camera on a pair of new Ray-Ban Meta Wayfarer smart glasses. PHOTO: ADOBE STOCK

Meta is preparing to launch its next big wearable: the Hypernova smart glasses. Unlike earlier experiments like the Ray-Ban Stories, these new glasses promise more advanced features at a price point under US$1,000. With a launch set for September 17 at Meta’s annual Connect conference, the Hypernova is already drawing attention for blending design, technology and accessibility.  

In this article, let’s take a closer look at Hypernova’s design, features, pricing and the challenges Meta faces as it tries to bring smart glasses into everyday life.

Why Hypernova matters

Meta’s earlier Ray-Ban glasses offered cameras and audio but no display. Hypernova changes that: The glasses will ship with a built-in micro-display, giving wearers quick access to maps, messages, notifications and even Meta’s AI assistant. It’s a step toward everyday AR that feels useful and natural, not experimental.

Perhaps most importantly, the price makes them attainable. While early estimates placed the cost above US$1,000, Meta has committed to a launch price of around US$800. That’s still premium, but it moves AR smart glasses into reach for more consumers.  

Design and build

Hypernova weighs about 70 grams, roughly 20 grams heavier than the Ray-Ban Meta models. The extra weight likely comes from new components such as the built-in display and additional sensors.

To keep the glasses stylish, Meta continues its partnership with EssilorLuxottica, the company behind Ray-Ban and Prada eyewear. Thicker frames—especially Prada’s designs—help hide hardware such as chips, microphones and batteries without making the glasses look oversized.

The glasses stick close to the classic Ray-Ban silhouette but feature slightly bulkier arms. On the left side, a touch-sensitive bar lets users control functions with taps and swipes. For example, a two-finger tap can trigger a photo or start video recording.

Expected features of Hypernova

Integrated display:

Hypernova introduces something the earlier Ray-Ban glasses never had: a display built right into the lens. In the bottom-right corner of the right lens, a small micro-screen uses waveguide optics to project a digital overlay with about a 20° field of view. This means you can glance at turn-by-turn directions, check a notification or quickly consult Meta’s AI assistant without pulling out your phone. It’s discreet, practical and a major step up from the older models, which were limited to capturing photos and videos, handling calls and playing music via speakers.  
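The display’s usefulness comes down to simple geometry: a 20° field of view corresponds to a virtual screen whose width is 2·d·tan(10°) at apparent distance d. Here is that math as a quick Python sketch; the 2-meter focal distance is an assumption for illustration, since Meta has not published the optics specifications.

```python
import math

def virtual_screen_width(fov_deg: float, distance_m: float) -> float:
    """Width of a virtual screen that subtends fov_deg at distance_m."""
    return 2 * distance_m * math.tan(math.radians(fov_deg / 2))

# Hypernova's reported ~20-degree field of view, rendered at an assumed
# 2 m apparent focal distance (an illustration, not a published spec):
# about 0.71 m wide, roughly a 32-inch monitor seen from two meters away.
print(f"{virtual_screen_width(20, 2.0):.2f} m")
```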

Gesture controls with neural wristband:  

Alongside the glasses comes the Ceres wristband, a companion device powered by electromyography (EMG). The band picks up the tiny electrical signals in your wrist and fingers, translating them into commands. A pinch might let you select something, a wrist flick could scroll a page, and a swipe could move between screens. The idea is to avoid clunky buttons or having to talk to your glasses in public. Meta has also been experimenting with handwriting recognition through the band, though it’s not clear if that feature will be ready in time for launch.  
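Meta has not said how Ceres decodes those signals, but a typical EMG pipeline is straightforward to sketch: band-pass filter the raw signal, rectify and smooth it into an amplitude envelope, then threshold or classify the result. The minimal sketch below assumes common textbook settings; none of the filter bands, window sizes or thresholds are Ceres specifications.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    """Band-pass raw EMG (20-450 Hz is a common choice), rectify it,
    then smooth with a 50 ms moving average into an amplitude envelope."""
    b, a = butter(4, [20 / (fs / 2), 450 / (fs / 2)], btype="band")
    rectified = np.abs(filtfilt(b, a, raw))
    window = int(0.05 * fs)
    return np.convolve(rectified, np.ones(window) / window, mode="same")

def looks_like_pinch(envelope: np.ndarray, threshold: float = 0.3) -> bool:
    """Crude stand-in for gesture detection: flag any excursion above a
    fixed threshold. A shipping product would train a per-user classifier."""
    return bool(np.max(envelope) > threshold)
```

The signal processing is the easy part; the robustness problems described later in this article live almost entirely in that final detection step.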

Built-in gaming:  

Meta doesn’t just want Hypernova to be useful—it wants it to be fun. Code found in leaked firmware revealed a small game called Hypertrail, which appears to borrow ideas from the 1981 arcade shooter Galaga and lets wearers play a simple, retro-inspired game right through their glasses. It’s not the main attraction, but it shows Meta is trying to make Hypernova feel like a playful everyday gadget rather than just a piece of serious tech.

App ecosystem:  

Hypernova runs on a customized version of Android and pairs with smartphones through the Meta View app. Out of the box, it should support the basics: calls, music and message notifications. Leaks suggest several apps will come preinstalled, including Camera, Gallery, Maps, WhatsApp, Messenger and Meta AI. A Qualcomm processor powers the whole setup, helping it run smoothly while keeping energy demands reasonable.  

Meta is also trying to bring in outside developers. In August 2025, CNBC reported that the company invited third-party developers—especially in generative AI—to build experimental apps for Hypernova and the Ceres wristband. The Meta Connect 2025 agenda even highlights sessions on a new smart glasses SDK and toolkit. The push shows Meta’s interest in making Hypernova more than just a device; it wants a broader platform with apps that go beyond its own first-party software.  

Pricing strategy: Why under US$1,000 matters

During development, Hypernova was rumored to cost as much as US$1,400. By pricing it around US$800, Meta signals that it wants adoption more than profit. The company is keeping production limited (around 150,000 units), showing it sees this as a market test rather than a mass rollout. Still, the sub-US$1,000 price tag makes advanced AR far more accessible than before.

Challenges ahead

Despite its promise, Hypernova may still face hurdles. The Ceres wristband can struggle if worn loosely, and some testers have reported that accuracy varies depending on which arm it is worn on, or even when it is worn over long sleeves. In short, getting EMG input right for everyone will be critical.

Privacy is another major concern. In past experiments, researchers hacked Ray-Ban Meta glasses to run facial recognition, instantly identifying strangers and pulling personal info. Meta has added guidelines, like a recording indicator light, but critics argue these measures are too easy to ignore. Moreover, data captured by smart glasses can feed into AI training, raising questions about consent and surveillance.

The bottom line

The Meta Hypernova smart glasses mark a turning point in wearable tech. They’re lighter and more stylish than bulky AR headsets, while offering real-world features like navigation, messaging and hands-free control. At under US$1,000, they aim to make AR glasses more than a luxury gadget—they’re a step toward everyday use.

Whether Hypernova succeeds will depend on how well it balances style, usability and privacy. But one thing is clear: Meta is betting that always-on, glanceable AR can move from science fiction to daily life.


Artificial Intelligence

X-Humanoid Introduces Tien Kung 3.0 as Deployment Challenges Persist in Humanoid Robotics

A closer look at the tech, AI, and open ecosystem behind Tien Kung 3.0’s real-world push

Updated

February 18, 2026 8:03 PM

Humanoid robots working in a warehouse. PHOTO: ADOBE STOCK

Humanoid robotics has advanced quickly in recent years. Machines can now walk, balance, and interact with their surroundings in ways that once seemed out of reach. Yet most deployments remain limited. Many robots perform well in controlled settings but struggle in real-world environments. Integration is often complex, hardware interfaces are closed, software tools are fragmented, and scaling across industries remains difficult.

Against this backdrop, X-Humanoid has introduced its latest general-purpose platform, Embodied Tien Kung 3.0. The company positions it not simply as another humanoid robot, but as a system designed to address the practical barriers that have slowed adoption, with a focus on openness and usability.

At the hardware level, Embodied Tien Kung 3.0 is built for mobility, strength, and stability. It is equipped with high-torque integrated joints that provide strong limb force for high-load applications. The company says it is the first full-size humanoid robot to achieve whole-body, high-dynamic motion control integrated with tactile interaction. In practice, this means the robot is designed to maintain balance and execute dynamic movements even in uneven or cluttered environments. It can clear one-meter obstacles, perform consecutive high-dynamic maneuvers, and carry out actions such as kneeling, bending, and turning with coordinated whole-body control.
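X-Humanoid has not published its control stack, so as a point of reference, the textbook starting point for whole-body balance is the linear inverted pendulum model, in which the controller steers the robot’s center of mass by commanding where the ground reaction force acts. A minimal sketch, with an assumed center-of-mass height and hand-tuned gains:

```python
import numpy as np

# Balance sketch based on the linear inverted pendulum model (LIPM),
# a textbook abstraction, not X-Humanoid's actual controller.
G = 9.81          # gravity, m/s^2
COM_HEIGHT = 0.9  # assumed center-of-mass height for a full-size humanoid, m
OMEGA_SQ = G / COM_HEIGHT

def desired_zmp(com: float, com_vel: float, com_ref: float,
                kp: float = 3.0, kd: float = 0.5) -> float:
    """PD law: place the zero-moment point (ZMP) so the center of mass
    accelerates back toward the reference. In the LIPM,
    com_accel = OMEGA_SQ * (com - zmp)."""
    com_accel_cmd = -kp * (com - com_ref) - kd * com_vel
    return com - com_accel_cmd / OMEGA_SQ

# One control tick: the CoM has drifted 5 cm ahead of the support center,
# so the controller shifts the ZMP forward to push it back.
print(f"ZMP command: {desired_zmp(com=0.05, com_vel=0.0, com_ref=0.0):.3f} m")
```

Real whole-body control layers joint-torque optimization and contact constraints on top of this, but the feedback structure is the same: measure, compare against a reference, and redirect forces.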

Precision is also a focus. Through multi-degree-of-freedom limb coordination and calibrated joint linkage, the system is designed to achieve millimeter-level operational accuracy. This level of control is intended to support industrial-grade tasks that require consistent performance and minimal error across changing conditions.
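The company does not explain how it reaches that figure, but the calibration problem is easy to quantify: end-effector error grows roughly as link length times joint-angle error, so millimeter-level accuracy leaves almost no room for uncalibrated joints. A back-of-the-envelope sketch with assumed link lengths:

```python
import numpy as np

# Planar 2-link arm: how joint-angle error propagates to fingertip error.
# Link lengths are illustrative, not Tien Kung 3.0 specifications.
L1, L2 = 0.35, 0.30  # meters

def fingertip(theta1: float, theta2: float) -> np.ndarray:
    """Forward kinematics for a planar two-link arm."""
    return np.array([
        L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2),
        L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2),
    ])

nominal = fingertip(0.4, 0.6)
err = 1e-3  # 1 milliradian of uncalibrated error on each joint
perturbed = fingertip(0.4 + err, 0.6 + err)
print(f"tip error: {np.linalg.norm(perturbed - nominal) * 1000:.2f} mm")
# ~0.9 mm: even tiny joint errors consume most of a millimeter budget.
```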

But hardware is only part of the equation. The company pairs the robot with its proprietary Wise KaiWu general-purpose embodied AI platform. This system supports perception, reasoning, and real-time control through what the company describes as a coordinated “brain–cerebellum” architecture. It establishes a continuous perception–decision–execution loop, allowing the robot to operate with greater autonomy and reduced reliance on remote control.

For higher-level cognition, Wise KaiWu incorporates components such as a world model and vision-language models (VLMs) to interpret visual scenes, understand language instructions, and break complex objectives into structured steps. For real-time execution, a vision-language-action (VLA) model and a fully autonomous navigation system manage obstacle avoidance and precise motion under variable conditions. The platform also supports multi-agent collaboration, enabling cross-platform compatibility, asynchronous task coordination, and centralized scheduling across multiple robots.
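In code, that division of labor maps naturally onto a plan-then-act loop: a slower “brain” turns an instruction and a scene into discrete steps, and a faster “cerebellum” closes the loop on each step. The structural sketch below is illustrative only; the class and method names are invented, not Wise KaiWu interfaces.

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str  # e.g., "navigate", "grasp"
    target: str  # e.g., "shelf_3", "outbound_bin"

class BrainVLM:
    """Slow loop: world model plus vision-language reasoning.
    Stand-in for the high-level planner; a real system would call a VLM."""
    def plan(self, instruction: str, scene: dict) -> list[Step]:
        # Toy decomposition: a fetch task becomes navigate/grasp/deliver.
        obj, dest = scene["object"], scene["destination"]
        return [Step("navigate", obj), Step("grasp", obj),
                Step("navigate", dest), Step("release", dest)]

class CerebellumVLA:
    """Fast loop: reactive execution with obstacle avoidance.
    Stand-in for the low-level vision-language-action controller."""
    def execute(self, step: Step) -> bool:
        print(f"executing {step.action} -> {step.target}")
        return True  # a real controller reports success or failure per step

def run(instruction: str, scene: dict) -> None:
    brain, cerebellum = BrainVLM(), CerebellumVLA()
    for step in brain.plan(instruction, scene):
        if not cerebellum.execute(step):
            break  # replanning would happen here

run("move the part to the outbound bin",
    {"object": "part_42", "destination": "outbound_bin"})
```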

A central part of the platform is openness. The company states that the system is designed to address compatibility and adaptation challenges across both development and deployment layers. On the hardware side, Embodied Tien Kung 3.0 includes multiple expansion interfaces that support different end-effectors and tools, allowing faster adaptation to industrial manufacturing, specialized operations, and commercial service scenarios. On the software side, the Wise KaiWu ecosystem provides documentation, toolchains, and a low-code development environment. It supports widely adopted communication standards, including ROS2, MQTT, and TCP/IP, enabling partners to customize applications without rebuilding core systems.
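Support for standards like MQTT means ordinary off-the-shelf clients can send the robot work without vendor tooling. A minimal sketch using the widely used paho-mqtt library follows; the broker address, topic name and message schema are hypothetical, since X-Humanoid has not published its topic layout.

```python
import json
import paho.mqtt.client as mqtt

# Hypothetical gateway and topic; any MQTT-capable scheduler could do this.
BROKER, TOPIC = "robot-gateway.local", "tienkung/tasks"

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a callback API version
client.connect(BROKER, 1883)

# Publish one task command as JSON for the on-robot scheduler to consume.
task = {"robot_id": "tk3-01", "action": "pick", "target": "tray_7"}
client.publish(TOPIC, json.dumps(task), qos=1)
client.disconnect()
```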

The company also highlights its open-source approach. X-Humanoid has open-sourced key components from the Embodied Tien Kung and Wise KaiWu platforms, including the robot body architecture, motion control framework, world model, embodied VLM and cross-ontology VLA models, training toolchains, the RoboMIND dataset, and the ArtVIP simulation asset library. By opening access to these elements, the company aims to reduce development costs, lower technical barriers, and encourage broader participation from researchers, universities, and enterprises.

Embodied Tien Kung 3.0 enters a market where technical progress is visible but large-scale adoption remains uneven. The gap is not only about movement or strength. It is about integration, interoperability, and the ability to operate reliably and autonomously in everyday industrial and commercial settings. If platforms can reduce fragmentation and simplify deployment, humanoid robots may move beyond demonstrations and into sustained commercial use.

In that sense, the significance of Embodied Tien Kung 3.0 lies less in isolated technical claims and more in how its high-dynamic hardware, embodied AI system, open interfaces, and collaborative architecture are structured to work together. Whether that integrated approach can close the deployment gap will shape how quickly humanoid robotics becomes part of real-world operations.