Artificial Intelligence

Rokid Glasses Get Smarter: Gemini and ChatGPT Bring AI to AR Eyewear Worldwide

AI meets AR: How Rokid Glasses bring multilingual, real-time intelligence to smart eyewear globally

Updated

March 3, 2026 3:50 PM

Rokid's smart glasses. PHOTO: ROKID

Rokid, a Chinese company specializing in AI-powered smart eyewear and human–computer interaction, has rolled out a major software update for the international version of its Rokid Glasses. The update makes Rokid the first smart glasses manufacturer to natively support Google’s Gemini, alongside three other leading large language models: OpenAI’s ChatGPT, Alibaba’s Qwen and DeepSeek.

The integration is powered by Rokid’s device-to-cloud architecture, which enables users to switch between AI models on the fly. In practice, this means a traveler can receive a real-time translation in Japanese using one AI model, then quickly switch to ChatGPT to answer a technical query—without noticeable delay. The system also supports multi-modal inputs like voice and gestures, making interactions more intuitive for everyday use.
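At its core, on-the-fly model switching is a routing problem: each request from the glasses is dispatched to whichever cloud backend the user has selected. A minimal sketch of that idea, assuming a simple dispatch table (all names here, including `route_request` and the stubbed backends, are illustrative and not Rokid’s actual API):

```python
# Hypothetical sketch of device-to-cloud model routing. Each backend is stubbed
# as a function that would, in a real system, call that provider's cloud API.
BACKENDS = {
    "gemini":   lambda prompt: f"[gemini] {prompt}",
    "chatgpt":  lambda prompt: f"[chatgpt] {prompt}",
    "qwen":     lambda prompt: f"[qwen] {prompt}",
    "deepseek": lambda prompt: f"[deepseek] {prompt}",
}

def route_request(model: str, prompt: str) -> str:
    """Dispatch a prompt to the selected backend, falling back to a default."""
    handler = BACKENDS.get(model, BACKENDS["chatgpt"])
    return handler(prompt)

# Switching models is just changing the key on the next request:
print(route_request("gemini", "Translate 'hello' to Japanese"))
print(route_request("chatgpt", "Explain HTTP/2 multiplexing"))
```

Because the heavy lifting stays in the cloud, switching models amounts to changing a routing key rather than reinstalling software on the device, which is what makes the handoff feel instantaneous to the user.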

This is more than a routine software update. By combining AI models from both U.S. and Chinese developers, Rokid is making its smart glasses relevant to global users, with features that adapt to local languages and preferences while maintaining high performance.  

These technological advancements have directly fueled Rokid’s international growth. Between November 2024 and October 2025, Shangpu Group data shows Rokid Glasses ranked No. 1 in global sales for AI glasses with display functionality. Crowdfunding milestones further reflect this momentum: the product became the fastest smart glasses to raise over 100 million Japanese yen on Japan’s MAKUAKE platform and broke Kickstarter records for smart eyewear.

Taken together, Rokid’s update highlights a shift in the smart glasses space: success increasingly comes from openness, flexibility and localized AI experiences rather than closed, single-platform ecosystems. By giving users choice, integrating global AI capabilities and bridging cultural and linguistic gaps, Rokid is positioning itself as a serious contender in the international AR and AI wearable market.


Startup Profiles

Startup Applied Brain Research Raises Seed Funding to Develop On-Device Voice AI

Why investors are backing Applied Brain Research’s on-device voice AI approach.

Updated

January 28, 2026 5:53 PM

Plastic model of a human brain. PHOTO: UNSPLASH

Applied Brain Research (ABR), a Canada-based startup, has closed its seed funding round to advance its work in “on-device voice AI”. The round was led by Two Small Fish Ventures, with its general partner Eva Lau joining ABR’s board, reflecting investor confidence in the company’s technical direction and market focus.

The round was oversubscribed, meaning more investors wanted to participate than the company had planned for. That response reflects growing interest in technologies that reduce reliance on cloud-based AI systems.

ABR is focused on a clear problem in voice-enabled products today. Most voice features depend on cloud servers to process speech, which can cause delays, increase costs, raise privacy concerns and limit performance on devices with small batteries or limited computing power.

ABR’s approach is built around keeping voice AI fully on-device. Instead of relying on cloud connectivity, its technology allows devices to process speech locally, enabling faster responses and more predictable performance while reducing data exposure.

Central to this approach is the company’s TSP1 chip, a processor designed specifically for handling time-based data such as speech. Built for real-time voice processing at the edge, TSP1 allows tasks like speech recognition and text-to-speech to run on smaller, power-constrained devices.

This specialization is particularly relevant as voice interfaces become more common across emerging products. Many edge devices such as wearables or mobile robotics cannot support traditional voice AI systems without compromising battery life or responsiveness. The TSP1 addresses this limitation by enabling these capabilities at significantly lower power levels than conventional alternatives. According to the company, full speech-to-text and text-to-speech can run at under 30 milliwatts of power, which is roughly 10 to 100 times lower than many existing alternatives. This level of efficiency makes advanced voice interaction feasible on devices where power consumption has long been a limiting factor.
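The company’s stated figures imply conventional alternatives draw roughly 0.3 W to 3 W for the same workload. A quick back-of-envelope check of that implied range (the 30 mW figure is from the article; the multiples are the company’s own "10 to 100 times" claim):

```python
# Sanity check on the stated power claim: 30 mW for full speech-to-text and
# text-to-speech, described as roughly 10-100x lower than alternatives.
tsp1_mw = 30                     # claimed TSP1 power draw, in milliwatts
implied_low = tsp1_mw * 10       # lower bound for conventional systems (mW)
implied_high = tsp1_mw * 100     # upper bound for conventional systems (mW)

print(f"Implied conventional range: {implied_low} mW to {implied_high / 1000} W")
```

At 3 W, always-on voice processing would be prohibitive for a small wearable battery; at 30 mW it becomes a rounding error, which is the substance of the efficiency claim.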

That efficiency makes the technology applicable across a wide range of use cases. In augmented reality glasses, it supports responsive, hands-free voice control. In robotics, it enables real-time voice interaction without cloud latency or ongoing service costs. For wearables, it expands voice functionality without severely impacting battery life. In medical devices, it allows on-device inference while keeping sensitive data local. And in automotive systems, it enables consistent voice experiences regardless of network availability.

For investors, this combination of timing and technology is what stands out. Voice interfaces are becoming more common, while reliance on cloud infrastructure is increasingly seen as a limitation rather than a strength. ABR sits at the intersection of those two shifts.

With fresh funding in place, ABR is now working with partners across AR, robotics, healthcare, automotive and wearables to bring that future closer. For startup watchers, it’s a reminder that some of the most meaningful AI advances aren’t about bigger models but about making intelligence fit where it actually needs to live.