Artificial Intelligence

How AI Is Reinventing Speech Therapy for Children

Clinically grounded, game-based and always available — MIRDC’s AI system is redefining how children learn to communicate.

Updated

January 8, 2026 6:32 PM

A child practicing with a speech therapist. PHOTO: FREEPIK

Speech and language delays are common, yet access to therapy remains limited. In Taiwan, only about 2,200 licensed speech-language pathologists serve hundreds of thousands of children who need support—especially those with autism spectrum disorders or significant communication challenges. As a result, many children miss crucial periods of language development simply because help isn’t available soon enough.

MIRDC’s new AI-powered interactive speech therapy system aims to close that gap. Instead of focusing solely on articulation, it targets a wider range of language skills that many children struggle with: oral expression, comprehension, sentence building and conversational ability. This makes it a more complete tool for childhood speech and language development.

The system combines game-based learning, AI-driven guidance and automated language assessment into one platform that can be used both in clinics and at home. This integrated design helps children practice more consistently while giving therapists and parents clearer insight into their progress.

The interactive game modules are built around clinically validated therapy methods. Imitation exercises, picture cards, storybooks and conversational prompts are turned into structured game levels, each aligned with a specific developmental goal. This step-by-step approach helps children move from simple naming tasks to more complex comprehension and response skills, all within a sequenced curriculum.

A key differentiator is the system’s real-time AI speech interpretation. As the child talks, the AI analyzes the response and generates tailored therapeutic cues—such as imitation, modeling, expansion or extension—based on the conversation. These are the same strategies used by speech-language pathologists, but now children can access them continuously, supporting more effective at-home practice and reducing long gaps between sessions.
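The article does not disclose how MIRDC's system maps a child's response to a cue, but the four strategies it names (imitation, modeling, expansion, extension) can be illustrated with a minimal rule-based sketch. The function name, the word-overlap rules and the cue phrasings below are all assumptions for illustration, not MIRDC's actual logic.

```python
# Hypothetical sketch of choosing a therapeutic cue from a child's response.
# The rules and names here are illustrative assumptions, not MIRDC's method.

def choose_cue(target: str, child_response: str) -> str:
    """Pick a clinician-style cue based on the child's attempt at a target phrase."""
    target_words = target.lower().split()
    response_words = child_response.lower().split()

    if not response_words:
        # No attempt yet: model the full target utterance for the child.
        return f"modeling: say '{target}'"
    if response_words == target_words:
        # Correct production: extend the topic to build conversational skill.
        return "extension: ask a follow-up question about the topic"
    if set(response_words) < set(target_words):
        # Partial attempt: expand it into the complete sentence.
        return f"expansion: '{target}'"
    # Off-target attempt: prompt direct imitation of the target phrase.
    return f"imitation: repeat after me, '{target}'"

print(choose_cue("the dog runs", "dog"))           # partial attempt -> expansion cue
print(choose_cue("the dog runs", "the dog runs"))  # correct -> extension cue
```

A production system would of course work from recognized speech and clinical rules rather than simple word overlap; the sketch only shows how the four named strategies could be triggered by different response types.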

After each session, the system automatically conducts a data-driven language assessment using 20 objective indicators across semantics, syntax and pragmatics. This provides clinicians and families with measurable, easy-to-understand reports that show how the child is progressing and which skills need more attention—something many traditional tools do not offer.
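A report of this kind can be pictured as per-indicator scores rolled up by domain. The three domains (semantics, syntax, pragmatics) come from the article; the specific indicator names and the 0-to-1 scoring scale below are assumptions for illustration.

```python
# Illustrative sketch: aggregating per-session indicator scores into a
# per-domain summary. Indicator names and the 0-1 scale are assumed;
# the three domains come from the article's description.

from statistics import mean

def session_report(scores: dict[str, dict[str, float]]) -> dict[str, float]:
    """Average each domain's indicator scores into one summary value per domain."""
    return {domain: round(mean(vals.values()), 2) for domain, vals in scores.items()}

scores = {
    "semantics":  {"vocabulary_diversity": 0.8, "word_retrieval": 0.6},
    "syntax":     {"mean_utterance_length": 0.5, "grammar_accuracy": 0.7},
    "pragmatics": {"turn_taking": 0.9, "topic_maintenance": 0.4},
}
print(session_report(scores))
# {'semantics': 0.7, 'syntax': 0.6, 'pragmatics': 0.65}
```

The value of such a rollup is that a parent sees three readable numbers per session, while a clinician can still drill into the individual indicators behind each one.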

By offering a personalized, scalable and clinically grounded solution, MIRDC’s AI therapy system helps address the ongoing shortage of speech-language services. It doesn’t replace therapists; instead, it extends their reach, allows for more consistent practice and helps families support their child’s communication at home.

As an added recognition of its impact, the system recently earned two R&D 100 Awards, including the Silver Award for Corporate Social Responsibility. But at its core, the project remains focused on a simple mission: making high-quality speech therapy accessible to every child who needs a voice.


Startup Profiles

Startup Applied Brain Research Raises Seed Funding to Develop On-Device Voice AI

Why investors are backing Applied Brain Research’s on-device voice AI approach.

Updated

January 14, 2026 1:38 PM

Plastic model of a human's brain. PHOTO: UNSPLASH

Applied Brain Research (ABR), a Canada-based startup, has closed its seed funding round to advance its work in “on-device voice AI”. The round was led by Two Small Fish Ventures, with its general partner Eva Lau joining ABR’s board, reflecting investor confidence in the company’s technical direction and market focus.

The round was oversubscribed: more investors sought to participate than the company had planned to accept. That response reflects growing interest in technologies that reduce reliance on cloud-based AI systems.

ABR is focused on a clear problem in voice-enabled products today. Most voice features depend on cloud servers to process speech, which can cause delays, increase costs, raise privacy concerns and limit performance on devices with small batteries or limited computing power.

ABR’s approach is built around keeping voice AI fully on-device. Instead of relying on cloud connectivity, its technology allows devices to process speech locally, enabling faster responses and more predictable performance while reducing data exposure.

Central to this approach is the company’s TSP1 chip, a processor designed specifically for handling time-based data such as speech. Built for real-time voice processing at the edge, TSP1 allows tasks like speech recognition and text-to-speech to run on smaller, power-constrained devices.

This specialization is particularly relevant as voice interfaces become more common across emerging products. Many edge devices, such as wearables and mobile robots, cannot support traditional voice AI without compromising battery life or responsiveness. According to the company, the TSP1 runs full speech-to-text and text-to-speech at under 30 milliwatts, roughly 10 to 100 times less power than many existing alternatives, making advanced voice interaction feasible on devices where power consumption has long been a limiting factor.
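Some quick arithmetic shows why the milliwatt figure matters for small devices. The ~30 mW number comes from the company's claim above; the battery capacity and the 1 W figure for a conventional pipeline are illustrative assumptions, not measurements.

```python
# Back-of-envelope arithmetic on battery life for continuous voice processing.
# The ~30 mW figure is the company's claim; the 1.2 Wh battery and the 1 W
# conventional-pipeline figure are assumptions for illustration only.

BATTERY_WH = 1.2  # assumed capacity of a small wearable battery, in watt-hours

def hours_of_voice(power_mw: float) -> float:
    """Hours of continuous voice processing on the assumed battery."""
    return BATTERY_WH / (power_mw / 1000.0)

print(hours_of_voice(30))    # on-device at ~30 mW: roughly 40 hours
print(hours_of_voice(1000))  # assumed 1 W pipeline: roughly 1.2 hours
```

Real devices spend most of their power budget elsewhere (display, radio), so these are not battery-life predictions; the point is only the order-of-magnitude gap between the two power levels.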

That efficiency makes the technology applicable across a wide range of use cases. In augmented reality glasses, it supports responsive, hands-free voice control. In robotics, it enables real-time voice interaction without cloud latency or ongoing service costs. For wearables, it expands voice functionality without severely impacting battery life. In medical devices, it allows on-device inference while keeping sensitive data local. And in automotive systems, it enables consistent voice experiences regardless of network availability.

For investors, this combination of timing and technology is what stands out. Voice interfaces are becoming more common, while reliance on cloud infrastructure is increasingly seen as a limitation rather than a strength. ABR sits at the intersection of those two shifts.

With fresh funding in place, ABR is now working with partners across AR, robotics, healthcare, automotive and wearables to bring that future closer. For startup watchers, it’s a reminder that some of the most meaningful AI advances aren’t about bigger models but about making intelligence fit where it actually needs to live.