We bring you concise, up-to-the-minute coverage of the founders, funding rounds, and technologies shaping tomorrow. Expect clear explainers, deal roundups, and stories that cut through the noise, so you can spot the next big move in tech, fast.
A closer look at the tech, AI, and open ecosystem behind Tien Kung 3.0’s real-world push
Humanoid robotics has advanced quickly in recent years. Machines can now walk, balance, and interact with their surroundings in ways that once seemed out of reach. Yet most deployments remain limited. Many robots perform well in controlled settings but struggle in real-world environments. Integration is often complex, hardware interfaces are closed, software tools are fragmented, and scaling across industries remains difficult.
Against this backdrop, X-Humanoid has introduced its latest general-purpose platform, Embodied Tien Kung 3.0. The company positions it not simply as another humanoid robot, but as a system designed to address the practical barriers that have slowed adoption, with a focus on openness and usability.
At the hardware level, Embodied Tien Kung 3.0 is built for mobility, strength, and stability. It is equipped with high-torque integrated joints that provide strong limb force for high-load applications. The company says it is the first full-size humanoid robot to achieve whole-body, high-dynamic motion control integrated with tactile interaction. In practice, this means the robot is designed to maintain balance and execute dynamic movements even in uneven or cluttered environments. It can clear one-meter obstacles, perform consecutive high-dynamic maneuvers, and carry out actions such as kneeling, bending, and turning with coordinated whole-body control.
Precision is also a focus. Through multi-degree-of-freedom limb coordination and calibrated joint linkage, the system is designed to achieve millimeter-level operational accuracy. This level of control is intended to support industrial-grade tasks that require consistent performance and minimal error across changing conditions.
But hardware is only part of the equation. The company pairs the robot with its proprietary Wise KaiWu general-purpose embodied AI platform. This system supports perception, reasoning, and real-time control through what the company describes as a coordinated “brain–cerebellum” architecture. It establishes a continuous perception–decision–execution loop, allowing the robot to operate with greater autonomy and reduced reliance on remote control.
For higher-level cognition, Wise KaiWu incorporates components such as a world model and vision-language models (VLMs) to interpret visual scenes, understand language instructions, and break complex objectives into structured steps. For real-time execution, a vision-language-action (VLA) model and a full autonomous navigation system manage obstacle avoidance and precise motion under variable conditions. The platform also supports multi-agent collaboration, enabling cross-platform compatibility, asynchronous task coordination, and centralized scheduling across multiple robots.
A central part of the platform is openness. The company states that the system is designed to address compatibility and adaptation challenges across both development and deployment layers. On the hardware side, Embodied Tien Kung 3.0 includes multiple expansion interfaces that support different end-effectors and tools, allowing faster adaptation to industrial manufacturing, specialized operations, and commercial service scenarios. On the software side, the Wise KaiWu ecosystem provides documentation, toolchains, and a low-code development environment. It supports widely adopted communication standards, including ROS2, MQTT, and TCP/IP, enabling partners to customize applications without rebuilding core systems.
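To make the integration story concrete, here is a minimal sketch of the kind of telemetry handling a partner might write on top of those communication standards. The topic layout, field names, and torque values are illustrative assumptions, not taken from X-Humanoid's documentation; in practice a message like this would arrive over MQTT or TCP/IP and be parsed on the partner's side.

```python
import json

# Hypothetical joint-telemetry payload; robot ID, joint names, and
# torque fields are invented for illustration only.
SAMPLE_MESSAGE = json.dumps({
    "robot_id": "tk3-demo-01",
    "joints": [
        {"name": "left_knee", "torque_nm": 112.5, "limit_nm": 180.0},
        {"name": "right_knee", "torque_nm": 196.0, "limit_nm": 180.0},
    ],
})

def joints_over_limit(message: str) -> list[str]:
    """Return the names of joints whose reported torque exceeds their limit."""
    payload = json.loads(message)
    return [j["name"] for j in payload["joints"] if j["torque_nm"] > j["limit_nm"]]

print(joints_over_limit(SAMPLE_MESSAGE))  # ['right_knee']
```

The point of open, standard transports is exactly this: a partner can consume robot data with ordinary tooling instead of a vendor-specific SDK.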
The company also highlights its open-source approach. X-Humanoid has open-sourced key components from the Embodied Tien Kung and Wise KaiWu platforms, including the robot body architecture, motion control framework, world model, embodied VLM and cross-ontology VLA models, training toolchains, the RoboMIND dataset, and the ArtVIP simulation asset library. By opening access to these elements, the company aims to reduce development costs, lower technical barriers, and encourage broader participation from researchers, universities, and enterprises.
Embodied Tien Kung 3.0 enters a market where technical progress is visible but large-scale adoption remains uneven. The gap is not only about movement or strength. It is about integration, interoperability, and the ability to operate reliably and autonomously in everyday industrial and commercial settings. If platforms can reduce fragmentation and simplify deployment, humanoid robots may move beyond demonstrations and into sustained commercial use.
In that sense, the significance of Embodied Tien Kung 3.0 lies less in isolated technical claims and more in how its high-dynamic hardware, embodied AI system, open interfaces, and collaborative architecture are structured to work together. Whether that integrated approach can close the deployment gap will shape how quickly humanoid robotics becomes part of real-world operations.
A new safety layer aims to help robots sense people in real time without slowing production
Algorized has raised US$13 million in a Series A round to advance its AI-powered safety and sensing technology for factories and warehouses. The California- and Switzerland-based robotics startup says the funding will help expand a system designed to transform how robots interact with people. The round was led by Run Ventures, with participation from the Amazon Industrial Innovation Fund and Acrobator Ventures, alongside continued backing from existing investors.
At its core, Algorized is building what it calls an intelligence layer for “physical AI” — industrial robots and autonomous machines that function in real-world settings such as factories and warehouses. While generative AI has transformed software and digital workflows, bringing AI into physical environments presents a different challenge. In these settings, machines must not only complete tasks efficiently but also move safely around human workers.
This is where a clear gap exists. Today, most industrial robots rely on camera-based monitoring systems or predefined safety zones. For instance, when a worker steps into a marked area near a robotic arm, the system is programmed to slow down or stop the machine completely. This approach reduces the risk of accidents. However, it also means production lines can pause frequently, even when there is no immediate danger. In high-speed manufacturing environments, those repeated slowdowns can add up to significant productivity losses.
Algorized’s technology is designed to reduce that trade-off between safety and efficiency. Instead of relying solely on cameras, the company uses wireless signals — including Ultra-Wideband (UWB), mmWave, and Wi-Fi — to detect movement and human presence. By analysing small changes in these radio signals, the system can detect motion and breathing patterns in a space. This helps machines determine where people are and how they are moving, even in conditions where cameras may struggle, such as poor lighting, dust or visual obstruction.
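The underlying idea can be sketched very simply: a radio signal in a still room is nearly flat, while a person moving through it perturbs the readings. Below is a toy detector that flags presence when the variance of recent samples crosses a threshold. The sample values, window size, and threshold are illustrative assumptions; Algorized's actual models are far more sophisticated and proprietary.

```python
from statistics import pvariance

def presence_detected(samples, window=8, threshold=4.0):
    # Slide a window over the signal; still air yields near-zero
    # variance, a moving person produces large fluctuations.
    # All numbers here are invented for illustration.
    for start in range(0, len(samples) - window + 1):
        if pvariance(samples[start:start + window]) > threshold:
            return True
    return False

quiet_room = [52.0, 52.1, 51.9, 52.0, 52.1, 52.0, 51.9, 52.0]
person_walks = [52.0, 52.3, 55.0, 49.5, 57.2, 50.1, 54.8, 51.0]

print(presence_detected(quiet_room))    # False
print(presence_detected(person_walks))  # True
```

Real systems add filtering, breathing-rate extraction, and learned models on top, but the core signal — fluctuation in received radio energy — is the same.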
Importantly, this data is processed locally at the facility itself — not sent to a remote cloud server for analysis. In practical terms, this means decisions are made on-site, within milliseconds. Reducing this delay, or latency, allows robots to adjust their movements immediately instead of defaulting to a full stop. The aim is to create machines that can respond smoothly and continuously, rather than reacting in a binary stop-or-go manner.
With the new funding, Algorized plans to scale commercial deployments of its platform, known as the Predictive Safety Engine. The company will also invest in refining its intent-recognition models, which are designed to anticipate how humans are likely to move within a workspace. In parallel, it intends to expand its engineering and support teams across Europe and the United States. These efforts build on earlier public demonstrations and ongoing collaborations with manufacturing partners, particularly in the automotive and industrial sectors.
For investors, the appeal goes beyond safety compliance. As factories become more automated, even small improvements in uptime and workflow continuity can translate into meaningful financial gains. Because Algorized’s system works with existing wireless infrastructure, manufacturers may be able to upgrade machine awareness without overhauling their entire hardware setup.
More broadly, the company is addressing a structural limitation in industrial automation. Robotics has advanced rapidly in precision and power, yet human-robot collaboration is still governed by rigid safety systems that prioritise stopping over adapting. By combining wireless sensing with edge-based AI models, Algorized is attempting to give machines a more continuous awareness of their surroundings from the start.
Bitmo Lab is testing an ultra-thin, bendable tracker built to fit inside items traditional trackers can’t
Location trackers have become everyday accessories for keys, bags and luggage. But as personal items grow slimmer and more design-focused — from minimalist wallets to passport sleeves and specialised gear — tracking them has become less straightforward. Most trackers are built as small, rigid discs that assume the presence of space, loops or compartments. That assumption has created a growing mismatch between modern product design and the technology meant to secure it.
Hong Kong–based startup Bitmo Lab is attempting to address that gap with a device called MeetSticker. Instead of the solid plastic casing typical of most trackers, MeetSticker is engineered to be flexible and ultra-thin, measuring just 0.8 millimetres thick. The bendable design allows it to sit within narrow compartments or along curved surfaces without altering the shape of the object. Rather than attaching to an item externally, it is intended to integrate discreetly inside it.
That structural shift is the core of the product’s proposition. By removing the rigid shell that defines conventional tracking hardware, MeetSticker can be placed in items that previously had no practical way to accommodate a tracker. Bitmo Lab states that the device connects through a proprietary network and a companion application compatible with both iOS and Android, positioning it as a cross-platform solution rather than one tied to a single ecosystem.
The implications extend beyond form factor. Objects without obvious attachment points — such as compact travel accessories or specialised tools — could potentially be monitored without visible add-ons. In doing so, the device broadens the scope of tracking technology into categories where aesthetics, aerodynamics or compact design matter as much as functionality.
Before moving toward retail distribution, however, the company is focusing on validation. Bitmo Lab has launched a five-week global alpha testing programme beginning February 9. Sixty participants will receive a prototype unit and early access to the app. According to the company, the programme is designed to assess durability, usability and real-world performance before a wider commercial release. Participants who provide feedback will receive a retail unit upon launch.
Such testing is particularly relevant for flexible electronics. Unlike rigid devices, bendable hardware must withstand repeated flexing, daily handling and environmental exposure. Early user data can help refine manufacturing processes and software optimisation before scaling production.
As with other connected tracking devices, privacy considerations remain part of the equation. Bitmo Lab has stated that data collected during the alpha programme will be used strictly for testing purposes and deleted once the programme concludes.
Whether flexible trackers will redefine the category will depend on how they perform outside controlled testing environments. Still, the introduction of a near-invisible, bendable tracking device reflects a broader shift in consumer technology. As everyday products become thinner and more design-conscious, the tools built to protect them may need to adapt just as seamlessly.
AI’s expansion into the physical world is reshaping what investors choose to back
Artificial intelligence is often discussed in terms of large models trained in distant data centres. Less visible, but increasingly consequential, is the layer of computing that enables machines to interpret and respond to the physical world in real-time. As AI systems move from abstract software into vehicles, cameras and factory equipment, the chips that power on-device decision-making are becoming strategic assets in their own right.
It is within this shift that Axera, a Shanghai-based semiconductor company, began trading on the Hong Kong Stock Exchange on February 10 under the ticker symbol 00600.HK. The company priced its shares at HK$28.2, debuting with a market capitalization of approximately HK$16.6 billion. Its listing marks the first time a Chinese company focused primarily on AI perception and edge inference chips has gone public in the city — a milestone that underscores growing investor interest in the hardware layer of artificial intelligence.
The listing comes at a time when demand for flexible, on-device intelligence is expanding. As manufacturers, automakers and infrastructure operators integrate AI into physical systems, the need for specialized processors capable of handling visual and sensor data efficiently has grown. At the same time, China’s domestic semiconductor industry has faced increasing pressure to build local capabilities across the chip value chain. Companies such as Axera sit at the intersection of these dynamics, serving both commercial markets and broader industrial policy priorities.
For Hong Kong, the debut adds to a cohort of technology companies seeking public capital to scale hardware-intensive businesses. Unlike software firms, semiconductor designers operate in a capital-intensive environment shaped by supply chains, fabrication partnerships and rapid product cycles. Their presence on the exchange reflects a maturing investor appetite for AI infrastructure, not just consumer-facing applications.
Axera’s early backer, Qiming Venture Partners, led the company’s pre-A financing round in 2020 and continued to participate in subsequent rounds. Prior to the IPO, it held more than 6 percent of the company, making it the second-largest institutional investor. The public offering provides liquidity for early investors and new funding for a company operating in a highly competitive and technologically demanding sector.
Axera’s market debut does not resolve the competitive challenges of the semiconductor industry, where innovation cycles are short and global competition is intense. But it does signal that investors are placing tangible value on the hardware enabling AI’s expansion beyond the cloud. In that sense, the listing represents more than a corporate milestone; it reflects a broader transition in how artificial intelligence is built, deployed and financed — moving steadily from software abstraction toward the silicon that makes real-world autonomy possible.
How a Korean biotech startup is using AI to move drug discovery from trial-and-error to precision design
For decades, drug discovery has relied on trial and error, with scientists testing thousands of molecules to find one that works. Galux, a South Korean biotech startup, is changing that by using AI to design proteins from scratch. This method, called “de novo” design, makes it possible to build precise new therapies instead of searching through existing ones.
The company recently announced a US$29 million Series B funding round, bringing its total capital to US$47 million. The round attracted a broad roster of institutional backers, including the Korea Development Bank (KDB), Yuanta Investment, SL Investment and NCORE Ventures. These firms joined existing investors such as InterVest, DAYLI Partners and PATHWAY Investment, as well as new participants including SneakPeek Investments, Korea Investment & Securities and Mirae Asset Securities.
At the core of the company’s work is a platform called GaluxDesign. Unlike many AI tools that only predict how existing proteins fold, this system uses deep learning and physics to create entirely new therapeutic antibodies. This “from scratch” approach lets the team go after so-called “undruggable” proteins. These are targets that traditional small-molecule drugs can’t reach because they lack clear binding pockets. By designing proteins to fit these complex shapes, Galux aims to unlock treatments that have stayed out of reach for decades. And that’s exactly why investors are paying attention.
The pharmaceutical industry is actively looking for faster and more efficient ways to develop new drugs, and Galux is built for exactly that. The company connects its AI platform directly to its own wet lab, where designs can be tested in real time. Each result feeds straight back into the system, sharpening the next round of models. This continuous loop speeds up discovery and improves precision at every step. It’s also why partners like Celltrion, LG Chem and Boehringer Ingelheim are already working with Galux.
Galux is no longer just trying to make drugs that stick to a target. The company now wants its AI to design medicines that actually work in the body and can be made at scale. In simple terms, a drug has to do more than bind to a disease—it must be stable, safe and strong enough to change how the illness behaves. Galux is moving into tougher targets such as ion channels and GPCRs. These play key roles in heart function and sensory signals. Ultimately, the goal is to show that AI-driven design can turn complex biology into real treatments. And instead of hunting blindly for a solution, the team is building exactly what they need.
AutoFlight’s five-tonne Matrix bets on heavy payloads and regional range to prove the case for electric flight
The nascent industry of electric vertical takeoff and landing (eVTOL) aircraft has long been defined by a specific set of limitations: small payloads, short distances and a primary focus on urban air taxis. AutoFlight, a Chinese aviation startup, recently moved to shift that narrative by unveiling "Matrix," a five-tonne aircraft that represents a significant leap in scale for electric aviation.
In a demonstration at the company’s flight test center, the Matrix completed a full transition flight — the technically demanding process of switching from vertical lift-off to forward wing-borne flight and back to a vertical landing. While small-scale drones and four-seat prototypes have become increasingly common, this marks the first time an electric aircraft of this mass has successfully executed the maneuver.
The sheer scale of the Matrix places it in a different category than the "flying cars" currently being tested for hops over city traffic. With a maximum takeoff weight of 5,700 kilograms (roughly 12,500 pounds), the aircraft has the footprint of a traditional regional turboprop, boasting a 20-meter wingspan. Its size allows for configurations that the industry has previously struggled to accommodate, including a ten-seat business class cabin or a cargo hold capable of carrying 1,500 kilograms of freight.
This increased capacity is more than a feat of engineering; it is a direct attempt to solve the financial hurdles that have plagued the sector. Industry analysts have often questioned the economic viability of smaller eVTOLs, citing high operating costs relative to low passenger counts as a barrier to entry.
AutoFlight’s founder and CEO, Tian Yu, suggested the Matrix is a direct response to those concerns. “Matrix is not just a rising star in the aviation industry, but also an ambitious disruptor,” Yu stated. “It will eliminate the industry perception that eVTOL = short-haul, low payload and reshape the rules of eVTOL routes. Through economies of scale, it significantly reduces transportation costs per seat-kilometer and per ton-kilometer, thus revolutionizing costs and driving profitability.”
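The seat-kilometer arithmetic behind that claim is straightforward. The sketch below uses entirely hypothetical trip costs, since AutoFlight has not published operating figures: even if a larger aircraft costs more per flight, spreading that cost over more seats can lower the cost per seat-kilometer.

```python
def cost_per_seat_km(trip_cost_usd: float, seats: int, distance_km: float) -> float:
    # Operating cost for one trip, divided across seats and distance flown.
    return trip_cost_usd / (seats * distance_km)

# Illustrative numbers only: a 250 km regional hop, assuming the larger
# aircraft costs more to fly but carries ten passengers instead of four.
small_evtol = cost_per_seat_km(trip_cost_usd=500.0, seats=4, distance_km=250.0)
matrix_like = cost_per_seat_km(trip_cost_usd=800.0, seats=10, distance_km=250.0)

print(round(small_evtol, 2))  # 0.5
print(round(matrix_like, 2))  # 0.32
```

Under these assumed inputs the ten-seat configuration is roughly a third cheaper per seat-kilometer, which is the economies-of-scale logic the company is invoking.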
To achieve this, the aircraft utilizes a "lift and cruise" configuration. In simple terms, this means the plane uses one set of dedicated rotors to lift it off the ground like a helicopter, but once it reaches a certain speed, it uses a separate propeller to fly forward like a traditional airplane, allowing the wings to provide the lift. This design is paired with a distinctive "triplane" layout—three layers of wings—and a six-arm structure to keep the massive frame stable.
These features allow the Matrix to serve a variety of roles. For the "low-altitude economy" being promoted by Chinese regulators, the startup is offering a pure electric model with a 250-kilometer range for regional hops, alongside a hybrid-electric version capable of traveling 1,500 kilometers. The latter version, equipped with a forward-opening door to fit standard air freight containers, targets a logistics sector still heavily reliant on carbon-intensive trucking.
However, the road to commercial flight remains a steep one. Despite the successful flight demonstration, AutoFlight faces the same formidable headwinds as its competitors, such as a complex global regulatory landscape and the rigorous demands of airworthiness certification. While the Matrix validates the company's high-power propulsion, moving from a test-center demonstration to a commercial fleet will require years of safety data.
Nevertheless, the debut of the Matrix signals a maturation of the startup’s ambitions. Having previously developed smaller models for autonomous logistics and urban mobility, AutoFlight is now betting that the future of electric flight isn't just in avoiding gridlock, but in hauling the weight of regional commerce. Whether the infrastructure and regulators are ready to accommodate a five-tonne electric disruptor remains the industry's unanswered question.
Inside the funding round driving the shift to intelligent construction fleets
Bedrock Robotics has raised US$270 million in Series B funding as it works to integrate greater automation into the construction industry. The round, co-led by CapitalG and the Valor Atreides AI Fund, values the San Francisco-based company at US$1.75 billion, bringing its total funding to more than US$350 million.
The size of the investment reflects growing interest in technologies that can change how large infrastructure and industrial projects are built. Bedrock is not trying to reinvent construction from scratch. Instead, it is focused on upgrading the machines contractors already use—so they can work more efficiently, safely and consistently.
Founded in 2024 by former Waymo engineers, Bedrock develops systems that allow heavy equipment to operate with increasing levels of autonomy. Its software and hardware can be retrofitted onto machines such as excavators, bulldozers and loaders. Rather than relying on one-off robotic tools, the company is building a connected platform that lets fleets of machines understand their surroundings and coordinate with one another on job sites.
This is what Bedrock calls “system-level autonomy”. Its technology combines cameras, lidar and AI models to help machines perceive terrain, detect obstacles, track work progress and carry out tasks like digging and grading with precision. Human supervisors remain in control, monitoring operations and stepping in when needed. Over time, Bedrock aims to reduce the amount of direct intervention those machines require.
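The supervise-and-escalate pattern described above can be sketched as a simple decision rule: act autonomously on a confident perception reading, and hand control back to a human when confidence drops. The states, field names, and threshold below are illustrative assumptions, not Bedrock's actual control logic.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    obstacle_detected: bool
    confidence: float  # 0.0 to 1.0, e.g. from fused camera/lidar models

def next_action(p: Perception, handoff_threshold: float = 0.6) -> str:
    # Low confidence: pause the machine and flag the human supervisor.
    if p.confidence < handoff_threshold:
        return "pause_and_alert_supervisor"
    # Confident reading: stop for obstacles, otherwise keep working.
    return "stop" if p.obstacle_detected else "continue_task"

print(next_action(Perception(obstacle_detected=False, confidence=0.95)))  # continue_task
print(next_action(Perception(obstacle_detected=True, confidence=0.90)))   # stop
print(next_action(Perception(obstacle_detected=False, confidence=0.30)))  # pause_and_alert_supervisor
```

The business logic follows from this shape: as models grow more reliable, the handoff branch fires less often, which is what lets one supervisor oversee more machines.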
The funding comes as contractors face rising pressure to deliver projects faster and with fewer available workers. In the press release, Bedrock notes that the industry needs nearly 800,000 additional workers over the next two years and that project backlogs have grown to more than eight months. These constraints are pushing firms to explore new ways to keep sites productive without compromising safety or quality.
Bedrock states that autonomy can help address those challenges. Not by removing people from the equation—but by allowing crews to supervise more equipment at once and reduce idle time. If machines can operate longer, with better awareness of their environment, sites can run more smoothly and with fewer disruptions.
The company has already started deploying its system in large-scale excavation work, including manufacturing and infrastructure projects. Contractors are using Bedrock’s platform to test how autonomous equipment can support real-world operations at scale, particularly in earthmoving tasks that demand precision and consistency.
From a business standpoint, the Series B funding will allow Bedrock to expand both its technology and its customer deployments. The company has also strengthened its leadership team with senior hires from Meta and Waymo, deepening its focus on AI evaluation, safety and operational growth. Bedrock says it is targeting its first fully operator-less excavator deployments with customers in 2026—a milestone for autonomy in complex construction equipment.
In that context, this round is not just about capital. It is about giving Bedrock the runway to prove that autonomous systems can move from controlled pilots into everyday use on job sites. The company bets that the future of construction will be shaped less by individual machines—and more by coordinated, intelligent systems that work alongside human crews.
From plush figures to digital pets, a new class of AI toys is emerging — built not around screens or sensors, but around memory, language and emotional awareness
Spielwarenmesse in Nuremberg is the global meeting point for the toy industry, where brands and designers preview what will shape how children play and learn next. At this year’s fair, one message stood out clearly: toys are no longer built just to entertain, but to listen, respond and grow with children. Tuya Smart, a global AI cloud platform company, used the event to show how AI-powered toys are turning familiar formats into interactive companions that can talk, react emotionally and adapt over time.
The company’s central argument was simple but far-reaching. The next generation of artificial intelligence toys will not be defined by motors, sensors or screens alone, but by how well they understand human behavior. Instead of being single-function objects, smart toys for children are becoming systems that combine language models, emotion recognition and memory to support ongoing interaction.
One of the most talked-about examples was Tuya Smart’s Nebula Plush AI Toy. At first glance, it looks like a soft, expressive plush figure. Inside, it uses emotional recognition to change its LED facial expressions in real time. If a child sounds sad or excited, the toy’s eyes respond visually. It supports natural conversation, reacts to hugs and touch and combines storytelling, news-style updates and interactive games. Over time, it builds memory, allowing it to behave less like a gadget and more like an interactive AI toy that recalls past interactions.
Another example was Walulu, also developed using Tuya’s AI toy platform. Walulu is an AI pet built around personalization. It can detect up to 19 emotional states and speak more than 60 languages. It connects to major large language models such as ChatGPT, Gemini, DeepSeek, Qwen and Doubao. Through simple app-based controls, users choose traits like cheerful, quiet, curious or thoughtful. Those choices shape how Walulu talks and reacts. Instead of repeating scripts, it adjusts its tone and behavior over time. The result is not a novelty item, but an emotionally responsive AI toy that feels consistent in daily use.
Tuya also showed how educational AI toys can extend into learning and exploration. Its AI Learning Camera blends computer vision with interactive content. When it recognizes an object, it links it to cultural and learning material. If a child points it at a foreign word, it offers real-time pronunciation and translation. It can also turn drawings into digital artwork, encouraging active creativity rather than passive screen time. In this sense, AI toys for kids are becoming tools for learning as much as play.
These products point to a larger strategy. Tuya is not just making toys — it is building the AI toy development platform behind them. Through its AI Toy Solution, developers can design a toy’s personality, memory logic and behavior without training models from scratch. The system integrates with leading AI models and supports multi-turn conversation and emotional feedback, turning standard hardware into responsive AI companions.
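To illustrate what "memory logic" for a toy might look like in practice, here is a minimal sketch of a bounded conversation memory with keyword recall. The class name, turn cap, and method signatures are invented for illustration; they are not Tuya's actual API.

```python
class ToyMemory:
    """Rolling conversation memory: keep recent turns, recall by keyword."""

    def __init__(self, max_turns: int = 20):
        self.max_turns = max_turns
        self.turns: list[tuple[str, str]] = []

    def remember(self, speaker: str, text: str) -> None:
        self.turns.append((speaker, text))
        # Drop the oldest turns so the stored context stays bounded.
        self.turns = self.turns[-self.max_turns:]

    def recall(self, keyword: str) -> list[str]:
        # Return past utterances mentioning the keyword, case-insensitively.
        return [text for _, text in self.turns if keyword.lower() in text.lower()]

memory = ToyMemory()
memory.remember("child", "My favorite animal is a penguin")
memory.remember("toy", "Penguins live in very cold places!")
print(memory.recall("penguin"))
```

A production platform would layer summarization, safety filtering, and model context management on top, but the effect for the child is the same: the toy can bring up the penguin again tomorrow.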
The platform supports multiple development paths. Brands can use ready-to-market OEM solutions, add AI to existing products or build custom toys around their own characters. Plush toys, robots, educational tools and wearables can all become AI-powered toys without changing their physical design.
Because these products are made for children and families, safety is built in. Tuya’s system includes parental controls, conversation history review and content management. It supports compliance with regulations such as GDPR and CCPA through encryption and data localization.
From a business standpoint, Tuya’s pitch is speed and scale. The company says its AI toy infrastructure can cut development time by more than half and reduce R&D costs by up to 50 percent. Its AIoT network spans over 200 countries and supports more than 60 languages, making global deployment of AI toys easier.
What emerged at Spielwarenmesse 2026 was not just a lineup of smart gadgets, but a clear shift in the category. AI toys are evolving into emotionally aware systems that talk, listen, remember and adapt. Their value lies not in sounding clever, but in fitting naturally into everyday life.
The fair did not present AI toys as a distant future. It showed them as products already entering the mainstream. The real question now is not whether toys will use AI, but how carefully that intelligence is designed for children.
A closer look at PMMI’s FastTrack initiative and why it matters for growing manufacturing firms
Large trade shows are built for scale. But for small and medium-sized manufacturers, that scale often creates distance between what’s on display and what they can actually use. Too many options, too little time, and very few tools designed for companies that are still growing. That mismatch is what PMMI is trying to correct with its new SMB FastTrack Program, launching at PACK EXPO East 2026 in Philadelphia.
PMMI — the Association for Packaging and Processing Technologies — is the industry body behind the PACK EXPO trade shows and a central organization in the global packaging and processing sector. Through FastTrack, it has created a program (not an app or a product) designed to help small and mid-sized companies navigate the show more efficiently and connect with solutions that fit their scale.
The idea behind SMB FastTrack is simple: reduce friction. Instead of asking smaller firms to sort through hundreds of exhibitors and sessions on their own, the program curates what is most relevant to them. Exhibitors that offer flexible pricing, right-sized machinery, or SMB-focused services are clearly identified with visual icons in both the online directory and on the show floor. That way, a small manufacturer can quickly distinguish between enterprise-only vendors and partners that are realistically accessible.
The same logic carries into education. Rather than treating all attendees the same, PACK EXPO East 2026 will include a learning track specifically built around SMB realities. These sessions focus on issues that smaller teams actually face—how to hire and train workers, use AI without over-investing, improve food safety, cut operating costs, and adopt technology in stages. The goal is not inspiration, but applicability: content that reflects real constraints, not ideal scenarios.
Planning, too, is built into the structure of the program. Through a dedicated FastTrack landing page, participants can access curated supplier lists, recommended sessions, and planning tools that help organize their time before they ever step onto the show floor. Tools like category search and sustainability finders are meant to narrow choices quickly, turning a massive event into something manageable.
Seen together, these elements point to a broader intention. PMMI is not simply adding features—it is reshaping how smaller manufacturers experience a major industry event. Instead of competing for attention in a space built for scale, SMBs are given clearer paths to the people, tools, and knowledge that match where they actually are in their growth cycle.
What makes SMB FastTrack notable is not the technology behind it, but the intention behind it. PMMI is recognizing that progress for small and mid-sized manufacturers depends less on spectacle and more on fit—solutions that are accessible, affordable, and adaptable. The program is designed to help companies move with purpose, not pressure.
In an industry where visibility often follows size, SMB FastTrack represents a structural shift. It treats small and medium-sized manufacturers not as a subset of the audience, but as a distinct group with distinct needs. By doing so, PMMI is quietly redefining what a trade show can be: not just a marketplace of innovation, but a usable platform for companies still building their next stage of growth.
A turbine-inspired generator shows how overlooked industrial airflow could quietly become a new source of usable power
Compressed air is used across factories, data centers and industrial plants to move materials, cool systems and power tools. Once it has done that job, the air is usually released — and its remaining energy goes unused.
That everyday waste is what caught the attention of a research team at Chung-Ang University in South Korea. They are investigating how this overlooked airflow can be harnessed to generate electricity instead of disappearing into the background.
Most of the world’s power today comes from systems like turbines, which convert the motion of fluids into electricity, or solar cells, which convert sunlight into electricity. The Chung-Ang team has built a device that uses compressed air to generate electricity without relying on traditional blades or sunlight.
At the center of the work is a simple question: what happens when high-pressure air spins through a specially shaped device at very high speed? The answer lies in the air itself. The researchers found that tiny particles naturally present in the air carry an electric charge. When that air moves rapidly across certain surfaces, it can transfer charge without physical contact. This creates electricity through a process known as the “particulate static effect.”
To use that effect, the team designed a generator based on a Tesla turbine. Unlike conventional turbines with blades, a Tesla turbine uses smooth rotating disks and relies on the viscosity of air to create motion. Compressed air enters the device, spins the disks at high speed and triggers charge buildup on specially layered surfaces inside.
What makes this approach different is that the system does not depend on parts rubbing together. Instead, the charge comes from particles in the air interacting with the surfaces as they move past. This reduces wear and allows the generator to operate at very high speeds. And those speeds translate into real output.
In lab tests, the device produced a strong electrical output. The researchers also showed that this energy could be put to practical use: it ran small electronic devices, helped pull moisture from the air and removed dust particles from its surroundings.
The problem this research addresses is straightforward: compressed air is already everywhere in industry, but its leftover energy is usually ignored. This system is designed to capture part of that unused motion and convert it into electricity without adding complex equipment or major safety risks.
Earlier methods of harvesting static electricity from particles showed promise, but they came with dangers. Uncontrolled discharge could cause sparks or even ignition. By using a sealed, turbine-based structure, the Chung-Ang University team offers a safer and more stable way to apply the same physical effect.
The technology is still in the research stage, but its direction is easy to see. It points toward a future where energy is not only generated in power plants or stored in batteries, but also recovered from everyday industrial processes.