Deep Tech

The Robot Anxiety Gap: Why Countries With Fewer Robots Fear Them More

A global survey shows robot anxiety drops when people encounter robots in real life

Updated

March 13, 2026 2:25 PM

Ameca the humanoid robot, featuring a grey rubber face. PHOTO: ADOBE STOCK

People often assume robots make people uneasy everywhere. But a new global study suggests something more nuanced. Robot anxiety tends to be highest in places where people rarely see robots in real life. Where robots are more visible, attitudes are often far more positive. That insight comes from a global study by Hexagon AB, which surveyed 18,000 participants across nine major markets. The research explored how adults and children think about robots and how those views change depending on everyday exposure.

In the United Kingdom, anxiety about robots is the highest among the countries studied. Around 52% of adults say they feel worried that something might go wrong when they think about interacting with or working alongside robots. South Korea sits at the other end of the spectrum, with only 29% reporting similar concerns. One factor appears to explain much of the gap: familiarity.

British adults are among the least likely to have encountered robots in real life. Only about 30% say they have seen or used one. In contrast, countries where robots are more visible tend to report greater comfort. China offers the clearest example. Around 75% of adults there say they have seen or interacted with robots. At the same time, 81% say they feel excited about the technology’s future potential.

The study suggests that attitudes toward robots are not fixed. Instead, they shift depending on where people encounter them and what tasks they perform. When robots are seen solving clear, practical problems, confidence tends to rise.

Across the surveyed countries, adults report the highest comfort levels with robots working in factories and warehouses. Around 63% say they are comfortable with robots in those environments. These are settings where tasks are clearly defined and safety standards are well understood. Acceptance drops in more personal spaces. Only 46% say they feel comfortable with robots in the home, while comfort falls further to 39% when robots are imagined in classrooms.

In other words, context matters. People appear more willing to accept robots when they take on physically demanding or dangerous work. Half of the respondents say improved safety is one of the main advantages of robotics in those environments, and a similar share points to productivity gains. Another finding challenges a common assumption about public fears: job loss is often described as the biggest concern surrounding robotics, but the study suggests security risks worry people more.

Around 51% of adults say their biggest concern about robots at work is the possibility that the machines could be hacked or misused. That fear outweighs worries about physical malfunction or injury, which stand at 41%. Concerns about being replaced at work register at that same level.

For many respondents, the issue is not simply whether robots can perform tasks. It is whether the systems controlling them are secure. According to researchers involved in the study, these concerns reflect how people evaluate emerging technologies. Instead of having a single opinion about robotics, people tend to judge each situation individually.

A robot helping assemble products in a factory may feel acceptable. The same technology operating in more sensitive environments can raise different questions. Dr. Jim Everett, an associate professor in moral psychology, says trust in artificial intelligence and robotics is often misunderstood. People are not simply asking whether they trust the technology, he notes. They are thinking about specific tools performing specific roles.

A robot assisting in a classroom or helping in healthcare carries different expectations than an AI system used in defense or surveillance. Even though these technologies are often grouped together in public debates, people evaluate them differently depending on their purpose.

Finally, the study highlights another important factor shaping public attitudes: experience. When people actually encounter robots, fear often declines. Michael Szollosy, a robotics researcher involved in the project, says reactions tend to change quickly when individuals meet a robot for the first time.

The idea of an autonomous machine can feel intimidating in theory. But when people see a small service robot or an industrial machine performing a straightforward task, the reaction is often much calmer. Exposure can shift perceptions from abstract fears to practical understanding.

That shift matters because robotics is moving steadily into everyday environments. From manufacturing and logistics to healthcare and public services, machines capable of autonomous or semi-autonomous work are becoming more common.

As that happens, the study suggests public confidence may depend less on technical breakthroughs and more on visibility and transparency. Burkhard Boeckem, chief technology officer at Hexagon AB, argues that trust grows when people understand what robots are designed to do and where their limits lie.

Anxiety tends to increase when systems feel invisible or poorly understood. Clear boundaries and clear explanations can have the opposite effect. When people see robots working safely alongside humans, performing well-defined tasks and operating within clear rules, the technology becomes easier to accept.

In that sense, the future of robotics may depend as much on public familiarity as on engineering. The machines themselves are advancing quickly. But the relationship between humans and robots is still being negotiated. For now, the study offers a simple insight: the more people encounter robots in everyday life, the less mysterious they become. And once the mystery fades, the conversation often changes from fear to curiosity.

Health & Biotech

How a Teen-Founded Startup Is Using AI to Reinvent Pesticide Discovery

Bindwell is testing a simple idea: use AI to design smarter, more targeted pesticides built for today’s farming challenges.

Updated

January 8, 2026 6:33 PM

Researcher tending seedlings in a laboratory environment. PHOTO: FREEPIK

Bindwell, a San Francisco–based ag-tech startup using AI to design new pesticide molecules, has raised US$6 million in seed funding, co-led by General Catalyst and A Capital, with participation from SV Angel and Y Combinator founder Paul Graham. The round will help the company expand its lab in San Carlos, hire more technical talent and advance its first pesticide candidates toward validation.  

Even as pesticide use has doubled over the last 30 years, up to 40% of global crops are still lost to pests and disease. The core issue is resistance: pests are adapting faster than the industry can update its tools. As a result, farmers often apply larger amounts of the same outdated chemicals, even as those chemicals deliver diminishing returns.

Meanwhile, innovation in the agrochemical sector has slowed, leaving the industry struggling to keep up with rapidly evolving pests. This is the gap Bindwell is targeting. Instead of updating old chemicals, the company uses AI to find completely new compounds designed for today’s pests and farming conditions.  

This vision is made even more striking by the people leading it. Bindwell was founded by 18-year-old Tyler Rose and 19-year-old Navvye Anand, who met at the Wolfram Summer Research Program in 2023. Both had deep ties to agriculture, Rose in China and Anand in India, and both had witnessed up close how pest outbreaks and chemical dependence burdened farmers.

Filling the gap in today’s pesticide pipeline, Bindwell created an AI system that can design and evaluate new molecules long before they hit the lab. It starts with Foldwell, the company’s protein-structure model, which helps map the shapes of pest proteins so scientists know where a molecule should bind. Then comes PLAPT, which can scan through every known synthesized compound in just a few hours to see which ones might actually work. For biopesticides, they use APPT, a model tuned to spot protein-to-protein interactions and shown to outperform existing tools on industry benchmarks.
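The models themselves are proprietary, but the screening step the article attributes to PLAPT can be illustrated with a toy sketch: score every compound in a library against a target protein, then rank the strongest predicted binders. Everything below is a hypothetical stand-in (the function names, the string-overlap "score"), not Bindwell's actual method.

```python
# Hypothetical virtual-screening sketch. The affinity model here is a
# toy stand-in (character overlap between two strings); a real system
# would predict binding from protein structure and compound chemistry.

def predict_affinity(target: str, compound: str) -> float:
    """Toy placeholder for a learned protein-ligand affinity model:
    fraction of the target's distinct characters shared by the compound."""
    target_chars = set(target)
    return len(target_chars & set(compound)) / max(len(target_chars), 1)

def screen(target: str, library: list[str], top_k: int = 3) -> list[str]:
    """Score every compound in the library against the target and
    return the top_k highest-scoring candidates."""
    ranked = sorted(library, key=lambda c: predict_affinity(target, c),
                    reverse=True)
    return ranked[:top_k]

# Rank four mock compounds against a mock pest-protein sequence.
hits = screen("MKTAYIAK", ["ACDE", "MKTA", "YYYY", "IAKM"], top_k=2)
```

In a real pipeline, `predict_affinity` would be replaced by the learned model, and the library would be the catalogue of known synthesized compounds the article describes.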

Bindwell isn’t selling AI tools. Instead, the company develops the molecules itself and licenses them to major agrochemical players. Owning the full discovery process lets the team bake in safety, selectivity and environmental considerations from day one. It also allows Bindwell to plug directly into the pipelines that produce commercial pesticides — just with a fundamentally different engine powering the science.

The team is now testing its first AI-generated candidates in its San Carlos lab and is in early talks with established pesticide manufacturers about potential licensing deals. For Rose and Anand, the long-term vision is simple: create pest control that works without repeating the mistakes of the last half-century. As they put it, the goal is not to escalate chemical use but to design molecules that are more precise, less harmful and resilient against resistance from the start.