
The Robot Anxiety Gap: Why Countries With Fewer Robots Fear Them More

A global survey shows robot anxiety drops when people encounter robots in real life

Updated

March 13, 2026 2:25 PM

Ameca the humanoid robot, featuring a grey rubber face. PHOTO: ADOBE STOCK

It is easy to assume that robots make people uneasy everywhere. But a new global study suggests something more nuanced: robot anxiety tends to be highest in places where people rarely see robots in real life, while attitudes are often far more positive where robots are more visible. That insight comes from a global study by Hexagon AB, which surveyed 18,000 participants across nine major markets. The research explored how adults and children think about robots and how those views change depending on everyday exposure.

In the United Kingdom, anxiety about robots is highest among the countries studied. Around 52% of adults say they feel worried that something might go wrong when they think about interacting with or working alongside robots. South Korea sits at the other end of the spectrum, with only 29% reporting similar concerns. One factor appears to explain much of the gap: familiarity.

British adults are among the least likely to have encountered robots in real life. Only about 30% say they have seen or used one. In contrast, countries where robots are more visible tend to report greater comfort. China offers the clearest example. Around 75% of adults there say they have seen or interacted with robots. At the same time, 81% say they feel excited about the technology’s future potential.

The study suggests that attitudes toward robots are not fixed. Instead, they shift depending on where people encounter them and what tasks they perform. When robots are seen solving clear, practical problems, confidence tends to rise.

Across the surveyed countries, adults report the highest comfort levels with robots working in factories and warehouses. Around 63% say they are comfortable with robots in those environments. These are settings where tasks are clearly defined and safety standards are well understood. Acceptance drops in more personal spaces. Only 46% say they feel comfortable with robots in the home, while comfort falls further to 39% when robots are imagined in classrooms.

In other words, context matters. People appear more willing to accept robots when they take on physically demanding or dangerous work. Half of the respondents say improved safety is one of the main advantages of robotics in those environments, and a similar share points to productivity gains as another benefit.

Another finding challenges a common assumption about public fears. Job loss is often described as the biggest concern surrounding robotics. But the study suggests security risk worries people more.

Around 51% of adults say their biggest concern about robots at work is the possibility that the machines could be hacked or misused. That fear outweighs worries about physical malfunction or injury, which stand at 41%. Concerns about being replaced at work register at the same level, 41%.

For many respondents, the issue is not simply whether robots can perform tasks. It is whether the systems controlling them are secure. According to researchers involved in the study, these concerns reflect how people evaluate emerging technologies. Instead of having a single opinion about robotics, people tend to judge each situation individually.

A robot helping assemble products in a factory may feel acceptable. The same technology operating in more sensitive environments can raise different questions. Dr. Jim Everett, an associate professor in moral psychology, says trust in artificial intelligence and robotics is often misunderstood. People are not simply asking whether they trust the technology, he notes. They are thinking about specific tools performing specific roles.

A robot assisting in a classroom or helping in healthcare carries different expectations than an AI system used in defense or surveillance. Even though these technologies are often grouped together in public debates, people evaluate them differently depending on their purpose.

Finally, the study highlights another important factor shaping public attitudes: experience. When people actually encounter robots, fear often declines. Michael Szollosy, a robotics researcher involved in the project, says reactions tend to change quickly when individuals meet a robot for the first time.

The idea of an autonomous machine can feel intimidating in theory. But when people see a small service robot or an industrial machine performing a straightforward task, the reaction is often much calmer. Exposure can shift perceptions from abstract fears to practical understanding.

That shift matters because robotics is moving steadily into everyday environments. From manufacturing and logistics to healthcare and public services, machines capable of autonomous or semi-autonomous work are becoming more common.

As that happens, the study suggests public confidence may depend less on technical breakthroughs and more on visibility and transparency. Burkhard Boeckem, chief technology officer at Hexagon AB, argues that trust grows when people understand what robots are designed to do and where their limits lie.

Anxiety tends to increase when systems feel invisible or poorly understood. Clear boundaries and clear explanations can have the opposite effect. When people see robots working safely alongside humans, performing well-defined tasks and operating within clear rules, the technology becomes easier to accept.

In that sense, the future of robotics may depend as much on public familiarity as on engineering. The machines themselves are advancing quickly. But the relationship between humans and robots is still being negotiated. For now, the study offers a simple insight: the more people encounter robots in everyday life, the less mysterious they become. And once the mystery fades, the conversation often changes from fear to curiosity.



From Security Scores to Dollar Risk: Quantara AI Pushes Continuous Cyber Risk Modeling

Quantara AI launches a continuous platform designed to estimate the financial impact of cyber risk as companies move beyond periodic assessments

Updated

February 20, 2026 6:43 PM

A person tightrope walking between two cliffs. PHOTO: UNSPLASH

Cyber risk is increasingly treated as a financial issue. Boards want to know how much a cyber incident could cost the company, how it could affect earnings, and whether current security spending is justified.

Yet many organizations still measure cyber risk through periodic reviews. These assessments are often conducted once or twice a year, supported by consultants and spreadsheet models. By the time the report reaches senior leadership, the company’s systems may have changed and new threats may have emerged. The way risk is measured does not always match how quickly it evolves.

This gap is where Quantara AI is positioning its new platform. Quantara AI, a Boise-based cybersecurity startup, has introduced what it describes as the industry’s first persistent AI-powered cyber risk solution. The system is designed to run continuously rather than rely on occasional assessments.

The company’s core argument is straightforward: not every security weakness carries the same financial consequence. Instead of ranking issues only by technical severity, the platform analyzes active threats, identifies which company systems are exposed, and estimates how much money a successful attack could cost. It uses statistical models, including Value at Risk (VaR), to calculate potential losses. It also estimates how specific security improvements could reduce that projected loss.
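Quantara has not published its models, but the general idea of estimating Value at Risk from a simulated loss distribution can be sketched. The sketch below is illustrative only: the incident probability and the lognormal severity parameters are assumptions chosen for the example, not figures from the company.

```python
import random

def simulate_annual_loss(p_incident, mu, sigma):
    """Simulate one year's cyber loss: an incident occurs with
    probability p_incident; if it does, the loss severity is drawn
    from a lognormal distribution, a common modeling choice for
    heavy-tailed losses."""
    if random.random() < p_incident:
        return random.lognormvariate(mu, sigma)
    return 0.0

def value_at_risk(losses, confidence=0.95):
    """VaR at the given confidence level: the threshold that the
    simulated annual losses stay at or below `confidence` of the
    time (an empirical quantile of the loss distribution)."""
    ordered = sorted(losses)
    index = int(confidence * len(ordered)) - 1
    return ordered[max(index, 0)]

random.seed(42)
# Hypothetical inputs: a 30% chance of a material incident per year,
# with lognormal severity (mu=13 gives a median loss near $440,000).
losses = [simulate_annual_loss(0.30, 13.0, 1.0) for _ in range(100_000)]
var_95 = value_at_risk(losses, confidence=0.95)
print(f"Estimated 95% annual VaR: ${var_95:,.0f}")
```

In a platform like the one described, re-running this kind of simulation as threat data and system exposure change, and comparing the VaR before and after a proposed security fix, is one plausible way to express how a specific improvement reduces projected loss.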

The timing aligns with a broader market shift. International Data Corporation (IDC) projects that by 2028, 40% of enterprises will adopt AI-based cyber risk quantification platforms. These tools convert security data into financial estimates that can guide budgeting and investment decisions. The forecast reflects growing pressure on security leaders to present risk in terms that boards and regulators understand.

Traditional compliance and risk management systems often focus on meeting regulatory standards. Vulnerability management programs typically score weaknesses based on technical characteristics. Consultant-led risk studies provide detailed analysis, but they are usually performed at set intervals. In fast-changing threat environments, that model can leave decision-makers working with outdated information.

Quantara’s platform attempts to replace that periodic process with continuous measurement. It brings together threat data, internal system information and financial modeling in one system. The goal is to show, at any given time, which specific weaknesses could lead to the largest financial losses.

Cyber risk quantification as a concept is not new. What is changing is the expectation that these calculations be updated regularly and tied directly to financial decision-making. As cyber incidents carry clearer monetary consequences, companies are looking for ways to measure exposure with greater precision.

The broader question is whether enterprises will shift fully toward continuous, AI-driven risk analysis or continue relying on periodic external assessments. What is clear is that cybersecurity discussions are moving closer to financial reporting — and tools that estimate potential loss in dollar terms are becoming central to that shift.