Deep Tech

The Robot Anxiety Gap: Why Countries With Fewer Robots Fear Them More

A global survey shows robot anxiety drops when people encounter robots in real life

Updated

March 13, 2026 2:25 PM

Ameca the humanoid robot, featuring a grey rubber face. PHOTO: ADOBE STOCK

Robots are often assumed to make people uneasy wherever they appear. But a new global study suggests something more nuanced: robot anxiety tends to be highest in places where people rarely see robots in real life, while attitudes are often far more positive where robots are more visible. The finding comes from a study by Hexagon AB, which surveyed 18,000 participants across nine major markets, exploring how adults and children think about robots and how those views change with everyday exposure.

In the United Kingdom, anxiety about robots is the highest among the countries studied. Around 52% of adults say they feel worried that something might go wrong when they think about interacting with or working alongside robots. South Korea sits at the other end of the spectrum, with only 29% reporting similar concerns. One factor appears to explain much of the gap: familiarity.

British adults are among the least likely to have encountered robots in real life. Only about 30% say they have seen or used one. In contrast, countries where robots are more visible tend to report greater comfort. China offers the clearest example. Around 75% of adults there say they have seen or interacted with robots. At the same time, 81% say they feel excited about the technology’s future potential.

The study suggests that attitudes toward robots are not fixed. Instead, they shift depending on where people encounter them and what tasks they perform. When robots are seen solving clear, practical problems, confidence tends to rise.

Across the surveyed countries, adults report the highest comfort levels with robots working in factories and warehouses. Around 63% say they are comfortable with robots in those environments. These are settings where tasks are clearly defined and safety standards are well understood. Acceptance drops in more personal spaces. Only 46% say they feel comfortable with robots in the home, while comfort falls further to 39% when robots are imagined in classrooms.

In other words, context matters. People appear more willing to accept robots when they take on physically demanding or dangerous work. Half of the respondents say improved safety is one of the main advantages of robotics in those environments, and a similar share points to productivity gains.

Another finding challenges a common assumption about public fears. Job loss is often described as the biggest concern surrounding robotics, but the study suggests security risks worry people more.

Around 51% of adults say their biggest concern about robots at work is the possibility that the machines could be hacked or misused. That fear outweighs worries about physical malfunction or injury, which stand at 41%. Concerns about being replaced at work register at the same level.

For many respondents, the issue is not simply whether robots can perform tasks. It is whether the systems controlling them are secure. According to researchers involved in the study, these concerns reflect how people evaluate emerging technologies. Instead of having a single opinion about robotics, people tend to judge each situation individually.

A robot helping assemble products in a factory may feel acceptable. The same technology operating in more sensitive environments can raise different questions. Dr. Jim Everett, an associate professor in moral psychology, says trust in artificial intelligence and robotics is often misunderstood. People are not simply asking whether they trust the technology, he notes. They are thinking about specific tools performing specific roles.

A robot assisting in a classroom or helping in healthcare carries different expectations than an AI system used in defense or surveillance. Even though these technologies are often grouped together in public debates, people evaluate them differently depending on their purpose.

Finally, the study highlights another important factor shaping public attitudes: experience. When people actually encounter robots, fear often declines. Michael Szollosy, a robotics researcher involved in the project, says reactions tend to change quickly when individuals meet a robot for the first time.

The idea of an autonomous machine can feel intimidating in theory. But when people see a small service robot or an industrial machine performing a straightforward task, the reaction is often much calmer. Exposure can shift perceptions from abstract fears to practical understanding.

That shift matters because robotics is moving steadily into everyday environments. From manufacturing and logistics to healthcare and public services, machines capable of autonomous or semi-autonomous work are becoming more common.

As that happens, the study suggests public confidence may depend less on technical breakthroughs and more on visibility and transparency. Burkhard Boeckem, chief technology officer at Hexagon AB, argues that trust grows when people understand what robots are designed to do and where their limits lie.

Anxiety tends to increase when systems feel invisible or poorly understood. Clear boundaries and clear explanations can have the opposite effect. When people see robots working safely alongside humans, performing well-defined tasks and operating within clear rules, the technology becomes easier to accept.

In that sense, the future of robotics may depend as much on public familiarity as on engineering. The machines themselves are advancing quickly. But the relationship between humans and robots is still being negotiated. For now, the study offers a simple insight: the more people encounter robots in everyday life, the less mysterious they become. And once the mystery fades, the conversation often changes from fear to curiosity.

Artificial Intelligence

What Happens When AI Writes the Wrong References?

HKU professor apologizes after PhD student’s AI-assisted paper cites fabricated sources.

Updated

January 8, 2026 6:33 PM

The University of Hong Kong in Pok Fu Lam, Hong Kong Island. PHOTO: ADOBE STOCK

It’s no surprise that artificial intelligence, while remarkably capable, can also go astray—spinning convincing but entirely fabricated narratives. From politics to academia, AI’s “hallucinations” have repeatedly shown how powerful technology can go off-script when left unchecked.

Take Grok-2, for instance. In July 2024, the chatbot misled users about ballot deadlines in several U.S. states, just days after President Joe Biden dropped his re-election bid against former President Donald Trump. A year earlier, a U.S. lawyer faced sanctions in court for relying on ChatGPT to draft a legal brief, only to discover that the AI tool had invented entire cases, citations and judicial opinions. And now, the academic world has its own cautionary tale.

Recently, a journal paper from the Department of Social Work and Social Administration at the University of Hong Kong was found to contain fabricated citations—sources apparently created by AI. The paper, titled “Forty Years of Fertility Transition in Hong Kong,” analyzed the decline in Hong Kong’s fertility rate over the past four decades. Authored by doctoral student Yiming Bai, along with Yip Siu-fai, Vice Dean of the Faculty of Social Sciences, and other university officials, the study identified falling marriage rates as a key driver behind the city’s shrinking birth rate. The authors recommended structural reforms to make Hong Kong’s social and work environment more family-friendly.

But the credibility of the paper came into question when inconsistencies surfaced among its references. Out of 61 cited works, some included DOI (Digital Object Identifier) links that led to dead ends, displaying “DOI Not Found.” Others claimed to originate from academic journals, yet searches yielded no such publications.
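The dead-link check described above is straightforward to automate. The sketch below is illustrative only (the function names are my own, and this is not the investigators' actual tooling): it extracts DOI strings from reference text using Crossref's recommended pattern for modern DOIs, then asks the doi.org resolver whether each one exists. A fabricated DOI typically comes back as HTTP 404, which is what produces the "DOI Not Found" page.

```python
import re
import urllib.request
import urllib.error

# Crossref's recommended regex for matching modern (post-2000) DOIs in text.
DOI_PATTERN = re.compile(r'10\.\d{4,9}/[-._;()/:A-Za-z0-9]+')

def extract_dois(text):
    """Pull DOI strings out of a block of reference text."""
    return DOI_PATTERN.findall(text)

def resolver_url(doi):
    """Build the canonical doi.org resolver URL for a DOI."""
    return f"https://doi.org/{doi}"

def doi_resolves(doi, timeout=10):
    """Return True if doi.org can resolve the DOI (requires network access).

    A fabricated DOI typically returns HTTP 404 ("DOI Not Found").
    """
    req = urllib.request.Request(resolver_url(doi), method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except urllib.error.HTTPError:
        return False
```

Running `doi_resolves` over every extracted DOI in a reference list would flag the dead links automatically; the journal-name searches described next still require a human reader.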

Speaking to HK01, Yip acknowledged that his student had used AI tools to organize the citations but failed to verify the accuracy of the generated references. “As the corresponding author, I bear responsibility,” Yip said, apologizing for the damage caused to the University of Hong Kong and the journal’s reputation. He clarified that the paper itself had undergone two rounds of verification and that its content was not fabricated—only the citations had been mishandled.

Yip has since contacted the journal’s editor, who accepted his explanation and agreed to re-upload a corrected version in the coming days. A formal notice addressing the issue will also be released. Yip said he would personally review each citation “piece by piece” to ensure no errors remain.

As for the student involved, Yip described her as a diligent and high-performing researcher who made an honest mistake in her first attempt at using AI for academic assistance. Rather than penalize her, Yip chose a more constructive approach, urging her to take a course on how to use AI tools responsibly in academic research.

Ultimately, in an age where generative AI can produce everything from essays to legal arguments, there are two lessons to take away from this episode. First, AI is a powerful assistant, but only that. The final judgment must always rest with us. No matter how seamless the output seems, cross-checking and verifying information remain essential. Second, as AI becomes integral to academic and professional life, institutions must equip students and employees with the skills to use it responsibly. Training and mentorship are no longer optional; they’re the foundation for using AI to enhance, not undermine, human work.

Because in this age of intelligent machines, staying relevant isn’t about replacing human judgment with AI; it’s about learning how to work alongside it.