Do you think a robot should be allowed to lie? A new study published in Frontiers in Robotics and AI investigates what people think of robots that deceive their users.
The researchers used examples of robots lying to people to find out whether some lies are acceptable, and how people might justify them.
Social norms say it can be okay for people to lie, if it protects someone from harm. Should a robot be allowed the same privilege to lie for the greater good? The answer, according to this study, is yes – in some cases.
Three types of lies
This is important, because robots are no longer reserved for science fiction. Robots are already part of our daily lives. You can find them vacuum-cleaning your floors at home, serving you at restaurants, or giving your elderly family member companionship. In factories, robots are helping workers assemble cars.
Several companies, like Samsung and LG, are even developing robots that may soon be able to do more than just vacuuming. They could do your house chores or play your favourite song if you look sad.
The new study, led by cognition researcher Andres Rosero from George Mason University in the United States, looked at three ways robots might lie to people:
Type 1: The robot could lie about something other than itself.
Type 2: The robot could hide the fact it is able to do something.
Type 3: The robot could pretend it is able to do something even though it is not.
The researchers wrote brief scenarios based on each of those deceptive behaviours, and presented the stories to 498 people in an online survey.
Respondents were asked if the robot’s behaviour was deceptive, and whether or not they thought the behaviour was okay. The researchers also asked respondents if they thought the robot’s behaviour could be justified.
What did the survey find?
While all types of lies were recognised as deceptive, respondents still approved of some types of lies and disapproved of others. On average, people approved of type 1 lies, but not type 2 and type 3.
A majority of respondents (58%) thought a robot lying about something other than itself (type 1) is justified if it spares someone’s feelings or prevents harm.
This was the case in one of the stories involving a medical assistant robot that would lie to an elderly woman with Alzheimer’s about her husband still being alive. “The robot was sparing the woman [from] painful emotions,” said one respondent.
On average, respondents didn’t approve of the other two types of lies, though. Here, the scenarios involved a housekeeping robot in an Airbnb rental and a factory robot co-worker.
In the rental scenario, the housekeeping robot hides the fact it records videos while doing chores around the house. Only 23.6% of respondents justified the video recordings by arguing it could keep the house visitors safe or monitor the quality of the robot’s work.
In the factory scenario, the robot complains about the work by saying things like “I’ll be feeling really sore tomorrow”. This gives the human workers the impression the robot can feel pain. Only 27.1% of respondents thought it was okay for the robot to lie, saying it’s a way to connect with the human workers.
“It’s not harming anyone; it’s just trying to be more relatable,” said one respondent.
Interestingly, respondents often held someone besides the robot responsible for the lie. In the case of the housekeeping robot hiding its video recording capability, 80.1% of respondents also blamed the house owner or the robot’s programmer.
A robot vacuum hiding its surveillance capabilities does so because it was programmed that way. Kokosha Yuliya/Shutterstock
Early days for lying robots
If a robot is lying to someone, there could be an acceptable reason for it. There are many philosophical debates in research about how robots should fit in with society’s social norms. For example, these debates ask whether it is ethically wrong for robots to simulate affection for people, or whether there could be moral justification for doing so.
This study is the first to ask people directly what they think about robots telling different types of lies.
Previous studies have shown that if we find out robots are lying, it damages our trust in them.
Perhaps, though, robot lies are not that straightforward. It depends on whether or not we believe the lie is justified.
The questions then are: who decides what justifies a lie or not? Whom are we protecting when we decide whether or not a robot should be allowed to lie? It might simply not be okay, ever, for a robot to lie.