Can AI detect when a person is lying — should we trust it?
Can a machine really know when someone is lying? A new study suggests that the answer is far more complicated than many hoped.

Edited By: Joseph Shavit

A new study shows AI struggles to detect human lies, raising concerns about accuracy, bias and real-world risks. (CREDIT: Unsplash)
Artificial intelligence has already reshaped daily life, from writing emails to diagnosing illnesses, yet one question cuts deeper than convenience. Can a machine really know when someone is lying? A new study in the Journal of Communication suggests that the answer is far more complicated than many hoped.
Think about the small ways trust supports your daily routine. You rely on a friend to share honest advice, a doctor to explain a diagnosis clearly, or an employer to evaluate your work fairly. When doubt creeps in, the stakes rise. That fear grows even sharper when you imagine an AI system, not a person, deciding whether you are lying.
The emotional weight of being misread by a machine becomes clear when you picture the consequences for a job interview, a police stop, or a difficult conversation where truth feels fragile.
How Researchers Put AI to the Test
A team led by Michigan State University communication scholar David Markowitz and University of Oklahoma researcher Timothy Levine wanted to explore whether AI behaves anything like people when asked to judge honesty.
Their work involved twelve experiments and more than 19,000 separate AI evaluations. The design stretched across short video clips, longer interviews, audio-only recordings, and even different “personas” given to the AI, such as telling it to act like an FBI analyst or a first-year college student.
The researchers used the Viewpoints AI platform to show the model either truthful or deceptive statements. The statements ranged from questions about cheating in a game to personal stories about friends. After reviewing each clip, the AI had to label the speaker as truthful or lying and explain its choice. The team then compared its judgments with what Truth-Default Theory predicts about human behavior. That theory suggests people generally assume others are honest because constant suspicion comes with social and emotional costs.
“Humans have a natural truth bias,” Markowitz said. “We usually think others are being honest because doubting everyone would make everyday life difficult.”
What the AI Got Wrong
The findings show that AI often strayed far from human judgment. In a short interrogation scenario, the model labeled many people as deceptive even when they were telling the truth. It showed high accuracy for lies at 85.8 percent but extremely poor accuracy for truths at only 19.5 percent. That pattern flips the human tendency to trust first.
Longer clips made performance worse. When the AI listened to interviews that averaged a little more than two minutes, overall accuracy dropped to about 42.7 percent. In a condition that mirrored real life, where lies are rare, accuracy fell to 15.9 percent. That result suggests that when dishonesty becomes harder to spot, the system buckles under the pressure.
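A rough back-of-the-envelope sketch helps explain why rare lies are so punishing for a lie-leaning judge. The numbers below are illustrative assumptions, not figures from the study: they simply show how a judge that catches most lies but disbelieves most truth-tellers ends up wrong about most people once honest speakers dominate the pool.

```python
# Illustrative sketch only: assumed rates, not the study's data.
def overall_accuracy(truth_rate, acc_on_truths, acc_on_lies):
    """Accuracy across all speakers, weighted by how many are truthful."""
    return truth_rate * acc_on_truths + (1.0 - truth_rate) * acc_on_lies

# Hypothetical lie-biased judge: believes few truths, catches most lies.
acc_on_truths, acc_on_lies = 0.20, 0.85

for truth_rate in (0.50, 0.80, 0.95):  # lies become rarer as this rises
    acc = overall_accuracy(truth_rate, acc_on_truths, acc_on_lies)
    print(f"{truth_rate:.0%} truthful speakers -> {acc:.1%} overall accuracy")
```

With these assumed per-class accuracies, overall accuracy drops from roughly 53 percent in a half-and-half pool to about 23 percent when 95 percent of speakers are honest, which mirrors the direction of the collapse the study reports.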
Not every test went poorly. When the topic shifted to personal stories about friends, the AI leaned toward believing people again. Accuracy rose to 57.7 percent. Yet even this improvement came with a catch. The bias switched from doubting nearly everyone to trusting almost everyone, depending on the context. That level of inconsistency poses its own set of problems.
Persona Tweaks Do Not Fix the Problem
In several experiments, the researchers tried giving the AI a professional identity. They told it to “act” like a federal investigator or a student, hoping it might anchor its decisions differently. These persona changes did shift whether the AI was more likely to trust or doubt, but they did not improve accuracy in any meaningful way.
“AI turned out to be sensitive to context, but that did not make it better at spotting lies,” Markowitz said.
Even when the team added background information, performance still wandered away from human norms. Sometimes the AI became more suspicious, sometimes more trusting, but rarely more correct.
Why AI Struggles With Deception
Deception detection is a difficult skill even for people. Visual cues, voice tone and body posture are unreliable. Humans barely score above chance, with accuracy hovering around 54 percent. If people struggle, expecting a machine to succeed is optimistic.
The researchers believe several forces may weaken AI performance. More information, such as longer clips, sometimes overwhelms the system. The model may rely too heavily on patterns that do not actually signal lying. And unlike humans, AI systems lack emotional and social experience. They do not navigate embarrassment, guilt, or fear. Without that lived understanding, the subtlety of deception becomes hard to decode.
“It is easy to see why people might want to use AI to spot lies,” Markowitz said. “But our research shows that we are not there yet.”
Real-World Risks and Caution
The appeal of using AI for lie detection is understandable. Machines seem fast, objective and tireless. Yet the study shows that relying on AI to judge truthfulness could produce serious errors. A system that labels almost everyone as lying, or almost everyone as honest, depending on the setup, threatens fairness in settings where accuracy matters most.
The findings also raise questions for the future. Will AI ever learn human-like social judgment? Should it? And how do researchers build systems that stay reliable when conditions shift?
Practical Implications of the Research
These findings carry weight for anyone working in security, courts, journalism, education or healthcare. They show that AI is not ready to make credibility judgments that affect people’s lives or reputations.
The study also encourages a deeper look at the limits of artificial intelligence. By understanding why machines struggle with human truthfulness, researchers can explore safer and more ethical uses of AI that support people rather than judge them.
Future work may focus on building systems that explain uncertainty, avoid harmful bias and help humans make informed decisions without taking control away from them.
Research findings are available online in the Journal of Communication.
Shy Cohen
Science & Technology Writer



