He’s shifting in his seat. Talking fast. Looking away. Touching his face. Whatever he’s saying, it definitely doesn’t seem true.
Turns out, it might be.
It’s tempting to fall back on conventional wisdom in looking for the signs of a lie. But really, lying is much more complicated. And as a society, we’re still fairly bad at detecting deception — even when the stakes are very high.
But new strategies have emerged to make the pursuit of truth more accurate. One approach relies on the content of a liar’s words. The other, on counterintuitive clues from speech. Neither is perfect, but in a field that relies on outdated methods to catch lies, both are backed by empirical evidence.
What People Think Liars Do
Cultures all over the world have largely agreed on a collection of signs and signals that indicate dishonesty. “It turns out to be a strikingly universal belief that lies show,” said Maria Hartwig, an expert in deception detection, and a professor of psychology at the John Jay College of Criminal Justice at the City University of New York, “and particularly so in signs of discomfort, anxiety, stress — negative emotions.”
In 2016, as part of a study with around 200 participants, researchers asked both police officers and undergraduate students what cues they believed indicated deception. They listed stereotypical signs, like nervousness, gaze aversion, movement and sweating. As it turned out, those signs weren’t actually good predictors of lying and truth-telling.
In a review that looked at over 100 studies that compared truth-telling behavior with lying behavior, Bella M. DePaulo and a team of researchers found that of the 158 "cues" to deception that the studies collectively mentioned, none were strongly associated with lying. That is, liars didn’t actually shift their gaze, talk faster, or blink much more or less than truth-tellers. Even the cues with the strongest associations — not being forthcoming, being more negative, leaving out detail — were still weak.
In the end, the reviewers conclude, everyone lies — and we’re so used to lying that predictable signs of deception are barely detectable. They write: “We believe that most deceptive presentations are so routinely and competently executed that they leave only faint behavioral residues.”
Why It Matters
It’s one thing to dissect a teenager’s story about where they were last night. It’s another altogether when a false account puts an innocent person in prison for life. Or when a decision about national security comes down to the veracity of one person’s testimony. It’s these statements, with their enormous consequences, that society collectively strives to appraise accurately, whether through police interrogations, trials, or agencies like the TSA and the CIA. Unfortunately, the systems in place for separating truth from lies — for suspecting guilt to begin with — are flawed. Hartwig said what first motivated her to enter her field was the wrongful conviction of the Central Park Five, a group of Black and Latino teens who, after coerced confessions, served years in correctional facilities for a crime they didn’t commit.
Identifying truthful accounts could reduce coerced confessions, which, according to the Innocence Project, account for almost 30 percent of cases where a wrongfully convicted person is exonerated by DNA evidence.
“Apart from the criminal justice system, from a national security perspective, the consequences are significant,” says Hartwig. Incorrect intelligence from a source during conflict could lead to the death of innocent people — and many might point out, as Hartwig does, that the Iraq War originated from false intelligence. And though miscarriages of justice and bad intelligence are complicated by many factors, interrogation and interview techniques that yield bad information play a unique role.
Why Conventional Methods Aren’t Working
Ordinary people aren’t good at detecting lies. In fact, we often do worse than chance. We’re a little better at picking out truth, but not by much. One might wonder, then, whether professionals tasked with telling lies from truths are any better at it. Evidence suggests they are not, even when analyzing recordings of a real murderer lying.
Experience may not help, and other widely used methods that lend an air of objectivity to lie detection are also problematic. A statement evaluation method called SCAN has been criticized by experts, and polygraph machines, which Hirschberg calls “completely unreliable,” have been reassessed in recent years. According to the American Psychological Association, polygraphs, which measure things like respiration, heart rate and skin conductivity, are flawed because “There is no evidence that any pattern of physiological reactions is unique to deception.” A liar could have an even heart rate, and a truth-teller could see theirs spike from nerves.
According to a Law and Human Behavior article from 2015, the most common method of questioning that criminal investigators were trained in was the Reid Technique, which employs directives like opening with a “direct positive confrontation” — or telling the suspect that the investigation so far has found evidence that they are guilty — and developing a “theme” — suggesting reasons the suspect may have committed the crime that will psychologically justify or excuse it, in an attempt to get them to agree.
Julia Hirschberg, an expert in computational linguistics and natural language processing and a professor of computer science at Columbia University, who researches and develops methods of deception detection, said that she had taken the Reid Technique training. “Once you decide who might be a criminal, then you come up with these really hard-ass questions that are just scary and you assume that they're guilty until they prove that they're not.”
A Focused Questioning Technique
Out of the collection of evidence suggesting that lies don’t consistently reveal themselves in behavioral cues, a number of new strategies have emerged as alternatives to traditional police interrogation. These techniques rely on what a person says, not how they say it.
Hartwig helped to develop one of them — a questioning style known as SUE, or the strategic use of evidence technique. Different from the Reid method but similar to other questioning methods, it is meant not to intimidate but to draw out contradictions in a false statement or confirm a truthful account. Hartwig describes it as similar to a “psychological game or strategy where the person who knows more about the other person's strategies tend to win.”
In SUE, one doesn’t show all their cards at once — or, put another way, “If I'm going to play somebody in a chess game, it's to my advantage to have seen them play before,” she says.
Hartwig gives an example she’s used in testing scenarios for the technique: In one scenario, a role-player steals a wallet from a briefcase in a bookstore. In another, a role-player moves a briefcase in a bookstore to find a specific book they were looking for. An interviewer who knows certain details about the case, for example that fingerprints were found on the briefcase, tries to determine whether the person they interview is telling the truth or lying.
In a "strategic use of evidence" approach, the questioner might begin with general questions, seeing if the account matches what they already know to be true without revealing what they know about the fingerprints right away, and narrow in on the key detail methodically. Someone who is trying to be deceptive, for example, might not mention going to the bookstore or seeing a suitcase right away, while a truth-teller might bring these details up more readily.
In both cases, Hartwig says, the interviewee is treated the same — after all, an innocent person who doesn’t mention a briefcase might just have misremembered their day. But an interview like this gives the questioner more room to calmly catch a suspect in a lie, by withholding known evidence until necessary, and to accurately identify a truth-teller, than interrogation techniques that operate on a presumption of guilt. “When you haven't been humiliated and attacked and berated, you've been given ample opportunity to give your side of the story,” she said. “It's just your side of the story doesn't match up with a known or checkable fact.”
And while Hartwig says many practitioners insist they already do this, “once you put them to the test, they don't,” she said. In a study of police trainees, those who hadn’t been trained in the technique, but in other strategies, detected deception accurately 56 percent of the time. Those who underwent the SUE training had an accuracy rate of 85.4 percent.
A Machine Learning Approach
Another approach digs further into how a person presents information, but instead of zeroing in on eye movement or fidgeting, the focus is on elements of speech: its linguistic content and, in particular, its prosody — the sound, rhythm and intonation of speech. Hirschberg uses these elements in her research.
Together with her team, Hirschberg has identified features of both deceptive and truthful speech — and also which kinds of language are trusted and which are not. For example, in one study, they looked at dialogue between participants who played a “lying game” with one another, asking a randomly paired partner 24 questions to which the partner responded truthfully half the time and deceptively the other half. The roles were then reversed, and both participants reported, for each question, whether they thought the answers were true or false.
They found that deceptive interviewees gave longer responses and used more words. “Filled pauses” — pauses filled by “ums” and “uhs” — also tended to indicate deceptive speech. But even though interviewers did pick up on some of those clues, their accuracy in detecting lies was 47.93 percent — worse even than chance. “Basically, the idea is, people are just really bad at this,” said Hirschberg.
However, a machine-learning model they trained to identify deceptive speech performed much better. Taking into account the actual cues of deception in speech — including 93 word-use patterns (words related to certain emotional states, filler words), 23 linguistic patterns (like pauses, laughter, contractions, denials), and response length, among others — the researchers were able to automatically detect deceptive answers with 72.4 percent accuracy.
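For a sense of what such a pipeline looks like in code, here is a minimal sketch, not the researchers' actual system: a generic classifier trained on hand-crafted counts of the kinds of cues the article mentions (filler words, pauses, laughter, denials, response length). The feature names, the randomly generated toy data and the choice of a random-forest model are illustrative assumptions.

# Illustrative sketch only: a generic feature-based deception classifier.
# The feature set, toy data, and model choice are assumptions, not the
# researchers' actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row describes one interview answer with hand-crafted features
# roughly analogous to those described in the article.
feature_names = [
    "filler_word_count",   # "um", "uh"
    "emotion_word_count",  # words tied to emotional states
    "pause_count",         # filled and unfilled pauses
    "laughter_count",
    "denial_count",
    "response_length",     # total words in the answer
]

# Toy data: random counts and random labels, just to show the pipeline shape.
# Real studies would use transcribed answers with human-annotated labels.
rng = np.random.default_rng(0)
n_answers = 200
X = rng.poisson(lam=[3, 2, 2, 1, 1, 40], size=(n_answers, len(feature_names)))
y = rng.integers(0, 2, size=n_answers)  # 1 = deceptive, 0 = truthful

# Train and evaluate a simple classifier on the feature vectors.
model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2%}")

In the actual studies, the feature vectors came from transcribed and acoustically analyzed interview answers with known truth labels, which is where the reported 72.4 percent accuracy comes from; the toy data above carries no real signal and only shows the shape of the approach.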
To top it off, a more recent study from Hirschberg, using additional recorded lies and truths from a crowdsourced game her team designed called LieCatcher, found that people completely misplaced their suspicion: “They trusted the kind of states that actually was a significant cue to deception. So they went the opposite way.” And the cues that interviewers found trustworthy weren’t reliable predictors of truth either.
“Quite honestly, I think it'd be helpful if people had some machine learning programs that they could use, particularly if they're people whose job is to be able to detect deception,” Hirschberg said, “like police, who are not good at it.”
As we get closer to accurately sifting truth from lies where it matters most, no method has emerged as foolproof — and there’s certainly no one tell-tale sign of a liar. “What we see when we compare this massive, massive body of data at this point,” said Hartwig, “is that there is no Pinocchio’s nose.”