
Detection of Deception

Summary and Keywords

Much research has examined people’s ability to correctly distinguish between honest and deceptive communication. The ability to detect deception is useful, but many misconceptions about effective lie detection have been documented. Research on deception is especially informative because its findings often contradict common sense. For example, both folk wisdom and several social scientific theories hold that lies can be detected through the careful observation of nonverbal behaviors. Yet research shows that most of the nonverbal behaviors stereotypically linked with deception have less diagnostic value than presumed. The widely accepted conclusion from decades of research is that although people are statistically better than chance at detecting lies, they are poor lie detectors in an absolute sense, averaging just 54 percent accuracy. Poor accuracy findings hold across the biological sex of the sender and judge, adult age and occupation, various types of media, spontaneous and planned lies, and more and less potent motivations for lying. Research also finds that people are usually truth-biased; that is, people tend to believe other people more often than not. As a consequence of truth-bias, accuracy for honest communication is typically higher than accuracy for lies, a finding known as the veracity effect. Subsequent research has yielded promising findings suggesting various ways deception detection accuracy can be improved. Focusing on communication content, especially when understood in context, understanding the motives for deception, using evidence, and persuading senders to be honest have all been shown to improve lie detection accuracy in recent experiments.

Keywords: lying, deception detection, nonverbal cues, linguistic cues, truth-bias, communication content

Introduction to the Social Scientific Study of Deception Detection

This entry focuses on the topic of deception detection. It covers research on how good people are at detecting lies and on the most effective ways to detect them. Detecting lies is a really useful ability. It would be good to know if that investment you are about to make is a scam or if that person you are attracted to is not what they seem (and not in a good way). Well, research can help. We now know much more about lie detection than we did even ten years ago. These are exciting times in deception research, and there have been many key breakthroughs in recent years. But before getting to the theory and research, this entry begins with a series of questions. Asking these questions up front is worthwhile because they will help the reader better understand the topic.

First, how can you tell when someone is lying? Second, if a great many people were asked that first question, what do you think would be the most frequent answer? Third, how likely is it that most readers will answer the first two questions the same way? That is, is there much consensus in people’s beliefs about the things that give lies away?

Moving on, how many times have you lied in the past twenty-four hours? Think back to the last time you lied. Did the person you lied to detect your lie or did they seem to believe you? If you don’t know, what’s your best guess? Next, if you were going to rate yourself in terms of your ability to detect lies on a scale of 0 (no ability whatsoever) to 100 (you always know when you are being lied to and never doubt people who are actually honest), what rating would you give yourself?

This is the last set of questions. Now, think back to the time you discovered that someone had lied to you. How did you find out that what was said was a lie? That is, what kind of lie detection method did you use? Did you know you were being lied to at the time, or did you only find out afterward?

Take a moment to think about your answers as a package. Do they present a coherent picture, or (thinking critically here) do there seem to be some contradictions? If you answered the questions above like most research participants, there should be some things that don’t seem to add up. For example, research tells us that if you ask people how you can tell when someone is lying, you get very different sorts of answers than if you ask people to recall a lie they detected and how the lie was discovered. As another example, if we ask people how good they are at detecting lies, most people think they can usually tell when others lie. Most people are reasonably confident in their ability to tell when others are lying, especially if they know the other person well. But, if we ask people about when they lie, they report that their lies usually succeed. Both cannot be true, right?

This entry describes what research tells us about each of the questions asked above. Research provides good, clear answers to each of those questions. Surprisingly, many of those results have even been found to replicate across cultures. A major lesson learned from systematic research is that people’s intuition and folk wisdom about deception are often wrong. Things just aren’t what they seem in the realm of deception.

Before moving on, one more question. True or false: Most people lie less often than the average person. Think you know the answer? Perhaps you got it wrong. Believe it or not, it is a true statement. Can you guess why most people are well below average when it comes to frequency of lying? According to the data from one national survey (Serota, Levine, & Boster, 2010), most people (about 60 percent) report telling no (zero) lies in the past 24 hours. The average person, however, tells about 1.6 lies per day. The reason that the median (the 50th-percentile score, in this case zero) and the mode (the most frequently occurring score, again zero) are less than the average is that there are a few people who lie a whole lot, and they pull up the average. Lying, like many socially disapproved behaviors, has a skewed distribution. Seventy-five percent of people really do lie less frequently than average (Serota et al., 2010). And these findings hold up. They have now been independently replicated and validated several times (Serota & Levine, 2015).
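To make the arithmetic concrete, here is a minimal numerical sketch of how a skewed distribution produces this pattern. The counts below are invented purely for illustration; they only mimic the general shape reported by Serota et al. (2010) and are not their data.

```python
# Hypothetical lies-told-in-the-past-24-hours reports from 20 people:
# most report zero, while a few prolific liars report many.
import statistics

lies_per_day = [0] * 12 + [1, 1, 2, 2, 3, 5, 7, 10]

print(statistics.mean(lies_per_day))    # 1.55 -- the mean is pulled up by the prolific liars
print(statistics.median(lies_per_day))  # 0    -- the 50th-percentile score
print(statistics.mode(lies_per_day))    # 0    -- the most frequent score

# 16 of the 20 hypothetical people fall below the mean, so most people
# lie less often than "average."
```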

By the way, if you are a college student, most likely you did not answer zero to the lies per day question. That finding was for all adults over 18. Lying declines with age. Teenagers lie most, college students less, and older adults less still (Levine, Serota, Carey, & Messer, 2013). But all groups show the same pattern. Most people are mostly honest most of the time. Most people are basically honest. But, in every group, there are a few prolific liars who lie much more often than most people. The implication of this for deception detection is that lying is simply not random. Believing people makes sense most of the time. The trick is identifying those people that lie often and those situations where people are tempted to lie.

Defining Deception Detection

Deception is knowingly, intentionally, or purposely misleading another person (Levine, 2014). Definitions of deception typically exclude honest mistakes, transparently false statements, and self-deception. It is also important to remember that outright lies (intentional statements that a communicator knows to be both false and misleading) are not the only way to deceive (McCornack, 1992, 1997). Omitting some key information is probably the most common type of deceptive communication (McCornack et al., 2014). Other alternatives to outright lying include equivocation and evasion (McCornack, 1992). While lies and deception are often used interchangeably, they are not the same thing. Lies are a subtype of the broader concept of deception. Most deception detection research studies lies per se rather than more subtle types of deception.

Deception detection refers to the extent to which a person can distinguish between honest and deceptive communication. In the lab, deception detection tasks are much like true-false tests. Some research participants are senders and some are judges. Senders either lie or tell the truth about something. Judges try to ascertain which communications are honest and which are lies. An honest message judged as honest is correct, as is a lie correctly judged as a lie, the latter sometimes being called a “hit.” Honest statements judged as lies (sometimes called false alarms) and lies misjudged as honest are errors. Deception detection accuracy (total accuracy) is scored just like a true-false test. The number of correct judgments is divided by the total number of judgments (the number correct plus the number incorrect) to yield a percentage of correctness. These percentages, like all percentages, can range from 0 to 100 percent. Zero percent means the judges were always wrong, 100 percent means they were perfect and never wrong, and 50 percent would reflect the pure chance rate expected from guessing. Accuracy can also be scored separately for just truths or just lies (truth accuracy and lie accuracy). It is also common to score truth-bias, the percentage of messages judged as honest. Thus, when reading the results of a deception detection experiment, look for accuracy, truth-accuracy, lie-accuracy, and truth-bias scores. Each of these can be understood as simple percentages.
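As a concrete illustration of this scoring, the sketch below computes each index for a hypothetical set of 20 judgments of 10 honest messages and 10 lies. The numbers are invented for illustration and are not drawn from any study.

```python
# Hypothetical counts from 20 veracity judgments (10 honest messages, 10 lies).
truths_judged_honest = 8  # honest messages correctly judged honest
truths_judged_lies = 2    # honest messages judged as lies (false alarms)
lies_judged_honest = 6    # lies misjudged as honest (misses)
lies_judged_lies = 4      # lies correctly judged as lies (hits)

total = truths_judged_honest + truths_judged_lies + lies_judged_honest + lies_judged_lies

total_accuracy = (truths_judged_honest + lies_judged_lies) / total                   # 0.60
truth_accuracy = truths_judged_honest / (truths_judged_honest + truths_judged_lies)  # 0.80
lie_accuracy = lies_judged_lies / (lies_judged_honest + lies_judged_lies)            # 0.40
truth_bias = (truths_judged_honest + lies_judged_honest) / total                     # 0.70

print(total_accuracy, truth_accuracy, lie_accuracy, truth_bias)
```

In this hypothetical example, the gap between truth accuracy (80 percent) and lie accuracy (40 percent) is exactly the kind of difference described next as the veracity effect.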

Truth-bias affects truth accuracy and lie accuracy differently. The more truth-biased a person is, the more likely they are to get honest messages correct and the more likely they are to miss the lies. The difference between truth accuracy and lie accuracy is called the “veracity effect” (Levine, Park, & McCornack, 1999). Truth-bias does not affect total accuracy when there are an equal number of truths and lies because the gains in truth accuracy and errors in lie accuracy cancel out.

Some research reports will provide signal detection scoring instead of or in addition to the raw accuracy (percent correct) score described in the previous paragraph. Signal detection metrics provide measures separating sensitivity and bias. Sensitivity (also called d-prime) is accuracy controlling for chance and bias. Bias scores indicate whether judges tend toward guessing truth or lie too often. Research reporting signal detection usually will report sensitivity, bias, hit, and false-positive values. If there are equal numbers of truths and lies in a deception detection task, sensitivity in signal detection and raw accuracy are almost perfectly correlated (Bond & DePaulo, 2006).
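For readers unfamiliar with signal detection scoring, the following sketch shows the standard z-score formulas for sensitivity (d-prime) and the response criterion. The hit and false-alarm rates here are made up for illustration, and conventions for the bias index vary somewhat across reports.

```python
# Standard signal detection indices computed from hit and false-alarm rates.
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

hit_rate = 0.40          # proportion of lies correctly judged as lies
false_alarm_rate = 0.20  # proportion of truths incorrectly judged as lies

d_prime = z(hit_rate) - z(false_alarm_rate)              # sensitivity, ~0.59
criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))   # response bias (criterion c), ~0.55

# A positive criterion here reflects a reluctance to judge "lie,"
# which corresponds to truth-bias in raw-percentage terms.
print(d_prime, criterion)
```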

Theories

Most (but not all) theories of deception detection share a common basic logic and fit under the umbrella label “cue theories.” Cue theories presume that lying and honest communication involve different affective or cognitive states and that different affective or cognitive states are signaled behaviorally. That is, lying (relative to honesty) leads to internal psychological states which, in turn, lead to specific, observable behaviors called “cues” or “clues.” The affective and cognitive states mediate and explain the relationships between honesty or deception and behavioral cues. Consequently, deception can be detected, albeit indirectly and probabilistically, by careful attention to (mostly nonverbal) behavioral signals, at least under certain favorable conditions. Although nonverbal cues have been studied most extensively, more recent research has expanded to linguistic cues. The available evidence suggests that conclusions regarding the usefulness (or lack thereof) of nonverbal cues extend to linguistic cues (Hauch, Blandon-Gitlin, Masip, & Sporer, 2014).

Historically, the first cue theory was put forth by Ekman and Friesen (1969), who coined the term “leakage.” Their view was later revised by Ekman in various editions of his book Telling Lies. The basic idea is that the act of lying is emotional so long as the stakes are high. Liars, compared to honest communicators, may feel guilty, fear detection, or experience “duping delight,” which can provide clues to deception in the form of very quick micro facial expressions.

Zuckerman, DePaulo, and Rosenthal (1981) expanded cue theory thinking into a framework often called 4-Factor Theory. Besides emotions, they argued that lying might also lead to arousal, greater cognitive effort (or load), and attempts to control behaviors. Each of these various internal psychological states (emotions, arousal, cognitive effort, and overcontrol) might have unique behavioral signals that could be used to detect lies.

Interpersonal Deception Theory (IDT) (Buller & Burgoon, 1996) shared the basic logic of 4-Factor Theory but added that liars might also be strategic. IDT lumps cues associated with arousal, cognitive load, and leaked emotions into nonstrategic behaviors, which reveal lies. Strategic behaviors are things liars do to look honest. This means that an especially skilled liar might come off as more honest than a person who is actually honest because liars act more strategically. Nevertheless, IDT predicts that people can and do detect deception when it is present and that the key to accurate deception detection is the recognition of strategic and nonstrategic behaviors linked with deception. However, this requires observation over time; short snippets of videotaped behavior are not sufficient. IDT also predicts that the interactivity of media matters a great deal. Findings from audiovisual media, for example, might not be expected to hold in face-to-face interaction.

The most recent cue theory is Aldert Vrij’s (2015) cognitive approach. The cognitive approach presumes that lying is more cognitively effortful than honest communication. However, unlike other cue theories, the cognitive approach does not rest on behavioral observation alone. According to the cognitive approach, the behavioral cues linked to deception are too weak on their own to be useful. This, however, can be overcome by magnifying the cues. Cues can be magnified in three ways: imposing additional cognitive load, asking unexpected questions, and encouraging senders to say more. Once magnified, behavioral observation of cues can be used to detect lies. The cognitive approach also focuses more on verbal cues, such as the number of details, and less on nonverbal cues.

Not all theories of deception detection adopt cue theory logic. In fact, two theories reject the idea that deception can be detected by observation of specific behavioral cues. The first theory to be skeptical of cue theory logic was Bella DePaulo’s (1992) self-presentation approach. According to DePaulo, both honest and deceptive communicators want to make a favorable impression on others, and most people have sufficient communication skills to do so. Consequently, DePaulo’s self-presentation approach predicts that most nonverbal cues will not be especially diagnostic, and that most people will not be especially good lie detectors because there is little that signals deception.

A second alternative approach is Truth-Default Theory (TDT, Levine, 2014). Regarding deception detection, TDT makes several relevant predictions. First, TDT presumes that people typically believe other people. TDT says that people are usually truth-biased and consequently are right about honest messages but more likely to be wrong about lies. Second, TDT holds that deception detection accuracy in most deception detection experiments should be only slightly better than chance because most senders are not transparent liars (Levine, 2010) and because the cues judges use to assess honesty are not diagnostic (Levine et al., 2011). Instead of cues, TDT says that the paths to improved lie detection involve attention to communication content, effective questioning, considering motives to lie, the use of evidence, and persuading honest confessions.

Findings

Until about ten years prior to this writing, the deception detection findings were extremely consistent and provided a clear and coherent picture. We can call those older results the traditional findings; they were well summarized in an influential meta-analysis by Bond and DePaulo (2006). The traditional findings will be covered first, then more recent findings will be mentioned. As of this writing, the consensus position of most researchers is that the traditional findings are scientific fact.

Bond and DePaulo (2006) did a comprehensive meta-analysis of nearly 300 results of deception detection experiments. They found that the average accuracy across all the prior studies was just under 54 percent. The 54 percent average accuracy was statistically better than the 50-50 chance level. People are better than chance, but not by much. Think of it this way: If we judged 20 messages and just guessed, on average, over a large number of 20-judgment trials, we would expect an average of 10 out of 20 correct. What deception detection experiments show is about 11 out of 20 correct. People do 1 out of 20 better than chance but still miss an average of 9 out of 20.

The most striking thing about Bond and DePaulo’s (2006) findings was the remarkable consistency from study to study. Findings were normally distributed around the average. About two-thirds of the prior findings fell between 50 percent and 59 percent. Ninety-five percent of findings fell between 44 percent and 67 percent.

Another remarkable finding is that the slightly-better-than-chance conclusion held over a variety of different study features. Accuracy was similar for face-to-face communication, audio only media, and audiovisual media. Findings were similar for more and less motivated senders and spontaneous and planned lies. Findings were similar for judges who were college students and for people with professions that involved lie detection (like law enforcement). The thing that made the most difference was simply the size of the study. Studies involving more data were closer to the average; outliers were all small-scale studies.

The other big finding was that people were truth-biased. Judges assessed messages as honest more often than as lies. Truth accuracy was 61 percent, and lie accuracy 47 percent, supporting the veracity effect.

Catches, Qualifications, and Nuance

The Bond and DePaulo (2006) findings are most often interpreted as showing that humans are poor lie detectors. However, there are at least two qualifications here. A bit of nuance is needed to understand the findings. First, although 54 percent is not that much better than 50 percent in an absolute sense, this is a solid statistical difference. Chance can be statistically ruled out with much more than usual confidence (p < .0001), and the effect size of d = .4 is above the median effect in social psychology or communication research. Thus, it is a mistake to dismiss accuracy as just a coin flip. It is better than that.

Second, there are several features of the research that most prior studies had in common that likely matter. The 54 percent accuracy applies specifically to situations where there is a 50-50 chance that any given message is deceptive, where senders lie and tell the truth at random, where immediate judgments are required, where no evidence is available for fact checking, and where all judges have to go on is how the sender comes off. Prior research looked at real-time cue-based lie detection, and that is the approach to lie detection to which the findings apply.

One important issue here is what might be called the truth-lie base rate. The truth-lie base rate refers to the ratio of truthful to deceptive messages in a lie detection task. Most deception detection experiments involve an equal number of truths and lies. Outside the lab, of course, the truth-lie base rate is seldom exactly 50-50. The key to understanding why this matters involves truth-bias and the veracity effect. As noted previously, the 54 percent average accuracy comes from studies with 50-50 base rates; people are typically truth-biased, and accuracy for just the truths is 61 percent compared to 47 percent for just the lies. Note that (61 percent + 47 percent) divided by 2 = 54 percent. So long as people are truth-biased, they will do better on truths than lies. Furthermore, the more truth-biased, the bigger the difference. What research shows is that when experiments show judges more truths than lies, accuracy goes up. But, if there are more lies than truths, accuracy goes down (Levine, Kim, Park, & Hughes, 2006). The 54 percent accuracy findings hold only when truths and lies are equally probable. So what proportion of messages is usually honest? Remember the finding in the first section that most people are usually mostly honest? If that is right, then the typical deception detection experiment probably underestimates total accuracy. That also means that judges are only “biased” relative to the base rate in the experiment. Participants’ judgments may more closely approximate the base rates of most real communication situations than the base rates in experiments do.
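The arithmetic behind this base-rate effect can be sketched as a simple weighted average, in the spirit of the Park and Levine probability model tested by Levine, Kim, Park, and Hughes (2006). Holding truth accuracy and lie accuracy fixed at the meta-analytic averages is a simplifying assumption made only for illustration.

```python
def total_accuracy(truth_base_rate, truth_acc=0.61, lie_acc=0.47):
    """Overall accuracy as a weighted average of truth and lie accuracy."""
    return truth_base_rate * truth_acc + (1 - truth_base_rate) * lie_acc

print(total_accuracy(0.50))  # ~0.54  -- the standard 50-50 experiment
print(total_accuracy(0.75))  # ~0.575 -- accuracy rises when truths are more common
print(total_accuracy(0.25))  # ~0.505 -- accuracy falls when lies are more common
```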

Another issue pertains to the availability of alternative detection methods. In the experiments yielding 54 percent accuracy, judgments were made based on cues, and other lie-detection strategies were deliberately made unavailable in order to provide experimental control. For example, fact-checking was not allowed. Honest confessions were excluded. Knowledge of sender motives was not available.

On the surface, Bond and DePaulo’s (2006) findings may appear inconsistent with various cue theories. But maybe cues are useful and judges just look for the wrong cues. Meta-analysis (Hartwig & Bond, 2011), however, rules out the wrong-cue idea. Instead, cues just lack diagnostic value. Meta-analysis also discredits the idea that high stakes are required for cues to manifest (Hartwig & Bond, 2014).

Meta-analyses of cues provide a less coherent picture than deception detection accuracy findings. Findings hinge on the unit of analysis and how cue differences are scored. If we look across cues, then the average study finds at least one cue that is diagnostic in differentiating truths from lies (Hartwig & Bond, 2014). However, if we look at any particular cue across various studies testing that same cue, then most cues have little diagnostic value (DePaulo et al., 2003). What this means is that cues can be useful in individual instances, but they do not pass the scientific criterion of replication. Cues that appear promising in individual studies do not look very useful across several studies. Individual studies provide support for various cue theories, but the findings as a whole are more consistent with DePaulo’s (1992) self-presentation theory.

Improving Accuracy

In the past decade, considerable improvement in deception detection accuracy has been reported in approximately two dozen published experiments (Levine, 2015). There appear to be a variety of viable approaches, but all focus more on communication content, understanding of context, and/or interaction rather than nonverbal or linguistic cues.

A shift in thinking away from the traditional cue-based approaches began with Park et al. (2002). Park et al. asked research participants to recall a recent time when they had discovered that they had been deceived. Participants were asked to describe the deception and how the deception came to be uncovered. Very few of the lies described (about 2 percent) were recognized as deception at the time based on sender cues. Instead, most lies were uncovered well after the fact, and they were discovered either because some evidence of the truth was uncovered or because the sender later confessed and admitted the truth. Park et al. concluded that the task judges face in typical deception detection experiments is quite different from how people really detect lies. They speculated that perhaps poor accuracy was a result of how deception detection has usually been studied and that if people had access to alternative lie detection methods, accuracy might be improved.

The first approach to show strong experimental evidence of improved accuracy with an alternative approach was the strategic use of evidence approach, or SUE (Granhag et al., 2007). SUE can be used when a judge has some relevant evidence. What a sender says can be compared to what is indicated by the evidence. The key to SUE is that judges, at least initially, do not reveal the known evidence to the sender. Judges ask questions and see if the sender’s answers align with the evidence or not. Of course, discrepancies between the evidence and the content of the sender’s statements indicate potential deception. Judges can then gradually reveal the evidence over time and see if the sender’s story changes. SUE can yield accuracy rates over 80 percent.

The content of senders’ communication can be useful in lie detection even without hard evidence. If the judge has a good understanding of the communication context (Blair, Levine, & Shaw, 2010) or is familiar with the situation (e.g., Reinhard, Sporer, & Scharmach, 2013), then content can be assessed for plausibility. Blair et al. found that providing judges with information about the context improved accuracy to 75 percent across three lie detection tasks.

A particularly useful aspect of the context has to do with whether the sender has a motive to lie. The idea is that judges can project sender motives (Levine, Kim, & Blair, 2010) as a lie detection strategy. If senders have a good reason to lie, they are much more likely to be lying than if they do not have a motive for deception. Initial research suggests high accuracy is possible when judges have good information about senders’ motives (e.g., Bond, Howard, Hutchison, & Masip, 2013).

Another avenue of current research involves the extent to which accuracy might be improved by employing effective approaches to questioning senders. Merely asking questions does not seem to improve accuracy (Levine & McCornack, 2001). Current questioning strategies include questioning to prompt cues (e.g., Vrij, 2015) and more content-based questioning (e.g., Levine et al., 2014).

The final new approach to improving lie detection involves persuading senders to be honest and confess their lies. Although this is among the least researched of the new alternatives, initial results appear promising. Levine et al. (2014) reported accuracy over 90 percent in two studies.

In short, the once accepted conclusion that people are invariably poor lie detectors appears to be dated. Poor lie detection is the case with 50-50 base rates and real-time, cue-based lie detection. However, alternative lie detection methods such as SUE, content-in-context, projecting motives, and persuading honest confessions offer improved accuracy (see Levine, 2015, for a review). Research on effective questioning strategies is ongoing but promising.

Discussion of the Literature

Fay and Middleton (1941) conducted the first deception detection experiment. They found that people were only a little better than chance at distinguishing between truthful statements and lies. But deception detection research did not take off until Ekman and Friesen’s (1969) classic essay linking lying with nonverbal communication. The next major milestone was Zuckerman et al.’s (1981) influential review, which expanded the ideas of leakage and deception clues into 4-Factor Theory. Zuckerman et al. also provided one of the first meta-analyses of cues and detection accuracy. They found evidence for strong nonverbal cues to deception and concluded that deception detection was possible based on behavioral observation. Together, Ekman and Friesen and Zuckerman et al. shaped how human deception detection was understood. The idea of nonverbal deception cues has been prominent ever since. Other theories emerged, including DePaulo’s (1992) self-presentation approach and interpersonal deception theory (IDT) (Buller & Burgoon, 1996).

In 1999, the veracity effect was published (Levine et al., 1999), and in 2002 Park et al. published research showing that most lies are detected after the fact based on evidence or confessions rather than the passive observation of cues.

It was not until after the turn of the century that the current consensus viewpoint began to take shape. Two meta-analyses were especially instrumental in changing how deception detection was understood. The first of these was DePaulo et al.’s (2003) meta-analysis of deception cues. The results showed much weaker effects than the previous analysis by Zuckerman et al. (1981). Since the publication of DePaulo et al., the scientific consensus is that most cues are at best weak and inconsistent indicators of deception. More recent meta-analyses (e.g., Hartwig and Bond, 2011, 2014) provide nuance, but the DePaulo et al. conclusions have, so far, proved both influential and durable.

In 2006, the Bond and DePaulo accuracy meta-analysis provided another key milestone: Accuracy levels were found to be lower and more uniform than previously believed (cf. Zuckerman et al., 1981). The 54 percent accuracy finding was obtained. Evidence for the veracity effect was also solidified.

It wasn’t long after the Bond and DePaulo (2006) meta-analysis that exceptions to the usual 54 percent accuracy emerged. The first approach to break out of poor accuracy was the strategic use of evidence approach (Granhag et al., 2007), which showed that improved accuracy was possible when evidence was combined with strategic questioning of senders.

Currently, the two most prominent approaches to improving deception detection accuracy are Vrij’s (2015) cognitive approach and Levine’s (2014) TDT. The cognitive approach continues a focus on cues, but holds that cues need to be prompted and magnified to be useful. TDT, in contrast, suggests that cues lead to poor accuracy and that content, evidence, and persuasion-based approaches hold more promise.

Further Reading

Bond, C. F., Jr., Howard, A. R., Hutchison, J. L., & Masip, J. (2013). Overlooking the obvious: Incentives to lie. Basic and Applied Social Psychology, 35(2), 212–221.

DePaulo, B. M. (1992). Nonverbal behavior and self-presentation. Psychological Bulletin, 111(2), 203–243.

Ekman, P. (2009). Telling lies. New York: W. W. Norton.

Granhag, P. A., Stromwal, L. A., & Hartwig, M. (2007). The SUE technique: The way to interview to detect deception. Forensic Update, 88, 25–29.

Hauch, V., Blandon-Gitlin, I., Masip, J., & Sporer, S. (2014). Are computers effective lie detectors? A meta-analysis of linguistic cues to deception. Personality and Social Psychology Review, 19(4), 307–342.

Levine, T. R., Clare, D., Blair, J. P., McCornack, S. A., Morrison, K., & Park, H. S. (2014). Expertise in deception detection involves actively prompting diagnostic information rather than passive behavioral observation. Human Communication Research, 40(4), 442–462.

Levine, T. R., Feeley, T., McCornack, S. A., Harms, C., & Hughes, M. (2005). Testing the effects of nonverbal training on deception detection accuracy with the inclusion of a bogus training control group. Western Journal of Communication, 69(3), 203–218.

Levine, T. R., Serota, K. B., & Shulman, H. C. (2010). The impact of Lie to Me on viewers’ actual ability to detect deception. Communication Research, 37(6), 847–856.

Trivers, R. (2011). The folly of fools: The logic of deceit and self-deception in human life. New York: Basic Books.

Weinberger, S. (2010). Airport security: Intent to deceive? Can the science of deception detection help to catch terrorists? Nature, 465, 412–415.

References

Blair, J. P., Levine, T. R., & Shaw, A. J. (2010). Content in context improves deception detection accuracy. Human Communication Research, 36(3), 423–442.

Bond, C. F., & The Global Deception Research Team (2006). A world of lies. Journal of Cross-Cultural Psychology, 37(1), 60–74.

Bond, C. F., Jr., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10(3), 214–234.

Bond, C. F., Jr., & DePaulo, B. M. (2008). Individual differences in judging deception: Accuracy and bias. Psychological Bulletin, 134(4), 477–492.

Bond, C. F., Jr., Howard, A. R., Hutchison, J. L., & Masip, J. (2013). Overlooking the obvious: Incentives to lie. Basic and Applied Social Psychology, 35(2), 212–221.

Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communication Theory, 6(3), 203–242.

DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., & Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129(1), 74–118.

Ekman, P. (2009). Telling lies. New York: W. W. Norton.

Ekman, P., & Friesen, W. V. (1969). Nonverbal leakage and clues to deception. Psychiatry, 32(1), 88–106.

Fay, P. J., & Middleton, W. C. (1941). The ability to judge truth-telling, or lying, from the voice as transmitted over a public address system. The Journal of General Psychology, 24(1), 211–215.

Hartwig, M., & Bond, C. F., Jr. (2011). Why do lie-catchers fail? A lens model meta-analysis of human lie judgments. Psychological Bulletin, 137(4), 643–659.

Hartwig, M., & Bond, C. F., Jr. (2014). Lie detection from multiple cues: A meta-analysis. Applied Cognitive Psychology, 28(5), 661–676.

Hauch, V., Sporer, S. L., Michael, S. W., & Meissner, C. A. (2014). Does training improve the detection of deception? A meta-analysis. Communication Research, 43(3), 283–343.

Levine, T. R. (2010). A few transparent liars. In C. Salmon (Ed.), Communication yearbook 34. Hoboken, NJ: Taylor and Francis.

Levine, T. R. (2014). Truth-default theory (TDT): A theory of human deception and deception detection. Journal of Language and Social Psychology, 33(4), 378–392.

Levine, T. R. (2015). New and improved accuracy findings in deception detection research. Current Opinion in Psychology, 6, 1–5.

Levine, T. R., Clare, D., & Blair, J. P. (2014). Diagnostic utility: Experimental demonstrations and replications of powerful question effects and smaller question by experience interactions in high stake deception detection. Human Communication Research, 40, 262–289.

Levine, T. R., Kim, R. K., & Blair, J. P. (2010). (In)accuracy at detecting true and false confessions and denials: An initial test of a projected motive model of veracity judgments. Human Communication Research, 36(1), 81–101.

Levine, T. R., Kim, R. K., Park, H. S., & Hughes, M. (2006). Deception detection accuracy is a predictable linear function of message veracity base-rate: A formal test of Park and Levine’s probability model. Communication Monographs, 73(3), 243–260.

Levine, T. R., & McCornack, S. A. (2001). Behavioral adaptation, confidence, and heuristic-based explanations of the probing effect. Human Communication Research, 27(4), 471–502.

Levine, T. R., Park, H. S., & McCornack, S. A. (1999). Accuracy in detecting truths and lies: Documenting the “veracity effect.” Communication Monographs, 66(2), 125–144.

Levine, T. R., Serota, K. B., Carey, F., & Messer, D. (2013). Teenagers lie a lot: A further investigation into the prevalence of lying. Communication Research Reports, 30(3), 211–220.

Levine, T. R., Serota, K. B., Shulman, H., Clare, D. D., Park, H. S., Shaw, A. S., . . . Lee, J. H. (2011). Sender demeanor: Individual differences in sender believability have a powerful impact on deception detection judgments. Human Communication Research, 37(3), 377–403.

Masip, J., & Herrero, C. (2015). Police detection of deception: Beliefs about behavioral cues to deception are strong though contextual evidence is more useful. Journal of Communication, 65(1), 125–145.

McCornack, S. A. (1992). Information manipulation theory. Communication Monographs, 59(1), 1–16.

McCornack, S. A. (1997). The generation of deceptive messages: Laying the groundwork for a viable theory of interpersonal deception. In J. O. Greene (Ed.), Message production: Advances in communication theory (pp. 91–126). Mahwah, NJ: Erlbaum.

McCornack, S. A., Morrison, K., Paik, J. E., Wiser, A. M., & Zhu, X. (2014). Information manipulation theory 2: A propositional theory of deceptive discourse production. Journal of Language and Social Psychology, 33(4), 348–377.

Park, H. S., Levine, T. R., McCornack, S. A., Morrison, K., & Ferrara, M. (2002). How people really detect lies. Communication Monographs, 69(2), 144–157.

Reinhard, M., Sporer, S. L., & Scharmach, M. (2013). Perceived familiarity with a judgmental situation improves lie detection ability. Swiss Journal of Psychology, 72(1), 43–52.

Serota, K. B., & Levine, T. R. (2015). A few prolific liars: Variation in the prevalence of lying. Journal of Language and Social Psychology, 34(2), 431–440.

Serota, K. B., Levine, T. R., & Boster, F. J. (2010). The prevalence of lying in America: Three studies of self-reported lies. Human Communication Research, 36, 1–24.

Vrij, A. (2008). Detecting lies and deceit: Pitfalls and opportunities. West Sussex, UK: John Wiley.

Vrij, A. (2015). A cognitive approach to lie detection. In P. A. Granhag, A. Vrij, & B. Verschuere (Eds.), Deception detection: Current challenges and new approaches (pp. 205–229). Chichester, UK: Wiley.

Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and nonverbal communication of deception. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 14, pp. 1–59). New York: Academic Press.