
Communication Research XX(X) 1–27
© The Author(s) 2013
Reprints and permissions: sagepub.com/journalsPermissions.nav
DOI: 10.1177/0093650213485785
crx.sagepub.com

Article

Deception, Detection, Demeanor, and Truth Bias in Face-to-Face and Computer-Mediated Communication

Lyn M. Van Swol1, Michael T. Braun1 and Miranda R. Kolb1

Abstract
In an ultimatum game, participants were randomly assigned to the role of allocator or recipient and to interact face-to-face (FtF) or over computer text chat (computer-mediated communication [CMC]). The allocator was given money to divide. The recipient was unaware of the amount given, so the allocator could deceive the recipient. Perception of the allocator having a dishonest demeanor increased recipient suspicion of deception, but reduced detection accuracy for truths. Demeanor cues did not help detect deception. Recipients were better at detecting lies CMC than FtF. Overall, truth bias did not differ between CMC and FtF. Rates of deception did not differ between CMC and FtF, but type of deception marginally differed. There was more deceptive omission used in FtF and more deceptive commission (bald-faced lies) used in CMC. Results are discussed in terms of demeanor and truth bias.

Keywords
deception, lies, omission, truth bias, suspicion, computer-mediated communication (CMC)

1Department of Communication Arts, University of Wisconsin-Madison, Madison, WI, USA

Corresponding Author: Lyn M. Van Swol, Department of Communication Arts, University of Wisconsin, 821 University Avenue, Madison, WI 53706, USA. Email: [email protected]

Most people are bad at detecting deception in deception experiments and have detection rates slightly above chance (Bond & DePaulo, 2006). This article uses an ultimatum game to examine how sender demeanor and truth bias affect suspicion and detection accuracy. Furthermore, we examine whether lie detection accuracy improves in computer-mediated communication (CMC) environments where cues to a sender's demeanor are limited. Finally, we investigate differences in deception type across two different communication channels, looking specifically at deception by omission and deception by commission.

Detection of Deception and Sender Demeanor

We consider two factors that may contribute to low detection accuracy. One factor that may reduce accuracy in detecting deception is that senders' demeanor may not be highly related to the veracity of their messages. Levine et al. (2011) found evidence that senders vary in how honest and dishonest they appear, and this appearance of honesty or dishonesty is mostly independent of their actual honesty. Levine et al. state, "some people come off as sincere while others do not and, for most people, this has little or nothing to do with whether or not they are actually honest or actually lying" (p. 379). Levine et al. found that a sender's demeanor had stronger effects for influencing perception of lies than truths. Since a participant is randomly assigned to lie or tell the truth in a typical deception experiment, usually with 50% assigned to lie and 50% assigned to tell the truth, someone with a dishonest demeanor may be assigned to tell the truth or vice versa. A sender's demeanor creates noise, because it is mostly unrelated to actual honesty, and this increases the difficulty of identifying valid deception cues (Levine et al., 2011). Through interviews with participants watching tapes of lies and truths, Levine and colleagues identified characteristics of an honest demeanor (confident and composed, pleasant and friendly, engaged and involved, gives plausible explanations) and a dishonest demeanor (avoids eye contact, is hesitant and slow when answering, has an uncertain vocal tone, fidgets excessively, appears tense and nervous, is inconsistent in demeanor, and conveys uncertainty with words). This study attempts to replicate Levine et al.'s (2011) finding that demeanor is related to suspicion but not detection accuracy. Since many of the demeanor cues are not available in CMC, our measures of dishonest and honest demeanor differ between CMC and face-to-face (FtF) conditions, so we cannot make a direct comparison between conditions. Instead, we test demeanor effects separately in each condition and extend research on demeanor into a CMC context.

Hypothesis 1a (H1a): Perception of honest demeanor will predict suspicion but not accurate detection of deception.

Hypothesis 1b (H1b): Perception of dishonest demeanor will predict suspicion but not accurate detection of deception.

In testing the two interaction conditions (FtF and CMC), there is reason to believe that detecting lies may be easier in one than in the other. In support of CMC as an environment rich with deceptive potential, one might argue that the reduced channel richness of CMC (especially the text-only condition used in this study) may also reduce a number of cues that are useful in detecting deception. Inasmuch as factors of
eye contact, vocal tone, and other cues traditionally thought to indicate deception are now masked, a “leaky” liar may succeed in CMC where she would have failed FtF. This is one possibility that deserves consideration.

We do not support this contention, however, and instead hypothesize that detecting lies should be harder in the FtF condition than in the CMC condition. We advance this proposition for several reasons. First, the masking of demeanor cues in CMC may actually aid in detection because these cues have been found to relate to suspicion but not to accuracy (Levine et al., 2011); that is, detectors become distracted by those cues they think indicate deception and are thus led astray by their own observations, mistaking interaction noise for deception signal. Relatedly, FtF interaction may increase arousal through the presence of another person, and this could increase leakage of deceptive cues (Marett & George, 2004; Zhou, Burgoon, Nunamaker, & Twitchell, 2004).

Second, for allocators actively seeking to deceive, the FtF environment may give a skilled deceiver more chances for success. We think this possibility is especially likely in the present study because we do not assign participants to lie or tell the truth; participants freely choose whether to deceive. It is possible that a participant knows that they generally do not have an honest-appearing demeanor or are easily aroused when telling a lie, and this participant may choose not to lie, especially in the FtF condition where demeanor characteristics are salient. Alternatively, a person with more confidence in their deceptive skill could use the richer channel of FtF to manage the interaction. Thus, while CMC may generally allow for more ambiguity and increase the ability to conceal information and conceal one's level of arousal, in this case a rich channel like FtF allows a greater ability to distract the receiver and use the richer context for damage control (Marett & George, 2004).

This second line of reasoning draws directly into a central debate in the deception literature: What is the nature of a “leaky” deceiver? Some advance (Ekman & Friesen, 1969; Zuckerman, DeFrank, Hall, Larrance, & Rosenthal, 1979) the position that the act of deception itself causes leakage, similar to the claim that poker players have a “tell.” If a person can discover the “tell” in poker or the leakage in deception, then detection of truth should not be difficult. Others, however, disagree with the character-ization of leakage as universal. Levine, Shaw, and Shulman (2010) proposed that a small percentage of liars in deception experiments are “leaky” liars who have a con-sistency between their truthfulness and demeanor, and these liars may be driving the slightly above chance rate (about 54%) at which deception is usually detected. In line with Levine et al. (2011), we argue that demeanor cues may act as a detriment to detec-tion accuracy. Thus, we propose the following hypothesis.

Hypothesis 2 (H2): Receivers will be better at detecting lies CMC than FtF.

Detection of Deception and Truth Bias

Another reason people are bad at detecting deception is that they have a bias to presume honesty and see an interaction partner as truthful, unless they have a reason to be suspicious (McCornack & Parks, 1986; Miller, Mongeau, & Sleight, 1986;
O’Sullivan, Ekman, & Friesen, 1988; Zuckerman, et al., 1979; Zuckerman, DePaulo, & Rosenthal, 1981). Since the vast majority of deception experiments use base rates of 50% lies and 50% truths in the material that participants judge, this truth bias leads participants to underestimate the lies and reduces accuracy (Levine, Kim, Park, & Hughes, 2006; Levine, Park, & McCornack, 1999; Park & Levine, 2001). However, a truth bias may not significantly reduce deception accuracy in many naturally occur-ring situations because rates of deception are usually much lower than 50% of all interactions (DePaulo, Kashy, Kirkendol, Wyer, & Epstein, 1996; George & Robb, 2008; Hancock, Thom-Santelli, & Ritchie, 2004a; Serota, Levine, & Boster, 2010). Studies that have lower rates of deception have higher rates of accuracy because of the truth bias (Levine et al., 2006; Levine et al., 1999; Park & Levine, 2001; Van Swol, Braun, & Malhotra, 2012). Levine and colleagues (1999) term this the veracity effect. For example, Van Swol, Malhotra, and Braun (2012) let participants decide whether they wanted to lie or not and found accuracy rates of 77% because most participants chose not to lie.

One question is whether the truth bias is diminished in CMC and how this affects accuracy. Some past research suggests that individuals do not have an accurate perception of when and over which channels deception is likely to occur (Hancock, Thom-Santelli, & Ritchie, 2004b; Whitty & Carville, 2006). Hancock and colleagues (2004b) used a diary-study methodology and had participants record all interactions and whether or not they contained deception across all communication channels for 1 week. Though the results indicated that email was the channel that contained the least amount of deception, participants reported that they thought email was where they had lied the most. Other evidence also demonstrates that individuals anticipate more deception over computer-mediated channels. Caspi and Gorsky (2006) found that a majority of users of an online discussion board felt deception was widespread on the board (73%), but only a minority reported ever having lied themselves (29%). This suggests that individuals may be initially more suspicious of interactions occurring over computer-mediated channels than of FtF encounters. This increased suspicion should decrease the effect of the truth bias. If participants have a lower truth bias in CMC, this will reduce accuracy for truths and increase accuracy for lies. Whether this affects overall accuracy compared to FtF depends on the rate of deception. For example, if people are not more likely to deceive their partner over the computer, a reduced truth bias could reduce accuracy because we expect more allocators to tell the truth than to lie.

Hypothesis 3 (H3): There will be a reduced truth bias in CMC. This will increase accuracy for lies and decrease accuracy for truths. However, since we expect more truths, this will reduce accuracy overall for CMC.
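To make the accuracy arithmetic behind this reasoning concrete, the sketch below (in Python, with hypothetical numbers rather than the study's data) shows how expected accuracy depends jointly on the truth base rate and on how often receivers judge messages to be truthful.

```python
# Minimal sketch of the accuracy arithmetic behind the veracity effect.
# If judgments are unrelated to actual veracity, overall accuracy depends only
# on the truth base rate and the receiver's tendency to judge "truth."

def overall_accuracy(p_truth: float, p_judge_truth: float) -> float:
    """Expected accuracy when judgments are independent of actual veracity."""
    acc_for_truths = p_judge_truth        # truths judged "truth" are correct
    acc_for_lies = 1 - p_judge_truth      # lies judged "lie" are correct
    return p_truth * acc_for_truths + (1 - p_truth) * acc_for_lies

# 50/50 lie-truth design: a strong truth bias buys nothing overall.
print(overall_accuracy(p_truth=0.50, p_judge_truth=0.70))  # 0.50
# When most senders are honest, the same bias raises overall accuracy.
print(overall_accuracy(p_truth=0.75, p_judge_truth=0.70))  # 0.60
```

Lowering p_judge_truth (a weaker truth bias) raises accuracy for lies but lowers it for truths, which is the trade-off H3 describes.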

Communication Medium and Type of Deception

We do not make any predictions for whether there might be more deception FtF or CMC. Some research has found that people may prefer to lie FtF (George & Carlson,
2005). Hancock et al. (2004a) found fewer lies through email, although they reasoned it was due to the permanent record left by email; differences in the permanence of the record would be less important for our experiment, in which FtF communication is also recorded. Whitty and Carville (2008) found that participants anticipated that they would lie the most through email, and Whitty (2002) found that people lie regularly in online chat rooms. Selwyn (2008) found no differences in lying by communication medium. Thus, we make no predictions about differences in deception between FtF and CMC.

We do predict, however, that people will deceive differently in the two conditions. Specifically, people will use more omission FtF and more commission CMC. Most previous research on deception has examined "bald-faced lies," or deception by commission. However, one can also deceive by omitting key information. Information manipulation theory (McCornack, 1992; McCornack, Levine, Solowczuk, Torres, & Campbell, 1992; Yeung, Levine, & Nishiyama, 1999) distinguishes between deceptive messages that violate Grice's (1989) conversational expectation of quality by deliberately falsifying information and those that violate the expectation of quantity by omitting information in a misleading fashion. Violations of quantity are judged as less severe than violations of quality (Jacobs, Dawson, & Brashers, 1996; McCornack et al., 1992). This is confirmed in research on negotiation and judgment and decision making that has found that omissions are viewed as less deceptive and more socially acceptable than commissions (Ritov & Baron, 1990; Spranca, Minsk, & Baron, 1991; Tenbrunsel & Messick, 2004; Van Swol et al., 2012; for an exception, see Levine, Asada, & Massi, 2003). For example, DeScioli, Christner, and Kurzban (2011) found participants were more likely to use omission to deceive than commission when their behavior had the potential to be punished because participants anticipated less severe punishments with omission.

When engaging in deception FtF, senders might be more concerned with social disapproval of deception from their partner and with face concerns of getting accused of a bald-faced lie. They might also experience more social discomfort with a FtF bald-faced lie (DePaulo et al., 1996). Hancock, Curry, Goorha, and Woodworth (2008) state that “[it] may be the case that when it is safe to do so, deceivers will pepper their lies with more detail; but when they are at risk of being discovered they will be more hesitant to provide detail” (p. 16). Thus, because of the greater social risks in FtF com-munication, they may perceive it less safe to deceive through commission and opt to try to deceive through omission, instead. Furthermore, with FtF communication send-ers might be more concerned their demeanor could give them away when they state a bald-faced lie. Therefore, someone who generally is a “leaky” liar may opt to deceive through omission because they are concerned they could not pull off a bald-faced lie (Levine, 2010).

In addition to demeanor and social disapproval concerns in FtF communication, allocators may perceive that there is a higher cognitive load in FtF communication. "Sender's maintenance of both their own false reality and the receiver's ostensible reality comes at the price of cognitive resources" (Duran, Hall, McCarthy, & McNamara, 2010, p. 441). With a reduced cue environment in CMC, the sender does not need to worry about her nonverbal behavior while simultaneously monitoring the
recipients’ nonverbal cues, and it may be perceived as easier to construct a believable reality due to a lessened cognitive load. This is one reason Walther (1996) gives when supporting his claim that computer-mediated interaction can become “hyperpersonal.” Research in deception supports the advantage given to liars when communicating over computer-mediated channels. For example, according to Interpersonal Deception Theory (Buller & Burgoon, 1996) when the receiver is suspicious, the sender has to more carefully monitor the interaction and the sender’s own behavior. This monitoring may result in a greater cognitive load when communicating FtF (Burgoon, Stoner, Bonito, & Dunbar, 2003). The perception of greater cognitive load may reduce bald-faced lies in FtF condition.

Hypothesis 4 (H4): Participants will use more deceptive omission FtF than CMC. Participants will use more bald-faced lies or deceptive commission CMC than FtF.

Method

Participants and Design

Three hundred eighty-eight undergraduates at a large, public Midwestern university in the United States participated in the experiment in exchange for extra credit in liberal arts communication classes. Participants were randomly assigned to communicate with their partner either using the Apple iChat text-based messaging program (CMC; 94 dyads) or face-to-face (FtF; 100 dyads).

Procedure

Upon arrival to the lab, participants were randomly assigned a role (allocator or recipient), escorted to their private room, and given a consent form. Once both participants had arrived, participants were individually given instructions. The allocator was told,

This is a study about communication in a monetary negotiation. You will be assigned a small amount of money, and it is your job to divide that money between yourself and your partner. After you’ve made your decision, you will meet with your partner [alt. you will briefly meet with your partner and then interact using an instant messaging program on the computer behind you] and tell them how much money you are offering them. If they decide to accept your offer, then they will receive the amount of money you are offering and you will keep the rest. The money will be divided exactly as you chose to split it. If they reject your offer, then your partner will receive the default amount of a $1.50 and you will receive nothing. It’s important to note that your partner knows only that you have received a small amount of money. They do not know and will never find out the exact amount that you received. You do not have to tell your partner the amount of money you received. You are free to tell them anything you wish. Does that make sense?

The recipient was similarly instructed,


Your partner has been assigned a small amount of money, and it is their job to choose how to divide that money between the two of you. When your partner has made their decision, the two of you will meet [alt. interact using an instant messaging program on the computer behind you] and your partner will offer you an amount of money. It is your job to decide whether you will accept their offer or reject it. If you accept it, then you will receive the amount of money your partner is offering and your partner will keep the rest. If you reject it, then your partner will receive nothing and you will receive the default amount of a $1.50. It is important to note that your partner is not required to tell you the total amount of money they received, but you are free to ask for that information, or to ask any questions that you feel will be helpful when making your decision of whether to accept or reject the offer. Does that make sense?

All participant questions were answered and both the allocator and receiver were then given a written instruction form that reiterated the rules of the game. They were told to read the sheet before interacting with their partner.
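A minimal sketch of the payoff rule spelled out in these instructions follows; the function name and structure are ours, and only the $6 endowment and the $1.50 rejection default come from the procedure.

```python
# Payoff rule of the ultimatum game as described in the instructions:
# accept -> the money is split exactly as allocated;
# reject -> the allocator gets nothing and the recipient gets the $1.50 default.

def payoffs(endowment: float, offer: float, accepted: bool,
            default_recipient: float = 1.50) -> tuple:
    """Return (allocator_payoff, recipient_payoff) for one interaction."""
    if accepted:
        return endowment - offer, offer
    return 0.0, default_recipient

print(payoffs(endowment=6.00, offer=2.75, accepted=True))   # (3.25, 2.75)
print(payoffs(endowment=6.00, offer=2.75, accepted=False))  # (0.0, 1.5)
```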

After instructions were given, the allocator was assigned an amount of money. The experimenter used a deck of cards with all face cards removed. As the experimenter laid the top four cards onto the table face down, he said, "These cards represent the amount of money you will have to allocate between yourself and your partner. Go ahead and select a card." The participant selected a card and the experimenter then gathered the cards and showed the participant the value of the selected card. The deck was stacked such that the top four cards were all 6s. Thus, the participant was always shown a 6 and was told, "So that is the amount, in dollars, that you will receive." The experimenter then exited the room and returned with an envelope containing five single dollar bills and four quarters. The experimenter instructed the allocator to remove the money they wished to keep for themselves and put it in their bag or pocket, leave the rest of the money in the envelope, and let the experimenter know when they had made their decision. The first author read the transcripts and found no mention of any participant stating they had knowledge of the allocation amount from a previous experimental participant. However, we also tested to see if suspicion increased among recipients as more students had participated in the experiment. There was not enough evidence to conclude that suspicion increased over time. Comparing the first round of data collection with the second in a chi-square test, there was no greater rate of suspicion, χ2 (n = 191, df = 1) = .002, p = .94. A binary logistic regression comparing rates of suspicion month by month found that the model was not significantly different from 0, χ2 (n = 191, df = 1) = .10, p = .75, and, as such, month of participation was not a significant predictor of rates of suspicion.
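The two stability checks reported above could be run roughly as follows. This is a sketch only, with hypothetical cell counts and month codes, since the paper reports the test statistics but not the underlying tables.

```python
# Sketch of the suspicion-over-time checks (hypothetical data, not the study's).
import numpy as np
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# 2 x 2 table: data-collection round (first/second) by suspicion (no/yes)
rounds_by_suspicion = np.array([[70, 25],
                                [71, 25]])
chi2, p, dof, _ = chi2_contingency(rounds_by_suspicion, correction=False)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.2f}")

# Binary logistic regression: month of participation predicting suspicion
rng = np.random.default_rng(0)
month = np.repeat([1, 2, 3, 4], 48)[:191]        # hypothetical month codes
suspicious = rng.binomial(1, 0.26, size=191)     # hypothetical outcomes
fit = sm.Logit(suspicious, sm.add_constant(month)).fit(disp=False)
print(fit.params, fit.pvalues)                   # slope on month and its p-value
```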

Once the allocator had made an allocation decision, in the FtF condition, the experimenter ascertained that the recipient was ready to begin the interaction and then escorted the recipient to the allocator's room. The experimenter asked the recipient to sit down and then started the video camera. Participants were then told that they would have up to 5 minutes to discuss and that the recipient should step out into the hallway when they had reached a decision. The experimenter closed the door and the participants began discussing.


In the chat condition, the experimenter escorted the allocator to the recipient's room to briefly introduce the participants FtF and show them the computer program they would be using. The experimenter said, "This is your partner with whom you will be interacting today. The two of you will be interacting using a program called iChat. It works just like any other chat program. You type something on the keyboard, hit return, and it sends the message to the other person. You both are familiar with these types of programs?" The user names in iChat were researchsubject_1 and researchsubject_2. After this introduction, the allocator was sent back to her room, both participants were seated at their computers, the recipient was instructed on the 5-minute limit, and the interaction began. If 5 minutes elapsed, then participants were interrupted and asked if they required more time. More time was always granted if it was needed. This occurred in four cases FtF and six cases CMC.

Once the interaction was complete and, in the FtF condition, the recipient was escorted back to her room, the participants were given a questionnaire about the interaction that they just had as well as a decision sheet on which the allocator indicated how she had divided the money and the recipient indicated if she accepted or rejected the offer. Based on that decision, money was divided appropriately between the two participants. Participants were told that they were free to leave once they had completed their final questionnaires. Care was taken to make sure the participants did not leave simultaneously.

Materials

The lab setup required two rooms for each dyad. The rooms were small and connected via a hallway in an isolated lab area with no outside noise. For the chat condition, each room also contained a networked Apple computer. Each computer ran the program iChat with chat enabled across a local network. A private chat room was created for each session and the computers were configured so that no other person could chat with the participants. FtF interactions were recorded with a conspicuous consumer-grade digital video camera on a tripod.

The questionnaire for the allocator consisted of 38 questions and a 20 emotion PANAS-style scale adapted from Watson, Clark, and Tellegen (1988); in the chat con-dition, the questionnaire was slightly shorter, as items that asked about the recipient’s behavior were removed. Questions asked about how the allocator had divided her money, whether she had engaged in deception, and a variety of items discussed below. The recipient’s questionnaire included 33 questions and a 20 emotion PANAS-style scale. This questionnaire asked similar questions including assessing how much of the total allocation the recipient felt her partner had offered (e.g., “less than half,” “half,” “more than half”) and how suspicious the recipient was of her partner.

For both questionnaires, the first eight questions measured how well the participant knew their partner (e.g., "How often have you talked with your partner before this experiment," "How often have you socialized," "How often have you had class together") (Van Swol et al., 2012). The eight questions had a high Cronbach's α (.93 for allocator and .92 for recipient). For the Allocator questionnaire, a factor analysis
on 20 of the allocators’ items revealed five factors, accounting for 70.86% of the vari-ance. From these items, two important factors emerged for analysis. The first factor (eigenvalue = 7.62) measured suspicion and consisted of six items (“My partner did not seem to believe my offer;” “My partner was suspicious of my offer;” “My partner thought I was completely honest with him/her” (reverse coded); “My partner kept pressing me on my answers;” “My partner didn’t seem to accept my offer and explana-tions;” “My partner trusted me” (reverse coded)). The Cronbach’s χ for these items was high (.91). The mean of the six items was taken as a measure of Partner Suspicion. The second factor (eigenvalue = 1.85) consisted of four items (“I was completely hon-est with my partner” (reverse coded); “I was deceptive with my partner”; “I gave evasive and ambiguous answers to the questions”; “I took a long time before respond-ing to the questions”). The Cronbach’s χ for these items was acceptable (.78) and the mean of the four questions was used as a measure of Allocator Honesty.

For the recipient questionnaire, a factor analysis on 19 of the recipients' items revealed four factors, accounting for 67.9% of the variance. From these items, one factor useful for analysis emerged. This factor (eigenvalue = 9.03) measured suspicion and consisted of seven items ("I found my partner's answers believable" (reverse coded); "My partner was very open and forthcoming" (reverse coded); "My partner was completely honest with me" (reverse coded); "My partner took a long time before responding to my questions;" "My partner was manipulative;" "I was suspicious of my partner's offer;" and "I thought my partner was being deceptive"). The Cronbach's α for these items was high (.91). The mean of the seven items was taken as a measure of Recipient Suspicion.
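As an illustration of this scale construction, the sketch below computes Cronbach's alpha for a block of items and then takes the item mean as the factor score. The data are random placeholders (so alpha will be near zero); the item count loosely follows the Recipient Suspicion factor.

```python
# Cronbach's alpha for a set of items plus the item-mean factor score.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: array of shape (n_respondents, n_items)."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(1)
suspicion_items = rng.integers(1, 8, size=(190, 7)).astype(float)  # 7-point items
print(round(cronbach_alpha(suspicion_items), 2))    # near 0 for random data
recipient_suspicion = suspicion_items.mean(axis=1)  # factor score per recipient
```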

Coding

Tapes were transcribed verbatim. Video was missing for two dyads in the FtF condition. Two coders, blind to the experimental hypotheses, coded each transcript for several variables. First, they coded for amount of offer; coders disagreed only twice out of 194 interactions. For some interactions, coders were unable to determine the offer because the Allocator gave the Recipient the envelope containing the Recipient's share and did not verbally state the offer. Second, coders coded whether the offer changed during the interaction. There was only one disagreement on this coded variable. Any disagreements between the coders on the offer were resolved by the first author examining the transcripts. Coders were asked to judge on a scale from 1 (Not at all) to 4 (Somewhat) to 7 (Completely) how deceptive the allocator was. The coders did have knowledge of the true endowment amount. The mean difference between the two coders' ratings was small, M = –0.23, SD = 1.14, but significant, t (190) = –2.73, p = .007, and the intraclass correlation between the coders, using a two-way mixed-effects model, was significant, ICC = .93, p < .0001. Therefore, the reliability was acceptable. The average of the two coders' ratings was taken as a measure of judged deception.
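The reliability checks in this paragraph can be sketched as below: a paired t-test on the coders' mean difference and a two-way mixed, consistency intraclass correlation for single ratings (often labeled ICC(3,1)). The simulated ratings are placeholders, and the exact ICC variant is not spelled out beyond "two-way mixed effects," so treat this as one reasonable reading rather than the authors' exact computation.

```python
# Coder reliability sketch: mean difference between coders and ICC(3,1).
import numpy as np
from scipy.stats import ttest_rel

def icc_3_1(ratings: np.ndarray) -> float:
    """Two-way mixed, consistency ICC for single ratings.
    ratings: shape (n_targets, n_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(2)
latent = rng.integers(1, 8, size=191).astype(float)          # placeholder scores
coder_a = np.clip(latent + rng.normal(0.0, 0.5, 191), 1, 7)
coder_b = np.clip(latent + rng.normal(0.2, 0.5, 191), 1, 7)  # slight mean shift
t, p = ttest_rel(coder_a, coder_b)                 # tests the mean difference
print(round(icc_3_1(np.column_stack([coder_a, coder_b])), 2))
judged_deception = (coder_a + coder_b) / 2         # average used in analyses
```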

Coders coded the allocation endowment amount into three categories: allocator did not state an endowment amount, stated it immediately, or stated it only after being
asked. Coders disagreed on the endowment amount category 15 times out of 194 interactions. Coders were asked how much money the Allocator stated they were given to allocate, if they stated this information; there were four disagreements among the coders. Next, coders were asked to classify whether or not the endowment amount stated (if stated) was truthful; there were two disagreements. Coders coded whether the endowment amount told to the recipient changed during the interaction. The coders had no disagreements, and the allocation amount only changed in one interaction. Coders were asked to classify the interaction into one of six categories: (1) truthful; (2) deception; (3) omission; (4) omission, then truth; (5) omission, then lie; (6) lie, then truth. There were 31 disagreements among the coders. Most disagreements arose between categories 1 and 4. The first author examined the coherency of the variables mentioned in this paragraph to check for consistency. For example, if the endowment amount was coded as being stated immediately, then the interaction could only be classified as (1) truthful or (2) deception. If there was an inconsistency, the first author checked the transcript. Furthermore, when there were discrepancies between the two coders, the first author checked the transcript to resolve the inconsistency.

Results

The results are divided into several parts. First, we examine the types of offers (truth, lies, deceptive omission) and differences between communication channels. We then check whether participants' prior relationship was related to deception. Next, we present results about the effects of offer type and communication channel on amount of recipient suspicion and accuracy in detecting truths and deception. Then, we examine the relationship between recipients' perception of allocators' demeanor, suspicion, and detection accuracy. Finally, we examine differences in negative emotion by offer type and communication channel.

Deception

Interactions were coded for deceptive commission (lie), deceptive omission, and truth.1 Lies (n = 40, 20.6%) were defined as the allocator stating a different endowment amount during the interaction than they actually received (e.g., stating they were given US$4 when they had actually been given US$6). Ten allocators lied immediately, 29 lied after being questioned by the receiver about the amount of the endowment, and video was missing for one lie. One allocator initially lied about the allocation amount, but then gave the true allocation amount. We classified this as a truth. Deceptive omission (n = 13, 6.7%) was defined as the allocator failing to state an endowment amount to the recipient and giving the recipient less than half the endowment. Truth (n = 141, 72.3%) was defined as either the allocator truthfully stating their endowment amount to the recipient (n = 118) or failing to state an endowment amount but giving the recipient half or more of the endowment (n = 23). Of the allocators who truthfully stated their endowment, 42 stated it immediately, 75 stated it after being
asked by the receiver about the amount of the endowment, and video was missing for one truth. See Table 1 for frequencies of interaction type between FtF and CMC.

Table 1. Total Interactions by Communication Channel Type.

                                 Interaction type
Communication   Truth   Lie   Omission   Truth-     Truth after   Lie after   Total
channel                                  omission   question      question
CMC               19      6       4         11          36            18         94
FtF*              23      4       9         12          39            11         98
Total             42     10      13         23          75            29        192

Note: CMC = computer-mediated communication; FtF = face-to-face.
*Two FtF interactions are excluded from this table due to no video.

Omission with offers less than half was classified as deceptive omission because previous research using the ultimatum game has found that recipients view offers less than half as unfair and often reject unfair offers (Huck, 1999; Straub & Murnighan, 1995; Valenzuela & Srivastava, 2012). In fact, "attesting to its importance as a model of strategic behavior, ultimatum games have been widely used to document behavioral regularities that were interpreted to imply that fairness considerations often override strategic considerations," and recipients will often take less money in order to punish allocators for an unfair offer (Valenzuela & Srivastava, 2012, p. 1672). Therefore, we expected allocators using deceptive omission to self-report less honesty than truthful allocators because they would perceive themselves as concealing information about an unfair offer. A one-way ANOVA on the Allocator reported honesty factor showed significant differences between truth (M = 6.56, SD = 0.81), lies (M = 3.61, SD = 0.83), and deceptive omission (M = 4.45, SD = 1.16), F (2,150) = 209.02, η2 = 0.69, p < .001. Post hoc tests using Fisher's least significant difference (LSD) found that truths significantly differed from lies (p < .001) and deceptive omission (p < .001), and deceptive omission and lies significantly differed (p = .002). We also compared the coders' average rating of deceptiveness of the interaction (1 (Not at all) to 7 (Completely)) to offer type, F (2,189) = 426.94, η2 = 0.82, p < .001. Post hoc tests using Fisher's LSD found that truths (M = 1.44, SD = 0.90) significantly differed from lies (M = 6.06, SD = 1.06; p < .001) and deceptive omission (M = 5.27, SD = 0.88; p < .001), and deceptive omission and lies significantly differed (p < .01). Allocators were asked "Did you lie to your partner?" and answered Yes (Truth = 1, Lies = 36, Deceptive omission = 3) or No (Truth = 140, Lies = 3, Deceptive omission = 9). One allocator initially lied in the interaction, but then told the truth. We classified this as truth, but the allocator circled "Yes" to this question. Three allocators told their partner that their endowment was "around/about five." These allocators circled that they did not lie to their partner, but we classified the interaction as a lie. Two of these three allocators then went on to state that they did not tell the whole truth. Allocators were also asked "Did you tell your partner the whole truth?" and answered Yes (Truth = 128, Lies = 1, Deceptive omission = 4) or No (Truth = 12, Lies = 38, Deceptive omission = 8). Allocators' self-reported honesty factor significantly correlated with both questions (Yes = 1, No = 0), "did you lie" (r = –.78, p < .001) and "tell whole truth" (r = .85, p < .001).
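The omnibus test plus Fisher's LSD comparisons used throughout the Results can be sketched as follows: an ordinary one-way ANOVA, then unprotected pairwise t-tests that reuse the pooled within-group error term. The group data here are simulated stand-ins for the allocator honesty factor, not the study's values.

```python
# One-way ANOVA with Fisher's LSD pairwise comparisons (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
groups = {
    "truth": rng.normal(6.5, 0.8, 141),
    "lie": rng.normal(3.6, 0.8, 40),
    "omission": rng.normal(4.5, 1.1, 13),
}
f, p = stats.f_oneway(*groups.values())
print(f"F = {f:.2f}, p = {p:.4f}")

# Fisher's LSD: pairwise t-tests using the pooled within-group error term.
n_total = sum(len(g) for g in groups.values())
df_err = n_total - len(groups)
ms_err = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / df_err
names = list(groups)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        a, b = groups[names[i]], groups[names[j]]
        se = np.sqrt(ms_err * (1 / len(a) + 1 / len(b)))
        t = (a.mean() - b.mean()) / se
        p_pair = 2 * stats.t.sf(abs(t), df_err)
        print(f"{names[i]} vs {names[j]}: t = {t:.2f}, p = {p_pair:.4f}")
```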

We compared the number of lies, deceptive omissions, and truths between FtF (Truth = 75, Lies = 16, Deceptive omission = 9) and CMC (Truth = 66, Lies = 24, Deceptive omission = 4) and found no differences in overall rate of deception between the two conditions, χ2 (df = 2) = 3.92, p = .14. Hypothesis 4 predicted that, of the deceptive interactions (CMC = 28, FtF = 25), there would be more deceptive omission in FtF than CMC and more lies in CMC than FtF. An analysis on just the deceptive interactions was marginally significant, χ2 (df = 1) = 3.36, p = .067, with the frequencies of lies and deceptive omission in the predicted direction. There was no difference in average amount of offer between FtF (M = 2.74, SD = 0.70) and CMC (M = 2.75, SD = 0.44), F (1,192) = 0.02, η2 = 0.00, p = .88. Offer and allocator self-reported honesty were significantly related (r = .56, p < .0001). Four offers (two lies, one deceptive omission, and one truth in which the allocator took more than half the money but was honest about it) were rejected by the recipient; these offers were all in the CMC condition.
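Using the frequencies reported in this paragraph, the two chi-square tests can be reproduced roughly as below (Pearson chi-square without Yates' correction, which matches the reported values; the exact software the authors used is not stated).

```python
# Chi-square tests for deception by channel, using the counts reported above.
import numpy as np
from scipy.stats import chi2_contingency

# rows: FtF, CMC; columns: truth, lie, deceptive omission
overall = np.array([[75, 16, 9],
                    [66, 24, 4]])
chi2, p, dof, _ = chi2_contingency(overall, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2f}")      # ~3.92, .14

# Deceptive interactions only -- rows: FtF, CMC; columns: lie, omission
deceptive_only = np.array([[16, 9],
                           [24, 4]])
chi2, p, dof, _ = chi2_contingency(deceptive_only, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")      # ~3.36, .067
```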

Relationship

Participants were asked how well they knew their partner on eight questions (e.g., “How well do you know your partner?”) and answered from 1 (Don’t know at all, complete strangers, never worked together . . .) to 7 (Know very well, very good friends, worked together a lot). Allocators generally did not know their partner (M = 1.40, SD = 0.96), and there was no difference between offer type (lie, deceptive omission, truth), F (2,191) = 0.65, η2 = 0.01, p = .52. Receivers also generally did not know their partner (M = 1.39, SD = 0.88) and there was no difference between offer type, F (2,190) = 1.29, η2 = 0.01, p = .28.

Suspicion and Detection of Deception

This section presents results on the effects of offer type and communication channel on amount of recipient suspicion and accuracy in detecting truths and deception. Receivers were asked, "Did your partner tell you how much money he or she was given by the experimenter to allocate?" and answered Yes or No. If they answered yes, they were asked, "If yes, do you think your partner lied to you about the amount of money they were given by the experimenter?," and if they answered no, they were asked, "If no, do you think your partner avoided telling you the amount of money the experimenter gave them so that they could keep more of the money?" We operationalized No Suspicion as either the receiver indicating that their partner did not lie to them or indicating that their partner did not avoid telling them a total amount to keep more money. We operationalized Suspicion as the receiver indicating that their partner lied about the total amount that they were given or that their partner avoided telling them a total to keep more money. An ANOVA found significantly higher responses on recipients' suspicion factor from the postquestionnaire when participants were
classified as suspicious (M = 2.90, SD = 1.02) than when they were classified as not suspicious (M = 1.86, SD = 0.88), F (1,189) = 185.33, η2 = 0.50, p < .001. The recipients' suspicion factor also correlated with offer amount (r = –.19, p = .01).

Correct detection of deception was created by combining deception type with suspicion. It was coded as either No (the person was suspicious but was not misled, or the person was not suspicious in the face of deception) or Yes (the person was not suspicious and was not misled, or the person was suspicious in the face of deception). Table 2 shows correct detection by offer type and communication condition (FtF/CMC). For truths, there was not a significant difference between FtF and CMC in the detection of deception, χ2 (df = 1) = 0.07, p = .79. Receivers correctly detected 77% of truths in CMC and 79% of truths in FtF. Therefore, there was not a reduced truth bias for truths in the CMC condition. For lies, there was a significant difference between FtF and CMC, χ2 (df = 1) = 5.00, p = .025. Only one receiver (out of 16) whose partner lied in the FtF condition was suspicious. Therefore, receivers were more likely to detect a lie when they communicated over the computer. This supports Hypothesis 2. For deceptive omission, all receivers in both conditions were suspicious.
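The classification and scoring rules described in the Coding and Results sections can be summarized in two small functions; the names and signatures below are ours, not part of the authors' materials.

```python
# Sketch of the coding logic: offer type from what the allocator said and
# offered, and correct detection from offer type plus recipient suspicion.

def offer_type(stated_amount, actual_amount, offer):
    """Return 'truth', 'lie', or 'deceptive omission' for one interaction."""
    if stated_amount is not None:
        return "truth" if stated_amount == actual_amount else "lie"
    # No amount stated: deceptive only if the recipient got less than half.
    return "deceptive omission" if offer < actual_amount / 2 else "truth"

def correct_detection(kind: str, suspicious: bool) -> bool:
    """True when suspicion matches whether the offer was deceptive."""
    deceptive = kind in ("lie", "deceptive omission")
    return suspicious == deceptive

print(offer_type(stated_amount=4, actual_amount=6, offer=2))     # lie
print(offer_type(stated_amount=None, actual_amount=6, offer=2))  # deceptive omission
print(correct_detection("lie", suspicious=True))                 # True (a hit)
```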

In terms of truth bias, 70.8% of the receivers thought the allocator was truthful, so there was a truth bias. However, since 72.3% of the allocators were, in fact, truthful, this truth bias contributed to high accuracy; 68.58% (CMC = 68.09%; FtF = 69.07%) of the receivers were accurate in their judgment of truth or deception. There was no difference between CMC and FtF in accuracy, F (1,189) = 0.02, η2 = 0.00, p = .88.

Specifically, 78.3% of the receivers interacting with a truthful allocator correctly determined that the allocator was truthful. For lies, 25% of receivers correctly detected lies (37.50% in CMC and 6.25% in FtF); and 100% of receivers were suspicious when their partner deceived through omission. Overall, truth and deceptive omission were more easily ascertained than lies. In the CMC condition, the allocators lied 25.5% of the time, and receivers were suspicious 37.50% of the time when the sender was lying. Therefore, their detection rate of lies was above the base rate of lies (25.5%), although 37.5% did not significantly differ from 25.5%, t (24) = 1.19, p = .25. Therefore, in the CMC condition, receivers were not better than chance at detecting a lie.

Table 2. Correct and Incorrect Suspicion by Interaction Type and Channel.

                              Interaction type
Suspicion               Truth    Lie    Omission    Total
Correct suspicion
  CMC                     51       9        4          64
  FtF                     57       1        9          67
Incorrect suspicion
  CMC                     15      15        0          30
  FtF                     15      15        0          30
Total                    138      40       13

Note: CMC = computer-mediated communication; FtF = face-to-face.


Demeanor

Next, we present results on the relationship between recipients' perception of the allocator's demeanor, suspicion, and detection accuracy. Using Levine et al.'s (2011) "eleven behaviors and impressions linked to honest-dishonest demeanor" as a guide, we created an honest demeanor measure and a dishonest demeanor measure.2 In the FtF condition, the honest demeanor measure was the mean of questions about whether the partner had answers that were believable, smiled frequently, was very pleasant, behaved as expected, and engaged in a normal conversation. The dishonest demeanor measure included the mean of questions about whether the partner gave evasive answers, took a long time to answer, fidgeted, avoided eye contact, fiddled with clothes, and had unusual behavior. In the CMC condition, we also created an honest and dishonest demeanor scale, but did not include questions about allocator nonverbal behavior (smiled frequently and behaved as expected for honest demeanor and fidgety, avoided eye contact, and fiddled with clothes for dishonest demeanor) that the receiver was unable to observe. In both FtF and CMC conditions, the honest demeanor and dishonest demeanor measures were negatively correlated, r = –.68, p < .001; r = –.57, p < .001, respectively. In the FtF condition, we regressed our dichotomous measure of suspicion on both the honest and dishonest demeanor measures, Cox & Snell R2 = .243, Nagelkerke R2 = .357, χ2 = 27.0, p < .001. The measure of dishonest demeanor significantly predicted suspicion, B = –.840, SE = .35, Wald = 5.78, p = .016. The measure of honest demeanor did not predict suspicion, B = .566, SE = .369, Wald = 2.35, p = .125. In the CMC condition, we regressed suspicion on both the honest and dishonest demeanor measures, Cox & Snell R2 = .264, Nagelkerke R2 = .377, χ2 = 28.5, p < .001. The measure of dishonest demeanor significantly predicted suspicion, B = –.656, SE = .199, Wald = 10.84, p = .001. The measure of honest demeanor was not a significant predictor, B = .451, SE = .298, Wald = 2.286, p = .13.
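A sketch of one of these binary logistic regressions, including the two pseudo-R-squared statistics reported above, is shown below. The predictor and outcome values are simulated stand-ins for the demeanor scales and the dichotomous suspicion measure, not the study's data.

```python
# Binary logistic regression of suspicion on demeanor, with Cox & Snell and
# Nagelkerke pseudo-R-squared (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 190
honest = rng.normal(5.0, 1.0, n)                 # honest-demeanor scale
dishonest = rng.normal(2.5, 1.0, n)              # dishonest-demeanor scale
eta = -2.0 + 0.9 * dishonest - 0.3 * honest      # simulated relationship
suspicion = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X = sm.add_constant(np.column_stack([honest, dishonest]))
model = sm.Logit(suspicion, X).fit(disp=False)
null = sm.Logit(suspicion, np.ones((n, 1))).fit(disp=False)

cox_snell = 1 - np.exp(2 * (null.llf - model.llf) / n)
nagelkerke = cox_snell / (1 - np.exp(2 * null.llf / n))
print(model.params, model.pvalues)               # B coefficients and p-values
print(round(cox_snell, 3), round(nagelkerke, 3))
```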

We also investigated how perceptions of demeanor were related to correct detection of deception. We performed four binary logistic regressions, one for each pairing of condition and interaction type (e.g., a truthful FtF interaction was one test), and regressed Correct Detection on honest and dishonest demeanor. In a truthful, FtF interaction, the cues significantly and negatively predicted correct detection, Cox & Snell R2 = .236, Nagelkerke R2 = .368, χ2 = 19.36, p < .001. Perception of dishonest demeanor cues led to a reduction in correct detection of truth, B = 1.18, SE = .457, Wald = 6.63, p = .01. Perception of honest cues was not related to correct detection. The same pattern was observed for truthful, CMC interactions, Cox & Snell R2 = .253, Nagelkerke R2 = .398, χ2 = 14.3, p = .001. Dishonest demeanor cues were associated with a reduction in correct detection of truths, B = 1.08, SE = .429, Wald = 6.30, p = .012; honest demeanor cues had no association. In FtF interactions with liars, the rate of correct detection was very low and neither honest nor dishonest demeanor cues were associated with detection accuracy. In CMC interactions with liars, the overall model was not significantly different from 0 (Cox & Snell R2 = .245, Nagelkerke R2 = .333, χ2 = 5.89, p = .053) and neither factor was associated with correct detection of deception. For all interactions in cases of deceptive omission, recipients were correctly suspicious of their partner.


Allocators’ Perception of Suspicion, Guilt, and Tension

To measure emotions, we used a PANAS-style scale in which participants rate how much they are currently feeling a list of 20 emotion adjectives on a scale from 1 (Not at all) to 5 (Extremely). From the original PANAS scale (Watson et al., 1988), we removed seven adjectives (interested, determined, inspired, alert, active, strong, and attentive) and replaced them with seven new ones deemed more pertinent to the current study (depressed, happy, sad, nervous, tense, uncomfortable, angry). To assess this new measure and prepare it for analysis, we performed an exploratory factor analysis on the items for the allocator. The analysis showed four factors accounting for 65.0% of the variance. We labeled the first factor "Tension" (eigenvalue = 7.10); it contains the items Anxious, Scared, Sad, Nervous, Tense, Jittery, Uncomfortable, and Afraid. Because Sad did not fit with the other items conceptually, and because it cross-loaded with Factor 3, it was removed. The remaining items showed an acceptable Cronbach's α (.86), and a mean of these items was taken for use in analysis. The second factor was labeled "Guilt" (eigenvalue = 2.44) and contains the items Distressed, Upset, Guilty, and Ashamed. As the Cronbach's α for these items was acceptable (.83), a mean was taken for use in analysis. The third factor was labeled "Hostile" (eigenvalue = 2.01); it contains the items Depressed, Hostile, Irritable, and Angry. Because Depressed did not fit with the other items conceptually, and because removing it improved the scale's reliability, this item was dropped; the remaining three items showed acceptable reliability (Cronbach's α = .81) and were averaged for analysis. The fourth factor (eigenvalue = 1.46) was labeled "Positive Emotions" and contains the items Excited, Enthusiastic, Proud, and Happy. Cronbach's α for these items was acceptable (.76) and their average was taken for use in analysis. We also took an average of the negative emotion items.
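The factor scores described here are simple item means after the dropped items are removed; the sketch below (with made-up 1-5 ratings) illustrates that scoring. The "negative emotion" average is computed over the negative items retained here, which is our reading of the text rather than the authors' exact item set.

```python
# Scoring the PANAS-style factors as item means (hypothetical ratings).
import numpy as np
import pandas as pd

factors = {
    "tension": ["anxious", "scared", "nervous", "tense", "jittery",
                "uncomfortable", "afraid"],            # "sad" dropped
    "guilt": ["distressed", "upset", "guilty", "ashamed"],
    "hostile": ["hostile", "irritable", "angry"],      # "depressed" dropped
    "positive": ["excited", "enthusiastic", "proud", "happy"],
}
items = sorted({name for names in factors.values() for name in names})
rng = np.random.default_rng(5)
ratings = pd.DataFrame(rng.integers(1, 6, size=(194, len(items))), columns=items)

scores = pd.DataFrame({f: ratings[cols].mean(axis=1) for f, cols in factors.items()})
# Mean of the negative-emotion items retained above (everything but "positive").
scores["negative"] = ratings.drop(columns=factors["positive"]).mean(axis=1)
print(scores.head())
```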

On the PANAS-style scale for negative emotion, from 1 (Not at all) to 5 (Extremely), a one-way ANOVA on the mean of the negative emotions showed significant differences between truth-tellers (M = 1.21, SD = 0.30), liars (M = 1.69, SD = 0.65), and omitters (M = 1.48, SD = 0.34), F (2,191) = 23.98, η2 = 0.20, p < .001. Post hoc tests using Fisher's LSD comparisons found that truth-tellers reported significantly less negative emotion than both liars (p < .001) and omitters (p = .019), but the amount of negative emotion reported by liars and omitters was indistinguishable (p = .10).

The mean of the emotions in the Tension factor, rated from 1 (Not at all) to 5 (Extremely), was analyzed in a one-way ANOVA. There were significant differences between truth-tellers (M = 1.37, SD = 0.51), liars (M = 1.80, SD = 0.86), and omitters (M = 1.59, SD = 0.41), F (2,191) = 8.51, η2 = 0.08, p < .001. Post hoc tests using Fisher's LSD comparisons found that truth-tellers reported significantly less tension than liars (p < .001). There was not enough evidence to conclude differences in Tension between truth-tellers and omitters (p = .19) or between liars and omitters (p = .28).

Similarly, on the Guilt factor, there were significant differences between truth-tellers (M = 1.11, SD = 0.31), liars (M = 2.13, SD = 0.86), and omitters (M = 1.63, SD = 0.63), F (2,191) = 67.1, η2 = 0.41, p < .001. Post hoc tests using Fisher's LSD comparisons found that truth-tellers reported significantly less guilt than both liars
(p < .001) and omitters (p < .001), and omitters reported less guilt than liars (p = .002). As allocators gave the receiver less money, they reported higher levels of guilt (r = –.53, p < .001).

Because guilt might be lessened with less social distance, we compared guilt between FtF and CMC for each type of deception. There was a main effect of communication condition. Participants communicating over CMC (M = 1.47, SD = 0.09) reported feeling less guilt overall than those interacting FtF (M = 1.73, SD = 0.07), F (1, 188) = 4.84, η2 = 0.03, p = .029. When comparing individual pairings, however, there was not enough evidence to conclude significant differences between communication conditions in any interaction type; guilt levels were indistinguishable between CMC and FtF for truth, lie, and deceptive omission.

Discussion

In summary, we found that participants engaged in some form of deception in 27.32% of the interactions. We found evidence of both demeanor effects and truth bias. Hypotheses 1a and 1b predicted that demeanor effects would be related to suspicion, but not accuracy. In support of Hypothesis 1b, perception of the allocator having a dishonest demeanor increased suspicion, but dishonest demeanor was related to reduced detection accuracy for truths and had no relationship to detection accuracy for deception. Contrary to Hypothesis 1a, honest demeanor had no relationship to either suspicion or detection accuracy. Overall, demeanor cues were not helpful in accurate detection of truths and deception. This finding replicates Levine et al. (2011). Hypothesis 2 predicted that participants would be more likely to detect a lie through CMC than FtF, and this was supported. Hypothesis 3 predicted a lower truth bias in CMC, but truth bias did not differ between CMC and FtF for truths. Overall, participants had a truth bias, and since most interactions were truthful, this contributed to high accuracy in both CMC and FtF. Therefore, we can be more confident that the increased detection of lies in CMC than FtF is due to demeanor effects rather than an overall reduced truth bias in CMC. In CMC, the receiver may have been less fooled by demeanor cues that were irrelevant to detecting a lie because these cues were less salient in CMC. Rates of deception did not differ between CMC and FtF, but type of deception marginally did, and the means were in the direction predicted by Hypothesis 4. There was more deceptive omission used FtF than CMC and more deceptive commission (bald-faced lies) used CMC than FtF.

Deception, Truth Bias, Demeanor, and Detection

Despite the fact that the allocator’s partner would never find out the allocation endow-ment, most people did not deceive, even in the CMC condition where face concerns should be lessened. This supports research by Mazar and Ariely (2006) that found that most people do not lie, even when there is no possibility of getting caught. However, recipients also had a strong truth bias, so the combination of low rates of deception and a strong truth bias contributed to high accuracy (68.58%). Specifically, this bias

at UNIV OF WISCONSIN-MADISON on February 9, 2015crx.sagepub.comDownloaded from

Van Swol et al. 17

contributed to high accuracy in the judgment of truthful interactions and low accuracy in the judgment of lies. This supports the veracity effect (Levine et al., 1999), which states that the truthfulness of the sender is the best predictor of the detection accuracy of the receiver. The presumption of truth may be a bias in a laboratory situation in which half the messages are manipulated to be lies, but with naturally occurring truth and deception, rates of deception tend to be much lower than 50%, calling into ques-tion whether the truth bias is a “bias.”

We replicated research by Levine et al. (2011) that found that perception of dishonest demeanor predicted suspicion but did not help increase detection accuracy. Rather, for truths, dishonest demeanor was related to increased inaccuracy in detection. The dishonest demeanor scale still predicted suspicion in the CMC condition where nonverbal cues and voice tone were not available. However, there were no effects of honest demeanor. What we cannot determine is the direction of influence; that is, we cannot say if demeanor causes suspicion or if suspicion affects perception of demeanor. Recipients might be more suspicious of allocators who display the behaviors associated with a dishonest demeanor. This suggests that, for truths, participants were misled by behaviors that are stereotypically associated with deception but are not helpful toward accurate detection. Alternatively, it is possible that when the recipient is suspicious, they assume that the allocator is displaying characteristics stereotypically associated with deception. A third explanation is also possible: When recipients become suspicious, the allocator begins to display dishonest demeanor characteristics, regardless of whether the allocator is actually lying.

We predicted better lie detection in CMC than FtF, and this was confirmed. Overall, only one person detected a lie FtF, so detection was well below chance detection lev-els. George and Carlson (2004, 2010) found that people preferred FtF communication for lies in several different scenarios, possibly because they perceive it is easier to lie FtF. Although recipients were more likely to detect lies CMC than FtF, their levels of detection were still not significantly better than chance. There are several possible conclusions we can make from the fact that only one person in the FtF condition was suspicious of a lie. First, perhaps people have a larger truth bias in FtF communication: They do not think someone will lie to their face. A problem with this conclusion is that there were no differences between FtF and CMC in incorrect suspicion when the allo-cator was telling the truth. Specifically, when told the truth, 15 out of 72 (20.8%) recipients FtF and 15 out of 66 (22.72%) recipients CMC were suspicious of a lie. Therefore, for truths, there was not a larger truth bias FtF compared to CMC, so we cannot make the conclusion that a higher overall truth bias is driving the lack of suspi-cion of lies in FtF.

Given the difficulty of attributing the better detection levels in CMC interactions to the truth bias, one plausible alternative explanation is that only skilled liars lied in the FtF condition when given the choice to lie or tell the truth (Levine et al., 2010). Serota, Levine, and Boster (2010) state, "We speculate that the prolific liars are likely those people with especially honest demeanor and that unusually transparent liars avoid lying. If most lies outside the laboratory are told by people who are usually believed, lie detection rates would be lower than those observed in randomized experiments" (p. 22).

Thus, the lower lie detection rates in FtF may be driven by allocators' self-selection based on their knowledge of their own deception ability. Also, for a skilled liar, it may be easier to tell a believable lie FtF because one has more cues at one's disposal to create a believable reality (Marett & George, 2004). These cues could also lead recipients to be less accurate in their suspicions. Specifically, recipients may be more likely to use inaccurate demeanor cues in the FtF condition than in the CMC condition because more of these cues, especially nonverbal behavior and vocal tone, are available through FtF communication.

Most likely it is a combination of several factors. Unskilled liars may be less comfortable lying in the FtF condition (borne out by the smaller number of bald-faced lies FtF), and skilled liars may be better at appearing truthful by controlling their dishonest demeanor behavior and voice tone (keeping calm, making eye contact, answering consistently and without pauses, not fidgeting, etc.). Because recipients have more of these inaccurate cues to deception in FtF interactions, the combination of recipients relying on these cues and skilled liars avoiding these behaviors leads to the high number of successful lies FtF. One way to test demeanor effects and deception skill is to assign people to a deception condition (lie or truth), so that skilled and unskilled liars are randomly distributed across deception types, and then test whether people are more successful at lies FtF than CMC.

Deceptive omission was a remarkably unsuccessful strategy for allocators seeking to avoid detection. All participants correctly detected deceptive omission, and half of the participants in the truthful omission condition were suspicious. However, these high rates of suspicion may be an artifact of the experimental task and instructions. Receivers knew the allocator was given some money, so any omission occurred within an ultimatum game where recipients knew omission was occurring; it was not covert omission. McCornack (1992) argued that for omission to be deceptive, it must be covert. An example of covert omission is a student telling her mother she is going to "The Library," while omitting the fact that "The Library" is the name of a local bar. Although the deceptive omission in this study was overt and part of the game, we classify it as deceptive because allocators were using it to avoid disclosing information about the fairness of their offer and were violating the maxim of quantity (Grice, 1989). Still, whether our results generalize to deceptive omission that is covert needs to be addressed in future research.

Despite it being an unsuccessful deception strategy, one reason allocators may have used deceptive omission is that they perceived it as less deceptive than telling a bald-faced lie. This is confirmed by both allocators' and independent judges' ratings of the deceptiveness of omission; it was rated as more deceptive than truth but less deceptive than lies. Also, allocators using deceptive omission felt less guilt than liars. This replicates previous research on omission and deception (Ritov & Baron, 1990; Spranca, Minsk, & Baron, 1991; Tenbrunsel & Messick, 2004; Van Swol et al., 2012). One reason for more deceptive omission FtF is that people who wanted to keep more money for themselves might have felt uncomfortable lying FtF, but not more uncomfortable using deceptive omission, because of the lower guilt and possibly because they anticipated less disapproval of deceptive omission if it was detected (DeScioli et al., 2011).

Communication Channel

We found differences in deception type between communication channels. Burgoon and Levine (2009) state that the base rate of 50% lies and 50% truths is a "common design feature of 40 years of deception detection experiments" and "has led to meta-analysis results with very narrow generality" (p. 211). By deviating from this design feature and allowing participants to choose to deceive, we can consider how communication channel might affect the decision to deceive. We found no differences in overall rates of deception between communication channels, but differences in how people chose to deceive. There were more lies in CMC and more deceptive omission in FtF. We had hypothesized that people might anticipate more social consequences for lying FtF and might doubt their ability to pull off a bald-faced lie to someone's face, and that this would increase the rate of deceptive omission over lies FtF. Ironically, lying to someone's face was a very effective strategy for avoiding detection and deceptive omission was a completely ineffective strategy, but this finding may be driven by sender skill, such that only confident and skillful liars will attempt a bald-faced lie to someone's face.

Some past research has found that people anticipate more deception in computer-mediated interactions (Whitty & Carville, 2008) and that deception occurs regularly in anonymous environments (Whitty, 2002). This may suggest that more deception should occur in the CMC condition, but counting both deceptive omission and lies, we found that rates of deception did not differ by condition. Because participants interacting over the computer met briefly before their mediated interaction, and because allocators knew they would be briefly introduced to their partner prior to making their allocation decision, the amount of deception may have been reduced. Possibly we would have found significant differences in overall deception between the two communication conditions if the computer condition had been anonymous. However, we chose not to make the computer chat condition anonymous because, in naturally occurring situations, most of our text chat, email, and other forms of CMC are not conducted with anonymous strangers. Rather, we usually have met the person on the receiving end of the communication. We sought to test, in part, how properties of the FtF interaction affected the detection of deception. For this reason, we were concerned that a totally anonymous interaction might result in different decision processes and interactions between participants. For example, an allocator who does not meet her partner may believe that she does not, in fact, have a partner and that her interaction is occurring with a confederate. Furthermore, because most communication occurring FtF or over mediated channels occurs with people we know, not with strangers, it was important to emphasize the lack of anonymity in the interaction. We recognize that this limits the conclusions we can make about how the channel itself contributed to rates of deception. This is a question that future research should address.

Another limitation of comparing CMC to FtF is that communication was recorded in both conditions. Outside the lab, CMC communication is naturally recordable, while FtF communication often is not.

This recordability of email or text messages may actually reduce deception in this communication channel in naturally occurring situations (Hancock, Woodworth, & Goorha, 2010), as senders exercise warranted caution toward the permanent record that email leaves.

Other Limitations and Future Research

There are several other limitations to our study. First, interactions were generally under 2 minutes. This did not give the receivers extended information from which to judge the allocator. Second, although we reasoned that participants may have a higher cognitive load in the FtF condition, we did not explicitly test this. Future research could confirm whether participants experienced differences in cognitive load in the two conditions or whether participants anticipated lying FtF to be more cognitively challenging. Another limitation, which we discussed previously, is our characterization of deceptive omission. We are not confident generalizing these results about deceptive omission to a situation where the omission is covert. Also, we could have provided a definition of deception to the participants before they filled out the questionnaire to make it less ambiguous that deceptive omission was considered deception by the researchers. However, this could also have primed participants to be more suspicious in their responses on the questionnaire or have affected their emotions on the PANAS-style scale.

Finally, the amount of money used was small, and this may not have provided much motivation either to deceive or to detect deception. A higher amount of money may have motivated more people to lie, although Van Swol et al. (2012) found no differences in deception between US$5 and US$30 provided to allocators. Most likely, as Mazar and Ariely (2006) found, most people just do not like to lie. Also, in many cases, the amount of money offered by the allocator was just a dollar or two higher than the default amount, and this may not have provided a strong motivation to detect deception. A future study could increase the monetary reward to the recipient for correct detection of deception. Possibly the recipient could get all the money (their offer plus the allocator's money) if they correctly detect deception, although this would probably further reduce rates of deception from the allocator.

Another interesting direction for future research would be to compare demeanor effects in conditions in which participants are assigned to lie or tell the truth versus conditions, as in this study, in which participants can decide to lie or be truthful. We would predict that demeanor effects would be more misleading when participants can choose to deceive or not and that detection accuracy for lies would be lower when participants can choose. Future research could also ask people before the experiment about their ability to lie; this might tap into skillful and unskilled liars. If so, it could be tested whether people who perceive themselves as less skillful liars are less likely to lie, especially in the FtF condition.

Finally, future research could study the effects of anonymity and whether deception would increase in CMC if participants were anonymous. Research could also examine other forms of CMC besides text chat. Text chat in this study was synchronous and had low rehearsability, but other forms of CMC, like email, allow low synchronicity and high rehearsability (Carlson, George, Burgoon, Adkins, & White, 2004).

Conclusion

We found that people are better at detecting lies when interacting over computer-based text chat than when interacting FtF, although they are still not better than chance. We hypothesized that people may be especially bad at detecting lies FtF, when the sender decides whether or not to deceive, due to both demeanor effects and the self-selection of unskilled liars toward truth. We also found that people use more omission to deceive FtF and more bald-faced lies in CMC, possibly because people are less worried about being caught in a bald-faced lie CMC. In conclusion, although rates of overall deception did not differ between CMC and FtF, the ability to detect that deception did differ.

Appendix

Allocator Questionnaire

Answer the following questions about the person with whom you just interacted.
Questions 1 to 8 scored from 1 to 7, with labels appropriate for the item; for example, question 1 is 1 (Never talked) and 7 (Talked often).

(1) How often have you talked with your partner before this experiment?
(2) How often have you worked together on class projects?
(3) How often have you socialized with your partner outside of class?
(4) How much contact are you likely to have with your partner after the experiment is over?
(5) How well do you know your partner?
(6) How well do you think your partner knows you?
(7) How often have you had class with your partner?
(8) How friendly are you with your partner?

(1) How much money did you allocate to your partner? $__________
(2) Did you tell your partner an amount of money that the experimenter gave you to allocate? Yes or No
    If no, did you avoid telling your partner the amount of money the experimenter gave you so that you could keep more money for yourself? Yes or No
(3) How much money do you think your partner believed you had been given by the experimenter? $__________
(4) Do you believe you deceived your partner? Yes or No
(5) Did you lie to your partner? Yes or No
(6) Did you tell your partner the whole truth? Yes or No

Questions 7-27 scored from 1 (True) to 7 (False). Questions 28-30 scored from 1 (Very likely) to 7 (Very unlikely).

(7) I was successful in making a good impression with my partner.
(8) I was completely honest with my partner.
(9) My partner did not seem to believe my offer.

(10) My partner was suspicious of my offer.
(11) My partner thought I was completely honest with him/her.
(12) My partner kept pressing me on my answers.
(13) My partner didn't seem to accept my offer and explanations.
(14) My partner appeared fidgety and uncomfortable.
(15) My partner fiddled with his/her clothes or objects.
(16) My partner smiled frequently.
(17) My partner gave me lots of eye contact.
(18) My partner was very pleasant during the discussion.
(19) My partner trusted me.
(20) I was very tense while talking to my partner.
(21) I was deceptive with my partner.
(22) I gave evasive and ambiguous answers to the questions.
(23) I took a long time before responding to the questions.
(24) My answers to my partner's questions were consistent.
(25) I felt relaxed and at ease while interacting with my partner.
(26) I gave very brief answers.
(27) If a person had the opportunity to successfully deceive you out of money by lying to you, how likely is it that this person would do so?
(28) If you had the opportunity to successfully deceive a person out of money by lying to him or her, how likely is it that you would do so?
(29) Most people in my situation would probably lie in order to keep more money for themselves.
(30) This scale consists of a number of words that describe different feelings and emotions. Read each item and then mark the appropriate answer in the space next to that word. Indicate to what extent you feel this way right now, that is, at the present moment. Use the following scale to record your answers.

1 (Very slightly or not at all), 2 (A little), 3 (Moderately), 4 (Quite a bit), 5 (Extremely)
Anxious, Distressed, Excited, Upset, Depressed, Guilty, Scared, Hostile, Enthusiastic, Proud, Irritable, Happy, Ashamed, Sad, Nervous, Tense, Jittery, Uncomfortable, Afraid, Angry

Recipient Questionnaire

Answer the following questions about the person with whom you just interacted.
Questions 1 to 8 scored from 1 to 7, with labels appropriate for the item; for example, question 1 is 1 (Never talked) and 7 (Talked often).

(1) How often have you talked with your partner before this experiment?
(2) How often have you worked together on class projects?
(3) How often have you socialized with your partner outside of class?
(4) How much contact are you likely to have with your partner after the experiment is over?

(5) How well do you know your partner?
(6) How well do you think your partner knows you?
(7) How often have you had class with your partner?
(8) How friendly are you with your partner?

(1) How much of the money do you think your partner allocated to you? None, Less than Half, Half, More than Half, All
(2) How much money do you think your partner had to allocate? $__________
(3) Did your partner tell you how much money he or she was given by the experimenter to allocate? Yes or No
    If yes, do you think your partner lied to you about the amount of money they were given by the experimenter? Yes or No
    If no, do you think your partner avoided telling you the amount of money the experimenter gave them so that they could keep more of the money? Yes or No

Questions 4-23 scored from 1 (True) to 7 (False). Questions 24-26 scored from 1 (Very likely) to 7 (Very unlikely).

(4) I found my partner's answer believable.
(5) My partner was very open and forthcoming with me.
(6) My partner was completely honest with me.
(7) My partner was not sincere in answering my questions.
(8) My partner gave very brief answers.
(9) My partner gave evasive and ambiguous answers to my questions.
(10) My partner took a long time before responding to my questions.
(11) My partner appeared fidgety and uncomfortable.
(12) My partner avoided looking me in the eye while answering.
(13) My partner smiled frequently.
(14) My partner fiddled with his/her clothes or objects.
(15) My partner was very pleasant during the discussion.
(16) My partner made a good impression on me.
(17) My partner's behavior was unusual.
(18) My partner behaved the way I expect most people to behave.
(19) My partner engaged in normal conversational behavior.
(20) My partner was manipulative.
(21) Most people in my partner's situation would probably lie in order to keep more money for themselves.
(22) I was suspicious of my partner's offer.
(23) I thought my partner was being deceptive.
(24) If a person had the opportunity to successfully deceive you out of money by lying to you, how likely is it that this person would do so?
(25) If you had the opportunity to successfully deceive a person out of money by lying to him or her, how likely is it that you would do so?
(26) This scale consists of a number of words that describe different feelings and emotions. Read each item and then mark the appropriate answer in the space next to that word. Indicate to what extent you feel this way right now, that is, at the present moment. Use the following scale to record your answers.

1 (Very slightly or not at all), 2 (A little), 3 (Moderately), 4 (Quite a bit), 5 (Extremely)
Anxious, Distressed, Excited, Upset, Depressed, Guilty, Scared, Hostile, Enthusiastic, Proud, Irritable, Happy, Ashamed, Sad, Nervous, Tense, Jittery, Uncomfortable, Afraid, Angry

Acknowledgments

We thank Caitlin Cusack, Kasi Graff, Sarah Haydostian, Lindsey Mair, Carla Pentimone, Olivia Weyers, Karen Dohnal, Daniel Kaplan, Lindsay Montgomery, Peter Moomjian, Randi Russel, Brennan Harris, Ellen Meinholz, Rebecca Rogers, Heena Shin, and Michael Ray for their help running the experiments, transcribing interactions, entering data, and coding transcripts. Thanks to Peter Sengstock for technical help. Thanks to two anonymous reviewers for their incredibly helpful comments.

Declaration of Conflicting Interests

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by a grant from the Wisconsin Alumni Research Foundation.

Notes

1. In the FtF condition, offer type (lie, deceptive omission, truth) did not differ significantly by gender of the allocator, χ2 (df = 2) = 4.62, p = .10, or by gender composition of the dyad (same gender vs. mixed gender), χ2 (df = 2) = 2.04, p = .36.
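For readers who want to reproduce the form of these tests, the following minimal sketch runs a chi-square test of independence on a contingency table of offer type by allocator gender. The cell counts are hypothetical placeholders, because the article reports only the test statistics, not the underlying frequencies.

# Hypothetical contingency table: rows are allocator gender, columns are
# offer type (lie, deceptive omission, truth) in the FtF condition.
from scipy.stats import chi2_contingency

offer_by_gender = [[4, 10, 25],   # placeholder counts, not the study's data
                   [8, 5, 20]]    # placeholder counts, not the study's data
chi2, p, dof, _ = chi2_contingency(offer_by_gender)
print(f"chi-square(df = {dof}) = {chi2:.2f}, p = {p:.2f}")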

2. These measures were created with permission from Levine et al. (2011).

References

Bond, C. F., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10, 214-234. doi:10.1207/s15327957pspr1003_2

Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communication Theory, 6, 203-242. doi:10.1111/j.1468-2885.1996.tb00127.x

Burgoon, J. K., & Levine, T. R. (2009). Advances in deception detection. In S. Smith & S. Wilson (Eds.), New Directions in Interpersonal Communication (pp. 201-220). Thousand Oaks, CA: SAGE.

Burgoon, J. K., Stoner, G. M., Bonito, J. A., & Dunbar, N. E. (2003). Trust and deception in mediated communication. Proceedings of the 36th Hawaii International Conference on System Sciences, Waikolua, HI.

Carlson, J. R., George, J. F., Burgoon, J. K., Adkins, M., & White, C. H. (2004). Deception in computer-mediated communication. Group Decision and Negotiation, 13, 5-28. doi:10.1023/B:GRUP.0000011942.31158.d8

Caspi, A., & Gorsky, P. (2006). Online deception: Prevalence, motivation, and emotion. Cyberpsychology and Behavior, 9, 54-59. doi:10.1089/cpb.2006.9.54

DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. (1996). Lying in everyday life. Journal of Personality and Social Psychology, 70, 979-995. doi:10.1037/0022-3514.70.5.979

DeScioli, P., Christner, J., & Kurzban, R. (2011). The omission strategy. Psychological Science, 22, 442-446. doi:10.1177/0956797611400616

Duran, N. D., Hall, C., McCarthy, P. M., & McNamara, D. S. (2010). The linguistic correlates of conversational deception: Comparing natural language processing technologies. Applied Psycholinguistics, 31, 439-462. doi:10.1017/S0142716410000068

Ekman, P., & Friesen, W. V. (1969). Nonverbal leakage and clues to deception. Psychiatry, 32, 88-106.

George, J. F., & Carlson, J. R. (2004). Media appropriateness in the conduct and discovery of deceptive communication: The relative influence of richness and synchronicity. Group Decision and Negotiation, 13, 191-210. doi:10.1023/B:GRUP.0000021841.01346.35

George, J. F., & Carlson, J. R. (2005). Media selection for deceptive communication. Proceedings of the 38th Hawaii International Conference on System Sciences, Waikolua, HI.

George, J. F., & Carlson, J. R. (2010). Lying at work: A deceiver's view of media characteristics. Communications of the Association for Information Systems, 27, Article 44. Retrieved from http://aisel.aisnet.org/cais/vol27/iss1/44

George, J. F., & Robb, A. (2008). Deception and computer-mediated communication in daily life. Communication Reports, 21, 92-103. doi:10.1080/08934210802298108

Grice, P. (1989). Studies in the way of words. Cambridge, MA: Harvard University Press.

Hancock, J. T., Curry, L., Goorha, S., & Woodworth, M. T. (2008). On lying and being lied to: A linguistic analysis of deception. Discourse Processes, 45, 1-23. doi:10.1080/01638530701739181

Hancock, J. T., Thom-Santelli, J., & Ritchie, T. (2004a). Deception and design: The impact of communication technology on lying behavior. CHI Letters, 6(1), 129-134. doi:10.1145/985692.985709

Hancock, J. T., Thom-Santelli, J., & Ritchie, T. (2004b). What lies beneath: The effect of communication medium on deception production. Annual Meeting for the Society for Text and Discourse, Chicago, IL.

Hancock, J. T., Woodworth, M. T., & Goorha, S. (2010). See no evil: The effect of communication medium and motivation on deception detection. Group Decision & Negotiation, 19, 327-343. doi:10.1007/s10726-009-9169-7

Huck, A. (1999). Responder behavior in ultimatum games with incomplete information. Journal of Economic Psychology, 20, 183-206. doi:10.1016/S0167-4870(99)00004-5

Jacobs, S., Dawson, E. J., & Brashers, D. (1996). Information manipulation theory: A replication and assessment. Communication Monographs, 63, 70-82. doi:10.1080/03637759609376375

Levine, T. R. (2010). A few transparent liars. Communication Yearbook, 34, 40-61.

Levine, T. R., Asada, K. J., & Massi, L. L. (2003). The relative impact of violation type and lie severity on judgments of message deceptiveness. Communication Research Reports, 20, 208-218. doi:10.1080/08824090309388819

Levine, T. R., Kim, R. K., Park, H. S., & Hughes, M. (2006). Deception detection accuracy is a predictable linear function of message veracity base-rate: A formal test of Park and Levine's probability model. Communication Monographs, 73, 243-260. doi:10.1080/03637750600873736

Levine, T. R., Park, H. S., & McCornack, S. A. (1999). Accuracy in detecting truths and lies: Documenting the “veracity effect.” Communication Monographs, 66, 125-144. doi:10.1080/03637759909376468

Levine, T. R., Serota, K. B., Shulman, H., Clare, D. D., Park, H. S., Shaw, A. S., & Lee, J. H. (2011). Sender demeanor: Individual differences in sender believability have a powerful impact on deception detection judgments. Human Communication Research, 37, 377-403. doi:10.1111/j.1468-2958.2011.01407.x

Levine, T. R., Shaw, A., & Shulman, H. (2010). Increasing deception detection accuracy with strategic direct questioning. Human Communication Research, 36, 216-231. doi:10.1111/j.1468-2958.2010.01374.x

Marett, L. K., & George, J. F. (2004). Deception in the case of one sender and multiple receivers. Group Decision and Negotiation, 13, 29-44.

Mazar, N., & Ariely, D. (2006). Dishonesty in everyday life and its policy implications. Journal of Public Policy and Marketing, 25, 1-21. doi:10.1509/jppm.25.1.117

McCornack, S. A. (1992). Information manipulation theory. Communication Monographs, 59, 1-16. doi:10.1080/03637759209376245

McCornack, S. A., Levine, T. R., Solowczuk, K. A., Torres, H. I., & Campbell, D. M. (1992). When the alteration of information is viewed as deception: An empirical test of information manipulation theory. Communication Monographs, 59, 17-29. doi:10.1080/03637759209376246

McCornack, S. A., & Parks, M. R. (1986). Deception detection and relationship development: The other side of trust. In M. L. McLaughlin (Ed.), Communication Yearbook 9 (pp. 377-389). Beverly Hills, CA: Sage.

Miller, G. R., Mongeau, P. A., & Sleight, C. (1986). Fudging with friends and lying to lovers: Deceptive communication in personal relationships. Journal of Social and Personal Relationships, 3, 495-512. doi:10.1177/0265407586034006

O’Sullivan, M., Ekman, P., & Friesen, W. V. (1988). The effect of comparisons on detecting deceit. Journal of Nonverbal Behavior, 12(3, Pt. 1), 203-215. doi:10.1007/BF00987488

Park, H. S., & Levine, T. R. (2001). A probability model of accuracy in deception detection experiments. Communication Monographs, 68, 201-210. doi:10.1080/03637750128059

Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: Omission bias and ambiguity. Journal of Behavioral Decision Making, 3, 263-277. doi:10.1002/bdm.3960030404

Selwyn, N. (2008). A safe haven for misbehaving? An investigation of online misbehavior among university students. Social Science Computer Review, 26, 446-465. doi:10.1177/0894439307313515

Serota, K. B., Levine, T. R., & Boster, F. J. (2010). The prevalence of lying in America: Three studies in self-reported lies. Human Communication Research, 36, 2-25. doi:10.1111/j.1468-2958.2009.01366.x

Spranca, M., Minsk, E., & Baron, J. (1991). Omission and commission in judgment and choice. Journal of Experimental Social Psychology, 27, 76-105. doi:10.1016/0022-1031(91)90011-T

Straub, P., & Murnighan, J. K. (1995). An experimental investigation of ultimatum games: Information, fairness, expectations, and lowest acceptable offer. Journal of Economic Behavior & Organization, 27, 345-364. doi:10.1016/0167-2681(94)00072-M

Tenbrunsel, A. E., & Messick, D. M. (2004). Ethical fading: The role of self-deception in unethical behavior. Social Justice Research, 17, 223-236. doi:10.1023/B:SORE.0000027411.35832.53

Valenzuela, A., & Srivastava, J. (2012). Role of information asymmetry and situational salience in reducing intergroup bias: The case of ultimatum games. Personality and Social Psychology Bulletin, 38, 1671-1683. doi:10.1177/0146167212458327

Van Swol, L. M., Malhotra, D., & Braun, M. T. (2012). Deception and its detection: Effects of monetary incentives and personal relationship history. Communication Research, 39, 217-238. doi:10.1177/0093650210396868

Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal and hyperpersonal interaction. Communication Research, 23(1), 3-43. doi:10.1177/009365096023001001

Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54, 1063-1070. doi:10.1037/0022-3514.54.6.1063

Whitty, M. T. (2002). Liar, liar! An examination of how open, supportive and honest people are in chat rooms. Computers in Human Behavior, 18, 343-352. doi:10.1016/S0747-5632(01)00059-0

Whitty, M. T., & Carville, S. E. (2008). Would I lie to you? Self-serving lies and other-oriented lies told across different media. Computers in Human Behavior, 24, 1021-1031. doi:10.1016/j.chb.2007.03.004

Yeung, L. N. T., Levine, T. R., & Nishiyama, K. (1999). Information manipulation theory and perceptions of deception in Hong Kong. Communication Reports, 12, 1-11. doi:10.1080/08934219909367703

Zhou, L., Burgoon, J. K., Nunamaker, J. F., Jr., & Twitchell, D. (2004). Automating linguistic-based cues for detecting deception in text-based asynchronous computer-mediated communication. Group Decision and Negotiation, 13, 81-106.

Zuckerman, M., DeFrank, R. S., Hall, J. A., Larrance, D. T., & Rosenthal, R. (1979). Facial and vocal cues of deception and honesty. Journal of Experimental Social Psychology, 15, 378-396. doi:10.1016/0022-1031(79)90045-3

Zuckerman, M., DePaulo, B. M., & Rosenthal, R. (1981). Verbal and nonverbal communication of deception. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 14, pp. 1-59). New York, NY: Academic Press.

Author Biographies

Lyn M. Van Swol (PhD, University of Illinois-Urbana-Champaign) is an associate professor of Communication Science at the University of Wisconsin-Madison. She studies deception, advice utilization, and group decision-making.

Michael T. Braun is a PhD candidate in the Department of Communication Arts at the University of Wisconsin-Madison. His research focuses on communication technology prefer-ence and adoption across the lifespan.

Miranda R. Kolb is a graduate student in Communication Science at the University of Wisconsin-Madison and is currently working on her master's thesis. She is interested in information sharing in groups, ostracism, and persuasion.
