
Chatbot anthropomorphism: Adoption and acceptance in customer service

Final thesis for the Master of Science in Communication Studies

Name Katja Raunio

Student number 2406160

E-mail [email protected]

Master Communication Science

Specialization Technology & Communication

Faculty Behavioural, Management and Social Sciences

Date 27th of April 2021

Supervisor Dr. J. Karreman

Second supervisor Dr. S. van der Graaf


Abstract

Purpose – Robots are becoming more common in customer service. Customer service chatbots are designed to create a better customer experience and to increase satisfaction and engagement. These conversational agents are becoming more advanced due to progress in artificial intelligence, and their appearance and conversational tone can be extremely human-like. Since many firms want to either replace or support their existing customer service with chatbots, it is important to examine how the customer experience can be improved. However, there is a lack of studies concerning how appearance and conversational style influence users’ adoption and acceptance of chatbots. This study aimed to explore how human characteristics in chatbots influence attitudes towards using chatbots, concentrating on the visual appearance (human/robot/logo) and conversational style (formal/informal).

Design and Methodology – The study used an online experiment with a 3x2 between-subjects design, followed by a questionnaire, to explore users’ (N = 339) perceptions of the usefulness, ease of use, helpfulness, competence, and trustworthiness of an e-commerce chatbot in a customer service setting, as well as their attitude towards using chatbots. Additionally, 12 semi-structured interviews were conducted to further explore how users feel about chatbots' visual appearance and conversational style.

Results – The results of the online experiment show that there is no significant effect of human appearance or conversational style on the perceived ease of use, usefulness, helpfulness, competence, trust towards chatbots, or attitude towards using chatbots in the future. The results of the interviews showed that users prefer a human or a robot avatar and the informal conversational style. Emojis are appreciated as they create a friendly atmosphere, but they should not be used in difficult situations. Additionally, the interviews showed that the chatbots do not significantly differ in their perceived ease of use, usefulness, helpfulness, or competence. However, users want to use chatbots for simple interactions in which the bot is competent enough to provide useful assistance. In general, users trust chatbots unless they must share private or sensitive information with them. Furthermore, chatbot users would like to know when they are interacting with a bot instead of a human customer service agent.

Discussion – A customer service chatbot should have an informal, friendly conversational style. Emojis should be used sparingly, and not in serious interactions where the customer might be distressed. Furthermore, a chatbot should not pretend to be a human and should disclose itself as a robot. Moreover, users might be hesitant to share private information with a chatbot, so access to a human customer service agent is recommended. These results can be of particular interest to anyone working with chatbots, including scholars, conversational designers, chatbot developers, and copywriters.

Keywords – Text-based chatbots, visual appearance, conversational style, anthropomorphism,

Technology Acceptance Model

Contents

1 Introduction
2 Theoretical Framework
2.1 Understanding user acceptance
2.2 Technology Acceptance Model
2.2.1 Perceived usefulness
2.2.2 Perceived ease of use
2.2.3 Perceived competence
2.2.4 Perceived helpfulness
2.2.5 Attitude towards using chatbots
2.2.6 Trust in chatbots as a mediator
2.3 Chatbot anthropomorphism as a design feature
2.3.1 Visual appearance
2.3.2 Conversational style
2.4 Hypotheses
2.5 Research model
3 Study 1 – Online experiment
3.1 Methodology
3.1.1 Research design
3.1.2 Stimulus material
Pre-test
3.1.3 Pre-test 1
3.1.4 Pre-test 2
3.1.5 Pre-test 3
3.2 Main study
3.2.1 Procedure
3.2.2 Participants
3.3 Measurements
3.3.1 Perceived usefulness
3.3.2 Perceived ease of use
3.3.3 Perceived helpfulness
3.3.4 Perceived competence
3.3.5 Attitude towards using chatbots
3.3.6 Trustworthiness
3.4 Construct validity and reliability
3.4.1 Factor analysis
4 Results
4.1 Multivariate analysis of variance (MANOVA)
4.2 Main effects
4.2.1 Main effects of visual appearance
4.2.2 Main effects of the conversational style
4.3 Interaction effects
4.4 Trust as a mediator
4.5 Overview of the hypotheses
5 Study 2 – An interview study
5.1 Methodology
5.2 Results of the interviews
6 Discussion
6.1 Theoretical implications
6.2 Practical implications
6.3 Limitations
6.4 Recommendations for future research
7 Conclusion
8 References
9 Appendix 1: Conditions
9.1 Condition 1: Human avatar + formal conversational style
9.2 Condition 2: Human avatar + informal conversational style
9.3 Condition 3: Robot avatar + formal conversational style
9.4 Condition 4: Robot avatar + informal conversational style
9.5 Condition 5: Logo avatar + formal conversational style
9.6 Condition 6: Logo avatar + informal conversational style
10 Appendix 2: Chatbot conversations: Formal and informal style
10.1 Formal
10.2 Informal
11 Appendix 3: Measures
12 Appendix 4: Interview Protocol
13 Appendix 5: Questionnaire
14 Appendix 6: Codebook interviews
15 Appendix 7: Interview results
15.1 Human avatar + formal conversational style
15.2 Human avatar + informal conversational style
15.3 Robot avatar + formal conversational style
15.4 Robot avatar + informal conversational style
15.5 Logo avatar + formal conversational style
15.6 Logo avatar + informal conversational style


1 Introduction

Traditionally, customer service interactions have taken place in direct face-to-face communication between customers and employees. Over the past decade, advances in Artificial Intelligence (AI) technologies

have transformed the way businesses conduct customer service operations. Consequently, chat

interfaces have become an increasingly popular tool to provide customer service in real-time. These

messaging applications are popular among customers and sometimes even preferred over other types of

customer support, such as phone or e-mail (Conversocial, 2017). While human customer service agents

can operate live chats, companies have discovered the potential of chatbots to automate workflows,

boost customer and employee engagement, and improve productivity (NT, 2020).

A chatbot is a computer program that interacts with humans through written text or voice, and usually incorporates some type of avatar (Coniam, 2008). For example, many online operators in the Netherlands use a customer service chatbot, such as the web shop bol.com, as shown in Figure 1.

Figure 1.

Chatbot Billie (n.d.). © Bol.com. Retrieved April 15, 2021, from

https://www.bol.com/nl/klantenservice/index.html. Screenshot by author.


Customer service chatbots are designed to communicate with customers to obtain product details or

assistance, such as solving technical issues (Adam, Wessel, & Benlian, 2020). They are widely used to

substitute human customer service agents; human support agents dedicate a lot of time to answering frequently asked questions, a task that chatbots can easily handle (Cui et al., 2017). Moreover, chatbots

are available 24/7 and reduce personnel costs (Hald, 2018).

Due to their increased popularity, considerable effort has been devoted to improving the

interaction between humans and chatbots. For example, developers have added humanlike elements to

the chatbots’ personality, such as empathy and friendliness (Callcentre Helper, 2020). Furthermore,

progress in AI technologies has allowed chatbot developers to employ various tools to design smarter

chatbots. Moreover, the ease of implementation has boosted the use of chatbots in online customer

service. For example, many businesses offer software that requires no programming skills to create a

chatbot. These platforms provide visual flow builders, drag-and-drop options, and situation-specific

templates, making them simple to use (NT, 2020). However, most of these platforms are limited to

scripted interactions; chatbots are not yet successful enough in mimicking a natural human conversation.

Therefore, most of the current customer service chatbots are used for basic interactions with a limited

range of responses (Adam, Wessel, & Benlian, 2020).

Due to chatbots’ limited capabilities, some users may have negative experiences with chatbots.

As a consequence of an unpleasant interaction with a bot, the user leaves dissatisfied, which in turn,

negatively affects businesses’ customer relationships (Brandtzaeg & Følstad, 2018). Moreover, if the

chatbot does not offer enough value for the customer, it will be left unutilized. Furthermore, ignoring

users’ frustrations can lead to negative perceptions of the service, ultimately leading to customers

perceiving the chatbot as cold and incompetent (Brave & Nass, 2002). For example, users are frustrated

with the bots’ inability to provide a clear response to queries, lack of empathy, and low intelligence

(Smolaks, 2019).

Since customer service chatbots are being implemented more frequently, it is important to

examine how users react to different chatbots. Moreover, users will exhibit new behaviors and

expectations in an online customer service situation (Brandtzaeg & Følstad, 2018). Therefore, for a chatbot to be successful, one should consider which design factors increase user acceptance.

The distant nature of online interactions has urged companies to create chatbots that act like

humans to make customers feel like they are interacting with a traditional customer service agent (Go &

Sundar, 2019). For example, chatbot developers add human characteristics to bots to compensate for

this distance (Penn State, 2019). These design features include manipulating the chatbot’s conversational style

and appearance, usually represented in terms of an avatar, and incorporating human-like conversational

cues into their responses.

Despite the growing popularity of customer service chatbots, there is still a gap in the theoretical knowledge of optimal chatbot design characteristics; it is not entirely clear how, and to what extent, a chatbot’s appearance and conversational style influence users’ acceptance of chatbots.

This research aims to investigate whether two chatbot design characteristics, visual appearance and conversational style, influence users’ perceptions of chatbots in a customer service setting. Designers and

developers would greatly benefit from the users’ insights and perceptions of chatbots. When aware of

how different design characteristics are perceived by the users, practitioners can save resources and time by designing chatbots that lead to satisfied users who want to return to the chatbot. Furthermore, this

study contributes to the theoretical knowledge of chatbot design, especially in the context of acceptance

in the customer service setting. Thus, the following research questions are proposed:


RQ1: To what extent does the visual appearance of a customer service chatbot influence its acceptance?

RQ2: To what extent does the conversational style of a customer service chatbot influence its acceptance?

The extensively applied Technology Acceptance Model (TAM) will serve as the basis for this study. The original TAM variables perceived ease of use, perceived usefulness, and attitude are kept. The model is extended with the additional variables perceived helpfulness and perceived competence.

In the next sections, literature regarding user acceptance of chatbots will be discussed.

Additionally, literature about human-chatbot interactions, chatbot appearance, and conversational style

will be reviewed. Based on the findings, 16 hypotheses will be presented. Later, the research design and

methods will be elaborated, followed by the data analysis and results. The final chapter includes the

discussion, limitations, and the practical and theoretical implications of this study.

2 Theoretical Framework

2.1 Understanding user acceptance

The success of any information technology depends on whether users are going to adopt it or not.

Therefore, understanding the reason why people adopt, accept, and use information technologies is

crucial in developing optimal chatbots (Brandtzaeg & Følstad, 2018). For this reason, developers must

know more about the experience users have with chatbots, and what motivates their future use. Many

chatbots are developed without understanding why and how people use them, resulting in unsatisfied

users (Brandtzaeg & Følstad, 2018). However, customer service is a relatively big part of people’s lives. Thus, it must be recognized whether customers find customer service chatbots useful and valuable; a chatbot

that offers a bad user experience will not be successful in the long term.

It is important to investigate how chatbots can be designed to resonate with users’ needs,

behaviors, and desires (Brandtzaeg & Følstad, 2018). For example, current chatbots often fail because they seldom succeed in unpredictable, open-ended conversations (Adam, Wessel, & Benlian, 2020; Brandtzaeg & Følstad, 2018; Coniam, 2015). To create a successful chatbot, developers need to have in-depth knowledge about people’s motives to use them, how they are perceived, and why people keep or

stop using chatbots. Moreover, it is important to understand the users’ goals and the context of use,

including the tasks they must perform to reach that goal.

2.2 Technology Acceptance Model

The theory of reasoned action (TRA) is the earliest acceptance theory. The model was developed by Icek Ajzen and Martin Fishbein in 1967. TRA states that specific behavior is determined by behavioral intent, which is determined by one’s attitude and subjective norms towards that behavior (Fishbein & Ajzen, as cited in Davis et al., 1989). The central message of TRA is that people make rational decisions

regarding technology use (Davis, 1989).

A model extending from TRA is the Technology Acceptance Model (TAM), developed by Davis in 1986. TAM derives from TRA and posits causal relationships between perceived usefulness (PU), perceived ease of use (PEOU), and the user’s attitudes, intentions, and adoption of technology.

Moreover, users will want a balance between ease of use and performance benefits (Davis, 1989). In

TAM, people’s attitudes towards technology are determined by its ease of use and usefulness.


Consequently, a positive attitude towards a technology positively influences the behavioral intention to

use it.
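In schematic form, the core TAM relationships can be summarized with the following simplified path equations (a sketch based on Davis et al. (1989), not the exact model tested in this thesis; A denotes attitude, BI behavioral intention, X external variables, and the β and γ terms are path coefficients):

PU = β₁·PEOU + γ·X
A = β₂·PU + β₃·PEOU
BI = β₄·A + β₅·PU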

TAM is one of the most extensively reviewed models in the literature. For example, a meta-

analysis by Legris, Ingham & Collerette (2003) reviewed TAM by analyzing 22 published articles from 1980

to 2001 in which the model was applied. Their findings concluded that the model generates statistically

reliable results and is tested empirically. However, the authors suggest that the model should have

additional factors to explain more than 40% of system use. Moreover, since TAM was originally intended

for organizational use, it is recommended that external variables need to be added (Legris, Ingham, &

Collerette, 2003). For example, on top of perceived ease of use and perceived usefulness, chatbot

acceptance has been studied in the context of the perceived competence, trustworthiness, and

helpfulness. Therefore, the model for this study incorporates the original TAM variables perceived

usefulness, perceived ease of use, and attitude, and adds the chatbots’ perceived competence, trust

towards chatbots, and perceived helpfulness as additional variables. The next sections will include a

detailed explanation of each of the variables included in the model of this research, including a

hypothesis.

2.2.1 Perceived usefulness

The perceived usefulness is defined as “The degree to which an individual believes that using a particular

system would enhance his or her job performance.” (Davis, 1989, p. 320). The original definition was

based on the workplace context. However, studies have shown that perceived usefulness plays a role

across technologies and contexts, including chatbots (Zarouali, van den Broeck, Walrave, & Poels, 2018).

Additionally, perceived usefulness is one of the key variables in determining the use of and attitude towards

retailers (Kulviwat et al., 2007; Cheng, Gillenson, & Sherrel, 2002; Chen & Tan, 2004; Zarouali et al.,

2018).

Perceived usefulness is a significant predictor of continuance intention for chatbots (Ashfaq, Yun, Yu, & Loureiro, 2020). After all, customer service chatbots are designed to help customers and provide

them with useful information. Moreover, it has been demonstrated that perceived usefulness is

positively linked with the intention to adopt (Thong, Hong, & Tam, 2006; Venkatesh, 2000), the

continuance of use (Agarwal & Karahanna, 2000), and satisfaction (Limayem et al., 2007). Therefore, the

more benefits users find from using a chatbot, the more satisfied they are with the experience.

Consequently, the likelihood that they continue using chatbots is higher (Oghuma, Libaque-Saenz, Wong, & Chang, 2015).

Chatbots’ anthropomorphic qualities have been noted to increase their perceived usefulness in

the enterprise context. Rietz, Benke, and Maedche (2019) studied how anthropomorphic chatbot

characteristics influence adoption in the workplace. The authors explored the impact of functional and

anthropomorphic chatbot features on employees’ acceptance using Slack, a popular enterprise

collaboration system. The authors concluded that anthropomorphic chatbot design features have a highly

significant effect on perceived usefulness. However, what is less clear is the nature of chatbots’ design

features in the customer service context; only a few studies have examined the relationship between

anthropomorphic design features and perceived usefulness in customer service (Sheehan, Jin, & Gottlieb,

2020), especially in terms of the chatbots’ visual appearance.

2.2.2 Perceived ease of use

The perceived ease of use is correlated with the acceptance of new technologies (Davis, 1989). Therefore,

products that are easy to use will be more likely to be accepted by users (Davis, 1989). Davis (1989)

defines perceived ease of use as “the degree to which an individual believes that using a particular


technology will be free of mental effort” (Davis, 1989, p. 320). Thus, Davis argues that ease of use

indicates technology acceptance. In other words, perceived ease of use can increase the enjoyment of

using an information system. When technology is easy to use, it has a positive effect on users’ sense of efficacy and competence.

Perceived ease of use is often linked with the infrastructure of technology, for example, the

interface of a chatbot (Kasilingam, 2020). In other words, the chatbot must be user-friendly, which lowers

the barrier to entry (Kasilingam, 2020). A study conducted by Kasilingam (2020) identified perceived ease

of use as an important factor affecting chatbot use in the mobile shopping environment.

A study conducted by Sheehan, Jin, and Gottlieb (2020) demonstrated that perceived ease of use

plays a role in increasing adoption intent. However, this relationship was mediated by

anthropomorphism, suggesting that people prefer human-like chatbots as they are easier to use because

they mimic human service agents (Sheehan, Jin, & Gottlieb, 2020). Thus, it can be hypothesized that a

chatbot that behaves like a human would be perceived as easier to use.

2.2.3 Perceived competence

Before the use of chatbots in customer service, customers interacted with human support agents.

Already then, the competence of the support agent was important for customers (Verhagen, van Nes,

Feldberg, & van Dolen, 2014). Furthermore, customers are satisfied with interactions when the

communicator appears to be credible, competent, and conveys expertise (Verhagen et al., 2014).

In the context of chatbots, competence has been identified as the most important factor in

explaining trust in them in customer service (Przegalinska, Ciechanowski, Stroz, Gloor, & Mazurek, 2019;

Nordheim, Følstad, & Bjørkli, 2018; Koh & Sundar, 2010). Since the perceived competence of a customer

service agent has been widely recognized in previous research (Corritore, Kracher, & Wiedenbeck, 2003;

Przegalinska et al., 2019; Følstad, Nordheim, & Bjørkli, 2018; Koh & Sundar, 2010), it is included as a

dependent variable in this study.

Nordheim et al. (2018) studied how the perceived competence of chatbots influences users’ trust

in them. In their study, expertise concerned the users’ perception of the chatbot’s knowledge,

experience, and competence as reflected in the interactive system. The authors identified perceived

competence as the most important factor influencing trust towards customer service chatbots.

Moreover, competence was linked to four categories: the correct answer, interpretation, concrete

answer, and eloquent answer. Correct answer refers to the accuracy and relevance of the information

that the bot provides. Interpretation is linked to the chatbot’s (in)correct interpretation of an answer,

and how it expresses misunderstandings. Concrete answers refer to clear and easily understandable

answers given by the chatbot. Lastly, eloquent answer means that the chatbot sounds professional.

Nordheim et al. (2018) suggest that the expertise of a chatbot is perceived as important because

chatbots do not yet possess natural communication skills. Thus, the chatbot must adequately adapt to the

users’ needs; if the bot misinterprets a request or provides incomplete answers in a style that is not

adapted to the dialogical context, it is perceived as less competent (Luger & Sellen, 2016).

Ciechanowski et al. (2019) studied chatbots’ perceived competence in the context of

anthropomorphism, attempting to investigate the extent to which participants are willing to collaborate

with bots on different anthropomorphic levels. To manipulate anthropomorphism, the authors tested

two chatbots, one with and one without an avatar. The results showed that the less a chatbot was perceived as

human, the less competent it seemed to the participants. Thus, it can be hypothesized that a chatbot that

appears more human would be perceived as more competent by the users.


2.2.4 Perceived helpfulness

Helpfulness has been identified as one of the core tenets of customer service; a customer service

situation that ends with customers getting answers to their questions leads to more positive attitudes

about those services (Coyle, Smith, & Platt, 2012; Walther, Liang, Ganster, Wohn, & Emington, 2012;

Zarouali et al., 2018). Zarouali et al. (2018) define the perceived helpfulness of a chatbot as “the degree

to which the responses of the chatbot are perceived to be relevant, hereby resolving consumers’ need for

information” (Zarouali et al., 2018, p. 493).

It is very important for customers to be able to communicate and get helpful assistance from

companies online; previous studies have established that the helpfulness of a chatbot is imperative to

influencing positive attitudes (Følstad, Nordheim, & Bjørkli, 2018; Zarouali et al., 2018). It is not a surprise

that customers appreciate chatbots that can help them save time or obtain information easily

(Brandtzaeg & Folstad, 2017).

The perceived helpfulness of a customer service chatbot has been noted to increase positive

attitudes towards services (Zarouali et al., 2018). The ease of receiving help and information has been

identified as the most important motivation for using chatbots (Brandtzaeg & Folstad, 2017; Zarouali et

al., 2018). As the perceived helpfulness of a chatbot plays such an important role, it is important to

examine the extent to which chatbot design features influence its perceived helpfulness.

Next to perceived usefulness, the perceived helpfulness of a chatbot has been highlighted to play

a role in determining users’ attitudes towards them (Følstad, Nordheim, & Bjørkli, 2018). A study conducted by

Følstad, Nordheim, and Bjørkli (2018) examined the acceptance of customer service chatbots in the

context of trust. The authors tested four chatbots and measured factors that affect the participants’ trust

in them. The results indicated that the quality of the chatbot’s interpretation of the user’s request and of the advice it gives is one of the most important factors influencing users’ trust in it.

Recent work by Laban and Araujo (2020) focused on users’ perceptions of chatbots in customer

service settings. The authors hypothesized that a chatbot’s perceived anthropomorphism is positively associated with perceiving the agent as more cooperative. They concluded that anthropomorphic

chatbot design features are associated with higher perceptions of cooperation. Cooperation was defined

as a “human personality trait that is embodying qualities such as social tolerance, empathy, helpfulness,

and compassion” (Laban & Araujo, 2020, p. 3). Thus, it can be hypothesized that a chatbot that looks or

converses like a human would be perceived as more helpful than a chatbot that does not resemble a

human.

2.2.5 Attitude towards using chatbots

According to Ajzen and Fishbein (1980), people with favorable attitudes towards technology are more

inclined to perform a particular behavior. Davis, Bagozzi & Warshaw (1989) defined attitude as an

individual’s positive or negative feeling about using technology.

It seems that anthropomorphic design cues in chatbots increase customers’ feeling of social

presence (Go & Sundar, 2019). When a chatbot is perceived as having a social presence, its perceived

homophily is increased. Homophily is defined as “the amount of similarity two people perceive

themselves as having” (Rocca & McCroskey, 1999, p. 309). Consequently, highly homophilic chatbots

play a role in creating favorable attitudes towards them (Go & Sundar, 2019). Furthermore, human-like

cues in chatbots are rated more favorably than agents that do not resemble humans (Koda, 1996). Koda (1996) studied the personification of poker software agents, including the effects of a face and facial expressions, and found that people have more favorable attitudes towards agents with a face.


Additionally, Sundar et al. (2016) show that chatbot dialogue plays an important role in creating favorable attitudes towards them. Bots with high message interactivity (human-like conversation) boost positive attitudes towards chatbots (Go & Sundar, 2019). Thus, based on the findings in the literature, it can be hypothesized that anthropomorphic qualities make customers’ attitudes towards the bot more positive.

2.2.6 Trust in chatbots as a mediator

Trust is present in most economic and social interactions, especially in uncertain situations (Pavlou,

2003). Trust plays a key role in determining the success or failure of online businesses (Lu et al., 2016).

Therefore, it is important to explore the role trust towards chatbots plays in an online customer service

setting, as users’ trust towards chatbots is determined by their trusting beliefs about the agents'

perceived level of competence, benevolence, and integrity (Mayer, Davis, & Schoorman, 1995; McKnight

et al., 2002). The link between anthropomorphism and trust is supported by several studies, indicating

that people tend to trust human-like behavior, such as anthropomorphic appearance and conversational

style (Cassell & Bickmore, 2000; Ho & MacDorman, 2010; Nordheim, Følstad & Bjørkli, 2018). For

example, highly interactive conversations and social presence, which are strongly anthropomorphic traits, elicit trust (Go & Sundar, 2019; Toader et al., 2020). Moreover, trust has been identified as a determinant of perceived ease of use and perceived usefulness (Pavlou, 2003). Thus, it can be hypothesized that trust mediates the effects of the chatbot’s design features on the dependent variables.
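To illustrate what testing such a mediation hypothesis could look like in practice, the sketch below runs the classic Baron and Kenny regression steps in Python. The variable names and the simulated data are hypothetical placeholders, not the analysis reported later in this thesis.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sketch of a Baron & Kenny-style mediation check:
# does trust mediate the effect of a human-like design on perceived usefulness?
rng = np.random.default_rng(42)
n = 339
df = pd.DataFrame({"human_design": rng.integers(0, 2, n)})  # 0 = non-human, 1 = human-like
df["trust"] = 3 + 0.4 * df["human_design"] + rng.normal(0, 0.8, n)
df["usefulness"] = 1 + 0.6 * df["trust"] + rng.normal(0, 0.8, n)

total = smf.ols("usefulness ~ human_design", data=df).fit()           # path c
to_mediator = smf.ols("trust ~ human_design", data=df).fit()          # path a
direct = smf.ols("usefulness ~ human_design + trust", data=df).fit()  # paths b and c'

print("c (total effect):", round(total.params["human_design"], 3))
print("a (effect on mediator):", round(to_mediator.params["human_design"], 3))
print("c' (direct effect):", round(direct.params["human_design"], 3))
# Mediation is suggested when paths a and b are significant and c' shrinks
# relative to c; a bootstrap test of the indirect effect is preferred in practice.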

2.3 Chatbot anthropomorphism as a design feature

Chatbot designers should keep in mind that humans tend to respond to computers in a human-like

manner, even when aware that they are interacting with a computer (Nass & Moon, 2000; Reeves &

Nass, 1996). For example, people tend to act politely and friendly towards chatbots, indicating that

humans apply social interaction rules to computers (Nass, Steuer, & Tauber, 1994). Moreover, people

attribute human characteristics to computers, such as ethnicity, and apply the social rules associated with these categories (Nass & Moon, 2000). For instance, Sproull et al. (1996) discovered that participants applied

personality traits to interfaces with a face and a voice compared to a computer with just a text display.

In the real world, people are good at communicating with other people and can relate to them

(Laurel, 1997). Consequently, humans apply this skill when interacting with inanimate objects by

anthropomorphizing them. Anthropomorphism is defined as “The representation of Gods, nature, or non-human animals, as having human form, or as having human thoughts and intentions” (Oxford Reference,

n.d.). Additionally, anthropomorphism is quite normal in everyday life, such as applying human-like

qualities to objects like houses, cars, and ships (Laurel, 1997).

Different theories exist in the literature regarding chatbot anthropomorphism. Laurel (1997)

states that anthropomorphism benefits human-robot interactions. There are certain tasks that chatbots

are meant to do, and those should be reflected in their design (Laurel, 1997). For example, customer service

chatbots are often used to do repetitive tasks, such as answering FAQs. Therefore, Laurel (1997) suggests

that chatbots should have two anthropomorphic qualities: responsiveness and the capacity to perform

actions. In turn, these qualities can be expressed in terms of character traits. Anthropomorphizing a

chatbot means attributing a character to it; as in traditional drama, characters have traits that are

represented through appearance, sound, and communication style (Laurel, 1997).

Laurel (1997) offers three essential arguments to support anthropomorphizing chatbots in human-robot interactions. First, personalized chatbots help users make assumptions about their


behavior. For example, users have certain expectations of how a customer service agent should behave

and look, and that should be reflected in its design. Second, human-like agents invite the user to an

interaction. Third, the metaphor of the chatbot as a character channels users to perceive it as having agency. Consequently, users pay more attention to its responsiveness, competence, accessibility, and

ability to perform actions (Laurel, 1997).

In contrast to Laurel’s (1997) defense of anthropomorphizing conversational agents, Erickson

(1997) argues that anthropomorphizing robots contradicts users’ need for simple, effective interfaces. Erickson (1997) states that humanizing robots may lead to systems that try to mimic humans too much; an overly emotive, fake humanness may stand in the way of what users need. However, he states that “we may not have much of a choice” (Erickson, 1997, p. 79), as people tend to react to computers as they

would to humans (Erickson, 1997; Reeves & Nass, 1996; Nass & Moon, 2000). Therefore, it is important

to investigate how chatbots with different anthropomorphic levels are perceived, and whether their visual appearance or conversational style contributes to users’ tendency to anthropomorphize chatbots.

Go and Sundar (2019) propose that people tend to evaluate chatbots’ performance based on their pre-existing stereotypes about robots and computers. In other words, when users know that they are interacting with a bot, they base their expectations more heavily on their pre-existing perceptions of

robots and computers. On the other hand, a chatbot with several human-like identity cues is evaluated

based on users’ expectations of other humans. That being said, it is important to consider the visual

aspects of a chatbot. Ultimately, the development of these bots is based on the understanding of the

users’ needs and motivations (Følstad, Nordheim, & Bjørkli, 2018).

2.3.1 Visual appearance

In the natural world, people categorize one another based on various aspects, such as their physical

characteristics (Argyle, 1988). Similarly, as people interact with others online, they create mental models

of each other (Nowak & Biocca, 2003; Reeves & Nass, 1996). Thus, it is likely that the virtual image

influences the categorization of the environment and the medium (Nowak & Biocca, 2003). For example,

when humans are presented with an image, they perceive the people and the environment as more

“real” (Taylor, 2002).

The appearance of the chatbot can be an important feature to consider when designing its

interface (Appel, von der Pütten, Krämer & Gratch, 2012). Appel et al. (2012) suggest emphasizing the

right design of a chatbot since its appearance influences the user’s interaction and perceptions of it.

Moreover, an international study that involved 7000 participants across continents reported that 46% of

consumers prefer chatbots with human-like images; 20% of those would like to see them as an avatar for

a chatbot (Singh, 2017). Thus, by creating chatbot avatars, designers aim to compensate for the lack of

social presence in a virtual environment.

2.3.2 Conversational style

Since many customer service operations are now conducted online rather than in person, research has focused on simulating natural human language in computers. Most of the interactions between humans and chatbots are still text-based. Text-based chatbots are popular due to their ease of

implementation; most of them rely on scripts developed by designers rather than natural language

processing. As these interactions are scripted, chatbot developers must understand which chatbot

language characteristics positively influence users’ perceptions of the bot.


Computer-Mediated Communication (CMC) is a unique field since the communication process of

a written chatbot lacks body language cues, vocal tones, and communicative pauses (Hill, Ford, & Ferrares, 2015). Nevertheless, there is still much to learn about how CMC can meet the expectations that humans have towards interaction with a chatbot. Additionally, both the comprehension and generation

of human language are extremely complex; while computers and humans can communicate with each

other, A.I. scientists have underestimated the complexity of human language for a long time (Hill, Ford &

Ferrares, 2015). Indeed, the biggest hurdle for computers is to understand what words mean and adapt

to the variability of expressions and words (Hill, Ford, & Ferrares, 2015).

A study conducted by Gnewuch, Maedche, and Morana (2017) identified the current issues for

conversational agents in customer service. For example, they found that bots have only a limited

understanding of natural language. Currently, chatbots offer too much generic information unrelated to

the customer’s questions. Furthermore, current bots are not able to hold longer conversations that reach

a specific goal. Moreover, chatbots are not advanced enough to determine the direction of a

conversation; they are often not able to detect and recover from misunderstandings, nor can they ask for

clarification when they do not understand customer inquiries. Moreover, as mentioned earlier, the

authors noted that chatbots often lack traditional characteristics of customer service agents, such as

understanding context-dependent cues (Gnewuch, Maedche, & Morana, 2017).

Natural conversation flow can be enhanced by implementing a Conversational Human Voice

(CHV) (Kelleher, 2009). Scholars have noted that certain aspects of conversational style can affect

chatbot anthropomorphism, such as empathy, informal attitude, personalization, and humor (Liebrecht &

van Hooijdonk, 2018). There are many ways to increase the humanness of a chatbot in the way it

converses through text-based platforms. For example, studies have found that word frequency, response

latency, and conversational style influence the extent to which the chatbot is anthropomorphized (Gnewuch, Morana,

Adam & Maedche, 2018).

The use of CHV allows the bot to use informal speech and be open to dialogue (Liebrecht & van

Hooijdonk, 2020). Liebrecht and van Hooijdonk (2018) have identified three linguistic elements for CHV:

personalization, informal speech, and invitational rhetoric. Personalization refers to the bot’s ability to address users personally. The second element, informal speech, refers to the extent to which the bot uses casual language that differs from corporate language. For example, the bot could use emojis or

interjections (such as “haha”). The third strategy refers to the flow of conversation that creates a mutual

understanding between the user and the bot (Liebrecht & van Hooijdonk, 2018).

2.4 Hypotheses

As described, this research will focus on studying how chatbots’ visual appearance and conversational style influence perceived usefulness, ease of use, competence, helpfulness, trust, and attitude towards chatbots. Based on the described expectations, the hypotheses are defined as shown in Table 1:


Table 1

Overview of the hypotheses

H1a: The chatbot with a human visual appearance will have a more positive effect on the perceived usefulness than a chatbot that is not represented by a human visual appearance.
H1b: The chatbot with a human visual appearance will have a more positive effect on the perceived ease of use than a chatbot that is not represented by a human visual appearance.
H1c: The chatbot with a human visual appearance will have a more positive effect on the perceived competence than a chatbot that is not represented by a human visual appearance.
H1d: The chatbot with a human visual appearance will have a more positive effect on the perceived helpfulness than a chatbot that is not represented by a human visual appearance.
H1e: The chatbot with a human visual appearance will have a more positive effect on the attitude towards using chatbots than a chatbot that is not represented by a human visual appearance.
H1f: The chatbot with a human visual appearance will have a more positive effect on trust towards chatbots than a chatbot that is not represented by a human visual appearance.
H2a: The chatbot with a human-like conversational style will have a more positive effect on the perceived usefulness than a chatbot that uses a technical conversational style.
H2b: The chatbot with a human-like conversational style will have a more positive effect on the perceived ease of use than a chatbot that uses a technical conversational style.
H2c: The chatbot with a human-like conversational style will have a more positive effect on the perceived competence than a chatbot that uses a technical conversational style.
H2d: The chatbot with a human-like conversational style will have a more positive effect on the perceived helpfulness than a chatbot that uses a technical conversational style.
H2e: The chatbot with a human-like conversational style will have a more positive effect on the attitude towards using chatbots than a chatbot that uses a technical conversational style.
H2f: The chatbot with a human-like conversational style will have a more positive effect on trust towards chatbots than a chatbot that uses a technical conversational style.
H3a: The possible effects of human visual appearance and human conversational style on the perceived usefulness will be mediated by trust.
H3b: The possible effects of human visual appearance and human conversational style on the perceived ease of use will be mediated by trust.
H3c: The possible effects of human visual appearance and human conversational style on the perceived helpfulness will be mediated by trust.

H3d: The possible effects of human visual appearance and human conversational style on the perceived competence will be mediated by trust.

2.5 Research model

The following model (Figure 2) serves as the theoretical model to guide the research.

Figure 2.

Research Model

3 Study 1 – Online experiment

3.1 Methodology

3.1.1 Research design

This study tested the research model by conducting a 3 (human avatar, robot avatar, logo avatar) x 2 (formal and informal conversational style) online experiment. During this

experiment, the independent variables were manipulated to test the effects on perceived usefulness,

ease of use, helpfulness, competence, attitude, and trust. By using a 3x2 between-subjects experiment,


participants of the experiment were randomly assigned to one of the six conditions in which the avatar

and conversational style of the chatbot were manipulated. Table 2 shows an overview of the

experimental conditions.

Table 2

Experiment conditions

Condition number | Avatar | Conversational style
1 | Human | Formal
2 | Human | Informal
3 | Robot | Formal
4 | Robot | Informal
5 | Logo | Formal
6 | Logo | Informal

3.1.2 Stimulus material

To test the six conditions, two different chatbot conversational styles were created. Additionally, three different chatbot avatars were developed. The human-like conversational style incorporated the key linguistic elements that are in line with anthropomorphic qualities as suggested by Liebrecht and van Hooijdonk (2019): humor, empathy, emoticons, and an informal style of speech. Moreover, the informal chatbot had a longer response time, imitating the short pause a human takes while typing a response. The other conversational style was more machinelike, including formal, straight-to-the-point

answers without the use of emoticons or colloquialisms. The formal chatbot gave an instant answer to

the users’ questions. Additionally, Appendix 1 shows a graphical presentation of all six conditions.

Appendix 2 shows the full conversations in each of the conditions. The visual appearance of the chatbot

was designed by the author, depicting either a human named Olivia, a robot named Skip, or the logo of the fictional company “Tech Paradise”, as presented in Figure 3.

Figure 3

Chatbot avatars from human (left) to logo (right)

The two conversational styles were manipulated according to the findings from the literature. The formal

conversational style was mechanical, had no response delay, and contained no colloquialisms. In contrast, the

informal conversational style attempted to mimic the way a human would chat, using emojis, slang,


delayed responses, lexical bundles, and active voice. Figure 4 shows an example of the conversational style differences in the conditions.

Figure 4.

Conversational style from formal (first) to informal (second)
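As an illustration of how such scripted style differences can be implemented, the sketch below contrasts a formal and an informal reply to the same question in Python. The wording, the two-second typing delay, and the emoji are illustrative assumptions, not the actual script used in this experiment.

import time

# Hypothetical scripted replies; the wording, delay, and emoji are
# illustrative, not the thesis's actual stimulus material.
REPLIES = {
    "formal": "Your order will be delivered within three business days.",
    "informal": "Good news! Your order should arrive within three days 😊",
}

def reply(style: str) -> str:
    """Return a scripted answer in the requested conversational style."""
    if style == "informal":
        time.sleep(2)  # imitate the typing pause of the informal condition
    return REPLIES[style]

print(reply("formal"))
print(reply("informal"))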

Pre-test

3.1.3 Pre-test 1

A preliminary test was conducted to check the materials. The test was conducted to determine whether the avatar was correctly perceived as a human, robot, or logo. Moreover, the conversational style was tested to see whether the participants could distinguish between the formal and informal styles. At the end of the

test, the participants could write comments about the interaction.

The anthropomorphism of the chatbot’s avatar was measured with a 5-point semantic

differential scale of Bartneck, Kulić, Croft, and Zoghbi (2009). The scale used in the pre-test includes 4

items: Fake/Natural, Machinelike/Humanlike, Unconscious/Conscious, Artificial/Lifelike. One item


(Moving rigidly/moving elegantly) was dropped because the chatbot’s avatar is a static image. The

anthropomorphism in the chatbot’s conversational style was measured with a 5-point semantic

differential scale (Bartneck et al., 2009). The scale includes 5 items: Stagnant/Lively, Mechanical/Organic,

Artificial/Lifelike, Inert/Interactive, and Apathetic/Responsive.

Twenty-five people participated in the pre-test. Each respondent was exposed to one of the six conditions. The results of the preliminary test indicated that the participants could correctly distinguish between the formal (M = 2.52, SD = 0.79) and informal (M = 3.77, SD = 0.65) conversational styles. An independent-samples t-test (t = -4.28, p < .001) showed that the two groups were perceived differently in terms of anthropomorphism. Thus, H0 can be rejected, and it can be concluded that humanlike and machinelike conversational styles are perceived differently. However, there were no statistically significant differences between the appearance group means as determined by a one-way ANOVA (F(2, 20) = 0.106, p = .900). Thus, another pre-test was conducted to explore the causes for these results.
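For reference, the two checks reported above (an independent-samples t-test and a one-way ANOVA) could be reproduced outside SPSS roughly as in the Python sketch below; the score lists are hypothetical placeholders for per-participant scale means, not the actual pre-test data.

from scipy import stats

# Hypothetical per-participant anthropomorphism scale means (1-5 scale);
# real values would come from the pre-test questionnaire export.
formal = [2.4, 2.8, 2.0, 2.6, 2.5, 2.9, 2.2, 2.7]
informal = [3.6, 3.9, 3.5, 4.1, 3.7, 3.8, 3.6, 4.0]

# Independent-samples t-test: formal vs. informal conversational style.
t_stat, p_value = stats.ttest_ind(formal, informal)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# One-way ANOVA across the three avatar groups (human / robot / logo).
human = [3.1, 2.8, 3.3, 2.9]
robot = [3.0, 2.7, 3.2, 3.1]
logo = [3.2, 2.9, 3.0, 3.1]
f_stat, p_value = stats.f_oneway(human, robot, logo)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")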

3.1.4 Pre-test 2

The second pre-test focused on measuring the anthropomorphism of the chatbot’s visual appearance. The participants of the first pre-test did not successfully differentiate between the three groups. Therefore, a different scale was used to measure the chatbot’s visual appearance. The anthropomorphism in the chatbots’ appearance was measured with a 5-point semantic scale from Bartneck et al. (2009). The scale ranges from 1 = “strongly disagree” to 5 = “strongly agree”. The scale includes 7 items, for example, “The impression of the chatbot's picture felt natural”.

Forty-one people participated in the second pre-test. Each respondent was exposed to one of the six conditions. The results of the second pre-test indicated that the participants, again, did not perceive the human (M = 3.01, SD = 0.78), robot (M = 2.95, SD = 0.92), and logo (M = 3.06, SD = 0.81) avatars as having different levels of anthropomorphism. A one-way ANOVA indicated that there were no statistically significant differences between the means of the three groups (F(2, 40) = 0.039, p = .962). To further investigate the results, a third pre-test was conducted for the chatbot avatar.

3.1.5 Pre-test 3

The third pre-test was conducted to determine whether the participants could distinguish between the human, robot, and logo avatars. In a Qualtrics survey, the participants were shown the three different avatar images. After viewing each avatar, the respondents had to indicate whether the avatar showed a human (yes/no), a logo (yes/no), or a robot (yes/no). Thirty-five people took part in the pre-test. Of these, 66.7% correctly indicated that the human avatar represented a human, 82.8% that the robot avatar represented a robot, and 84.6% that the logo avatar represented a logo. Thus, it could be concluded that the participants could correctly differentiate between the different avatar types.

3.2 Main study

3.2.1 Procedure

In the main study, the participants were asked to interact with one of the chatbot conditions. First, the participants had to read a fictive scenario about an online web store that specializes in technology. In the scenario, the participant is considering buying headphones but wants to ask the chatbot some questions first.

The chatbot was embedded in a Qualtrics survey. The participants were presented with seven

questions that they must type to the chatbot. The interaction took approximately five minutes,


Filling in the survey took approximately 15 minutes. First, the participants answered demographic questions related to gender, educational status, and age. After that, the participants had to read the scenario and proceed to the interaction with the chatbot. After the interaction, the participants answered questions about the conversational style, visual appearance, perceived usefulness, ease of use, competence, helpfulness, attitude towards using chatbots, and trust towards chatbots. Finally, the participants could leave their e-mail addresses to volunteer for an interview. The questionnaire can be found in Appendix 5.

The quantitative data file was exported to SPSS and prepared for analysis. After cleaning the data, several statistical analyses were performed, which are explained in more detail later in this paper.

3.2.2 Participants

For the experiment, a total of 429 participants filled in the survey. 89 responses were deleted due to incomplete answers, and one response was deleted because consent was not given, resulting in a total of 339 respondents. The participants were recruited through the online social media channels Facebook, WhatsApp, and LinkedIn. The survey was online from the 5th of November 2020 to the 29th of December 2020.
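A cleaning step like this can also be scripted before the SPSS analyses. The sketch below is a hypothetical pandas equivalent, with made-up file and column names.

    # Minimal sketch of the response cleaning step (hypothetical names).
    import pandas as pd

    raw = pd.read_csv("qualtrics_export.csv")               # 429 raw responses

    complete = raw.dropna(subset=["attitude_1", "trust_1"])  # drop incomplete answers
    consented = complete[complete["consent"] == "Yes"]       # drop negative consent
    print(len(raw), "->", len(consented), "usable responses")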

Every condition had an approximately equal number of males and females. Most of the respondents were aged between 18-24 (66.4%), followed by 25-34 (29.2%). Additionally, the largest group of respondents (44.0%) reported a bachelor's degree as their highest completed education, followed by high school (34.8%), a master's degree (18.3%), and a Ph.D. (2.1%). Table 4 shows the demographics across the six conditions.

Table 4
Demographics across conditions

Human avatar + formal conversational style (N = 58)
  Age (%): 18-25: 65.5; 25-34: 31.0; 35-45: 0.0; 46-54: 3.4; 55-64: 0.0
  Gender (%): Female 79.3; Male 20.7

Human avatar + informal conversational style (N = 56)
  Age (%): 18-25: 14.7; 25-34: 19.2; 35-45: 33.3; 46-54: 50.0; 55-64: 0.0
  Gender (%): Female 73.2; Male 26.8

Robot avatar + formal conversational style (N = 55)
  Age (%): 18-25: 61.8; 25-34: 36.4; 35-45: 1.8; 46-54: 0.0; 55-64: 0.0
  Gender (%): Female 74.5; Male 25.5

Robot avatar + informal conversational style (N = 54)
  Age (%): 18-25: 75.9; 25-34: 20.4; 35-45: 1.9; 46-54: 1.9; 55-64: 0.0
  Gender (%): Female 79.6; Male 20.4

Logo avatar + formal conversational style (N = 57)
  Age (%): 18-25: 70.2; 25-34: 26.3; 35-45: 0.0; 46-54: 3.5; 55-64: 0.0
  Gender (%): Female 66.7; Male 33.3

Logo avatar + informal conversational style (N = 59)
  Age (%): 18-25: 66.1; 25-34: 27.1; 35-45: 3.4; 46-54: 0.0; 55-64: 3.4
  Gender (%): Female 66.1; Male 33.9

3.3 Measurements

In this section, the quantitative and qualitative measurements are described. The online experiment

measured the perceived usefulness, ease of use, helpfulness, and competence of the chatbot.

Additionally, the participants were asked about their trust towards the chatbot, as well as their attitude

towards using chatbots in the future. Moreover, interviews were conducted to discover opinions that

were not apparent from the results of the online experiment. The quantitative measures can be found in

Appendix 3 and the interview protocol can be found in Appendix 4.

3.3.1 Perceived usefulness

The scale for perceived usefulness was adapted from Davis (1989) (α = 0.97) and Scheerder (2018). Perceived usefulness was measured with four items covering the effectiveness and usefulness of the chatbot. The original scale measures PU before using the technology, from 1 (likely) to 7 (unlikely). However, as this study attempted to measure PU after using the bot, the scale was adapted to range from 1 (disagree) to 7 (agree).

3.3.2 Perceived ease of use

Perceived ease of use was measured with four items. The items on the scale measure the effort, time, and complexity of using the chatbot. The scale was adapted from Dabholkar (1994) and Scheerder (2018) (α = 0.86). The scale is a 7-point Likert scale ranging from 1 (disagree) to 7 (agree).


3.3.3 Perceived helpfulness

The perceived helpfulness scale was adapted from Sen and Lerman (2007), and Yin, Bond, and Zhang

(2014). The original scale measured the helpfulness of online product reviews. In this study, the scale was

used to measure aspects of the chatbot's helpfulness during the interaction. The scale is based on 9-point

semantic differential-scaled items (α = 0.85), ranging from 1 (Not helpful at all/not useful at all/not

informative at all) to 9 (Very helpful/useful/informative).

3.3.4 Perceived competence

Perceived competence was measured with six items using a scale adapted from Cho (α = 0.99). The

scale items measured the aspects of the chatbot's competence, proficiency, training, experience, and

knowledge.

3.3.5 Attitude towards using chatbots

Attitude towards using the chatbot was measured with 4 items. The scale was adapted from Dabholkar

(1994), measuring the respondents’ feelings toward using a chatbot to contact a company.

3.3.6 Trustworthiness

The chatbot’s trustworthiness was measured using a 7-point scale from Toader et al. (2019) (α = 0.91).

The items measured aspects of the chatbot’s sincerity, truthfulness, honesty, credibility, reliability, and

overall trust in the chatbot.

3.4 Construct validity and reliability

3.4.1 Factor analysis

To evaluate the research's construct validity, a Principal Component Analysis (PCA) with orthogonal (varimax) rotation was conducted on the 25 items. The Kaiser-Meyer-Olkin measure verified the sampling adequacy for the analysis, KMO = .92, which is 'superb' according to Field (2009). Furthermore, all KMO values for the individual items were above .80, well above the acceptable limit of .5 (Field, 2009). Bartlett's test of sphericity, χ²(210) = 6163.79, p < .001, indicated that correlations between the items were sufficiently large for PCA.

An initial analysis was run to obtain eigenvalues for each component. All the retained components had eigenvalues over Kaiser's criterion of 1 and in combination explained 72.99% of the variance. Components with an eigenvalue over 1 explain the relationships between the items best. Factor loadings below .40 were suppressed and disregarded, as they were considered to have an insignificant effect on a factor (Field, 2009).

The items of Perceived Helpfulness loaded on one factor, as proposed. The same was true for the items of Trust and Competence, so these constructs were not changed. Item 4 of Perceived Ease of Use ("The chatbot is flexible to interact with") loaded under the same construct as the six Competence items and was deleted. Moreover, the ease of use item "Using the chatbot to contact a company takes a lot of effort" showed cross-loading and was deleted from further analysis.

Perceived Usefulness item 1 ("Using the chatbot to contact a company enables me to accomplish my goal more quickly") did not load under any of the constructs and was deleted. Moreover, Usefulness item 2 ("Using the chatbot enhances my effectiveness") did not load on any construct and was deleted. Usefulness item 3 ("Using the chatbot makes it easier to contact a company") was deleted as it loaded on the Helpfulness construct. Furthermore, Usefulness item 4 ("I


find the chatbot useful when contacting a company”) loaded under the same construct as Attitude and

was merged with that construct.

The final factor analysis resulted in 21 items. To ensure reliability, Cronbach's alpha was calculated for all the remaining constructs. Cronbach's alpha was above .70 for every construct, meaning that all the constructs can be considered reliable. The reliability values and the factor analysis can be found in Table 5.

Table 5
Factor analysis with 21 items and 5 constructs

Construct (α)                  Item                                                             Loading
Competence (.96)               I believe the chatbot knew what it was doing                     .71
                               I believe the chatbot is competent                               .74
                               I think the chatbot is proficient                                .71
                               I think the chatbot is trained                                   .79
                               I believe the chatbot is experienced                             .85
                               I believe the chatbot is knowledgeable                           .67
Attitude (.91)                 Using the chatbot is a good idea                                 .79
                               Using the chatbot is a wise idea                                 .78
                               I like the idea of using the chatbot                             .86
                               Using the chatbot would be pleasant                              .82
                               I find the chatbot useful when contacting a company              .59
Trust towards chatbots (.91)   The chatbot seemed sincere during our interaction                .78
                               I felt that the chatbot was honest in our interaction            .87
                               I believe the chatbot was truthful when conversing with me       .86
                               I believe that the chatbot was credible during our conversation  .76
Perceived helpfulness (.96)    The chatbot was useful                                           .79
                               The chatbot was helpful                                          .78
                               The chatbot was informative                                      .79
Perceived ease of use (.88)    Using the chatbot to contact a company is complicated            .89
                               Using the chatbot to contact a company is confusing              .89
                               Using the chatbot to contact a company is confusing              .84

Component             1       2       3       4       5
Explained variance    7.86%   7.28%   4.91%   48.03%  4.91%
Eigenvalue            1.65    1.53    1.03    10.09   1.03
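These validity and reliability checks can be approximated in Python. The sketch below uses the factor_analyzer package (its principal extraction with varimax rotation approximates SPSS's PCA) and pingouin for Cronbach's alpha; the file and column names are hypothetical.

    # Minimal sketch of the construct validity/reliability checks (hypothetical names).
    import pandas as pd
    import pingouin as pg
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

    items = pd.read_csv("survey_items.csv")  # one column per questionnaire item

    chi2, p = calculate_bartlett_sphericity(items)   # Bartlett's test of sphericity
    kmo_per_item, kmo_total = calculate_kmo(items)   # sampling adequacy (KMO)
    print(f"Bartlett chi2 = {chi2:.2f}, p = {p:.3f}; KMO = {kmo_total:.2f}")

    fa = FactorAnalyzer(n_factors=5, rotation="varimax", method="principal")
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)
    print(loadings[loadings.abs() >= 0.40].round(2))  # suppress loadings below .40

    # Cronbach's alpha per construct, e.g. for the six competence items.
    competence = [c for c in items.columns if c.startswith("competence_")]
    alpha, ci = pg.cronbach_alpha(data=items[competence])
    print(f"Competence alpha = {alpha:.2f}")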

4 Results

4.1 Multivariate analysis of variance (MANOVA)

The main effects were tested with a multivariate analysis of variance (MANOVA). To investigate the effects of the chatbot's visual appearance (avatar) and conversational style on the perceived ease of use, helpfulness, competence, and attitude towards using chatbots, Wilks' Lambda (Λ) tests were performed using IBM SPSS Statistics. Before the analysis, it was verified that all the underlying assumptions for performing a MANOVA were met.

The visual appearance did not have a significant effect on the dependent variables (F(8, 666) = 1.163, p = .319; Wilks' Λ = 0.973). Additionally, there was no statistically significant effect of the conversational style on the dependent variables (F(4, 334) = 0.751, p = .558; Wilks' Λ = 0.991). Finally, no significant interaction effect between the avatar and the conversational style was found (F(8, 660) = 1.186, p = .305; Wilks' Λ = 0.972).

Table 6
Results of multivariate analysis of variance

Effect                                      Λ      F       p
Visual appearance                           .973   1.164   .319
Conversational style                        .991   0.751   .558
Visual appearance * Conversational style    .972   1.186   .305
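An equivalent of this MANOVA can be run with statsmodels. The sketch below is a minimal Python analogue of the SPSS analysis, with hypothetical column names for the four dependent variables and the two factors.

    # Minimal sketch of the 3x2 MANOVA (hypothetical file/column names).
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    df = pd.read_csv("main_study.csv")  # avatar: human/robot/logo; style: formal/informal

    mv = MANOVA.from_formula(
        "ease_of_use + helpfulness + competence + attitude ~ avatar * style",
        data=df,
    )
    print(mv.mv_test())  # reports Wilks' lambda, F, and p for each effect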

4.2 Main effects

4.2.1 Main effects of visual appearance

The mean scores and the standard deviations of the dependent variables are displayed in Table 7, showing that visual appearance did not affect the dependent variables.

It was hypothesized that an avatar with a human appearance would affect the perceived ease of use. However, no significant main effect of the visual appearance on the perceived ease of use was found. The difference in mean scores between the human, robot, and logo avatars was not significant (F = 1.673, p = .189). Thus, hypothesis 1b is not supported.

It was also hypothesized that an avatar with a human appearance would have a larger effect on the perceived helpfulness of the chatbot. The difference in the mean scores between the human, robot, and logo avatars was only marginally significant (F = 2.018, p = .05), with the robot avatar having the highest helpfulness mean score. Post hoc comparisons using the Tukey HSD test indicated that the mean score for the human avatar was not significantly different from the robot or the logo avatar. Thus, hypothesis 1d is not supported.
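A post hoc comparison of this kind is available in statsmodels. The sketch below assumes the same hypothetical data frame as the MANOVA example above.

    # Minimal sketch of the Tukey HSD post hoc test (hypothetical column names).
    import pandas as pd
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    df = pd.read_csv("main_study.csv")
    tukey = pairwise_tukeyhsd(endog=df["helpfulness"], groups=df["avatar"], alpha=0.05)
    print(tukey.summary())  # pairwise mean differences with adjusted p-values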

It was hypothesized that an avatar with a human appearance would have the largest effect on the perceived competence of the chatbot. The results yielded no significant effect. The difference in the mean scores between the human avatar (M = 4.83, SD = 1.27), the robot avatar (M = 5.01, SD = 1.15), and the logo avatar (M = 5.00, SD = 1.12) was not significant (F = 0.773, p = .462). Thus, hypothesis 1c is not supported.

Additionally, it was hypothesized that a chatbot with a human avatar would have a larger effect on the attitude towards using chatbots. However, the results showed no significant effect. The difference in the mean scores between the human avatar (M = 5.16, SD = 1.20), the robot avatar (M = 5.36, SD = 1.03), and the logo avatar (M = 5.28, SD = 1.19) was not significant (F = 0.904, p = .406). Thus, hypothesis 1e is not supported.

Lastly, it was hypothesized that a chatbot with a human avatar would have a larger effect on trust towards chatbots. However, the results showed no significant effect. The difference in the mean scores between the human avatar (M = 5.20, SD = 1.18), the robot avatar (M = 5.33, SD = 0.97), and the logo avatar (M = 5.45, SD = 1.12) was not significant (F = 1.592, p = .205). Thus, hypothesis 1f is not supported.

Table 7
Mean and standard deviation values for the main effects of avatar

Dependent variable                Avatar   Mean   SD
Perceived ease of use             Human    3.00   1.55
                                  Robot    2.67   1.44
                                  Logo     2.98   1.50
Perceived helpfulness             Human    5.54   1.29
                                  Robot    5.88   0.99
                                  Logo     5.86   0.99
Perceived competence              Human    4.83   1.27
                                  Robot    5.01   1.15
                                  Logo     5.00   1.12
Attitude towards using chatbots   Human    5.16   1.20
                                  Robot    5.36   1.03
                                  Logo     5.28   1.19
Trust                             Human    5.20   1.18
                                  Robot    5.33   0.97
                                  Logo     5.45   1.12

4.2.2 Main effects of the conversational style

The mean scores and the standard deviations for the main effects of conversational style on the dependent variables are shown in Table 8. The table shows that the conversational style had no significant effect on the dependent variables.

It was hypothesized that an informal conversational tone would have a larger effect on the perceived ease of use of the chatbot (H2b). However, no significant main effect of the conversational style on the perceived ease of use was found. The difference in the mean scores between the formal (M = 2.86, SD = 1.44) and informal (M = 2.92, SD = 1.56) styles was not significant (F = 0.111, p = .739). Thus, hypothesis 2b is not supported.

It was also hypothesized that a chatbot with an informal conversational tone would have a larger effect on the perceived competence of the chatbot. The results showed no significant main effect of the conversational tone on the perceived competence. The difference in the mean scores between the formal and informal styles was not significant (F = 1.195, p = .275). Thus, hypothesis 2c is not supported.

It was hypothesized that a chatbot with an informal conversational tone would have a larger effect on the perceived helpfulness of the chatbot. However, the results yielded no significant main effect of the conversational tone on the perceived helpfulness. The difference in the mean scores between the formal and informal styles was not significant (F = 0.350, p = .554). Thus, hypothesis 2d is not supported.

It was hypothesized that a chatbot with an informal conversational tone would have a larger effect on the attitude towards using chatbots. However, the results showed no significant effect of the conversational tone on the attitude towards using chatbots. The difference in the mean scores between the formal and informal styles was not significant (F = 0.102, p = .750). Thus, hypothesis 2e is not supported.

Lastly, it was hypothesized that a chatbot with an informal conversational tone would have a larger effect on trust towards chatbots. However, the results showed no significant effect. The difference in the mean scores between the formal and informal styles was not significant (F = 0.003, p = .953). Thus, hypothesis 2f is not supported.

Table 8
Mean and standard deviation values for the main effects of conversational style

Dependent variable                Style      Mean   SD
Perceived ease of use             Formal     2.86   1.44
                                  Informal   2.92   1.56
Perceived helpfulness             Formal     5.71   1.05
                                  Informal   5.78   1.16
Perceived competence              Formal     4.88   1.19
                                  Informal   5.02   1.68
Trust                             Formal     5.33   1.0
                                  Informal   5.32   1.1
Attitude towards using chatbots   Formal     5.28   1.03
                                  Informal   5.25   1.25

4.3 Interaction effects

There was no interaction effect between the visual appearance and the conversational style, as shown in Table 6 (F = 1.186, p = .305). The results of the MANOVA thus led to the conclusion that the two independent variables did not interact. Table 9 shows the means and standard deviations for each dependent variable.


Table 9
Mean and standard deviation values for the interaction effects of avatar and conversational style

Dependent variable                Style      Avatar   Mean   SD
Perceived ease of use             Formal     Human    3.11   2.51
                                             Robot    2.55   1.40
                                             Logo     5.86   1.39
                                  Informal   Human    2.90   1.60
                                             Robot    2.55   1.48
                                             Logo     2.91   1.62
Perceived helpfulness             Formal     Human    5.36   1.27
                                             Robot    5.87   0.83
                                             Logo     5.92   0.93
                                  Informal   Human    5.73   1.30
                                             Robot    5.83   1.34
                                             Logo     5.86   1.06
Perceived competence              Formal     Human    4.82   1.31
                                             Robot    4.86   1.22
                                             Logo     4.96   1.96
                                  Informal   Human    4.86   1.24
                                             Robot    5.16   1.08
                                             Logo     5.05   1.82
Attitude towards using chatbots   Formal     Human    5.15   1.05
                                             Robot    5.34   9.92
                                             Logo     5.38   1.11
                                  Informal   Human    5.17   1.34
                                             Robot    5.38   1.46
                                             Logo     5.20   1.27
Trust                             Formal     Human    5.05   1.95
                                             Robot    5.39   1.01
                                             Logo     5.57   0.98
                                  Informal   Human    5.35   1.67
                                             Robot    5.28   0.94
                                             Logo     3.35   1.70


4.4 Trust as a mediator

It was hypothesized that trust towards chatbots would mediate the effects on the dependent variables. A mediator variable is caused by the independent variables (avatar and conversational tone) and explains the relationship between the independent and dependent variables. To investigate the mediating role of trust, the PROCESS macro v3.5 by Andrew F. Hayes (model 4) was used. However, as there was no relationship between the independent variables and the dependent variables, no mediation takes place, meaning that hypotheses 3a to 3e are not supported.
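A simple-mediation analogue of PROCESS model 4 is available in the pingouin package. The sketch below is illustrative only, with hypothetical column names and the avatar factor dummy-coded for a single contrast.

    # Minimal sketch of a simple mediation test, analogous to PROCESS model 4
    # (hypothetical file/column names; avatar dummy-coded for illustration).
    import pandas as pd
    import pingouin as pg

    df = pd.read_csv("main_study.csv")
    df["human_avatar"] = (df["avatar"] == "human").astype(int)

    result = pg.mediation_analysis(
        data=df, x="human_avatar", m="trust", y="attitude", n_boot=5000, seed=42
    )
    print(result.round(3))  # paths a and b, total, direct, and indirect effects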

4.5 Overview of the hypotheses

Table 10

Summary of results of the tested hypotheses

H1a: The chatbot with a human visual appearance will have a more positive effect on the perceived usefulness than a chatbot that is not represented by a human visual appearance. Supported: No
H1b: The chatbot with a human visual appearance will have a more positive effect on the perceived ease of use than a chatbot that is not represented by a human visual appearance. Supported: No
H1c: The chatbot with a human visual appearance will have a more positive effect on the perceived competence than a chatbot that is not represented by a human visual appearance. Supported: No
H1d: The chatbot with a human visual appearance will have a more positive effect on the perceived helpfulness than a chatbot that is not represented by a human visual appearance. Supported: No
H1e: The chatbot with a human visual appearance will have a more positive effect on the attitude towards using chatbots than a chatbot that is not represented by a human visual appearance. Supported: No
H1f: The chatbot with a human visual appearance will have a more positive effect on trust towards chatbots than a chatbot that is not represented by a human visual appearance. Supported: No
H2a: The chatbot with a human-like conversational style will have a more positive effect on the perceived usefulness than a chatbot that uses a technical conversational style. Supported: No
H2b: The chatbot with a human-like conversational style will have a more positive effect on the perceived ease of use than a chatbot that uses a technical conversational style. Supported: No
H2c: The chatbot with a human-like conversational style will have a more positive effect on the perceived competence than a chatbot that uses a technical conversational style. Supported: No
H2d: The chatbot with a human-like conversational style will have a more positive effect on the perceived helpfulness than a chatbot that uses a technical conversational style. Supported: No
H2e: The chatbot with a human-like conversational style will have a more positive effect on the attitude towards using chatbots than a chatbot that uses a technical conversational style. Supported: No
H2f: The chatbot with a human-like conversational style will have a more positive effect on the trust towards chatbots than a chatbot that uses a technical conversational style. Supported: No
H3a: The possible effects of human visual appearance and human conversational style on the perceived usefulness will be mediated by trust in chatbots. Supported: No
H3b: The possible effects of human visual appearance and human conversational style on the perceived ease of use will be mediated by trust in chatbots. Supported: No
H3c: The possible effects of human visual appearance and human conversational style on the perceived competence will be mediated by trust in chatbots. Supported: No
H3d: The possible effects of human visual appearance and human conversational style on the perceived helpfulness will be mediated by trust in chatbots. Supported: No
H3e: The possible effects of human visual appearance and human conversational style on the attitude towards using chatbots will be mediated by trust in chatbots. Supported: No

5 Study 2 – An interview study

5.1 Methodology

12 semi-structured interviews (two per condition) were conducted to further explore users' opinions about the chatbot designed for this study. The interviewees were recruited through the online survey. After the participants finished the online survey, they could provide their e-mail addresses to volunteer as interviewees.

The interviews aimed to reveal additional information about how the participants felt about the different chatbots. The interviews started with an explanation of the purpose of the study. Additionally, the interviewees read the scenario for the interaction with the chatbot, and before the interview, the participants interacted with one of the chatbots. The interviews lasted approximately 15 minutes. The questions were based on the dependent variables and the chatbots' visual appearance and conversational style. Additionally, the interviewees were invited to elaborate on their opinions about the interaction and the specific chatbot.

The interviews were conducted online using Discord or Zoom and were recorded with the interviewees' permission. The audio recordings were transcribed using Microsoft Word, and the transcripts were coded using deductive coding. After coding, 11 categories were created: advantages, disadvantages, visual appearance, conversational style, ease of use, usefulness, helpfulness, competence, attitude towards using chatbots, trust, and transparency. The codebook can be found in Appendix 6.

5.2 Results of the interviews

The results of the interviews follow a similar pattern to the online experiment. The results indicate that the participants did not perceive the different chatbots as substantially different in their usefulness, ease of use, helpfulness, or competence. Additionally, the attitudes towards using chatbots in the future did not differ between the conditions. It had been expected that a chatbot with a human visual appearance and conversational style would result in higher usefulness, ease of use, helpfulness, trust, and attitude towards using chatbots.

Almost all the participants agreed that the chatbot they were using was easy to use and gave helpful answers to their questions. The chatbot with a human avatar and human-like conversational style was preferred slightly more than the other variations; however, these preferences were general and not related to the dependent variables. Nevertheless, the interviews revealed some interesting insights about the chatbot's appearance and conversational style. On top of the pre-defined dependent variables, the interviews yielded additional insights about transparency related to the


chatbot's nature; it seems that users would like the chatbot to disclose itself as a computer. The complete results of the interviews are shown in Appendix 7.

When interacting with the chatbot with a human avatar, the interviewees appreciated the avatar

depicting a human. The human avatar was described as making the interaction feel more personal and

friendly, which was stated as being an important part of customer service: “The avatar is very nice. It’s

like, the fact that it’s a person in it. It makes it more personal. I like the icon that depicts a person. It looks

like I would be talking to an actual person and that’s what I am used to. I like that. I would not want to see

something very mechanical.” [P1, Human/Formal].

However, two respondents stated that a human avatar can mislead the user into thinking that the chatbot is a human customer service agent, making users feel tricked. Moreover, two respondents stated that using an illustrated picture is better than using a photograph of a real human: "Maybe if it is a chatbot and they have a human avatar, maybe I feel a bit tricked." [P2, Human/Formal].

“I prefer this. Especially when it’s a chatbot, I prefer it like this and not a real image. […] Then I would feel

a bit misleaded [sic]. Like, why, why use a real picture when you are not using a human?” [P3,

Human/Informal].

The participants in the robot avatar condition had mostly positive or neutral feelings about the

chatbot’s appearance. They stated that the robot avatar fits because they are talking with a computer

and not a human: “ […] I think it fits it because it was, like, a cartoon-style robot… I thought it was cute”

[P7, Robot/Informal]. One respondent referred to the uncanny feeling that human-like robots can cause:

“I think if something really tries to be human-like so it's a bit uncanny. It's not positive. [..] if I know it's a

robot, I'd rather prefer a robot.” [P5, Robot/Formal]; “I mean, it certainly, it's nice to know when you if

you talk to a bot [to see a picture of a bot]” [P6, Robot/Informal].

The company logo avatar received mixed responses. In this condition, most of the participants would have preferred to interact with a chatbot that has either a robot or a human avatar. With a logo avatar, respondents were not sure who they were talking to: "[…] It would be more pleasant if it was

respondent said that it is important to have the feeling that they are talking to someone: “I would put a

picture of like even if it's just like one of those in Adobe illustrator. Yeah. Flat drawings of a person or a

robot. That would be better, I think, because I feel like I'm talking to something and I don't know, this is

like talking to… I don't know what I'm talking to.” [P11, Logo/Informal].

Another respondent stated that having a logo avatar may be beneficial, especially because they know

they were interacting with a chatbot and not a human: “It doesn't appear to be a human that way,

doesn't want to be human. That's why it's just like it's all linked to the company and not like anything or

anyone in particular. Which in this case, I liked.” [P10, Logo/Formal].

The chatbot's conversational style received varied comments. First, it was clear that the interviewees appreciated that the bot answered quickly, although an instant answer in the formal conversational style felt too quick: "[…] Maybe it was a little bit too quick for me… almost like it read my mind. I would like to have a little bit of like, a quarter of a second…" [P1, Human/Formal]. Moreover, one of the interviewees stated that it is positive that the chatbot uses personal pronouns. The formal conversational style was

also depicted as unenthusiastic and inhuman, making the interaction feel unrealistic. The feelings were

mostly based on the interviewees' pre-existing expectations of an interaction with a customer service

agent: “I noticed was sort of lack of enthusiasm in the bots, the sort of human traits that I missed…I think

it's a part of customer service and such bots… to have a sort of sound more human in the sort sense that if

it sounds too formal and I wouldn’t to talk to customer service like this if I would have talked to a human.


So it doesn't really feel the same. […] “I would like to both be more enthusiastic, have a casual tone, sort

of a re-enforce or tell the customer, let the customer know that it's not a bad question to ask. I would like

the bot sort of asking me, “did this answer your question”? Yeah, and then you can say “yes or no” so the

chatbot knows if it if you need more information or not.” [P9, Logo/Formal].

However, one respondent appreciated that the chatbot with a formal conversational style

responded immediately. Interestingly, it was also apparent that the instant response made it clear that

the user is talking with a chatbot: “I liked it. It was quick. It didn't wanted [sic] to be a human to say so.

Like, it takes time to type the question. No, it's just like, OK, this is the answer. Yeah. Um, which I

appreciate if it's clear that it's chatbot because otherwise, you could be in doubt like somebody typing,

wasn't a case here. It's just like, OK, here's your answer.” [P10, Logo/Formal].

The informal conversational style received mostly positive responses. According to the interviewees, the informal conversational style made the interaction more personal and humanlike. One interviewee liked the humanlike touch of the answers: "I like that it has its like you human-ish touch to the answers. And it was a friendly as well." [P11, Logo/Informal]. Another interviewee appreciated that the chatbot asked follow-up

questions, which made it appear as friendly: “[…]I also noticed that when I typed a question, like after two

questions so I think, they said ‘’Oh, can I help, do you have any other questions, is there anything else I can

help you with…?’’ So it was a useful chatbot. I think the conversation style was very friendly.” [P7.

Robot/Informal].

However, the emojis received mixed responses. One participant stated that the use of emojis

makes the interaction less serious, which is inappropriate in a customer service setting. Interestingly,

these feelings were connected to the fact that the user knew that they were talking with a computer:

“Especially if you know that it's a chatbot… I would find it really like stupid. Yeah, because that I would

think, like, OK, you're not a human. And someone thought this was a good idea, which now it's just like I

think emojis in like serious conversations with companies are always a little bit stupid because emojis for

me are more like a fun thing. And if you're having a problem or a serious question and a company is using

emojis, then I'm like, you're not taking me seriously.” [P10, Logo/Formal].

On the other hand, another participant stated that the use of emojis is appropriate and makes the interaction

fun, especially when the chatbot’s avatar is a robot: “I just think it had a friendly demeanor despite being

robotic because it didn’t speak robotic. I think it was also nice that it used emojis, I thought it made it

more fun and personal and I am a visual person so the emojis made it fun. I liked the conversation with

the robot, I thought it was cute and fun and not traditional boring chatbot” [P7, Robot/Informal].

An interesting new finding regarding chatbot transparency emerged from the interviews. Many of the respondents indicated that they would like to know if they are interacting with a chatbot or a human.

They stated that this would help them to adjust their expectations of the interaction. When it is clear that

they are having a conversation with a computer, they would not expect to get a clear answer to all of

their questions. By being able to adjust their expectations, the interactions would be less frustrating: “I

think it should be obvious, or either you're told or it should be obvious. So I couldn’t mistake it for an

actual human being. Where the interaction is quite different with a human being, so. Either in the name

or in the or in the first message, I would prefer to know that it’s a bot.” [P9, Logo/Formal]. Additionally,

another interviewee stated that pretending that the chatbot is a human customer service agent violates

their trust: “That would make it worse than you're projecting, or pretending to be something that you're

not. And then you're actually playing with that trust, which can go very wrong if people find out.” [P10,

Logo/Formal].


One respondent who interacted with the logo avatar chatbot stated that they liked that the chatbot had no name and was introduced as a "Tech Paradise support agent". They mentioned

that a chatbot with a name elicits “weird” feelings because it is clear that they are talking with a

computer: “I think, sometimes with chatbots, they like to give the chatbot an actual name. But then, it’s

very obvious that you’re talking to a chatbot. I feel like sometimes it’s a bit weird. I like that with this

chatbot, it’s just like the “Tech Paradise” thing, and not like pretending that you’re talking to an actual

person which sometimes feels a bit weird to me when you’re talking with a chatbot.” [P12,

Logo/Informal].

All the interviewees agreed that the chatbot was easy to use. There was no difference between the

different chatbot conditions. When asked about the ease of use, the interviewees often referred to the

user interface, indicating that it is user-friendly and familiar. Additionally, the interviewees recognized the

chatbot avatar in the right corner of the website. One respondent would have preferred the chatbot to display a message indicating that it is available for a discussion. Additionally, the interviewees stated

that using the chatbot would be a viable option to contact a company: “Okay, because in the bottom-

most like the little circle, which for me is recognizable, that that's a chat thing. Perhaps it could be like a

little square or something. Textbox above it with a little arrow, like “Chat with us!” or something.” [P10,

Logo/Formal]. “[…] And I think it’s very easy, if you have a question, to reach out to a chatbot…” [P3,

Human/Informal].

Most of the interviewees agreed that the chatbot appeared to be competent and trained to give the right

answers. The participants mostly appreciated that the chatbot explained a bit more than the question strictly required: "[…] she answered all my questions and yeah, very specific answers. I think that's very nice. So it

was immediately clear what the answer was.” [P3, Human/Informal].

However, almost all the participants did not think the chatbot would be competent enough to answer difficult or personal questions, although these opinions were based on the interviewees' previous experiences with customer service chatbots: "Maybe if I had a different

question…. Then it would rather talk to a human but I think for the questions I had it was sufficient, but if

you’re asking, yeah, questions about delivery times or about store opening hours or if you need or ID, I

think it’s fine, but I think if you have a more complicated issue, like, my package was stolen and something

like that, I would want to talk to a human…” [P7, Robot/Informal].

Moreover, the respondents stated that while using a chatbot is a good idea, they would prefer a chatbot

that can direct them to a human customer support agent. Thus, it seems that while the interviewees had

predominantly positive opinions about a chatbot, they would always prefer to have access to a human:

“Because I’m fine talking to chatbots, but a lot of times I go to chatbots when I want to talk to an actual

person. So if there is something wrong with my order I go to the chatbot and click all the things and if

‘’talk to an agent’’ comes up, then I click ‘’talk to an agent’” [P7, Robot/Informal].

When discussing the interviewees' attitudes towards using chatbots, almost all the respondents

talked unprompted about appropriate questions for a chatbot. Most of the respondents indicated that

chatbots should be used only for FAQs. These opinions were formed because of negative past

experiences with chatbots when asking complicated questions. The respondents indicated that their

interactions with chatbots are mostly positive when they had simple questions: “I would definitely use

that if I have a question like about delivery. Those are very easy, straightforward questions which don't

need human interaction at all because most of the time they're already on a frequently asked questions


page. Yeah. And I think with most of these questions that's the case. It's just nothing special. You just

want your information. And I think it could work very well for that purpose” [P10, Logo/Formal].

Most of the respondents indicated that, in general, they trust chatbots. However, they stated that they

would hesitate to give their personal information to a chatbot. In this case, the interviewees would have

preferred to give their personal details to a human customer service agent: “I do think I'm also a little bit

more cautious when I know when I'm talking to a chatbot than talking to a human. Yeah, I can't really tell

you why. Perhaps also because it isn't like that new of a technology” [P10, Logo/Formal]. Another

interviewee suggested that a chatbot could “prove” itself to be trustworthy: “[…]So maybe the robot

would like… Prove itself by saying, I'm going to send you a text message on your phone since we know

your number from the store. This will give you a code of the code will show that's like a show that we're

legit and we're not trying to scam you out of your details” [P8, Robot/Informal].

6 Discussion

The purpose of this study was to explore how the visual appearance and the conversational style of customer service chatbots influence their perceived usefulness, ease of use, helpfulness, competence, trust, and attitude towards using chatbots. More specifically, two research questions were explored:

RQ1: To what extent does the visual appearance of a customer service chatbot influence its acceptance?

RQ2: To what extent does the conversational style of a customer service chatbot influence its acceptance?

These questions were explored via an online experiment where the avatar of the chatbot was

manipulated on three levels (human, robot, logo) and the conversational style was manipulated on two

levels (formal/informal). It was expected that a chatbot with a humanlike avatar and a humanlike

conversational style would increase its perceived usefulness, ease of use, helpfulness, competence, and

attitude towards using chatbots. It was also expected that trust towards chatbots would mediate this

relationship. However, none of the hypotheses were supported. To elaborate on the possible reason for

these findings, 12 semi-structured online interviews were conducted.

The interviews revealed that across all the chatbots, their perceived usefulness, ease of use,

helpfulness, trust, competence, and attitude towards using chatbots did not have considerable

differences. However, the interviewees presented interesting opinions about the chatbots’

conversational style and appearance, especially linked to trust. Additionally, the interviews revealed that

transparency about the service agent’s nature is an important aspect in an online customer service

setting. In other words, the chatbot should disclose itself as a computer at the beginning of the

conversation.

Even though chatbot transparency was not explored in the first study, the qualitative findings revealed important information about users' need for transparency in an interaction with a chatbot. First, some of the interviewees stated that they would want to know whether they are talking with a human or a computer. While the interviewees said that, in almost any case, they would prefer a chatbot that incorporates humanlike traits, it should be clearly stated that they are not talking with a human.

Some authors advise against disclosing chatbots’ identity (De Cicco, Lima da Costa e Silva, &

Palumbo, 2020) due to users’ tendency to trust computers less than humans. However, Mozafari, Weiger,


and Hammerschmidt (2021) state that not disclosing a chatbot's identity might be problematic, as "This, in

turn, is problematic for service providers, as they want to avoid negative user reactions, but will be

obligated to disclose chatbot identity sooner or later” (p. 2916). Mozafari, Weiger, and Hammerschmidt

(2020) refer to this as the “chatbot disclosure dilemma”. Mozafari and colleagues (2020) note that the

focus of disclosing should shift from “whether to how to disclose chatbot identity” (p. 2916).

In the same light, most of the participants reported feeling uncomfortable when disclosing

personal information (such as their username or phone number) to a chatbot. Additionally, the

interviewees would prefer to discuss private matters with a human customer service agent. The findings

are not surprising, as several studies have noted that people do not fully trust artificial intelligence, even

when delivering better quality service than human agents (Dietvorst, Simmons, & Massey, 2015).

Dietvorst and colleagues (2015) refer to this as algorithm aversion: humans lose confidence in algorithms quicker than in other humans. For example, when a GPS makes a navigation mistake, people lose confidence in the technology faster than they would after a comparable human error.

While the qualitative findings of this study indicated that disclosing the chatbots’ identity is

beneficial for trust and the quality of the interaction, it cannot be concluded that this would be the case

with a bigger sample. Nevertheless, these results raise an interesting opportunity to further explore how

chatbots should disclose themselves in a customer service setting.

During the interviews, the participants revealed that the chatbot (regardless of the condition) came

across as a competent agent who gave clear, detailed, and complete answers. In fact, there were no

differences in the perceived ease of use, usefulness, trust, or helpfulness between the chatbots. Similarly,

the participants had mostly favorable attitudes towards using chatbots in the future to contact a

company. Most frequently, the interviewees' focus was on the content of the answer rather than the

visual appearance or the conversational style.

Most of the interviewees stated that they trust the chatbot to be competent with simple

interactions, such as asking about delivery times. While earlier studies have noted that anthropomorphic

chatbots might increase users’ expectations of their capabilities (Luger & Sellen, 2016) the interviewees

seem to hold realistic expectations of their competence. These findings are similar to those of Følstad and Skjuve (2019), who found that users hold rather realistic expectations of chatbots' capabilities, expecting

them to handle simple questions. Thus, it seems that it is important to focus on designing interactions

that satisfy users’ expectations of the chatbot’s abilities. In other words, the chatbots’ visual appearance

and conversational style do not seem to influence how competent they are perceived.

When users already hold rather realistic expectations of chatbots’ abilities, they might not expect

the visual appearance or conversational style to have enough influence to change their pre-existing

beliefs. Consequently, customer service chatbots’ competence might be largely based on the correct

answers and interpretations of the users’ inquiries.

These findings correlate with those of Nordheim, Følstad, and Bjørkli (2019), who stated that "Chatbot expertise concerns the provision of accurate and relevant information; in short, a correct answer" (p. 325). These findings may again be explained by users' goal orientation: a chatbot's ability to answer simple inquiries seems to be enough to satisfy users (Følstad & Skjuve, 2019). Similarly, the factor

analysis showed that the perceived helpfulness of the chatbot explained nearly half (48.03%) of the

variance, implying that perceived helpfulness plays a big role in users’ perceptions of a chatbot. In fact,

the interviewees agreed that all the chatbots were helpful. The answers were closely related to the

chatbot's competence; when probed, they said that the chatbot was able to answer their questions

sufficiently. Therefore, it seems that users place less importance on how a chatbot conveys information than on what the content of the answer is, as suggested by Følstad and Skjuve (2019).


These findings also correlate with those of Følstad and Brandtzaeg (2017) who studied users’

motivation for using chatbots. The authors found that users prefer chatbots that provide the necessary

help because users have a high preference for efficient, productive interactions with chatbots. They

recommend designing a chatbot that supports productivity by adding social elements to the

conversation, supporting users in finishing their tasks in a manner that is enjoyable and social (Følstad &

Brandtzaeg, 2017). Thus, while users might be largely task-oriented, adding limited human elements to

the interaction does not seem to be counterproductive to their perceived helpfulness.

It may be that users still expect some level of human elements in the interaction, but do not

want the chatbot to appear “fake” or try too hard to be a human. This can be partly explained by the

uncanny valley effect (Ciechanowski et al., 2019). The uncanny valley effect refers to the phenomenon

where a too-humanlike robot elicits unpleasant feelings in humans (Mori, MacDorman, & Kageki, 2012).

When it comes to the visual appearance of the chatbot, the interviewees preferred a chatbot with a human or a robot avatar. The human avatar lent a "personal" feeling to the interaction. Interestingly, one respondent preferred the logo avatar, as it "did not pretend to be a human". Indeed, the participants also stated that a human avatar might trigger the uncanny valley effect, as the chatbot is not a real human. Additionally, the interviewees stated that the robot avatar was suitable for the interaction, as it made clear that the chatbot is a computer; the chatbot responded too quickly to be a human, so the participants knew they were not interacting with one, and the robot avatar matched that impression.

Aside from the avatar, the interviewees appreciated the familiarity of the chatbot’s user

interface (UI), indicating that a familiar chat platform might make the chatbot easy to use. These findings

are also reported by Nordheim, Følstad, and Bjørkli (2019) who found that users reported familiar

chatbot interface and dialogue as easy to use. While not directly linked to the perceived ease of use, Jain,

Kumar, Kota, and Patel (2018) reported similar findings, stating that users prefer using chatbots that use

a familiar, turn-based messaging interface. Thus, it may be that the chatbot's UI plays a bigger role in the perceived ease of use than its avatar and conversational style. When customers look for a chatbot on a webpage, they may pay more attention to the chat interface than to the appearance of the chatbot. For example, if the UI resembles a familiar messaging interface such as Facebook's, users might feel that the chatbot is easier to operate than a chatbot with an unfamiliar interface. This, in turn, could be linked to users' goal-oriented behavior in customer service situations.

These findings correlate with those of Følstad and Skjuve (2019) who studied differences in user

satisfaction between a customer service chatbot with a human and a robot avatar. The authors found a

few interesting things. First, while the participants reported having an increased user experience with a

human avatar chatbot, they were concerned that a human-like avatar could trick users into thinking that

they are talking with a human. Second, the participants did not generally find the chatbot avatars

important to their overall experience. Third, the findings revealed that a robot avatar could help signify

the fact that the users are talking with a computer (Følstad & Skjuve, 2019). As Følstad and Skjuve (2019)

suggest, chatbot appearance might play a smaller role in the customer service context than previously

suggested.

Regarding the conversational style of the chatbot, even though no statistically significant results were

found, the results of the interviews indicated that the informal conversational style was slightly preferred

to the formal conversational style. These findings have been confirmed by other studies (Kim, Lee, &

Gweon, 2019). Thus, it seems that users would like to have an interaction that mimics a conversation


with a human; the interviewees in the formal conversational condition wished the chatbot had been friendlier and more enthusiastic, and had asked follow-up questions.

Drawing upon McCallum and Harrison’s (1985) ideas, “Service encounters are first and foremost

social encounters" (p. 35). These findings are supported by Verhagen et al. (2014), who found that bots with socially oriented conversational styles lead to higher satisfaction than task-oriented bots. Therefore, a chatbot with an informal, humanlike conversational style can foster a shared feeling of human contact between the chatbot and the customer (Verhagen et al., 2014). Thus, while the chatbot's most important task should be problem-solving, humanlike conversational elements can promote positive feelings about the interaction (Verhagen et al., 2014).

While the interviewees appreciated the informal conversational style, it should be noted that

emojis should be approached with caution. The use of emojis can make the conversation fun but is not

appropriate in a serious situation. These findings go in line with those of Derks, Bos, and von Grumbkow

(2007) who studied the influence of emojis in Internet communication, differentiating between task-

oriented and social contexts. The authors found that emojis are used less in task-oriented and negative contexts (Derks, Bos, & von Grumbkow, 2007). The customer service context is often task-oriented (Følstad & Skjuve, 2019) and involves uncertainty (Ribbink, van Riel, Liljander, & Streukens, 2004), so emojis can be considered an unnecessary distraction.

Earlier studies have noted that people prefer chatbots that look and act like a human (Nowak & Rauh, 2006). However, this preference might not translate into more favorable attitudes towards using chatbots. As Erickson (1997) stated, fake humanness may interfere with what matters: getting assistance from the chatbot. Even though Erickson (1997) argued that chatbot designers might have no other choice than to design human-like chatbots, it may be that the most important aspect of human-chatbot interaction is that the chatbot can effectively help the users.

Thus, it seems that while users may be largely task-oriented, it is important that the chatbot incorporates at least some elements of human-like conversation. The interviewees appreciated that the informal chatbot asked further questions and confirmed that it understood the question. Moreover, the use of personal pronouns was noted as a positive quality, as it lent a personal feel to the interaction and made the participant feel heard. Considering chatbot design, these findings might imply that a chatbot's textual content and the type of interaction are more important than its appearance or conversational style. Thus, it might be interesting to focus on what the chatbot writes instead of how it is written. In conclusion, users hold positive attitudes towards using chatbots as long as they are friendly and efficient, do not try too hard to act human, and are limited to simple interactions.

6.1 Theoretical implications

This work contributes to existing knowledge of the design and the role of anthropomorphism in customer service chatbots by providing quantitative and qualitative data. While the quantitative data derived from the online experiment did not yield statistically significant results, the interviews revealed important aspects of how different chatbots are perceived.

First, the interviews revealed that users attribute certain humanlike characteristics to computers, even when they know they are talking with a computer. For example, the participants referred to the chatbot with gender pronouns and described it as "friendly" or "lacking empathy". These findings are consistent with the Computers Are Social Actors (CASA) paradigm (Reeves & Nass, 1996; Nass & Moon, 2000), according to which humans apply social rules to computers even when they know they are interacting with a computer. Thus, users anthropomorphize chatbots, as the CASA paradigm suggests.


However, there is a limit to how much a chatbot should act like a human, as a too humanlike robot might trigger the uncanny valley effect (Ciechanowski et al., 2019).

Second, none of the chatbots was held to particularly higher expectations than the others, regardless of their anthropomorphic qualities. This finding is contrary to previous studies that have suggested that users hold humanlike chatbots to higher expectations (Gnewuch et al., 2017). However, these results reflect those of Følstad and Skjuve (2019), who point out that users hold rather realistic expectations of chatbots' abilities.

Third, it seems that a chatbot avatar plays a smaller role in user acceptance than previously thought, as suggested by Følstad and Skjuve (2019). Thus, users might pay more attention to a chatbot's conversational style and expect it to be humanlike, but only to a certain extent. For example, the informal conversational style was mostly preferred to the formal conversational style. However, the use of emojis received mixed reactions, with some participants claiming that the chatbot seemed fake.

Finally, these findings can be used by scholars when studying the role of anthropomorphism in

chatbot acceptance. The findings of this research showed that a customer service chatbot with a human

avatar and informal, human-like conversational style does not result in higher perceived ease of use,

usefulness, helpfulness, competence, or trust. Additionally, it was found that attitudes towards using

chatbots are not based on their appearance or conversational style.

In conclusion, the acceptance of customer service chatbots is largely based on users’ goal

orientation. This is also reflected in the importance of perceived helpfulness in the online

experiment. Thus, more research should be conducted to find out what kind of chatbot satisfies the need

to efficiently resolve users’ inquiries. For example, further studies could compare different answer styles

and message lengths to explore what is the best way to deliver information to customers.

6.2 Practical implications

The results of the online experiment show that an anthropomorphic visual appearance or conversational style does not significantly influence a chatbot's perceived ease of use, helpfulness, competence, trust, or attitude towards using chatbots.

The interviews shed light on how users feel about chatbots that have different anthropomorphic

levels. The chatbot with either a human or a robot avatar was preferred over a logo avatar, and the

informal conversational style was slightly preferred over the formal conversational style. In other words,

it seems that a figure is more inviting to users than a logo alone. Users appear to be accustomed to the feeling of talking to someone via chat; a company logo is not enough to trigger that feeling, whereas even a robot avatar can. Thus, chatbot designers would benefit from giving their chatbot an avatar that depicts a human or a robot. It is not recommended to use an avatar with only a

company logo and formal conversational style, as it might create a feeling of disconnection in the

customer service situation.

When it comes to the conversational style of a chatbot, chatbot designers should focus on

friendliness, asking follow-up questions, and addressing the user with personal pronouns. For example,

the chatbot can ask “Do you need any more help?”. Additionally, the use of emojis should be limited, and

they should not be used when the user is trying to tackle a serious matter. An overly friendly chatbot

might annoy users, while an overly machinelike approach leaves them in need of human empathy and a

human touch.
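To make these recommendations concrete, the sketch below shows one way a simple template-based bot could apply them. It is an illustration only: the function and templates are hypothetical and are not part of the chatbots tested in this study.

```python
# Minimal sketch (hypothetical, not the bot used in this study): wrapping a
# factual answer in the recommended friendly, informal frame.

def build_reply(answer: str, serious_topic: bool) -> str:
    """Frame an answer with personal pronouns and a follow-up question."""
    # Address the user directly to keep the tone conversational.
    reply = f"Sure, I can help you with that! {answer}"
    # Use emojis sparingly, and never when the user may be distressed.
    if not serious_topic:
        reply += " 🙂"
    # Close with a follow-up question, as recommended above.
    return reply + " Do you need any more help?"

print(build_reply("You can return items within 30 days of purchase.", serious_topic=False))
```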

Another interesting finding from the interviews is that chatbot users would like to know whether

they are talking with a chatbot or with a human customer support agent. Transparency is important

because customers have different expectations from humans and computers. Therefore, chatbot

designers should consider including a clear message in the chatbot’s introduction, indicating that the

Page 39: Chatbot anthropomorphism: Adoption and acceptance in

33

customer is talking with a computer. When customers know that they are not talking with a human, they do not expect perfect answers from the bot.
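As an illustration, such a disclosure can be implemented as a fixed first message in the conversation. The wording below is a hypothetical example (it reuses the bot name Skip from Appendix 2), not the greeting tested in this study.

```python
# Minimal sketch (hypothetical wording): a greeting that discloses the bot's
# non-human identity and offers a route to a human agent.
BOT_NAME = "Skip"

def greeting() -> str:
    # Disclose up front that the agent is automated, and offer a human
    # fallback for private or complex issues.
    return (
        f"Hi, I'm {BOT_NAME}, an automated chatbot (not a human). "
        "I can answer common questions about orders, shipping, and returns. "
        "Type 'agent' at any time to reach a human colleague."
    )

print(greeting())
```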

Lastly, the top priority of chatbots should be resolving user inquiries. Thus, the chatbot should

not converse in a way that might hinder users’ problem-solving goals. For example, an overly chatty chatbot might be perceived as annoying, as customers do not expect a completely humanlike conversation with a chatbot; they expect it to solve their problem.

6.3 Limitations

The first limitation of this study relates to the online experiment. The participants had an interaction

with the chatbot during the experiment. However, they were given a list of questions and they could

not modify the questions in any way. Such an interaction might feel too scripted to elicit meaningful

feelings about the chatbot. As many of the interviewees said, their experience with a chatbot in real

life has been different from the chatbot in the online experiment. In this case, the chatbot could not

give any wrong answers or get confused, because the interaction was programmed to be successful

every time. Notably, even though the results from the online experiment were not significant,

the interviewees liked the interaction with the chatbot regardless of the condition. However, with a

small sample size, caution must be applied, as the findings might not be generalizable.

An additional limitation is the measurements used for this study. While the original scales

were reliable, perceived usefulness and perceived helpfulness were closely related to each other.

Future studies should focus on developing a revised scale that appropriately measures chatbots’ perceived

usefulness.
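One simple way to assess this overlap is to compare the two composites directly. The sketch below computes item-average scores and their correlation; the response data are made up for illustration.

```python
# Minimal sketch with made-up data: checking the overlap between the
# perceived-usefulness (4 items) and perceived-helpfulness (3 items) scales.
import numpy as np

# Hypothetical 7-point Likert responses (rows = respondents, columns = items).
usefulness = np.array([[6, 5, 6, 6], [4, 4, 5, 4], [7, 6, 7, 7], [3, 3, 4, 3], [5, 5, 5, 6]])
helpfulness = np.array([[6, 6, 5], [4, 5, 4], [7, 7, 6], [3, 4, 3], [5, 6, 5]])

# Average each scale's items into one composite score per respondent.
usefulness_score = usefulness.mean(axis=1)
helpfulness_score = helpfulness.mean(axis=1)

# A correlation close to 1 would suggest the two scales tap the same construct.
r = np.corrcoef(usefulness_score, helpfulness_score)[0, 1]
print(f"usefulness-helpfulness correlation: r = {r:.2f}")
```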

The interview sample poses a few limitations. First, only two people were interviewed per

chatbot condition. More interviews should be conducted to better understand how users feel about

different customer service chatbots. Moreover, the interviews were conducted online because of

COVID-19 restrictions. Therefore, it is possible that some nonverbal cues were missed in the interview

process. Lastly, the interviews were coded by one person. Two or more coders would allow for testing

interrater reliability, thus increasing the reliability of the qualitative results.
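For instance, with a second coder, agreement on the coded interview segments could be quantified with Cohen's kappa. The sketch below uses hypothetical codes, not the actual codebook of this study.

```python
# Minimal sketch with hypothetical codes: interrater reliability via Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

# Codes two coders might assign to the same ten interview segments.
coder_a = ["trust", "style", "trust", "ease", "style", "trust", "ease", "style", "trust", "ease"]
coder_b = ["trust", "style", "ease", "ease", "style", "trust", "ease", "trust", "trust", "ease"]

# Kappa corrects raw agreement for chance agreement; values above roughly
# 0.6 are conventionally read as substantial agreement.
print(f"kappa = {cohen_kappa_score(coder_a, coder_b):.2f}")
```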

6.4 Recommendations for future research

To increase the ecological validity of future studies, it is recommended to allow users to phrase a

question in their own words based on a pre-determined scenario. This might be challenging as a chatbot

with more sophisticated AI is required.
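A rough sketch of what this involves is shown below: routing a freely phrased question to one of the scripted FAQ answers. The keyword matching used here is a deliberately simple stand-in for the more sophisticated natural language understanding the recommendation refers to, and the answers are shortened paraphrases of Appendix 2.

```python
# Minimal sketch (illustrative only): routing a free-form question to a
# scripted FAQ answer by keyword overlap; a real system would need proper NLU.

FAQ = {
    frozenset({"track", "tracking"}): "You'll receive a tracking link by email once your order ships.",
    frozenset({"shipping", "delivery", "long"}): "Order before 23:59 and it arrives the next day.",
    frozenset({"return", "refund", "exchange"}): "You can return items to any store within 30 days.",
}
FALLBACK = "I'm not sure I understood. Could you rephrase, or type 'agent' for a human?"

def route(question: str) -> str:
    words = set(question.lower().rstrip("?.!").split())
    # Score each FAQ entry by how many of its keywords occur in the question.
    answer, hits = max(((ans, len(words & kws)) for kws, ans in FAQ.items()),
                       key=lambda pair: pair[1])
    return answer if hits > 0 else FALLBACK

print(route("How can I track my order?"))
```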

Another recommendation for future research would be to include chatbot disclosure as a

variable. This was an important aspect for the interviewees and might significantly change how the

chatbot’s avatar and conversational style are perceived. Additionally, future research could test the same

variables in different contexts. It seems that users are task-oriented when using customer service

chatbots. However, it might be interesting to explore whether the visual appearance and conversational

style matter in other fields, such as marketing or the travel industry.

To get the most out of customer service chatbots, it should be explored how chatbots can handle

different types of inquiries as efficiently as possible. Thus, future research could incorporate task

complexity as one of the variables. It could be that chatbots with different levels of anthropomorphism are better

suited for complex interactions.


7 Conclusion

This research aimed to explore how text-based customer service chatbots’ visual appearance and

conversational style influence their perceived ease of use, helpfulness, and competence. Additionally, it

was investigated how the visual appearance and conversational style influence users’ trust towards

chatbots and attitude towards using chatbots in the future. These effects were explored with mixed-

method research, employing an online experiment and semi-structured interviews.

The findings of the online experiment revealed no significant effects of the chatbots’ visual

appearance or the conversational style on the dependent variables. The interview results revealed that

the users slightly preferred a chatbot with a human or a robot avatar and a humanlike, informal conversational style. However, this preference was not strictly related to their perceived ease of use,

usefulness, helpfulness, competence, trust, or attitude towards using chatbots.

When it comes to the visual appearance of a chatbot, it is recommended to use either a human

or a robot avatar. The participants preferred illustrated pictures to realistic pictures of humans or robots.

Additionally, the conversational style of a chatbot should be friendly and casual, but it should not incorporate too many emojis, as they can make the chatbot appear too humanlike and trigger the uncanny valley effect.

Additionally, chatbot users need transparency. Therefore, it is recommended that the chatbot

discloses itself at the beginning of the interaction. While users do not seem to hold anthropomorphic

chatbots to higher standards, their previous experiences shape their expectations of chatbot

interactions. However, further studies are needed to determine the optimal way for a chatbot to

disclose itself. Lastly, it seems that users prefer chatbots when handling simple, FAQ-type questions and

prefer a human customer service agent for complex situations.

These findings offer interesting opportunities for further research. Based on the discoveries

made in this study, chatbot professionals should focus on creating a chatbot that can hold efficient,

friendly conversations and handle users’ inquiries. While an illustrated human or robot avatar

might not negatively influence the interaction, users focus more on problem-solving than the chatbot’s

appearance.


8 References

Adam, M., Wessel, M., & Benlian, A. (2020). AI-based chatbots in customer service and their effects on user

compliance. Electronic Markets, 1, 1-19. doi:10.1007/s12525-020-00414-7

Agarwal, R., & Karahanna, E. (2000). Time flies when you're having fun: Cognitive absorption and beliefs

about information technology usage. MIS Quarterly, 24(4), 665-694. doi:10.2307/3250951

Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. Englewood Cliffs, NJ:

Prentice-Hall.

Anthropomorphism. (n.d.). Oxford Reference. Retrieved from

https://www.oxfordreference.com/view/10.1093/oi/authority.20110803095416392

Appel, J., Rosenthal-von der Pütten, A.M., Krämer, N., & Gratch, J. (2012). Does humanity matter? Analyzing

the importance of social cues and perceived agency of a computer system for the emergence of

social reactions during human-computer interaction. Advances in Human-Computer Interaction, 2.

doi:10.1155/2012/324694

Argyle, M. (1988). Bodily communication. London: Methuen & Co Ltd.

Ashfaq, M., Yun, J., Yu, S. & Loureiro, S. (2020). I, chatbot: modeling the determinants of users’ satisfaction

and continuance intention of AI-powered service agents. Telematics and Informatics, 54(1), doi:

10.1016/j.tele.2020.101473

Bartneck, C., Kulic, D., Croft, E.A., Zoghbi, S. (2009). Measurement instruments for the anthropomorphism,

animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of

Social Robotics, 1(1), 71-81. doi:10.1007/s12369-008-0001-3

Brandtzaeg, P-B. & Følstad, A. (2018). Chatbots: changing user needs and motivations. Interactions, 25(5),

38-43. doi:10.1145/3236669

Brandtzaeg, P-B. & Følstad, A. (2017). Why people use chatbots. In Kompatsiaris, I., Cave, J., Satsiou, A.,

Carle, G., Passani, A., Kontopoulos, E., Diplaris, S., & McMillan, D. (Eds.), Lecture Notes in Computer

Science: Vol. 10673. Internet Science (pp. 377-392). doi:10.1007/978-3-319-70284-1_30

Brave, S., Nass, C. (2002). Emotion in human-computer interaction. In Jacko, J.A., Sears, A. (Eds.)

The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging

Applications. (pp. 81–96). Mahwah, NJ: L. Erlbaum Associates Inc.

Callcentre Helper (2020, August 24). 8 ways to improve chatbots and boost customer satisfaction. Callcentre

Helper. Retrieved from https://www.callcentrehelper.com/ways-to-improve-chatbots-boost-

satisfaction-124375.htm

Cassell, J., and Bickmore, T. (2000). External manifestations of trustworthiness in the interface,

Communications of the ACM, 43(12), 50–56. doi:10.1145/355112.355123


Chen, L.D. & Tan, J. (2004). Technology adaptation in e-commerce: key determinants of virtual stores

acceptance. European Management Journal, 22(1), 74-86. doi:10.1016/j.emj.2003.11.014

Chen, L-d., Gillenson, M.L., & Sherrell, D.L. (2002). Enticing online consumers: an extended technology

acceptance perspective. Information and Management, 39(8), 705-719. doi:10.1016/S0378-

7206(01)00127-6

Cho, J. (2006). The mechanism of trust and distrust formation and their relational outcomes. Journal of

Retailing, 82(1), 25-35. doi:10.1016/j.jretai.2005.11.002

Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of uncanny valley: an

experimental study of human-chatbot interaction. Future Generation Computer Systems, 92, 539-

548. doi:10.1016/j.future.2018.01.055

Coniam, D. (2008). Evaluating the language resources of chatbots for their potential in English as a second

language. ReCALL, 20(1), 98-116. doi:10.1017/S0958344008000815

Conversocial. (2017). The definitive guide to social, mobile customer service. Retrieved from

https://www.conversocial.com/hubfs/DefinitiveGuide2016.pdf

Corritore, C. L., Kracher, B. and Wiedenbeck, S. (2003) On-line trust: concepts, evolving themes, a model.

International Journal of Human-Computer Studies, 58(6), 737–758. doi:10.1016/S1071-5819(03)00041-7

Coyle, J.R., Smith, T. and Platt, G. (2012), “I'm here to help”: How companies' microblog responses to

consumer problems influence brand perceptions, Journal of Research in Interactive Marketing, 6(1),

27-41. doi:10.1108/17505931211241350

Cui, L., Huang, S., Wei, F., Tan, C., Duan, C., & Zhou, M. (2017). SuperAgent: A customer service chatbot for E-

commerce websites. In Bansal, M. & Ji, H (Eds.), Proceedings of System Demonstrations. Retrieved

from https://www.aclweb.org/anthology/P17-4000.pdf

Dabholkar, P. A. (1994). Incorporating choice into an attitudinal framework: analyzing models of mental

comparison processes. Journal of Consumer Research, 21(1), 100-118. doi:10.1086/209385

Davis, F. (1989). Perceived usefulness, Perceived ease of use, and user acceptance of information

technology. MIS Quarterly, 13(3), 319-340. doi:10.2307/249008

Davis, F.D., Bagozzi, R.P., & Warshaw, P.R. (1989). User acceptance of computer technology: A comparison of

two theoretical models, Management Science, 35(8), 982-1003. Retrieved from

https://www.jstor.org/stable/2632151

De Cicco, R., Lima Da Costa, S.C., & Palumbo, R. (2020). Should a chatbot disclose itself? Implications for an

online conversational retailer. CONVERSATIONS 2020, the 4th International Workshop on Chatbot

Research, an online virtual event hosted by the University of Amsterdam, the Netherlands. Retrieved

from

https://www.researchgate.net/publication/346312693_Should_a_Chatbot_Disclose_Itself_Implicati

ons_for_an_Online_Conversational_Retailer


Derks, D., Bos, A.E.R., & von Grumbkow, J. (2004). Emoticons and social interaction on the Internet: the

importance of social context. Computers in Human Behavior, 23, 842-849. doi:

10.1016/j.chb.2004.11.013

Dietvorst, B., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: people erroneously avoid algorithms

after seeing them err. Journal of Experimental Psychology, 144(1), 114-126. doi:10.1037/xge0000033

Erickson, T. (1997). Designing agents as if people mattered. In Bradshaw, J. M. (Ed.), Software Agents (pp.

79–96). Cambridge, MA: MIT Press.

Field, A. P. (2009). Discovering statistics using SPSS: (and sex and drugs and rock 'n' roll). Los Angeles: SAGE

Publications.

Følstad, A., Nordheim, C. & Bjørkli, C. (2018). What makes users trust a chatbot for customer service? An

exploratory interview study. In Bodrunova S. (Ed.): Lecture Notes in Computer Science: Vol. 11193.

Internet Science (pp. 194-208). doi:10.1007/978-3-030-01437-7_16

Følstad, A., Skjuve, M. (2019, August). Chatbots in customer service: User experience and motivation.

Proceedings of the International Conference on Conversational User Interfaces (CUI 2019), 1-9. doi:

10.1145/3342775.3342784

Gnewuch, U., Maedche, A., & Morana. S. (2017, December). Towards designing cooperative and social

conversational agents for customer service. Proceedings of the International Conference on

Information Systems, 2017. Retrieved from:

https://www.researchgate.net/profile/Ulrich_Gnewuch/publication/320015931_Towards_Designing

_Cooperative_and_Social_Conversational_Agents_for_Customer_Service/links/59c8d1220f7e9bd2c

01a38a5/Towards-Designing-Cooperative-and-Social-Conversational-Agents-for-Customer-

Service.pdf

Gnewuch, U., Morana, S., Adam, M.T.P., & Maedche, A., (2018, June). Faster is not always better:

understanding the effect of dynamic response delays in human-chatbot Interaction. European

Conference on Information Systems (ECIS2018), Portsmouth, United Kingdom, 2018. Retrieved from

https://www.researchgate.net/publication/324949980_Faster_Is_Not_Always_Better_Understandin

g_the_Effect_of_Dynamic_Response_Delays_in_Human-Chatbot_Interaction

Go, E. & Sundar, S. (2019). Humanizing chatbots: the effects of visual, identity and conversational cues on

humanness perceptions. Computers in Human Behavior, 97, 304-316. doi:

10.1016/j.chb.2019.01.020

Hald, G. (2018, February 15). 7 Benefits of using chatbots to drive your business goals. Retrieved from

https://medium.com/botsupply/7-benefits-of-using-chatbots-to-drive-your-businessgoals-

5a3a5e809951.

Hill, J., Ford, W. R., & Farreras, I. G. (2015). Real conversations with artificial intelligence: A comparison

between human–human online conversations and human–chatbot conversations. Computers in

human behavior, 49, 245-250. doi: 10.1016/j.chb.2015.02.026


Ho, C-C., & MacDorman, K.F. (2010). Revisiting the uncanny valley theory: Developing and validating an

alternative to the Godspeed indices. Computers in Human Behavior, 26(6), 1508-1518. doi:

10.1016/j.chb.2010.05.015

Ischen, C., Araujo, T., Voorveld, H., van Noort, G., & Smit, E. (2020). Privacy concerns in chatbot interactions.

In A. Følstad, T. Araujo, S. Papadopoulos, EL-C. Law, O-C. Granmo, E. Luger, & P. B. Brandtzaeg (Eds.),

Chatbot Research and Design (pp. 34-48). Switzerland: Springer International Publishing.

doi:10.1007/978-3-030-39540-7_3

Kasilingam, D.L. (2020). Understanding the attitude and intention to use smartphone chatbots for shopping.

Technology in Society, 62. doi:10.1016/j.techsoc.2020.101280

Kelleher, T. (2009). Conversational voice, communicated commitment, and public relations outcomes in

interactive online communication. Journal of Communication, 59(1), 172-188. doi:10.1111/j.1460-

2466.2008.01410.x

Koda, T. (1996). Agents with faces: a study on the effects of personification of software agents. (Master’s

thesis). Retrieved from

https://www.researchgate.net/publication/37991356_Agents_with_faces_a_study_on_the_effects_

of_personification_of_software_agents

Koh, J. Y. & Sundar, S.S. (2010). Effects of specialization in computers, web sites, and web agents on e-

commerce trust. International Journal of Human Computer Studies, 68(2), 899-912. doi:

10.1016/j.ijhcs.2010.08.002

Kulviwat, S., Bruner II, G. C., Kumar, A., Nasco, S. A., & Clark, T. (2007). Toward a unified theory of consumer

acceptance technology. Psychology & Marketing, 24(12), 1059-1084. doi:10.1002/mar.20196

Laban, G. & Araujo, T. (2020). Working together with conversational agents. In A. Følstad, T. Araujo, S.

Papadopoulos, EL-C. Law, O-C. Granmo, E. Luger, & P. B. Brandtzaeg (Eds.), Chatbot Research and

Design (pp. 34-48). Switzerland: Springer International Publishing. doi:10.1007/978-3-030-39540-

7_15

Laurel, B. (1997). Interface agents: metaphors with character. In Jeffrey M. Bradshaw (Ed.), Software agents

(pp. 67-77.) Cambridge, MA: MIT Press.

Legris, P., Ingham, J., & Collerette, P. (2003). Why do people use information technology? A critical review of

the technology acceptance model. Information and Management, 40(3), 191-204. doi:

10.1016/S0378-7206(01)00143-4

Liebrecht, C., & van Hooijdonk, C. (2020). Creating humanlike chatbots: What chatbot developers could learn

from webcare employees in adopting a conversational human voice. In Følstad, A., Araujo, T.,

Papadopoulos, S., Lai-Chong Law, E., Granmo, O-C., Luger, E., & Brandtzaeg, P.B. (Eds.), Chatbot

Research and Design (pp. 51-64). Switzerland: Springer International Publishing. doi:10.1007/978-3-

030-39540-7


Limayem, M., Hirt, S. G., & Cheung, C. M. (2007). How habit limits the predictive power of intention: The

case of information systems continuance. MIS Quarterly, 31(4), 705-737. doi:10.2307/25148817

Luger, E. & Sellen, A. (2016). “Like having a really bad PA”: The gulf between user expectation and

experience of conversational agents. In Kurosu, M. (Ed.), Lecture Notes in Computer Science: Vol.

9731. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 5286-

5297). doi:10.1145/2858036.2858288

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy

of management review, 20(3), 709-734. doi:10.2307/258792

McCallum, R. J., & Harrison, W. (1985). Interdependence in the service encounter. In J. A. Czepiel, M. R.

Solomon, & C. F. Suprenant (Eds.), The service encounter: Managing employee/customer interaction

in services business (pp. 82-113). Lexington: Lexington Books.

Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics &

Automation Magazine, 19(2), 98-100.

Mozafari, N., Weiger, W., & Hammerschmidt, M. (2020, October). The chatbot disclosure dilemma: Desirable

and undesirable effects of disclosing the non-human identity of chatbots. In Proceedings of the 41st

International Conference on Information Systems. doi:10.24251/HICSS.2021.355

Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social

Issues, 56(1), 81–103. doi:10.1111/0022-4537.00153

Nass, C., Steuer, J., & Tauber, E. R. (1994, April). Computers are social actors. In Proceedings of the SIGCHI

conference on Human factors in computing systems (pp. 72-78). doi:10.1145/191666.191703

Nordheim, C.B., Følstad, A., & Bjørkli, C.A. (2019). An initial model of trust in chatbots

for customer service—findings from a questionnaire study. Interacting with computers, 31(3), 317-

335. doi:10.1093/iwc/iwz022

Nowak, K.L. & Rauh, C. (2006). The influence of avatar on online perceptions of anthropomorphism,

androgyny, credibility, homophily, and attraction. Journal of Computer-Mediated Communication,

11(1), 153-178. doi:10.1111/j.1083-6101.2006.tb00308.x

NT, B. (June 26, 2020). 7 best chatbot building platforms that require no coding. Robotics biz. Retrieved from

https://roboticsbiz.com/6-best-chatbot-building-platforms-that-require-no-coding/

Oghuma, A. P., Libaque-Saenz, C. F., Wong, S. F., & Chang, Y. (2016). An expectation-confirmation model of

continuance intention to use mobile instant messaging. Telematics and Informatics, 33(1), 34-47.

doi:10.1016/j.tele.2015.05.006

Pavlou, P. A. (2003). Consumer acceptance of electronic commerce: Integrating trust and risk with the

technology acceptance model. International Journal of Electronic Commerce, 7(3), 101-134. doi:

10.1080/10864415.2003.11044275


Penn State. (2019, April 18). Adding human touch to unchatty chatbots may lead to bigger letdown.

ScienceDaily. Retrieved November 23, 2020, from

www.sciencedaily.com/releases/2019/04/190418131356.htm

Przegalinska, A., Ciechanowski, L., Stroz, A., Gloor, P., & Mazurek, G. (2019): In bot we trust: A new

methodology of chatbot performance measures. Business Horizons, 62(6), 785-797. doi:

10.1016/j.bushor.2019.08.005

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and

new media like real people and places. New York, NY: Cambridge University Press.

Rietz, T., Maedche, A., & Benke, I. (2019). The impact of anthropomorphic and functional chatbot design

features in enterprise collaboration systems on user acceptance. Proceedings of the Association for

Information Systems, USA, 1642-1656. Retrieved from

https://www.researchgate.net/publication/330728789_The_Impact_of_Anthropomorphic_and_Fun

ctional_Chatbot_Design_Features_in_Enterprise_Collaboration_Systems_on_User_Acceptance

Rocca, K. A., & McCroskey, J. C. (1999). The interrelationship of student ratings of instructors' immediacy,

verbal aggressiveness, homophily, and interpersonal attraction. Communication education, 48(4),

308-316. doi:10.1080/03634529909379181

Scheerder, J.M. (2018). Chatbots versus Instant Messaging: comparing two technologies used for customer

contact service (Master’s thesis). Retrieved from https://scripties.uba.uva.nl/search?id=656089

Sen, S., & Lerman, D. (2007). Why are you telling me this? An examination into negative consumer reviews

on the web. Journal of interactive marketing, 21(4), 76-94. doi:10.1002/dir.20090

Sheehan, B. Jin, H, S., & Gottlieb, U. (2020). Customer service chatbots: Anthropomorphism and adoption.

Journal of Business Research, 115, 14-24. doi:10.1016/j.jbusres.2020.04.030

Singh, N. (2017, September). Why consumers want your chatbot to be more human? Entrepreneur.

Retrieved from: https://www.entrepreneur.com/article/300412

Smolaks, M. (2019, September 19). The antidote to chatbot frustration. AI Business. Retrieved from

https://aibusiness.com/document.asp?doc_id=761059

Sproull, L., Subramani, M., Kiesler, S., Walker, J., & Waters, K. (1996). When the interface is a face. Human-Computer Interaction, 11(2), 97-124. doi:10.1207/s15327051hci1102_1

Taylor, T. L. (2002). Living digitally: Embodiment in virtual worlds. In R. Schroeder (Ed.), The social life of

avatars; Presence and interaction in shared virtual environments (pp. 40–62). London: Springer-

Verlag.

Thong, J.Y.L., Hong, S-J., & Tam, K.Y. (2006). The effects of post-adoption beliefs on the expectation-

confirmation model for information technology continuance. International Journal of Human-

Computer studies, 64(9), 799-810. doi:10.1016/j.ijhcs.2006.05.001


Toader, D.C., Boca, C., Toader, R., Macelaru, M., Toader, C., Ighian, D. & Radulescu, A.T. (2020). The effect of

social presence and chatbot errors on trust. Sustainability, 12(1). doi:10.3390/su12010256

Van Hooijdonk, C., & Liebrecht, C. (2018). Wat vervelend dat de fiets niet is opgeruimd! Heb je een

zaaknummer voor mij?: Conversational human voice in webcare van Nederlandse gemeenten.

Tijdschrift voor Taalbeheersing, 40(1), 45-81. doi:10.5117/TVT2018.1.hooi

Verhagen, T., van Nes, J., Feldberg, F., & van Dolen, W. (2014). Virtual customer service agents: using social

presence and personalization to shape online service encounters. Journal of Computer-Mediated

Communication, 19(3), 529-545. doi:10.1111/jcc4.12066

Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control, intrinsic motivation, and

emotion into the Technology Acceptance Model. Information Systems Research, 11(4), 342-365. doi:

10.1287/isre.11.4.342.11872

Walther, J. B., Liang, Y.J., Ganster, T., Wohn, D. Y., & Emington, J. (2012) Online reviews, helpfulness ratings,

and consumer attitudes: An extension of congruity theory to multiple sources in web 2.0. Journal of

Computer-Mediated Communication, 18(1), 97-112. doi:10.1111/j.1083-6101.2012.01595.x

Yin, D., Bond, S. D., & Zhang, H. (2014). Anxious or angry? Effects of discrete emotions on the perceived

helpfulness of online reviews. MIS quarterly, 38(2), 539-560. doi:10.2307/26634939

Zarouali, B., Van den Broeck, E., Walrave, M., & Poels, K. (2018). Predicting consumer responses to a chatbot

on Facebook. Cyberpsychology, Behavior, and Social Networking, 21(8), 491-497. doi:

10.1089/cyber.2017.0518


9 Appendix 1: Conditions

9.1 Condition 1: Human avatar + Formal conversational style

9.2 Condition 2: Human avatar + Informal conversational style


9.3 Condition 3: Robot avatar + Formal conversational style

9.4 Condition 4: Robot avatar + Informal conversational style


9.5 Condition 5: Logo avatar + Formal conversational style

9.6 Condition 6: Logo avatar + Informal conversational style


10 Appendix 2: Chatbot conversations: Formal and informal style

10.1 Formal Customer Bot

Greetings
Bot: Welcome to Tech Paradise. My name is Skip, the support bot. How can I assist you today?

Tracking order
User: How can I track my order?
Bot: If the order has been sent to you using a trackable service, the delivery is traceable. You will receive a shipping confirmation email from our warehouse once the order is on the way. Click on the tracking link in the email to view the up-to-date tracking.

Shipping time
User: How long is the shipping time?
Bot: If you place your order before 23:59, you will receive the order the next day. Otherwise, the shipping time is 1-2 days.

Pick up an item in a store
User: Can I reserve a product online and pick it up in a store?
Bot: If the product you reserved online is in stock, it can be collected the same day in a Tech Paradise store. When reserving a product, indicate which store you would like to retrieve it from. If your chosen item is out of stock, it is possible to reserve it online and pick it up once it's back in stock.

Return
User: Can I return an item to a store?
Bot: You can return most items purchased at any Tech Paradise store for a full refund within 30 days of purchase.

Exchange
User: Can I exchange my order to a new item?
Bot: It is possible. If you know the product you would like to exchange for, you can reserve it on our website. Please note that it is not possible to exchange products after 30 days of purchase.

Payment
User: When do I have to pay when I pick up an item from a store?
Bot: Payment is only required when the item is acquired in one of Tech Paradise stores.

Identification
User: Do I need an ID with me when I pick up from a store?
Bot: Yes, an identification card is required. Please bring an ID card and the order number to the store when picking up your items.

10.2 Informal Customer Bot

Greetings
Bot: Hello there friend, and welcome to Tech Paradise! I am Olivia, the support-bot. Let me know how I can help.

Tracking order
User: How can I track my order?
Bot: If your order has been sent to you using a trackable service, you can follow the journey to its new home! After purchasing a product, you'll receive an email from us with a tracking link so you can be up to date 24/7.

Shipping time
User: How long is the shipping time?
Bot: Well, if you order before 23:59, you'll receive your order the next day! Otherwise, the shipping time is 1-2 days. Super fast, right?

Pick up an item in a store
User: I ordered an item online. Can I pick it up in the store?
Bot: Absolutely! If the product you want is in stock, you can pick it up the same day in one of our stores. Just indicate which store you want to visit in your shopping cart. If that item happens to be out of stock in your selected store, you can easily reserve it and just pick it up the next day.

Return
User: Can I return it to a Tech Paradise store in my town?
Bot: Sure you can! You can return items to any Tech Paradise physical store for a full refund within 30 days of your purchase. As simple as that!

Exchange
User: Can I exchange my online order to a new item?
Bot: Of course! Know which product you'd like to exchange for? Great! You can reserve it on the website before coming to the store. Remember to do this within 30 days of purchase!

Payment
User: When do I have to pay when I pick up an item from a store?
Bot: There is no need to pay until you pick up the item. This includes products you have reserved online.

Identification
User: Do I need an ID with me when I pick up from a store?
Bot: Yes, make sure you bring an ID card and the order number. Then everything will go smoothly, and you'll receive the items in no time!


11 Appendix 3: Measures

Attitude towards using Dabholkar (1994)

How would you describe your feelings toward using chatbots to contact a company? (1) good-bad (2) pleasant-unpleasant (3) harmful-beneficial (4) favorable-unfavorable

Perceived usefulness

Davis (1989); Scheerder (2018)

Using the chatbot to contact a company enables me to accomplish my goal more quickly

Using the chatbot enhances my effectiveness

Using the chatbot makes it easier to contact a company

I find the chatbot useful when contacting a company

Perceived ease of use

Davis (1989); Scheerder (2018)

Using the chatbot to contact a company is complicated

Using the chatbot to contact a company is confusing

Using the chatbot to contact a company takes a lot of effort

The bot is flexible to interact with

Perceived expertise Toader et al. (2020)

I felt that the chatbot knew what he/she was doing

I believe that the chatbot is competent

I think the chatbot is proficient

In my opinion, the chatbot is trained

In my opinion, the chatbot is experienced

I believe the chatbot is knowledgeable

Perceived helpfulness

Yin, Bond & Zhang (2014); Sen & Lerman (2007)

The chatbot was helpful

The chatbot was useful

The chatbot was informative

Trust Toader et al. (2020)

The chatbot seemed sincere during our interaction.

I felt that the chatbot was honest in our interaction.

I believe the chatbot was truthful when conversing with me.

I believe that the chatbot was credible during our conversation.


12 Appendix 4: Interview Protocol

Introduction

I want to thank you for taking the time to meet with me today. I would like to talk to you about your perceptions of a customer service chatbot. The interview should take a max of 30 minutes, including the interaction with the chatbot. If you agree, I will be recording the session because I do not want to miss any of your comments. All your responses will be kept confidential. This means that your interview responses will not be shared with anyone outside of this research team, and I will ensure that any information I include in the report does not identify you as the respondent. Remember, you do not have to talk about anything you don’t want to. If you feel uncomfortable, or do not want to continue the interview, please let me know, and we can stop. Do you have any questions about what I have just explained?

Theme Questions Remarks

Small talk Hello, how are you doing?

Explain the bot and the scenario

In this scenario, you are talking to a customer service chatbot. You’re considering buying a pair of headphones from an online web store, but you want to ask a couple of questions first.

General feelings about the interaction

How did you feel during the interaction with the chatbot?

Is there something that you particularly liked about the chatbot?

Is there something that you particularly disliked about the chatbot?

Avatar How do you feel about the chatbot’s appearance?

Conversational style How do you feel about the way the chatbot converses with you?

Ease of use Do you feel that the chatbot was easy to use?

Helpfulness Do you feel that the chatbot was helpful? (e.g. it did not give random answers that did not fit the question)

Competence Do you feel that the chatbot knew what it was doing?

Trust Does the chatbot seem trustworthy? You can think of its appearance, the way it converses with you


Attitude towards using chatbots in the future

Would you consider a future interaction with this chatbot? Why/why not?

Additional comments Would you like to add something else?

13 Appendix 5: Questionnaire

Start of Block: Introduction

Q1 WELCOME

You are invited to participate in a web-based online survey on customer service chatbots. This study

explores how chatbots can be designed to offer the best customer service experience for people.

This research project is being conducted by Katja Raunio, a student at the University of Twente,

Netherlands. The survey takes about 10 minutes to complete. Please do not use a mobile phone to fill

this survey.

PARTICIPATION

Your participation in this survey is voluntary. You may refuse to take part in the research or exit the

survey at any time without penalty. You are free to decline to answer any particular question you do not

wish to answer for any reason.

You will receive no direct benefits from participating in this research study. However, your responses

may help us to learn more about the best design for customer service chatbots.

CONFIDENTIALITY

Your survey answers will be sent to Qualtrics where data will be stored in a password-protected

electronic format. Personal information, such as gender and age will be collected for demographic

purposes. We do not collect identifying information such as your name, email address, or IP address.

Therefore, your responses will remain anonymous. No one will be able to identify you or your answers,

and no one will know whether or not you participated in the study.

At the end of the survey, you will be asked if you are interested in participating in an additional interview

online. If you choose to provide contact information such as your phone number or email address, your

survey responses may no longer be anonymous to the researcher. However, no names or identifying

information would be included in any publications or presentations based on these data, and your

responses to this survey will remain confidential.

CONTACT

If you have questions at any time about the study or the procedures, you may contact Katja Raunio


[[email protected]].

Q2 ELECTRONIC CONSENT

Please select your choice below. You may print a copy of this consent form for your records. Clicking on the “Agree” button indicates that:

· You have read the above information

· You voluntarily agree to participate

· You are 18 years of age or older

o Agree (1)

o Disagree (2)

End of Block: Introduction

Start of Block: Before you start

Q3

The aim of this study

This study aims to explore how the design of a customer service chatbot influences

users' perceptions of them.

What is a chatbot?

A chatbot is an Artificial Intelligence (AI) software application that can simulate a conversation with

people.

In online customer service, chatbots often work alongside human customer service agents, answering

peoples' questions, and addressing their concerns.

Procedure

In this survey, you will have a short interaction with a chatbot. You will ask the chatbot six questions.


After the interaction, you will be presented with two additional questions about the chatbot's picture

and its conversational style.

On the next page, you will be presented with a scenario. Please read it carefully before proceeding to the

interaction with the chatbot.

End of Block: Before you start

Start of Block: Demographics

Q33 What is your gender?

o Male (1)

o Female (2)

o Other (4)


Q34 What is your highest level of obtained education?

o Less than high school (1)

o Vocational education (7)

o High school (2)

o Bachelor (3)

o Master (4)

o Doctorate (PhD) (5)

o Other (6)

Q35 What is your age?

o 18-24 (2)

o 25-34 (3)

o 35-45 (4)

o 46-54 (7)

o 55-64 (5)

o 65+ (6)


Q37 Proceed to the scenario

o Proceed (1)

End of Block: Demographics

Start of Block: Scenario

Q5 The scenario Imagine that you are considering purchasing an item from an electronics webshop.

Before making the final decision, you want to ask a few questions regarding the order, delivery, and the

store's return policy. You decide to head to the store's website and notice that a chatbot is available to

help you.

Q6 I have read the scenario

o Proceed to the interaction with the chatbot (1)

End of Block: Scenario

Start of Block: H/F

Q7 window.sntchChat.Init(113285) Interaction with the bot

The chatbot will appear to the right side of the screen.

Press the chatbot's image to start the conversation.

Then, ask the chatbot the following questions in the order they are presented in the list below.

Questions

How can I track my order?

How long does the delivery take?

I want to order an item online and pick it up in the store.

Can I do that?

Can I exchange or return online purchases in a store?


When do I have to pay when I pick up an item from a store?

Do I need an ID with me when I pick up an item from a store?

Q38 Proceed to questions

o Proceed (1)

End of Block: H/F

Start of Block: H/IF

Q8 window.sntchChat.Init(113281) Interaction with the bot

The chatbot will appear to the right side of the screen.

Press the chatbot's image to start the conversation.

Then, ask the chatbot the following questions in the order they are presented in the list below.

Questions

How can I track my order?

How long does the delivery take?

I want to order an item online and pick it up in the store.

Can I do that?

Can I exchange or return online purchases in a store?

When do I have to pay when I pick up an item from a store?

Do I need an ID with me when I pick up an item from a store?

Q39 Proceed to questions

o Proceed (1)

End of Block: H/IF


Start of Block: R/F

Q9 window.sntchChat.Init(113729) Interaction with the bot

The chatbot will appear to the right side of the screen.

Press the chatbot's image to start the conversation.

Then, ask the chatbot the following questions in the order they are presented in the list below.

Questions

How can I track my order?

How long does the delivery take?

I want to order an item online and pick it up in the store.

Can I do that?

Can I exchange or return online purchases in a store?

When do I have to pay when I pick up an item from a store?

Do I need an ID with me when I pick up an item from a store?

Q40 Proceed to questions

o Proceed (1)

End of Block: R/F

Start of Block: R/IF

Q10 window.sntchChat.Init(113720) Interaction with the bot

The chatbot will appear to the right side of the screen.

Press the chatbot's image to start the conversation.

Then, ask the chatbot the following questions in the order they are presented in the list below.

Questions

How can I track my order?

How long does the delivery take?

I want to order an item online and pick it up in the store.

Can I do that?

Can I exchange or return online purchases in a store?

When do I have to pay when I pick up an item from a store?

Do I need an ID with me when I pick up an item from a store?


Q41 Proceed to questions

o Proceed (1)

End of Block: R/IF

Start of Block: L/F

Q11 window.sntchChat.Init(133186) Interaction with the bot

The chatbot will appear to the right side of the screen.

Press the chatbot's image to start the conversation.

Then, ask the chatbot the following questions in the order they are presented in the list below.

Questions

How can I track my order?

How long does the delivery take?

I want to order an item online and pick it up in the store.

Can I do that?

Can I exchange or return online purchases in a store?

When do I have to pay when I pick up an item from a store?

Do I need an ID with me when I pick up an item from a store?

Q42 Proceed to questions

o Proceed (1)

End of Block: L/F

Start of Block: L/IF


Q12 window.sntchChat.Init(133188) Interaction with the bot

The chatbot will appear to the right side of the screen.

Press the chatbot's image to start the conversation.

Then, ask the chatbot the following questions in the order they are presented in the list below.

Questions

How can I track my order?

How long does the delivery take?

I want to order an item online and pick it up in the store.

Can I do that?

Can I exchange or return online purchases in a store?

When do I have to pay when I pick up an item from a store?

Do I need an ID with me when I pick up an item from a store?

Q43 Proceed to questions

o Proceed (1)

End of Block: L/IF

Start of Block: Overall appearance


Q13 When I think about the chatbot's conversational style, I think it felt... (rated on a 7-point scale from 1 = Strongly disagree to 7 = Strongly agree)

(1) Alive
(2) Lively
(3) Natural
(4) Interactive
(5) Humanlike
(6) Lifelike

End of Block: Overall appearance

Start of Block: Conversational style


Q14 When I look at the chatbot's picture, I think it felt... (rated on a 7-point scale from 1 = Strongly disagree to 7 = Strongly agree)

(1) Alive
(2) Lively
(3) Natural
(4) Interactive
(5) Humanlike
(6) Lifelike

End of Block: Conversational style

Start of Block: Perceived usefulness


Q15 Do you think the chatbot was useful? (rated on a 7-point scale from 1 = Strongly disagree to 7 = Strongly agree)

(1) Using the chatbot to contact a company enables me to accomplish my goal more quickly
(2) Using the chatbot enhances my effectiveness
(3) Using the chatbot makes it easier to contact a company
(4) I find the chatbot useful when contacting a company

End of Block: Perceived usefulness

Start of Block: Perceived ease of use


Q16 Do you think the chatbot was easy to use? (rated on a 7-point scale from 1 = Strongly disagree to 7 = Strongly agree)

(1) Using the chatbot to contact a company is complicated
(2) Using the chatbot to contact a company is confusing
(3) Using the chatbot to contact a company takes a lot of effort
(4) The chatbot is flexible to interact with

End of Block: Perceived ease of use

Start of Block: Perceived expertise


Q17 Do you think the chatbot was competent? (rated on a 7-point scale from 1 = Strongly disagree to 7 = Strongly agree)

(1) I felt that the chatbot knew what it was doing
(2) I believe that the chatbot is competent
(3) I think the chatbot is proficient
(4) In my opinion, the chatbot is trained
(5) In my opinion, the chatbot is experienced
(6) I believe the chatbot is knowledgeable

End of Block: Perceived expertise

Start of Block: Perceived helpfulness


Q18 Do you think the chatbot was helpful? (rated on a 7-point scale from 1 = Strongly disagree to 7 = Strongly agree)

(1) The chatbot was helpful
(2) The chatbot was useful
(3) The chatbot was informative

End of Block: Perceived helpfulness

Start of Block: Attitude


Q19 How do you feel about using a chatbot in the future? (rated on a 7-point scale from 1 = Strongly disagree to 7 = Strongly agree)

(1) Using the chatbot is a good idea
(3) Using the chatbot is a wise idea
(4) I like the idea of using the chatbot
(5) Using the chatbot would be pleasant

End of Block: Attitude

Start of Block: Trust


Q20 Do you feel like the chatbot was trustworthy? (rated on a 7-point scale from 1 = Strongly disagree to 7 = Strongly agree)

(1) The chatbot seemed sincere during our interaction
(2) I felt that the chatbot was honest in our interaction
(3) I believe the chatbot was truthful when conversing with me
(4) I believe that the chatbot was credible during our conversation

End of Block: Trust

Start of Block: Interview request

Q21 If you would like to participate in an online interview about your interaction with the chatbot, please

leave your email address below.

________________________________________________________________

End of Block: Interview request


14 Appendix 6: Codebook interviews

15 Appendix 7: Interview results

15.1 Human avatar + formal conversational style

Results H/F

General feelings How did you feel during the interaction with the chatbot?

P1 - Very quick, detailed - Almost like she read my mind

P2

- Nice design - Easy to use - Replies fast

Likes Is there something that you particularly liked about the chatbot?

P1 - It was very quick and very detailed

P2

- Answers were very specific, immediately clear what the answer was

Dislikes Is there something that you particularly disliked about the chatbot?

P1 - I would like to have more time [for the bot to answer], like quarter of a second

P2

- Nothing, but opinion might be different if could ask questions with own words

Appearance How do you feel about the chatbot’s appearance?

P1 - Like the fact that it has a person, feels more personal and comfortable

- Makes the interaction feel more personal

P2 - It’s nice, it’s realistic to use an illustrated avatar instead of a real person

Conversational style How do you feel about the way the chatbot converses with you?

P1 - She is too quick - Gives more than just an answer to the

question, has a real conversation - Does not only have buttons, or keywords - She is personal, and not direct, she gives

more than just instructions

P2 - It’s very clear - It was clear that it was a bot and not a

human

Ease of use Do you think the chatbot was easy to use?

- It was easy to use, but it would be useful to get access to more information


- Yet, it was immediately clear how to use the bot

Helpfulness Do you feel that the chatbot was helpful during the interaction?

P1 - I like the answers, but I would like to see some links where I can get additional information if I want it

P2 - Gave an answer to all the question, answer was immediately clear

Competence Do you feel that the chatbot was competent when giving you answers to your questions?

P1 - It was very competent. The answers were well organized and clear

P2 - Yes, it was very specific

Trust towards chatbots Do you trust this chatbot?

P1 - Yeah, when I don’t know where to look for things.

P2 - Yeah, it's trustworthy. But the questions I ask the chatbot and yeah, I don't know how it is when I use another question

Attitude towards using chatbots Would you consider a future interaction with this chatbot?

P1 - Yes, for simple questions. If it’s more personal [unique] situation, I would like to talk to a human

P2 - Yeah, I’m very positive. But again, I don't know how it is with other questions or maybe considerations, I will have, but for now it's very positive.

15.2 Human avatar + informal conversational style

Results H/IF

General feelings How did you feel during the interaction with the chatbot?

P3 - She was really fast in replying and I really liked it.

P4

- I like the information's nice

Likes Is there something that you particularly liked about the chatbot?

P3 - It was very quick and very detailed

P4

- I also I like to think that in the thing itself, in the window itself, it says the timestamps

Dislikes Is there something that you particularly disliked about the chatbot?


P3 - Well, yeah, her replies were that fast so of course you don’t know it’s a human…

P4

- No, nothing

Appearance How do you feel about the chatbot’s appearance?

P3 - I prefer this. Especially when it’s a chatbot, I prefer it like this and not a real image.

- Like, why, why use a real picture when

you are not using a human?

P4 - You instantly know it’s a bot, which can be bad

Conversational style How do you feel about the way the chatbot converses with you?

P3 - I usually like it when it’s taking a bit

longer

P4 - I guess it's good because the responses I

would think the responses are made by a

person and they've put some effort into

trying to make the bot sophisticated and

as user friendly as possible.

Ease of use Do you think the chatbot was easy to use?

P3 - It was very easy to find, type, and ask more questions… And I think it’s very easy, if you have a question, to reach out to a chatbot…

P4 - Yes, it was

Helpfulness Do you feel that the chatbot was helpful during the interaction?

P3 - Yes, I’d like to give him some details and let a human take over

P4 - It didn’t persuade me to change to bots

Competence Do you feel that the chatbot was competent when giving you answers to your questions?

P3 - Yeah, absolutely. It… she was, she answered all my questions and yeah, very specific answers. I think that’s very nice.

P4 - It can’t probably help with more complicated questions

Trust towards chatbots Do you trust this bot?

P3 - I don’t actually know who I trust more…[human or a robot] I think, both have benefits and downsides. Still, I would like to know. I think both can handle it the right way. Of course, humans can make mistakes more often.

P4 - Probably not. It's a bot that… For every question they ask, even simple ones, you


know, when you…when you go online and you enter into a room and somebody starts chatting with you and it's obviously a bot. I mean, that's bad, because, you know instantly that it’s a bot.

Attitude towards using chatbots Would you consider a future interaction with this chatbot?

P3 - Yes, definitely

P4 - Probably I would search for myself because I'm used to it.

- If if the bots were as sophisticated as

[other bots]. Yeah. Then I would

reconsider. Like, seriously reconsider.

15.3 Robot avatar + formal conversational style

Results R/F

General feelings How did you feel during the interaction with the chatbot?

P5 - General feelings. Yeah. Um, I think it felt like a bot.

P6

- Responses were a bit quick

Likes Is there something that you particularly liked about the chatbot?

P5 - it was quick responding and didn't take long.

P6

- I think it was a pretty normal bot, so I’m

quite neutral about it.

Dislikes Is there something that you particularly disliked about the chatbot?

P5 - Maybe it could have been more elaborate on the answers

- […] For example, like could to give any more shipping options or could have given more payment options.

- So just instead of being very razor accurate, it should be more broad.

P6

- [the quick responses] kind of shows or makes it feel like it doesn't really think about what you're asking. That usually means that the bot is not really, um, intelligent


Appearance How do you feel about the chatbot’s appearance?

P5 - You know, it's it's a robot dude. OK, if I if I

know it's a robot, I'd rather prefer a

robot.

- I think I think if something really tries to

be human, like so it's a bit uncanny. It's

not not positive. I think so in this case it

was a robot.

P6 - It looks like a bot to me. I mean, it’s certainly…, it's nice to know when you if you talk to a bot

Conversational style How do you feel about the way the chatbot converses with you?

P5 - I think things like response time was

roboty. I think that's appropriate.

- So it was a business orientated, I think. It

didn’t chit chat

- […]I don't go to a customer service to chit

chat. I want answers to my stuff.

- [emojis] I don't think that adds any value

to the chatbot. But now for customer

service, I think in my opinion, it's not, um.

It's unnecessary.

P6 - I think it was fine, it was pretty

straightforward.

Ease of use Do you think the chatbot was easy to use?

P5 - Yeah, it was pretty easy to do. Mm hmm.

P6 - Yes, it's just a normal chat window. So it

is what you're used to.

Helpfulness Do you feel that the chatbot was helpful during the interaction?

P5 - Yes, but again quicker, precise answers to my question.

P6 - I mean, it answered the question so

that’s always good

Competence Do you feel that the chatbot was competent when giving you answers to your questions?

P5 - I think was competent enough for the questions that were asked.

P6 - I mean, is competent in answering the questions that I asked about if I have any difficult questions or I mean, usually I'd rather to talk to a customer support agent


Trust towards chatbots Do you trust this chatbot?

P5 - I trust chatbots as far as I trust myself asking specific enough questions. So if I can ask a question specific enough, yeah, then I trust it. I think at first it's just as much, if not more than a human, most of the time.

P6 - With simple questions that it can look up from, um, its configuration, yes, but with more advanced questions… no

Attitude towards using chatbots Would you consider a future interaction with this chatbot?

P5 - I mean, it was the first line of connection and the quickest way, so I would do it.

P6 - Yeah, it's fine. If I wouldn't know where to find the answer and it was a simple question, I would use it.

15.4 Robot avatar + informal conversational style

Results R/IF

General feelings How did you feel during the interaction with the chatbot?

P7 - So the first thing I noticed was, well, I thought the chatbot was friendly.

P8 - Uh, it’s a bit wordy.

Likes Is there something that you particularly liked about the chatbot?

P7 - The friendliness, I think it made it more personal.

P8 - Seems very fast.

Dislikes Is there something that you particularly disliked about the chatbot?

P7 - Nothing came to my mind, I like this chatbot…

P8 - [Wordiness] I don't think it is [an issue] for the general public, but for me personally, yeah.

Appearance How do you feel about the chatbot’s appearance?


P7 - I thought it was cute. I think it fits, because it was, like, a cartoon-style robot. Like it… I just think it had a friendly demeanor despite being robotic.

P8 - I don't have any feelings towards him.

Conversational style How do you feel about the way the chatbot converses with you?

P7 - Despite being robotic, it didn’t speak robotic… It was designed nicely.

- He or she, or it… used a lot of emojis and exclamation points. I also noticed that when I typed a question, like after two questions, I think, they said “Oh, can I help, do you have any other questions, is there anything else I can help you with…?” So it was a useful chatbot. I think the conversation style was very friendly.

P8 - Shorter answers, or even using bullet points, would be better.

Ease of use Do you think the chatbot was easy to use?

P7 - So it was really easy to use and it had quick and accurate responses.

P8 - Yes, it was easy

Helpfulness Do you feel that the chatbot was helpful during the interaction?

P7 - I thought that they [the answers] were very detailed. I mean, I think sometimes chatbots can just respond with one word or one sentence, but this, like, had all the information there.

P8 - It answered exactly what I needed to know.

Competence Do you feel that the chatbot was competent when giving you answers to your questions?

P7 - I think it knew what it was doing. I think it’s fine, but I think if you have a more complicated issue, like, my package was stolen and something like that, I would want to talk to a human…

P8 - It seems like it picked up quite well what I was asking

Trust towards chatbots Do you trust this chatbot?

P7 - Yeah, but I also think it depends on my issue. Because I’m fine talking to chatbots, but a lot of times I go to chatbots when I want to talk to an actual person.


P8 - […] So establishing some sort of trust [is important], yeah. So maybe the robot would, like, prove itself by saying: I'm going to send you a text message on your phone, since we know your number from the store. This will give you a code, and the code will, like, show that we're legit and we're not trying to scam you out of your details.

Attitude towards using chatbots Would you consider a future interaction with this chatbot?

P7 - Yeah, but I also think it depends on my issue. Because I’m fine talking to chatbots, but a lot of times I go to chatbots when I want to talk to an actual person. So if there is something wrong with my order, I go to the chatbot and click all the things, and if “talk to an agent” comes up, then I click “talk to an agent”.

P8 - If it's not an urgent matter, then I would definitely speak to a bot, because I have less anxiety talking to a robot than a human.

15.5 Logo avatar + formal conversational style

Results L/F

General feelings How did you feel during the interaction with the chatbot?

P9 - Yeah, interactions were good.

P10 - Nothing special, to be honest. It's clear; it gave the information that I wanted.

Likes Is there something that you particularly liked about the chatbot?

P9 - It was actually more straight to the point than my personal experience with human customer service.

P10 - I liked it. It was quick. It didn't want to be a human, so to say. Like, it takes time to type the question. No, it's just like, OK, this is the answer. Yeah. Um, which I appreciate if it's clear that it's a chatbot, because otherwise you could be in doubt, like, [whether] somebody [was] typing. It's just like, OK, here's your answer.

Dislikes Is there something that you particularly disliked about the chatbot?

P9 - I noticed a sort of lack of enthusiasm in the bot, the sort of human traits that I missed, sort of.

P10 - Nothing special comes to mind.

Appearance How do you feel about the chatbot’s appearance?

P9 - It would be more pleasant if it was some sort of figure; it could either be a fictional figure, like a robot, or something.

P10 - Yeah, yeah, well, I like it. As I said before, it doesn't appear to be a human; that way, it doesn't want to be human. That's why it's just, like, all linked to the company and not, like, anything or anyone in particular. Yeah, which in this case I liked.

Conversational style How do you feel about the way the chatbot converses with you?

P9 - Well, these messages, they were very concrete and to the point. Maybe if I were to ask further questions on a topic, like “how can I check my order?”, either the bot would have known the answer or referred me to a frequently asked questions page. I personally think that, if it's more elaborate, I would rather have a web page than a bot explaining it to me.

- Well, like I said, the casual tone, the things I said in the beginning: I would like the bot to be more enthusiastic, have a casual tone, sort of reinforce… or how do you say that? Uh, tell the customer, let the customer know that it's not a bad question to ask. “Of course you can”, and this… it's more of a tone, I don't know how to describe it. Yeah, maybe I would add the bot sort of asking me, “did this answer your question?”, and then you can say yes or no, exactly, so the chatbot knows if you need more information or not.

- […] Especially if you know that it's a chatbot… I would find it really, like, stupid. Yeah, because then I would think, like, OK, you're not a human, and someone thought this was a good idea. I think emojis in, like, serious conversations with companies are always a little bit stupid, because emojis for me are more like a fun thing. And if you're having a problem or a serious question and a company is using emojis, then I'm like, you're not taking me seriously.

P10 - I like that in several questions there were different paragraphs, so to say. So it gives you a little bit of ease of reading, and not, like, too much text crammed into one small space. And I also like that it just directs the sentence to you; like, for example, “you are only required to pay in the physical store”. So it talks to you. It feels a little bit more centered around you, which is of course the case, because you [have] the problem. But that's why I liked it. You recognize that it's there for you, so to say.

Ease of use Do you think the chatbot was easy to use?

P9 - Yeah, maybe… I can imagine a scenario where the bot would have the general information and then redirect to the frequently asked questions for more information on the subject.

P10 - It's OK, because at the bottom there's, like, the little circle, which for me is recognizable as a chat thing. Perhaps it could be, like, a little square or something, a textbox above it with a little arrow, like “Chat with us!” or something.

Helpfulness Do you feel that the chatbot was helpful during the interaction?

P9 - They gave all the answers that I would have wanted; there was no unanswered question in that sense.

P10 - Yes, those are very easy, straightforward questions which don't need human interaction at all, because most of the time they're already on a frequently asked questions page. Yeah. And I think with most of these questions that's the case. It's just nothing special; you just want your information. And I think it could work very well for that purpose.

Competence Do you feel that the chatbot was competent when giving you answers to your questions?

P9 - Yes, it was actually more straight to the point than my personal experience with human customer service.

P10 - Yeah, it just gave the answer, yeah, that's good, but also, like, perhaps even a little bit more than you asked, which I like. Because in the first question, like, “how can I track my order”, it just said, like, OK, if you are using a trackable service, the delivery is traceable, and I was like, OK, that's already the answer. But then you get, like, even more details.

Trust towards chatbots Do you trust this chatbot?

P9 - I’d [give my personal information] if it was required, yes. Yeah, if it would ask, yeah.

P10 - […] For example, when there's also personal information which is dealt with, or you have to give, like, all types of information so it can look in the database for stuff about you or your order or stuff like that, I'm also a bit like… yeah, like, the computer has to do all that. Of course a person is also… yeah, you can also have the wrong person, and that also could go wrong.

Attitude towards using chatbots Would you consider a future interaction with this chatbot?

P9 - Yeah, probably

P10 - Yeah, it’s not a problem if it works like this chatbot, for example. Yeah, I would definitely use that if I have a question, like, about delivery.

15.6 Logo avatar + informal conversational style

Results L/IF

General feelings How did you feel during the interaction with the chatbot?

P11 - I felt like it answered my question sufficiently. And it was friendly as well, so I didn't feel any negative feelings towards it, I guess.

P12 - Yeah, good, I think. It’s clear what the chatbot is telling me and it’s a kind tone.


Likes Is there something that you particularly liked about the chatbot?

P11 - I felt like the responses were kind of… they weren't too much to the point. There are some words in there to make it feel more, I guess, human, I don't know.

P12 - [Liked the kind tone] Yes, I think so.

Dislikes Is there something that you particularly disliked about the chatbot?

P11 - Nothing to say about it; the responses were long.

P12 - No, no, I don’t think so. I believe that fewer emoticons would be fine.

Appearance How do you feel about the chatbot’s appearance?

P11 - If I look at the design of the avatar, I don't think it's really suitable.

- Because usually I see a profile picture as a, quote unquote, person, you know. It's supposed to be a chatbot, but even here, I would put a picture of, like, even if it's just, like, one of those Adobe Illustrator flat drawings of a person, or, like, yeah, a robot or a person. That would be better, I think. Because I feel like I'm talking to something and, I don't know, this is like talking to… I don't know what I'm talking to.

P12 - Yeah, it’s not a logo I would choose personally. But it’s not… as an avatar, the little picture, it’s fine. It indicates who you are talking to, sort of. That’s nice. You know you’re talking to someone from Tech Paradise.

Conversational style How do you feel about the way the chatbot converses with you?

P11 - I like it; I think that adds a little bit to the customer experience and also the, quote unquote, human touch. I don't know.

- I like that it has, like, a human-ish touch to the answers. And it was friendly as well.

P12 - I think the tone is fine, to be honest. I think there are a lot of emoticons in the text, which is not annoying but not something I would personally use when talking to a chatbot. But yeah, that’s something that caught my eye.


- Well, I think, sometimes with chatbots, they, like, give the chatbot an actual name. I like that with this chatbot, it’s just, like, the “Tech Paradise” thing, and not, like, pretending that you’re talking to an actual person, which sometimes feels a bit weird to me when you’re talking with a chatbot.

- Sometimes the answers are quite long but you get the answer in a matter of seconds. Which is fine, because when you are talking with a chatbot, I think it’s nice to immediately have an answer. But then, you know, it’s sort of obvious that you are talking to a chatbot, so there is no need for me to pretend that I am talking to an actual person.

Ease of use Do you think the chatbot was easy to use?

P11 - It was very user friendly to use, indeed.

P12 - Yes, definitely. [opening the chat] was easy.

Helpfulness Do you feel that the chatbot was helpful during the interaction?

P11 - Definitely felt like it was helpful. The only thing is that one of the responses was massive, in my opinion. Yeah, the one I told you about.

P12 - Especially with some of the questions, they sort of tell you a little bit of extra information, which is nice, because the information you get other than just the pure answer is useful.

Competence Do you feel that the chatbot was competent when giving you answers to your questions?

P11 - It felt like, as a, I don't know, as a programmer, I find it difficult to say, because if you put in a predetermined question, it's, like, going to give you predetermined answers. Right?

- So I find it difficult to say, like, yeah, it was competent, because I see competency as something like it solves problems, uh, like, dynamically maybe even…

P12 - Well, I think with this, uh, chatbot, it’s quite obvious that you’re not talking with a human, because sometimes the answers are quite long but you get the answer in a matter of seconds.

Trust towards chatbots Do you trust this chatbot?


P11 - I don't know if I would give it my personal information. Why is that? Because maybe the pretense of the situation; like, I actually have no idea what site I'm on at the moment.

P12 - Yeah, I think, it’s a good robot.

Attitude towards using chatbots Would you consider a future interaction with this chatbot?

P11 - Uh, sometimes, like, if I know the company. Like, I think Microsoft actually uses them as well; they’re used to filter out common problems first. And if the chatbot can't help you, then you get redirected to, like, a real person.

P12 - Yes, as long as companies consider how they use a chatbot [the chosen tone and image should be congruent with the brand’s image].