
Master Thesis

Intelligent Personal Assistants: privacy concerns, perceived risk and other factors influencing intention to use IPA technology.

Student: Guus Aelen
Student number: 11142561
Institution: University of Amsterdam
Track: MSc. Business Administration - Digital Business
Email: [email protected]
Supervisor: dhr. dr. D. (Nick) van der Meulen
Submission: 23-06-2017
Version: Final


Statement of originality

This document is written by student Guus Aelen who declares to take full responsibility for the contents of this document. I declare that the text and the work presented in this document is original and that no sources other than those mentioned in the text and its references have been used in creating it. The Faculty of Economics and Business is responsible solely for the supervision of completion of the work, not for the contents.


Table of contents

1 Introduction
  1.1 Goal and contribution
  1.2 Research approach
2 Literature review
  2.1 Internet of Things (IoT)
  2.2 Risks of IoT
  2.3 Prior research on IoT adoption
  2.4 Intelligent Personal Assistant (IPA)
  2.5 Acceptance of Information Technology
    2.5.1 Rogers' theory of Diffusion of Innovation (DOI)
    2.5.2 Technology Acceptance Model (TAM)
    2.5.3 Unified Theory of Acceptance and Use of Technology (UTAUT)
  2.6 Hypotheses and conceptual model
    2.6.1 Behavioral intention (BI) and actual use behavior
    2.6.2 Concern for information privacy (CFIP)
    2.6.3 Concern for information privacy (CFIP), Perceived Risk (PR) and Behavioral Intention (BI)
    2.6.4 Performance expectancy (PE), Effort expectancy (EE) and Behavioral Intention (BI)
    2.6.5 Social Influence (SI), Facilitating conditions (FC) and Behavioral intention (BI)
    2.6.6 Hedonic motivation (HM), Price value (PV) and Behavioral intention (BI)
3 Data collection
  3.1 Instrument development
  3.2 Sampling strategy
4 Data analysis and findings
  4.1 Descriptive measures
  4.2 Reliability and validity
  4.3 Correlations
  4.4 Analysis
5 Discussion
  5.1 Discussion on hypotheses
  5.2 Discussion on CFIP instrument
  5.3 Implications for practice
  5.4 Contribution to theory
  5.5 Limitations and directions for further research
6 Conclusion
7 Bibliography
8 Appendix
  8.1 Items used
  8.2 Survey for this study


Abstract

This study empirically examines what factors and consumer attitudes influence the acceptance of Intelligent Personal Assistant (IPA) technology in smart homes. Enabled by advancements in the fields of Internet of Things and Artificial Intelligence, IPA technology, embodied by devices like Amazon Echo or Google Home, has the potential to revolutionize human-computer interaction. For this study, the UTAUT2 framework was complemented with Concern For Information Privacy (CFIP) and Perceived Risk. Using a multiple regression on a sample of 222 respondents, the model was able to explain 62% of the variance in behavioral intention to use IPA technology. Performance Expectancy, Hedonic Motivation, Social Influence and Price Value proved to be significant predictors, substantiating previous findings that Price Value and Hedonic Motivation are important predictors in consumer acceptance of technology. No empirical evidence was found for a significant influence of Facilitating Conditions and Effort Expectancy, which could be due to the relatively young sample. Concern For Information Privacy did not have a direct effect on behavioral intention, but significantly increased a person's Perceived Risk, which in turn significantly lowered a person's behavioral intention to use IPA technology (full mediation).

A CFA shows that the CFIP instrument did not perform as expected, with a low factor loading of Concern for Collection compared to the other three constructs of the latent variable. This suggests that CFIP should not always be used as a multidimensional construct, and the negative correlation with other UTAUT constructs shows that the influence of Concern for Collection is more complex and needs further investigation in the context of IPA technology.

Key words: Smart assistant, Intelligent Personal Assistant, Technology acceptance, virtual assistant, Concern for Information Privacy, Perceived Risk, UTAUT, Zero UI


1 Introduction

The pace at which Internet Technology is changing the way companies work and consumers live their daily lives is increasing. Driven by new and improved sensors, wireless technology and other technological innovations, companies equip their products with an internet connection while working toward an interconnected network of 'smart' related products and services. This emerging development, which goes by the name of the Internet of Things (IoT), has the potential to make companies' supply chains more efficient and to increase convenience and efficiency in the daily life of consumers. According to research by Gartner (2013), the number of 'connected' objects will grow to an installed base of 26 billion units by 2020, excluding PCs, tablets and smartphones. This is an exponential growth of 30 times the installed base of 2009, when there were 'only' 900 million devices connected to the web. The growth in IoT far exceeds that of any other connected device: by 2020 the number of smartphones, tablets and PCs in use will reach about 7.3 billion units. Gartner claims that this rise, and the turnover of $300 billion by this time, are enabled by decreased component costs making connectivity a standard feature. The pervasiveness of wireless technology, standardization of protocols, ubiquitous access to the internet and consumer familiarity with connected things (e.g. vehicles, smart homes) further encourage this (W.-J. Lee & Chong, 2016).

IoT, which used to be a fuzzy concept to consumers, is becoming more tangible nowadays. One of the cutting-edge applications of IoT technology is the voice-controlled Intelligent Personal Assistant (IPA), such as Amazon's Echo and Google Home. These devices, enabled by advancements in the fields of Artificial Intelligence (AI) and Human-Computer Interaction, could change the way we interact with technology and take on a central role in our future smart homes, functioning as a voice-controlled personal assistant to control smart devices around the house, keep a calendar up-to-date or shop online without the use of a User Interface (UI). Research firm Tractica estimated that 40 million homes will be using digital assistants by 2021 (Bergen & Kharif, 2017). IoT recently received a lot of


attention in practical and academic fields: more than 1400 items were published in 2016, compared to 'only' 400 items in 2014 (Web of Science, 2017). The majority of the current research has a technical approach. Research on technologies, privacy and data security is in abundance, while little is known about the consumers that will have to adopt IPA technology.

1.1 Goal and contribution

This study aims to identify what factors and consumer attitudes influence the behavioral intention to use IPA technology, in order to understand how these factors and attitudes influence future adoption. This study adds to the current literature on IoT and technology adoption since it takes a consumer perspective and is (to the best of my knowledge) the first that looks into the adoption of IPA technology, one of the most integrative IoT applications for consumers at this time. IPA technology has the potential to change the way humans interact with technology, as it enables moving away from touchscreens and other physical User Interfaces. This study is also unique in that it takes a consumer-centric approach in a field where technical research dominates. Practitioners and companies in the smart home industry will benefit by getting a better understanding of what affects consumer adoption, helping them improve the fit between their products and consumer needs. For this study, the following research question is used:

''What factors and consumer attitudes influence the acceptance of Intelligent Personal Assistant technology in smart homes?''

1.2 Research approach

A deductive research approach is used to conduct this research, involving a survey that samples consumers. Based on earlier technology acceptance research, combined with previous studies on Internet of Things adoption and on similar fields that come close to IPA


functionalities (e.g. e-commerce and smart home adoption) and that cover focus topics (e.g. privacy), nine hypotheses are suggested. These hypotheses are tested using data from a non-probability sample mainly consisting of students (convenience sample). Data is collected via a web-based survey using Qualtrics.

2 Literature review

For this literature review a combination of business journals and peer-reviewed journals is used. At the start, online business journals like Harvard Business Review (HBR), MIT Sloan Management Review (SMR) and California Management Review (CMR) were consulted to identify how IoT and IPA technology relate to today's business and management practice and to identify underlying papers. (Trend) reports from prominent consulting firms (e.g. McKinsey, PwC, Gartner) provided insight into challenges, developments and terminology that was used afterwards in CataloguePlus (UvA) and Web of Science. No filters for business journals were applied, since a lot of papers are published in the field of computer and information science. Especially peer-reviewed journals and conference proceedings were selected for further reading. The search terms used are included in table 1.

Table 1: Search terms

Initial search terms (HBR, SMR, CMR): ''Internet of things'', ''IoT'', ''Personal assistants'', ''smart home'', ''technology acceptance'', ''Technology acceptance model''

Revised search terms (after reading): ''Internet of things'' and ''adoption'', ''IoT'' and ''adoption'', ''Intelligent Personal assistants'', ''technology adoption'', ''artificial intelligence'', ''intelligent agent'', ''smart speaker''

2.1 Internet of Things (IoT)

Most people will say that the internet connects people and enables them to exchange information. More recently, fueled by the use of social media, the internet became a place in which people participate, interact and reside in virtual communities, creating their own content. This user-centric view of the internet goes by the name of web 2.0 (Allen, 2009).


IoT, a term first introduced in 1999, differs from the web 2.0 view in that it places machines and objects with embedded sensors at the center, allowing them to communicate autonomously over the internet. Conceptually, the IoT builds on three pillars that describe the abilities of these 'smart' objects: 1) they are identifiable, 2) they communicate, and 3) they interact, either among themselves, resulting in a network of interconnected objects (Machine to Machine), or with end-users or other entities in a network (Miorandi, Sicari, De Pellegrini, & Chlamtac, 2012). All these integrated objects create an environment in which consumers are more or less unaware of all the interacting technology surrounding them. This pervasive availability of computers also goes by the name of ubiquitous computing. One of the areas where IoT is being used more and more often is within domestic homes. Manufacturers are making devices and appliances around our homes interconnected, giving us control over our domestic environment with the ultimate goal to save costs, live more efficiently, enhance functionality or improve quality of life in any other way. This specific market of 'smart homes' is expected to grow from $40 million in 2012 to $26 billion in 2019 (Wilson, Hargreaves, & Hauxwell-Baldwin, 2015).

2.2 Risks of IoT

As the spread of connected devices goes on, the paradigm of ubiquitous computing will soon be reality. This includes the domestic environment. Our homes are treasured places, and all these smart devices exchange personal data in order to function and integrate into our lives — data that can, in theory, be used harmfully. This is why the security and privacy risks of IoT devices are mentioned as a concern in research (C.-L. Hsu & Lin, 2016; Weber, 2010). Privacy violations like selling data to third parties, identity theft or infrastructure vulnerabilities, and the potential use of (domestic) data in digital forensics, are topics that lack knowledge and need more regulation (Plachkinova, Vo, & Alluhaidan, 2016). Researchers at this point in time are trying to keep up with the pace of IoT, trying to solve this 'dark side of IoT' and its


challenges. At the same time, smart devices are gaining popularity, leaving the untaught or (purposely) unaware consumers with potentially large risks.

2.3 Prior research on IoT adoption

We already live in a world with connected vehicles, smart appliances and almost every electronic device that we buy having an internet connection. Because IoT products and services are still developing at this moment in time, most research thus far has focused on (technical) challenges like the lack of (open) standards (Miorandi et al., 2012), improving business models (Ives, Blake, 2016; Manyika, 2015) and addressing vulnerabilities and privacy-related issues (C.-L. Hsu & Lin, 2016; Plachkinova et al., 2016; Yan, Zhang, & Vasilakos, 2014). It seems like most of this research is pushed by the developers of IoT technology. When it comes to IoT and the customer perspective, specifically consumer acceptance and its influencing determinants, only a few recent studies were found. A recent study of C.-L. Hsu & Lin (2016) provided a framework from the perspective of network externalities and Concern For Information Privacy in order to find out what drives consumers in continued use of IoT services. Based on a literature study, they used the predictor variables Concern For Information Privacy and perceived benefits, in combination with the mediator attitude, to measure the outcome variable continued intention to use. Their findings show that network externalities (e.g. number of services, perceived compatibility) play a significant role in influencing benefits and thus adoption. Surprisingly, Concern For Information Privacy only had a weak (negative) effect on continued intention to use relative to perceived benefits. Another recent study, from Lee & Chong (2016), also emphasized the importance of understanding consumer acceptance in order for IoT to become commonly accepted. In this study, the authors used a dual-factor model to determine future adoption of smart IoT services and applications. They based their model on Herzberg's theory (1959), which is thought to be useful for understanding technology services.


They combined this dual-factor theory (avoiding pain and growing psychologically), in the form of satisfiers and dissatisfiers, with the technology paradox of Mick and Fournier (1998) to come up with eight predictor variables influencing satisfaction with IoT (technology attitude) and therefore the outcome variable intention to adopt. They found that several satisfiers, such as fulfilled needs, perceived enjoyment and technology trust, are effective drivers of attitude towards IoT. They also found that their dissatisfiers (e.g. performance ambiguity, perceived incompetence, perceived chaos) are not powerful anti-drivers of IoT service satisfaction. As an explanation, the authors mention that this could be due to the fact that IoT services are often invisible, making it hard to recognize failure, and adoption is still increasing. They argue that this could result in only few consumers with actual experience and therefore a weaker ability to connect negative attitudes and dissatisfaction to it. A limitation of this last study is the sample consisting of younger college students, to whom an explanation of IoT was presented. It would be interesting to use a more demographically experienced and diverse sample, since IoT will affect young and old, and we need to understand the motivations of miscellaneous people for IoT services to become ubiquitous.

2.4 Intelligent Personal Assistant (IPA)

Intelligent Personal Assistant (IPA) is a fairly new concept for a phenomenon that has been around for ages: personal assistants. In literature, IPAs are described as ''autonomous software agents which assist users in performing specific tasks. They should be able to communicate, cooperate, discuss and guide people'' (Garrido, Martinez, & Guetl, 2010). The basis of an IPA is formed by a software agent capable of finding and filtering information, negotiating for services, automating complex tasks, or communicating with other software agents (e.g. devices) to solve complex problems (Czibula, Guran, Czibula, & Cojocar, 2009). Ubiquitous computing and advancements in the field of artificial intelligence, understanding of the semantic web (machine learning) and speech recognition (capturing the meaning of


spoken language) enabled IPAs to become 'intelligent'. By intelligent is meant that the software agent is able to understand its surroundings (e.g. through sensors or communication with smart objects) and has learning capabilities, resulting in autonomous actions supporting humans. Speech recognition technology enables users to interact with an IPA using only their voice, translating natural voice commands into machine-level commands (Santos, Rodrigues, Casal, Saleem, & Denisov, 2016). Another way to understand IPA technology is to look at it as a User Interface. Nowadays we are used to consuming information, performing tasks or controlling our environment using mobile devices, wearables or laptops. In the beginning, IPAs fulfill the same functions but without the use of a User Interface (UI). This development of not relying on touchscreens and of interacting with technology in a more natural way also goes by the name of Zero UI. There are different digital assistants currently available. One of the most familiar would be Apple's Siri, the voice assistant that Apple builds into its products. Microsoft named its digital assistant Cortana, and Android devices use Google's assistant. More recently, Amazon introduced its 'Amazon Echo' speaker and shortly afterwards Google introduced the 'Google Home' speaker. These competing devices are currently the most 'intelligent' personal assistants, meaning they have the most functionalities, solely using voice commands for interaction. These devices come closest to the definition of an IPA and are the most tangible applications of IPA technology to contemporary consumers.

2.5 Acceptance of Information Technology

Looking into the acceptance of new technology, research draws on different theories and frameworks. A lot of models that seem applicable in the case of IoT can be borrowed from the field of IS. Popular models are the TOE framework (Tornatzky, Fleischer, & Chakrabarti, 1990), UTAUT (Venkatesh, Morris, Davis, & Davis, 2003) and the more popular Technology Acceptance Model (TAM) and Diffusion of Innovation (DOI) (Rogers, 1983; Venkatesh & Davis, 2000). The TOE framework incorporates an organizational perspective. Since this


study focuses on consumers and IoT adoption, this model is not considered. The other three and their applicability are discussed hereafter.

2.5.1 Rogers' theory of Diffusion of Innovation (DOI)

The way new innovations (technologies) are accepted and used by different people in society over time is what we call diffusion. According to Rogers (1983), whether and at what rate innovations are adopted depends on the characteristics of the innovation. The five characteristics of an innovation that determine the rate of adoption by the social system are:

(1) Relative advantage: the degree to which an innovation provides value over the idea it supersedes. The more advantageous an innovation is perceived to be, the higher its adoption rate will be.

(2) Compatibility: the degree to which an innovation fits with the existing values, experiences and needs of potential adopters. It is more likely for someone to adopt an innovation if it is compatible with their situational needs, values and personal life.

(3) Complexity: the degree to which an innovation is difficult to understand or use.

(4) Trialability: the degree to which an innovation can be tested or experimented with.

(5) Observability: the degree to which the benefits of an innovation are observed by others.

Another influencing factor is how people go through five interlinked stages that Rogers calls the innovation-decision process. These stages describe how an individual goes from (1) first awareness of an innovation (knowledge), (2) to forming an attitude towards it (persuasion), (3) to deciding to reject or adopt it (decision), (4) to implementation of the innovation (implementation), and (5) to comparing the outcome of the decision with preceding expectations (confirmation).


2.5.2 Technology Acceptance Model (TAM)

The Technology Acceptance Model (TAM) was first introduced by Davis (1989) and is perceived as one of the most used frameworks in IS to measure technology acceptance (figure 1). It is based on two central constructs that determine the attitude and behavior towards a technology, trying to explain the actual intention to use. The two central constructs are perceived ease of use (EOU) and perceived usefulness (U), later extended with two new constructs (TAM2), being social influence and cognitive instrumental processes (Venkatesh & Davis, 2000). Different external factors influencing the central constructs can be incorporated in the model by the researcher, depending on the situation. With more than 700 citations of the initiating article, the TAM model has proven its reliability, and perceived usefulness has proven to be a strong (beta 0.6) driver of intention to use (Venkatesh & Davis, 2000). However, there are also some studies showing that the TAM model is not perfect (Chuttur, 2009). For example, the study of Burton-Jones & Hubona (2006) showed that some external variables have a direct influence on system usage, over and above their indirect effects. As argued by Chuttur (2009), who gave an overview of the TAM model, the model has reached a saturation point. Future research should combine the strong points of this model while discarding the weaknesses.

Figure 1: The original TAM model as presented by Davis (1989)


2.5.3 Unified Theory of Acceptance and Use of Technology (UTAUT)

The Unified Theory of Acceptance and Use of Technology (UTAUT) model is the result of a review of user acceptance literature by Venkatesh et al. (2003). The model (figure 2) integrates elements of eight different acceptance models, like the Theory of Reasoned Action, the Theory of Planned Behavior, TAM, DOI, social cognitive theory and others. Venkatesh distilled these theories into the unified UTAUT model, with four core determinants of intention and usage, being (1) Performance Expectancy, (2) Effort Expectancy, (3) Social Influence and (4) Facilitating Conditions, and four moderators of the key relationships of these determinants, being (1) gender, (2) age, (3) experience and (4) voluntariness of use. They tested this model and found that it outperforms all preceding models in predicting usage, with an adjusted R2 of 0.69 (Venkatesh et al., 2003). The UTAUT model was extended by Venkatesh, Thong, & Xu (2012) into UTAUT2. The authors incorporated three new constructs into the original UTAUT model, adding Hedonic Motivation, Price Value and Habit in order to extend and tailor the applicability of the model from employee technology acceptance to a more consumer-focused context. Voluntariness of use was removed from the model. In the original UTAUT, utilitarian constructs like Performance Expectancy proved to be strong predictors of behavioral intention (Venkatesh et al., 2003). Because hedonic or intrinsic motivation proved to be an important perspective in motivation theory (Holbrook & Hirschman, 1982) and is therefore commonly used as an important predictor in consumer behavior research, Venkatesh et al. (2012) decided to incorporate it into the UTAUT2 model. Their results showed that Hedonic Motivation is an even more important driver of behavioral intention in non-organizational settings than Performance Expectancy (Venkatesh et al., 2012). Price Value is described as the cognitive trade-off between the perceived benefits of a technology and the monetary costs of using it. Price Value is included since a positive Price Value (benefits outweigh costs) is perceived to have a positive impact on behavioral intention to use. Habit is operationalized as ''a perceptual construct that reflects the results of prior


experiences'' (Venkatesh et al., 2012). Finally, the authors expect that the different determinants of behavioral intention are moderated by age, gender and experience.

Figure 2: The UTAUT2 model as presented by Venkatesh et al. (2012)


2.6 Hypotheses and conceptual model

From the previous literature review it is concluded that the UTAUT2 model seems most applicable for this study, and it is therefore used as the basis for the conceptual model of this study. UTAUT2 integrates different proven theories and frameworks on individual user acceptance, and is tailored to fit consumer acceptance by adding Hedonic Motivation, Price Value and Habit. Besides, it includes the social context in which a technology is adopted, which TAM neglects. This is perceived as important since this research is situated within smart homes, and Social Influence is thought to be important in consumer acceptance. The Habit construct, defined as ''the extent to which people perform behavior automatically because of learning'' (Limayem, Hirt, & Cheung, 2008), is not taken into account in this study, since habit is partly operationalized by prior behavior and learning over time (automaticity), which is perceived as irrelevant when sampling people that most likely do not have any prior experience with IPA technology. Hereafter the different constructs and hypotheses of this study are discussed and operationalized. The conceptual model for this study is visualized in figure 3.


[Figure 3: Conceptual model of this study. Performance Expectancy, Effort Expectancy, Social Influence, Facilitating Conditions, Hedonic Motivation, Price Value, Concern For Information Privacy (a second-order construct with the dimensions Collection, Improper Access, Unauthorized Secondary Use and Errors) and Perceived Risk are modeled as predictors of Behavioral Intention, with H2 (+) running from Concern For Information Privacy to Perceived Risk and H3 (-) from Perceived Risk to Behavioral Intention.]
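Since the hypotheses are tested with multiple regression (chapter 4), the conceptual model can also be summarized algebraically. The regression form below is a sketch of my own; the coefficient symbols are not part of the thesis:

    BI = \beta_0 + \beta_1 PE + \beta_2 EE + \beta_3 SI + \beta_4 FC + \beta_5 HM + \beta_6 PV + \beta_7 CFIP + \beta_8 PR + \varepsilon

    PR = \gamma_0 + \gamma_1 CFIP + \nu

Under this notation, H2 predicts \gamma_1 > 0 and H3 predicts \beta_8 < 0, while full mediation would correspond to \beta_7 becoming non-significant once PR is included.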

2.6.1 Behavioral intention (BI) and actual use behavior

One of the theories underpinning technology acceptance frameworks is the Theory of Reasoned Action (TRA) by Fishbein and Ajzen (Ajzen, 1980). This theory has proven to be useful in explaining and predicting the actual behavior of a consumer. It describes that a consumer's use behavior (actual use) can largely be predicted by analyzing the subjective norms (social influence) and the attitudes or beliefs that influence one's behavioral intention. The theory was later revised by adding perceived behavioral control, meaning the perceived possession of the requisite resources to perform a behavior (Ajzen, 1985). In this study, this aspect is incorporated in the construct Facilitating Conditions, which is discussed hereafter. Since IPA technology is new to consumers, they are most likely inexperienced with the technology and the smart products incorporating it. This means actual use cannot be measured. So, in line with other studies on future use of technology (Lee & Chong, 2016; Gao & Bai, 2014; Pavlou, 2003), use behavior is left out and we follow the logic that actual use follows from behavioral intention.


2.6.2 Concern for information privacy (CFIP)

Information privacy can be seen as the right to select what information is known to different people or organizations, and when, how and to what extent (Westin, 1967). Nowadays consumers generate a wealth of data that helps companies personalize and tailor their products. Not knowing what happens with personal data, and a lack of information control, gives consumers a vulnerable feeling (Sheng, Nah, & Siau, 2008). This convenience (personalized services) versus privacy (exchange of data) trade-off goes by the name of the Personalization Privacy Paradox. Literature often treats privacy as a multidimensional (composite) construct. An often used and proven construct is Concern For Information Privacy (CFIP). Smith, Milberg, & Burke (1996) created a valid measurement for CFIP that describes how organizations touch upon privacy along four dimensions: collection, improper access, unauthorized secondary use and errors. An individual is high on CFIP if he or she perceives that 1) too much personal data is collected (collection), 2) organizations fail to protect access to personal data (improper access), 3) personal data is used for undisclosed purposes (unauthorized secondary use) or 4) personal data is false or erroneous (errors) (Stewart & Segars, 2002). Another construct that is used to measure information privacy is the multidimensional construct Internet Users' Information Privacy Concerns (IUIPC), as proposed by Malhotra, Kim, & Agarwal (2004). IUIPC is a reaction to CFIP, since information privacy changed due to the widespread use of the internet. This shift in environment pleaded for specific dimensions tailored to the online environment. Almost resembling CFIP, IUIPC is conceptualized from three perceptions: 1) concern about the collection of personal data, which is only perceived as fair if a person is given 2) control over their data, while a person is also 3) informed about the purpose of data collection (Malhotra et al., 2004). This resulted in three (first-order) factors that describe IUIPC: collection, control and awareness of privacy practices. While Malhotra et al. (2004) showed that IUIPC correlated significantly


stronger with criterion variables and proved to have a better model fit compared to CFIP, researchers still tend to use CFIP to measure information privacy concerns (Belanger & Crossler, 2011). No clear argument for this was given, and because the two instruments are almost interchangeable, the more common CFIP instrument is used in this study.

During the literature review, several studies on the adoption of IoT were found. Hsu & Lin (2016) showed in their study on the adoption of IoT that CFIP has a direct negative effect on continued intention to use. This is in line with Fishbein's Theory of Reasoned Action, since privacy concern is perceived as a negative antecedent (Fodor & Brem, 2015; Xu & Teo, 2004) that could influence a user's attitude towards a technology, and ultimately behavioral intention. More practically, privacy concerns around microphone-enabled devices (e.g. digital assistants or smart toys) and the price of convenience are fuel for congresses and skeptical news articles. Other studies in the field of IoT also showed that privacy concerns negatively impact consumer intention and adoption of technology (C.-L. Hsu & Lin, 2016; C.-W. Hsu & Yeh, 2016; Sheng et al., 2008). Thus, it is expected that CFIP negatively influences behavioral intention.

H1: An individual's Concern For Information Privacy has a significant negative influence on his/her behavioral intention to use IPA technology.

2.6.3 Concern for information privacy (CFIP), Perceived Risk (PR) and Behavioral Intention (BI)

The functionalities of IPA technology are built around information about its user. If one wants to utilize all the functions of IPA technology, that person should be willing to give up control and personal data to the IPA service provider and let it interact with and control other smart devices (e.g. smart lighting or connected appliances) or integrate with online shopping services like Amazon. Full integration of an IPA device could result in unwanted actions. This is illustrated by the hundreds of Amazon Echo owners whose IPA device ordered a


doll house after a news presenter said 'how she loved that Alexa ordered a doll house for the child', unintentionally activating and commanding hundreds of Echo devices of people watching that specific broadcast (Liptak, 2017).

Perceived Risk can be operationalized as ''the felt uncertainty regarding possible negative consequences of using a product or service'' (Featherman & Pavlou, 2003). The construct has six facets: the possibility of malfunctioning (performance risk), monetary loss (financial risk), wasting time (time risk), losing status (social risk), a negative effect on peace of mind (psychological risk) and the loss of control over personal information (privacy risk). For this study, the last facet is used, measured as a person's overall Perceived Risk. Risk concerns proved to be an important barrier in the acceptance of technology (Lee, 2009). During the rise of e-commerce at the beginning of this century, both Perceived Risk and trust (the willingness to depend, and the perception of competence or integrity of others) proved to be direct influencers of the willingness to interact with online merchants (Pavlou, 2003; Van Slyke, Shim, Johnson, & Jiang, 2006). The study of Van Slyke et al. (2006) found that information privacy concerns are salient in this situation: they do not directly influence willingness to transact, but are mediated by Perceived Risk. On the other hand, the intangible nature and functioning of IPA technology could also look risky to consumers, implying that risk perception could play an important role while interacting with IPA technology. Since it is expected that Concern For Information Privacy has a negative effect on Behavioral Intention, it is expected that this relation is partially mediated by Perceived Risk. Therefore it is proposed that

H2: An individual's Concern For Information Privacy has a significant positive influence on his/her Perceived Risk of IPA technology

H3: An individual's Perceived Risk has a significant negative influence on his/her behavioral intention to use IPA technology
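Hypotheses H1-H3 jointly describe a mediation structure. A minimal sketch of how such a mediation can later be tested with OLS regressions and a bootstrapped indirect effect is given below in Python (the thesis itself uses SPSS; the data frame df with summated scale scores named CFIP, PR and BI is an assumption of mine):

    import numpy as np
    import statsmodels.formula.api as smf

    # df: a pandas DataFrame with one row per respondent and summated
    # scale scores for CFIP, PR (Perceived Risk) and BI (Behavioral Intention).
    total   = smf.ols("BI ~ CFIP", data=df).fit()        # total effect (H1)
    path_a  = smf.ols("PR ~ CFIP", data=df).fit()        # CFIP -> PR (H2)
    path_bc = smf.ols("BI ~ CFIP + PR", data=df).fit()   # PR -> BI, controlling for CFIP (H3)

    # Bootstrap the indirect effect a*b; full mediation is suggested when the
    # indirect effect is significant while the direct effect of CFIP is not.
    rng = np.random.default_rng(42)
    indirect = []
    for _ in range(2000):
        s = df.sample(len(df), replace=True, random_state=rng)
        a = smf.ols("PR ~ CFIP", data=s).fit().params["CFIP"]
        b = smf.ols("BI ~ CFIP + PR", data=s).fit().params["PR"]
        indirect.append(a * b)
    print("95% bootstrap CI for a*b:", np.percentile(indirect, [2.5, 97.5]))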


2.6.4 Performance expectancy (PE), Effort expectancy (EE) and Behavioral Intention (BI)

The strongest predictor of behavioral intention in UTAUT is Performance Expectancy (PE). In an IS setting, Performance Expectancy means ''the degree to which an individual believes that using the system will help him or her to attain gains in job performance'' (Venkatesh et al., 2003). The construct is derived from five root constructs: perceived usefulness (TAM/TAM2), extrinsic motivation (motivational model), job-fit (Model of PC Utilization/MPCU), relative advantage (Innovation Diffusion Theory/IDT) and outcome expectations (Social Cognitive Theory) (Venkatesh et al., 2003). In this study, the redefined definition from UTAUT2 is used, saying that Performance Expectancy is the ''degree to which using a technology will provide benefits to consumers in performing certain activities'' (Venkatesh et al., 2012). A meta-analytic review of 37 studies using the UTAUT model showed that the impact of Performance Expectancy can be classified as medium, with a Zr of 0.53 (Taiwo & Downe, 2013). Another phenomenon that could substantiate the relation between Performance Expectancy and Behavioral Intention in a human-computer interaction setting is that humans suffer from algorithm aversion (Dietvorst, Simmons, & Massey, 2015; Yuksel, Collisson, & Czerwinski, 2017). Humans have a higher intolerance for errors made by algorithms (inseparable from IPA technology) than for human error. The fact that confidence is lost more quickly when errors are made by technology (no benefits are derived) could point to a substantial importance of Performance Expectancy in this study. It is expected that

H4: An individual's Performance Expectancy will have a significant positive influence on his/her behavioral intention to use IPA technology

Another antecedent of behavioral intention is Effort Expectancy. This is described as the ease of use that is associated with a certain technology. The concept of Effort Expectancy is based


on perceived ease of use (TAM/TAM2) and the complexity of a system (MPCU/IDT) (Venkatesh et al., 2003). With both of these constructs being proven predictors within UTAUT(2), it is expected that

H5: An individual's Effort Expectancy will have a significant positive influence on his/her behavioral intention to use IPA technology

2.6.5 Social Influence (SI), Facilitating conditions (FC) and Behavioral intention (BI)

Social Influence is derived from subjective norm (TAM/TAM2), social factors (MPCU) and image (IDT). All of these labels are based on the notion that an individual's behavioral intention to use a system is influenced by what others will think of him or her as a result of using a certain technology (Venkatesh et al., 2003). In a consumer setting, Social Influence is described as ''the extent to which consumers perceive that important others (e.g. family and friends) believe they should use a particular technology'' (Venkatesh et al., 2003). This would mean that information or encouragement from others influences a user's behavioral intention to use IPA technology. Over the last decade, consumers have become even more subject to Social Influence because of constant connectivity with peers and their opinions via social media. Consumers share their experiences, refer to and use review websites to gain knowledge before buying a certain product. Research by Risselada, Verhoef, and Bijmolt (2013) showed that there is a significant positive effect of Social Influence on the adoption of high-technology products, to which IPA technology also belongs. They also showed that this effect of Social Influence decreases from introduction onwards. As an explanation, Risselada et al. (2013) state that this is due to information substitution: as time goes by, more information and common knowledge become available about a technology, decreasing the need for opinions or obtaining information from others (Chen, Wang, & Xie, 2011). The fact that users of IPA technology have to interact using their own


voice could also be subject to Social Influence. At this point in time, the propriety of a human having a conversation with a natural-sounding artificial agent is still questionable. Our homes are shared places, and people might feel embarrassed drawing attention while asking their IPA what is in the fridge or to pre-condition the Tesla. Studies on how people use voice assistants like Apple's Siri also indicate that the social context and the type of information that is transmitted (private versus non-private) influence how people use this technology, due to perceived discomfort or the feeling that they are being judged (Moorthy & Vu, 2015). This indicates that the opinion of others influences one's behavior.

In line with these findings, it is expected that a consumer is prone to the technology judgements or opinions of important others, meaning that others have a positive influence on an individual's behavioral intention to use IPA technology. Therefore it is proposed that

H6: Social Influence on an individual will have a significant positive influence on his/her behavioral intention to use IPA technology

Facilitating Conditions are defined as the extent to which a consumer perceives that resources exist to support the use of a technology. While Facilitating Conditions is directly linked to use behavior (not measured here) in the original UTAUT, it is also linked to behavioral intention in UTAUT2. This is because facilitating factors (e.g. training) are self-evident in organizations, but consumers do not always have the same access to facilitating factors and could therefore be more hesitant (Venkatesh et al., 2012). That is why facilitating factors for IPA technology, like online help or available support from a manufacturer, apart from directly influencing use behavior, also influence behavioral intention. This construct originally tried to measure to what extent an organizational or technical infrastructure exists to support the use of a system. Compatibility, perceived behavioral control (derived from the theory of planned behavior) and technology facilitating conditions are therefore seen as the root constructs embodied by Facilitating Conditions. This is closely related to a study of Hsu and


Lin (2016), who studied the effect of network externalities on IoT adoption and showed that indirect network externalities like perceived compatibility and perceived complementarity significantly influenced the perceived benefits and thus the adoption of IoT services. In a smart home situation, this could imply that the number of integrations and (complementary) products or services that an IPA works with influences a consumer's perceived benefits of the technology. It is therefore proposed that

H7: Facilitating Conditions will have a significant positive influence on an individual's behavioral intention to use IPA technology

2.6.6 Hedonic motivation (HM), Price value (PV) and Behavioral intention (BI)

Hedonic Motivation (HM) and Price Value (PV) are two extensions made by Venkatesh et al. (2012) to the original UTAUT to make the model more applicable to a consumer context. The results of their study show that, in this context, Hedonic Motivation and Price Value are even stronger predictors of behavioral intention than Performance Expectancy was in the original UTAUT. Intrinsic or hedonic motivation, which can be described as the perceived enjoyment or pleasure that is derived from an activity, has proven to be an important predictor of behavioral intention (Brown & Venkatesh, 2005). Since IPA technology is based on a unique and fun experience of (two-way) human-computer interaction, and voice-controlled interaction with a device is fairly new to consumers, Hedonic Motivation will likely play an important role in the tested model. A study of Van Der Heijden (2004) on pleasure-oriented information systems supports Hedonic Motivation as an important determinant, stating that hedonic systems, which also focus on the fun aspect, encourage the prolonged usage of a system rather than utilitarian (productivity) use. Another study, of Sun & Zhang (2006), showed that Hedonic Motivation influences behavioral intention via perceived ease of use (Effort Expectancy) and found no direct relation. However, since that study had no particular


consumer focus, and this study is interested in the specific hedonic nature of IPA technology in a consumer context, it is proposed that

H8: An individual's Hedonic Motivation will have a significant positive influence on his/her Behavioral Intention to use IPA technology

In organizational settings, workers do not bear the monetary costs of an information system. However, this is the case for consumers adopting new technology. It is evident that the price or cost of a technology will influence its use. Price Value is therefore defined by Venkatesh et al. (2012) as a ''consumers' trade-off between the perceived benefits of the application and the monetary cost for using them'' (Dodds, Monroe, & Grewal, 1991). As soon as the benefits of a technology outweigh the monetary costs, the Price Value of a technology is perceived as positive and positively impacts behavioral intention (Venkatesh et al., 2012). Therefore it is proposed that

H9: Price Value will have a significant positive influence on an individual's Behavioral Intention to use IPA technology


3 Data collection

This study uses quantitative, primary survey data in order to understand how the different constructs affect the behavioral intention of consumers to use IPA technology. In this chapter the reliability of the items and the data collection method are explained.

3.1 Instrument development

In the previous paragraph the constructs were operationalized. Before spreading a survey, it is important that a construct measures what it claims to measure. This is also called the content validity of the constructs one is interested in (Saunders, Lewis, & Thornhill, 2012). To ensure this, it is important to use validated items that are altered to a minimum while adapting them to the context of IPA technology. The adjusted and original items can be found in appendix 1. Since all items stem from English studies, and some respondents had Dutch as their first language, the questionnaire was translated to Dutch for convenience. To prevent translation errors, questions that were difficult to translate were put into Google Translate, which in a few cases suggested a better translation. Some of the translated questions were translated back using the same service to check whether the translation was done accurately. As described in table 2, a multi-item method is used with 3-4 questions per construct. The reliability column shows that the Composite Reliability (CR) exceeds the threshold of .70 and the Average Variance Extracted (AVE) exceeds the threshold of .50 (Hair, 2010), showing the measures proposed for this study are internally consistent and reliable enough for use. The exception is the CR of Collection (.68), which falls just short of the .70 threshold. While there are studies claiming that a CR above .60 is also acceptable (Tseng, Dörnyei, & Schmitt, 2006), this slightly lower reliability score will be kept in mind during further analysis.
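For reference, with standardized factor loadings \lambda_i for the n items of a construct, the two reliability measures referred to above are commonly computed as follows (a standard formulation following Fornell & Larcker, 1981; the thesis itself does not spell these out):

    \mathrm{CR} = \frac{\left(\sum_{i=1}^{n}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{n}\lambda_i\right)^{2} + \sum_{i=1}^{n}\left(1-\lambda_i^{2}\right)}, \qquad \mathrm{AVE} = \frac{\sum_{i=1}^{n}\lambda_i^{2}}{n}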

In line with the popular technology acceptance models, a seven-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree) is used for most of the items. Perceived Risk was measured using a different scale, as suggested by Sitkin & Weingart (1995). See the items in appendix 1 for more information.

Table 2: Overview of previous construct items and reliability

Construct (items) - Original reliability - Retrieved from
Concern for Information Privacy (second-order construct, four dimensions):
  Unauthorized secondary use - CR = .94
  Improper access            - CR = .95
  Errors                     - CR = .94
  Collection                 - CR = .68
  (C.-L. Hsu & Lin, 2016; Smith et al., 1996; Stewart & Segars, 2002)
Behavioral intention (BI), 3 items - ICR (AVE) = .94 (.82) - (Davis, 1989; Venkatesh et al., 2003) / TAM, UTAUT
Performance Expectancy (PE), 3 items - ICR (AVE) = .88 (.75) - (Venkatesh et al., 2003) / TAM
Effort Expectancy (EE), 4 items - ICR (AVE) = .91 (.74) - (Davis, 1989; Venkatesh et al., 2003) / TAM, UTAUT
Social Influence (SI), 3 items - ICR (AVE) = .82 (.71) - (Venkatesh et al., 2012) / UTAUT2
Price value (PV), 3 items - ICR (AVE) = .85 (.73) - (Venkatesh et al., 2012) / UTAUT2
Hedonic motivation (HM), 3 items - ICR (AVE) = .86 (.74) - (Venkatesh et al., 2012) / UTAUT2
Facilitating conditions (FC), 4 items - ICR (AVE) = .75 (.73) - (Davis, 1989) / TAM
Perceived Risk (PR), 4 items - α = .86 - (Featherman & Pavlou, 2003; Sitkin & Weingart, 1995)

3.2 Sampling strategy

The primary data for this study was gathered using Qualtrics, the recommended online survey software licensed by the University of Amsterdam. The entities that are studied, and the population that they should describe, are normal consumers, not belonging to a certain (age) group or geographical unit, or characterized by a certain social aspect. This makes the unit of analysis an individual consumer. The final survey consisted of 46 questions, including demographic data and optional comments. To reduce socially desirable answers, the respondents were shown a message upfront saying that there are no right or wrong answers and that data is treated anonymously. To enhance the completion rate, a boat trip and


glider flight were raffled amongst complete entries. To prevent fatigue and dishonesty, answering a question was not mandatory and respondents were able to drop out at any time. Since respondents were likely inexperienced with the technology, a picture of two Intelligent Personal Assistants was shown, together with a brief introduction to the technology's functionalities and its applications. After the survey was pre-tested among a group of trusted respondents, the somewhat difficult Likert rating of one of the items of Perceived Risk (insignificant risk - significant risk) was improved (low risk - high risk). Pre-test entries were removed from the final dataset. The survey was spread via Facebook, LinkedIn, family, friends and through a direct mail to around 400 students of a quantitative data workshop, making the dataset a convenience sample. Since the share of younger consumers (students) is likely to be high, caution should be taken in generalizing the results: over-representation of certain groups is a typical drawback of convenience sampling (Saunders et al., 2012). The data was gathered between the 9th and 24th of May 2017. The survey that was used in this research can be found in appendix 2.


4 Data analysis and findings

A total of 258 people entered the survey, which took on average 7.5 minutes to complete. People that abandoned the survey did so at the beginning, after answering only one or two questions. Because of this, it was decided to delete all incomplete responses together with empty responses (n=30). Three responses were deleted because the response seemed dishonest (e.g. the same answer 18 times in a row) in combination with an unlikely response time under two minutes. No counter-indicative items were included. After further screening, a total of 222 cases were valid for further analysis. SPSS 22 and SPSS AMOS were used to analyze the collected data.
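The screening steps above can be reproduced outside SPSS as well. A minimal sketch in Python/pandas, assuming a hypothetical export file qualtrics_export.csv with item columns q1..q46 and a duration_sec column (these names are mine, not from the thesis):

    import pandas as pd

    # Hypothetical Qualtrics export: one row per respondent.
    df = pd.read_csv("qualtrics_export.csv")
    items = [c for c in df.columns if c.startswith("q")]

    complete = df[items].notna().all(axis=1)         # drop incomplete/empty responses
    straight_lined = df[items].nunique(axis=1) == 1  # identical answer on every item
    too_fast = df["duration_sec"] < 120              # finished in under two minutes

    # Keep complete responses that are not both straight-lined and suspiciously fast.
    clean = df[complete & ~(straight_lined & too_fast)]
    print(len(df), "->", len(clean), "valid cases")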

4.1 Descriptive measures

As described in table 3, the majority of the respondents had the Dutch nationality (94.1%). There were 109 female and 103 male respondents, accounting for 49.1% and 46.4% of the total respectively, indicating a good gender balance. The age of the respondents ranged from 16 to 63, with most of the respondents in the age category 20-29 (77.9%).

Table 3: Descriptive sample measures

Variable (N=222)        n      %
Nationality
  Dutch               209   94.1
  Non-Dutch            10    4.5
  Not provided          3    1.4
Gender
  Female              109   49.1
  Male                103   46.4
  Not provided         10    4.5
Age category
  20 or under           4    1.8
  20-29               173   77.9
  30-39                16    7.2
  40-49                 2    0.9
  50 or over           25   11.3
  Not provided          2    0.9


4.2 Reliability and validity

After skipped questions were recoded into -99 to exclude them from the analysis, several assumption checks were performed to check for normality and outliers in the data before performing any parametric tests. Assumptions specific to multiple regression are discussed hereafter. It was concluded that skewness for all variables was well within the acceptable range of [-2, 2] (Field, 2013). The most extreme value was found for the latent variable Concern for Secondary Use (-1.56), which is part of the second order construct Concern For Information Privacy. The skewness for this (summated) second order construct was -0.52. Looking at Q-Q plots, most of the data lay close to the indicator line of a normal distribution. Overall, around 5 data points could be considered outliers. It was decided not to remove these points because of the nature of the Likert scale: the answers to other questions seemed honest, and it did not seem right to delete data because a person ended 'lower' or 'higher' on a scale with a small 1-7 spectrum. According to the central limit theorem (Field, 2013), which implies that scale means approach a normal distribution as sample size increases, the decent sample size (N=222) also indicates that the scale means are approximately normally distributed. While this is not hard proof, it does substantiate the other indicators, which look in order.
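As an illustration of this screening step, a minimal sketch is given below; the CSV file and column names are assumptions, since the actual checks were run in SPSS.

    # Minimal sketch of the skewness screening described above (hypothetical
    # file and column names; illustrative equivalent of the SPSS output).
    import pandas as pd
    from scipy.stats import skew

    df = pd.read_csv("survey_responses.csv")   # assumed file with the scale means
    scales = ["CFIP", "PE", "EE", "SI", "FC", "HM", "PV", "BI", "PR"]

    for col in scales:
        s = skew(df[col].dropna())             # sample skewness per scale mean
        flag = "" if -2 <= s <= 2 else "  <-- outside [-2, 2]"
        print(f"{col}: skewness = {s:.2f}{flag}")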

In contrast to the original UTAUT2 constructs, whose factor structure is grounded in different theories, the Concern For Information Privacy instrument is a construct that has not often been used in acceptance studies. Former studies showed that the instrument can be treated as a second order construct (C.-L. Hsu & Lin, 2016; Stewart & Segars, 2002). However, Stewart & Segars (2002) also mention that the underlying constructs of Concern For Information Privacy and their applicability should be further investigated in light of emerging technologies and different research contexts. Ongoing trends like the Internet of Things and the increased awareness of consumers of how businesses benefit from and use their personal data make it difficult to fully understand what factors influence


consumer attitudes towards information privacy. Since this study uses a multiple regression to answer the hypotheses, and the model is not tested as a whole as one would with Structural Equation Modelling (SEM), a separate Confirmatory Factor Analysis (CFA) was executed upfront using SPSS AMOS to verify the structure and to test whether Concern for Secondary Use, Improper Access, Errors and Collection actually are the underlying factors in the context of this study, as theory would suggest. Results are shown in table 4.

Table 4: Results from CFA with factor loadings, composite reliability and average variance extracted

Relation        Factor loading    CR     AVE
CCOL ← CFIP          .16          .83    .55
CSU  ← CFIP          .75          .70    .40
CERR ← CFIP          .57          .84    .57
CIA  ← CFIP          .74          .79    .56

In order to decide whether the outcome of the CFA model can be used, the so-called Goodness Of Fit (GOF) was tested. The RMSEA of .08 implies 'reasonable fit' (Bentler & Bonett, 1980); preferably, this number should be <.05 to be considered a perfect fit (Hair, 2010). The CFI fit index, which preferably exceeds .95 for good models, was smaller (.90). All of the Composite Reliability (CR) scores exceeded the .70 threshold, indicating that the summated scales are reliable (Fornell & Larcker, 1981). The Average Variance Extracted (AVE) exceeds the threshold of .50 for almost all variables, indicating sufficient convergent validity. The common CFIP factor only seemed to explain .40 of Concern for Secondary Use, which could be due to a lower loading of two of the four items within this factor (see figure 4). The low factor loading of Concern for Collection (.16) is also something that stands out from the analysis. It indicates that this factor has a weak effect on the CFIP construct, and that it has less in common with the other three factors explaining CFIP.
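For reference, the CR and AVE values in table 4 follow the standard Fornell & Larcker (1981) formulas; a minimal sketch is given below, with placeholder loadings, since the item-level AMOS output is not reproduced here.

    # Composite Reliability and Average Variance Extracted computed from
    # standardized factor loadings (Fornell & Larcker, 1981). The loadings
    # below are placeholders; the reported values come from the AMOS output.
    import numpy as np

    def cr_ave(loadings):
        lam = np.asarray(loadings)
        delta = 1 - lam**2                       # item error variances
        cr = lam.sum()**2 / (lam.sum()**2 + delta.sum())
        ave = (lam**2).mean()
        return cr, ave

    cr, ave = cr_ave([0.78, 0.74, 0.71, 0.69])   # illustrative 4-item factor
    print(f"CR = {cr:.2f}, AVE = {ave:.2f}")     # compare with the .70 / .50 cut-offs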


Based on the CR of CSU and CCOL (.70, .83) and the fact that the model fit indices do not directly imply a perfect model fit, the structure of the CFIP construct was kept as in theory. This decision is also endorsed by an additional Principal Component Analysis, which indicated that extracting one factor was the best solution: the Eigen Value (EV) dropped below the threshold of 1 when two or more factors were extracted. While the total explained variance increased from 47% to 71% with two factors, a visual inspection of the scree plot clearly pointed to one factor. Substantial support can also be found in the correlation matrix (table 5) discussed hereafter. In line with the theoretical foundation, the underlying factors of the CFIP scale were summated into one construct for further analysis, with a reliable Cronbach's alpha (α = .79).
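The Kaiser criterion used here (retain components with EV > 1) can be illustrated with a short sketch; the file name and item data below are assumptions.

    # Kaiser criterion sketch: eigenvalues of the item correlation matrix
    # decide how many components to keep (EV > 1). 'items' is a hypothetical
    # DataFrame holding the CFIP item responses.
    import numpy as np
    import pandas as pd

    items = pd.read_csv("cfip_items.csv").dropna()
    R = np.corrcoef(items.values, rowvar=False)   # item correlation matrix
    eigvals = np.linalg.eigvalsh(R)[::-1]         # eigenvalues, high to low
    explained = eigvals / eigvals.sum()

    for i, (ev, ex) in enumerate(zip(eigvals, explained), start=1):
        verdict = "keep" if ev > 1 else "drop"
        print(f"component {i}: EV = {ev:.2f} ({ex:.0%} of variance) -> {verdict}")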

Figure 4: Outcome of the Confirmatory Factor Analysis on CFIP


4.3 Correlations

Table 5 shows the means, standard deviations (SD), reliability and bivariate correlations for the independent, dependent and control variables. Keeping the confounding variable problem and the direction of causality in mind, a quick glance at the matrix shows that there are interesting correlations. Statistically significant correlations (p<.05) with r ≥ .4 (moderate), or correlations relevant to the focus of this study, are discussed.
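For illustration, this screening rule can be expressed as a small sketch (assumed column names; the reported values come from SPSS):

    # Pairwise Pearson r with two-tailed p-values, mirroring the screening
    # rule above (report p < .05 and |r| >= .4; hypothetical column names).
    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv("survey_responses.csv")
    cols = ["CFIP", "PR", "PE", "HM", "PV", "SI", "BI"]

    for i, x in enumerate(cols):
        for y in cols[i + 1:]:
            pair = df[[x, y]].dropna()
            r, p = pearsonr(pair[x], pair[y])
            if p < .05 and abs(r) >= .4:
                print(f"{x} - {y}: r = {r:.2f}, p = {p:.3f}")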

Looking at the latent Concern For Information Privacy instrument (13), the results show strong correlations (r = [.64, .69], p<.01) with the four underlying variables of the construct, which is in line with the reliability (α = .79). It also shows a negative correlation with Performance Expectancy (r = -.19, p<.01), Social Influence (r = -.24, p<.01), Hedonic Motivation (r = -.24, p<.01), Price Value (r = -.19, p<.01) and Behavioral Intention (r = -.22, p<.01), which could be a first hint that CFIP plays a role in the adoption of IPA technology. CFIP is positively correlated with Perceived Risk (r = .32, p<.01), indicating that one's Perceived Risk increases as one's Concern For Information Privacy becomes stronger. The weak negative correlation of CFIP with Behavioral Intention (r = -.22, p<.01) and the moderate to strong correlation between Perceived Risk and Behavioral Intention (r = -.46, p<.01) hint that a significant mediating effect could be present rather than a direct effect. This effect will be further discussed in the next chapter. Perceived Risk also shows moderate negative correlations with Performance Expectancy (r = -.37, p<.01) and Price Value (r = -.39, p<.01), and a strong negative correlation with Hedonic Motivation (r = -.47, p<.01).

Furthermore, it can be concluded that Hedonic Motivation, a factor claimed to be relevant especially for consumer acceptance of technology (Venkatesh et al., 2012), is indeed strongly correlated with the Behavioral Intention to adopt IPA technology (r = .66, p<.01). While that study also claimed that Hedonic Motivation is an even stronger predictor of Behavioral Intention than Performance Expectancy, both constructs have the same strong positive correlation with Behavioral Intention (r = .66, p<.01), which could indicate that both the


fun aspect and the benefits of IPA technology play an important role in the adoption of the technology. The study of Venkatesh et al. (2012) also suggests that if the benefits of IPA technology outweigh the monetary costs, the Price Value of the technology is seen as positive and therefore positively influences Behavioral Intention. Looking at the correlation matrix, these findings are reflected in the strong positive correlation between Price Value and Behavioral Intention (r = .55, p<.01). The moderate positive relations with Performance Expectancy (which includes perceived benefits) (r = .43, p<.01) and Hedonic Motivation (r = .54, p<.01) also indicate that Price Value plays an important role in the intention to use IPA technology.


Table 5: Correlation matrix

Correlation matrix of all variables

              Mean (SD)     1.      2.      3.      4.      5.      6.      7.      8.      9.      10.     11.     12.     13.
1. CCOL(a)    5.01 (1.21)  (.80)
2. CSU(a)     6.26 (0.78)   .17*   (.65)
3. CIA(a)     6.32 (0.68)   .18**   .54**  (.83)
4. CER(a)     5.31 (0.98)   .07     .32**   .34**  (.82)
5. PE         4.55 (1.30)  -.35**  -.04     .01     .00    (.88)
6. EE         5.44 (0.95)  -.23**   .17*    .13    -.03     .22**  (.88)
7. SI         3.08 (1.37)  -.24**  -.12    -.21**  -.05     .38**  -.02    (.92)
8. FC         4.99 (0.99)  -.20**   .10     .13    -.07     .19**   .53**   .09    (.71)
9. HM         4.84 (1.33)  -.42**   .03    -.04    -.06     .61**   .31**   .36**   .27**  (.87)
10. PV        5.00 (0.97)  -.30**  -.05     .00    -.05     .43**   .15*    .34**   .22**   .54**  (.81)
11. BI        3.58 (1.51)  -.35**  -.07    -.05    -.02     .66**   .20**   .50**   .27**   .66**   .55**  (.92)
12. PR        3.83 (0.92)   .55**   .05     .06     .02    -.37**  -.24**  -.27**  -.27**  -.47**  -.39**  -.46**  (.71)
13. CFIP      5.72 (0.60)   .64**   .69**   .69**   .64**  -.19**  -.04    -.24**  -.06    -.24**  -.19**  -.22**   .32**  (.79)
14. Gen(b)    0.51 (0.03)   .12     .24**   .18*    .23**  -.03    -.14*   -.05    -.19**  -.10     .05    -.13     .09     .28**

Note: Item reliability (Cronbach's α) on diagonal. CCOL: Concern for Collection; CSU: Concern for Secondary Use; CIA: Concern for Improper Access; CER: Concern for Errors; PE: Performance Expectancy; EE: Effort Expectancy; SI: Social Influence; FC: Facilitating Conditions; HM: Hedonic Motivation; PV: Price Value; BI: Behavioral Intention; PR: Perceived Risk; CFIP: Concern For Information Privacy. (a) Part of latent variable. (b) Gender dummy (female).
*. Correlation is significant at the 0.05 level (2-tailed).
**. Correlation is significant at the 0.01 level (2-tailed).


4.4 Analysis

In the next section, the valid data and statistical procedures are further presented. The role of Perceived Risk is first investigated using SPSS PROCESS. Afterwards, a multiple regression is used in SPSS to better understand the relationship between the independent factors and Behavioral Intention. An alpha level of .05 is used for all statistical tests.

Before trying to explain how the independent variables influence Behavioral Intention and comparing the results with the previously drafted hypotheses, it is important to look at the assumptions of a multiple regression. As mentioned in the data collection section, all of the independent and dependent variables are measured on a quantitative scale (scale means), making them suitable for regression analysis. The correlations at the beginning of the analysis section already showed that there were no indicators (r > .8) (Field, 2013) of perfect multicollinearity. Multicollinearity is highly undesirable, since it would scramble the results of individual predictors and their effect on the outcome variable. In addition to this r > .8 'ball park' method, the Variance Inflation Factor (VIF), a collinearity diagnostic that should stay below a value of 10 (Myers, 2000), was calculated for all predictors. With the VIF ranging between 1.17 and 2.20 and tolerance factors all >.46, no reason for concern about multicollinearity was found. Another factor that could blur or produce false statements about the significance of relations is autocorrelation, which occurs when the residual terms of different observations correlate with each other. To test this assumption and exclude autocorrelation, the Durbin-Watson test was performed. This value should preferably be between 0 and 4, with a value of 2 meaning that the residuals do not correlate with each other (Field, 2013). The Durbin-Watson test gave a value of 1.94, showing that this assumption was also met.
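The same diagnostics can be reproduced outside SPSS; a minimal sketch using statsmodels, with assumed variable names, is given below.

    # Collinearity (VIF) and autocorrelation (Durbin-Watson) diagnostics,
    # an illustrative equivalent of the SPSS output described above.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor
    from statsmodels.stats.stattools import durbin_watson

    df = pd.read_csv("survey_responses.csv").dropna()   # assumed file/columns
    X = sm.add_constant(df[["CFIP", "PR", "PE", "EE", "SI", "FC", "HM", "PV"]])
    model = sm.OLS(df["BI"], X).fit()

    for i, name in enumerate(X.columns):
        if name != "const":                              # skip the intercept
            print(f"VIF {name}: {variance_inflation_factor(X.values, i):.2f}")

    print(f"Durbin-Watson: {durbin_watson(model.resid):.2f}")  # ~2 = no autocorrelation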

Since a multiple regression is run, it is of the utmost importance to build a model based on theory. While one is able to add all of the predictor variables at once in a SEM study, predictors should be carefully chosen when conducting a multiple regression.


'Overfitting' a regression model, a situation in which a large number of arbitrary predictor variables are added in the hope that they are relevant, is a common pitfall (Field, 2013). On this point, a strong theoretical framework and the use of constructs that have theoretically proven to be of importance is key. Accordingly, during the literature review, only constructs relevant in light of IPA technology and technology acceptance were taken into account, and these should all be tested in the regression.

Since this study has a specific interest in the role of Concern For Information Privacy in the Behavioral Intention to adopt IPA technology, and it is expected that this relation is partially mediated by Perceived Risk, the multiple regression was preceded by a test for mediation using SPSS PROCESS. The results are shown in table 6 and figure 5.

Table 6: Test for mediation using PROCESS 'model 4', showing direct and indirect effect sizes (N=221)

                              Consequent
                   PR (M)                        BI (Y)
Antecedent         Coeff.    SE       p          Coeff.    SE       p
CFIP (X)     a1     .487    .097   < .001   c1'   -.202    .158     .201
PR (M)              ---     ---     ---     b1    -.716    .103   < .001
constant     i1    1.041    .563     .065   i2    7.475    .866   < .001

                   R2 = .102                     R2 = .221
                   F = 24.77, p < .001           F = 30.90, p < .001

                        Effect    SE      p              LLCI     ULCI
Direct effect c1'       -.202    .158    .201 (> .05)    -.513     .109
Total effect  c1        -.551    .165   < .001           -.875    -.226

                        Effect    Boot SE    Boot LLCI    Boot ULCI
Indirect effect a1b1    -.349     .093       -.565        -.193


Figure 5: Effect of CFIP on BI with the direct and indirect unstandardized coefficients (b) via mediator Perceived Risk (a1 = .487***, b1 = -.716***). *. Coefficient is significant at the 0.05 level (2-tailed). **. Coefficient is significant at the 0.01 level (2-tailed). ***. Coefficient is significant at the 0.001 level (2-tailed).

As illustrated in table 6, the effect of CFIP on PR (a1) shows that for every 1 unit increase in CFIP, a person's Perceived Risk increases by .487 units (unstandardized coefficient). The positive relation indicates that if a person is higher in CFIP, that person's Perceived Risk is also estimated to be higher. This effect is statistically different from zero, with p < .001 and t = 4.977, within the 95% confidence interval of .294 to .680. This means that H2 is supported.

The negative effect b1 = -.716 indicates that two people who experience the same level of CFIP, but differ by one unit in Perceived Risk, are estimated to differ by -.716 in Behavioral Intention. b1 is negative, which indicates that a person higher in Perceived Risk is lower in Behavioral Intention. This effect is statistically different from zero, with t = -6.94, p < .001 and a 95% confidence interval between -.919 and -.513. This means that H3 is supported.

The negative indirect effect a1b1 = -.349 indicates that two persons who differ by one unit in CFIP are estimated to differ by approximately -.349 in Behavioral Intention, because the person higher in CFIP is also higher in Perceived Risk, which results in a decreased Behavioral Intention to use IPA technology.


With both of the constituent effects a1 and b1 being significantly different from zero, and a 95% bootstrap confidence interval entirely below zero (-.565 to -.193), it can be concluded that the indirect effect a1b1 is also significantly different from zero.
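A percentile-bootstrap estimate of the indirect effect, in the spirit of what PROCESS computes, can be sketched as follows; this is an illustrative re-implementation with assumed column names, not the PROCESS macro itself.

    # Bootstrap of the indirect effect a1*b1 in a single-mediator model
    # (CFIP -> PR -> BI). Assumed file and column names.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("survey_responses.csv")[["CFIP", "PR", "BI"]].dropna()
    rng = np.random.default_rng(0)
    boots = []

    for _ in range(5000):                          # number of bootstrap resamples
        idx = rng.integers(0, len(df), len(df))    # resample cases with replacement
        s = df.iloc[idx]
        a = sm.OLS(s["PR"], sm.add_constant(s["CFIP"])).fit().params["CFIP"]
        b = sm.OLS(s["BI"], sm.add_constant(s[["CFIP", "PR"]])).fit().params["PR"]
        boots.append(a * b)

    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"indirect effect a1*b1: 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")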

The direct effect of CFIP on Behavioral Intention, b = -.202, shows the difference in BI between two persons who experience the same amount of PR but differ in CFIP by one unit. However, it could not be proven that this direct effect is significantly different from zero (p = .201 > .05), indicating that a mediating effect better explains the variance in Behavioral Intention than a direct effect. This means sufficient support to accept H1 was not found, which will be further elaborated on in the discussion section.

Overall, the total effect of CFIP on BI is c1 = -.551. This indicates that if two consumers differ by one unit in CFIP, they are estimated to differ by -.551 in Behavioral Intention to adopt IPA technology. This negative relationship shows that a person with higher CFIP could be less likely to use IPA technology in the near future. This effect is statistically different from zero, with t = -3.345, p < .001 and a 95% confidence interval between -.875 and -.226. Based on this test, there is no partial mediation but full mediation.
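As a consistency check, the estimates in table 6 satisfy the standard single-mediator decomposition of the total effect into the direct and indirect paths:

    c_1 = c_1' + a_1 b_1 = -0.202 + (0.487 \times -0.716) = -0.202 - 0.349 \approx -0.551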

The R2 of the total-effect model is .221, meaning that it explains 22% of the variance in Behavioral Intention. This is still far off the explanatory power found in other studies using UTAUT2, where the framework often exceeds an R2 of .60. This indicates that there are other variables that better explain the variance in the outcome variable, for which a multiple regression is performed.

When conducting a multiple regression, the option is given to rely on SPSS to select the most important predictors based on mathematical criteria (e.g. stepwise, or forced entry). However, this would mean that some constructs are excluded from the model, while this is not a rational decision from a theoretical point of view. Deciding on how to model the


multiple regression should always be done with the aim of the study and the theory kept in mind (Field, 2013). Another risk of letting SPSS take the methodological decisions is underfitting, the scenario of leaving out predictors that are still important, resulting in a model with less predictive performance. Therefore a hierarchical approach was chosen. The multiple regression to predict Behavioral Intention consisted of four blocks resulting in four models (the first block controlled for the variance of Age and Gender). The second model contained the variables CFIP and Perceived Risk, which was already described on the previous page since it contains the mediating effect. In the third model, the independent variables Hedonic Motivation and Price Value were added, based on the discussed findings that these are important predictors in a consumer acceptance context. The fourth and final model contained all other independent variables and control variables. The results are shown in table 7.

Table 7: Multiple regression on dependent variable Behavioral Intention

Model Summary(e)

Model    R       R Square   Adjusted R Square   Std. Error of the Estimate   R Square Change   F Change   df1   df2   Sig. F Change   Durbin-Watson
1       .17(a)    .024          .015                 1.48829                     .024            2.584      2    207      .078
2       .47(b)    .220          .205                 1.33718                     .196           25.714      2    205      .000
3       .71(c)    .505          .491                 1.07021                     .285           58.517      2    203      .000
4       .79(d)    .618          .599                  .94943                     .113           14.733      4    199      .000           1.936

a. Predictors: (Constant), Gender dummy, How old are you?
b. Predictors: (Constant), Gender dummy, How old are you?, Mean PR, CFIP_tot
c. Predictors: (Constant), Gender dummy, How old are you?, Mean PR, CFIP_tot, Mean PV, Mean HM
d. Predictors: (Constant), Gender dummy, How old are you?, Mean PR, CFIP_tot, Mean PV, Mean HM, Mean FC, Mean SI, Mean EE, Mean PE
e. Dependent Variable: Mean BI

As indicated by the F-change, a significant regression equation was found. By adding the original UTAUT constructs, model 4 forms a significant improvement over model 3 (Fchange(4,199) = 14.73, p < .01), with R2change = .11, R2 = .62 and adjusted R2 = .60,


meaning that 62% of the variance in Behavioral Intention could be explained using this model. The small difference between R2 and adjusted R2 (.02) shows that the cross-validity of this model can be described as very good: if the model were derived from the population instead of this sample, it would account for approximately 2% less variance.
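The hierarchical-entry procedure behind table 7 can be sketched as follows; the file and column names (including Gender as a 0/1 dummy) are assumptions, and SPSS produced the reported values.

    # Hierarchical regression: fit the four blocks in order and report the
    # R-squared and F-change per model, mirroring Table 7.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("survey_responses.csv").dropna()
    blocks = [["Age", "Gender"],          # block 1: controls
              ["PR", "CFIP"],             # block 2
              ["PV", "HM"],               # block 3
              ["FC", "SI", "EE", "PE"]]   # block 4

    predictors, r2_prev = [], 0.0
    for i, block in enumerate(blocks, start=1):
        predictors += block
        fit = sm.OLS(df["BI"], sm.add_constant(df[predictors])).fit()
        df1 = len(block)                  # predictors added in this block
        df2 = fit.df_resid                # residual degrees of freedom
        f_change = ((fit.rsquared - r2_prev) / df1) / ((1 - fit.rsquared) / df2)
        print(f"model {i}: R2 = {fit.rsquared:.3f}, "
              f"F-change({df1},{int(df2)}) = {f_change:.2f}")
        r2_prev = fit.rsquared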

To find out to what extent each variable contributes to this result, the coefficients in table 8 are consulted. The standardized β coefficients form the best indicator of the 'importance' of each of the different predictors, as they are easier to compare and are used across studies for indicating effect size. As expected after the preliminary test for mediation, Concern For Information Privacy has no significant direct influence on the Behavioral Intention to use IPA technology (β = .03, p > .05), indicating that sufficient support to accept H1 was not found. Perceived Risk (β = -.11, p < .05), when combined with the other constructs in the model, proved to have a significant negative influence, with a decrease of .19 in Behavioral Intention for every unit increase in Perceived Risk. This significant negative effect means support for H3 was found. A person's Performance Expectancy of IPA technology proved to have the highest influence on Behavioral Intention (β = .32, p < .001), with an expected increase of .37 in Behavioral Intention for every unit increase in Performance Expectancy. It can be said that if the perceived benefits of IPA technology increase, and a person better understands how the technology helps in performing certain activities, this will positively influence their Behavioral Intention. The significant positive relation also means hypothesis H4 is supported.
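The standardized and unstandardized coefficients in table 8 are linked through the sample standard deviations in table 5; for Performance Expectancy, for example:

    \beta = B \cdot \frac{s_x}{s_y}, \qquad \beta_{PE} = 0.365 \times \frac{1.30}{1.51} \approx .31

which matches the reported β = .315 up to rounding.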


Table 8: (Un)standardized coefficients (B, β), SE, t-value, p-value, tolerance and VIF statistics for models 1-4

Coefficients(a)

Model 1         B        Std. Error    β        t         Sig.    Tolerance    VIF
(Constant)     4.161      .308                  13.515    .000
Age            -.013      .010        -.094     -1.373    .171     .999       1.001
Gender         -.365      .206        -.122     -1.776    .077     .999       1.001

Model 2         B        Std. Error    β        t         Sig.    Tolerance    VIF
(Constant)     7.331      .908                   8.072    .000
Age            -.002      .009        -.011      -.174    .862     .929       1.076
Gender         -.218      .192        -.073     -1.137    .257     .924       1.082
PR             -.719      .107        -.433     -6.691    .000     .907       1.102
CFIP           -.141      .170        -.057      -.831    .407     .816       1.226

Model 3         B        Std. Error    β        t         Sig.    Tolerance    VIF
(Constant)      .156      .999                    .156    .876
Age             .007      .007         .047       .907    .366     .898       1.113
Gender         -.250      .156        -.084     -1.606    .110     .900       1.111
PR             -.249      .096        -.150     -2.591    .010     .723       1.383
CFIP           -.012      .137        -.005      -.090    .928     .806       1.240
HM              .520      .074         .450      7.013    .000     .592       1.690
PV              .378      .094         .247      4.021    .000     .648       1.542

Model 4         B        Std. Error    β        t         Sig.    Tolerance    VIF
(Constant)    -1.346      .984                  -1.369    .173
Age             .004      .007         .030       .623    .534     .838       1.193
Gender         -.262      .142        -.087     -1.847    .066     .856       1.168
PR             -.189      .087        -.114     -2.166    .032     .691       1.448
CFIP            .080      .124         .032       .646    .519     .766       1.306
HM              .279      .075         .242      3.724    .000     .455       2.199
PV              .265      .085         .173      3.110    .002     .621       1.609
FC              .073      .080         .049       .906    .366     .664       1.506
SI              .218      .054         .201      4.034    .000     .774       1.293
EE             -.028      .086        -.018      -.323    .747     .633       1.580
PE              .365      .066         .315      5.539    .000     .592       1.690

a. Dependent Variable: Mean BI


It was further found that Effort Expectancy did not significantly predict Behavioral Intention (β = -.03, p > .05). This non-significant relation means that insufficient support was found for hypothesis H5. Social Influence (β = .20, p < .001) proved to have a significant positive influence on Behavioral Intention. In other words, people who are one unit higher in Social Influence (more receptive to the opinion of others about whether or not they should use IPA technology) are expected to be .22 higher in Behavioral Intention. This significant positive effect means that H6 is supported. Facilitating Conditions (β = .05, p > .05) had no significant influence on Behavioral Intention, meaning that insufficient support was found for H7. Finally, the two predictors specific to consumer adoption, Hedonic Motivation (β = .24, p < .001) and Price Value (β = .17, p < .01), both proved to have a significant positive influence on Behavioral Intention. A person one unit higher in Hedonic Motivation is expected to be .28 higher in Behavioral Intention, or .27 higher if this were the case for Price Value. This means that support was found for hypotheses H8 and H9.


5 Discussion

In order to answer the research question ''What factors and consumer attitudes influence the acceptance of Intelligent Personal Assistant technology in smart homes?'', nine hypotheses covering eight different variables were drafted based on prior literature. The final model is shown in figure 6. Substantial proof was found to accept six out of the total of nine hypotheses. In this section the results will be further discussed and placed in perspective with previous findings.

Figure 6: Final model with factor loadings (CFIP) and standardized beta coefficients (CFIP → PR: .32***; PR → BI: -.11*). *. Significant at the .05 level (2-tailed). ***. Significant at the .001 level (2-tailed). Concern for 1) Collection (COL), 2) Improper Access (IA), 3) Secondary Use (SU), 4) Errors (ERR); Concern For Information Privacy (CFIP); Performance Expectancy (PE); Effort Expectancy (EE); Social Influence (SI); Facilitating Conditions (FC); Hedonic Motivation (HM); Price Value (PV); Perceived Risk (PR); Behavioral Intention (BI)

Hypothesis         Support
H1: CFIP - BI      No
H2: CFIP + PR      Yes
H3: PR  - BI       Yes
H4: PE  + BI       Yes
H5: EE  + BI       No
H6: SI  + BI       Yes
H7: FC  + BI       No
H8: HM  + BI       Yes
H9: PV  + BI       Yes

5.1 Discussion on hypotheses

It is safe to say that the average respondent agreed that information privacy is a salient topic (x̄ = 5.72), which was also shown by the (optional) commentary on the online survey, which often contained comments linked to this aspect. Finding no evidence for the hypothesized direct


negative influence of Concern For Information Privacy (H1) is notable. Several studies empirically verified that CFIP has a negative effect on Behavioral Intention, albeit in different contexts (Angst & Agarwal, 2009; Dinev & Hart, 2005; Malhotra et al., 2004). Most recently, Hsu & Lin (2016) found a direct negative influence of CFIP on continued intention to use, but this aspect could not be measured here due to a lack of experienced Dutch users. Their finding implies that CFIP could affect whether people will re-use IPA technology. In this study, Concern For Information Privacy only influenced the intention to use IPA technology via Perceived Risk (full mediation via H2 and H3). The findings substantiate results found in previous acceptance studies closely related to the domain of IPA (e.g. IoT, smart products, ubiquitous commerce), which showed that CFIP only influences the intention to use new technology via increased risk perception (Chen et al., 2011; Van Slyke et al., 2006).

Performance Expectancy proved to be the strongest predictor of someone's intention to use IPA technology, supporting H4 (β = .32, p < .001). With a mean of 4.6, the sample was not very convinced about the benefits of IPA technology at this point. A simple explanation could be the lack of experience with the technology: IPA devices are not yet available in Dutch, and Dutch service providers have not yet announced integrations. Hsu & Lin (2016) found that offering more complementary services and applications in the IoT domain increased a user's perceived benefits of IoT technology. More integrations and functionalities for IPA, in combination with the relatively strong beta (.32) of Performance Expectancy, show that emphasizing the benefits of IPA technology to consumers could ramp up the acceptance rate.

The insignificant influence of Effort Expectancy (H5) could also be due to a lack of experience. The items of this construct are based on perceived ease of use (TAM2) and perceived complexity (Venkatesh et al., 2003). The new way of human-computer interaction using one's voice is something most people will have no experience with, apart from their


mobile 'smart' assistant, which still requires unnatural commands. More consistent answers, and perhaps a significant influence, are likely to be found as the technology becomes more familiar.

This study showed that Social Influence also has a significant positive effect on the intention to use IPA technology (β = .20, p < .001). The support for H6 substantiates earlier findings of Risselada and colleagues (2013), who claimed that Social Influence has a positive effect on the acceptance of high-technology products. It suggests that one considers the opinion of peers and friends about IPA technology, or their opinion about oneself using the technology, and that this opinion influences a person's Behavioral Intention.

The extent to which Facilitating Conditions such as resources, knowledge or compatibility with other devices were present did not prove to be a significant predictor. There is a good possibility that this is due to the sample that was used. In the original UTAUT2 study, Venkatesh et al. (2012) found that age (and gender) functioned as a moderator: a more pronounced effect of Facilitating Conditions was found for older women. Younger people are more experienced and adapt more easily to new technology, which will likely decrease the need for Facilitating Conditions in the acceptance of new technology. The majority of the sample used for this study consisted of participants between the ages of 20-29 (80%), which is likely the reason why no support was found for H7.

Hedonic Motivation (β = .24, p < .001) and Price Value (β = .17, p < .01) proved to be significant predictors of Behavioral Intention, but did not exceed the predictive power of Performance Expectancy as suggested by Venkatesh et al. (2012). This study substantiates the importance of the two constructs, which indeed appear to play an important role in the context of consumer technology acceptance. The results suggest that the new way of human-computer interaction is perceived as enjoyable and fun, which positively influences the Behavioral Intention to use IPA technology. The fact that people perceive IPA technology as fun and


enjoyable could also improve prolonged use of the system, beyond only benefitting from the utilitarian functionalities, as suggested by Van Der Heijden (2004). A price range was mentioned in the survey, stating the cheapest and most expensive IPA devices currently available (€50-150). The significant positive influence of Price Value indicates that the benefits already outweigh the monetary costs of IPA devices at this moment. This suggests that as soon as more integrations become available and the perceived benefits for consumers increase (strong correlation of r = .43, p < .01), the influence of Price Value on Behavioral Intention could become even stronger.

5.2 Discussion on CFIP instrument

The factor loadings of the Concern For Information Privacy instrument partially substantiate the findings of Hsu & Lin (2016), who suggested that Concern for Improper Access (.75) appears to be the foundation of the CFIP instrument, along with Concern for Secondary Use. No support for this last statement was found, since the CFA showed that Concern for Errors was a better predictor of Concern For Information Privacy (.74) compared to Concern for Secondary Use (.58). While Concern for Collection had the lowest loading in their study as well (.58), it came out much lower in this study (.16). The fact that people scored substantially differently on collection could be due to several reasons. First, it should be noted that since 1996 people have become more familiar with the phenomenon of data collection by companies, and with how this data can or cannot be used to their advantage. In the original article, Smith et al. (1996) already described privacy as an evolving concept as data collection, storage and retrieval became more pervasive (Stewart & Segars, 2002). Data is given off by almost everyone, every day. It could be that the personalization-privacy paradox is shifting, and that consumers become less reluctant towards data collection over time in return for better services. In conclusion, collection does not necessarily have to be bad, especially if a company or technology shows how one can benefit from it, which is clearly


the case when interacting with IPA technology. The other predictors of Concern For Information Privacy all contain a harmful act through which privacy becomes an issue. This could distinguish them from collection.

Another reason for the deviating result for collection could be the nature of IPA technology. The fact that IPA devices are 'always on' and rely on speech interaction might resemble, to some, a device that is 'always listening'. Collection could be the salient and most visible factor to consumers, influencing the Perceived Risk of IPA technology. This theory is also substantiated after closely examining the correlation matrix, which shows that Concern for Collection has a significant negative correlation with all other predictors, in contrast to the other constructs of Concern For Information Privacy. Clearly, the CFIP instrument did not perform as coherently as a second order construct as it did in other studies. In the context of IPA technology, privacy concerns seem to be more complex; especially the collection dimension seems to have an important relation with Perceived Risk in particular.

5.3 Implications for practice

Some companies, like Google and Amazon, have already entered the market with their IPA devices. More recently, Apple followed by introducing its HomePod smart speaker. Findings of this study could also be relevant for managers and policymakers. The influence of Performance Expectancy and the relatively low mean of the intention to use IPA technology at this point emphasize the need for more integrations with other (Dutch) service providers to increase the perceived benefits. The significant Social Influence on users argues for guarding the positive picture of and sentiment about IPA technology. Concern For Information Privacy only proved to be of influence via Perceived Risk. Focusing on actions diminishing the Perceived Risks of IPA technology, for example through regulations, transparent terms and conditions, or equipping smart speakers with 'mute' buttons (some already have this feature), could decrease the Perceived Risk, safeguarding the positive sentiment (and Social


Influence) and improving the Behavioral Intention to use IPA technology. The positive influence of Hedonic Motivation suggests that the pleasure and fun oriented aspects of IPA devices should be leveraged and underlined, since these encourage prolonged usage. In December 2017, Apple will introduce its speaker for $349, which could be a risky strategy. Price Value proved to have a (relatively low but) significant influence on whether or not people will use IPA technology. This is substantiated by a sample of 2200 polled (US) adults interested in Apple's HomePod, for whom price appeared to be a 'very important' factor (57%) (AppleInsider, 2017). A lower introduction price could therefore be a good strategy, while trying to generate margin elsewhere, for example by charging service providers to use the company's IPA platform or by integrating other services of the company (Apple Music, Amazon shopping).

5.4 Contribution to theory

First of all, this study adds to the literature on technology acceptance by applying the UTAUT2 framework in the context of IPA technology. To the best of the researcher's knowledge, no prior research on IPA technology, more specifically smart speakers, existed. This is the first research that looks into how the different predictors of UTAUT2 can explain a person's Behavioral Intention to use IPA technology, in combination with the added variables Perceived Risk and Concern For Information Privacy. Second, it substantiates previous findings claiming that Hedonic Motivation and Price Value are important predictors in consumer acceptance studies. Third, it shows that the CFIP instrument can be measured as a second order construct, but also indicates that the role of Concern for Collection is more complex in the context of IPA technology and needs further investigation, for example to understand its interrelationship with the different facets of Perceived Risk. Finally, the weak but numerous (significant) negative correlations of


Perceived Risk with the original UTAUT predictors also fuel the thought that the influence of risk perception deserves deeper investigation.

5.5 Limitations and directions for further research

One of the main limitations of this study is its limited generalizability, a typical drawback of a convenience sample. The sample consisted of mostly Dutch consumers, who are not as familiar with IPA technology as, for example, US citizens: Dutch-operated smart speakers (e.g. Home, Alexa) are not (yet) available in stores. This lack of technology experience likely influences the predictor variables. Looking at the dispersion of the sample, one has to conclude that the majority consists of younger respondents, most likely students, between the ages of 20-29 (78%). While there was an equal distribution of men and women, one has to be careful when generalizing the results to other age groups, knowing that age functions as a moderator in the UTAUT framework. If this study were repeated, the researcher should account for age, gender and technology experience as possible moderators.

One could claim that not using Structural Equation Modelling (e.g. AMOS or MPlus) for the final model is also a limitation. More precise results could be achieved if the model were tested as a whole, including the mediation, in one analysis. Since this study only contained one new latent variable, it was decided, due to time constraints, to only use SEM for the Confirmatory Factor Analysis and to execute a multiple regression in SPSS. A suggestion would be to repeat this study using SEM techniques once people are more familiar and experienced with IPA technology. In addition to Behavioral Intention, actual use behavior can also be included at that time.

Another limitation of this study is the broad definition of overall Perceived Risk. Due to the number of variables included, only the construct for overall risk perception was included. As suggested by Featherman & Pavlou (2003), risk comprises six different


factors. This study shows that Perceived Risk influences Behavioral Intention, which can be considered a first direction, but it is unknown which risk factor (e.g. financial risk, performance risk, social risk) is most salient in the IPA context. The interaction of Concern for Collection with Perceived Risk, and the low correlations of the other CFIP factors with Perceived Risk, imply that overall risk perception is too broad a denominator and that more study is needed to unravel the influence of different types of privacy concerns on the different facets of risk perception.


6 Conclusion

The purpose of this study was to empirically verify which factors and consumer attitudes influence the acceptance of Intelligent Personal Assistant technology in smart homes. A survey of 222 mainly Dutch consumers was used for the analysis. The model of this study was able to explain 62% of the variance in Behavioral Intention. Performance Expectancy (β = .32) proved to be the best predictor of one's Behavioral Intention to use IPA technology, followed by Hedonic Motivation (β = .24), Social Influence (β = .20) and Price Value (β = .17). No empirical evidence was found for a significant influence of Facilitating Conditions and Effort Expectancy. Important for practitioners are the findings that the perceived benefits of IPA technology should be further improved by creating more functionalities and integrations, while safeguarding the positive image of the technology to improve acceptance. For academia, it is interesting to see that the UTAUT framework was able to explain a decent amount of variance, and that the CFIP instrument should be used with caution. Concern For Information Privacy proved to have only an indirect negative influence on Behavioral Intention, increasing a person's Perceived Risk (β = .32), which in turn had a weak negative effect on a person's Behavioral Intention (β = -.11). Compared to the other three underlying constructs, Concern for Collection did not prove to be a good predictor of CFIP in the context of this study, and the interrelation of this CFIP factor with Perceived Risk and other UTAUT factors shows that Concern for Collection needs further investigation.


7 Bibliography

Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. Upper Saddle River, NJ: Prentice-Hall.

Ajzen, I. (1985). From Intentions to Actions: A Theory of Planned Behavior. In P. D. J. Kuhl

& D. J. Beckmann (Eds.), Action Control (pp. 11–39). Springer Berlin Heidelberg.

Retrieved from http://link.springer.com/chapter/10.1007/978-3-642-69746-3_2

Allen, M. (2009). Tim O’Reilly and Web 2.0: The economics of memetic liberty and control.

Retrieved from http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-

full&object_id=130976&local_base=gen01-era02

Liptak, A. (2017, January 7). Amazon's Alexa started ordering people dollhouses after hearing its name on TV. The Verge. Retrieved April 15, 2017, from https://www.theverge.com/2017/1/7/14200210/amazon-alexa-tech-news-anchor-order-dollhouse

Angst, C. M., & Agarwal, R. (2009). Adoption of electronic health records in the presence of

privacy concerns: The elaboration likelihood modeland individual persuasion. MIS

Quarterly: Management Information Systems, 33(2), 339–370.

Belanger, F., & Crossler, R. (2011). Privacy in the Digital Age: A Review of Information

Privacy Research in Information Systems. Management Information Systems

Quarterly, 35(4), 1017–1041.

Bentler, P. M., & Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis

of covariance structures. Psychological Bulletin, 88(3), 588–606.

https://doi.org/10.1037/0033-2909.88.3.588

Brown, S. A., & Venkatesh, V. (2005). Model of Adoption of Technology in Households: A

Baseline Model Test and Extension Incorporating Household Life Cycle. MIS

Quarterly, 29(3), 399–426.


Burton-Jones, A., & Hubona, G. S. (2006). The mediation of external variables in the

technology acceptance model. Information & Management, 43(6), 706–717.

https://doi.org/10.1016/j.im.2006.03.007

Chen, Y., Wang, Q., & Xie, J. (2011). Online Social Interactions: A Natural Experiment on

Word of Mouth Versus Observational Learning. Journal of Marketing Research,

48(2), 238–254. https://doi.org/10.1509/jmkr.48.2.238

Chuttur, M. (2009). Overview of the Technology Acceptance Model: Origins, Developments

and Future Directions. All Sprouts Content. Retrieved from

http://aisel.aisnet.org/sprouts_all/290

Czibula, G., Guran, A. M., Czibula, I. G., & Cojocar, G. S. (2009). IPA - An intelligent

personal assistant agent for task performance support. In 2009 IEEE 5th International

Conference on Intelligent Computer Communication and Processing (pp. 31–34).

https://doi.org/10.1109/ICCP.2009.5284791

Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of

Information Technology. MIS Quarterly, 13(3), 319–340.

https://doi.org/10.2307/249008

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm Aversion: People

Erroneously Avoid Algorithms After Seeing Them Err. Journal of Experimental

Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033

Dinev, T., & Hart, P. (2005). Internet Privacy Concerns and Social Awareness as

Determinants of Intention to Transact. International Journal of Electronic Commerce,

10(2), 7–29. https://doi.org/10.2753/JEC1086-4415100201

Dodds, W. B., Monroe, K. B., & Grewal, D. (1991). Effects of Price, Brand, and Store

Information on Buyers’ Product Evaluations. Journal of Marketing Research, 28(3),

307–319. https://doi.org/10.2307/3172866


Featherman, M. S., & Pavlou, P. A. (2003). Predicting e-services adoption: a perceived risk

facets perspective. International Journal of Human - Computer Studies, 59(4), 451–

474. https://doi.org/10.1016/S1071-5819(03)00111-3

Field, A. P. (2013). Discovering statistics using IBM SPSS statistics: and sex and drugs and

rock “n” roll (4th edition). Los Angeles: Sage.

Fodor, M., & Brem, A. (2015). Do privacy concerns matter for Millennials? Results from an

empirical analysis of Location-Based Services adoption in Germany. Computers in

Human Behavior, 53, 344–353. https://doi.org/10.1016/j.chb.2015.06.048

Fornell, C., & Larcker, D. (1981). Evaluating Structural Equation Models with Unobservable

Variables and Measurement Error. Journal of Marketing Research, 18(1), 39.

https://doi.org/10.2307/3151312

Garrido, P., Martinez, F., & Guetl, C. (2010). Adding Semantic Web Knowledge to

Intelligent Personal Assistant Agents. School of Information Systems. Retrieved from

http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-

full&object_id=155404&local_base=gen01-era02

Gartner Says the Internet of Things Installed Base Will Grow to 26 Billion Units By 2020.

(n.d.). Retrieved January 25, 2017, from

http://www.gartner.com/newsroom/id/2636073

Hair, J. F. (Ed.). (2010). Multivariate data analysis: a global perspective (7. ed., global ed).

Upper Saddle River, NJ: Pearson.

Holbrook, M. B., & Hirschman, E. C. (1982). The Experiential Aspects of Consumption:

Consumer Fantasies, Feelings, and Fun. Journal of Consumer Research, 9(2), 132–

140.

Hsu, C.-L., & Lin, J. C.-C. (2016). An empirical examination of consumer adoption of

Internet of Things services: Network externalities and concern for information privacy


perspectives. Computers in Human Behavior, 62, 516–527.

https://doi.org/10.1016/j.chb.2016.04.023

Hsu, C.-W., & Yeh, C.-C. (2016). Understanding the factors affecting the adoption of the

Internet of Things. Technology Analysis and Strategic Management, 1–14.

https://doi.org/10.1080/09537325.2016.1269160

Ives, B. (2016). Enhancing Customer Service through the Internet of Things and Digital Data Streams. MIS Quarterly Executive, 20.

Lee, M.-C. (2009). Factors influencing the adoption of internet banking: An integration of

TAM and TPB with perceived risk and perceived benefit. Electronic Commerce

Research and Applications, 8(3), 130–141.

https://doi.org/10.1016/j.elerap.2008.11.006

Lee, W.-J., & Chong, S.-S. (2016). A dual-factor model to explain the future adoption of

smart internet of things service and its implications. International Journal of Software

Engineering and Its Applications, 10(11), 441–450.

https://doi.org/10.14257/ijseia.2016.10.11.35

Limayem, M., Hirt, S. G., & Cheung, C. (2008). How Habit Limits the Predictive Power of

Intention: The Case of Information Systems Continuance. Management Information

Systems Quarterly, 31(4), 705–737.

Lingling Gao, & Xuesong Bai. (2014). A unified perspective on the factors influencing

consumer acceptance of internet of things technology. Asia Pacific Journal of

Marketing and Logistics, 26(2), 211–231. https://doi.org/10.1108/APJML-06-2013-

0061

Malhotra, N. K., Kim, S. S., & Agarwal, J. (2004). Internet Users’ Information Privacy

Concerns (IUIPC): The Construct, the Scale, and a Causal Model. Information

Systems Research, 15(4), 336–355. https://doi.org/10.1287/isre.1040.0032


Manyika, J. (2015, June). The Internet of Things: Mapping the value beyond the hype. McKinsey Global Institute.

Bergen, M., & Kharif, O. (2017, January 5). At CES, New Digital Assistants Restart Smart Home Race. Bloomberg. Retrieved March 17, 2017, from https://www.bloomberg.com/news/articles/2017-01-05/at-ces-new-digital-assistants-restart-smart-home-race

Mick, D. G., & Fournier, S. (1998). Paradoxes of Technology: Consumer Cognizance,

Emotions, and Coping Strategies. Journal of Consumer Research, 25(2), 123–143.

https://doi.org/10.1086/209531

Miorandi, D., Sicari, S., De Pellegrini, F., & Chlamtac, I. (2012). Internet of things: Vision,

applications and research challenges. Ad Hoc Networks, 10(7), 1497–1516.

https://doi.org/10.1016/j.adhoc.2012.02.016

Moorthy, A. E., & Vu, K.-P. L. (2015). Privacy Concerns for Use of Voice Activated

Personal Assistant in the Public Space. International Journal of Human–Computer

Interaction, 31(4), 307–335. https://doi.org/10.1080/10447318.2014.986642

Myers, R. H. (2000). Classical and modern regression with applications. Australia; Pacific

Grove, CA: Duxbury/Thomson Learning.

Pavlou, P. (2003). Consumer Acceptance of Electronic Commerce: Integrating Trust and

Risk with the Technology Acceptance Model. International Journal of Electronic

Commerce, 7(3), 101–134.

Plachkinova, M., Vo, A., & Alluhaidan, A. (2016). Emerging Trends in Smart Home

Security, Privacy, and Digital Forensics. AMCIS 2016 Proceedings. Retrieved from

http://aisel.aisnet.org/amcis2016/ITProj/Presentations/23


Risselada, H., Verhoef, P. C., & Bijmolt, T. H. A. (2013). Dynamic Effects of Social

Influence and Direct Marketing on the Adoption of High-Technology Products.

Journal of Marketing, 78(2), 52–68. https://doi.org/10.1509/jm.11.0592

Rogers, E. M. (1983). Diffusion of innovations (3rd ed.). New York; London: Free Press; Collier Macmillan.

Santos, J., Rodrigues, J. J. P. C., Casal, J., Saleem, K., & Denisov, V. (2016). Intelligent

Personal Assistants Based on Internet of Things Approaches. IEEE Systems Journal,

PP(99), 1–10. https://doi.org/10.1109/JSYST.2016.2555292

Saunders, M. N. K., Lewis, P., & Thornhill, A. (2012). Research methods for business students (6th ed.). Harlow, England; New York: Pearson.

Sheng, H., Nah, F. F.-H., & Siau, K. (2008). An Experimental Study on Ubiquitous

commerce Adoption: Impact of Personalization and Privacy Concerns. Journal of the

Association for Information Systems, 9(6). Retrieved from

http://aisel.aisnet.org/jais/vol9/iss6/15

Sitkin, S. B., & Weingart, L. R. (1995). Determinants of risky decision-making behavior: a

test of the mediating role of risk perceptions and propensity. (includes appendix).

Academy of Management Journal, 38(6), 1573.

Smith, H., Milberg, S., & Burke, S. (1996). Information Privacy: Measuring Individuals’

Concerns about Organizational Practices. Management Information Systems

Quarterly, 20(2). Retrieved from http://aisel.aisnet.org/misq/vol20/iss2/3

Stewart, K. A., & Segars, A. H. (2002). An Empirical Examination of the Concern for

Information Privacy Instrument. Information Systems Research, 13(1), 36–49.

https://doi.org/10.1287/isre.13.1.36.97


Sun, H., & Zhang, P. (2006). Causal Relationships between Perceived Enjoyment and

Perceived Ease of Use: An Alternative Approach. Journal of the Association for

Information Systems, 7(9). Retrieved from http://aisel.aisnet.org/jais/vol7/iss9/24

Taiwo, A. A., & Downe, A. G. (2013). The theory of user acceptance and use of technology

(UTAUT): A meta-analytic review of empirical findings. Journal of Theoretical and

Applied Information Technology, 49(1), 48–58.

Tornatzky, L. G., Fleischer, M., & Chakrabarti, A. K. (1990). The processes of technological

innovation. Lexington, Mass.: Lexington Books.

Tseng, W.-T., Dörnyei, Z., & Schmitt, N. (2006). A New Approach to Assessing Strategic

Learning: The Case of Self-Regulation in Vocabulary Acquisition. Applied

Linguistics, 27(1), 78–102. https://doi.org/10.1093/applin/ami046

Van Der Heijden, H. (2004). User Acceptance of Hedonic Information Systems. MIS

Quarterly, 28(4), 695–704.

Van Slyke, C., Shim, J., Johnson, R., & Jiang, J. (2006). Concern for information privacy and

online consumer purchasing. Journal Of The Association For Information Systems,

7(6), 415–444.

Venkatesh, V., & Davis, F. D. (2000). A Theoretical Extension of the Technology

Acceptance Model: Four Longitudinal Field Studies. Management Science, 46(2),

186–204. https://doi.org/10.1287/mnsc.46.2.186.11926

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User Acceptance of

Information Technology: Toward a Unified View. MIS Quarterly, 27(3), 425–478.

Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of

information technology: Extending the unified theory of acceptance and use of

technology. MIS Quarterly: Management Information Systems, 36(1), 157–178.


Web of Science. (2017, February 20). [Search engine for academia]. Retrieved February 20,

2017, from

http://apps.webofknowledge.com/Search.do?product=WOS&SID=3B7dBX6GkENnn

pyQdp3&search_mode=GeneralSearch&prID=7c586271-aab0-443a-aa7c-

5362cfda4482

Weber, R. H. (2010). Internet of Things – New security and privacy challenges. Computer

Law & Security Review, 26(1), 23–30. https://doi.org/10.1016/j.clsr.2009.11.008

AppleInsider. (2017, June 14). Survey finds 1/3 of people interested in Apple's HomePod, still more likely to buy Amazon Echo. Retrieved June 16, 2017, from https://appleinsider.com/articles/17/06/14/survey-finds-13-of-people-interested-in-apples-homepod-still-more-likely-to-buy-amazon-echo

Westin, A. F. (1967). Privacy and Freedom. Atheneum.

Wilson, C., Hargreaves, T., & Hauxwell-Baldwin, R. (2015). Smart homes and their users: a

systematic analysis and key challenges. Personal and Ubiquitous Computing, 19(2),

463–476. https://doi.org/10.1007/s00779-014-0813-0

Xu, H., & Teo, H.-H. (2004). Alleviating consumers’ privacy concerns in location-based

services: a psychological control perspective. ICIS 2004 Proceedings, 64.

Yan, Z., Zhang, P., & Vasilakos, A. V. (2014). A survey on trust management for Internet of

Things. Journal of Network and Computer Applications.

https://doi.org/10.1016/j.jnca.2014.01.014

Yuksel, B., Collisson, P., & Czerwinski, M. (2017). Brains or Beauty: How to Engender

Trust in User-Agent Interactions. ACM Transactions on Internet Technology (TOIT),

17(1), 1–20. https://doi.org/10.1145/2998572


8 Appendix

8.1 Items used

For each construct, the original scale items are listed first, followed by the English adaptation used in this study and the Dutch translation administered in the survey.

Concern For Information Privacy (CFIP)

CFIP - Collection (CCOL)

Original:
CCOL1. It usually bothers me when IoT service providers ask me for personal information.
CCOL2. When IoT service providers ask me for personal information, I sometimes think twice before providing it.
CCOL3. I am concerned that IoT service providers are collecting too much personal information about me.
CCOL4. It bothers me to give personal information to so many IoT service providers.

English:
CCOL1. It would bother me when a smart assistant asks me for personal information.
CCOL2. When smart assistant service providers ask me for personal information, I would think twice before providing it.
CCOL3. I would be concerned that smart assistant service providers collect too much personal information about me.
CCOL4. It would bother me when personal information is shared amongst different service partners of the smart assistant.

Dutch:
CCOL1. Het zou mij irriteren als een slimme assistent dienstverlener mij vraagt naar persoonlijke gegevens.
CCOL2. Wanneer een slimme assistent dienstverlener mij vraagt naar persoonlijke informatie, zou ik twee keer nadenken voor ik dit verstrek.
CCOL3. Ik zou bezorgd zijn dat een slimme assistent dienstverlener te veel persoonlijke informatie over mij verzamelt.
CCOL4. Het zou mij irriteren als persoonlijke informatie gedeeld wordt met verschillende service partners van de slimme assistent.

CFIP - Unauthorized Secondary Use (CSU)

Original:
CSU1. IoT service providers should not use personal information for any unauthorized purpose.
CSU2. When people give personal information to the IoT service providers for some reason, the IoT service providers should never use the information for any other purpose.
CSU3. IoT service providers should never sell personal information to other companies.
CSU4. IoT service providers should never share personal information with other companies unless specifically authorized to do so by the user.

English:
CSU1. Smart assistant service providers should not use personal information for any unauthorized purpose.
CSU2. When people give personal information to the smart assistant service providers for some reason, the service providers should never use the information for any other purpose.
CSU3. Smart assistant service providers should never sell personal information to other companies.
CSU4. Smart assistant service providers should never share personal information with other companies unless specifically authorized to do so by the user.

Dutch:
CSU1. Dienstverleners van slimme assistenten mogen persoonlijke informatie niet gebruiken voor ongeoorloofde doeleinden.
CSU2. Wanneer iemand persoonlijke informatie verstrekt aan een slimme assistent, zou de dienstverlener deze informatie nooit voor andere doeleinden mogen gebruiken.
CSU3. Slimme assistent dienstverleners zouden persoonlijke gegevens nooit mogen verkopen aan andere bedrijven.
CSU4. Slimme assistent dienstverleners mogen persoonlijke gegevens niet delen met andere bedrijven, tenzij hier specifiek toestemming voor is gegeven door de gebruiker.

CFIP - Improper Access (CIA)

Original:
CIA1. IoT service providers should devote more time and effort to preventing unauthorized access to personal information.
CIA2. Computer databases that contain personal information should be protected from unauthorized access, no matter how much it costs.
CIA3. IoT service providers should take more steps to make sure that unauthorized people cannot access personal information in their computers.

English:
CIA1. Smart assistant service providers should devote enough time and effort to prevent unauthorized access to personal information.
CIA2. Databases of smart assistant service providers that contain personal information should be protected from unauthorized access – no matter how much this costs.
CIA3. Smart assistant service providers should take enough steps to make sure that unauthorized people cannot access personal information in their computers.

Dutch:
CIA1. Dienstverleners van een slimme assistent zouden voldoende tijd en aandacht moeten besteden aan het voorkomen van onrechtmatige toegang tot persoonlijke gegevens.
CIA2. Databases van slimme assistent dienstverleners die persoonlijke gegevens bevatten moeten beschermd worden tegen ongeoorloofde toegang, ongeacht hoeveel dit kost.
CIA3. Dienstverleners van een slimme assistent zouden voldoende stappen moeten nemen om zeker te zijn dat onbevoegden geen toegang hebben tot persoonlijke gegevens.

CFIP - Errors (CER)

Original:
CER1. All the personal information in IoT service providers' computer databases should be double-checked for accuracy – no matter how much this costs.
CER2. IoT service providers should take more steps to make sure that the personal information in their files is accurate.
CER3. IoT service providers should have better procedures to correct errors in personal information.
CER4. IoT service providers should devote more time and effort to verifying the accuracy of the personal information in their databases.

English:
CER1. All the personal information in smart assistant service providers' computer databases should be double-checked for accuracy – no matter how much this costs.
CER2. Smart assistant service providers should take enough steps to make sure that the personal information in their files is accurate.
CER3. Smart assistant service providers should have enough procedures to correct errors in personal information.
CER4. Smart assistant service providers should devote enough time and effort to verifying the accuracy of the personal information in their databases.

Dutch:
CER1. Alle persoonlijke gegevens in databases van dienstverleners van slimme assistenten zouden dubbel gecontroleerd moeten worden op nauwkeurigheid, ongeacht hoeveel dit kost.
CER2. Dienstverleners van een slimme assistent zouden voldoende stappen moeten nemen om te zorgen dat persoonlijke gegevens in hun data accuraat zijn.
CER3. Dienstverleners van een slimme assistent zouden goede procedures moeten hebben om fouten in persoonlijke gegevens te corrigeren.
CER4. Dienstverleners van een slimme assistent zouden voldoende tijd en moeite moeten besteden om de juistheid van persoonlijke gegevens in hun databases te verifiëren.

Performance Expectancy (PE)

Original:
PE1. I find mobile Internet useful in my daily life.
PE2. Using mobile Internet helps me accomplish things more quickly.
PE3. Using mobile Internet increases my productivity.

English:
PE1. I would find smart assistant technology useful in my daily life.
PE2. Using smart assistant technology would help me accomplish things more quickly.
PE3. Using smart assistant technology would increase my productivity.

Dutch:
PE1. Ik zou de slimme assistent technologie nuttig vinden in mijn dagelijks leven.
PE2. Het gebruik van een slimme assistent zou mij helpen bij het sneller voltooien van taken.
PE3. Het gebruik van een slimme assistent zou mijn productiviteit verhogen.

Effort Expectancy (EE)

Original:
EE1. Learning how to use mobile Internet is easy for me.
EE2. My interaction with mobile Internet is clear and understandable.
EE3. I find mobile Internet easy to use.
EE4. It is easy for me to become skillful at using mobile Internet.

English:
EE1. Learning how to use smart assistant technology is easy for me.
EE2. I think interaction with smart assistant technology would be clear and understandable.
EE3. I think smart assistant technology would be easy to use.
EE4. It is easy for me to become skillful at using smart assistant technology.

Dutch:
EE1. Leren hoe een slimme assistent werkt is makkelijk voor mij.
EE2. De omgang met een slimme assistent zou voor mij duidelijk en begrijpelijk zijn.
EE3. Ik denk dat een slimme assistent makkelijk te gebruiken is.
EE4. Het is eenvoudig voor mij om vaardig te worden in het gebruik van een slimme assistent.

Social Influence (SI)

Original:
SI1. People who are important to me think that I should use mobile Internet.
SI2. People who influence my behavior think that I should use mobile Internet.
SI3. People whose opinions that I value prefer that I use mobile Internet.

English:
SI1. People who are important to me would think that I should use smart assistant technology.
SI2. People who influence my behavior would think that I should use smart assistant technology.
SI3. People whose opinions that I value prefer that I use smart assistant technology.

Dutch:
SI1. Mensen die belangrijk zijn voor mij zouden denken dat ik een slimme assistent moet gebruiken.
SI2. Mensen die mijn gedrag beïnvloeden zouden denken dat ik een slimme assistent moet gebruiken.
SI3. Mensen wiens mening ik waardeer, zouden prefereren dat ik een slimme assistent gebruik.

Facilitating Conditions (FC)

Original:
FC1. I have the resources necessary to use mobile Internet.
FC2. I have the knowledge necessary to use mobile Internet.
FC3. Mobile Internet is compatible with other technologies I use.
FC4. I can get help from others when I have difficulties using mobile Internet.

English:
FC1. I have the resources necessary to use smart assistant technology.
FC2. I have the knowledge necessary to use smart assistant technology.
FC3. Smart assistant technology is compatible with other technologies I use.
FC4. I can get help from others when I have difficulties using smart assistant technology.

Dutch:
FC1. Ik heb de benodigde middelen om een slimme assistent te gebruiken.
FC2. Ik bezit de benodigde kennis om een slimme assistent te kunnen gebruiken.
FC3. Slimme assistent technologie is compatibel met andere technologieën die ik gebruik.
FC4. Ik zou hulp van anderen kunnen krijgen als ik problemen heb bij het gebruik van een slimme assistent.

Hedonic Motivation (HM)

Original:
HM1. Using mobile Internet is fun.
HM2. Using mobile Internet is enjoyable.
HM3. Using mobile Internet is very entertaining.

English:
HM1. Using smart assistant technology seems fun.
HM2. Using smart assistant technology is enjoyable.
HM3. Using smart assistant technology seems very entertaining.

Dutch:
HM1. Het gebruik van een slimme assistent lijkt mij leuk.
HM2. Het gebruik van een slimme assistent lijkt mij prettig.
HM3. Het gebruik van een slimme assistent lijkt mij vermakelijk.

Price Value (PV)

Original:
PV1. Mobile Internet is reasonably priced.
PV2. Mobile Internet is a good value for the money.
PV3. At the current price, mobile Internet provides a good value.

English:
PV1. Smart assistant technology is reasonably priced.
PV2. Smart assistant technology is a good value for the money.
PV3. At the current price, smart assistant technology would provide a good value.

Dutch:
PV1. De prijs van een slimme assistent is redelijk.
PV2. Een slimme assistent biedt waar voor je geld.
PV3. Voor de huidige prijs zou een slimme assistent goede waarde bieden.

Behavioral Intention (BI)

Original:
BI1. I intend to continue using mobile Internet in the future.
BI2. I will always try to use mobile Internet in my daily life.
BI3. I plan to continue to use mobile Internet frequently.

English:
BI1. I intend to use smart assistant technology in the future.
BI2. I will always try to use smart assistant technology in my daily life.
BI3. I plan to use smart assistant technology frequently.

Dutch:
BI1. Ik ben voornemens om in de toekomst een slimme assistent te gebruiken.
BI2. Ik zou altijd proberen om gebruik te maken van een slimme assistent in mijn dagelijks leven.
BI3. Ik ben van plan om regelmatig een slimme assistent te gebruiken in de toekomst.

Perceived Risk (PR; alternative seven-point semantic-differential scales)

Original:
PR1. How would you characterize the decision to transact with this Web retailer? (Significant risk / Insignificant risk)
PR2. How would you characterize the decision to transact with this Web retailer? (Very negative situation / Very positive situation)
PR3. How would you characterize the decision to buy a product from this Web retailer? (High potential for loss / High potential for gain)
PR4. On the whole, considering all sorts of factors combined, about how risky would you say it would be to sign up for and use XXXX? (Not risky at all / Very risky)

English:
PR1. How would you characterize the decision to use a smart assistant? (Insignificant risk / Significant risk)
PR2. How would you characterize the decision to interact with a smart assistant? (Very positive / Very negative)
PR3. How would you characterize the decision to use a smart assistant? (High potential for gain / High potential for loss)
PR4. All factors combined, about how risky would you think it is to use a smart assistant? (Not risky at all / Very risky)

Dutch:
PR1. Hoe zou u het gebruik van een slimme assistent karakteriseren? (insignificant risico / significant risico)
PR2. Hoe zou u het besluit tot interactie met een slimme assistent willen karakteriseren? (zeer positief / zeer negatief)
PR3. Hoe zou u het besluit tot gebruik van een slimme assistent willen karakteriseren? (hoge potentie tot winst / hoge potentie voor verlies)
PR4. Globaal gezien, hoe risicovol denkt u dat het gebruik van een slimme assistent is? (niet risicovol / zeer risicovol)
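Note on scoring (illustrative): each construct above is measured with multiple seven-point items, and a respondent's score on a construct is typically computed as the mean of that construct's item responses. The short Python sketch below shows this aggregation for two of the constructs; the response data and values are hypothetical, and this is a minimal illustration of the scoring convention rather than the analysis code used for this thesis.

import pandas as pd

# Hypothetical responses from three participants, coded 1 (strongly
# disagree) to 7 (strongly agree); column names follow the item codes
# listed in section 8.1.
responses = pd.DataFrame({
    "CCOL1": [5, 6, 3], "CCOL2": [6, 6, 4], "CCOL3": [5, 7, 3], "CCOL4": [6, 5, 2],
    "PE1": [6, 4, 7], "PE2": [5, 4, 6], "PE3": [6, 3, 7],
})

# Items grouped per construct, mirroring the appendix groupings.
constructs = {
    "CFIP_Collection": ["CCOL1", "CCOL2", "CCOL3", "CCOL4"],
    "Performance_Expectancy": ["PE1", "PE2", "PE3"],
}

# Construct score = mean of the construct's items, per respondent.
scores = pd.DataFrame({name: responses[items].mean(axis=1)
                       for name, items in constructs.items()})
print(scores)

Had any items been reverse-coded, they would first be recoded as 8 minus the raw response before averaging.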


8.2 Survey for this study


Intro

Short explanation: What is a smart assistant? A smart assistant is an intelligent speaker that you can place anywhere in your home. It is controlled by giving it voice commands. It answers your questions, manages your calendar or other 'smart' products (e.g. lights or thermostat) and integrates with all kinds of apps (e.g. Spotify, Amazon, Uber or JustEat) that can be controlled by using only your voice.

Picture: Two types of smart assistants currently available.

CFIP-Collection

(Each statement in this and the following blocks was rated on a seven-point scale: Strongly disagree / Disagree / Somewhat disagree / Neither agree nor disagree / Somewhat agree / Agree / Strongly agree. The Perceived Risk items near the end of the survey used the anchors shown with those items.)

It would bother me when a smart assistant asks me for personal information.

When smart assistant service providers ask me for personal information, I would think twice before providing it.

I would be concerned that smart assistant service providers collect too much personal information about me.

It would bother me to give personal information to so many smart assistant service providers.

CFIP - Unauthorized secondary use

Smart assistant service providers should not use personal information for any unauthorized purpose.

When people give personal information to the smart assistant service providers for some reason, the service providers should never use the information for any other purpose.

Smart assistant service providers should never sell personal information to other companies.

Smart assistant service providers should never share personal information with other companies unless specifically authorized to do so by the user.

CFIP - Improper access

Smart assistant service providers should devote extra time and effort to prevent unauthorized access to personal information.

Databases of smart assistant service providers that contain personal information should be protected from unauthorized access – no matter how much this costs.

Smart assistant service providers should take more steps to make sure that unauthorized people cannot access personal information in their computers.

CFIP - Errors

All the personal information in smart assistant service providers' computer databases should be double-checked for accuracy – no matter how much this costs.

Smart assistant service providers should take more steps to make sure that the personal information in their files is accurate.

Smart assistant service providers should have better procedures to correct errors in personal information.

Smart assistant service providers should devote extra time and effort to verifying the accuracy of the personal information in their databases.

Performance Expectancy

I would find smart assistant technology useful in my daily life.

Using smart assistant technology would help me accomplish things more quickly.

Using smart assistant technology would increase my productivity.

Effort Expectancy

Learning how to use smart assistant technology is easy for me.

I think interaction with smart assistant technology is clear and understandable.

I think smart assistant technology is easy to use.

It is easy for me to become skillful at using smart assistant technology.

Social Influence

People who are important to me think that I should use smart assistant technology.

People who influence my behavior think that I should use smart assistant technology.

People whose opinions that I value prefer that I use smart assistant technology.

Facilitating conditions

I have the resources necessary to use smart assistant technology.

I have the knowledge necessary to use smart assistant technology.

Smart assistant technology is compatible with other technologies I use.

I can get help from others when I have difficulties using smart assistant technology.

Hedonic motivation

Using smart assistant technology is fun.

Using smart assistant technology is enjoyable.

Using smart assistant technology is very entertaining.

Block 14

Prices for a smart assistant start at €50 and go up to €130.

Please click next

Price value

Smart assistant technology is reasonably priced.

Smart assistant technology is a good value for the money.

At the current price, smart assistant technology provides a good value.

Behavioral intention

I intend to use smart assistant technology in the future.

I will always try to use smart assistant technology in my daily life.

I plan to use smart assistant technology frequently.

Perceived Risk

(The four Perceived Risk items were rated on seven-point scales; the labeled anchors were the two endpoints and the midpoint, as shown.)

How would you characterize the decision to use a smart assistant? (Insignificant risk / Neither insignificant nor significant risk / Significant risk)

How would you characterize the decision to interact with a smart assistant? (Very positive / Neither positive nor negative / Very negative)

How would you characterize the decision to use a smart assistant? (High potential for gain / Neither potential for gain nor loss / High potential for loss)

All factors combined, about how risky would you think it is to use a smart assistant? (Not risky at all / Neither risky nor not risky / Very risky)

Info

What's your gender? (Male / Female)

How old are you? (Under 18 / 18–24 / 25–34 / 35–44 / 45–54 / 55–64 / 65–74 / 75–84 / 85 or older)

What's your nationality?

(optional) Any thoughts on smart assistants you'd like to share?


Please fill in your email if you wish to win one of the prizes.