
Reconciliation of Privacy with Preventive Cybersecurity: The Bright Internet Approach

Jae Kyu Lee 1,2 & Younghoon Chang 3 & Hun Yeong Kwon 4 & Beopyeon Kim 5

Published online: 22 January 2020. © The Author(s) 2020

Abstract

The emergence of a preventive cybersecurity paradigm that aims to eliminate the sources of cybercrime threats is becoming an increasingly necessary complement to the current self-defensive cybersecurity systems. One concern associated with adopting such preventive measures is the risk of privacy infringement. Therefore, it is necessary to design the future Internet infrastructure so that it can appropriately balance preventive cybersecurity measures with privacy protections. This research proposes to design the Internet infrastructure using the preventive cybersecurity measures of the Bright Internet, namely the preventive cybersecurity protocol and the identifiable anonymity protocol, and ten privacy rights derived from Europe’s General Data Protection Regulation (GDPR). We then analyze the legitimacy of the five steps of the preventive cybersecurity protocol and the four features of the identifiable anonymity protocol from the perspectives of the ten privacy rights. We address the legitimacy from the perspective of potential victims’ self-defense rights. Finally, we discuss four potential risks that may occur to innocent senders and propose resilient recovery procedures.

Keywords: Bright internet · Privacy rights · GDPR · Preventive cybersecurity protocol · Identifiable anonymity protocol · Self-defense rights

1 Introduction

Recently, the necessity of a preventive cybersecurity paradigm has emerged because existing self-defensive cybersecurity systems are not sufficiently secure and cybersecurity attacks are becoming more and more serious. To overcome the limitations of self-defensive systems, the preventive cybersecurity paradigm that we introduce here aims to eliminate the sources of cybercrime threats themselves. To achieve this goal, the Bright Internet architecture is proposed with five principles:

(1) origin responsibility, (2) deliverer responsibility, (3) identifiable anonymity, (4) global collaboration, and (5) privacy protection (Lee 2015, 2016; Lee et al. 2018). Interestingly, the Bright Internet explicitly includes the protection of privacy as one of its principles. In the current context, the protection of privacy implies the prevention of privacy infringement that may be caused by adopting preventive cybersecurity measures.

* Younghoon Chang [email protected]

Jae Kyu Lee [email protected]

Hun Yeong Kwon [email protected]

Beopyeon Kim [email protected]

1 The School of Management, Xi’an Jiaotong University, No. 28, Xianning West Road, Xi’an, Shaanxi 710049, People’s Republic of China

2 College of Business, KAIST, 85 Hoegiro, Dongdaemun-gu, Seoul 02455, South Korea

3 School of Management and Economics, Beijing Institute of Technology, 5 South Zhongguancun Street, Haidian District, Beijing 100081, People’s Republic of China

4 Graduate School of Information Security, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea

5 Graduate School of Information Security, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea

Information Systems Frontiers (2020) 22:45–57. https://doi.org/10.1007/s10796-020-09984-5

However, a comprehensive concept of privacy rights is outlined by the General Data Protection Regulation (GDPR) in Europe; its impact is global because even non-European companies accessing the private information of European citizens are subject to penalties for violating these regulations (Tikkinen-Piri et al. 2018; Mourby et al. 2018). The GDPR requires that information system designs conform with the privacy rights of any affected organization (Tikkinen-Piri et al. 2018; Mourby et al. 2018). Similar acts have become effective in California (the California Consumer Privacy Act) and, by 2020, in Nevada (the Nevada Privacy Act, SB220) (Lexology 2019). Other states’ privacy bills, such as those of Massachusetts (SD341), New Jersey (SB2834), and Pennsylvania (HB1049), as well as the New York Privacy Act, are still pending and awaiting implementation in the near future (Lexology 2019).

Therefore, information systems design must not only consider the needs of preventive cybersecurity but also ensure the protection of privacy rights. To fulfill these two-dimensional goals, our research proposes a framework for designing a platform capable of preventing cybersecurity attacks while also protecting individual rights to privacy. For this purpose, we adopt the protocols proposed by the Bright Internet and the privacy rights stipulated by the GDPR. It is our goal that individual organizations will be able to use this kind of platform to effectively and efficiently implement their information systems in a way that prevents cyberattacks while also protecting privacy.

Hence, we introduce the preventive cybersecurity protocol and the legitimate identifiability protocol adopted from the Bright Internet architecture and derive ten privacy rights from the GDPR. In addition, we address the situation in which Internet users may voluntarily offer their private information to a trusted third party so that they can obtain a certificate of identifiability as a basis for being trusted by unknown counterparts. For example, credit card holders are willing to provide their personal information to credit card companies to gain the benefit of being trusted by the merchants who accept their cards. Likewise, we consider the trust aspect that can be supported by the Bright Internet. We also consider that the right to self-defense of cyber victims may supersede the privacy rights of malicious cyber attackers. For instance, recipients of malicious e-mails who wish to prevent continued attacks should have the self-defense right to identify the origins and block them. As such, our research seeks to balance preventive cybersecurity, privacy rights, rights to self-defense, and the benefits of gaining trust using a platform such as Bright e-mail, which is under development as a foundational tool of the Bright Internet (Lee 2019).

As a testbed for a preventive cybersecurity platform, we adopt the architecture of Bright Internet 1.0, which includes the preventive cybersecurity protocol and the legitimate identifiability protocol. On top of these protocols, Bright e-mail can support innocent users who voluntarily want to guarantee the identifiability of their real names when e-mail recipients legitimately request it, in order to build a trustful environment with the recipients. However, they may use identifiable pseudonyms during the ordinary transaction stage to enjoy their privacy. The preventive cybersecurity protocol allows victims of malignant e-mails to voluntarily report the attacks to the Bright Internet Data Center, which can accumulate such data, analyze the origin–victim matrix, and calculate and post the Brightness Indices of spam e-mail origins. The identifiable anonymity protocol in the Bright Internet supports the traceability of the origin IP address or network ID (with the option of anonymity for innocent netizens) and legitimate identifiability to identify the real names of malicious agents when they commit cybercrimes. However, traceability cannot be ensured merely through the application layers of the Internet using the current TCP/IPv4 protocol; hence, the Bright Internet project develops a source address validation architecture (SAVA)-based protocol on the TCP/IPv6 platform to ensure traceability (Wu et al. 2007) and a hacker prevention protocol to prevent systems from being compromised. A comprehensive description of the protocols for the Bright Internet can be found in the technical report of the Bright Internet 1.0 Test Bed (Lee 2019). In this paper, we exclude cross-border issues from the scope of discussion, although they would be necessary for global collaboration.

Considering the above aims, this paper proposes preventive cybersecurity measures for designing a Bright Internet platform that can support both the preventive cybersecurity measures (the two protocols mentioned above) and the ten privacy rights based on the GDPR. Beyond this, we consider two additional associated factors: (1) the voluntary assurance of providing private information to build mutual trust and (2) the victims’ rights of self-defense that may overrule the privacy rights of malicious attackers. This type of Internet platform will facilitate the implementation of a preventive cybersecurity platform while securing individuals’ privacy.

Thus, we proceed in Section 2 by reviewing two Bright Internet preventive cybersecurity measures that are relevant to privacy rights, namely, the preventive cybersecurity protocol and the identifiable anonymity protocol. We identify five steps of the preventive cybersecurity protocol and four features of identifiable anonymity for this research purpose. Section 3 reviews the history of privacy research and derives ten privacy rights from the GDPR: (1) the right to be informed, (2) the right to access, (3) the right to rectification, (4) the right to be forgotten, (5) the right to restriction of processing, (6) the right to data portability, (7) the right to object, (8) the right to opt-out of automated systems, (9) the right not to be subjected to unsanctioned privacy invasions, and (10) the right not to be known by unpermitted persons.

Section 4 reviews the literature relevant to balancing cybersecurity and privacy. We found that although there is a clear call to conduct such research, little research on this topic has actually been conducted. Thus, we propose a research framework that considers privacy rights in conjunction with preventive cybersecurity measures. For this purpose, innocent Internet users must be distinguished from malicious attackers so that the two groups can be treated differently. Section 5 identifies the risk potential of privacy-right infringements that may be invoked by the steps of the preventive cybersecurity protocol and the features of the identifiable anonymity protocol. We identify the relevant privacy rights for each step/feature and discuss whether the privacy rights of malicious attackers should be justified or overridden by the potential victims’ rights of self-defense. We also evaluate the possibility of accidentally infringing upon the privacy rights of innocent origins and propose a resilient recovery procedure to resolve such a misunderstanding. Finally, Section 6 reviews the implications and discusses the limitations of this research and potential future research agendas.

2 Preventive Cybersecurity Measures

2.1 Cybersecurity Research Literature

Cybersecurity research in the information systems community has mainly studied the behavioral aspects of individual employees (Anderson and Agarwal 2010; Wang et al. 2015; Steinbart et al. 2016; Chen and Zahedi 2016) and organizational issues (D’Arcy et al. 2009; Herath and Rao 2009; Johnston and Warkentin 2010). The research has focused more on organization and employee compliance issues, specifically on employee behaviors (Hu et al. 2012; Herath and Rao 2009; Siponen and Vance 2010). In the engineering community, the design and technology aspects of cybersecurity have been the main focus (Lee et al. 2018).

The research in Information Systems Frontiers (ISF) has particularly stressed cyber risk assessment (Mukhopadhyay et al. 2019), security risk mitigation (Ye et al. 2006), IT security investment (Ezhei and Ladani 2018), information security policy (Kang and Hovav 2018), and security risk management (Lee and Lee 2012). In this regard, ISF’s research also focuses more on the organization and employee aspects of cybersecurity issues.

Lee (2015) proposed the notion of origin responsibility and distinguished the importance of preventive cybersecurity, which attempts to eliminate the source of malicious emanation from the origin sites (Lee et al. 2018). To implement the preventive cybersecurity paradigm, we need to design measures that can realize this goal. The Bright Internet Project Consortium [www.brightinternet.org] is developing the Bright Internet 1.0 Test Bed as a collaboration of academic researchers and industry partners with relevant expertise in cybersecurity, e-mail, cloud services, and certification authority. The notion of preventive cybersecurity measures can be wide and diverse depending upon the type of platform, such as e-mail, web-based systems, and mobile platforms.

The e-mail system is the most typical platform for studying the responsibility of origin and/or destination, as about 90% of cybercrimes start with e-mails. Thus, the Bright Internet 1.0 Test Bed includes the development of Bright eMail and Bright Cloud as the foundational platforms. Once these platforms are built and standards are established, the extension to other platforms can be deployed without losing the consistency of technical standards.

Bright eMail (in short, bMail) is designed to run on the preventive cybersecurity protocol and the legitimate identifiability protocol. Therefore, it is effective to explain the key features of the Bright Internet by explaining these two protocols in the context of bMail. Spam e-mails are not necessarily malicious e-mails; however, most spam e-mails filtered by inbound filtering software can be regarded as malicious in the wide sense of potential cyberattacks. Let us review here the steps of the preventive cybersecurity protocol and the features of legitimate identifiability (Lee 2019).

2.2 The Preventive Cybersecurity Protocol

When victims receive malicious e-mails, they can voluntarilyreport the incidents to the Bright Internet Data Center.

Based on the accumulated reports, data related to the origin–victim (OV) matrix can be analyzed over a certain period to collect the evidence necessary to evaluate the origins of malicious e-mails. The accumulated OV matrix data can then be used to derive the Origin Brightness Indices, which can be used to pressure malicious agents to reduce or cease the emanation of malicious e-mails. Figure 1 shows an overview of this process (Lee et al. 2018). The receivers may delegate the reporting role to the inbound filtering software. In this case, the filtering software should be resilient enough to restore accidentally filtered origins in order to protect the innocent origins’ privacy rights, as discussed in Section 5.
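To make the data flow concrete, the following Python sketch illustrates how recipient reports might accumulate into an origin–victim (OV) matrix and yield a per-origin index. The class, its methods, and the simple ratio-based score are illustrative assumptions on our part; the actual Brightness Index formula and data center design are specified in the Bright Internet 1.0 Test Bed documentation (Lee 2019) and may differ.

```python
from collections import defaultdict

class OriginVictimMatrix:
    """Illustrative accumulator of recipient-initiated reports.

    Rows are origins (sending domains or network IDs), columns are victims;
    each cell counts reported malicious e-mails for a given period.
    """

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # origin -> victim -> count
        self.sent_volume = defaultdict(int)                  # origin -> total e-mails observed

    def report(self, origin: str, victim: str):
        """A victim (or a delegated inbound filter) reports one malicious e-mail."""
        self.counts[origin][victim] += 1

    def record_traffic(self, origin: str, n_messages: int):
        """Track overall sending volume so the index is not biased by origin size."""
        self.sent_volume[origin] += n_messages

    def brightness_index(self, origin: str) -> float:
        """Hypothetical index in [0, 1], where 1.0 means no reported maliciousness.

        A simple complement of the reported-mail ratio is used here purely for
        illustration; the real Brightness Index formula may differ.
        """
        reported = sum(self.counts[origin].values())
        total = max(self.sent_volume[origin], 1)
        return max(0.0, 1.0 - reported / total)


# Example: two recipients report the same origin during a period.
matrix = OriginVictimMatrix()
matrix.record_traffic("mail.example-origin.com", 1000)
matrix.report("mail.example-origin.com", "victim1@example.org")
matrix.report("mail.example-origin.com", "victim2@example.org")
print(matrix.brightness_index("mail.example-origin.com"))  # 0.998
```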

The steps of the preventive cybersecurity protocol that are relevant to the privacy rights are as follows (Lee 2019):

1) Recipient-initiated report of malicious e-mail: The victims who receive malicious e-mails can voluntarily report such incidents to the Bright Internet Data Center. This reporting activity may be delegated to the e-mail service provider who runs the inbound filtering software, with the permission of the e-mail receivers.

2) Storage of cyberattack records: The reported records are stored in the Bright Internet Database, with the receiver’s permission, in the form of OV matrices. In this sense, this data structure is different from a mere filtering list.


3) Generating Brightness Indices of origins: The Brightness Indices are generated by aggregating and analyzing the database in the Bright Internet Data Center.

4) Disclosure of Brightness Indices: Brightness Indices may be disclosed to publicize those who send malicious e-mails and to pressure them to cease the emanation of malicious e-mails.

5) Use of the Brightness Indices as filtering solutions: The data used to generate the Brightness Indices can be used to create both inbound and outbound filtering solutions. Outbound filtering solutions are preventive solutions that would eventually reduce the risk of receiving malicious e-mails. The inbound filtering solution can be effectively customized because the Bright Internet Data Center has specific origin information.
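As an illustration of step 5, the following sketch shows how inbound and outbound filters might consume the disclosed indices. The dictionary of indices, the threshold value, and the function names are hypothetical and not part of the Bright Internet specification.

```python
PUBLISHED_INDICES = {             # disclosed Brightness Indices (illustrative values)
    "mail.example-origin.com": 0.35,
    "news.trusted-sender.org": 0.99,
}

BRIGHTNESS_THRESHOLD = 0.8        # hypothetical cut-off; a real deployment would tune this


def inbound_filter(origin: str) -> str:
    """Inbound side: quarantine mail from origins with a low disclosed index."""
    index = PUBLISHED_INDICES.get(origin, 1.0)   # unknown origins are not penalized here
    return "quarantine" if index < BRIGHTNESS_THRESHOLD else "deliver"


def outbound_check(own_origin: str) -> bool:
    """Outbound side: an origin can check its own index and curb emanation
    before it is publicly pressured."""
    return PUBLISHED_INDICES.get(own_origin, 1.0) >= BRIGHTNESS_THRESHOLD


print(inbound_filter("mail.example-origin.com"))   # quarantine
print(inbound_filter("news.trusted-sender.org"))   # deliver
```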

2.3 Identifiable Anonymity Protocol

When an agent commits a cybercrime, the traceability of the IP address associated with the malicious origin and the identifiability of the user’s real name are essential. However, it is also important to preserve the privacy of innocent users. Thus, the principle of identifiable anonymity in the Bright Internet (Fig. 2) aims to identify the real name of a criminal origin, or an equivalent identity, in nearly real time when a valid search warrant is issued. However, the voluntary anonymity of innocent netizens should be preserved (Lee et al. 2018, p. 73). In this sense, we distinguish innocent users from malicious ones.

To fulfill this goal, we need to design the operation of the identifiable anonymity protocol in two steps: (1) anonymous traceability and (2) legitimate identifiability (Lee 2019). Anonymous traceability means that the IP addresses of malicious origins should be traceable, with the option of permitting anonymity. For instance, e-mail senders may retain anonymity if they do not want to publicly expose their real names. By contrast, Bright e-mail senders may choose to guarantee the identifiability of their real names to establish trust with e-mail recipients. Thus, anonymity is a matter of choice for innocent users.

Legitimate identifiability is initiated only when a cybercrime is detected by receivers and an authorized agent thus issues a warrant requesting the sender’s real name. In the GDPR, the EU treats the terms anonymization and pseudonymization as similar; however, the two terms are slightly different. Anonymization implies de-identification that is irreversible, so that the personal information of the data subjects can never be identified (Mourby et al. 2018). By contrast, pseudonymization means “the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately.”1 In the pseudonymization process, data controllers and processors must ensure that safeguards and controls are in place to prevent the re-identification of personal information by unauthorized personnel.
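The distinction can be made concrete with a short sketch: anonymization discards any link back to the data subject, whereas pseudonymization keeps the re-identification key separately under access control. The code below is a simplified illustration of the GDPR Article 4(5) definition, not an implementation of the Bright Internet protocol; the class, its vault, and the warrant check are our own illustrative assumptions.

```python
import hashlib
import os

def anonymize(email_address: str) -> str:
    """Irreversible de-identification: a salted hash whose salt is discarded,
    so the data subject can never be re-identified from the output."""
    salt = os.urandom(16)
    return hashlib.sha256(salt + email_address.encode()).hexdigest()

class Pseudonymizer:
    """Reversible pseudonymization: the mapping to the real identity is kept
    separately (here, in memory; in practice, under strict access control)."""

    def __init__(self):
        self._vault = {}  # pseudonym -> real identity, the "additional information"

    def pseudonymize(self, email_address: str) -> str:
        pseudonym = hashlib.sha256(os.urandom(16)).hexdigest()[:16]
        self._vault[pseudonym] = email_address
        return pseudonym

    def re_identify(self, pseudonym: str, warrant_verified: bool) -> str:
        """Re-identification only with safeguards, e.g. a verified warrant."""
        if not warrant_verified:
            raise PermissionError("re-identification requires legitimate authorization")
        return self._vault[pseudonym]
```

Under this reading, handling only the pseudonym (or IP address) during normal operation corresponds to anonymous traceability, while a controlled call to re-identification corresponds to legitimate identifiability.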

In this regard, the anonymous traceability of the Bright Internet can be regarded as an anonymized process as long as users are innocent. However, the identifiable anonymity protocol as a whole, including the process of legitimate identification, can be regarded as a pseudonymized process. The four feature options of the identifiable anonymity protocol are as follows (Lee 2019):

1) Sender’s voluntary disclosure of private information to gain trust (Option 1): The e-mail receiver can trust the sender if the sender ensures identifiability, as necessitated by the use of Bright eMail.

2) Privacy protection by allowing anonymity (Option 2): The privacy of innocent senders can be protected by allowing anonymity.

As such, Options 1 and 2 are a matter of the sender’s choice as long as the sender is innocent.

1 Pseudonymization, GDPR Chapter 1, Article 4(5).

[Fig. 1 Preventive Cybersecurity Protocol (Lee et al. 2018, p. 67)]

3) Victimized recipient’s ability to trace malicious senders (Option 3): Victimized e-mail recipients may request to trace the malicious sender’s IP address by an authorized agent. This request may be handled by the Bright Internet Data Center.

4) Legitimate identifiability of malicious senders (Option 4): Authorized agents holding search warrants may request the real-name identification of senders who committed cybercrimes.

The processes of Options 3 and 4 may cause privacy infringements of innocent senders if an innocent e-mail is misinterpreted as a cybercrime. Thus, the protocol requires a mechanism to protect against accidental privacy infringements.
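A compact sketch of how the four options could fit together is given below. The classes, the warrant check, and the data center interface are illustrative assumptions rather than the published Bright Internet 1.0 Test Bed design (Lee 2019).

```python
from dataclasses import dataclass

@dataclass
class Sender:
    real_name: str
    pseudonym: str
    identifiable: bool   # Option 1: sender voluntarily guarantees identifiability

class BrightInternetDataCenter:
    """Illustrative mediator for traceability and legitimate identifiability."""

    def __init__(self):
        self._registry = {}  # pseudonym -> Sender, kept separately (pseudonymized)

    def register(self, sender: Sender):
        self._registry[sender.pseudonym] = sender

    def display_name(self, pseudonym: str) -> str:
        """Options 1 and 2: innocent senders appear under their chosen identity."""
        sender = self._registry[pseudonym]
        return sender.real_name if sender.identifiable else pseudonym

    def trace(self, pseudonym: str, victim_report: bool) -> str:
        """Option 3: a victim-initiated report triggers tracing of the origin
        (represented here by the pseudonym/network ID), not the real name."""
        if not victim_report:
            raise PermissionError("tracing requires a victim-initiated report")
        return f"origin traced for {pseudonym}"

    def identify(self, pseudonym: str, warrant: bool) -> str:
        """Option 4: real-name identification only under a valid search warrant."""
        if not warrant:
            raise PermissionError("legitimate identifiability requires a warrant")
        return self._registry[pseudonym].real_name
```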

3 Privacy Rights in GDPR

3.1 History of Privacy Research

Privacy is a fundamental right of individuals and constitutes a core principle of modern society. With the massive expansion of the online world, privacy issues in cyberspace have become a primary concern. Privacy concerns in cyberspace are invoked when online personal boundaries are breached or when personal information is collected and/or disseminated without permission (Adar et al. 2003). Therefore, privacy laws and regulations have arisen to protect individuals from the invasion of privacy both online and offline (Chang et al. 2018). Many fields, including management information systems, philosophy, sociology, political science, law, psychology, marketing, and economics, have published privacy studies covering topics such as the general right to privacy, general privacy as a commodity (Bélanger and Crossler 2011; Smith et al. 2011), general privacy as a state of being apart from others (Dinev et al. 2013; Westin 1967), and privacy in relation to control issues (Smith et al. 2011; Dinev 2014).

ISF’s papers have covered a wide range of privacy issues, including user behavior on privacy-related issues (Albashrawi and Motiwalla 2019; Ozturk et al. 2017), privacy-enhanced technologies (Loukas et al. 2012), specific technologies (Carpenter et al. 2018), and ethical issues related to privacy (Reay et al. 2013). In addition to the aforementioned topics, some papers focus on privacy policy (Singh et al. 2011; Reay et al. 2013) and the GDPR (Martin et al. 2019), wherein the focus is more on the negative effect of the GDPR on the startup industry. However, there have not been many publications about the GDPR and its influence on cybersecurity platforms, and none yet about preventive cybersecurity and the notion of origin responsibility.

In practice, the Fair Information Practices (FIPs) were first proposed by the US Secretary’s Advisory Committee on Automated Personal Data Systems in 1973 (Breaux and Antón 2008; Elmisery et al. 2016). FIPs are characterized by five widely accepted principles: notice, access, choice, security, and enforcement (Wang and Emurian 2005). In the 1980s and 1990s, the Organization for Economic Cooperation and Development and the Federal Trade Commission revised the previous FIPs guidelines in response to government and business sector demands. Over the past two decades, many countries have used FIPs to develop regulation and as a foundation for national privacy policies. Among such privacy policies, the European Union’s GDPR has the most comprehensive jurisdiction (Politou et al. 2018; Wachter 2018).

The GDPR adopts six privacy principles: (1) lawfulness, fairness, and transparency [Article 5(1)(a)]; (2) limitations on purposes of collection, processing, and storage [Article 5(1)(b)]; (3) data minimization [Article 5(1)(c)]; (4) data accuracy [Article 5(1)(d)]; (5) data storage limits [Article 5(1)(e)]; and (6) integrity and confidentiality [Article 5(1)(f)]. Beyond these six principles, accountability is an additional overarching principle [Article 5(2)] that requires data custodians to guarantee compliance with all six basic principles.

3.2 Ten Privacy Rights from GDPR

[Fig. 2 Architecture of Identifiable Anonymity Protocol (Lee et al. 2018, p. 77)]

For our research purposes, we derive ten privacy rights from the GDPR as they apply to owners of online personal information (Tikkinen-Piri et al. 2018; Presthus and Sørum 2018). Beyond the eight privacy rights specified in the current GDPR guidelines, we add two more rights (Rights 9 and 10) that are closely related to the preventive security paradigm of the Bright Internet (EUGDPR 2018; Lee et al. 2018). The definitions of the privacy rights are summarized as follows, and the sources of these rights are summarized in Table 1.

1. Right to be informed: Individuals have the right to know who is processing their personal data.

2. Right to access: Individuals have the right to access any personal data that have been collected about them.

3. Right to rectification: Individuals have the right to require organizations to correct inaccurate personal data.

4. Right to be forgotten: Individuals have the right to have their personal data deleted and to prevent further collection.

Table 1 Ten privacy rights of GDPR (Source: https://eugdpr.org/the-regulation/)

(R1) Right to be informed
    Ch. 3, Section 2 – Information and access to personal data
    Article 13: Information to be provided when personal data are collected from the data subject
    Article 14: Information to be provided when personal data have not been obtained from the data subject

(R2) Right to access
    Ch. 3, Section 2 – Information and access to personal data
    Article 15: Right of access by the data subject

(R3) Right to rectification
    Ch. 3, Section 3 – Rectification and erasure
    Article 16: Right to rectification
    Article 19: Notification obligation regarding rectification or erasure of personal data or restriction of processing

(R4) Right to be forgotten
    Ch. 3, Section 3 – Rectification and erasure
    Article 17: Right to erasure (“right to be forgotten”)
    Article 19: Notification obligation regarding rectification or erasure of personal data or restriction of processing

(R5) Right to restriction of processing
    Ch. 3, Section 3 – Rectification and erasure
    Article 18: Right to restriction of processing
    Article 19: Notification obligation regarding rectification or erasure of personal data or restriction of processing

(R6) Right to data portability
    Ch. 3, Section 3 – Rectification and erasure
    Article 20: Right to data portability

(R7) Right to object
    Ch. 3, Section 4 – Right to object and automated individual decision-making
    Article 21: Right to object

(R8) Right to opt-out of automated systems
    Ch. 3, Section 4 – Right to object and automated individual decision-making
    Article 22: Automated individual decision-making, including profiling

(R9) Right not to be subjected to unsanctioned privacy invasion
    Ch. 3, Section 5 – Restrictions
    Article 23: Restrictions
    Ch. 4, Section 5 – Codes of conduct and certification
    Article 42: Certification
    Article 49(1–6): Derogations for specific situations

(R10) Right not to be known by unpermitted persons
    Ch. 2 – Principles
    Article 7(1–4): Controller must obtain consent from the data subject
    Ch. 3, Section 5 – Restrictions
    Article 23: Restrictions
    Ch. 4, Section 5 – Codes of conduct and certification
    Article 42: Certification


5. Right to restriction of processing: Individuals have the right to require organizations to restrict the processing of specific categories of personal data.

6. Right to data portability: Individuals have the right to require organizations to transfer personal data to a recipient of their choice.

7. Right to object: Individuals have the right to consent or withdraw consent concerning the processing of their personal data.

8. Right to opt-out of automated systems: Individuals have the right to opt out of the use of their personal data by automated systems.

9. Right not to be subjected to unsanctioned privacy invasion: Individuals have the right to be left alone unless they violate laws or regulations.

10. Right not to be known by unpermitted persons: Individuals have the right to determine who is allowed to access their personal data and personal online space.

4 Cybersecurity and Privacy

4.1 Need to Consider Cybersecurity and Privacy in Tandem

In the academic domain, cybersecurity and information privacy issues have recently attracted much attention from IS scholars (Chang et al. 2018). Many previous studies emphasize the interrelationship between cybersecurity and privacy (Appari and Johnson 2010; Chua et al. 2017; McDaniel and McLaughlin 2009; Campbell et al. 2002; Takabi et al. 2010; Miyazaki and Fernandez 2000; Martínez-Pérez et al. 2015). For instance, Cunningham (2012) reviewed the balance between global privacy and data security laws. This issue has been addressed in the context of specific technologies, such as cloud computing (Gashami et al. 2016; Takabi et al. 2010), e-commerce (Miyazaki and Fernandez 2000), the Internet of Things (IoT) (Martínez-Pérez et al. 2015), medical information systems (Appari and Johnson 2010), pervasive computing (Campbell et al. 2002), and smart grid systems (McDaniel and McLaughlin 2009). These studies suggest that technologies have many security and privacy vulnerabilities that need to be addressed when developers are designing the systems.

However, few studies cover both security and privacy policy, especially in terms of the complicated legal matters involving preventive cybersecurity and privacy rights. McDaniel and McLaughlin (2009) and Takabi et al. (2010) suggest that governments should pay more attention to consumer protection and revise existing privacy and security laws. The GDPR states that data controllers or processors must ensure that they provide proper security measures, such as encryption and pseudonymization, to protect personal data.2 These measures can prevent unauthorized parties from identifying personal information. Nevertheless, data holders still need strong security systems, such as firewalls, to block illegal access by unpermitted users (Mourby et al. 2018). In this regard, security schemes can contribute to the prevention of privacy infringement. However, preventive security may infringe upon personal privacy. It is therefore important to design information systems and Internet infrastructure in a way that fulfills both security and privacy goals, synergistically balancing the tradeoffs between the two.

4.2 Balancing Privacy with Cybersecurity and Trust

Even though privacy is important, it may not always be the top priority. There are situations under which privacy may not be paramount: (1) legitimate preventive cybersecurity and (2) voluntary disclosure of private information to gain trust. The current research seeks to balance privacy with these factors in designing Internet infrastructures.

4.2.1 Legitimate Preventive Cybersecurity

If government agencies want to identify an individual for preventive security purposes, they require a permit and must comply with local data protection acts and regulations. However, as GDPR Article 49 states, the disclosure of personal information can be permitted when it is essential for national security, defense, public security, the prosecution of criminals, and other public emergency matters.3 The measures of the preventive cybersecurity protocol can be regarded as measures for public security, because all e-mail receivers are potential victims of cyberattacks. Thus, vulnerable e-mail receivers should have a right to self-defense against cyberattacks such as malicious e-mails. We therefore argue that the privacy of malicious e-mail senders can be overruled to ensure the safety of innocent e-mail recipients.

4.2.2 Voluntary Disclosure of Private Information to Gain Trust

Previous information privacy research, especially the privacy calculus theory, has demonstrated that individuals may be willing to disclose personal information and take risks when they perceive that their actions will yield more benefits than risks (Dinev and Hart 2006; Chang et al. 2018). In the Bright Internet context, Bright e-mail account holders voluntarily agree to be identifiable by their real names. In return, they can reap benefits associated with being implicitly trusted by unknown e-mail recipients.

2 Pseudonymization, GDPR Chapter 1, Article 4(5).
3 Restrictions, GDPR Section 5, Article 23.


5 Privacy Rights in the Preventive Cybersecurity Measures

In this section, we analyze the five steps of the preventive cybersecurity measures from the perspective of the ten privacy rights to review the legal basis for their justification. For this purpose, we study the legitimacy for the senders and receivers, as summarized in Table 2. The attacked receivers/senders and the Bright Internet Data Center should have the right to block repetitive attacks and may file criminal charges and demand restitution of damages from the cyberattackers.

5.1 Privacy Rights in the Preventive Cybersecurity Protocol

Table 2 Preventive measures, privacy rights, and analogical acts
(Columns for each step: innocent sender’s privacy rights; typical analogical acts; receivers’ rights and notes.)

Preventive Cybersecurity Protocol (for all five steps: the innocent senders should have the relevant privacy rights, but the rights of malicious senders will be overruled by the receiver’s self-defense right)
1. Report malicious e-mails; R1 (Informed), R7 (Object); CCTV: taking pictures
2. Store malicious e-mails; R4 (Forgotten); CCTV: storage of pictures
3. Generate Brightness Indices; R5 (Restrict Processing), R8 (Opt-out); Credit Information Use and Protection Act
4. Disclose Brightness Indices; R4 (Forgotten); Energy Use Rationalization Act
5. Make filtering solution; R10 (Unpermitted); filtering model for credit card authorization

Identifiable Anonymity Protocol
1. Voluntary disclosure; none; credit card application; voluntary submission of private information
2. Allowed anonymity; R10 (Unpermitted); any legitimate anonymous trades; permit the anonymity of innocent users
3. Traceability; R10 (Unpermitted); investigation of prosecuted cases; victim’s right to report and file criminal charges
4. Legitimate identifiability; R4 (Forgotten); surveillance acts; surveillance authority overrules the privacy rights of malicious senders

Privacy Protection of Innocent Senders
1. Accidental false report; R1 (Informed), R2 (Access), R3 (Rectify), R4 (Forgotten); recovery procedure; restore the privacy rights resiliently
2. Intentional false report; R1 (Informed), R2 (Access), R3 (Rectify), R4 (Forgotten); recovery procedure; restore the privacy rights resiliently, and senders may file criminal charges
3. Compromised senders; R1 (Informed), R2 (Access), R3 (Rectify); prevent being compromised by adopting the hacker prevention protocol; compromised attackers are distinguished from the malicious origin but bear the deliverer responsibility
4. Abused request of identification; R4 (Forgotten); audit trail and recovery procedure; an abused request of identification should be withdrawn and the privacy rights restored

For the five steps of the preventive cybersecurity protocol, we identified the privacy rights of innocent e-mail senders in Table 2. However, if the senders are malicious, we argue that receivers have a right to self-defense against these malicious senders and that the victimized receivers thus have the right to report malicious attacks to the appropriate authority (in this case, the Bright Internet Data Center) and to delegate to it the generation of the necessary preventive measures. Based on this delegation, the center would be permitted to store the reports, generate and disclose the Brightness Indices, and create and distribute the preventive cybersecurity solutions using the indices. These actions would contribute to preventing repetitive cyberattacks against innocent e-mail recipients.

In this case, the privacy rights of malicious agents can be overruled because the victim’s right to self-defense against cyberattacks in the form of malicious e-mail should have a higher priority. Thus, the actions of the Bright Internet Data Center would be legitimate as long as the Bright e-mail system and the Bright Internet Data Center clearly post their policy and confirm that reporting receivers agree to the policy. We review the potential privacy rights of innocent senders for each step, discuss the rights to self-defense, and demonstrate analogous laws that share the common legal spirit.

1) Recipient-initiated report of malicious e-mails: In this step, the reported senders have the “right to be informed” and the “right to object,” but a malicious sender’s rights can be overruled to protect the victim’s right to self-defense. An analogous case is the preventive function of a CCTV system, which permits recording private information in public or private vehicles so that individuals can be traced in case of criminal activity. The records about innocent citizens should be protected, but when a crime is detected, the police can legitimately trace the origin of the crime. Likewise, innocent e-mails will not be reported, only the malicious ones.

2) Storage of cyberattack records: The reported senders have the “right to be forgotten,” but the malicious senders’ right can be overruled to preserve the victim’s right to self-defense. Storing a CCTV record for a reasonable period is an analogical example. The government may require the certification of the Bright Internet Data Center to assure its public role. To prevent the misuse of stored data during the operation stage, it will be necessary to maintain an audit trail, which could, perhaps, be implemented with blockchain technology.

3) Generating Brightness Indices: The generation of Brightness Indices is relevant to the “right to restrict processing” and the “right to opt-out of automated systems.” However, these rights of malicious senders can be overruled to ensure the capability of identifying potential cyberattackers. Similar laws already allow the generation of credibility information for preventive measures. For example, in Korea, the Act on the Consumer Protection in Electronic Commerce [Articles 27 and 28], the Credit Information Use and Protection Act [Article 15], the Financial Investment Services and Capital Markets Act [Article 335(11)], and the Regulations on Financial Investment Services permit the computation of credibility information. Analogous rules that permit similar analysis include the Framework Act on Food Safety in Korea [Article 24(1)] and the energy efficiency rating system for electronic devices.

4) Disclosure of Brightness Indices: Disclosing the Brightness Indices of origins is relevant to the “right to be forgotten.” GDPR Articles 33 and 34 address disclosure by requiring notice to and communication with data subjects. However, we argue that the right of malicious senders can be overruled because the purpose of disclosure is to share preventive information with potential victims and to demotivate the emanation of malicious e-mails. On the other hand, organizations with high Brightness Indices can leverage public disclosure, thereby gaining trust and improving their reputations. A similar piece of legislation in Korea is the Energy Use Rationalization Act [Article 15(1)], which requires a publicly displayed rating system. In the USA, public disclosure of sex offenders is required as a means of preserving public safety and preventing repeat offenses, according to the Act on the Protection of Children and Youth against Sex Offenses [Article 49].

5) Use of the Brightness Indices as filtering solutions: Generating filtering solutions using the index information may infringe upon the “right not to be known by unpermitted persons.” However, the right of malicious senders should be overruled to prevent repetitive malicious behavior that can harm potential e-mail receivers. This is analogous to sharing the misbehavior information of credit card holders with all merchants during the authorization process. A related law is the Framework Act on the Safety of Products in Korea [Article 15(2)], which allows access to information if consumers have concerns about product safety.

Overruling the privacy rights of malicious e-mail senders in a similar context seems legitimate because it contributes to the prevention of repetitive threats to victims and potential victims. As such, preventive cybersecurity measures can be legitimized as the self-defensive actions of potential victims. Each country may need to review this issue from the perspective of its own laws, but it seems commensurate with similar laws in many countries.

5.2 Privacy Rights in the Identifiable Anonymity Protocol

The privacy issues in the four features of the identifiable anonymity protocol are as follows.

1) Sender’s voluntary disclosure of private information to gain trust (Option 1): An e-mail sender may voluntarily assure their identifiability to gain trust from counterparts in cyberspace. Therefore, in this case, no potential risk of privacy infringement occurs.


2) Privacy protection by allowing anonymity (Option 2): As long as an individual does not commit a cybercrime (including sending malicious e-mails), the Bright Internet platform allows the user’s anonymity. This principle supports the realization of the “right not to be known by unpermitted persons.” Any legitimate anonymous trades in our daily lives correspond to this practice.

3) Traceability of malicious e-mails (Option 3): Innocent senders should have the “right not to be known by unpermitted persons.” However, any recipient of malicious e-mails should be authorized to request tracing of the sender’s IP address, based on the right of self-defense. The actual investigation should be performed by legally authorized agencies in cooperation with a delegated operator such as the Bright Internet Data Center. This corresponds to the investigation of prosecuted crimes.

4) Legitimate identifiability of malicious senders (Option 4): The legitimate request for the real name of a malicious sender can be regarded as a legitimate investigation process according to GDPR Article 49(1–6) on derogations for specific situations. In the USA, the Electronic Communications Privacy Act of 1986 (Rosenstein 1991) and the Foreign Intelligence Surveillance Act of 1978 support the right to investigate malicious acts. The United Kingdom’s Regulation of Investigatory Powers Act of 2000 (Article 28) also supports such investigation as a means of self-defense or a justifiable act (Lin 2016).

For innocent users, privacy rights are supported in Options 1 and 2. However, for malicious actors, privacy rights are overruled in Options 3 and 4.

5.3 Privacy Protection for Innocent Origins

So far, we have primarily analyzed the cybersecurity and privacy rights from an e-mail recipient’s point of view. However, four types of potential errors may occur even in the Bright Internet context: an accidental false report, an intentional false report from a malicious e-mail recipient, compromised senders, and misuse of the request for an origin’s identification. The Bright Internet should be designed to be resilient to these occurrences.

1) Accidental false report from an innocent e-mail recipient: The automatic filtering solution may make this kind of mistake, so the recovery procedure should be equipped with friendly interaction. In this case, the potentially innocent sender should be informed, the stored data should be accessible, and the incorrect report should be rectified. The Bright Internet Data Center should send a confirmation e-mail to the sender to honor the “right to be informed” and the “right to access.” If a mistake on the part of the e-mail recipient is confirmed, the innocent e-mail sender should be granted the “right to rectification” and the “right to be forgotten,” and the record should be corrected accordingly.

2) Intentional false report from a malicious e-mail recipient: This would be a reverse malicious attack against an innocent e-mail sender, for instance when hackers compromise the receiver’s computer. This would be unlikely to happen in properly developed Bright eMail systems with the capability of the hacker prevention protocol; however, unpredictable attacks may happen. If hackers were able to penetrate the receiver’s system and harm an innocent e-mail sender and the Bright Internet Data Center itself, the innocent sender should be able to report the attack to the center so that the wrong records could be confirmed and deleted under the protection offered by the “right to be informed,” “right to access,” “right to rectification,” and “right to be forgotten.” The Bright Internet Data Center should be equipped with the ability to protect and recover the innocent e-mail senders. The center and the attacked senders would have the right to file criminal charges and demand restitution of damages from the cyberattackers.

3) Compromised senders: When a sender’s e-mail account is compromised, the sender is also a victim. As such, compromised senders should be protected through the “right to be informed,” “right to access,” and “right to rectification.” However, the compromised computers also bear the deliverer responsibility as neighbors (Lee et al. 2018); thus, their records as compromised attackers cannot be deleted. Therefore, senders should do their best not to be compromised by using a hacker prevention protocol that prevents hackers from compromising e-mail accounts. Users who do not diligently adopt such a preventive system should bear the relevant responsibility for becoming an instrument of the attack.

4) Abused request for identification: Identification requests for real names may be abused accidentally or intentionally. To prevent privacy infringement caused by such misuse, the identification process should be recorded and securely logged to ensure its auditability (see the sketch below). If an e-mail sender turns out to be innocent after the process of legitimate identifiability, the “right to be forgotten” should be guaranteed.
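The following is a minimal sketch of the resilient recovery and audit requirements discussed in this subsection: reports, rectifications, and identification requests are appended to a hash-chained log so that misuse can be audited later, and a confirmed false report invalidates the stored record. The class and its interface are illustrative assumptions; the paper only requires that such an audit trail and recovery procedure exist, possibly on a blockchain.

```python
import hashlib
import json
import time

class AuditedReportStore:
    """Illustrative report store with a hash-chained (blockchain-style) audit trail."""

    def __init__(self):
        self.records = {}    # report_id -> report data
        self.audit_log = []  # append-only chain of audit entries
        self._last_hash = "0" * 64

    def _log(self, action: str, detail: dict):
        entry = {"ts": time.time(), "action": action, "detail": detail,
                 "prev": self._last_hash}
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self.audit_log.append(entry)

    def store_report(self, report_id: str, origin: str, victim: str):
        self.records[report_id] = {"origin": origin, "victim": victim, "valid": True}
        self._log("store_report", {"report_id": report_id, "origin": origin})

    def rectify_false_report(self, report_id: str, confirmed_by_recipient: bool):
        """Accidental or intentional false report: mark the record invalid,
        supporting the rights to rectification and erasure for the innocent sender."""
        if confirmed_by_recipient:
            self.records[report_id]["valid"] = False
            self._log("rectify", {"report_id": report_id})

    def request_identification(self, report_id: str, warrant_id: str):
        """Every identification request is logged so abused requests can be
        audited and later withdrawn."""
        self._log("identification_request", {"report_id": report_id, "warrant": warrant_id})
```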

6 Conclusion

We have reviewed the legitimacy of the Bright Internet preventive cybersecurity measures in conjunction with the privacy rights required by the GDPR. For the five steps of the preventive cybersecurity protocol (reporting malicious e-mails, storing the records of malicious e-mails, generating Brightness Indices, disclosing Brightness Indices, and creating a filtering solution using the indices), it seems that the victim’s right to self-defense can overrule the privacy rights of malicious senders, as summarized in Table 2. Laws in other security contexts, such as CCTV, credit information use and protection, disclosure of energy efficiency, and the credit card authorization model, are presented as evidence of the balance between preventive measures against malicious attackers and the privacy rights of innocent e-mail senders.

The identifiable anonymity protocol defines the situations in which e-mail senders may voluntarily disclose their private information to establish trust with unknown e-mail recipients, and innocent e-mail users may also choose anonymity, invoking their “right not to be known by unpermitted persons.” In this context, privacy is a matter of the user’s choice. However, when victims are attacked and are vulnerable to repetitive attacks in the future, they should have the right to request that a legal agency trace and identify the sender’s real name. In this case, the prosecution and surveillance authorities will overrule the privacy rights of malicious agents.

Finally, the potential risks that innocent senders may face and the corresponding resilient recovery requirements were described. For instance, false reports from receivers may be submitted to the Bright Internet Data Center accidentally by spam filtering solutions or intentionally by hackers. In these cases, the Bright Internet platform should be equipped for the effective recovery of the innocent sender’s privacy rights. Another case of innocent senders is compromised senders; they are also victims, but they should take the responsibility of not being compromised and not becoming the instrument of hackers’ attacks. Hence, they should be diligently equipped with the hacker prevention protocol. Finally, innocent senders may be victimized by an abused request for identification. Therefore, an audit trail is necessary to demotivate such requests, possibly implemented with blockchain technology.

Through this study, we have justified the legitimacy of the preventive cybersecurity measures in the Bright Internet protocols. However, we have also identified four new potential problems that the Bright Internet protocols may face and proposed how to recover when they happen. Thus, we have extended the view of balancing preventive cybersecurity measures with assuring the privacy of e-mail receivers and senders.

Currently, legislation that balances preventive cybersecurity with privacy rights has not been specifically enacted in most countries. However, our analysis shows that it seems possible to legitimately adopt the preventive cybersecurity measures in conformity with individual privacy rights, based on the common spirit of existing laws in many countries.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Jae Kyu Lee is the Distinguished Professor of the School of Management at Xi'an Jiaotong University and Professor Emeritus of the Korea Advanced Institute of Science and Technology (KAIST). He has been a professor at KAIST since 1985 and finished his tenure there as HHI Chair Professor. He is a fellow and was the President (2015-16) of the Association for Information Systems, the global organization of 4,000 information systems researchers. He is the founder of the Principles for the Bright Internet and founded the Bright Internet Research Center at KAIST and Xi'an Jiaotong University. He also founded the Bright Internet Global Summit (BIGS) 2017 in Seoul, Korea, and co-chaired BIGS 2018 in San Francisco, USA, and BIGS 2019 in Munich, Germany. He founded the Bright Internet Project Consortium in 2019, as posted at www.brightinternet.org. He received his Ph.D. in Information and Operations Management from the Wharton School, University of Pennsylvania, in 1985.

Younghoon Chang is an Associate Professor and a Doctoral Supervisor in the School of Management and Economics at Beijing Institute of Technology, Beijing, China. He received his Ph.D. degree in Business & Technology Management from the Korea Advanced Institute of Science and Technology (KAIST), South Korea. His research interests include information privacy, privacy policy and regulation, IT user behavior, the shared economy, crowdsourcing, and smart health. His articles have appeared in Information Systems Frontiers, Information & Management, Government Information Quarterly, Journal of Global Information Management, Behavior and Information Technology, Industrial Management & Data Systems, and Telecommunications Policy, as well as in the proceedings of international conferences. He is currently serving as an associate editor of the Asia Pacific Journal of Information Systems and an editorial review board member of the Journal of Computer Information Systems.

Hun Yeong Kwon has been a Professor at the Graduate School of Information Security at Korea University since 2015. He was a professor in the College of Law at Kwangwoon University from 2008 to 2015. Having majored in administrative and ICT law, he has worked with the Korean government and the academic community for the last 20 years on the informatization and ICT of Korea, as well as on the legal framework for e-government. He is the president of the Cyber Communication Academic Society in Korea, and he was the president of the Korea Society of Internet Ethics from 2017 to 2018.

Beopyeon Kim is a Ph.D. candidate and a researcher at the Center for Cyber Security Policy of Korea University. Having majored in administrative and ICT law, her research interests are law and policy for information security, Internet law, and privacy law. She received a Master's degree in Law from Kwangwoon University in 2014. She worked at the Korean Education Development Institute as a researcher in 2014 and was a researcher at the Korean Institute of Intellectual Property in 2016.
